| licenses (sequence, lengths 1-3) | version (string, 677 classes) | tree_hash (string, length 40) | path (string, 1 value) | type (string, 2 values) | size (string, lengths 2-8) | text (string, lengths 25-67.1M) | package_name (string, lengths 2-41) | repo (string, lengths 33-86) |
---|---|---|---|---|---|---|---|---|
[
"MIT"
] | 0.2.2 | 6a18107b6fc89bb32eb5dcec609a7355b53e8b78 | code | 16836 | #=--------------------------------------------------------------------
syntheticimages - Functions for creating various synthetic test images
for evaluating edge detectors. Most of these images
cause considerable grief for gradient based
operators.
Copyright (c) 2015-2017 Peter Kovesi
peterkovesi.com
MIT License:
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
PK August 2015 Original porting from MATLAB to Julia
November 2017 Julia 0.6
October 2018 Julia 0.7/1.0
---------------------------------------------------------------------=#
export step2line, circsine, starsine, noiseonf
export nophase, quantizephase, swapphase
#----------------------------------------------------------------------
# step2line
"""
A phase congruent test image that interpolates from a step to a line.
Generates a test image where the feature type changes from a step edge to a line
feature from top to bottom. Gradient based edge detectors will only correctly
mark the step-like feature towards the top of the image and incorrectly mark two
features towards the bottom of the image whereas phase congruency will correctly
mark a single feature from top to bottom. In general, natural images contain a
roughly uniform distribution of the full continuum of feature types from step to
line.
```
Usage:
img = step2line(sze; nscales=50, ampexponent=-1, ncycles=1.5, phasecycles=0.25)
Arguments:
sze::Integer - Number of rows in test image, defaults to 512.
Keyword Arguments:
nscales::Integer - No of Fourier components used to construct the signal.
Defaults to 50.
ampexponent::Real - Decay exponent of amplitude with frequency.
A value of -1 will produce amplitude inversely
proportional to frequency (corresponds to step feature).
A value of -2 will result in the line feature
appearing as a triangular waveform. Defaults to -1.
ncycles::Real - Number of wave cycles across the width of the image.
Defaults to 1.5
phasecycles::Real - Number of feature type phase cycles going vertically
down the image. Defaults to 0.25, giving a sequence of feature
types with phase congruency angle varying from 0 to pi/2.
Returns:
img::Array{Float64,2} - The test image.
Examples of use:
> img = step2line() # Default pattern
> img = step2line(ncycles=3, ampexponent=-1.5); # 3 cycles, 'soft' step to line
> img = step2line(ncycles=3, ampexponent=-1.5, phasecycles = 3);
```
See also: [`circsine`](@ref), [`starsine`](@ref)
"""
function step2line(sze::Integer = 512; nscales::Integer=50, ampexponent::Real = -1,
ncycles::Real = 1.5, phasecycles::Real = 0.25)
# Construct vector of angles over desired number of cycles
x = (0:(sze-1))/(sze-1)*ncycles*2*pi
img = zeros(sze,sze)
phaseoffset = 0.0
for row = 1:sze
for scale = 1:2:(nscales*2-1)
img[row,:] .+= scale^float(ampexponent).*sin.(scale.*x .+ phaseoffset)
end
phaseoffset += phasecycles*2*pi/sze
end
#=
figure
colormap(gray)
imagesc(im), axis('off') , title('step to line feature interpolation')
range = 3.2
s = 'Profiles having phase congruency at 0/180, 30/210, 60/240 and 90/270 degrees'
figure
subplot(4,1,1), plot(im(1,:)) , title(s), axis([0,sze,-range,range]), axis('off')
subplot(4,1,2), plot(im(fix(sze/3),:)), axis([0,sze,-range,range]), axis('off')
subplot(4,1,3), plot(im(fix(2*sze/3),:)), axis([0,sze,-range,range]), axis('off')
subplot(4,1,4), plot(im(sze,:)), axis([0,sze,-range,range]), axis('off')
=#
return img
end
#----------------------------------------------------------------------
# circsine
"""
Generate a phase congruent circular sine wave grating.
Useful for testing the isotropy of response of a feature detector.
```
Usage: img = circsine(sze; wavelength = 40, nscales = 50, ampexponent = -1,
offset = 0, p = 2, trim = false)
Arguments:
sze::Integer - The size of the square image to be produced. Defaults to 512.
Keyword arguments:
wavelength::Real - The wavelength in pixels of the sine wave. Defaults to 40.
nscales::Integer - No of Fourier components used to construct the
signal. This is typically 1, if you want a simple sine
wave, or >50 if you want to build a phase congruent
waveform. Defaults to 50.
ampexponent::Real - Decay exponent of amplitude with frequency.
A value of -1 will produce amplitude inversely
proportional to frequency (this will produce a step
feature if offset is 0)
A value of -2 with an offset of pi/2 will result in a
triangular waveform. Defaults to -1;
offset::Real - Angle of phase congruency at which the features of the
circular pattern are generated. This controls the feature type:
0 for a step-like feature, pi/2 for a line/triangular-like feature.
Defaults to 0. If nscales = 1 use pi/2 to get continuity
at the centre.
p::Integer - Optional parameter specifying the norm to use in
calculating the radius from the centre. This defaults to
2, resulting in a circular pattern. Large values give
a square pattern.
trim::Bool - Optional boolean flag indicating whether you want the
circular pattern trimmed from the corners leaving
only complete circles. Defaults to false.
Returns:
img::Array{Float64,2} - The test image.
Examples:
> circsine(nscales = 1) - A simple circular sine wave pattern
> circsine(nscales = 50, ampexponent = -1, offset = 0) - Square waveform
> circsine(nscales = 50, ampexponent = -2, offset = pi/2) - Triangular waveform
> circsine(nscales = 50, ampexponent = -1.5, offset = pi/4) - Something in between
square and triangular
> circsine(nscales = 50, ampexponent = -1.5, offset = 0) - Looks like a square but is not.
```
See also: [`starsine`](@ref), [`step2line`](@ref)
"""
function circsine(sze::Integer=512; wavelength::Real=40, nscales::Integer=50,
ampexponent::Real=-1, offset::Real=0, p::Integer=2,
trim::Bool=false)
if isodd(p)
error("p should be an even number")
end
# Place origin at centre for an odd sized image, and below and to the
# right of centre for an even sized image
if iseven(sze)
l = -sze/2
u = sze/2-1
else
l = -(sze-1)/2
u = (sze-1)/2
end
# Grid of radius values
r = [(x.^p + y.^p).^(1.0/p) for x = l:u, y = l:u]
img = zeros(size(r))
for scale = 1:2:(2*nscales-1)
@. img += scale^float(ampexponent) * sin(scale * r * 2*pi/wavelength + offset)
end
if trim # Remove circular pattern from the 'corners'
cycles = floor(sze/2/wavelength) # No of complete cycles within sze/2
@. img *= (r < cycles*wavelength) # Zero the pattern beyond the last complete cycle
end
return img
end
#----------------------------------------------------------------------
# starsine
"""
Generate a phase congruent star shaped sine wave grating.
Useful for testing the behaviour of feature detectors at line junctions.
```
Usage: img = starsine(sze; ncycles=10, nscales=50, ampexponent=-1, offset=0)
Argument:
sze::Integer - The size of the square image to be produced. Defaults to 512.
Keyword arguments:
ncycles::Real - The number of sine wave cycles around centre point.
Typically an integer, but any value can be used.
nscales::Integer - No of Fourier components used to construct the
signal. This is typically 1, if you want a simple sine
wave, or >50 if you want to build a phase congruent
waveform. Defaults to 50.
ampexponent::Real - Decay exponent of amplitude with frequency.
A value of -1 will produce amplitude inversely
proportional to frequency (this will produce a step
feature if offset is 0)
A value of -2 with an offset of pi/2 will result in a
triangular waveform.
offset::Real - Angle of phase congruency at which the features of the
star pattern are generated. This controls the feature type:
0 for a step-like feature, pi/2 for a line/triangular-like feature.
Returns:
img::Array{Float64,2} - The test image.
Examples:
> starsine(nscales = 1) - A simple sine wave pattern radiating out
from the centre. Use 'offset' if you wish to
rotate it a bit.
> starsine(nscales = 50, ampexponent = -1, offset = 0) - Square waveform
> starsine(nscales = 50, ampexponent = -2, offset = pi/2) - Triangular waveform
> starsine(nscales = 50, ampexponent = -1.5, offset = pi/4) - Something in between
square and triangular
> starsine(nscales = 50, ampexponent = -1.5, offset = 0) - Looks like a square but is not.
```
See also: [`circsine`](@ref), [`step2line`](@ref)
"""
function starsine(sze::Integer=512; ncycles::Real=10, nscales::Integer=50,
ampexponent::Real=-1, offset::Real=0)
# Place origin at centre for an odd sized image, and below and to the
# right of centre for an even sized image
if iseven(sze)
l = -sze/2
u = sze/2-1
else
l = -(sze-1)/2
u = (sze-1)/2
end
# Grid of angular values
theta = [atan(y,x) for x = l:u, y = l:u]
img = zeros(size(theta))
for scale = 1:2:(nscales*2 - 1)
@. img += scale^float(ampexponent)*sin(scale*ncycles*theta + offset)
end
return img
end
#----------------------------------------------------------------------
# noiseonf
"""
Create \$1/f^p\$ spectrum noise images.
When displayed as a surface these images also generate great landscape
terrain.
```
Usage: img = noiseonf(sze, p)
Arguments:
sze::Tuple{Integer, Integer} or ::Integer
- A tuple (rows, cols) or single value specifying size of
image to produce.
p::Real - Exponent of spectrum decay = 1/(f^p)
Returns:
img::Array{Float64,2} - The noise image with specified spectrum.
Reference values for p:
p = 0 - raw Gaussian noise image.
= 1 - gives the supposedly 1/f 'standard' drop-off for
'natural' images.
= 1.5 - seems to give the most interesting 'cloud patterns'.
= > 2 - produces 'blobby' images.
```
"""
function noiseonf(sze::Tuple{Integer, Integer}, p::Real)
(rows,cols) = sze
# Generate an image of random Gaussian noise, mean 0, std dev 1.
img = randn(rows,cols)
imgfft = fft(img)
mag = abs.(imgfft) # Get magnitude
phase = imgfft./mag # and phase
# Construct the amplitude spectrum filter
# Add 1 to avoid divide by 0 problems later
radius = filtergrid(rows,cols) * max(rows,cols) .+ 1
filter = 1 ./ (radius.^p)
# Reconstruct fft of noise image, but now with the specified amplitude
# spectrum
newfft = filter .* phase
img .= real.(ifft(newfft))
return img
end
function noiseonf(sze::Integer, p::Real)
return noiseonf((sze, sze), p)
end
#--------------------------------------------------------------------
# nophase
"""
Randomize image phase leaving amplitude spectrum unchanged.
```
Usage: newimg = nophase(img)
Argument: img::AbstractArray{T,2} where T <: Real - Input image
Returns: newimg::Array{Float64,2} - Image with randomized phase
```
In general most images will be destroyed by this transform. However, some
textures are reproduced in an 'amplitude only' image quite well. Typically
these are textures which have an amplitude spectrum with a limited number
of isolated peaks, that is, a texture made up from a limited number of strong
harmonics.
See also: [`noiseonf`](@ref), [`quantizephase`](@ref), [`swapphase`](@ref)
"""
function nophase(img::AbstractArray{T,2}) where T <: Real
# Take FFT, get magnitude.
IMG = fft(img)
mag = abs.(IMG)
# Generate random phase values
# ** ? Should I just randomize the phase for half the spectrum and
# generate the other half as the complex conjugate ? **
phaseAng = rand(Float64, size(img))*2*pi
phase = cos.(phaseAng) .+ im.*sin.(phaseAng)
# Reconstruct fft of image using original amplitude and randomized phase and
# invert. Note that we must take the real part because the phase
# randomization applied above did not respect the complex conjugate symmetry
# that we should have in a real valued signal
newfft = mag .* phase
return real.(ifft(newfft))
end
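# Example usage (an illustrative sketch; `img` is assumed to be a 2D array of
# real values, e.g. img = Float64.(Gray.(testimage("lighthouse")))):
#   scrambled = nophase(img)   # same amplitude spectrum, phase randomized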
#--------------------------------------------------------------------
# quantizephase
"""
Quantize phase values in an image.
```
Usage: qimg = quantizephase(img, N)
Arguments: img::Array{T,2} where T <: Real - Image to be processed
N::Integer - Desired number of quantized phase values
Returns: qimg::Array{Float64,2} - Phase quantized image
```
Phase values in an image are important. However, despite this, they can be
quantized very heavily with little perceptual loss. The value of N can be
as low as 4, or even 3! Using N = 2 is also worth a look.
See also: [`swapphase`](@ref)
"""
function quantizephase(img::AbstractArray{T,2}, N::Integer) where T <: Real
IMG = fft(img)
amp = abs.(IMG)
phase = angle.(IMG)
# Quantize the phase values as follows:
# Add pi - .001 so that values range [0 - 2pi)
# Divide by 2pi so that values range [0 - 1)
# Scale by N so that values range [0 - N)
# Round towards 0 using floor giving integers [0 - N-1]
# Scale by 2*pi/N to give N discrete phase values [0 - 2*pi)
# Subtract pi so that discrete values range [-pi - pi)
# Add pi/N to counteract the phase shift induced by rounding towards 0
# using floor
phase = floor.( (phase .+ pi .- 0.001)/(2*pi) * N) * (2*pi)/N .- pi .+ pi/N
# Reconstruct Fourier transform with quantized phase values and take inverse
# to obtain the new image.
QIMG = amp.*(cos.(phase) .+ im*sin.(phase))
return real.(ifft(QIMG))
end
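# Example usage (an illustrative sketch; `img` as above):
#   qimg = quantizephase(img, 4)   # phase quantized to 4 discrete values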
#--------------------------------------------------------------------
# swapphase
"""
Demonstrates phase - amplitude swapping between images.
```
Usage: (newimg1, newimg2) = swapphase(img1, img2)
Arguments:
img1, img2::Array{<:Real,2} - Two images of same size to be used as input
Returns:
newimg1::Array{Float64,2} - Image obtained from the phase of img1
and the magnitude of img2.
newimg2::Array{Float64,2} - Phase of img2, magnitude of img1.
```
See also: [`quantizephase`](@ref), [`nophase`](@ref)
"""
function swapphase(img1::AbstractArray{T,2}, img2::AbstractArray{T,2}) where T <: Real
if size(img1) != size(img2)
error("Images must be the same size")
end
# Take FFTs, get magnitude and phase
IMG1 = fft(img1); IMG2 = fft(img2)
mag1 = abs.(IMG1); mag2 = abs.(IMG2)
phase1 = IMG1 ./ mag1; phase2 = IMG2./mag2
# Now swap amplitude and phase between images.
NEWIMG1 = mag2.*phase1; NEWIMG2 = mag1.*phase2
newimg1 = real.(ifft(NEWIMG1))
newimg2 = real.(ifft(NEWIMG2))
# Scale image values 0-1
newimg1 = imgnormalize(newimg1);
newimg2 = imgnormalize(newimg2);
return newimg1, newimg2
end
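# Example usage (an illustrative sketch, mirroring the test suite, which swaps
# phase and magnitude between a square image and its transpose):
#   (newimg1, newimg2) = swapphase(img, img')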
| ImagePhaseCongruency | https://github.com/peterkovesi/ImagePhaseCongruency.jl.git |
|
[
"MIT"
] | 0.2.2 | 6a18107b6fc89bb32eb5dcec609a7355b53e8b78 | code | 8809 | #=--------------------------------------------------------------------
Utility functions for ImagePhaseCongruency
Copyright (c) 2015 Peter Kovesi
peterkovesi.com
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
The Software is provided "as is", without warranty of any kind.
PK August 2015
October 2018 Julia 0.7/1.0
---------------------------------------------------------------------=#
import ImageMorphology: label_components, component_indices
export replacenan, fillnan
export hysthresh, imgnormalize
# Note the following two functions are in ImageProjectiveGeometry but are
# duplicated here to minimise dependencies.
export imgnormalise, histtruncate
#----------------------------------------------------------------------
# fillnan
"""
Fill NaN values in an image with closest non NaN value.
This can be used as a crude (but quick) 'inpainting' function to allow an FFT to
be computed on an image containing NaN values. While the 'inpainting' is crude
it is typically good enough to remove most of the edge effects one might get at
the boundaries of the NaN regions. The NaN regions should then be remasked out
of the final processed image.
```
Usage: (newimg, mask) = fillnan(img)
Argument: img - Image to be 'filled'.
Returns: newimg - Filled image.
mask - Binary image indicating the valid, non-NaN, regions in
the original image.
```
See also: [`replacenan`](@ref)
"""
function fillnan(img::AbstractArray{T,N}) where T where N
mask = .!isnan.(img) # Non-NaN image regions
if all(mask) # No NaNs present, nothing to fill.
return copy(img), mask
elseif !any(mask) # Every element is NaN, no filling is possible.
@warn("All elements are NaN, no filling possible")
return copy(img), mask
end
# Generate feature transform from non NaN regions of the image.
# F will contain cartesian indices of closest non NaN points in the image
F = Images.feature_transform(mask)
# Fill NaN locations with value of closest non NaN pixel
newimg = copy(img)
for i in eachindex(F)
newimg[i] = img[F[i]]
end
return newimg, mask
end
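# Example usage (an illustrative sketch): fill a NaN hole so that an FFT can
# be computed on the image.
#   img[5:20, 10:15] .= NaN
#   (newimg, mask) = fillnan(img)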
#----------------------------------------------------------------------
# replacenan
"""
Replace NaNs in an array with a specified value.
```
Usage: (newimg, mask) = replacenan(img, defaultval=0)
Arguments:
img - The Array containing NaN values.
defaultval - The default value to replace NaNs.
Returns:
newimg - Image with NaNs replaced,
mask - Boolean image indicating non-NaN regions in the original
image.
```
See also: [`fillnan`](@ref)
"""
function replacenan(img::AbstractArray{T,N}, defaultval::Real = 0) where T <: AbstractFloat where N
mask = .!(isnan.(img))
newimg = copy(img)
newimg[isnan.(img)] .= defaultval
return newimg, mask
end
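# Example usage (mirrors the test suite):
#   (newimg, mask) = replacenan(img)      # NaNs replaced with 0
#   (newimg, mask) = replacenan(img, 2)   # NaNs replaced with 2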
#----------------------------------------------------------------------
# hysthresh
"""
Hysteresis thresholding of an image.
```
Usage: bw = hysthresh(img, T1, T2)
Arguments:
img - Image to be thresholded
T1, T2 - Upper and lower threshold values. T1 and T2
can be entered in any order, the larger of the
two values is used as the upper threshold.
Returns:
bw - The binary thresholded image as a BitArray
```
All pixels with values above threshold T1 are marked as edges. All pixels that
are connected to points that have been marked as edges and with values above
threshold T2 are also marked as edges. Eight connectivity is used.
"""
function hysthresh(img::AbstractArray{T0,2}, T1::Real, T2::Real) where T0 <: Real
bw = falses(size(img))
if T1 < T2 # Swap T1 and T2
T1,T2 = T2,T1
end
# Form 8-connected components of pixels with a value above the
# lower threshold and get the indices of pixels in each component.
label = label_components(img .>= T2, trues(3,3))
pix = component_indices(label)
# For each list of pixels in pix test to see if there are any
# image values above T1. If so, set these pixels in the output
# image. Note we ignore pix[1] as these are the background pixels.
for n = 2:length(pix)
for i in eachindex(pix[n])
if img[pix[n][i]] >= T1
bw[pix[n]] .= true
break
end
end
end
return bw
end
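# Example usage (mirrors the test suite): pixels with values >= 20 seed edges,
# which then grow through 8-connected pixels with values >= 8.
#   bw = hysthresh(img, 8, 20)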
#----------------------------------------------------------------------
"""
imgnormalise/imgnormalize - Normalises image values to 0-1, or to desired mean and variance
```
Usage 1: nimg = imgnormalise(img)
```
Offsets and rescales image so that the minimum value is 0
and the maximum value is 1.
```
Usage 2: nimg = imgnormalise(img, reqmean, reqvar)
Arguments: img - A grey-level input image.
reqmean - The required mean value of the image.
reqvar - The required variance of the image.
```
Offsets and rescales image so that nimg has mean reqmean and variance
reqvar.
"""
function imgnormalise(img::Array) # Normalise 0 - 1
n = img .- minimum(img)
return n ./ maximum(n)
end
# Normalise to desired mean and variance
function imgnormalise(img::Array, reqmean::Real, reqvar::Real)
n = img .- mean(img)
n /= std(img) # Zero mean, unit std dev
return reqmean .+ n*sqrt(reqvar)
end
# For those who spell normalise with a 'z'
"""
imgnormalize - Normalizes image values to 0-1, or to desired mean and variance
```
Usage 1: nimg = imgnormalize(img)
```
Offsets and rescales image so that the minimum value is 0
and the maximum value is 1.
```
Usage 2: nimg = imgnormalize(img, reqmean, reqvar)
Arguments: img - A grey-level input image.
reqmean - The required mean value of the image.
reqvar - The required variance of the image.
```
Offsets and rescales image so that nimg has mean reqmean and variance
reqvar.
"""
function imgnormalize(img::Array)
return imgnormalise(img)
end
function imgnormalize(img::Array, reqmean::Real, reqvar::Real)
return imgnormalise(img, reqmean, reqvar)
end
#----------------------------------------------------------------------
"""
histtruncate - Truncates ends of an image histogram.
Function truncates a specified percentage of the lower and
upper ends of an image histogram.
This operation allows grey levels to be distributed across
the primary part of the histogram. This solves the problem
when one has, say, a few very bright values in the image which
have the overall effect of darkening the rest of the image after
rescaling.
```
Usage:
1) newimg = histtruncate(img, lHistCut, uHistCut)
2) newimg = histtruncate(img, HistCut)
Arguments:
Usage 1)
img - Image to be processed.
lHistCut - Percentage of the lower end of the histogram
to saturate.
uHistCut - Percentage of the upper end of the histogram
to saturate. If omitted or empty defaults to the value
for lHistCut.
Usage 2)
HistCut - Percentage of upper and lower ends of the histogram to cut.
Returns:
newimg - Image with values clipped at the specified histogram
fraction values. If the input image was colour the
lightness values are clipped and stretched to the range
0-1. If the input image is greyscale no stretching is
applied. You may want to use imgnormalise() to achieve this.
```
See also: imgnormalise()
"""
function histtruncate(img::Array, lHistCut::Real, uHistCut::Real)
if lHistCut < 0 || lHistCut > 100 || uHistCut < 0 || uHistCut > 100
error("Histogram truncation values must be between 0 and 100")
end
if ndims(img) > 2
error("histtruncate only defined for grey scale images")
end
newimg = copy(img)
sortv = sort(newimg[:]) # Generate a sorted array of pixel values.
# Any NaN values will end up at the end of the sorted list. We
# need to ignore these.
# N = sum(.!isnan.(sortv)) # Number of non NaN values. v0.6
N = sum(broadcast(!,isnan.(sortv))) # compatibility for v0.5 and v0.6
# Compute indices corresponding to specified upper and lower fractions
# of the histogram.
lind = floor(Int, 1 + N*lHistCut/100)
hind = ceil(Int, N - N*uHistCut/100)
low_val = sortv[lind]
high_val = sortv[hind]
# Adjust image
newimg[newimg .< low_val] .= low_val
newimg[newimg .> high_val] .= high_val
return newimg
end
function histtruncate(img::Array, HistCut::Real)
return histtruncate(img, HistCut, HistCut)
end
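# Example usage (an illustrative sketch; the percentages are assumptions):
#   newimg = histtruncate(img, 2)      # clip 2% off both ends of the histogram
#   newimg = histtruncate(img, 1, 5)   # clip 1% off the lower end, 5% off the upper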
| ImagePhaseCongruency | https://github.com/peterkovesi/ImagePhaseCongruency.jl.git |
|
[
"MIT"
] | 0.2.2 | 6a18107b6fc89bb32eb5dcec609a7355b53e8b78 | code | 171 | using ImagePhaseCongruency
using Test
include("test_frequencyfilt.jl")
include("test_phasecongruency.jl")
include("test_syntheticimages.jl")
include("test_utilities.jl")
| ImagePhaseCongruency | https://github.com/peterkovesi/ImagePhaseCongruency.jl.git |
|
[
"MIT"
] | 0.2.2 | 6a18107b6fc89bb32eb5dcec609a7355b53e8b78 | code | 2967 | #=
Testing of frequencyfilt.jl
Hard to test these functions other than visually. This script simply
runs them all to make sure that they at least run.
Set disp = true to display images for visual verification
=#
disp = false
using ImagePhaseCongruency, ImageCore, Test, TestImages, FFTW
if disp
using PyPlot
end
println("Testing frequencyfilt...")
function mypause()
println("Hit return to continue ")
a = readline()
return nothing
end
if disp; PyPlot.set_cmap(PyPlot.ColorMap("gray")); end
testimg = Float64.(Gray.(testimage("lighthouse")))
rows = 101
cols = 200
(f, fx, fy) = filtergrids((rows, cols))
if disp; imshow(f); title("filtergrids() f"); mypause(); end
if disp; imshow(fx); title("filtergrids() fx"); mypause(); end
if disp; imshow(fy); title("filtergrids() fy"); mypause(); end
f = filtergrid((rows, cols))
if disp; imshow(f); title("filtergrid() f"); mypause(); end
(H1, H2, f) = monogenicfilters((rows, cols))
if disp; imshow(real(-im*H1)); title("H1"); mypause(); end
if disp; imshow(real(-im*H2)); title("H2"); mypause(); end
if disp; imshow(f); title("monogenicfilters() f"); mypause(); end
(H, f) = packedmonogenicfilters((rows, cols))
if disp; imshow(real(H)); title("packedmonogenicfilters() H1"); mypause(); end
if disp; imshow(imag(H)); title("packedmonogenicfilters() H2"); mypause(); end
sze = (rows,cols)
cutin = 0.1
cutoff = 0.2
n = 2
boost = 2
f = lowpassfilter(sze, cutoff, n)
if disp; imshow(f); title("lowpassfilter() cutoff 0.2"); mypause(); end
f = bandpassfilter(sze, cutin, cutoff, n)
if disp; imshow(f); title("bandpassfilter() 0.1 0.2"); mypause(); end
f = highboostfilter(sze, cutoff, n, boost)
if disp; imshow(f); title("highboostfilter() 0.2 "); mypause(); end
f = highpassfilter(sze, cutoff, n)
if disp; imshow(f); title("highpassfilter() 0.2"); mypause(); end
f = 0.2
fo = 0.3
sigmaOnf = 0.55
@test abs(loggabor(0, fo, sigmaOnf) - 0) < eps()
@test abs(loggabor(fo, fo, sigmaOnf) - 1) < eps()
(f, fx, fy) = filtergrids((rows, cols))
(sintheta, costheta) = gridangles(f, fx, fy)
if disp; imshow(sintheta); title("filtergrids() sintheta"); mypause(); end
if disp; imshow(costheta); title("filtergrids() costheta"); mypause(); end
angl = pi/4
wavelen = pi/8
flter = cosineangularfilter(angl, wavelen, sintheta, costheta)
if disp; imshow(flter); title("cosineangularfilter"); mypause(); end
thetaSigma = .4
flter = gaussianangularfilter(angl, thetaSigma, sintheta, costheta)
if disp; imshow(flter); title("gaussianangularfilter"); mypause(); end
(P, S, p, s) = perfft2(testimg)
if disp; imshow(testimg); title("testimg"); mypause(); end
if disp; imshow(p); title("periodic testimg"); mypause(); end
if disp; imshow(s); title("testimg - periodic testimg"); mypause(); end
s1 = geoseries(0.5, 2, 4)
s2 = geoseries((0.5, 4), 4)
@test sum(abs.(s1 .- [0.5000, 1.0000, 2.0000, 4.0000])) < eps()
@test sum(abs.(s2 .- [0.5000, 1.0000, 2.0000, 4.0000])) < eps()
nothing
| ImagePhaseCongruency | https://github.com/peterkovesi/ImagePhaseCongruency.jl.git |
|
[
"MIT"
] | 0.2.2 | 6a18107b6fc89bb32eb5dcec609a7355b53e8b78 | code | 3245 | #=
Hard to test these functions other than visually. This script simply
runs them all to make sure that they at least run.
Set the variable 'disp' to true to display the processed images
=#
disp = false
using Test, ImageCore, ImagePhaseCongruency, TestImages
if disp
using PyPlot
end
println("Testing phase congruency functions...")
#img = testimage("lena_gray")
img = Float64.(Gray.(testimage("lighthouse")))
disp ? imshow(img, ColorMap("gray")) : nothing
println("Phase preserving dynamic range compresion")
dimg = ppdrc(img, 50; clip=0.01, n=2)
disp ? imshow(dimg, ColorMap("gray")) : nothing
dimg = ppdrc(img, geoseries((20,60),3))
disp ? imshow(dimg[1], ColorMap("gray")) : nothing
println("phasecongmono")
(PC, or, ft, T) =
phasecongmono(img; nscale=4, minwavelength=3, mult=2,
sigmaonf=0.55, k=3, cutoff=0.5, g=10,
deviationgain=1.5, noisemethod=-1)
PC = phasecongmono(img, nscale=3)[1]
disp ? imshow(PC, ColorMap("gray")) : nothing
println("phasesymmono")
(phaseSym, symmetryEnergy, T) =
phasesymmono(img; nscale=3, minwavelength=3, mult=2,
sigmaonf=0.55, k=2, polarity=1, noisemethod=-1)
disp ? imshow(phaseSym, ColorMap("gray")) : nothing
phaseSym = phasesymmono(img)[1]
disp ? imshow(phaseSym, ColorMap("gray")) : nothing
println("phasecong3")
(M, m, or, ft, EO, T) = phasecong3(img; nscale=3, norient=6, minwavelength=3,
mult=2, sigmaonf=0.55, k=2, cutoff=0.5,
g = 10, noisemethod=-1)
disp ? imshow(M, ColorMap("gray")) : nothing
disp ? imshow(m, ColorMap("gray")) : nothing
println("phasesym")
(phaseSym, orient, totalEnergy, T) =
phasesym(img; nscale = 5, norient = 6, minwavelength = 3, mult= 2,
sigmaonf = 0.55, k = 2, polarity = 0, noisemethod = -1)
disp ? imshow(phaseSym, ColorMap("gray")) : nothing
println("ppdenoise")
cleanimage = ppdenoise(img, nscale = 5, norient = 6,
mult = 2.5, minwavelength = 2, sigmaonf = 0.55,
dthetaonsigma = 1.0, k = 3, softness = 1.0)
disp ? imshow(cleanimage, ColorMap("gray")) : nothing
println("monofilt")
nscale = 4
norient = 6
minWaveLength = 3
mult = 2
sigmaOnf = 0.55
dThetaOnSigma = 1.3
Lnorm = 0
orientWrap = false
(f, h1f, h2f, A, theta, psi) =
monofilt(img, nscale, minWaveLength, mult, sigmaOnf, orientWrap)
println("gaborconvolve")
(EO, BP) = gaborconvolve(img, nscale, norient, minWaveLength, mult,
sigmaOnf, dThetaOnSigma, Lnorm)
disp ? imshow(real.(EO[nscale,1]), ColorMap("gray")) : nothing
disp ? imshow(BP[nscale], ColorMap("gray")) : nothing
println("highpassmonogenic and bandpassmonogenic")
minwavelength = 4
maxwavelength = 20
n = 4
(ph, orient, E) = highpassmonogenic(img, maxwavelength, n)
disp ? imshow(ph, ColorMap("gray")) : nothing
disp ? imshow(orient, ColorMap("gray")) : nothing
disp ? imshow(E, ColorMap("gray")) : nothing
(ph, orient, E) = bandpassmonogenic(img, minwavelength, maxwavelength, n)
disp ? imshow(ph, ColorMap("gray")) : nothing
disp ? imshow(orient, ColorMap("gray")) : nothing
disp ? imshow(E, ColorMap("gray")) : nothing
nothing
| ImagePhaseCongruency | https://github.com/peterkovesi/ImagePhaseCongruency.jl.git |
|
[
"MIT"
] | 0.2.2 | 6a18107b6fc89bb32eb5dcec609a7355b53e8b78 | code | 1774 | #=
Testing of syntheticimages.jl
Hard to test these functions other than visually. This script simply
runs them all to make sure that they at least run.
Set 'disp' = true to display the images
=#
disp = false
using ImagePhaseCongruency, ImageCore, TestImages
if disp
using PyPlot
end
println("Testing syntheticimages...")
function mypause()
println("Hit return to continue ")
a = readline()
return nothing
end
disp ? PyPlot.set_cmap(PyPlot.ColorMap("gray")) : nothing
#lena = Float64.(testimage("lena_gray"))
# We need the test image to be square for the phase amplitude tests.
testimg = Float64.(Gray.(testimage("lighthouse")))[1:512, 1:512]
sze = 512
img = step2line(sze; nscales=50, ampexponent=-1, ncycles=1.5, phasecycles=0.25)
if disp
imshow(img); title("step2line() test image")
mypause()
end
img = circsine(sze; wavelength = 40, nscales = 50, ampexponent = -1,
offset = 0, p = 2, trim = false)
if disp
imshow(img); title("circsign() test image")
mypause()
end
img = starsine(sze; ncycles=10, nscales=50, ampexponent=-1, offset=0)
if disp
imshow(img); title("starsign() test image")
mypause()
end
img = noiseonf(sze, 1.5)
if disp
imshow(img); title("Noise with amplitude spectrum f^{-1.5}")
mypause()
end
newimg = nophase(testimg)
if disp
imshow(newimg); title("Testimg with randomized phase")
mypause()
end
newimg = quantizephase(testimg, 4)
if disp
imshow(newimg); title("Testimg with phase quantized to 4 levels")
mypause()
end
(newimg1, newimg2) = swapphase(testimg, testimg')
if disp
imshow(newimg1); title("Testimg with phase of Testimg transposed")
mypause()
imshow(newimg2); title("Testimg transposed with phase of Testimg")
mypause()
end
nothing
| ImagePhaseCongruency | https://github.com/peterkovesi/ImagePhaseCongruency.jl.git |
|
[
"MIT"
] | 0.2.2 | 6a18107b6fc89bb32eb5dcec609a7355b53e8b78 | code | 842 | #=
Testing of utilities.jl
=#
disp = false
using ImagePhaseCongruency, Test, TestImages
if disp
using PyPlot
end
println("Testing utilities.jl...")
function mypause()
println("Hit return to continue ")
a = readline()
return nothing
end
if disp; PyPlot.set_cmap(PyPlot.ColorMap("gray")); end
# fillnan
# replacenan
img = ones(30,30)
img[5:20,10:15] .= NaN
(newimg, mask) = replacenan(img)
(newimg, mask) = replacenan(img, 2)
if disp; imshow(newimg); title("Filled image"); mypause(); end
if disp; imshow(mask); title("non-NaN regions"); mypause(); end
# hysthresh
img = zeros(30,30)
img[3:25,15] .= 5
img[8:12,15] .= 10
img[15:20,15] .= 10
img[15,15] = 20
bw = hysthresh(img, 8, 20)
if disp; imshow(img); title("Unthresholded image"); mypause(); end
if disp; imshow(bw); title("Thresholded image"); mypause(); end
| ImagePhaseCongruency | https://github.com/peterkovesi/ImagePhaseCongruency.jl.git |
|
[
"MIT"
] | 0.2.2 | 6a18107b6fc89bb32eb5dcec609a7355b53e8b78 | docs | 1395 | ImagePhaseCongruency
=======================
[Build Status](https://travis-ci.com/peterkovesi/ImagePhaseCongruency.jl)
----------------------------------------------

## Installation
`pkg> add ImagePhaseCongruency`
## Summary
This package provides a collection of image processing functions that exploit
the importance of phase information in our perception of images. Local phase
information, rather than local image gradients, is used as the fundamental
building block for constructing feature detectors.
The functions form two main groups:
1) Functions that detect specific patterns of local phase for the purpose of feature detection. These include functions for the detection of edges, lines and corner features, and functions for detecting local symmetry.
2) Functions that enhance an image in a way that does not corrupt the local phase so that our perception of important features are not disrupted. These include functions for dynamic range compression and for denoising.
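A minimal sketch of both groups in action (the calls mirror the package's test suite; the parameter values are illustrative):

```julia
using ImagePhaseCongruency, TestImages, ImageCore

img = Float64.(Gray.(testimage("lighthouse")))

# 1) Feature detection: phase congruency via monogenic filters.
(PC, or, ft, T) = phasecongmono(img; nscale=4)

# 2) Phase preserving enhancement: dynamic range compression.
dimg = ppdrc(img, 50; clip=0.01, n=2)
```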
## Documentation
* [The main documentation page](https://peterkovesi.github.io/ImagePhaseCongruency.jl/dev/index.html)
* [Examples](https://peterkovesi.github.io/ImagePhaseCongruency.jl/dev/examples/)
* [Function reference](https://peterkovesi.github.io/ImagePhaseCongruency.jl/dev/functions/) | ImagePhaseCongruency | https://github.com/peterkovesi/ImagePhaseCongruency.jl.git |
|
[
"MIT"
] | 0.2.2 | 6a18107b6fc89bb32eb5dcec609a7355b53e8b78 | docs | 4678 | # Function Reference
## Index
```@index
```
-----------------------------------------------------------------
```@docs
phasecongmono(img::AbstractArray{T1,2}; nscale::Integer = 4, minwavelength::Real = 3,
mult::Real = 2.1, sigmaonf::Real = 0.55, k::Real = 3.0,
noisemethod::Real = -1, cutoff::Real = 0.5, g::Real = 10.0,
deviationgain::Real = 1.5) where T1 <: Real
```
```@docs
ppdrc(img::AbstractArray{T1,2}, wavelength::Vector{T2}; clip::Real=0.01, n::Integer=2) where {T1 <: Real, T2 <: Real}
```
```@docs
highpassmonogenic(img::AbstractArray{T1,2}, maxwavelength::Vector{T2}, n::Integer) where {T1 <: Real, T2 <: Real}
```
```@docs
bandpassmonogenic(img::AbstractArray{T1,2}, minwavelength::Vector{T2}, maxwavelength::Vector{T3}, n::Integer) where {T1 <: Real, T2 <: Real, T3 <: Real}
```
```@docs
phasesymmono(img::AbstractArray{T1,2}; nscale::Integer = 5, minwavelength::Real = 3,
mult::Real = 2.1, sigmaonf::Real = 0.55, k::Real = 2.0,
polarity::Integer = 0, noisemethod::Real = -1) where T1 <: Real
```
```@docs
monofilt(img::AbstractArray{T1,2}, nscale::Integer, minWaveLength::Real, mult::Real,
sigmaOnf::Real, orientWrap::Bool = false) where T1 <: Real
```
```@docs
gaborconvolve(img::AbstractArray{T1,2}, nscale::Integer, norient::Integer, minWaveLength::Real,
mult::Real, sigmaOnf::Real, dThetaOnSigma::Real, Lnorm::Integer = 0) where T1 <:Real
```
```@docs
phasecong3(img::AbstractArray{T1,2}; nscale::Integer = 4, norient::Integer = 6,
minwavelength::Real = 3, mult::Real = 2.1, sigmaonf::Real = 0.55,
k::Real = 2, cutoff::Real = 0.5, g::Real = 10,
noisemethod::Real = -1) where T1 <: Real
```
```@docs
phasesym(img::AbstractArray{T1,2}; nscale::Integer = 5, norient::Integer = 6,
minwavelength::Real = 3, mult::Real = 2.1, sigmaonf::Real = 0.55,
k::Real = 2.0, polarity::Integer = 0, noisemethod::Real = -1) where T1 <: Real
```
```@docs
ppdenoise(img::AbstractArray{T1,2}; nscale::Integer=5, norient::Integer=6,
mult::Real=2.5, minwavelength::Real = 2, sigmaonf::Real = 0.55,
dthetaonsigma::Real = 1.0, k::Real=3, softness::Real=1.0) where T1 <: Real
```
```@docs
filtergrids(rows::Integer, cols::Integer)
```
```@docs
filtergrid(rows::Integer, cols::Integer)
```
```@docs
monogenicfilters(rows::Integer, cols::Integer)
```
```@docs
packedmonogenicfilters(rows::Integer, cols::Integer)
```
```@docs
lowpassfilter(sze::Tuple{Integer, Integer}, cutoff::Real, n::Integer)
```
```@docs
bandpassfilter(sze::Tuple{Integer, Integer}, cutin::Real, cutoff::Real, n::Integer)
```
```@docs
highboostfilter(sze::Tuple{Integer, Integer}, cutoff::Real, n::Integer, boost::Real)
```
```@docs
highpassfilter(sze::Tuple{Integer, Integer}, cutoff::Real, n::Integer)
```
```@docs
loggabor(f::Real, fo::Real, sigmaOnf::Real)
```
```@docs
gridangles(freq::Array{T1,2},
fx::Array{T2,2}, fy::Array{T3,2}) where {T1 <: Real, T2 <: Real, T3 <: Real}
```
```@docs
cosineangularfilter(angl::Real, wavelen::Real,
sintheta::Array{T1,2}, costheta::Array{T2,2}) where {T1 <: Real, T2 <: Real}
```
```@docs
gaussianangularfilter(angl::Real, thetaSigma::Real,
sintheta::Array{T1,2}, costheta::Array{T2,2}) where {T1 <: Real, T2 <: Real}
```
```@docs
perfft2(img::Array{T,2}) where T <: Real
```
```@docs
geoseries(s1::Real, mult::Real, n::Integer)
```
```@docs
step2line(sze::Integer = 512; nscales::Integer=50, ampexponent::Real = -1,
ncycles::Real = 1.5, phasecycles::Real = 0.25)
```
```@docs
circsine(sze::Integer=512; wavelength::Real=40, nscales::Integer=50,
ampexponent::Real=-1, offset::Real=0, p::Integer=2,
trim::Bool=false)
```
```@docs
starsine(sze::Integer=512; ncycles::Real=10, nscales::Integer=50,
ampexponent::Real=-1, offset::Real=0)
```
```@docs
noiseonf(sze::Tuple{Integer, Integer}, p::Real)
```
```@docs
nophase(img::AbstractArray{T,2}) where T <: Real
```
```@docs
quantizephase(img::AbstractArray{T,2}, N::Integer) where T <: Real
```
```@docs
swapphase(img1::AbstractArray{T,2}, img2::AbstractArray{T,2}) where T <: Real
```
```@docs
fillnan(img::AbstractArray{T,N}) where T where N
```
```@docs
replacenan(img::AbstractArray{T,N}, defaultval::Real = 0) where T <: AbstractFloat where N
```
```@docs
hysthresh(img::AbstractArray{T0,2}, T1::Real, T2::Real) where T0 <: Real
```
| ImagePhaseCongruency | https://github.com/peterkovesi/ImagePhaseCongruency.jl.git |
|
[
"MIT"
] | 0.2.2 | 6a18107b6fc89bb32eb5dcec609a7355b53e8b78 | docs | 8366 |
# ImagePhaseCongruency
```@setup genimages
using TestImages, Images, ImageContrastAdjustment
using Random
Random.seed!(1234)
save("testimg.png", restrict(testimage("mandril_gray")))
save("blobs.png", Gray.(testimage("blobs")))
img = centered(Gray.(restrict(testimage("lighthouse"))))[-127:128, -127:128]
img .+= 0.25 * randn(size(img))
save("testimgplusnoise.png", clamp01!(img))
img = testimage("m51")
img = adjust_histogram(centered(img)[-128:127, -128:127], LinearStretching())
save("m51.png", img)
```
This package provides a collection of image processing functions that exploit
the importance of phase information in our perception of images. The functions
form two main groups:
1) Functions that detect specific patterns of local phase for the purpose of feature detection.
2) Functions that enhance an image in a way that does not corrupt the local phase so that our perception of important features are not disrupted.
## Installation
`pkg> add ImagePhaseCongruency`
## Feature detection via phase congruency
Rather than assume a feature is a point of maximal intensity gradient, the Local
Energy Model postulates that features are perceived at points in an image where
the Fourier components are maximally in phase. (See the Fourier Series logo of
this page). This model was developed by Morrone et al. [1986] and Morrone and
Owens [1987]. Kovesi [1997, 1999, 2002] subsequently developed methods of
computing phase congruency from quadrature pairs of log-Gabor wavelets.
Phase congruency is an illumination and contrast invariant measure of feature
significance. Unlike gradient based feature detectors, which can only detect
step features, phase congruency correctly detects features at all kind of phase
angle, and not just step features having a phase angle of 0 or 180 degrees.
Another key attribute is that phase congruency is a dimensionless quantity
ranging from 0 to 1, making it contrast invariant. This allows fixed threshold
values to be used over large classes of images.
* [`phasecongmono()`](@ref) Phase congruency of an image using monogenic filters.
* [`phasecong3()`](@ref) Computes edge and corner phase congruency in an image via log-Gabor filters.
* [Example](@ref Phase-congruency) of using `phasecongmono()` and `phasecong3()`.
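A minimal sketch of computing phase congruency (the keyword values below mirror the package's test suite):

```julia
using ImagePhaseCongruency, TestImages, ImageCore

img = Float64.(Gray.(testimage("lighthouse")))
(PC, or, ft, T) = phasecongmono(img; nscale=4, minwavelength=3, mult=2,
                                sigmaonf=0.55, k=3, cutoff=0.5, g=10,
                                deviationgain=1.5, noisemethod=-1)
```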
## Phase symmetry
A point of local symmetry in an image corresponds to a point where the local
frequency components are at either the minimum or maximum points in their
cycles, that is, where all the frequency components are at the most symmetric
points in their cycles. Like phase congruency, phase symmetry is a dimensionless
quantity.
* [`phasesym()`](@ref) Compute phase symmetry on an image via log-Gabor filters.
* [`phasesymmono()`](@ref) Phase symmetry of an image using monogenic filters.
* [Example](@ref demo_phasesymmono) of using `phasesymmono()`.
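A minimal sketch, reusing `img` from the sketch above (the keyword values mirror the test suite; `polarity=1` restricts detection to one polarity of symmetry):

```julia
(phaseSym, symmetryEnergy, T) = phasesymmono(img; nscale=3, minwavelength=3,
                                             mult=2, sigmaonf=0.55, k=2,
                                             polarity=1, noisemethod=-1)
```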
## Phase preserving denoising
This is a wavelet denoising scheme that uses non-orthogonal, complex valued,
log-Gabor wavelets, rather than the more usual orthogonal or bi-orthogonal
wavelets. Thresholding of wavelet responses in the complex domain allows one to
ensure that perceptually important phase information in the image is not
corrupted. It also allows threshold values to be determined automatically
from the statistics of the wavelet responses to the image.
* [`ppdenoise()`](@ref) Phase preserving wavelet image denoising.
* [Example](@ref demo_ppdenoise) of using `ppdenoise()`.
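A minimal sketch, again reusing `img` (the keyword values mirror the test suite):

```julia
cleanimage = ppdenoise(img; nscale=5, norient=6, mult=2.5, minwavelength=2,
                       sigmaonf=0.55, dthetaonsigma=1.0, k=3, softness=1.0)
```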
## Phase preserving dynamic range compression
A common method for displaying images with a high dynamic range is to use some
variant of histogram equalization. The problem with histogram equalization is
that the contrast amplification of a feature depends on how commonly its data
value occurs and this can lead to some undesirable distortions in the relative
amplitudes of features. Phase Preserving Dynamic Range Compression allows
subtle features in images to be revealed without these distortions. It also
allows the scale of analysis to be controlled. Perceptually important phase
information is preserved and the contrast amplification of structures in the
signal is purely a function of their amplitude.
* [`ppdrc()`](@ref) Phase Preserving Dynamic Range Compression.
* [Example](@ref demo_ppdrc) of using `ppdrc()`.
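A minimal sketch of single and multi-scale compression (mirroring the test suite; `geoseries` builds a geometric progression of scales):

```julia
dimg = ppdrc(img, 50; clip=0.01, n=2)       # a single scale (wavelength 50)
dimgs = ppdrc(img, geoseries((20, 60), 3))  # three scales from 20 to 60
```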
## Supporting filtering functions
* [`gaborconvolve()`](@ref) Convolve an image with a bank of log-Gabor filters.
* [`monofilt()`](@ref) Apply monogenic filters to an image to obtain 2D analytic signal.
* [`highpassmonogenic()`](@ref) Compute phase and amplitude in highpass images via monogenic filters.
## Test images and functions for manipulating image phase
* [`step2line()`](@ref) A phase congruent test image that interpolates from a step to a line.
* [`circsine()`](@ref) Generate a phase congruent circular sine wave grating.
* [`starsine()`](@ref) Generate a phase congruent star shaped sine wave grating.
* [`noiseonf()`](@ref) Create \$ 1/f^p \$ spectrum noise images.
* [`nophase()`](@ref) Randomize image phase leaving amplitude spectrum unchanged.
* [`quantizephase()`](@ref) Quantize phase values in an image.
* [`swapphase()`](@ref) Demonstrates phase - amplitude swapping between images.
## Utility functions for construction of filters in the frequency domain
* [`filtergrids()`](@ref) Generate grids for constructing frequency domain filters.
* [`filtergrid()`](@ref) Generate grid for constructing frequency domain filters.
* [`gridangles()`](@ref) Generate arrays of filter grid angles.
* [`monogenicfilters()`](@ref) Generate monogenic filter grids.
* [`packedmonogenicfilters()`](@ref) Monogenic filter where both filters are packed in the one Complex grid.
* [`lowpassfilter()`](@ref) Construct a low-pass Butterworth filter.
* [`highpassfilter()`](@ref) Construct a high-pass Butterworth filter.
* [`bandpassfilter()`](@ref) Construct a band-pass Butterworth filter.
* [`highboostfilter()`](@ref) Construct a high-boost Butterworth filter.
* [`loggabor()`](@ref) The logarithmic Gabor function in the frequency domain.
* [`cosineangularfilter()`](@ref) Orientation selective filter with cosine windowing function.
* [`gaussianangularfilter()`](@ref) Orientation selective filter with Gaussian windowing function.
* [`perfft2()`](@ref) 2D Fourier transform of Moisan's periodic image component.
* [`geoseries()`](@ref) Generate geometric series.
## Misc functions
* [`fillnan`](@ref) Fill NaN values in an image with closest non NaN value.
* [`replacenan`](@ref) Replace NaNs in an array with a specified value.
* [`hysthresh`](@ref) Hysteresis thresholding of an image.
## References
M. C. Morrone and R. A. Owens. "Feature detection from local energy". Pattern Recognition Letters, 6:303-313, 1987.
M. C. Morrone, J. R. Ross, D. C. Burr, and R. A. Owens. " Mach bands are phase dependent". Nature, 324(6094):250-253, November 1986.
Peter Kovesi, "Symmetry and Asymmetry From Local Phase". AI'97, Tenth Australian Joint Conference on Artificial Intelligence. 2 - 4 December 1997. Proceedings - Poster Papers. pp 185-190. [preprint](https://www.peterkovesi.com/papers/ai97.pdf)
Peter Kovesi, "Image Features From Phase Congruency". Videre: A Journal of Computer Vision Research. MIT Press. Volume 1, Number 3, Summer 1999. [paper](http://www.cs.rochester.edu/u/brown/Videre/001/v13.html)
Peter Kovesi, "Edges Are Not Just Steps". Proceedings of ACCV2002 The Fifth Asian Conference on Computer Vision, Melbourne Jan 22-25, 2002. pp 822-827. [preprint](https://www.peterkovesi.com/papers/ACCV62.pdf)
Peter Kovesi, "Phase Preserving Denoising of Images". The Australian Pattern Recognition Society Conference: DICTA'99. December 1999. Perth WA. pp 212-217. [preprint](https://www.peterkovesi.com/papers/denoise.pdf)
Peter Kovesi, "Phase Preserving Tone Mapping of Non-Photographic High Dynamic Range Images". Proceedings: The Australian Pattern Recognition Society Conference: Digital Image Computing: Techniques and Applications DICTA 2012. [preprint](https://www.peterkovesi.com/papers/DICTA2012-tonemapping.pdf)
| ImagePhaseCongruency | https://github.com/peterkovesi/ImagePhaseCongruency.jl.git |
|
[
"MIT"
] | 2.0.4 | 71cb4765d60fdf8fdb721bdd89f5cbff705fd2f5 | code | 837 | using AnalyticComb
using Documenter
DocMeta.setdocmeta!(AnalyticComb, :DocTestSetup, :(using AnalyticComb); recursive=true)
makedocs(;
modules=[AnalyticComb],
authors="fargolo <[email protected]> and contributors",
repo="https://github.com/fargolo/AnalyticComb.jl/blob/{commit}{path}#{line}",
sitename="AnalyticComb.jl",
format=Documenter.HTML(;
prettyurls=get(ENV, "CI", "false") == "true",
canonical="https://fargolo.github.io/AnalyticComb.jl",
edit_link="main",
assets=String[],
),
pages=[
"Introduction" => "index.md" ,
"Installation" => "install.md",
"Examples" => "examples.md",
"Function definitions" => "functions.md",
],
)
deploydocs(;
repo="github.com/fargolo/AnalyticComb.jl",
devbranch="main",
)
| AnalyticComb | https://github.com/fargolo/AnalyticComb.jl.git |
|
[
"MIT"
] | 2.0.4 | 71cb4765d60fdf8fdb721bdd89f5cbff705fd2f5 | code | 835 | module AnalyticComb
#using SymPy
using Reexport
using TaylorSeries
@reexport using SymPy
export
#basic operators
SEQ, MSET, PSET, CYC,
#misc
stirling_factorial_asym, stirling_catalan_asym, catalan_factorial, general_trees, triangulations,
#compositions and partitions
I_gf, partitions_gf, partitions_asym, primes_composition_asym,
restricted_sum_comp_gf, restricted_sum_comp, restricted_sum_part_gf, restricted_sum_part,
partitions_max_r, partitions_asym, fixed_size_comps, fixed_size_comps_asym,
#words and regular languages
words_without_k_run, longest_run_binary_asym,
bin_words_with_k_occurences, bin_words_with_k_occurences_constr,
bin_words_runs_prob, weighted_bin_runs_pval , weighted_bin_runs_prob
include("operators.jl")
include("misc.jl")
include("parts_comps.jl")
include("words_languages.jl")
end
| AnalyticComb | https://github.com/fargolo/AnalyticComb.jl.git |
|
[
"MIT"
] | 2.0.4 | 71cb4765d60fdf8fdb721bdd89f5cbff705fd2f5 | code | 1193 | """
stirling_factorial_asym(n)
Stirling approximation for n! (EIS A000142)
``n! \\sim \\sqrt{2 \\pi n} {\\frac{n}{e}}^{n}``
"""
stirling_factorial_asym(n) = (n/exp(1))^n*sqrt(2*pi*n)
"""
stirling_catalan_asym(n)
Stirling approximation for n_th Catalan number. (EIS A000108)
``C_{n} \\sim \\frac{4^n}{\\sqrt{\\pi n^3}}``
"""
stirling_catalan_asym(n) = 4^n/(sqrt(pi*n^3))
"""
catalan_factorial(n)
The n_th Catalan number. (EIS A000108)
Calculated using factorials.
``C_{n} = \\frac{(2n)!}{(n+1)!n!}``
"""
catalan_factorial(n) = factorial(2n) / (factorial(n+1) * factorial(n))
"""
general_trees(n)
The number of general trees, given by (n-1)_th Catalan number. (EIS A000108)
Calculated using binomial coefficients.
``G_n = C_{n-1} = \\frac{1}{n} {2n - 2 \\choose n - 1}``
"""
general_trees(n) = 1/n * binomial(2n - 2,n-1)
"""
triangulations(n)
The number of triangulations, given by the n_th Catalan number. (EIS A000108)
Calculated using binomial coefficients. A maximal decomposition of the convex (n + 2)-gon defined by the points into n triangles.
``T_n = C_{n-1} = \\frac{1}{n+1} {2n \\choose n}``
"""
triangulations(n) = 1/(n+1) * binomial(2n,n)
| AnalyticComb | https://github.com/fargolo/AnalyticComb.jl.git |
|
[
"MIT"
] | 2.0.4 | 71cb4765d60fdf8fdb721bdd89f5cbff705fd2f5 | code | 1157 | """
SEQ(z)
Sequence operator (Pólya quasi-inverse operator).
Defined as ``A = SEQ(B) \\implies A(z) = \\frac{1}{1 - B(z)}``.
"""
SEQ(z) = 1/(1-z)
"""
MSET(z,max)
Multiset operator (Pólya exponential operator).
Defined as ``A = MSET(B) \\implies A(z) = exp(\\sum_{1}^{\\infty} \\frac{1}{k} B(z^k))``.
Returns a SymPy `:Sym` object.
"""
function MSET(z,max)
n = SymPy.symbols("n")
return(exp(summation(1/n * z^n,(n,1,max))))
end
#MSET(z) = prod(1-z^n)^((-1)*(-B_n))
"""
PSET(z,max)
Powerset operator (modified Pólya exponential operator).
Defined as ``A = PSET(B) \\implies A(z) = exp(\\sum_{1}^{\\infty} \\frac{(-1)^{k-1}}{k} B(z^k))``.
Returns a SymPy `:Sym` object.
"""
function PSET(z,max)
n = SymPy.symbols("n")
return(exp(summation((-1)^(n-1)/n * z^n,(n,1,max))))
end
"""
CYC(z,max)
Cycle operator (Pólya logarithm).
Defined as ``A = CYC(B) \\implies A(z) = \\sum_{1}^{\\infty} \\frac{\\phi(k)}{k} \\log \\frac{1}{1-B(z^k)}``.
Returns a SymPy `:Sym` object.
"""
function CYC(z,max)
n = SymPy.symbols("n")
return(summation(SymPy.sympy.totient(n)/n * log(1/(1-z^n)) ,(n,1,max))) # Pólya logarithm; no extra z^n factor
end
| AnalyticComb | https://github.com/fargolo/AnalyticComb.jl.git |
|
[
"MIT"
] | 2.0.4 | 71cb4765d60fdf8fdb721bdd89f5cbff705fd2f5 | code | 3451 | """
I_gf()
Integers as combinatorial structures.
``I(z)= \\sum_{n \\geq 1} z^n = \\frac{z}{1-z}``.
Returns a SymPy `:Sym` object.
"""
function I_gf()
z = SymPy.symbols("z")
return(z*SEQ(z))
end
"""
partitions_gf(z,max)
Generating function for integer partitions.
Defined as ``P(z) = \\prod_{m = 1}^{\\infty} \\frac{1}{1-z^m}``
Returns a SymPy `:Sym` object.
Use `SymPy.series` to obtain counts (EIS A000041): `SymPy.series(partitions_gf(z,10),z,0,8)` for n up to 8.
"""
function partitions_gf(z,max)
prod([1/(1-z^m) for m in 1:max])
end
#MSET(I_gf(z)) yields the correct gen. function, as in page 41 (equation 38),
# but series(MSET(I_gf(z))) returns coefficients in 2^n
"""
partitions_asym(n)
Asymptotics for partition of integers (EIS A000041) by Hardy and Ramanujan, later improved by Rademache
"""
partitions_asym(n) = (1/(4*n*sqrt(3)))*exp(pi*sqrt(2n/3))
"""
primes_composition_asym(n)
Asymptotics for composition of n into prime parts (EIS A023360).
``B_{n} \\sim 0.30365 * 1.47622^n``
"""
primes_composition_asym(n) = 0.30365 * 1.47622^n
"""
restricted_sum_comp_gf(r)
Generating function for compositions with restricted summands.
Returns a SymPy `:Sym` object.
"""
function restricted_sum_comp_gf(r)
z = SymPy.symbols("z")
return((1-z)/(1-2z+z^(r+1)))
end
"""
restricted_sum_comp(n,r)
Number of compositions of n with components in the set {1,2,..,r}.
r = 2 yields Fibonacci numbers (EIS A000045): ``F_{n} = F_{n-1} + F_{n-2}``.
r>2 yields generalized Fibonacci numbers.
"""
function restricted_sum_comp(n,r)
restricted_sum_comp_gf(z,r) = (1-z)/(1-2z+z^(r+1))
taylor_ser = TaylorSeries.taylor_expand(z -> restricted_sum_comp_gf(z,r),0;order=n+1)
return(TaylorSeries.getcoeff(taylor_ser,n))
end
"""
restricted_sum_part_gf(r)
Generating function for partitions with restricted summands.
Returns a SymPy `:Sym` object.
"""
function restricted_sum_part_gf(r)
z = SymPy.symbols("z")
return(prod([SEQ(z^i) for i in r]))
end
"""
restricted_sum_part(n,r)
Number of partitions of n with components (summands) drawn from the set r.
n must be an integer and r must be a set of integers, like in r = [1,5,10,25] , n = 99.
"""
function restricted_sum_part(n,r)
restricted_sum_part_gf(z,r) = prod([SEQ(z^i) for i in r])
taylor_ser = TaylorSeries.taylor_expand(z -> restricted_sum_part_gf(z,r),0;order=n+1)
return(TaylorSeries.getcoeff(taylor_ser,n))
end
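# Example usage (the classic making-change problem, mirrored in the test suite):
# the number of ways of giving change of 99 cents with coins of denomination
# 1, 5, 10 and 25.
#   restricted_sum_part(99, [1, 5, 10, 25])   # == 213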
"""
partitions_max_r(n,r)
Number of partitions of size n whose summands lie in the set {1, 2, ... , r}
n must be an integer and r is the maximum value in the set of summands.
"""
partitions_max_r(n,r) = n^(r-1)*(1 / (factorial(r)*factorial(r-1)))
"""
partitions_asym(n,tau)
Asymptotics for partitions of size n whose summands lie in the arbitrary finite set ``\\tau`` (tau).
tau must be an array of integers.
``{P_n}^{\\tau} \\sim \\frac{1}{\\prod_{\\omega \\in \\tau} \\omega} \\frac{n^{r-1}}{(r-1)!}``
"""
partitions_asym(n,tau) = (1/prod(tau)) * (n^(length(tau)-1) / factorial(length(tau)-1))
"""
fixed_size_comps(n,k)
Compositions of size n made of k summands,
``{C_n}^{k} = {n-1 \\choose k -1}``
"""
fixed_size_comps(n,k) = binomial(n-1,k-1)
"""
fixed_size_comps_asym(n,k)
Asymptotics for compositions of size n made of k summands,
``{C_n}^{k} \\sim \\frac{n^{k-1}}{(k-1)!}``
"""
fixed_size_comps_asym(n,k) = n^(k-1)/factorial(k-1) | AnalyticComb | https://github.com/fargolo/AnalyticComb.jl.git |
|
[
"MIT"
] | 2.0.4 | 71cb4765d60fdf8fdb721bdd89f5cbff705fd2f5 | code | 4805 | """
words_without_k_run(k,n;m=2)
OGF of words without k consecutive occurrences of a designated letter
for an alphabet of cardinality m (defaults to binary words: m=2).
``\\frac{1 - z^k}{ 1 - mz + (m-1)z^{k+1} }``
For instance, if n=4 and k=3, these words are not counted: aaab, baaa, aaaa.
"""
function words_without_k_run(k,n;m=2)
word_run_ogf(m,k,z) = (1-z^k)/(1 - m*z + (m-1)*z^(k+1))
taylor_ser = TaylorSeries.taylor_expand(z -> word_run_ogf(m,k,z),0;order=n+1)
TaylorSeries.getcoeff(taylor_ser,n)
end
"""
longest_run_binary_asym(n)
Asymptotics for the average of the longest run of ``a``s in a random binary string of length n.
``\\log_2 n`` .
For instance, a random binary word with 10 letters (e.g. baaabababb) will present,
on average, ``3.32...`` consecutive ``a``s
"""
longest_run_binary_asym(n) = log2(n)
"""
bin_words_with_k_occurences(k,n)
The set L of binary words that contain exactly k occurrences of the letter b
``L = SEQ(a){(b SEQ(a))}^k \\implies L(z) = \\frac{z^k}{{(1-z)}^{k+1}} ``
For instance, among binary words with 10 letters, there are 210 words with 4 ``b``s.
(``[z^{10}]L(z) = 210``)
"""
function bin_words_with_k_occurences(k,n)
word_ogf(z,k) = (z^k)/((1 - z)^(k+1))
taylor_ser = TaylorSeries.taylor_expand(z->word_ogf(z,k),0;order=n+1)
TaylorSeries.getcoeff(taylor_ser,n)
end
"""
bin_words_with_k_occurences_constr(k,n,d)
The set L of binary words that contain exactly k occurrences of the letter b, constrained by the maximum distance d among occurrences
``L^{[d]} = SEQ(a){(b SEQ_{<d}(a))}^{k-1}(b SEQ(a))``.
For instance, among binary words with 10 letters, there are 45 words with 4 ``b``s in which the maximum distance between ``b``s is 2 (e.g.aaabababab)
(``[z^{10}]L^{[2]}(z) = 45``).
"""
function bin_words_with_k_occurences_constr(k,n,d)
# Binomial convolution. See page 52 in Flaj & Sedg.
accum = Int128[]
for j in 0:Int(round(n/d))
if(j>=k-1 || j*d >=n)
break
end
#print("\nj is ",j)
acc = (-1)^j * binomial(k-1,j) * binomial(n-d*j,k)
#print("\naccum is ",acc)
push!(accum,acc)
end
#print("\nSum is ")
sum(accum)
end
"""
bin_words_runs_coeff(r;n_tot=200)
Taylor series coefficient from generating function for binary words (when p=q=prob(a)=prob(b)) that never have more than r consecutive identical letters.
Counts the binary words that never have more than r consecutive identical letters (obtained by setting α = β = r in the double-run construction).
n_tot defaults to 200, following the example in Flajolet & Sedgewick, page 52.
"""
function bin_words_runs_coeff(r;n_tot=200)
w_rr(r,z) = (1-z^(r+1))/(1-2z+z^(r+1)) # OGF
#w_rr(r,z) = sum(z^x for x in 0:r)/(1 - sum(z^x for x in 1:r)) # Alternate form
coefs = TaylorSeries.taylor_expand(z -> w_rr(r,z),0;order=n_tot+1)
TaylorSeries.getcoeff(coefs,n_tot)
end
"""
bin_words_runs_prob(k,n)
Returns the probability associated with k-length double runs (of either ``a``s or ``b``s) in a sequence of size n (when p=q=prob(a)=prob(b)).
``W ∼= SEQ(b) SEQ(a SEQ(a) b SEQ(b)) SEQ(a).``
For instance, if n=5 and k=2, the probability of obtaining strings such as bbaab and aabba.
"""
function bin_words_runs_prob(k,n)
#a = FastRational{Int128}(1/(2^n))
#a = Rational{BigInt}(1/(2^n))
a = 1//(BigInt(2)^n)
#a = 1/(2^n)
result_sym = a*(bin_words_runs_coeff(k;n_tot=n) - bin_words_runs_coeff(k-1;n_tot=n))
return(Float64(result_sym))
end
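# Usage sketch (matches the value quoted in the package docs and tests):
#   bin_words_runs_prob(6, 200) # ≈ 0.16602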
"""
weighted_bin_runs_prob(p,q,l,n)
Weighted model for consecutive runs in binary words.
Probability of the absence of l-runs among a sequence of n random trials with probabilities p and q.
``[z^{n}] \\frac{1 - p^l z^l}{1 - z + q p^l z^{l+1}}``.
Use diff over output values to obtain a probability distribution. For n=15, p=0.4 and q=0.6:
`raw_probs = map(x->weighted_bin_runs_prob(0.4,0.6,x,15),0:1:15);plot(diff(raw_probs))`
"""
function weighted_bin_runs_prob(p,q,l,n)
if (abs(p + q - 1) > 0.01) || (l > n)
println("p + q must be equal to 1 and l <= n")
return(NaN)
end
wei_mgf(p,q,l,z) = (1 - p^l * z^l )/(1 - z + q*(p^l)*(z^(l+1)))
tay_exp = TaylorSeries.taylor_expand(z -> wei_mgf(p,q,l,z),0;order=n+1)
TaylorSeries.getcoeff(tay_exp,n)
end
"""
weighted_bin_runs_pval(p,q,l,n)
p-value for a one-tailed test based on the exact distribution given by the weighted model for consecutive runs (`weighted_bin_runs_prob`).
"""
function weighted_bin_runs_pval(p,q,l,n)
if (abs(p + q - 1) > 0.01) || (l > n)
println("p + q must be equal to 1 and l <= n")
return(NaN)
end
weighted_coeffs = map(x->weighted_bin_runs_prob(p,q,x,n),(l-1):1:n)
probs = diff(weighted_coeffs)
Float64(sum(probs))
end
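# Usage sketch (parameters mirror the docstring example of
# `weighted_bin_runs_prob` above; illustrative only): a p-value for runs of
# length l = 6 in n = 15 trials with p = 0.4, q = 0.6:
#   weighted_bin_runs_pval(0.4, 0.6, 6, 15)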
| AnalyticComb | https://github.com/fargolo/AnalyticComb.jl.git |
|
[
"MIT"
] | 2.0.4 | 71cb4765d60fdf8fdb721bdd89f5cbff705fd2f5 | code | 1226 | using AnalyticComb
using Test
@testset "AnalyticComb.jl" begin
@testset "Words" begin
# Binary words with distance constraints
## 4 occurrences in a 10-letter word separated by maximum distance of 2
@test bin_words_with_k_occurences_constr(4,10,2) == 45
@test words_without_k_run(3,10) == 504
binary_double_results = map(i->bin_words_runs_prob(i,200),3:12)
# Check that none of the results deviates from the reference values by more than 0.01
@test all(abs.(binary_double_results .-
[6.5494e-8,0.00070,0.03397,0.16602,0.25741,0.22352,0.14594,0.08292,0.04402,0.02260]) .< 0.01)
end
@testset "PartComps" begin
@test restricted_sum_part(99,[1,5,10,25]) == 213
z = SymPy.symbols("z")
integers = collect(SymPy.series(I_gf(),z,0,10),z)
@test integers.coeff(z,3) == 1
A000041 = [1, 2, 3, 5, 7, 11, 15, 22, 30, 42]
part_asym_diffs = (map(x -> partitions_asym(x),collect(1:10)) - A000041) ./ A000041
@test all(part_asym_diffs .< 1)
@test fixed_size_comps(10,3) == 36
@test fixed_size_comps_asym(10,3) == 50
end
end
| AnalyticComb | https://github.com/fargolo/AnalyticComb.jl.git |
|
[
"MIT"
] | 2.0.4 | 71cb4765d60fdf8fdb721bdd89f5cbff705fd2f5 | docs | 2489 | # AnalyticComb.jl
[](https://fargolo.github.io/AnalyticComb.jl/dev/)
[](https://github.com/fargolo/AnalyticComb.jl/actions/workflows/CI.yml?query=branch%3Amain)
[](https://codecov.io/gh/fargolo/AnalyticComb.jl)
# Introduction
In 1751, Euler was studying the number of ways in which a given convex polygon could be decomposed into triangles by diagonal lines.[^1]
He realized that the progression of numbers in the solution $S = 1, 2, 5, 14, 42, 132,...$ was directly related to the coefficients of the series expansion of the polynomial fraction $\frac{1 - 2a - \sqrt{1 - 4a}}{2aa}$, that is:
$1 + 2a + 5a^2 + 14a^3 + 42a^4 + 132a^5 + ...$
Given any constructable combinatorial structure, one can use a set of operators to find a generating function and then approach the problem analytically.
See the [docs](https://fargolo.github.io/AnalyticComb.jl/dev/).
Check the text book by [Flajolet & Sedgewick](https://algo.inria.fr/flajolet/Publications/book.pdf) and [Coursera's full course by Robert Sedgewick](https://www.coursera.org/learn/analytic-combinatorics) for more.
Kudos to [Ricardo Bittencourt](https://github.com/ricbit/) for his introductory texts on the subject and for helping in an initial implementation.
# Quick start
## Install
Python package sympy is required for some utilities.
```
$python -m pip install --upgrade pip
$pip install sympy
```
Then, from Julia:
```
pkg>add AnalyticComb
```
## Example
This software can be used to solve problems such as Polya's problem of partitions with restricted summands [^2]. What is the number of ways of giving change of 99 cents using pennies (1 cent), nickels (5 cents), dimes (10 cents) and quarters (25 cents)?
```
julia> using AnalyticComb
julia> restricted_sum_part_gf([1,5,10,25]) # examine the generating function from specification SEQ(z)*SEQ(z^5)*SEQ(z^10)*SEQ(z^25)
1
────────────────────────────────────
⎛ 5⎞ ⎛ 10⎞ ⎛ 25⎞
(1 - z)⋅⎝1 - z ⎠⋅⎝1 - z ⎠⋅⎝1 - z ⎠
julia> restricted_sum_part(99,[1,5,10,25]) # Counts for 99 as a sum of elements in (1,5,10,25).
213
```
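As a further sketch of the API (the expected result below is taken from the package's own test suite):
```
julia> words_without_k_run(3,10) == 504 # 10-letter binary words with no 3 consecutive occurrences of a letter
true
```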
[^1]: Flajolet, P., & Sedgewick, R. (2009). Analytic combinatorics. Cambridge University press. Page 20
[^2]: Ibid. page 43
| AnalyticComb | https://github.com/fargolo/AnalyticComb.jl.git |
|
[
"MIT"
] | 2.0.4 | 71cb4765d60fdf8fdb721bdd89f5cbff705fd2f5 | docs | 1659 | # Binary word double runs
For the probability of consecutive double runs (either 0s or 1s) of length k in binary words of length n, use `bin_words_runs_prob(k,n)`:
```
julia>using AnalyticComb
julia> bin_words_runs_prob(6,200) # e.g. 100000011010... or 01111110101...
0.166...
```
# Restricted summands problem
Solving Polya partitions with restricted summands (denumerants) problem[^1].
What is the number of ways of giving change of 99 cents using pennies (1 cent), nickels (5 cents), dimes (10 cents) and quarters (25 cents)?
That is, the number of ways to obtain 99 by summing 1s,5s,10s and 25s.
Or the number of distinct tuples ``(k_1, k_2, k_3, k_4), \quad k_i \in \mathbb{N}`` satisfying the equation:
``1 k_1 + 5 k_2 + 10 k_3 + 25 k_4 = 99``.
The generating function is ``P(z) = SEQ(z)*SEQ(z^5)*SEQ(z^{10})*SEQ(z^{25})`` and the solution is the
coefficient of ``z^{99}`` in the expansion: ``[z^{99}] T(P(z))``.
``T(P(z)) = ... + 213 z^{99} + ...``
Function `restricted_sum_part_gf(r)` returns the generating function for elements in `r` and `restricted_sum_part(k,r)` returns the coefficient in ``z^k`` for the generating function with elements in `r`.
```
julia> restricted_sum_part_gf([1,5,10,25]) # examine the generating function SEQ(z)*SEQ(z^5)*SEQ(z^10)*SEQ(z^25)
1
────────────────────────────────────
⎛ 5⎞ ⎛ 10⎞ ⎛ 25⎞
(1 - z)⋅⎝1 - z ⎠⋅⎝1 - z ⎠⋅⎝1 - z ⎠
julia> restricted_sum_part(99,[1,5,10,25]) # Counts for 99 as a sum of elements in (1,5,10,25).
213
```
[^1]: Flajolet, P., & Sedgewick, R. (2009). Analytic combinatorics. Cambridge University press. Page 43 | AnalyticComb | https://github.com/fargolo/AnalyticComb.jl.git |
|
[
"MIT"
] | 2.0.4 | 71cb4765d60fdf8fdb721bdd89f5cbff705fd2f5 | docs | 646 | ```@meta
CurrentModule = AnalyticComb
```
# Index
List of functions.
```@index
```
# Operators
Basic operators SEQ, MSET, PSET, CYC.
```@autodocs
Modules = [AnalyticComb]
Pages = ["operators.jl"]
```
# Partitions and compositions
Partitions and compositions of integers.
```@autodocs
Modules = [AnalyticComb]
Pages = ["parts_comps.jl"]
```
# Miscellaneous
Miscellaneous functions.
```@autodocs
Modules = [AnalyticComb]
Pages = ["misc.jl"]
```
# Words and regular languages
Word counts, such as the number of words with consecutive runs.
```@autodocs
Modules = [AnalyticComb]
Pages = ["words_languages.jl"]
```
| AnalyticComb | https://github.com/fargolo/AnalyticComb.jl.git |
|
[
"MIT"
] | 2.0.4 | 71cb4765d60fdf8fdb721bdd89f5cbff705fd2f5 | docs | 2402 | ```@meta
CurrentModule = AnalyticComb
```
# AnalyticComb
Documentation for [AnalyticComb](https://github.com/fargolo/AnalyticComb.jl).
# References
This package implements solutions for combinatorial problems using analytic combinatorics.
Check the text book by [Flajolet & Sedgewick](https://algo.inria.fr/flajolet/Publications/book.pdf) and [Coursera's full course by Robert Sedgewick](https://www.coursera.org/learn/analytic-combinatorics).
Kudos to [Ricardo Bittencourt](https://github.com/ricbit/) for his introductory texts on the subject and for helping in an initial implementation.
# Background
In 1751, Euler was studying the number of ways in which a given convex polygon could be decomposed into triangles by diagonal lines.[^1]
He realized that the progression of numbers in the solution (1, 2, 5, 14, 42, 132,...) was directly related to the coefficients of the series expansion of the polynomial fraction
``\frac{1 - 2a - \sqrt{1 - 4a}}{2aa}``, that is:
``1 + 2a + 5a^2 + 14a^3 + 42a^4 + 132a^5 + ...``
Given any constructable combinatorial structure, one can use a set of operators to find a generating function and then approach the problem analytically.
# Introduction
For newcomers: this is an analytic approach to combinatorial problems. Modelling this type of problem often relies on intuitive arguments. The symbolic method describes such situations with a grammar of operators: Sum, Cartesian product, Sequence, Multiset, Powerset and Cycle. Such operators yield an algebraic expression (e.g. ``P(z)``), called the generating function, which is directly related to the problem via complex analysis. We are generally interested in the coefficients of its series expansion. That is, let the series expansion of ``P(z)`` be ``T(P(z)) = \sum_{n=1}^{\infty} a_n z^n``. Then the values of ``a_n`` correspond to the counts of objects of size ``n`` in this combinatorial class.
For instance, the number of binary words (e.g. abababbabab...) of size n is given by ``W_n = 2^n``. Using the sequence operator (``SEQ(A) \implies A(z) = \frac{1}{1-z}``) , we find the generating function:
``W = SEQ(Z+Z) \implies W(z) = \frac{1}{1 - 2z}``. ``T(W(z)) = 1 + 2z + 4z^2 + 8z^3 + ...``.
This approach can be used to solve complex problems in a systematic way.
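As a minimal illustration (a sketch, not part of the package itself; it uses TaylorSeries.jl, the same expansion machinery the package relies on internally), the coefficients of ``W(z)`` can be recovered numerically:
```
using TaylorSeries
W = TaylorSeries.taylor_expand(z -> 1/(1 - 2z), 0; order = 6)
TaylorSeries.getcoeff(W, 3) # 8.0, i.e. the 2^3 binary words of length 3
```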
[^1]: Flajolet, P., & Sedgewick, R. (2009). Analytic combinatorics. Cambridge University press. Page 20
| AnalyticComb | https://github.com/fargolo/AnalyticComb.jl.git |
|
[
"MIT"
] | 2.0.4 | 71cb4765d60fdf8fdb721bdd89f5cbff705fd2f5 | docs | 350 |
# Installation
Python package sympy is required for some functionalities. SymPy.jl is reexported in `AnalyticComb.jl`.
```
$python -m pip install --upgrade pip
$pip install sympy
```
Then, from Julia:
```
julia> # type the right bracket to enter pkg REPL ']'
pkg>add AnalyticComb
```
Or
```
julia>using Pkg; Pkg.add("AnalyticComb")
```
| AnalyticComb | https://github.com/fargolo/AnalyticComb.jl.git |
|
[
"MIT"
] | 1.8.3 | 0ca8d20076eac7fef4ea8394082bd3ffdd3c4733 | code | 400 | using Documenter, ProxSDP
makedocs(
modules = [ProxSDP],
doctest = false,
clean = true,
format = Documenter.HTML(assets = ["assets/favicon.ico"]),
sitename = " ",
authors = "Mario Souto, Joaquim D. Garcia and contributors.",
pages = [
"Home" => "index.md",
"manual.md"
]
)
deploydocs(
repo = "github.com/mariohsouto/ProxSDP.jl.git",
) | ProxSDP | https://github.com/mariohsouto/ProxSDP.jl.git |
|
[
"MIT"
] | 1.8.3 | 0ca8d20076eac7fef4ea8394082bd3ffdd3c4733 | code | 771 | path = joinpath(dirname(@__FILE__), "..", "..")
push!(Base.LOAD_PATH, path)
using ProxSDP, JuMP
# Create a JuMP model using ProxSDP as the solver
model = Model(ProxSDP.Optimizer)
set_optimizer_attribute(model, "log_verbose", true)
set_optimizer_attribute(model, "tol_gap", 1e-4)
set_optimizer_attribute(model, "tol_feasibility", 1e-4)
@variable(model, X[1:2,1:2], PSD)
x = X[1,1]
y = X[2,2]
@constraint(model, ub_x, x <= 2)
@constraint(model, ub_y, y <= 30)
@constraint(model, con, 1x + 5y <= 3)
# ProxSDP supports maximization or minimization
# of linear functions
@objective(model, Max, 5x + 3 * y)
# Then we can solve the model
JuMP.optimize!(model)
# And ask for results!
JuMP.objective_value(model)
JuMP.value(x)
JuMP.value(y) | ProxSDP | https://github.com/mariohsouto/ProxSDP.jl.git |
|
[
"MIT"
] | 1.8.3 | 0ca8d20076eac7fef4ea8394082bd3ffdd3c4733 | code | 688 | path = joinpath(dirname(@__FILE__), "..", "..")
push!(Base.LOAD_PATH, path)
using SumOfSquares
using DynamicPolynomials
using ProxSDP, SCS
# Using ProxSDP as the SDP solver
# model = SOSModel(with_optimizer(ProxSDP.Optimizer, log_verbose=true, max_iter=100000, full_eig_decomp=true))
model = SOSModel(ProxSDP.Optimizer)
set_optimizer_attribute(model, "log_verbose", true)
set_optimizer_attribute(model, "max_iter", 100000)
set_optimizer_attribute(model, "full_eig_decomp", true)
@polyvar x z
@variable(model, t)
p = x^4 + x^2 - 3*x^2*z^2 + z^6 - t
@constraint(model, p >= 0)
@objective(model, Max, t)
optimize!(model)
println("Solution: $(value(t))")
# Returns the lower bound -.17700
| ProxSDP | https://github.com/mariohsouto/ProxSDP.jl.git |
|
[
"MIT"
] | 1.8.3 | 0ca8d20076eac7fef4ea8394082bd3ffdd3c4733 | code | 13790 | const MOI = MathOptInterface
MOI.Utilities.@product_of_sets(Zeros, MOI.Zeros)
MOI.Utilities.@product_of_sets(Nonpositives, MOI.Nonpositives)
MOI.Utilities.@struct_of_constraints_by_set_types(
StructCache,
MOI.Zeros,
MOI.Nonpositives,
MOI.SecondOrderCone,
MOI.PositiveSemidefiniteConeTriangle,
)
const OptimizerCache{T} = MOI.Utilities.GenericModel{
T,
MOI.Utilities.ObjectiveContainer{T},
MOI.Utilities.VariablesContainer{T},
StructCache{T}{
MOI.Utilities.MatrixOfConstraints{
T,
MOI.Utilities.MutableSparseMatrixCSC{
T,
Int64,
MOI.Utilities.OneBasedIndexing,
},
Vector{T},
Zeros{T},
},
MOI.Utilities.MatrixOfConstraints{
T,
MOI.Utilities.MutableSparseMatrixCSC{
T,
Int64,
MOI.Utilities.OneBasedIndexing,
},
Vector{T},
Nonpositives{T},
},
MOI.Utilities.VectorOfConstraints{
MOI.VectorOfVariables,
MOI.SecondOrderCone,
},
MOI.Utilities.VectorOfConstraints{
MOI.VectorOfVariables,
MOI.PositiveSemidefiniteConeTriangle,
},
},
}
Base.@kwdef mutable struct ConeData
psc::Vector{Vector{Int}} = Vector{Int}[] # semidefinite
soc::Vector{Vector{Int}} = Vector{Int}[] # second order
end
mutable struct Optimizer <: MOI.AbstractOptimizer
cones::Union{Nothing, ConeData}
zeros::Union{Nothing, Zeros{Float64}}
nonps::Union{Nothing, Nonpositives{Float64}}
sol::Result
options::Options
end
function Optimizer(;kwargs...)
optimizer = Optimizer(
nothing,
nothing,
nothing,
Result(),
Options()
)
for (key, value) in kwargs
MOI.set(optimizer, MOI.RawOptimizerAttribute(string(key)), value)
end
return optimizer
end
#=
Basic Attributes
=#
MOI.get(::Optimizer, ::MOI.SolverName) = "ProxSDP"
MOI.get(::Optimizer, ::MOI.SolverVersion) = "1.8.2"
function MOI.set(optimizer::Optimizer, param::MOI.RawOptimizerAttribute, value)
fields = fieldnames(Options)
name = Symbol(param.name)
if name in fields
setfield!(optimizer.options, name, value)
else
error("No parameter matching $(name)")
end
return value
end
function MOI.get(optimizer::Optimizer, param::MOI.RawOptimizerAttribute)
fields = fieldnames(Options)
name = Symbol(param.name)
if name in fields
return getfield(optimizer.options, name)
else
error("No parameter matching $(name)")
end
end
MOI.supports(::Optimizer, ::MOI.Silent) = true
function MOI.set(optimizer::Optimizer, ::MOI.Silent, value::Bool)
if value == true
optimizer.options.timer_verbose = false
end
optimizer.options.log_verbose = !value
end
function MOI.get(optimizer::Optimizer, ::MOI.Silent)
if optimizer.options.log_verbose
return false
elseif optimizer.options.timer_verbose
return false
else
return true
end
end
MOI.supports(::Optimizer, ::MOI.TimeLimitSec) = true
function MOI.set(optimizer::Optimizer, ::MOI.TimeLimitSec, value)
optimizer.options.time_limit = value
end
function MOI.get(optimizer::Optimizer, ::MOI.TimeLimitSec)
return optimizer.options.time_limit
end
MOI.supports(::Optimizer, ::MOI.NumberOfThreads) = false
function MOI.is_empty(optimizer::Optimizer)
return optimizer.cones === nothing &&
optimizer.zeros === nothing &&
optimizer.nonps === nothing &&
optimizer.sol.status == 0
end
function MOI.empty!(optimizer::Optimizer)
optimizer.cones = nothing
optimizer.zeros = nothing
optimizer.nonps = nothing
optimizer.sol = Result()
return
end
function _rows(
optimizer::Optimizer,
ci::MOI.ConstraintIndex{MOI.VectorAffineFunction{T},MOI.Zeros},
) where T
return MOI.Utilities.rows(optimizer.zeros, ci)
end
function _rows(
optimizer::Optimizer,
ci::MOI.ConstraintIndex{MOI.VectorAffineFunction{T},MOI.Nonpositives},
) where T
return MOI.Utilities.rows(optimizer.nonps, ci)
end
# MOI.supports
function MOI.supports(
::Optimizer,
::Union{
MOI.ObjectiveSense,
MOI.ObjectiveFunction{MOI.ScalarAffineFunction{T}},
},
) where T
return true
end
function MOI.supports_constraint(
::Optimizer,
::Type{MOI.VectorAffineFunction{T}},
::Type{<:Union{MOI.Zeros, MOI.Nonpositives}}
) where T
return true
end
function MOI.supports_add_constrained_variables(
::Optimizer,
::Type{<:Union{
MOI.SecondOrderCone,
MOI.PositiveSemidefiniteConeTriangle,
}
}
)
return true
end
function cone_data(src::OptimizerCache, ::Type{T}) where T
cache = MOI.Utilities.constraints(
src.constraints,
MOI.VectorOfVariables,
T,
)
indices = MOI.get(
cache,
MOI.ListOfConstraintIndices{MOI.VectorOfVariables, T}(),
)
funcs = MOI.get.(cache, MOI.ConstraintFunction(), indices)
return Vector{Int64}[Int64[v.value for v in f.variables] for f in funcs]
end
# Vectorized length for matrix dimension n
sympackedlen(n) = div(n * (n + 1), 2)
# Matrix dimension for vectorized length n
sympackeddim(n) = div(isqrt(1 + 8n) - 1, 2)
matindices(n::Integer) = LinearIndices(trues(n,n))[findall(LinearAlgebra.tril(trues(n,n)))]
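# Worked example (comments only): for a 3x3 symmetric matrix,
#   sympackedlen(3) == 6 # entries in the packed triangle
#   sympackeddim(6) == 3 # inverse mapping, side of the square matrix
#   matindices(3) == [1, 2, 3, 5, 6, 9] # column-major linear indices of the lower triangle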
function _optimize!(dest::Optimizer, src::OptimizerCache)
MOI.empty!(dest)
TimerOutputs.reset_timer!()
@timeit "pre-processing" begin
#=
Affine
=#
Ab = MOI.Utilities.constraints(
src.constraints,
MOI.VectorAffineFunction{Float64},
MOI.Zeros,
)
A = Ab.coefficients
b = -Ab.constants
Gh = MOI.Utilities.constraints(
src.constraints,
MOI.VectorAffineFunction{Float64},
MOI.Nonpositives,
)
G = Gh.coefficients
h = -Gh.constants
@assert A.n == G.n
#=
Objective
=#
max_sense = MOI.get(src, MOI.ObjectiveSense()) == MOI.MAX_SENSE
obj =
MOI.get(src, MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}())
objective_constant = MOI.constant(obj)
c = zeros(A.n)
obj_sign = ifelse(max_sense, -1.0, 1.0)
for term in obj.terms
c[term.variable.value] += obj_sign * term.coefficient
end
dest.zeros = deepcopy(Ab.sets) # TODO copy(Ab.sets)
dest.nonps = deepcopy(Gh.sets) # TODO copy(Gh.sets)
# TODO: simply this after
# https://github.com/jump-dev/MathOptInterface.jl/issues/1711
_A = if A.m == 0
SparseArrays.sparse(Int64[], Int64[], Float64[], 0, A.n)
else
convert(SparseArrays.SparseMatrixCSC{Float64,Int64}, A)
end
_G = if G.m == 0
SparseArrays.sparse(Int64[], Int64[], Float64[], 0, G.n)
else
convert(SparseArrays.SparseMatrixCSC{Float64,Int64}, G)
end
aff = AffineSets(A.n, A.m, G.m, 0,
_A, _G, b, h, c)
#=
Cones
=#
con = ConicSets(
SDPSet[],
SOCSet[]
)
soc_s = cone_data(src, MOI.SecondOrderCone)
for soc in soc_s
push!(con.socone, SOCSet(soc, length(soc)))
end
psc_s = cone_data(src, MOI.PositiveSemidefiniteConeTriangle)
for psc in psc_s
tri_len = length(psc)
sq_side = sympackeddim(tri_len)
mat_inds = matindices(sq_side)
push!(con.sdpcone, SDPSet(psc, mat_inds, tri_len, sq_side))
end
dest.cones = ConeData(
psc_s,
soc_s,
)
#
end # timeit
#=
Solve modified problem
=#
options = dest.options
# warm = WarmStart()
if options.disable_julia_logger
# disable logger
global_log = Logging.current_logger()
Logging.global_logger(Logging.NullLogger())
end
sol = @timeit "Main" chambolle_pock(aff, con, options)
if options.disable_julia_logger
# re-enable logger
Logging.global_logger(global_log)
end
if options.timer_verbose
TimerOutputs.print_timer(TimerOutputs.DEFAULT_TIMER)
print("\n")
TimerOutputs.print_timer(TimerOutputs.flatten(TimerOutputs.DEFAULT_TIMER))
print("\n")
end
if options.timer_file
f = open("time.log","w")
TimerOutputs.print_timer(f,TimerOutputs.DEFAULT_TIMER)
print(f,"\n")
TimerOutputs.print_timer(f,TimerOutputs.flatten(TimerOutputs.DEFAULT_TIMER))
print(f,"\n")
close(f)
end
#=
Fix solution
=#
sol.objval = obj_sign * sol.objval + objective_constant
sol.dual_objval = obj_sign * sol.dual_objval + objective_constant
dest.sol = sol
return
end
function MOI.optimize!(dest::Optimizer, src::OptimizerCache)
_optimize!(dest, src)
return MOI.Utilities.identity_index_map(src), false
end
function MOI.optimize!(dest::Optimizer, src::MOI.ModelLike)
cache = OptimizerCache{Float64}()
index_map = MOI.copy_to(cache, src)
_optimize!(dest, cache)
return index_map, false
end
MOI.supports_incremental_interface(::Optimizer) = false
#=
Attributes
=#
MOI.get(optimizer::Optimizer, ::MOI.SolveTimeSec) = optimizer.sol.time
"""
PDHGIterations()
The number of PDHG iterations completed during the solve.
"""
struct PDHGIterations <: MOI.AbstractModelAttribute end
MOI.is_set_by_optimize(::PDHGIterations) = true
function MOI.get(optimizer::Optimizer, ::PDHGIterations)
return Int64(optimizer.sol.iter)
end
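# Usage sketch (assumes `model` is a JuMP/MOI model already optimized with
# ProxSDP as the solver):
#   MOI.get(model, PDHGIterations())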
function MOI.get(optimizer::Optimizer, ::MOI.RawStatusString)
return optimizer.sol.status_string
end
function MOI.get(optimizer::Optimizer, ::MOI.TerminationStatus)
s = optimizer.sol.status
@assert 0 <= s <= 6
if s == 0
return MOI.OPTIMIZE_NOT_CALLED
elseif s == 1
return MOI.OPTIMAL
elseif s == 2
return MOI.TIME_LIMIT
elseif s == 3
return MOI.ITERATION_LIMIT
elseif s == 4
return MOI.INFEASIBLE_OR_UNBOUNDED
elseif s == 5
return MOI.DUAL_INFEASIBLE
elseif s == 6
return MOI.INFEASIBLE
end
end
function MOI.get(optimizer::Optimizer, attr::MOI.ObjectiveValue)
MOI.check_result_index_bounds(optimizer, attr)
value = optimizer.sol.objval
return value
end
function MOI.get(optimizer::Optimizer, attr::MOI.DualObjectiveValue)
MOI.check_result_index_bounds(optimizer, attr)
value = optimizer.sol.dual_objval
return value
end
function MOI.get(optimizer::Optimizer, attr::MOI.PrimalStatus)
s = optimizer.sol.status
if attr.result_index > 1 || s == 0
return MOI.NO_SOLUTION
end
if s == 5 && optimizer.sol.certificate_found
return MOI.INFEASIBILITY_CERTIFICATE
end
if optimizer.sol.primal_feasible_user_tol # s == 1
return MOI.FEASIBLE_POINT
else
return MOI.INFEASIBLE_POINT
end
end
function MOI.get(optimizer::Optimizer, attr::MOI.DualStatus)
s = optimizer.sol.status
if attr.result_index > 1 || s == 0
return MOI.NO_SOLUTION
end
if s == 6 && optimizer.sol.certificate_found
return MOI.INFEASIBILITY_CERTIFICATE
end
if optimizer.sol.dual_feasible_user_tol
return MOI.FEASIBLE_POINT
else
return MOI.INFEASIBLE_POINT
end
end
function MOI.get(optimizer::Optimizer, ::MOI.ResultCount)
return optimizer.sol.result_count
end
#=
Results
=#
function MOI.get(
optimizer::Optimizer,
attr::MOI.VariablePrimal,
vi::MOI.VariableIndex,
)
MOI.check_result_index_bounds(optimizer, attr)
return optimizer.sol.primal[vi.value]
end
function MOI.get(
optimizer::Optimizer,
attr::MOI.ConstraintPrimal,
ci::MOI.ConstraintIndex{MOI.VectorAffineFunction{Float64}, MOI.Zeros},
)
MOI.check_result_index_bounds(optimizer, attr)
return optimizer.sol.slack_eq[_rows(optimizer, ci)]
end
function MOI.get(
optimizer::Optimizer,
attr::MOI.ConstraintPrimal,
ci::MOI.ConstraintIndex{MOI.VectorAffineFunction{Float64}, MOI.Nonpositives},
)
MOI.check_result_index_bounds(optimizer, attr)
return optimizer.sol.slack_in[_rows(optimizer, ci)]
end
function MOI.get(
optimizer::Optimizer,
attr::MOI.ConstraintPrimal,
ci::MOI.ConstraintIndex{MOI.VectorOfVariables, MOI.PositiveSemidefiniteConeTriangle}
)
MOI.check_result_index_bounds(optimizer, attr)
return optimizer.sol.primal[optimizer.cones.psc[ci.value]]
end
function MOI.get(
optimizer::Optimizer,
attr::MOI.ConstraintPrimal,
ci::MOI.ConstraintIndex{MOI.VectorOfVariables, MOI.SecondOrderCone}
)
MOI.check_result_index_bounds(optimizer, attr)
return optimizer.sol.primal[optimizer.cones.soc[ci.value]]
end
function MOI.get(
optimizer::Optimizer,
attr::MOI.ConstraintDual,
ci::MOI.ConstraintIndex{MOI.VectorAffineFunction{Float64},MOI.Zeros},
)
MOI.check_result_index_bounds(optimizer, attr)
return -optimizer.sol.dual_eq[_rows(optimizer, ci)]
end
function MOI.get(
optimizer::Optimizer,
attr::MOI.ConstraintDual,
ci::MOI.ConstraintIndex{MOI.VectorAffineFunction{Float64},MOI.Nonpositives},
)
MOI.check_result_index_bounds(optimizer, attr)
return -optimizer.sol.dual_in[_rows(optimizer, ci)]
end
function MOI.get(
optimizer::Optimizer,
attr::MOI.ConstraintDual,
ci::MOI.ConstraintIndex{MOI.VectorOfVariables, MOI.PositiveSemidefiniteConeTriangle}
)
MOI.check_result_index_bounds(optimizer, attr)
return optimizer.sol.dual_cone[optimizer.cones.psc[ci.value]]
end
function MOI.get(
optimizer::Optimizer,
attr::MOI.ConstraintDual,
ci::MOI.ConstraintIndex{MOI.VectorOfVariables, MOI.SecondOrderCone}
)
MOI.check_result_index_bounds(optimizer, attr)
return optimizer.sol.dual_cone[optimizer.cones.soc[ci.value]]
end
| ProxSDP | https://github.com/mariohsouto/ProxSDP.jl.git |
|
[
"MIT"
] | 1.8.3 | 0ca8d20076eac7fef4ea8394082bd3ffdd3c4733 | code | 690 | module ProxSDP
using PrecompileTools: @setup_workload, @compile_workload
import Arpack
import KrylovKit
import MathOptInterface
import TimerOutputs
import TimerOutputs: @timeit
import Printf
import SparseArrays
import LinearAlgebra
import LinearAlgebra: BlasInt
import Random
import Logging
include("structs.jl")
include("options.jl")
include("util.jl")
include("printing.jl")
include("scaling.jl")
include("equilibration.jl")
include("pdhg.jl")
include("residuals.jl")
include("eigsolver.jl")
include("prox_operators.jl")
include("MOI_wrapper.jl")
# PrecompileTools
@setup_workload begin
@compile_workload begin
include("../test/run_mini_benchmark.jl")
end
end
end | ProxSDP | https://github.com/mariohsouto/ProxSDP.jl.git |
|
[
"MIT"
] | 1.8.3 | 0ca8d20076eac7fef4ea8394082bd3ffdd3c4733 | code | 32020 | #=
Docs from DSAUPD from https://www.caam.rice.edu/software/ARPACK/UG/node136.html
c-----------------------------------------------------------------------
c\BeginDoc
c
c\Name: dsaupd
c
c\Description:
c
c Reverse communication interface for the Implicitly Restarted Arnoldi
c Iteration. For symmetric problems this reduces to a variant of the Lanczos
c method. This method has been designed to compute approximations to a
c few eigenpairs of a linear operator OP that is real and symmetric
c with respect to a real positive semi-definite symmetric matrix B,
c i.e.
c
c B*OP = (OP')*B.
c
c Another way to express this condition is
c
c < x,OPy > = < OPx,y > where < z,w > = z'Bw .
c
c In the standard eigenproblem B is the identity matrix.
c ( A' denotes transpose of A)
c
c The computed approximate eigenvalues are called Ritz values and
c the corresponding approximate eigenvectors are called Ritz vectors.
c
c dsaupd is usually called iteratively to solve one of the
c following problems:
c
c Mode 1: A*x = lambda*x, A symmetric
c ===> OP = A and B = I.
c
c Mode 2: A*x = lambda*M*x, A symmetric, M symmetric positive definite
c ===> OP = inv[M]*A and B = M.
c ===> (If M can be factored see remark 3 below)
c
c Mode 3: K*x = lambda*M*x, K symmetric, M symmetric positive semi-definite
c ===> OP = (inv[K - sigma*M])*M and B = M.
c ===> Shift-and-Invert mode
c
c Mode 4: K*x = lambda*KG*x, K symmetric positive semi-definite,
c KG symmetric indefinite
c ===> OP = (inv[K - sigma*KG])*K and B = K.
c ===> Buckling mode
c
c Mode 5: A*x = lambda*M*x, A symmetric, M symmetric positive semi-definite
c ===> OP = inv[A - sigma*M]*[A + sigma*M] and B = M.
c ===> Cayley transformed mode
c
c NOTE: The action of w <- inv[A - sigma*M]*v or w <- inv[M]*v
c should be accomplished either by a direct method
c using a sparse matrix factorization and solving
c
c [A - sigma*M]*w = v or M*w = v,
c
c or through an iterative method for solving these
c systems. If an iterative method is used, the
c convergence test must be more stringent than
c the accuracy requirements for the eigenvalue
c approximations.
c
c\Usage:
c call dsaupd
c ( IDO, BMAT, N, WHICH, NEV, TOL, RESID, NCV, V, LDV, IPARAM,
c IPNTR, WORKD, WORKL, LWORKL, INFO )
c
c\Arguments
c IDO Integer. (INPUT/OUTPUT)
c Reverse communication flag. IDO must be zero on the first
c call to dsaupd. IDO will be set internally to
c indicate the type of operation to be performed. Control is
c then given back to the calling routine which has the
c responsibility to carry out the requested operation and call
c dsaupd with the result. The operand is given in
c WORKD(IPNTR(1)), the result must be put in WORKD(IPNTR(2)).
c (If Mode = 2 see remark 5 below)
c -------------------------------------------------------------
c IDO = 0: first call to the reverse communication interface
c IDO = -1: compute Y = OP * X where
c IPNTR(1) is the pointer into WORKD for X,
c IPNTR(2) is the pointer into WORKD for Y.
c This is for the initialization phase to force the
c starting vector into the range of OP.
c IDO = 1: compute Y = OP * Z and Z = B * X where
c IPNTR(1) is the pointer into WORKD for X,
c IPNTR(2) is the pointer into WORKD for Y,
c IPNTR(3) is the pointer into WORKD for Z.
c IDO = 2: compute Y = B * X where
c IPNTR(1) is the pointer into WORKD for X,
c IPNTR(2) is the pointer into WORKD for Y.
c IDO = 3: compute the IPARAM(8) shifts where
c IPNTR(11) is the pointer into WORKL for
c placing the shifts. See remark 6 below.
c IDO = 99: done
c -------------------------------------------------------------
c After the initialization phase, when the routine is used in
c either the "shift-and-invert" mode or the Cayley transform
c mode, the vector B * X is already available and does not
c need to be recomputed in forming OP*X.
c
c BMAT Character*1. (INPUT)
c BMAT specifies the type of the matrix B that defines the
c semi-inner product for the operator OP.
c B = 'I' -> standard eigenvalue problem A*x = lambda*x
c B = 'G' -> generalized eigenvalue problem A*x = lambda*B*x
c
c N Integer. (INPUT)
c Dimension of the eigenproblem.
c
c WHICH Character*2. (INPUT)
c Specify which of the Ritz values of OP to compute.
c
c 'LA' - compute the NEV largest (algebraic) eigenvalues.
c 'SA' - compute the NEV smallest (algebraic) eigenvalues.
c 'LM' - compute the NEV largest (in magnitude) eigenvalues.
c 'SM' - compute the NEV smallest (in magnitude) eigenvalues.
c 'BE' - compute NEV eigenvalues, half from each end of the
c spectrum. When NEV is odd, compute one more from the
c high end than from the low end.
c (see remark 1 below)
c
c NEV Integer. (INPUT)
c Number of eigenvalues of OP to be computed. 0 < NEV < N.
c
c TOL Double precision scalar. (INPUT)
c Stopping criterion: the relative accuracy of the Ritz value
c is considered acceptable if BOUNDS(I) .LE. TOL*ABS(RITZ(I)).
c If TOL .LE. 0. is passed a default is set:
c DEFAULT = DLAMCH('EPS') (machine precision as computed
c by the LAPACK auxiliary subroutine DLAMCH).
c
c RESID Double precision array of length N. (INPUT/OUTPUT)
c On INPUT:
c If INFO .EQ. 0, a random initial residual vector is used.
c If INFO .NE. 0, RESID contains the initial residual vector,
c possibly from a previous run.
c On OUTPUT:
c RESID contains the final residual vector.
c
c NCV Integer. (INPUT)
c Number of columns of the matrix V (less than or equal to N).
c This will indicate how many Lanczos vectors are generated
c at each iteration. After the startup phase in which NEV
c Lanczos vectors are generated, the algorithm generates
c NCV-NEV Lanczos vectors at each subsequent update iteration.
c Most of the cost in generating each Lanczos vector is in the
c matrix-vector product OP*x. (See remark 4 below).
c
c V Double precision N by NCV array. (OUTPUT)
c The NCV columns of V contain the Lanczos basis vectors.
c
c LDV Integer. (INPUT)
c Leading dimension of V exactly as declared in the calling
c program.
c
c IPARAM Integer array of length 11. (INPUT/OUTPUT)
c IPARAM(1) = ISHIFT: method for selecting the implicit shifts.
c The shifts selected at each iteration are used to restart
c the Arnoldi iteration in an implicit fashion.
c -------------------------------------------------------------
c ISHIFT = 0: the shifts are provided by the user via
c reverse communication. The NCV eigenvalues of
c the current tridiagonal matrix T are returned in
c the part of WORKL array corresponding to RITZ.
c See remark 6 below.
c ISHIFT = 1: exact shifts with respect to the reduced
c tridiagonal matrix T. This is equivalent to
c restarting the iteration with a starting vector
c that is a linear combination of Ritz vectors
c associated with the "wanted" Ritz values.
c -------------------------------------------------------------
c
c IPARAM(2) = LEVEC
c No longer referenced. See remark 2 below.
c
c IPARAM(3) = MXITER
c On INPUT: maximum number of Arnoldi update iterations allowed.
c On OUTPUT: actual number of Arnoldi update iterations taken.
c
c IPARAM(4) = NB: blocksize to be used in the recurrence.
c The code currently works only for NB = 1.
c
c IPARAM(5) = NCONV: number of "converged" Ritz values.
c This represents the number of Ritz values that satisfy
c the convergence criterion.
c
c IPARAM(6) = IUPD
c No longer referenced. Implicit restarting is ALWAYS used.
c
c IPARAM(7) = MODE
c On INPUT determines what type of eigenproblem is being solved.
c Must be 1,2,3,4,5; See under \Description of dsaupd for the
c five modes available.
c
c IPARAM(8) = NP
c When ido = 3 and the user provides shifts through reverse
c communication (IPARAM(1)=0), dsaupd returns NP, the number
c of shifts the user is to provide. 0 < NP <=NCV-NEV. See Remark
c 6 below.
c
c IPARAM(9) = NUMOP, IPARAM(10) = NUMOPB, IPARAM(11) = NUMREO,
c OUTPUT: NUMOP = total number of OP*x operations,
c NUMOPB = total number of B*x operations if BMAT='G',
c NUMREO = total number of steps of re-orthogonalization.
c
c IPNTR Integer array of length 11. (OUTPUT)
c Pointer to mark the starting locations in the WORKD and WORKL
c arrays for matrices/vectors used by the Lanczos iteration.
c -------------------------------------------------------------
c IPNTR(1): pointer to the current operand vector X in WORKD.
c IPNTR(2): pointer to the current result vector Y in WORKD.
c IPNTR(3): pointer to the vector B * X in WORKD when used in
c the shift-and-invert mode.
c IPNTR(4): pointer to the next available location in WORKL
c that is untouched by the program.
c IPNTR(5): pointer to the NCV by 2 tridiagonal matrix T in WORKL.
c IPNTR(6): pointer to the NCV RITZ values array in WORKL.
c IPNTR(7): pointer to the Ritz estimates in array WORKL associated
c with the Ritz values located in RITZ in WORKL.
c Note: IPNTR(8:10) is only referenced by dseupd. See Remark 2.
c IPNTR(8): pointer to the NCV RITZ values of the original system.
c IPNTR(9): pointer to the NCV corresponding error bounds.
c IPNTR(10): pointer to the NCV by NCV matrix of eigenvectors
c of the tridiagonal matrix T. Only referenced by
c dseupd if RVEC = .TRUE. See Remarks.
c Note: IPNTR(8:10) is only referenced by dseupd. See Remark 2.
c IPNTR(11): pointer to the NP shifts in WORKL. See Remark 6 below.
c -------------------------------------------------------------
c
c WORKD Double precision work array of length 3*N. (REVERSE COMMUNICATION)
c Distributed array to be used in the basic Arnoldi iteration
c for reverse communication. The user should not use WORKD
c as temporary workspace during the iteration. Upon termination
c WORKD(1:N) contains B*RESID(1:N). If the Ritz vectors are desired
c subroutine dseupd uses this output.
c See Data Distribution Note below.
c
c WORKL Double precision work array of length LWORKL. (OUTPUT/WORKSPACE)
c Private (replicated) array on each PE or array allocated on
c the front end. See Data Distribution Note below.
c
c LWORKL Integer. (INPUT)
c LWORKL must be at least NCV**2 + 8*NCV .
c
c INFO Integer. (INPUT/OUTPUT)
c If INFO .EQ. 0, a randomly initial residual vector is used.
c If INFO .NE. 0, RESID contains the initial residual vector,
c possibly from a previous run.
c Error flag on output.
c = 0: Normal exit.
c = 1: Maximum number of iterations taken.
c All possible eigenvalues of OP has been found. IPARAM(5)
c returns the number of wanted converged Ritz values.
c = 2: No longer an informational error. Deprecated starting
c with release 2 of ARPACK.
c = 3: No shifts could be applied during a cycle of the
c Implicitly restarted Arnoldi iteration. One possibility
c is to increase the size of NCV relative to NEV.
c See remark 4 below.
c = -1: N must be positive.
c = -2: NEV must be positive.
c = -3: NCV must be greater than NEV and less than or equal to N.
c = -4: The maximum number of Arnoldi update iterations allowed
c must be greater than zero.
c = -5: WHICH must be one of 'LM', 'SM', 'LA', 'SA' or 'BE'.
c = -6: BMAT must be one of 'I' or 'G'.
c = -7: Length of private work array WORKL is not sufficient.
c = -8: Error return from trid. eigenvalue calculation;
c Informational error from LAPACK routine dsteqr.
c = -9: Starting vector is zero.
c = -10: IPARAM(7) must be 1,2,3,4,5.
c = -11: IPARAM(7) = 1 and BMAT = 'G' are incompatable.
c = -12: IPARAM(1) must be equal to 0 or 1.
c = -13: NEV and WHICH = 'BE' are incompatable.
c = -9999: Could not build an Arnoldi factorization.
c IPARAM(5) returns the size of the current Arnoldi
c factorization. The user is advised to check that
c enough workspace and array storage has been allocated.
c
c
c\Remarks
c 1. The converged Ritz values are always returned in ascending
c algebraic order. The computed Ritz values are approximate
c eigenvalues of OP. The selection of WHICH should be made
c with this in mind when Mode = 3,4,5. After convergence,
c approximate eigenvalues of the original problem may be obtained
c with the ARPACK subroutine dseupd.
c
c 2. If the Ritz vectors corresponding to the converged Ritz values
c are needed, the user must call dseupd immediately following completion
c of dsaupd. This is new starting with version 2.1 of ARPACK.
c
c 3. If M can be factored into a Cholesky factorization M = LL'
c then Mode = 2 should not be selected. Instead one should use
c Mode = 1 with OP = inv(L)*A*inv(L'). Appropriate triangular
c linear systems should be solved with L and L' rather
c than computing inverses. After convergence, an approximate
c eigenvector z of the original problem is recovered by solving
c L'z = x where x is a Ritz vector of OP.
c
c 4. At present there is no a-priori analysis to guide the selection
c of NCV relative to NEV. The only formal requirement is that NCV > NEV.
c However, it is recommended that NCV .ge. 2*NEV. If many problems of
c the same type are to be solved, one should experiment with increasing
c NCV while keeping NEV fixed for a given test problem. This will
c usually decrease the required number of OP*x operations but it
c also increases the work and storage required to maintain the orthogonal
c basis vectors. The optimal "cross-over" with respect to CPU time
c is problem dependent and must be determined empirically.
c
c 5. If IPARAM(7) = 2 then in the Reverse communication interface the user
c must do the following. When IDO = 1, Y = OP * X is to be computed.
c When IPARAM(7) = 2 OP = inv(B)*A. After computing A*X the user
c must overwrite X with A*X. Y is then the solution to the linear set
c of equations B*Y = A*X.
c
c 6. When IPARAM(1) = 0, and IDO = 3, the user needs to provide the
c NP = IPARAM(8) shifts in locations:
c 1 WORKL(IPNTR(11))
c 2 WORKL(IPNTR(11)+1)
c .
c .
c .
c NP WORKL(IPNTR(11)+NP-1).
c
c The eigenvalues of the current tridiagonal matrix are located in
c WORKL(IPNTR(6)) through WORKL(IPNTR(6)+NCV-1). They are in the
c order defined by WHICH. The associated Ritz estimates are located in
c WORKL(IPNTR(8)), WORKL(IPNTR(8)+1), ... , WORKL(IPNTR(8)+NCV-1).
c
c-----------------------------------------------------------------------------
=#
mutable struct EigSolverAlloc{T}
converged::Bool
n::Int
nev::Int
ncv::Int
maxiter::Int
tol::Float64
resid::Vector{T}
converged_eigs::Int
rng::Random.MersenneTwister
# KrylovKit data
vals::Vector{Float64}
vecs::Vector{Vector{Float64}}
# Arpack data
bmat::String
which::String
mode::Int
Amat::LinearAlgebra.Symmetric{T,Matrix{T}}
lworkl::Int
TOL::Base.RefValue{T}
v::Matrix{T}
workd::Vector{T}
workl::Vector{T}
info::Vector{BlasInt}
iparam::Vector{BlasInt}
ipntr::Vector{BlasInt}
ido::Vector{BlasInt}
zernm1::UnitRange{Int}
howmny::String
select::Vector{BlasInt}
info_e::Vector{BlasInt}
d::Vector{T}
sigmar::Vector{T}
arpackerror::Bool
x::Vector{T}
y::Vector{T}
function EigSolverAlloc{T}() where T
new{T}()
end
end
hasconverged(arc::EigSolverAlloc) = arc.converged
function EigSolverAlloc(T::DataType, n::Int64, opt::Options)::EigSolverAlloc
arc = EigSolverAlloc{T}()
@timeit "init_arc" _init_eigsolver!(
arc, LinearAlgebra.Symmetric(Matrix{T}(LinearAlgebra.I, n, n), :U), 1, n, opt)
return arc
end
function eigsolver_init_resid!(arc, opt, n)
arc.resid = zeros(n)
arc.rng = Random.MersenneTwister(opt.eigsolver_resid_seed)
return nothing
end
function eigsolver_update_resid!(arc, opt, init)
if init == 2
Random.seed!(arc.rng, opt.eigsolver_resid_seed)
Random.rand!(arc.rng, arc.resid)
elseif init == 3
Random.seed!(arc.rng, opt.eigsolver_resid_seed)
Random.randn!(arc.rng, arc.resid)
LinearAlgebra.normalize!(arc.resid)
elseif init == 1
fill!(arc.resid, 1.0)
else
fill!(arc.resid, 0.0)
end
return nothing
end
function _init_eigsolver!(arc::EigSolverAlloc{T}, A::LinearAlgebra.Symmetric{T,Matrix{T}}, nev::Int64, n::Int64, opt::Options) where T
if opt.eigsolver == 1
arpack_init!(arc, A, nev, n, opt)
else
krylovkit_init!(arc, A, nev, n, opt)
end
return nothing
end
function arpack_getvalues(arc::EigSolverAlloc)
return arc.d
end
function arpack_getvectors(arc::EigSolverAlloc)
return arc.v
end
function arpack_init!(arc::EigSolverAlloc{T}, A::LinearAlgebra.Symmetric{T,Matrix{T}}, nev::Int64, n::Int64, opt::Options) where T
# IDO - Integer. (INPUT/OUTPUT) (in julia its is a integer vector)
# Reverse communication flag.
# IDO must be zero on the first call to dsaupd.
arc.ido = zeros(BlasInt, 1)
# BMAT - Character*1. (INPUT)
# Standard eigenvalue problem
# 'I' -> standard eigenvalue problem A*x = lambda*x
arc.bmat = "I"
# N - Integer. (INPUT)
# Dimension of the eigenproblem
arc.n = n
# WHICH - Character*2. (INPUT)
# 'LA' - compute the NEV largest (algebraic) eigenvalues.
arc.which = "LA"
# NEV - Integer. (INPUT)
# Number of eigenvalues of OP to be computed. 0 < NEV < N.
arc.nev = nev
# TOL - Double precision scalar. (INPUT)
# Stopping criterion
arc.TOL = Ref(opt.arpack_tol)
# RESID - Double precision array of length N. (INPUT/OUTPUT)
# Resid contains the initial residual vector
# Double precision array of length N. (INPUT/OUTPUT)
# On INPUT:
# If INFO .EQ. 0, a random initial residual vector is used.
# If INFO .NE. 0, RESID contains the initial residual vector,
# possibly from a previous run.
# On OUTPUT:
# RESID contains the final residual vector.
# reset the curent mersenne twister to keep determinism
eigsolver_init_resid!(arc, opt, n)
eigsolver_update_resid!(arc, opt, opt.arpack_resid_init)
# NCV - Integer. (INPUT)
# How many Lanczos vectors are generated
# Remark
# 4. At present there is no a-priori analysis to guide the selection
# of NCV relative to NEV. The only formal requirement is that NCV > NEV.
# However, it is recommended that NCV .ge. 2*NEV. If many problems of
# the same type are to be solved, one should experiment with increasing
# NCV while keeping NEV fixed for a given test problem. This will
# usually decrease the required number of OP*x operations but it
# also increases the work and storage required to maintain the orthogonal
# basis vectors. The optimal "cross-over" with respect to CPU time
# is problem dependent and must be determined empirically.
arc.ncv = max(2 * arc.nev + 1, opt.eigsolver_min_lanczos)
# TODO: 10 might be a way too tight bound
# why not 20?
# V - Double precision N by NCV array. (OUTPUT)
# Double precision N by NCV array. (OUTPUT)
# The NCV columns of V contain the Lanczos basis vectors.
# Lanczos basis vectors (output)
arc.v = Matrix{T}(undef, arc.n, arc.ncv)
# LDV Integer. (INPUT)
# Leading dimension of V exactly as declared in the calling
# program.
# same as N here !!!
# arc.n = n
# IPARAM Integer array of length 11. (INPUT/OUTPUT)
arc.iparam = zeros(BlasInt, 11)
# IPARAM(1) = ISHIFT
# ISHIFT = 1: exact shifts
arc.iparam[1] = BlasInt(1)
# IPARAM(2) = LEVEC
# IGNORED
# IPARAM(3) = MXITER
# On INPUT: maximum number of Arnoldi update iterations allowed.
# On OUTPUT: actual number of Arnoldi update iterations taken.
arc.iparam[3] = BlasInt(opt.arpack_max_iter)
# IPARAM(4) = NB: blocksize to be used in the recurrence.
# The code currently works only for NB = 1.
arc.iparam[4] = BlasInt(1)
# IPARAM(5) = NCONV: number of "converged" Ritz values.
# IGNORED -> OUTPUT
# IPARAM(6) = IUPD
# IGNORED
# IPARAM(7) = MODE
# On INPUT determines what type of eigenproblem is being solved.
# Must be 1,2,3,4,5; See under \Description of dsaupd for the
# five modes available.
# Mode 1: A*x = lambda*x, A symmetric
# ===> OP = A and B = I.
arc.mode = 1
arc.iparam[7] = BlasInt(arc.mode)
# IPARAM(8) = NP
# IGNORED BY US (only need if user provides shift)
# IPARAM(9) = NUMOP, IPARAM(10) = NUMOPB, IPARAM(11) = NUMREO,
# IGNORED -> are all OUTPUTS
# IPNTR Integer array of length 11. (OUTPUT)
arc.ipntr = zeros(BlasInt, 11)
# WORKD Double precision work array of length 3*N. (REVERSE COMMUNICATION)
arc.workd = Vector{T}(undef, 3 * arc.n)
# LWORKL Integer. (INPUT)
# LWORKL must be at least NCV**2 + 8*NCV .
arc.lworkl = arc.ncv^2 + 8*arc.ncv
# WORKL Double precision work array of length LWORKL. (OUTPUT/WORKSPACE)
arc.workl = Vector{T}(undef, arc.lworkl)
# INFO Integer. (INPUT/OUTPUT)
# If INFO .EQ. 0, a randomly initial residual vector is used.
# If INFO .NE. 0, RESID contains the initial residual vector,
# possibly from a previous run.
if opt.arpack_resid_init == 0
arc.info = zeros(BlasInt, 1)
arc.info_e = zeros(BlasInt, 1)
else
arc.info = ones(BlasInt, 1)
arc.info_e = ones(BlasInt, 1)
end
# Build linear operator
arc.Amat = A
arc.x = Vector{T}(undef, arc.n)
arc.y = Vector{T}(undef, arc.n)
# Parameters for _EUPD! routine
arc.zernm1 = 0:(arc.n-1)
arc.howmny = "A"
arc.select = Vector{BlasInt}(undef, arc.ncv)
arc.d = Vector{T}(undef, arc.nev)
arc.sigmar = zeros(T, 1)
# Flags created for ProxSDP use
arc.converged = false
arc.arpackerror = false
return nothing
end
function arpack_update!(arc::EigSolverAlloc{T}, A::LinearAlgebra.Symmetric{T,Matrix{T}}, nev::Int64, opt::Options, up_ncv::Bool) where T
need_resize = up_ncv
if !need_resize
need_resize = nev != arc.nev
end
if !need_resize
need_resize = arc.ncv != max(2 * arc.nev + 1, opt.eigsolver_min_lanczos)
end
# IDO must be zero on the first call to dsaupd
fill!(arc.ido, BlasInt(0))
# Number of eigenvalues of OP to be computed. Needs to be 0 < NEV < N.
arc.nev = nev
# Stopping criterion
arc.TOL[] = opt.arpack_tol
# RESID - Double precision array of length N. (INPUT/OUTPUT)
if opt.arpack_reset_resid
eigsolver_update_resid!(arc, opt, opt.arpack_resid_init)
end
# How many Lanczos vectors are generated. Needs to be NCV > NEV. It is recommended that NCV >= 2*NEV.
# After the startup phase in which NEV Lanczos vectors are generated,
# the algorithm generates NCV-NEV Lanczos vectors at each subsequent update iteration.
if need_resize
if up_ncv
arc.ncv += max(arc.nev, 10)
else
arc.ncv = max(2 * arc.nev + 1, opt.eigsolver_min_lanczos)
end
end
# Lanczos basis vectors (output)
if need_resize
arc.v = Matrix{T}(undef, arc.n, arc.ncv)
end
# Iparam
fill!(arc.iparam, BlasInt(0))
# IPARAM(1) = ISHIFT
# ISHIFT = 1: exact shifts
arc.iparam[1] = BlasInt(1)
# IPARAM(3) = MXITER
# On INPUT: maximum number of Arnoldi update iterations allowed.
# On OUTPUT: actual number of Arnoldi update iterations taken.
arc.iparam[3] = BlasInt(arc.maxiter)
# IPARAM(4) = NB: blocksize to be used in the recurrence.
# The code currently works only for NB = 1.
arc.iparam[4] = BlasInt(1)
# Determines what type of eigenproblem is being solved.
arc.iparam[7] = BlasInt(arc.mode)
# IPNTR Integer array of length 11. (OUTPUT)
fill!(arc.ipntr, BlasInt(0))
# Workspace, LWORKL must be at least NCV**2 + 8*NCV
if need_resize
arc.lworkl = arc.ncv * (arc.ncv + 8)
end
# Allocate memory for workl
if need_resize
arc.workl = Vector{T}(undef, arc.lworkl)
end
# INFO Integer. (INPUT/OUTPUT)
# If INFO .EQ. 0, a randomly initial residual vector is used.
# If INFO .NE. 0, RESID contains the initial residual vector,
# possibly from a previous run.
if opt.arpack_resid_init == 0
arc.info[] = zero(BlasInt)
arc.info_e[] = zero(BlasInt)
else
arc.info[] = one(BlasInt)
arc.info_e[] = one(BlasInt)
end
# Build linear operator
arc.Amat = A
# Parameters for _EUPD! routine
if need_resize
arc.select = Vector{BlasInt}(undef, arc.ncv)
arc.d = Vector{T}(undef, arc.nev)
end
# Flags created for ProxSDP use
arc.converged = false
arc.arpackerror = false
return nothing
end
function _saupd!(arc::EigSolverAlloc{T})::Nothing where T
while true
Arpack.saupd(arc.ido, arc.bmat, arc.n, arc.which, arc.nev, arc.TOL, arc.resid, arc.ncv, arc.v, arc.n,
arc.iparam, arc.ipntr, arc.workd, arc.workl, arc.lworkl, arc.info)
if arc.ido[] == 1
@simd for i in 1:arc.n
@inbounds arc.x[i] = arc.workd[i-1+arc.ipntr[1]]
end
LinearAlgebra.mul!(arc.y, arc.Amat, arc.x)
@simd for i in 1:arc.n
@inbounds arc.workd[i-1+arc.ipntr[2]] = arc.y[i]
end
# using views
# x = view(arc.workd, arc.ipntr[1] .+ arc.zernm1)
# y = view(arc.workd, arc.ipntr[2] .+ arc.zernm1)
# LinearAlgebra.mul!(y, arc.Amat, x)
elseif arc.ido[] == 99
# In this case, don't call _EUPD! (according to https://help.scilab.org/docs/5.3.3/en_US/dseupd.html)
break
else
arc.converged = false
arc.arpackerror = true
break
end
end
# Check if _AUPD has converged properly
if !(0 <= arc.info[] <= 1)
arc.converged = false
arc.arpackerror = true
else
arc.converged = true
end
# check convergence for all ritz pairs
# https://github.com/JuliaLinearAlgebra/Arpack.jl/blob/a7cdb6d7f19f076f5fadd8b58a9c5a061c48322f/src/Arpack.jl#L188
# @assert arc.nev <= arc.iparam[5]
# if arc.nev > arc.iparam[5]
# arc.converged = false
# @show arc.nev , arc.iparam[5]
# end
return nothing
end
function _seupd!(arc::EigSolverAlloc{T})::Nothing where T
# Check if _AUPD has converged properly
if !arc.converged
arc.converged = false
return nothing
end
Arpack.seupd(true, arc.howmny, arc.select, arc.d, arc.v, arc.n, arc.sigmar,
arc.bmat, arc.n, arc.which, arc.nev, arc.TOL, arc.resid, arc.ncv, arc.v, arc.n,
arc.iparam, arc.ipntr, arc.workd, arc.workl, arc.lworkl, arc.info_e)
# Check if seupd has converged properly
if arc.info_e[] != 0
arc.converged = false
arc.arpackerror = true
return nothing
end
# Check number of converged eigenvalues (maybe some can still be used?)
nconv = arc.iparam[5]
if nconv < arc.nev
arc.converged = false
arc.arpackerror = false
return nothing
end
arc.converged = true
arc.arpackerror = false
return nothing
end
function arpack_eig!(arc::EigSolverAlloc, A::LinearAlgebra.Symmetric{T,Matrix{T}}, nev::Integer, opt::Options)::Nothing where T
up_ncv = false
for i in 1:1
# Initialize parameters and do memory allocation
@timeit "update_arc" arpack_update!(arc, A, nev, opt, up_ncv)::Nothing
# Top level reverse communication interface to solve real double precision symmetric problems.
@timeit "saupd" _saupd!(arc)::Nothing
if arc.nev > arc.iparam[5]
up_ncv = true
else
break
end
end
arc.converged_eigs = arc.iparam[5]
# Post processing routine (eigenvalues and eigenvectors purification)
@timeit "seupd" _seupd!(arc)::Nothing
return nothing
end
function krylovkit_getvalues(arc::EigSolverAlloc)
return arc.vals
end
function krylovkit_getvector(arc::EigSolverAlloc, i)
return arc.vecs[i]
end
function krylovkit_init!(arc::EigSolverAlloc{T}, A::LinearAlgebra.Symmetric{T,Matrix{T}}, nev::Int64, n::Int64, opt::Options) where T
arc.n = n
arc.nev = nev
eigsolver_init_resid!(arc, opt, n)
eigsolver_update_resid!(arc, opt, opt.krylovkit_resid_init)
arc.ncv = max(2 * arc.nev + 1, opt.eigsolver_min_lanczos)
return nothing
end
function krylovkit_update!(arc::EigSolverAlloc{T}, A::LinearAlgebra.Symmetric{T,Matrix{T}}, nev::Int64, opt::Options) where T
arc.nev = nev
if opt.krylovkit_reset_resid
eigsolver_update_resid!(arc, opt, opt.krylovkit_resid_init)
end
arc.ncv = max(2 * arc.nev + 1, opt.eigsolver_min_lanczos)
return nothing
end
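# Standalone sketch (illustrative, not called by the package) of the Lanczos
# call issued by krylovkit_eig! below, for a hypothetical dense Symmetric
# matrix S; the positional KrylovKit.Lanczos constructor mirrors the one used
# in this file:
#   S = LinearAlgebra.Symmetric(rand(100, 100))
#   vals, vecs, info = KrylovKit.eigsolve(S, rand(100), 2, :LR,
#       KrylovKit.Lanczos(KrylovKit.KrylovDefaults.orth, 25, 100, 1e-12, false, 0))
#   info.converged # number of converged Ritz pairs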
function krylovkit_eig!(arc::EigSolverAlloc, A::LinearAlgebra.Symmetric{T,Matrix{T}}, nev::Integer, opt::Options)::Nothing where T
arc.converged = true
@timeit "update_arc" krylovkit_update!(arc, A, nev, opt)::Nothing
@timeit "krylovkit" begin
vals, vecs, info = KrylovKit.eigsolve(
A, arc.resid, arc.nev, :LR,
KrylovKit.Lanczos(
KrylovKit.KrylovDefaults.orth,
arc.ncv,
opt.krylovkit_max_iter,
opt.krylovkit_tol,
opt.krylovkit_eager,
opt.krylovkit_verbose
)
)
arc.vals = vals
arc.vecs = vecs
arc.converged_eigs = info.converged
if info.converged == 0
arc.converged = false
end
end
return nothing
end | ProxSDP | https://github.com/mariohsouto/ProxSDP.jl.git |
|
[
"MIT"
] | 1.8.3 | 0ca8d20076eac7fef4ea8394082bd3ffdd3c4733 | code | 2220 | function equilibrate!(M, aff, opt)
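# Descriptive note (added for clarity): this routine computes diagonal
# scalings E = Diagonal(exp.(u)) and D = Diagonal(exp.(v)) for the stacked
# constraint matrix M = [A; G] by a projected-gradient scheme, pushing row
# norms of E*M*D toward alpha and column norms toward beta, with a small
# gamma-regularizer and box bounds [lb, ub] on the log-scalings.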
max_iters=opt.equilibration_iters
lb = opt.equilibration_lb
ub = opt.equilibration_ub
@timeit "eq init" begin
α = (aff.n / (aff.m + aff.p)) ^ .25
β = ((aff.m + aff.p) / aff.n ) ^ .25
α2, β2 = α^2, β^2
γ = .1
u, v = zeros(aff.m + aff.p), zeros(aff.n)
u_, v_ = zeros(aff.m + aff.p), zeros(aff.n)
u_grad, v_grad = zeros(aff.m + aff.p), zeros(aff.n)
row_norms, col_norms = zeros(aff.m + aff.p), zeros(aff.n)
E = LinearAlgebra.Diagonal(u)
D = LinearAlgebra.Diagonal(v)
M_ = copy(M)
rows_M_ = SparseArrays.rowvals(M_)
end
for iter in 1:max_iters
@timeit "update diags" begin
E.diag .= exp.(u)
D.diag .= exp.(v)
end
@timeit "M_" begin
LinearAlgebra.mul!(M_, M, D)
LinearAlgebra.mul!(M_, E, M_)
end
step_size = 2. / (γ * (iter + 1.))
# u gradient step
@timeit "norms" begin
fill!(row_norms, 0.0)
fill!(col_norms, 0.0)
for col in 1:aff.n
for line in SparseArrays.nzrange(M_, col)
row_norms[rows_M_[line]] += abs2(M_[rows_M_[line], col])
col_norms[col] += abs2(M_[rows_M_[line], col])
end
end
end
@timeit "grad" begin
@. u_grad = row_norms - α2 + γ * u
@. v_grad = col_norms - β2 + γ * v
end
@timeit "proj " begin
# u
@. u -= step_size * u_grad
box_project!(u, lb, ub)
# v
@. v -= step_size * v_grad
sum_v = sum(v)
v .= sum_v / aff.n
box_project!(v, 0., ub)
end
# Update averages.
@timeit "update" begin
@. u_ = 2 * u / (iter + 2) + iter * u_ / (iter + 2)
@. v_ = 2 * v / (iter + 2) + iter * v_ / (iter + 2)
end
end
@timeit "update diags" begin
E.diag .= exp.(u_)
D.diag .= exp.(v_)
end
return E, D
end
function box_project!(y, lb, ub)
y .= min.(ub, max.(y, lb))
nothing
end | ProxSDP | https://github.com/mariohsouto/ProxSDP.jl.git |
|
[
"MIT"
] | 1.8.3 | 0ca8d20076eac7fef4ea8394082bd3ffdd3c4733 | code | 4037 | Base.@kwdef mutable struct Options
# Printing options
log_verbose::Bool = false
log_freq::Int = 1000
timer_verbose::Bool = false
timer_file::Bool = false
disable_julia_logger::Bool = true
# time options
time_limit::Float64 = 3600_00. #100 hours
warn_on_limit::Bool = false
extended_log::Bool = false
extended_log2::Bool = false
log_repeat_header::Bool = false
# Default tolerances
tol_gap::Float64 = 1e-4
tol_feasibility::Float64 = 1e-4
tol_feasibility_dual::Float64 = 1e-4
tol_primal::Float64 = 1e-4
tol_dual::Float64 = 1e-4
tol_psd::Float64 = 1e-7
tol_soc::Float64 = 1e-7
check_dual_feas::Bool = false
check_dual_feas_freq::Int = 1000
max_obj::Float64 = 1e20
min_iter_max_obj::Int = 10
# infeasibility check
min_iter_time_infeas::Int = 1000
infeas_gap_tol::Float64 = 1e-4
infeas_limit_gap_tol::Float64 = 1e-1
infeas_stable_gap_tol::Float64 = 1e-4
infeas_feasibility_tol::Float64 = 1e-4
infeas_stable_feasibility_tol::Float64 = 1e-8
certificate_search::Bool = true
certificate_obj_tol::Float64 = 1e-1
certificate_fail_tol::Float64 = 1e-8
# Bounds on beta (dual_step / primal_step) [larger bounds may lead to numerical inaccuracy]
min_beta::Float64 = 1e-5
max_beta::Float64 = 1e+5
initial_beta::Float64 = 1.
# Adaptive primal-dual steps parameters [adapt_decay above .7 may lead to slower convergence]
initial_adapt_level::Float64 = .9
adapt_decay::Float64 = .8
adapt_window::Int64 = 50
# PDHG parameters
convergence_window::Int = 200
convergence_check::Int = 50
max_iter::Int = 0
min_iter::Int = 40
divergence_min_update::Int = 50
max_iter_lp::Int = 10_000_000
max_iter_conic::Int = 1_000_000
max_iter_local::Int = 0 #ignores user setting
advanced_initialization::Bool = true
# Linesearch parameters
line_search_flag::Bool = true
max_linsearch_steps::Int = 5000
delta::Float64 = .9999
initial_theta::Float64 = 1.
linsearch_decay::Float64 = .75
# Spectral decomposition parameters
full_eig_decomp::Bool = false
max_target_rank_krylov_eigs::Int = 16
min_size_krylov_eigs::Int = 100
warm_start_eig::Bool = true
    rank_increment::Int = 1 # 0 = multiply, 1 = add
    rank_increment_factor::Int = 1 # amount by which the target rank is multiplied or increased
# eigsolver selection
#=
    1: Arpack [dsaupd] (typically non-deterministic)
2: KrylovKit [eigsolve/Lanczos] (DEFAULT)
=#
eigsolver::Int = 2
eigsolver_min_lanczos::Int = 25
eigsolver_resid_seed::Int = 1234
# Arpack
# note that Arpack is Non-deterministic
# (https://github.com/mariohsouto/ProxSDP.jl/issues/69)
arpack_tol::Float64 = 1e-10
#=
    0: arpack random start [usually slightly faster - NON-DETERMINISTIC]
1: all ones [???]
2: julia random uniform (eigsolver_resid_seed) [medium for DETERMINISTIC]
3: julia normalized random normal (eigsolver_resid_seed) [best for DETERMINISTIC]
=#
arpack_resid_init::Int = 3
arpack_reset_resid::Bool = true # true for determinism
# larger is more stable to converge and more deterministic
arpack_max_iter::Int = 10_000
    # see the remarks in the dsaupd documentation
# KrylovKit
krylovkit_reset_resid::Bool = false
krylovkit_resid_init::Int = 3
krylovkit_tol::Float64 = 1e-12
krylovkit_max_iter::Int = 100
krylovkit_eager::Bool = false
krylovkit_verbose::Int = 0
# Reduce rank [warning: heuristics]
reduce_rank::Bool = false
rank_slack::Int = 3
full_eig_freq::Int = 10_000_000
full_eig_len::Int = 0
# equilibration parameters
equilibration::Bool = false
equilibration_iters::Int = 1000
equilibration_lb::Float64 = -10.0
equilibration_ub::Float64 = +10.0
equilibration_limit::Float64 = 0.9
equilibration_force::Bool = false
# spectral norm [using exact norm via svds may result in nondeterministic behavior]
approx_norm::Bool = true
end
| ProxSDP | https://github.com/mariohsouto/ProxSDP.jl.git |
|
[
"MIT"
] | 1.8.3 | 0ca8d20076eac7fef4ea8394082bd3ffdd3c4733 | code | 28099 | function chambolle_pock(
affine_sets::AffineSets,
conic_sets::ConicSets,
opt
)::Result
# Initialize parameters
p = Params()
p.theta = opt.initial_theta
p.adapt_level = opt.initial_adapt_level
p.window = opt.convergence_window
p.beta = opt.initial_beta
p.time0 = time()
p.norm_b = LinearAlgebra.norm(affine_sets.b, 2)
p.norm_h = LinearAlgebra.norm(affine_sets.h, 2)
p.norm_c = LinearAlgebra.norm(affine_sets.c, 2)
p.rank_update, p.stop_reason, p.update_cont = 0, 0, 0
p.stop_reason_string = "Not optimized"
p.target_rank = 2 * ones(length(conic_sets.sdpcone))
p.current_rank = 2 * ones(length(conic_sets.sdpcone))
p.min_eig = zeros(length(conic_sets.sdpcone))
p.dual_feasibility = -1.0
p.dual_feasibility_check = false
p.certificate_search = false
p.certificate_search_min_iter = 0
p.certificate_found = false
sol = Array{Result}(undef, 0)
arc_list = [
EigSolverAlloc(Float64, sdp.sq_side, opt)
for (idx, sdp) in enumerate(conic_sets.sdpcone)]
ada_count = 0
if opt.max_iter <= 0
if length(conic_sets.socone) > 0 || length(conic_sets.sdpcone) > 0
opt.max_iter_local = opt.max_iter_conic
else
opt.max_iter_local = opt.max_iter_lp
end
else
opt.max_iter_local = opt.max_iter
end
# Print header
if opt.log_verbose
print_header_1()
print_parameters(opt, conic_sets)
print_constraints(affine_sets)
if length(conic_sets.socone) + length(conic_sets.sdpcone) > 0
print_prob_data(conic_sets)
end
print_header_2(opt)
end
@timeit "Init" begin
# Scale objective function
@timeit "normscale alloc" begin
c_orig, var_ordering = preprocess!(affine_sets, conic_sets)
A_orig, b_orig = copy(affine_sets.A), copy(affine_sets.b)
G_orig, h_orig = copy(affine_sets.G), copy(affine_sets.h)
rhs_orig = vcat(b_orig, h_orig)
end
# Diagonal preconditioning
@timeit "equilibrate" begin
if opt.equilibration
M = vcat(affine_sets.A, affine_sets.G)
UB = maximum(M)
LB = minimum(M)
if LB/UB <= opt.equilibration_limit
opt.equilibration = false
end
end
if opt.equilibration_force
opt.equilibration = true
end
if opt.equilibration
@timeit "equilib inner" E, D = equilibrate!(M, affine_sets, opt)
@timeit "equilib scaling" begin
M = E * M * D
affine_sets.A = M[1:affine_sets.p, :]
affine_sets.G = M[affine_sets.p + 1:end, :]
rhs = E * rhs_orig
affine_sets.b = rhs[1:affine_sets.p]
affine_sets.h = rhs[affine_sets.p + 1:end]
affine_sets.c = D * affine_sets.c
end
else
E = LinearAlgebra.Diagonal(zeros(affine_sets.m + affine_sets.p))
D = LinearAlgebra.Diagonal(zeros(affine_sets.n))
end
end
# Scale the off-diagonal entries associated with p.s.d. matrices by √2
@timeit "normscale" norm_scaling(affine_sets, conic_sets)
# Initialization
pair = PrimalDual(affine_sets)
a = AuxiliaryData(affine_sets, conic_sets)
map_socs!(pair.x, conic_sets, a)
residuals = Residuals(p.window)
# Diagonal scaling
M = vcat(affine_sets.A, affine_sets.G)
Mt = M'
# Stepsize parameters and linesearch parameters
        # declared here so assignments made inside the timed try/catch blocks
        # below remain visible where `spectral_norm` is used afterwards
        local spectral_norm
        if !opt.approx_norm
@timeit "svd" if minimum(size(M)) >= 2
try
spectral_norm = Arpack.svds(M, nsv=1)[1].S[1]
catch
println(" WARNING: Failed to compute spectral norm of M, shifting to Frobenius norm")
spectral_norm = LinearAlgebra.norm(M)
end
else
F = LinearAlgebra.svd!(Matrix(M))
spectral_norm = maximum(F.S)
end
else
spectral_norm = LinearAlgebra.norm(M)
end
if spectral_norm < 1e-10
spectral_norm = 1.0
end
# Build struct for storing matrices
mat = Matrices(M, Mt, affine_sets.c)
# Initial primal and dual steps
p.primal_step = 1. / spectral_norm
p.primal_step_old = p.primal_step
p.dual_step = p.primal_step
end
# Initialization
if opt.advanced_initialization
pair.x .= p.primal_step .* mat.c
LinearAlgebra.mul!(a.Mx, mat.M, pair.x)
LinearAlgebra.mul!(a.Mx_old, mat.M, pair.x_old)
end
# Fixed-point loop
@timeit "CP loop" for k in 1:2*opt.max_iter_local
# Update iterator
p.iter = k
# Primal step
@timeit "primal" primal_step!(pair, a, conic_sets, mat, opt, p, arc_list, p.iter)
# Linesearch (dual step)
if opt.line_search_flag
@timeit "linesearch" linesearch!(pair, a, affine_sets, mat, opt, p)
else
@timeit "dual step" dual_step!(pair, a, affine_sets, mat, opt, p)
end
# Compute residuals and update old iterates
@timeit "residual" compute_residual!(residuals, pair, a, p, affine_sets)
# Compute optimality gap and feasibility error
@timeit "gap" compute_gap!(residuals, pair, a, affine_sets, p)
if (opt.check_dual_feas && mod(k, opt.check_dual_feas_freq) == 0) ||
(opt.log_verbose && mod(k, opt.log_freq) == 0 && opt.extended_log2)
cc = ifelse(p.stop_reason == 6, 0.0, 1.0)*c_orig
p.dual_feasibility = dual_feas(pair.y, conic_sets, affine_sets, cc, A_orig, G_orig, a)
p.dual_feasibility_check = true
else
p.dual_feasibility_check = false
end
# Print progress
if opt.log_verbose && mod(k, opt.log_freq) == 0
print_progress(residuals, p, opt, p.dual_feasibility)
end
if p.iter < p.certificate_search_min_iter
continue
end
if opt.certificate_search && p.certificate_search
if p.stop_reason == 6 # Infeasible
if residuals.dual_obj[k] > +opt.certificate_obj_tol
p.dual_feasibility = dual_feas(pair.y, conic_sets, affine_sets, 0*c_orig, A_orig, G_orig, a)
p.dual_feasibility_check = true
if p.dual_feasibility < opt.tol_feasibility_dual
if opt.log_verbose
println("---------------------------------------------------------------------------------------")
println(" Dual ray found")
println("---------------------------------------------------------------------------------------")
end
p.certificate_found = true
p.stop_reason_string *= " [Dual ray found]"
break
end
end
        elseif p.stop_reason == 5 # Unbounded
if residuals.prim_obj[k] < -opt.certificate_obj_tol
if residuals.feasibility[p.iter] < opt.tol_feasibility
if opt.log_verbose
println("---------------------------------------------------------------------------------------")
println(" Primal ray found")
println("---------------------------------------------------------------------------------------")
end
p.certificate_found = true
p.stop_reason_string *= " [Primal ray found]"
break
end
end
end
if (
residuals.prim_obj[k] < -opt.certificate_fail_tol &&
residuals.dual_obj[k] < -opt.certificate_fail_tol &&
residuals.feasibility[p.iter] < -opt.certificate_fail_tol
) || isnan(residuals.comb_residual[k])
if opt.log_verbose
println("---------------------------------------------------------------------------------------")
            println(" Failed to find certificate")
println("---------------------------------------------------------------------------------------")
end
p.stop_reason_string *= " [Failed to find certificate]"
break
end
end
# Check convergence
p.rank_update += 1
if residuals.dual_gap[p.iter] <= opt.tol_gap && residuals.feasibility[p.iter] <= opt.tol_feasibility &&
(!opt.check_dual_feas || p.dual_feasibility < opt.tol_feasibility_dual)
if convergedrank(a, p, conic_sets, opt) && soc_convergence(a, conic_sets, pair, opt, p) && p.iter > opt.min_iter
if !p.certificate_search
p.stop_reason = 1 # Optimal
p.stop_reason_string = "Optimal solution found"
else
if opt.log_verbose
println("---------------------------------------------------------------------------------------")
println(" Failed to find certificate - type 2")
println("---------------------------------------------------------------------------------------")
end
p.stop_reason_string *= " [Failed to find certificate - type 2]"
break
end
break
elseif p.rank_update > p.window
p.update_cont += 1
if p.update_cont > 0
for (idx, sdp) in enumerate(conic_sets.sdpcone)
if p.current_rank[idx] + opt.rank_slack >= p.target_rank[idx]
if min_eig(a, idx, p) > opt.tol_psd
if opt.rank_increment == 0
p.target_rank[idx] = min(opt.rank_increment_factor * p.target_rank[idx], sdp.sq_side)
else
p.target_rank[idx] = min(opt.rank_increment_factor + p.target_rank[idx], sdp.sq_side)
end
end
end
end
p.rank_update, p.update_cont = 0, 0
end
end
# Check divergence
elseif k > p.window && residuals.comb_residual[k - p.window] < residuals.comb_residual[k] && p.rank_update > p.window
p.update_cont += 1
if p.update_cont > opt.divergence_min_update
full_rank_flag = true
for (idx, sdp) in enumerate(conic_sets.sdpcone)
if p.target_rank[idx] < sdp.sq_side
full_rank_flag = false
p.rank_update, p.update_cont = 0, 0
end
if p.current_rank[idx] + opt.rank_slack >= p.target_rank[idx]
if min_eig(a, idx, p) > opt.tol_psd
if opt.rank_increment == 0
p.target_rank[idx] = min(opt.rank_increment_factor * p.target_rank[idx], sdp.sq_side)
else
p.target_rank[idx] = min(opt.rank_increment_factor + p.target_rank[idx], sdp.sq_side)
end
end
end
end
end
# Adaptive stepsizes
elseif residuals.primal_residual[k] > opt.tol_primal && residuals.dual_residual[k] < opt.tol_dual && k > p.window
ada_count += 1
if ada_count > opt.adapt_window
ada_count = 0
if opt.line_search_flag
p.beta *= (1. - p.adapt_level)
p.primal_step /= sqrt(1. - p.adapt_level)
else
p.primal_step /= (1. - p.adapt_level)
p.dual_step *= (1. - p.adapt_level)
end
p.adapt_level *= opt.adapt_decay
end
elseif residuals.primal_residual[k] < opt.tol_primal && residuals.dual_residual[k] > opt.tol_dual && k > p.window
ada_count += 1
if ada_count > opt.adapt_window
ada_count = 0
if opt.line_search_flag
p.beta /= (1. - p.adapt_level)
p.primal_step *= sqrt(1. - p.adapt_level)
else
p.primal_step *= (1. - p.adapt_level)
p.dual_step /= (1. - p.adapt_level)
end
p.adapt_level *= opt.adapt_decay
end
end
# max_iter or time limit stop condition
if p.iter >= opt.max_iter_local || time() - p.time0 >= opt.time_limit
if p.iter > opt.min_iter_time_infeas &&
max_abs_diff(residuals.dual_gap) < opt.infeas_stable_gap_tol &&
residuals.dual_gap[k] > opt.infeas_limit_gap_tol # low gap but far from zero, say 10%
if residuals.feasibility[p.iter] <= opt.tol_feasibility/100
p.stop_reason = 5 # Unbounded
p.stop_reason_string = "Problem declared unbounded due to lack of improvement"
if opt.certificate_search && !p.certificate_search
certificate_dual_infeasibility(affine_sets, p, opt)
push!(sol, cache_solution(pair, residuals, conic_sets, affine_sets, p, opt,
c_orig, A_orig, b_orig, G_orig, h_orig, D, E, var_ordering, a))
# println("6")
elseif opt.certificate_search && p.certificate_search
# error("6")
else
break
end
elseif residuals.feasibility[p.iter] > opt.infeas_feasibility_tol
p.stop_reason = 6 # Infeasible
p.stop_reason_string = "Problem declared infeasible due to lack of improvement"
if opt.certificate_search && !p.certificate_search
certificate_infeasibility(affine_sets, p, opt)
push!(sol, cache_solution(pair, residuals, conic_sets, affine_sets, p, opt,
c_orig, A_orig, b_orig, G_orig, h_orig, D, E, var_ordering, a))
# println("7")
elseif opt.certificate_search && p.certificate_search
# error("7")
else
break
end
end
elseif p.iter >= opt.max_iter_local
p.stop_reason = 3 # Iteration limit
p.stop_reason_string = "Iteration limit of $(opt.max_iter_local) was hit"
if opt.warn_on_limit
@warn(" WARNING: Iteration limit hit.")
end
else
p.stop_reason = 2 # Time limit
p.stop_reason_string = "Time limit hit, limit: $(opt.time_limit) time: $(time() - p.time0)"
if opt.warn_on_limit
println(" WARNING: Time limit hit.")
end
end
if p.iter >= opt.max_iter_local || time() - p.time0 >= opt.time_limit
break
end
end
    # already searching for certificates
if opt.certificate_search && p.certificate_search
continue
end
# Dual obj growing too much
if (p.iter > opt.min_iter_max_obj && residuals.dual_obj[k] > opt.max_obj) || isnan(residuals.dual_obj[k])
# Dual unbounded
p.stop_reason = 6 # Infeasible
p.stop_reason_string = "Infeasible: |Dual objective| = $(residuals.dual_obj[k]) > maximum allowed = $(opt.max_obj)"
if opt.certificate_search && !p.certificate_search
certificate_infeasibility(affine_sets, p, opt)
push!(sol, cache_solution(pair, residuals, conic_sets, affine_sets, p, opt,
c_orig, A_orig, b_orig, G_orig, h_orig, D, E, var_ordering, a))
else
break
end
end
# Primal obj growing too much
if (p.iter > opt.min_iter_max_obj && residuals.prim_obj[k] < -opt.max_obj) || isnan(residuals.prim_obj[k])
p.stop_reason = 5 # Unbounded
p.stop_reason_string = "Unbounded: |Primal objective| = $(residuals.prim_obj[k]) > maximum allowed = $(opt.max_obj)"
if opt.certificate_search && !p.certificate_search
certificate_dual_infeasibility(affine_sets, p, opt)
push!(sol, cache_solution(pair, residuals, conic_sets, affine_sets, p, opt,
c_orig, A_orig, b_orig, G_orig, h_orig, D, E, var_ordering, a))
else
break
end
end
# Stalled feasibility with meaningful gap
if (
p.iter > opt.min_iter_max_obj &&
residuals.dual_gap[k] > opt.infeas_limit_gap_tol && # Low gap but far from zero (~10%)
residuals.feasibility[p.iter] > opt.infeas_feasibility_tol &&
max_abs_diff(residuals.feasibility) < opt.infeas_stable_feasibility_tol
)
p.stop_reason = 6 # Infeasible
p.stop_reason_string = "Infeasible: feasibility stalled at $(residuals.feasibility[p.iter])"
if opt.certificate_search && !p.certificate_search
certificate_infeasibility(affine_sets, p, opt)
push!(sol, cache_solution(pair, residuals, conic_sets, affine_sets, p, opt,
c_orig, A_orig, b_orig, G_orig, h_orig, D, E, var_ordering, a))
else
break
end
end
# Stalled gap at 100%
if (
p.iter > opt.min_iter_max_obj &&
residuals.dual_gap[k] > 1-opt.infeas_gap_tol &&
max_abs_diff(residuals.dual_gap) < opt.infeas_stable_gap_tol
)
if abs(residuals.dual_obj[k]) > abs(residuals.prim_obj[k]) && residuals.feasibility[p.iter] > opt.infeas_feasibility_tol
# Dual unbounded
p.stop_reason = 6 # Infeasible
p.stop_reason_string = "Infeasible: duality gap stalled at 100 % with |Dual objective| >> |Primal objective|"
if opt.certificate_search && !p.certificate_search
certificate_infeasibility(affine_sets, p, opt)
push!(sol, cache_solution(pair, residuals, conic_sets, affine_sets, p, opt,
c_orig, A_orig, b_orig, G_orig, h_orig, D, E, var_ordering, a))
else
break
end
elseif abs(residuals.prim_obj[k]) > abs(residuals.dual_obj[k]) && residuals.feasibility[p.iter] <= opt.tol_feasibility
p.stop_reason = 5 # Unbounded
p.stop_reason_string = "Unbounded: duality gap stalled at 100 % with |Dual objective| << |Primal objective|"
if opt.certificate_search && !p.certificate_search
certificate_dual_infeasibility(affine_sets, p, opt)
push!(sol, cache_solution(pair, residuals, conic_sets, affine_sets, p, opt,
c_orig, A_orig, b_orig, G_orig, h_orig, D, E, var_ordering, a))
else
break
end
end
end
end
# Compute results
time_ = time() - p.time0
# Print result
if opt.log_verbose
val = -1.0
if opt.extended_log2
cc = ifelse(p.stop_reason == 6, 0.0, 1.0)*c_orig
val = dual_feas(pair.y, conic_sets, affine_sets, cc, A_orig, G_orig, a)
end
print_progress(residuals, p, opt, val)
print_result(
p.stop_reason,
time_,
residuals,
length(p.current_rank) > 0 ? maximum(p.current_rank) : 0,
p)
end
if opt.certificate_search && p.certificate_search
@assert length(sol) == 1
if p.certificate_found
if p.stop_reason == 6
c_orig .*= 0.0
end
pop!(sol)
push!(sol, cache_solution(pair, residuals, conic_sets, affine_sets, p, opt,
c_orig, A_orig, b_orig, G_orig, h_orig, D, E, var_ordering, a))
end
else
@assert length(sol) == 0
push!(sol, cache_solution(pair, residuals, conic_sets, affine_sets, p, opt,
c_orig, A_orig, b_orig, G_orig, h_orig, D, E, var_ordering, a))
end
return sol[1]
end
function linesearch!(
pair::PrimalDual,
a::AuxiliaryData,
affine_sets::AffineSets,
mat::Matrices,
opt::Options,
p::Params
)::Nothing
p.primal_step = p.primal_step * sqrt(1. + p.theta)
for i in 1:opt.max_linsearch_steps
p.theta = p.primal_step / p.primal_step_old
@timeit "linesearch 1" begin
a.y_half .= pair.y .+ (p.beta * p.primal_step) .* ((1. + p.theta) .* a.Mx .- p.theta .* a.Mx_old)
end
@timeit "linesearch 2" begin
copyto!(a.y_temp, a.y_half)
box_projection!(a.y_half, affine_sets, p.beta * p.primal_step)
a.y_temp .-= (p.beta * p.primal_step) .* a.y_half
end
@timeit "linesearch 3" LinearAlgebra.mul!(a.Mty, mat.Mt, a.y_temp)
# In-place norm
@timeit "linesearch 4" begin
a.Mty .-= a.Mty_old
a.y_temp .-= pair.y_old
y_norm = LinearAlgebra.norm(a.y_temp)
Mty_norm = LinearAlgebra.norm(a.Mty)
end
if sqrt(p.beta) * p.primal_step * Mty_norm <= opt.delta * y_norm
break
else
p.primal_step *= opt.linsearch_decay
end
end
    # Revert the in-place differences used for the norm computations
a.Mty .+= a.Mty_old
a.y_temp .+= pair.y_old
copyto!(pair.y, a.y_temp)
p.primal_step_old = p.primal_step
p.dual_step = p.beta * p.primal_step
return nothing
end
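# Added commentary (not original code): the backtracking loop above accepts a
# trial primal step tau once
#
#     sqrt(beta) * tau * ||Mt * (y_new - y_old)|| <= delta * ||y_new - y_old||,
#
# and otherwise shrinks tau by `linsearch_decay`. This appears to follow the
# PDHG linesearch of Malitsky & Pock (2018), with the dual step kept
# proportional to the primal step via sigma = beta * tau (last lines above).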
function dual_step!(
pair::PrimalDual,
a::AuxiliaryData,
affine_sets::AffineSets,
mat::Matrices,
opt::Options,
p::Params
)::Nothing
@timeit "dual step 1" begin
a.y_half .= pair.y .+ p.dual_step * (2. * a.Mx .- a.Mx_old)
end
@timeit "dual step 2" begin
copyto!(a.y_temp, a.y_half)
box_projection!(a.y_half, affine_sets, p.dual_step)
a.y_temp .-= p.dual_step * a.y_half
end
@timeit "linesearch 3" LinearAlgebra.mul!(a.Mty, mat.Mt, a.y_temp)
copyto!(pair.y, a.y_temp)
p.primal_step_old = p.primal_step
return nothing
end
function primal_step!(
pair::PrimalDual,
a::AuxiliaryData,
cones::ConicSets,
mat::Matrices,
opt::Options,
p::Params,
arc_list,
iter::Int64,
)::Nothing
pair.x .-= p.primal_step .* (a.Mty .+ mat.c)
# Projection onto the p.s.d. cone
if length(cones.sdpcone) >= 1
@timeit "sdp proj" psd_projection!(pair.x, a, cones, opt, p, arc_list, iter)
end
# Projection onto the second order cone
if length(cones.socone) >= 1
@timeit "soc proj" soc_projection!(pair.x, a, cones, opt, p)
end
@timeit "linesearch -1" LinearAlgebra.mul!(a.Mx, mat.M, pair.x)
return nothing
end
function certificate_dual_infeasibility(affine_sets, p, opt)
if opt.log_verbose
println("-"^87)
println(" Begin search for dual infeasibility certificate")
println("-"^87)
end
fill!(affine_sets.b, 0.0)
fill!(affine_sets.h, 0.0)
certificate_parameters(p, opt)
return nothing
end
function certificate_infeasibility(affine_sets, p, opt)
if opt.log_verbose
println("-"^87)
println(" Begin search for infeasibility certificate")
println("-"^87)
end
fill!(affine_sets.c, 0.0)
certificate_parameters(p, opt)
return nothing
end
function certificate_parameters(p, opt)
p.certificate_search_min_iter = p.iter + 2 * opt.convergence_window + div(p.iter, 5) + 1000
p.certificate_search = true
opt.time_limit *= 1.1
opt.max_iter_local = opt.max_iter_local + div(opt.max_iter_local, 10)
return nothing
end
function cone_feas(v, cones, a, num = sqrt(2))
sdp_viol = 0.0
sdplen = psd_vec_to_square(v, a, cones, num) - 1
for (idx, sdp) in enumerate(cones.sdpcone)
if sdp.sq_side == 1
sdp_viol = max(sdp_viol, -min(0.0, a.m[idx][1]))
else
fact = LinearAlgebra.eigen!(a.m[idx])
sdp_viol = max(sdp_viol, -min(0.0, minimum(fact.values)))
end
end
soc_viol = 0.0
cont = sdplen
for (idx, soc) in enumerate(cones.socone)
len = soc.len
        s = v[cont+1]
        soc_viol = max(soc_viol, -min(0.0, s - LinearAlgebra.norm(view(v, cont + 2:cont + len))))
cont += len
end
return max(sdp_viol, soc_viol), cont
end
function get_duals(y::Vector{T}, cones::ConicSets, affine::AffineSets, c, A, G) where T
dual_eq = y[1:affine.p]
dual_in = y[affine.p+1:end]
dual_cone = + c + A' * dual_eq + G' * dual_in
fix_diag_scaling(dual_cone, cones, 2.0)
return dual_eq, dual_in, dual_cone
end
function dual_feas(y::Vector{T}, cones::ConicSets, affine::AffineSets, c, A, G, a) where T
dual_eq, dual_in, dual_cone = get_duals(y, cones, affine, c, A, G)
return dual_feas(dual_in, dual_cone, cones, a)
end
function dual_feas(dual_in::Vector{T}, dual_cone::Vector{T}, cones, a) where T
ineq_viol = 0.0
if length(dual_in) > 0
ineq_viol = -min(0.0, minimum(dual_in))
end
cone_viol, cont = cone_feas(dual_cone, cones, a)
zero_viol = 0.0
dual_zr = dual_cone[(cont+1):end]
if length(dual_zr) > 0
zero_viol = maximum(abs.(dual_zr))
end
return max(cone_viol, ineq_viol, zero_viol)
end
function fix_diag_scaling(v, cones, num)
# Remove diag scaling
cont = 1
@inbounds for sdp in cones.sdpcone, j in 1:sdp.sq_side, i in 1:j#j:sdp.sq_side
if i != j
v[cont] /= num
end
cont += 1
end
end
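# Added commentary: `num` undoes the svec-style packing used for PSD blocks.
# Off-diagonal entries are stored scaled by sqrt(2) so that inner products are
# preserved; for a symmetric 2x2 matrix,
#
#   svec(X) = (X[1,1], sqrt(2)*X[1,2], X[2,2])
#   dot(svec(X), svec(Y)) == tr(X*Y)   # X11*Y11 + 2*X12*Y12 + X22*Y22
#
# so dividing the off-diagonal entries by `num` recovers the natural matrix
# entries from the packed vector.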
function cache_solution(pair, residuals, conic_sets, affine_sets, p, opt,
c, A, b, G, h, D, E, var_ordering, a)
# Remove diag scaling
fix_diag_scaling(pair.x, conic_sets, sqrt(2.0))
# Remove equilibrating
if opt.equilibration
pair.x = D * pair.x
pair.y = E * pair.y
end
slack_eq = A * pair.x - b
slack_ineq = G * pair.x - h
dual_eq, dual_in, dual_cone =
get_duals(pair.y, conic_sets, affine_sets, c, A, G)
dual_feasibility = dual_feas(dual_in, dual_cone, conic_sets, a)
return Result(
p.stop_reason,
p.stop_reason_string,
pair.x[var_ordering],
dual_cone[var_ordering],
dual_eq,
dual_in,
slack_eq,
slack_ineq,
residuals.equa_feasibility,
residuals.ineq_feasibility,
residuals.prim_obj[p.iter],
residuals.dual_obj[p.iter],
residuals.dual_gap[p.iter],
time() - p.time0,
p.iter,
sum(p.current_rank),
residuals.feasibility[p.iter] <= opt.tol_feasibility,
dual_feasibility <= opt.tol_feasibility_dual,
p.certificate_found,
1,
)
end | ProxSDP | https://github.com/mariohsouto/ProxSDP.jl.git |
|
[
"MIT"
] | 1.8.3 | 0ca8d20076eac7fef4ea8394082bd3ffdd3c4733 | code | 6045 | function print_header_1()
println("---------------------------------------------------------------------------------------")
println("=======================================================================================")
println(" ProxSDP : Proximal Semidefinite Programming Solver ")
println(" (c) Mario Souto and Joaquim D. Garcia, 2020 ")
println(" v1.8.3 ")
println("---------------------------------------------------------------------------------------")
end
function print_parameters(opt::Options, conic_sets::ConicSets)
println(" Solver parameters:")
tol_str = " tol_gap = $(opt.tol_gap) tol_feasibility = $(opt.tol_feasibility)\n"
tol_str *= " tol_primal = $(opt.tol_primal) tol_dual = $(opt.tol_dual)"
if length(conic_sets.socone) >= 1
tol_str *= " tol_soc = $(opt.tol_soc)"
end
if length(conic_sets.sdpcone) >= 1
tol_str *= " tol_psd = $(opt.tol_psd)"
end
println(tol_str)
println(" max_iter = $(opt.max_iter_local) time_limit = $(opt.time_limit)s")
return nothing
end
eq_plural(val::Integer) = ifelse(val != 1, "ies", "y")
eqs(val::Integer) = val > 0 ? "$(val) linear equalit$(eq_plural(val))" : ""
ineqs(val::Integer) = val > 0 ? "$(val) linear inequalit$(eq_plural(val))" : ""
function print_constraints(aff::AffineSets)
println(" Constraints:")
println(" $(eqs(aff.p)) and $(eqs(aff.m))")
return nothing
end
cone_plural(val::Integer) = ifelse(val != 1, "s", "")
function print_prob_data(conic_sets::ConicSets)
soc_dict = Dict()
for soc in conic_sets.socone
if soc.len in keys(soc_dict)
soc_dict[soc.len] += 1
else
soc_dict[soc.len] = 1
end
end
psd_dict = Dict()
for psd in conic_sets.sdpcone
if psd.sq_side in keys(psd_dict)
psd_dict[psd.sq_side] += 1
else
psd_dict[psd.sq_side] = 1
end
end
println(" Cones:")
if length(conic_sets.socone) > 0
for (k, v) in soc_dict
println(" $v second order cone$(cone_plural(v)) of size $k")
end
end
if length(conic_sets.sdpcone) > 0
for (k, v) in psd_dict
println(" $v psd cone$(cone_plural(v)) of size $k")
end
end
return nothing
end
function print_header_2(opt, beg = true)
bar = "---------------------------------------------------------------------------------------"
name = " Initializing Primal-Dual Hybrid Gradient method"
cols = "| iter | prim obj | rel. gap | feasb. | prim res | dual res | tg. rank | time(s) |"
if opt.extended_log || opt.extended_log2
bar *= "-----------"
cols *= " dual obj |"
end
if opt.extended_log2
bar *= "-----------"
cols *= " d feasb. |"
end
if beg
println(bar)
println(name)
println(bar)
end
println(cols)
if beg
println(bar)
end
return nothing
end
function print_progress(residuals::Residuals, p::Params, opt, val = -1.0)
primal_res = residuals.primal_residual[p.iter]
dual_res = residuals.dual_residual[p.iter]
s_k = Printf.@sprintf("%d", p.iter)
s_k *= " |"
s_s = Printf.@sprintf("%.2e", residuals.dual_gap[p.iter])
s_s *= " |"
s_o = Printf.@sprintf("%.2e", residuals.prim_obj[p.iter])
s_o *= " |"
s_f = Printf.@sprintf("%.2e", residuals.feasibility[p.iter])
s_f *= " |"
s_p = Printf.@sprintf("%.2e", primal_res)
s_p *= " |"
s_d = Printf.@sprintf("%.2e", dual_res)
s_d *= " |"
s_target_rank = Printf.@sprintf("%g", sum(p.target_rank))
s_target_rank *= " |"
s_time = Printf.@sprintf("%g", time() - p.time0)
s_time *= " |"
a = "|"
a *= " "^max(0, 9 - length(s_k))
a *= s_k
a *= " "^max(0, 11 - length(s_o))
a *= s_o
a *= " "^max(0, 11 - length(s_s))
a *= s_s
a *= " "^max(0, 11 - length(s_f))
a *= s_f
a *= " "^max(0, 11 - length(s_p))
a *= s_p
a *= " "^max(0, 11 - length(s_d))
a *= s_d
a *= " "^max(0, 11 - length(s_target_rank))
a *= s_target_rank
a *= " "^max(0, 11 - length(s_time))
a *= s_time
if opt.extended_log || opt.extended_log2
str = Printf.@sprintf("%.3f", residuals.dual_obj[p.iter]) * " |"
str = " "^max(0, 11 - length(str)) * str
a *= str
end
if opt.extended_log2
str = Printf.@sprintf("%.5f", val) * " |"
str = " "^max(0, 11 - length(str)) * str
a *= str
end
if opt.log_repeat_header
print_header_2(opt, false)
end
println(a)
return nothing
end
function print_result(stop_reason::Int, time_::Float64, residuals::Residuals, max_rank::Int, p::Params)
println("---------------------------------------------------------------------------------------")
println(" Solver status:")
println(" "*p.stop_reason_string)
println(" Time elapsed = $(round(time_; digits = 2)) seconds")
println(" Primal objective = $(round(residuals.prim_obj[p.iter]; digits = 5))")
println(" Dual objective = $(round(residuals.dual_obj[p.iter]; digits = 5))")
println(" Duality gap = $(round(100*residuals.dual_gap[p.iter]; digits = 2)) %")
println("---------------------------------------------------------------------------------------")
println(" Primal feasibility:")
println(" ||A(X) - b|| / (1 + ||b||) = $(round(residuals.equa_feasibility; digits = 6)) [linear equalities] ")
println(" ||max(G(X) - h, 0)|| / (1 + ||h||) = $(round(residuals.ineq_feasibility; digits = 6)) [linear inequalities]")
println(" Rank of p.s.d. variable is $max_rank.")
println("=======================================================================================")
return nothing
end
| ProxSDP | https://github.com/mariohsouto/ProxSDP.jl.git |
|
[
"MIT"
] | 1.8.3 | 0ca8d20076eac7fef4ea8394082bd3ffdd3c4733 | code | 5916 | function psd_vec_to_square(v::Vector{Float64}, a::AuxiliaryData, cones::ConicSets, sqrt_2::Float64 = sqrt(2))
# Build symmetric matrix(es) X
@timeit "reshape1" begin
cont = 1
@inbounds for (idx, sdp) in enumerate(cones.sdpcone), j in 1:sdp.sq_side, i in 1:j#j:sdp.sq_side
if i != j
a.m[idx].data[i,j] = v[cont] / sqrt_2
else
a.m[idx].data[i,j] = v[cont]
end
# a.m[idx].data[i,j] = ifelse(i != j, v[cont] / sqrt_2, v[cont])
cont += 1
end
end
return cont
end
function psd_square_to_vec(v::Vector{Float64}, a::AuxiliaryData, cones::ConicSets, sqrt_2::Float64 = sqrt(2))
    # Pack the symmetric matrix(es) X back into vector form
@timeit "reshape2" begin
cont = 1
@inbounds for (idx, sdp) in enumerate(cones.sdpcone), j in 1:sdp.sq_side, i in 1:j#j:sdp.sq_side
if i != j
v[cont] = a.m[idx].data[i, j] * sqrt_2
else
v[cont] = a.m[idx].data[i, j]
end
cont += 1
end
end
return cont
end
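# Added commentary: the two functions above are inverse packing maps between
# the packed upper-triangle vector (column-major: j = 1:n, i = 1:j) and the
# symmetric matrix storage, with off-diagonals carrying a sqrt(2) factor in
# the vector. A standalone 2x2 sketch of the convention:
#
#   v = [1.0, 2.0, 3.0]                         # (X11, sqrt(2)*X12, X22)
#   X = [1.0 2.0/sqrt(2.0); 2.0/sqrt(2.0) 3.0]  # unpacked matrix
#   # re-packing X column by column with the sqrt(2) scaling recovers v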
function psd_projection!(v::Vector{Float64}, a::AuxiliaryData, cones::ConicSets, opt::Options, p::Params, arc_list, iter::Int64)::Nothing
    p.min_eig = zeros(length(cones.sdpcone))
psd_vec_to_square(v, a, cones)
# Project onto the p.s.d. cone
for (idx, sdp) in enumerate(cones.sdpcone)
p.current_rank[idx] = 0
if sdp.sq_side == 1
a.m[idx][1] = max(0., a.m[idx][1])
p.min_eig[idx] = a.m[idx][1]
elseif !opt.full_eig_decomp &&
p.target_rank[idx] <= opt.max_target_rank_krylov_eigs &&
sdp.sq_side > opt.min_size_krylov_eigs &&
            mod(p.iter, opt.full_eig_freq) > opt.full_eig_len # periodically fall back to a full decomposition
@timeit "eigs" if opt.eigsolver == 1
arpack_eig!(arc_list[idx], a, idx, opt, p)
else
krylovkit_eig!(arc_list[idx], a, idx, opt, p)
end
if !hasconverged(arc_list[idx])
@timeit "eigfact" full_eig!(a, idx, opt, p)
end
else
@timeit "eigfact" full_eig!(a, idx, opt, p)
end
end
psd_square_to_vec(v, a, cones)
return nothing
end
function arpack_eig!(solver::EigSolverAlloc, a::AuxiliaryData, idx::Int, opt::Options, p::Params)::Nothing
arpack_eig!(solver, a.m[idx], p.target_rank[idx], opt)
if hasconverged(solver)
fill!(a.m[idx].data, 0.)
        # TODO: how to measure this when the number of converged eigenvalues is less than the target?
p.min_eig[idx] = minimum(arpack_getvalues(solver))
# if solver.converged_eigs < p.target_rank[idx] && p.min_eig[idx] > 0.0
# p.min_eig[idx] = -Inf
# end
for i in 1:p.target_rank[idx]
val = arpack_getvalues(solver)[i]
if val > 0.
p.current_rank[idx] += 1
vec = view(arpack_getvectors(solver), :, i)
LinearAlgebra.BLAS.gemm!('N', 'T', val, vec, vec, 1., a.m[idx].data)
end
end
end
return nothing
end
function krylovkit_eig!(solver::EigSolverAlloc, a::AuxiliaryData, idx::Int, opt::Options, p::Params)::Nothing
krylovkit_eig!(solver, a.m[idx], p.target_rank[idx], opt)
if hasconverged(solver)
fill!(a.m[idx].data, 0.)
        # TODO: how to measure this when the number of converged eigenvalues is less than the target?
        # min_eig only checks whether the rank-limited projection reaches zero or negative eigenvalues
p.min_eig[idx] = minimum(krylovkit_getvalues(solver))
# if solver.converged_eigs < p.target_rank[idx] #&& p.min_eig[idx] > 0.0
# p.min_eig[idx] = -Inf
# end
for i in 1:min(p.target_rank[idx], solver.converged_eigs)
val = krylovkit_getvalues(solver)[i]
if val > 0.
p.current_rank[idx] += 1
vec = krylovkit_getvector(solver, i)
LinearAlgebra.BLAS.gemm!('N', 'T', val, vec, vec, 1., a.m[idx].data)
end
end
end
return nothing
end
function full_eig!(a::AuxiliaryData, idx::Int, opt::Options, p::Params)::Nothing
p.current_rank[idx] = 0
fact = LinearAlgebra.eigen!(a.m[idx])
p.min_eig[idx] = 0.0 #minimum(fact.values)
fill!(a.m[idx].data, 0.)
for i in 1:length(fact.values)
if fact.values[i] > 0.
v = view(fact.vectors, :, i)
LinearAlgebra.BLAS.gemm!('N', 'T', fact.values[i], v, v, 1., a.m[idx].data)
if fact.values[i] > opt.tol_psd
p.current_rank[idx] += 1
end
end
end
return nothing
end
function min_eig(a::AuxiliaryData, idx::Int, p::Params)
# if p.min_eig[idx] == -Inf
# @timeit "bad min eig" begin
# fact = LinearAlgebra.eigen!(a.m[idx])
# p.min_eig[idx] = minimum(fact.values)
# end
# end
return p.min_eig[idx]
end
function soc_projection!(v::Vector{Float64}, a::AuxiliaryData, cones::ConicSets, opt::Options, p::Params)::Nothing
for (idx, soc) in enumerate(cones.socone)
soc_projection!(a.soc_v[idx], a.soc_s[idx])
end
return nothing
end
function soc_projection!(v::ViewVector, s::ViewScalar)::Nothing
nv = LinearAlgebra.norm(v, 2)
if nv <= -s[]
s[] = 0.
v .= 0.
elseif nv <= s[]
#do nothing
else
val = .5 * (1. + s[] / nv)
v .*= val
s[] = val * nv
end
return nothing
end
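# Added commentary: the branches above are the standard Euclidean projection
# onto the second-order cone {(s, v) : ||v|| <= s}. Worked example: for s = 1
# and v = [3.0, 4.0] we have ||v|| = 5 > s, so val = 0.5 * (1 + 1/5) = 0.6 and
# the projected point is s = 0.6 * 5 = 3.0, v = [1.8, 2.4], which satisfies
# ||v|| == s on the cone boundary.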
function box_projection!(v::Array{Float64,1}, aff::AffineSets, step::Float64)::Nothing
# Projection onto = b
@simd for i in 1:length(aff.b)
@inbounds v[i] = aff.b[i]
end
# Projection onto <= h
@simd for i in 1:length(aff.h)
@inbounds v[aff.p+i] = min(v[aff.p+i] / step, aff.h[i])
end
return nothing
end | ProxSDP | https://github.com/mariohsouto/ProxSDP.jl.git |
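# Added commentary: `box_projection!` is used by the dual updates through a
# Moreau-style decomposition. With C = {b} x {y : y <= h}, the callers compute
#
#   y_new = y_half - step * proj_C(y_half / step)
#
# which is the prox of the conjugate of the indicator of C with step size
# `step`. The equality entries are insensitive to the scaling (the projection
# is b regardless), which is why only the inequality entries are divided by
# `step` above.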
|
[
"MIT"
] | 1.8.3 | 0ca8d20076eac7fef4ea8394082bd3ffdd3c4733 | code | 3390 |
function compute_gap!(residuals::Residuals, pair::PrimalDual, a::AuxiliaryData, aff::AffineSets, p::Params)::Nothing
    # In-place primal feasibility error
if aff.p > 0
residuals.equa_feasibility = 0.
@simd for i in 1:aff.p
@inbounds residuals.equa_feasibility = max(residuals.equa_feasibility, abs(a.Mx[i] - aff.b[i]))
end
residuals.equa_feasibility /= (1. + p.norm_b)
end
if aff.m > 0
residuals.ineq_feasibility = 0.
@simd for i in aff.p+1:aff.p+aff.m
@inbounds residuals.ineq_feasibility = max(residuals.ineq_feasibility, a.Mx[i] - aff.h[i-aff.p])
end
residuals.ineq_feasibility /= (1. + p.norm_h)
end
residuals.feasibility[p.iter] = max(residuals.equa_feasibility, residuals.ineq_feasibility)
# Primal-dual gap
residuals.prim_obj[p.iter] = LinearAlgebra.dot(aff.c, pair.x)
residuals.dual_obj[p.iter] = 0.
if aff.p > 0
residuals.dual_obj[p.iter] -= LinearAlgebra.dot(aff.b, @view pair.y[1:aff.p])
end
if aff.m > 0
residuals.dual_obj[p.iter] -= LinearAlgebra.dot(aff.h, @view pair.y[aff.p+1:end])
end
residuals.dual_gap[p.iter] =
abs(residuals.prim_obj[p.iter] - residuals.dual_obj[p.iter]) /
(1. + abs(residuals.prim_obj[p.iter]) + abs(residuals.dual_obj[p.iter]))
return nothing
end
function compute_residual!(residuals::Residuals, pair::PrimalDual, a::AuxiliaryData, p::Params, aff::AffineSets)::Nothing
# Primal residual
# Px_old
a.Mty_old .= pair.x_old .- p.primal_step .* a.Mty_old
# Px
pair.x_old .= pair.x .- p.primal_step .* a.Mty
# Px - Px_old
pair.x_old .-= a.Mty_old
residuals.primal_residual[p.iter] =
sqrt(aff.n) * LinearAlgebra.norm(pair.x_old, Inf) /
max(LinearAlgebra.norm(a.Mty_old, Inf), p.norm_b, p.norm_h, 1.)
# Dual residual
# Py_old
a.Mx_old .= pair.y_old .- p.dual_step .* a.Mx_old
# Py
pair.y_old .= pair.y .- p.dual_step .* a.Mx
# Py - Py_old
pair.y_old .-= a.Mx_old
residuals.dual_residual[p.iter] =
sqrt(aff.m + aff.p) * LinearAlgebra.norm(pair.y_old, Inf) /
max(LinearAlgebra.norm(a.Mx_old, Inf), p.norm_c, 1.)
# Compute combined residual
residuals.comb_residual[p.iter] = max(residuals.primal_residual[p.iter], residuals.dual_residual[p.iter])
# Keep track of previous iterates
copyto!(pair.x_old, pair.x)
copyto!(pair.y_old, pair.y)
copyto!(a.Mty_old, a.Mty)
copyto!(a.Mx_old, a.Mx)
return nothing
end
function soc_convergence(a::AuxiliaryData, cones::ConicSets, pair::PrimalDual, opt::Options, p::Params)::Bool
for (idx, soc) in enumerate(cones.socone)
if soc_gap(a.soc_v[idx], a.soc_s[idx]) >= opt.tol_soc
return false
end
end
return true
end
function soc_gap(v::ViewVector, s::ViewScalar)
return LinearAlgebra.norm(v, 2) - s[]
end
function convergedrank(a, p::Params, cones::ConicSets, opt::Options)::Bool
for (idx, sdp) in enumerate(cones.sdpcone)
if !(
sdp.sq_side < opt.min_size_krylov_eigs ||
p.target_rank[idx] > opt.max_target_rank_krylov_eigs ||
min_eig(a, idx, p) < opt.tol_psd
)
# @show min_eig(a, idx, p), -opt.tol_psd
return false
end
end
return true
end | ProxSDP | https://github.com/mariohsouto/ProxSDP.jl.git |
|
[
"MIT"
] | 1.8.3 | 0ca8d20076eac7fef4ea8394082bd3ffdd3c4733 | code | 2241 |
function preprocess!(aff::AffineSets, conic_sets::ConicSets)
c_orig = zeros(1)
if length(conic_sets.sdpcone) >= 1 || length(conic_sets.socone) >= 1
all_cone_vars = Int[]
for (idx, sdp) in enumerate(conic_sets.sdpcone)
M = zeros(Int, sdp.sq_side, sdp.sq_side)
iv = conic_sets.sdpcone[idx].vec_i
im = conic_sets.sdpcone[idx].mat_i
for i in eachindex(iv)
M[im[i]] = iv[i]
end
X = LinearAlgebra.Symmetric(M, :L)
            n = size(X)[1] # matrix side (rows == columns)
cont = 1
sdp_vars = zeros(Int, div(sdp.sq_side*(sdp.sq_side+1), 2))
for j in 1:n, i in j:n
sdp_vars[cont] = X[i, j]
cont += 1
end
append!(all_cone_vars, sdp_vars)
end
for (idx, soc) in enumerate(conic_sets.socone)
soc_vars = copy(soc.idx)
append!(all_cone_vars, soc_vars)
end
totvars = aff.n
extra_vars = sort(collect(setdiff(Set(collect(1:totvars)),Set(all_cone_vars))))
ord = vcat(all_cone_vars, extra_vars)
else
ord = collect(1:aff.n)
end
c_orig = copy(aff.c)
aff.A, aff.G, aff.c = aff.A[:, ord], aff.G[:, ord], aff.c[ord]
return c_orig[ord], sortperm(ord)
end
function norm_scaling(affine_sets::AffineSets, cones::ConicSets)
cte = (sqrt(2.) / 2.)
rows = SparseArrays.rowvals(affine_sets.A)
cont = 1
for sdp in cones.sdpcone, j in 1:sdp.sq_side, i in 1:j
if i != j
for line in SparseArrays.nzrange(affine_sets.A, cont)
affine_sets.A[rows[line], cont] *= cte
end
end
cont += 1
end
rows = SparseArrays.rowvals(affine_sets.G)
cont = 1
for sdp in cones.sdpcone, j in 1:sdp.sq_side, i in 1:j
if i != j
for line in SparseArrays.nzrange(affine_sets.G, cont)
affine_sets.G[rows[line], cont] *= cte
end
end
cont += 1
end
cont = 1
@inbounds for sdp in cones.sdpcone, j in 1:sdp.sq_side, i in 1:j
if i != j
affine_sets.c[cont] *= cte
end
cont += 1
end
return nothing
end | ProxSDP | https://github.com/mariohsouto/ProxSDP.jl.git |
|
[
"MIT"
] | 1.8.3 | 0ca8d20076eac7fef4ea8394082bd3ffdd3c4733 | code | 4916 |
struct CircularVector{T}
v::Vector{T}
l::Int
CircularVector{T}(l::Integer) where T = new(zeros(T, l), l)
end
function min_abs_diff(v::CircularVector{T}) where T
val = Inf
for i in 1:Base.length(v)
val = min(val, abs(v[i] - v[i-1]))
end
return val
end
function max_abs_diff(v::CircularVector{T}) where T
val = 0.0
for i in 1:Base.length(v)
val = max(val, abs(v[i] - v[i-1]))
end
return val
end
function Base.getindex(V::CircularVector{T}, i::Int) where T
return V.v[mod1(i, V.l)]
end
function Base.setindex!(V::CircularVector{T}, val::T, i::Int) where T
V.v[mod1(i, V.l)] = val
end
function Base.length(V::CircularVector{T}) where T
return V.l
end
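# Hedged usage sketch: `CircularVector` wraps a fixed-length buffer with
# wrap-around indexing via `mod1`, cheaply tracking the last `l` values of a
# running series:
#
#   cv = CircularVector{Float64}(3)
#   cv[1] = 10.0; cv[2] = 20.0; cv[3] = 30.0
#   cv[4] = 40.0    # overwrites slot mod1(4, 3) == 1
#   cv[1]           # 40.0
#   cv[0]           # 30.0, wraps back to the last slot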
mutable struct AffineSets
n::Int # Size of primal variables
p::Int # Number of linear equalities
m::Int # Number of linear inequalities
    extra::Int # Number of additional linear equalities (for disjoint cones)
A::SparseArrays.SparseMatrixCSC{Float64,Int64}
G::SparseArrays.SparseMatrixCSC{Float64,Int64}
b::Vector{Float64}
h::Vector{Float64}
c::Vector{Float64}
end
mutable struct SDPSet
vec_i::Vector{Int}
mat_i::Vector{Int}
tri_len::Int
sq_side::Int
end
mutable struct SOCSet
idx::Vector{Int}
len::Int
end
mutable struct ConicSets
sdpcone::Vector{SDPSet}
socone::Vector{SOCSet}
end
Base.@kwdef mutable struct Result
status::Int = 0
status_string::String = "Problem not solved"
primal::Vector{Float64} = Float64[]
dual_cone::Vector{Float64} = Float64[]
dual_eq::Vector{Float64} = Float64[]
dual_in::Vector{Float64} = Float64[]
slack_eq::Vector{Float64} = Float64[]
slack_in::Vector{Float64} = Float64[]
primal_residual::Float64 = NaN
dual_residual::Float64 = NaN
objval::Float64 = NaN
dual_objval::Float64 = NaN
gap::Float64 = NaN
time::Float64 = NaN
iter::Int = -1
final_rank::Int = -1
primal_feasible_user_tol::Bool = false
dual_feasible_user_tol::Bool = false
certificate_found::Bool = false
result_count::Int = 0
end
mutable struct PrimalDual
x::Vector{Float64}
x_old::Vector{Float64}
y::Vector{Float64}
y_old::Vector{Float64}
PrimalDual(aff::AffineSets) = new(
zeros(aff.n), zeros(aff.n), zeros(aff.m+aff.p), zeros(aff.m+aff.p)
)
end
mutable struct WarmStart
x::Vector{Float64}
y_eq::Vector{Float64}
y_in::Vector{Float64}
end
mutable struct Residuals
dual_gap::CircularVector{Float64}
prim_obj::CircularVector{Float64}
dual_obj::CircularVector{Float64}
equa_feasibility::Float64
ineq_feasibility::Float64
feasibility::CircularVector{Float64}
primal_residual::CircularVector{Float64}
dual_residual::CircularVector{Float64}
comb_residual::CircularVector{Float64}
Residuals(window::Int) = new(
CircularVector{Float64}(2*window),
CircularVector{Float64}(2*window),
CircularVector{Float64}(2*window),
.0,
.0,
CircularVector{Float64}(2*window),
CircularVector{Float64}(2*window),
CircularVector{Float64}(2*window),
CircularVector{Float64}(2*window)
)
end
const ViewVector = SubArray#{Float64, 1, Vector{Float64}, Tuple{UnitRange{Int}}, true}
const ViewScalar = SubArray#{Float64, 1, Vector{Float64}, Tuple{Int}, true}
mutable struct AuxiliaryData
m::Vector{LinearAlgebra.Symmetric{Float64,Matrix{Float64}}}
Mty::Vector{Float64}
Mty_old::Vector{Float64}
Mx::Vector{Float64}
Mx_old::Vector{Float64}
y_half::Vector{Float64}
y_temp::Vector{Float64}
soc_v::Vector{ViewVector}
soc_s::Vector{ViewScalar}
function AuxiliaryData(aff::AffineSets, cones::ConicSets)
new(
[LinearAlgebra.Symmetric(zeros(sdp.sq_side, sdp.sq_side), :U) for sdp in cones.sdpcone],
zeros(aff.n), zeros(aff.n),
zeros(aff.p+aff.m), zeros(aff.p+aff.m),
zeros(aff.p+aff.m), zeros(aff.p+aff.m),
ViewVector[], ViewScalar[]
)
end
end
mutable struct Matrices
M::SparseArrays.SparseMatrixCSC{Float64,Int64}
Mt::SparseArrays.SparseMatrixCSC{Float64,Int64}
c::Vector{Float64}
end
mutable struct Params
current_rank::Vector{Int}
target_rank::Vector{Int}
rank_update::Int
update_cont::Int
min_eig::Vector{Float64}
iter::Int
stop_reason::Int
stop_reason_string::String
iteration::Int
primal_step::Float64
primal_step_old::Float64
dual_step::Float64
theta::Float64
beta::Float64
adapt_level::Float64
window::Int
time0::Float64
norm_c::Float64
norm_b::Float64
norm_h::Float64
sqrt2::Float64
dual_feasibility::Float64
dual_feasibility_check::Bool
certificate_search::Bool
certificate_search_min_iter::Int
certificate_found::Bool
# solution backup
Params() = new()
end
| ProxSDP | https://github.com/mariohsouto/ProxSDP.jl.git |
|
[
"MIT"
] | 1.8.3 | 0ca8d20076eac7fef4ea8394082bd3ffdd3c4733 | code | 958 |
function map_socs!(v::Vector{Float64}, cones::ConicSets, a::AuxiliaryData)
cont = 0
for (idx, sdp) in enumerate(cones.sdpcone)
cont += div(sdp.sq_side * (sdp.sq_side + 1), 2)
end
sizehint!(a.soc_v, length(cones.socone))
sizehint!(a.soc_s, length(cones.socone))
for (idx, soc) in enumerate(cones.socone)
len = soc.len
push!(a.soc_s, view(v, cont + 1))
push!(a.soc_v, view(v, cont + 2:cont + len))
cont += len
end
return nothing
end
function ivech!(out::AbstractMatrix{T}, v::AbstractVector{T}) where T
n = sympackeddim(length(v))
n1, n2 = size(out)
@assert n == n1 == n2
c = 0
for j in 1:n, i in 1:j
c += 1
out[i,j] = v[c]
end
return out
end
function ivech(v::AbstractVector{T}) where T
n = sympackeddim(length(v))
out = zeros(n, n)
ivech!(out, v)
return out
end
ivec(X) = Matrix(LinearAlgebra.Symmetric(ivech(X),:U))
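# Hedged usage sketch (assumes `sympackeddim`, defined elsewhere in the
# package, maps packed length to matrix side): `ivech` unpacks an upper
# triangle stored column-major into a square matrix and `ivec` symmetrizes it.
#
#   v = [1.0, 2.0, 3.0]   # packed 2x2 upper triangle: (X11, X12, X22)
#   ivech(v)              # [1.0 2.0; 0.0 3.0]
#   ivec(v)               # [1.0 2.0; 2.0 3.0]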
| ProxSDP | https://github.com/mariohsouto/ProxSDP.jl.git |
|
[
"MIT"
] | 1.8.3 | 0ca8d20076eac7fef4ea8394082bd3ffdd3c4733 | code | 1576 | function base_sdp(solver, seed)
Random.seed!(seed)
if Base.libblas_name == "libmkl_rt"
model = Model()
else
model = Model(solver=solver)
end
n = 2
if Base.libblas_name == "libmkl_rt"
@variable(model, X[1:n, 1:n], PSD)
else
@variable(model, X[1:n, 1:n], SDP)
end
@objective(model, Min, -3X[1,1]-4X[2,2])
@constraint(model, 2X[1,1]+1X[2,2] <= 4)
@constraint(model, 1X[1,1]+2X[2,2] <= 4)
if Base.libblas_name == "libmkl_rt"
JuMP.attach(model, solver)
end
teste = JuMP.solve(model)
if Base.libblas_name == "libmkl_rt"
@show XX = getvalue2.(X)
else
@show XX = getvalue.(X)
end
end
function base_sdp2(solver, seed)
Random.seed!(seed)
if Base.libblas_name == "libmkl_rt"
model = Model()
else
model = Model(solver=solver)
end
n = 4
if Base.libblas_name == "libmkl_rt"
@variable(model, X[1:n, 1:n], PSD)
else
@variable(model, X[1:n, 1:n], SDP)
end
@objective(model, Min, -3X[1,1]-4X[2,2])
@constraint(model, 2X[1,1]+1X[2,2]+X[3,3] == 4)
@constraint(model, 1X[1,1]+2X[2,2]+X[4,4] == 4)
if Base.libblas_name == "libmkl_rt"
JuMP.attach(model, solver)
end
teste = JuMP.solve(model)
if Base.libblas_name == "libmkl_rt"
@show XX = getvalue2.(X)
else
@show XX = getvalue.(X)
end
end
getvalue2(var::JuMP.Variable) = (m=var.m;m.solverinstance.primal[m.solverinstance.varmap[m.variabletosolvervariable[var.instanceindex]]])
| ProxSDP | https://github.com/mariohsouto/ProxSDP.jl.git |
|
[
"MIT"
] | 1.8.3 | 0ca8d20076eac7fef4ea8394082bd3ffdd3c4733 | code | 759 | import Random
function mimo_data(seed, m, n)
rng = Random.MersenneTwister(seed)
# Channel
H = Random.randn(rng, (m, n))
# Gaussian noise
v = Random.randn(rng, (m, 1))
# True signal
s = Random.rand(rng, [-1, 1], n)
# Received signal
sigma = .0001
y = H * s + sigma * v
L = [hcat(H' * H, -H' * y); hcat(-y' * H, y' * y)]
return s, H, y, L
end
function mimo_eval(s, H, y, L, XX)
x_hat = sign.(XX[1:end-1, end])
rank = length([eig for eig in LinearAlgebra.eigen(XX).values if eig > 1e-7])
@show decode_error = sum(abs.(x_hat - s))
@show rank
@show LinearAlgebra.norm(y - H * x_hat)
@show LinearAlgebra.norm(y - H * s)
@show tr(L * XX)
return nothing
end | ProxSDP | https://github.com/mariohsouto/ProxSDP.jl.git |
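# Hedged usage sketch: `mimo_data` draws a random MIMO detection instance and
# `mimo_eval` scores a candidate solution matrix against it.
#
#   s, H, y, L = mimo_data(0, 30, 3)   # seed 0, m = 30 receive dims, n = 3 symbols
#   size(L) == (4, 4)                  # L is (n + 1) x (n + 1)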
|
[
"MIT"
] | 1.8.3 | 0ca8d20076eac7fef4ea8394082bd3ffdd3c4733 | code | 947 | import Random
import LinearAlgebra
function randsdp_data(seed, m, n)
rng = Random.MersenneTwister(seed)
# n = 15 # Instance size
# m = 10 # Number of constraints
# Objective function
c_sqrt = Random.rand(rng, Float64, (n, n))
C = c_sqrt * c_sqrt'
# C[1, 2] *= 0.5
# C[2, 1] = C[1, 2]
# Generate m-dimensional feasible system
A, b = Dict(), Dict()
X_ = Random.randn(rng, (n, n))
X_ = X_ * X_'
for i in 1:m
A[i] = Random.rand(rng, Float64, (n, n))
A[i] = A[i] * A[i]'
b[i] = tr(A[i] * X_)
end
return A, b, C
end
function randsdp_eval(A,b,C,n,m,XX)
@show minus_rank = length([eig for eig in LinearAlgebra.eigen(XX).values if eig < -1e-10])
@show rank = length([eig for eig in LinearAlgebra.eigen(XX).values if eig > 1e-10])
@show tr(C * XX)
for i in 1:m
@show tr(A[i] * XX)-b[i]
end
nothing
end | ProxSDP | https://github.com/mariohsouto/ProxSDP.jl.git |
|
[
"MIT"
] | 1.8.3 | 0ca8d20076eac7fef4ea8394082bd3ffdd3c4733 | code | 1489 | function sdplib_data(path)
# Read data from file
data = DelimitedFiles.readdlm(path, use_mmap=true)
# Parse SDPLIB data
m = data[1, 1]
if isa(data[3, 1], Float64) || isa(data[3, 1], Int64)
blks = data[3, :]
for elem = 1:length(blks)
if blks[elem] == ""
blks = blks[1:elem-1]
break
end
end
else
blks = parse.(Float64, split(data[3, 1][2:end - 1], ","))
end
cum_blks = pushfirst!(cumsum(blks), 0)
if isa(data[4, 1], Float64) || isa(data[4, 1], Int64)
c = data[4, :]
else
c = [parse(Float64,string) for string in split(data[4, 1][2:end - 1], ",")]
end
n = abs(cum_blks[end])
n = length(c)
F = Dict(i => SparseArrays.spzeros(n, n) for i = 0:m)
for k=5:size(data)[1]
idx = cum_blks[data[k, 2]]
# if data[k, 2] == 1
# idx = 0
# else
# idx = 161
# end
i, j = data[k, 3] + idx, data[k, 4] + idx
if data[k, 1] == 0
F[0][i, j] = - data[k, 5]
F[0][j, i] = - data[k, 5]
else
F[data[k, 1]][i, j] = data[k, 5]
F[data[k, 1]][j, i] = data[k, 5]
end
end
return n, m, F, c
end
function sdplib_eval(F,c,n,m,XX)
rank = length([eig for eig in LinearAlgebra.eigen(XX).values if eig > 1e-10])
@show rank
@show tr(F[0] * XX)
nothing
end | ProxSDP | https://github.com/mariohsouto/ProxSDP.jl.git |
|
[
"MIT"
] | 1.8.3 | 0ca8d20076eac7fef4ea8394082bd3ffdd3c4733 | code | 897 |
function sensorloc_data(seed, n)
rng = Random.MersenneTwister(seed)
# n = number of sensors points
# m = number of anchor points
m = floor(Int, 0.1 * n)
# Sensor true position (2 dimensional)
x_true = Random.rand(rng, Float64, (2, n))
# Distances from sensors to sensors
d = Dict((i, j) => LinearAlgebra.norm(x_true[:, i] - x_true[:, j]) for i in 1:n for j in 1:i)
# Anchor positions
a = Dict(i => Random.rand(rng, Float64, (2, 1)) for i in 1:m)
# Distances from anchor to sensors
d_bar = Dict((k, j) => LinearAlgebra.norm(x_true[:, j] - a[k]) for k in 1:m for j in 1:n)
return m, x_true, a, d, d_bar
end
function sensorloc_eval(n, m, x_true, XX)
@show LinearAlgebra.norm(x_true - XX[1:2, 3:n + 2])
@show rank = length([eig for eig in LinearAlgebra.eigen(XX).values if eig > 1e-7])
return nothing
end | ProxSDP | https://github.com/mariohsouto/ProxSDP.jl.git |
|
[
"MIT"
] | 1.8.3 | 0ca8d20076eac7fef4ea8394082bd3ffdd3c4733 | code | 755 |
# only Convex.jl tests
# using Convex, ProxSDP, Test
# using Convex.ProblemDepot: run_tests
# @testset "Convex Problem Depot tests" begin
# run_tests(; exclude=[r"mip", r"exp"]) do problem
# solve!(problem, () -> ProxSDP.Optimizer(
# log_freq = 1_000_000, log_verbose = true,
# tol_gap = 5e-8, tol_feasibility = 1e-7,
# max_iter = 10_000_000, time_limit = 30.)
# )
# end
# end
# Convex.jl and SumOfSquares.jl tests
using ConvexTests, ProxSDP
@info "Starting ProxSDP tests"
do_tests("ProxSDP", () -> ProxSDP.Optimizer(
log_freq = 1_000_000, log_verbose = true,
tol_gap = 5e-8, tol_feasibility = 1e-7,
max_iter = 100_000_000, time_limit = 4 * 30.
); exclude = [r"mip", r"exp"])
| ProxSDP | https://github.com/mariohsouto/ProxSDP.jl.git |
|
[
"MIT"
] | 1.8.3 | 0ca8d20076eac7fef4ea8394082bd3ffdd3c4733 | code | 1619 | function jump_mimo(solver, seed, n; verbose = false, test = false)
# n = 3
m = 10n
s, H, y, L = mimo_data(seed, m, n)
nvars = ProxSDP.sympackedlen(n + 1)
model = Model(ProxSDP.Optimizer)
@variable(model, X[1:n+1, 1:n+1], PSD)
for j in 1:(n+1), i in j:(n+1)
@constraint(model, X[i, j] <= 1.0)
@constraint(model, X[i, j] >= -1.0)
end
@objective(model, Min, sum(L[i, j] * X[i, j] for j in 1:n+1, i in 1:n+1))
@constraint(model, ctr[i in 1:n+1], X[i, i] == 1.0)
teste = @time optimize!(model)
XX = value.(X)
if test
for j in 1:n+1, i in 1:n+1
@test 1.01 > abs(XX[i,j]) > 0.99
end
end
verbose && mimo_eval(s,H,y,L,XX)
objval = objective_value(model)
stime = MOI.get(model, MOI.SolveTimeSec())
rank = -1
try
@show rank = model.moi_backend.optimizer.model.optimizer.sol.final_rank
catch
end
status = 0
if JuMP.termination_status(model) == MOI.OPTIMAL
status = 1
end
# SDP constraints
max_spd_violation = minimum(eigen(XX).values)
# test violations of linear constraints
max_lin_viol = 0.0
for j in 1:(n+1), i in j:(n+1)
val = abs(XX[i,j]) - 1.0
if val > 0.0
if val > max_lin_viol
max_lin_viol = val
end
end
end
for j in 1:(n+1)
val = abs(XX[j,j] - 1.0)
if val > 0.0
if val > max_lin_viol
max_lin_viol = val
end
end
end
return (objval, stime, rank, status, max_lin_viol, max_spd_violation)
end
| ProxSDP | https://github.com/mariohsouto/ProxSDP.jl.git |
|
[
"MIT"
] | 1.8.3 | 0ca8d20076eac7fef4ea8394082bd3ffdd3c4733 | code | 988 | function jump_randsdp(solver, seed, n, m, verbose = false)
A, b, C = randsdp_data(seed, m, n)
model = Model(ProxSDP.Optimizer)
@variable(model, X[1:n, 1:n], PSD)
@objective(model, Min, sum(C[i, j] * X[i, j] for j in 1:n, i in 1:n))
@constraint(model, ctr[k in 1:m], sum(A[k][i, j] * X[i, j] for j in 1:n, i in 1:n) == b[k])
# @constraint(model, bla, sum(C[i, j] * X[i, j] for i in 1:n, j in 1:n)<=0.1)
teste = @time optimize!(model)
XX = value.(X)
verbose && randsdp_eval(A,b,C,n,m,XX)
objval = objective_value(model)
stime = MOI.get(model, MOI.SolveTimeSec())
# @show tp = typeof(model.moi_backend.optimizer.model.optimizer)
# @show fieldnames(tp)
rank = -1
try
@show rank = model.moi_backend.optimizer.model.optimizer.sol.final_rank
catch
end
status = 0
if JuMP.termination_status(model) == MOI.OPTIMAL
status = 1
end
return (objval, stime, rank, status, -1.0, -1.0)
end
| ProxSDP | https://github.com/mariohsouto/ProxSDP.jl.git |
|
[
"MIT"
] | 1.8.3 | 0ca8d20076eac7fef4ea8394082bd3ffdd3c4733 | code | 1580 |
function jump_sdplib(solver, path; verbose = false, test = false)
println("running: $(path)")
n, m, F, c = sdplib_data(path)
# Build model
model = Model(ProxSDP.Optimizer)
@variable(model, X[1:n, 1:n], PSD)
# Objective function
@objective(model, Min, sum(F[0][idx...] * X[idx...]
for idx in zip(SparseArrays.findnz(F[0])[1:end-1]...)))
# Linear equality constraints
for k = 1:m
@constraint(model, sum(F[k][idx...] * X[idx...]
for idx in zip(SparseArrays.findnz(F[k])[1:end-1]...)) == c[k])
end
teste = @time optimize!(model)
XX = value.(X)
verbose && sdplib_eval(F,c,n,m,XX)
objval = objective_value(model)
stime = MOI.get(model, MOI.SolveTimeSec())
# @show tp = typeof(model.moi_backend.optimizer.model.optimizer)
# @show fieldnames(tp)
rank = -1
try
@show rank = model.moi_backend.optimizer.model.optimizer.sol.final_rank
catch
end
status = 0
if JuMP.termination_status(model) == MOI.OPTIMAL
status = 1
end
# SDP constraints
max_spd_violation = minimum(eigen(XX).values)
# test violations of linear constraints
max_lin_viol = 0.0
for k = 1:m
val = abs(sum(F[k][idx...] * XX[idx...]
for idx in zip(SparseArrays.findnz(F[k])[1:end-1]...)) - c[k])
if val > 0.0
if val > max_lin_viol
max_lin_viol = val
end
end
end
return (objval, stime, rank, status, max_lin_viol, max_spd_violation)
# return (objval, stime)
end
| ProxSDP | https://github.com/mariohsouto/ProxSDP.jl.git |
|
[
"MIT"
] | 1.8.3 | 0ca8d20076eac7fef4ea8394082bd3ffdd3c4733 | code | 3591 | import Random
function jump_sensorloc(solver, seed, n; verbose = false, test = false)
m, x_true, a, d, d_bar = sensorloc_data(seed, n)
model = Model(ProxSDP.Optimizer)
# Build SDP problem
@variable(model, X[1:n+2, 1:n+2], PSD)
# Constraint with distances from anchors to sensors
for j in 1:n, k in 1:m
# e = zeros(n, 1)
# e[j] = -1.0
# v = vcat(a[k], e)
# V = v * v'
# @constraint(model, sum(V .* X) == d_bar[k, j]^2)
@constraint(model, X[1,1]*a[k][1]*a[k][1] + X[2,2]*a[k][2]*a[k][2]
- 2 * X[1, j+2] * a[k][1]
- 2 * X[2, j+2] * a[k][2]
+ X[j+2, j+2]
== d_bar[k, j]^2)
end
# Constraint with distances from sensors to sensors
count, count_all = 0, 0
rng = Random.MersenneTwister(seed)
has_ctr = zeros(Bool,n,n)
for i in 1:n, j in 1:i - 1
count_all += 1
if Random.rand(rng) > 0.9
count += 1
has_ctr[i,j] = true
# e = zeros(n, 1)
# e[i] = 1.0
# e[j] = -1.0
# v = vcat(zeros(2, 1), e)
# V = v * v'
# @constraint(model, sum(V .* X) == d[i, j]^2)
@constraint(model, X[i+2,i+2] + X[j+2,j+2] - 2*X[i+2,j+2] == d[i, j]^2)
end
end
if verbose
@show count_all, count
end
@constraint(model, X[1, 1] == 1.0)
@constraint(model, X[1, 2] == 0.0)
@constraint(model, X[2, 1] == 0.0)
@constraint(model, X[2, 2] == 1.0)
# Feasibility objective function
@objective(model, Min, 0.0 * X[1, 1] + 0.0 * X[2, 2])
teste = @time optimize!(model)
XX = value.(X)
verbose && sensorloc_eval(n, m, x_true, XX)
objval = objective_value(model)
stime = MOI.get(model, MOI.SolveTimeSec())
# @show tp = typeof(model.moi_backend.optimizer.model.optimizer)
# @show fieldnames(tp)
rank = -1
try
@show rank = model.moi_backend.optimizer.model.optimizer.sol.final_rank
catch
end
status = 0
if JuMP.termination_status(model) == MOI.OPTIMAL
status = 1
end
# SDP constraints
max_spd_violation = minimum(eigen(XX).values)
# test violations of linear constraints
max_lin_viol = 0.0
# Constraint with distances from anchors to sensors
for j in 1:n
for k in 1:m
val = abs(XX[1,1]*a[k][1]*a[k][1] + XX[2,2]*a[k][2]*a[k][2] -
2 * XX[1, j+2] * a[k][1] -
2 * XX[2, j+2] * a[k][2] +
XX[j+2, j+2] -
d_bar[k, j]^2)
if val > 0.0
if val > max_lin_viol
max_lin_viol = val
end
end
end
end
# Constraint with distances from sensors to sensors
for i in 1:n
for j in 1:i - 1
if has_ctr[i,j]
val = abs(XX[i+2,i+2] + XX[j+2,j+2] - 2*XX[i+2,j+2] - d[i, j]^2)
if val > max_lin_viol
max_lin_viol = val
end
end
end
end
max_lin_viol = max(max_lin_viol, abs(XX[1, 1] - 1.0))
max_lin_viol = max(max_lin_viol, abs(XX[1, 2] - 0.0))
max_lin_viol = max(max_lin_viol, abs(XX[2, 1] - 0.0))
max_lin_viol = max(max_lin_viol, abs(XX[2, 2] - 1.0))
return (objval, stime, rank, status, max_lin_viol, max_spd_violation)
# return (objval, stime)
end
| ProxSDP | https://github.com/mariohsouto/ProxSDP.jl.git |
|
[
"MIT"
] | 1.8.3 | 0ca8d20076eac7fef4ea8394082bd3ffdd3c4733 | code | 2940 | function moi_mimo(optimizer, seed, n; verbose = false, test = false, scalar = false)
MOI.empty!(optimizer)
if test
@test MOI.is_empty(optimizer)
end
m = 10n
s, H, y, L = mimo_data(seed, m, n)
nvars = ProxSDP.sympackedlen(n + 1)
X = MOI.add_variables(optimizer, nvars)
if scalar
for i in 1:nvars
MOI.add_constraint(optimizer, MOI.SingleVariable(X[i]), MOI.LessThan(1.0))
MOI.add_constraint(optimizer, MOI.SingleVariable(X[i]), MOI.GreaterThan(-1.0))
end
else
MOI.add_constraint(optimizer,
MOI.VectorAffineFunction(
MOI.VectorAffineTerm.(
collect(1:nvars), MOI.ScalarAffineTerm.(1.0, X)),
-ones(nvars)),
MOI.Nonpositives(nvars))
MOI.add_constraint(optimizer,
MOI.VectorAffineFunction(
MOI.VectorAffineTerm.(
collect(1:nvars), MOI.ScalarAffineTerm.(-1.0, X)),
-ones(nvars)),
MOI.Nonpositives(nvars))
end
Xsq = Matrix{MOI.VariableIndex}(undef, n+1,n+1)
ProxSDP.ivech!(Xsq, X)
Xsq = Matrix(LinearAlgebra.Symmetric(Xsq,:U))
vov = MOI.VectorOfVariables(X)
cX = MOI.add_constraint(optimizer, vov, MOI.PositiveSemidefiniteConeTriangle(n+1))
if scalar
for i in 1:n+1
MOI.add_constraint(optimizer, MOI.SingleVariable(Xsq[i,i]), MOI.EqualTo(1.0))
end
else
MOI.add_constraint(optimizer,
MOI.VectorAffineFunction(
MOI.VectorAffineTerm.(
collect(1:n+1), MOI.ScalarAffineTerm.(1.0, [Xsq[i,i] for i in 1:n+1])),
-ones(n+1)),
MOI.Zeros(n+1))
end
objf_t = vec([MOI.ScalarAffineTerm(L[i,j], Xsq[i,j]) for i in 1:n+1, j in 1:n+1])
MOI.set(optimizer, MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(), MOI.ScalarAffineFunction(objf_t, 0.0))
MOI.set(optimizer, MOI.ObjectiveSense(), MOI.MIN_SENSE)
MOI.optimize!(optimizer)
objval = MOI.get(optimizer, MOI.ObjectiveValue())
stime = -1.0
try
stime = MOI.get(optimizer, MOI.SolveTimeSec())
catch
println("could not query time")
end
Xsq_s = MOI.get.(optimizer, MOI.VariablePrimal(), Xsq)
if test
for i in 1:n+1, j in 1:n+1
@test 1.01 > abs(Xsq_s[i,j]) > 0.99
end
end
verbose && mimo_eval(s, H, y, L, Xsq_s)
rank = -1
status = 0
if MOI.get(optimizer, MOI.TerminationStatus()) == MOI.OPTIMAL
status = 1
end
return (objval, stime, rank, status)
end | ProxSDP | https://github.com/mariohsouto/ProxSDP.jl.git |
|
[
"MIT"
] | 1.8.3 | 0ca8d20076eac7fef4ea8394082bd3ffdd3c4733 | code | 13593 | function simple_lp(bridged)
MOI.empty!(bridged)
@test MOI.is_empty(bridged)
# add 2 scalar variables
X = MOI.add_variables(bridged, 2)
# add linear equality constraints
c1 = MOI.add_constraint(bridged,
MOI.ScalarAffineFunction([
MOI.ScalarAffineTerm(2.0, X[1]),
MOI.ScalarAffineTerm(1.0, X[2])
], 0.0), MOI.EqualTo(4.0))
c2 = MOI.add_constraint(bridged,
MOI.ScalarAffineFunction([
MOI.ScalarAffineTerm(1.0, X[1]),
MOI.ScalarAffineTerm(2.0, X[2])
], 0.0), MOI.EqualTo(4.0))
b1 = MOI.add_constraint(bridged,
MOI.ScalarAffineFunction([
MOI.ScalarAffineTerm(1.0, X[1])
], 0.0), MOI.GreaterThan(0.0))
b2 = MOI.add_constraint(bridged,
MOI.ScalarAffineFunction([
MOI.ScalarAffineTerm(1.0, X[2])
], 0.0), MOI.GreaterThan(0.0))
MOI.set(bridged,
MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(),
MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.([-4.0, -3.0], [X[1], X[2]]), 0.0)
)
MOI.set(bridged, MOI.ObjectiveSense(), MOI.MIN_SENSE)
MOI.optimize!(bridged)
obj = MOI.get(bridged, MOI.ObjectiveValue())
@test obj ≈ -9.33333 atol = 1e-2
Xr = MOI.get(bridged, MOI.VariablePrimal(), X)
@test Xr ≈ [1.3333, 1.3333] atol = 1e-2
end
function simple_lp_2_1d_sdp(bridged)
MOI.empty!(bridged)
@test MOI.is_empty(bridged)
# add 2 scalar variables
X = MOI.add_variables(bridged, 2)
# add linear equality constraints
c1 = MOI.add_constraint(bridged,
MOI.ScalarAffineFunction([
MOI.ScalarAffineTerm(2.0, X[1]),
MOI.ScalarAffineTerm(1.0, X[2])
], 0.0), MOI.EqualTo(4.0))
c2 = MOI.add_constraint(bridged,
MOI.ScalarAffineFunction([
MOI.ScalarAffineTerm(1.0, X[1]),
MOI.ScalarAffineTerm(2.0, X[2])
], 0.0), MOI.EqualTo(4.0))
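# nonnegativity of each variable is imposed below via 1-D PSD cones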
b1 = MOI.add_constraint(bridged,
MOI.VectorOfVariables([X[1]]), MOI.PositiveSemidefiniteConeTriangle(1))
b2 = MOI.add_constraint(bridged,
MOI.VectorOfVariables([X[2]]), MOI.PositiveSemidefiniteConeTriangle(1))
MOI.set(bridged,
MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(),
MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.([-4.0, -3.0], [X[1], X[2]]), 0.0)
)
MOI.set(bridged, MOI.ObjectiveSense(), MOI.MIN_SENSE)
MOI.optimize!(bridged)
obj = MOI.get(bridged, MOI.ObjectiveValue())
@test obj ≈ -9.33333 atol = 1e-2
Xr = MOI.get(bridged, MOI.VariablePrimal(), X)
@test Xr ≈ [1.3333, 1.3333] atol = 1e-2
end
function lp_in_SDP_equality_form(bridged)
MOI.empty!(bridged)
@test MOI.is_empty(bridged)
# add 10 variables - only diagonal is relevant
X = MOI.add_variables(bridged, 10)
# add sdp constraint - only ensuring positiveness of the diagonal
vov = MOI.VectorOfVariables(X)
cX = MOI.add_constraint(bridged, vov, MOI.PositiveSemidefiniteConeTriangle(4))
c1 = MOI.add_constraint(bridged,
MOI.ScalarAffineFunction([
MOI.ScalarAffineTerm(2.0, X[1]),
MOI.ScalarAffineTerm(1.0, X[3]),
MOI.ScalarAffineTerm(1.0, X[6])
], 0.0), MOI.EqualTo(4.0))
c2 = MOI.add_constraint(bridged,
MOI.ScalarAffineFunction([
MOI.ScalarAffineTerm(1.0, X[1]),
MOI.ScalarAffineTerm(2.0, X[3]),
MOI.ScalarAffineTerm(1.0, X[10])
], 0.0), MOI.EqualTo(4.0))
MOI.set(bridged,
MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(),
MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.([-4.0, -3.0], [X[1], X[3]]), 0.0)
)
MOI.set(bridged, MOI.ObjectiveSense(), MOI.MIN_SENSE)
MOI.optimize!(bridged)
obj = MOI.get(bridged, MOI.ObjectiveValue())
@test obj ≈ -9.33333 atol = 1e-2
Xr = MOI.get(bridged, MOI.VariablePrimal(), X)
@test Xr ≈ [1.3333, .0, 1.3333, .0, .0, .0, .0, .0, .0, .0] atol = 1e-2
end
function lp_in_SDP_inequality_form(optimizer)
MOI.empty!(optimizer)
@test MOI.is_empty(optimizer)
# add 3 variables - the diagonal entries X[1] and X[3] are the relevant ones
X = MOI.add_variables(optimizer, 3)
# add sdp constraint - only ensuring positiveness of the diagonal
vov = MOI.VectorOfVariables(X)
cX = MOI.add_constraint(optimizer, vov, MOI.PositiveSemidefiniteConeTriangle(2))
c1 = MOI.add_constraint(optimizer,
MOI.VectorAffineFunction(MOI.VectorAffineTerm.([1,1],[
MOI.ScalarAffineTerm(2.0, X[1]),
MOI.ScalarAffineTerm(1.0, X[3]),
]), [-4.0]), MOI.Nonpositives(1))
c2 = MOI.add_constraint(optimizer,
MOI.VectorAffineFunction(MOI.VectorAffineTerm.([1,1],[
MOI.ScalarAffineTerm(1.0, X[1]),
MOI.ScalarAffineTerm(2.0, X[3]),
]), [-4.0]), MOI.Nonpositives(1))
MOI.set(optimizer,
MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(),
MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.([4.0, 3.0], [X[1], X[3]]), 0.0)
)
MOI.set(optimizer, MOI.ObjectiveSense(), MOI.MAX_SENSE)
MOI.optimize!(optimizer)
obj = MOI.get(optimizer, MOI.ObjectiveValue())
@test obj ≈ 9.33333 atol = 1e-2
Xr = MOI.get(optimizer, MOI.VariablePrimal(), X)
@test Xr ≈ [1.3333, .0, 1.3333] atol = 1e-2
c1_d = MOI.get(optimizer, MOI.ConstraintDual(), c1)
c2_d = MOI.get(optimizer, MOI.ConstraintDual(), c2)
end
function sdp_from_moi(optimizer)
# min  X[1,1] + X[2,2]          max  y
# s.t. X[2,1] = 1               s.t. [0    y/2     [1  0
#                                     y/2  0  ] <=  0  1]
#      X >= 0                        y free
# Optimal solution:
#
# ⎛ 1 1 ⎞
# X = ⎜ ⎟ y = 2
# ⎝ 1 1 ⎠
MOI.empty!(optimizer)
@test MOI.is_empty(optimizer)
X = MOI.add_variables(optimizer, 3)
vov = MOI.VectorOfVariables(X)
cX = MOI.add_constraint(optimizer, vov, MOI.PositiveSemidefiniteConeTriangle(2))
c = MOI.add_constraint(optimizer, MOI.VectorAffineFunction([MOI.VectorAffineTerm(1,MOI.ScalarAffineTerm(1.0, X[2]))], [-1.0]), MOI.Zeros(1))
MOI.set(optimizer, MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(), MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.(1.0, [X[1], X[3]]), 0.0))
MOI.set(optimizer, MOI.ObjectiveSense(), MOI.MIN_SENSE)
MOI.optimize!(optimizer)
@test MOI.get(optimizer, MOI.TerminationStatus()) == MOI.OPTIMAL
@test MOI.get(optimizer, MOI.PrimalStatus()) == MOI.FEASIBLE_POINT
@test MOI.get(optimizer, MOI.DualStatus()) == MOI.FEASIBLE_POINT
@test MOI.get(optimizer, MOI.ObjectiveValue()) ≈ 2 atol=1e-2
Xv = ones(3)
@test MOI.get(optimizer, MOI.VariablePrimal(), X) ≈ Xv atol=1e-2
# @test MOI.get(optimizer, MOI.ConstraintPrimal(), cX) ≈ Xv atol=1e-2
# @test MOI.get(optimizer, MOI.ConstraintDual(), c) ≈ 2 atol=1e-2
# @show MOI.get(optimizer, MOI.ConstraintDual(), c)
end
function double_sdp_from_moi(optimizer)
# solve simultaneously two of these:
# min  X[1,1] + X[2,2]          max  y
# s.t. X[2,1] = 1               s.t. [0    y/2     [1  0
#                                     y/2  0  ] <=  0  1]
#      X >= 0                        y free
# Optimal solution:
#
# ⎛ 1 1 ⎞
# X = ⎜ ⎟ y = 2
# ⎝ 1 1 ⎠
MOI.empty!(optimizer)
@test MOI.is_empty(optimizer)
X = MOI.add_variables(optimizer, 3)
Y = MOI.add_variables(optimizer, 3)
vov = MOI.VectorOfVariables(X)
vov2 = MOI.VectorOfVariables(Y)
cX = MOI.add_constraint(optimizer, vov, MOI.PositiveSemidefiniteConeTriangle(2))
cY = MOI.add_constraint(optimizer, vov2, MOI.PositiveSemidefiniteConeTriangle(2))
c = MOI.add_constraint(optimizer, MOI.VectorAffineFunction([MOI.VectorAffineTerm(1,MOI.ScalarAffineTerm(1.0, X[2]))], [-1.0]), MOI.Zeros(1))
c2 = MOI.add_constraint(optimizer, MOI.VectorAffineFunction([MOI.VectorAffineTerm(1,MOI.ScalarAffineTerm(1.0, Y[2]))], [-1.0]), MOI.Zeros(1))
MOI.set(optimizer, MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(), MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.(1.0, [X[1], X[end], Y[1], Y[end]]), 0.0))
MOI.set(optimizer, MOI.ObjectiveSense(), MOI.MIN_SENSE)
MOI.optimize!(optimizer)
@test MOI.get(optimizer, MOI.TerminationStatus()) == MOI.OPTIMAL
@test MOI.get(optimizer, MOI.PrimalStatus()) == MOI.FEASIBLE_POINT
@test MOI.get(optimizer, MOI.DualStatus()) == MOI.FEASIBLE_POINT
@test MOI.get(optimizer, MOI.ObjectiveValue()) ≈ 2*2 atol=1e-2
Xv = ones(3)
@test MOI.get(optimizer, MOI.VariablePrimal(), X) ≈ Xv atol=1e-2
Yv = ones(3)
@test MOI.get(optimizer, MOI.VariablePrimal(), Y) ≈ Yv atol=1e-2
# @test MOI.get(optimizer, MOI.ConstraintPrimal(), cX) ≈ Xv atol=1e-2
# @test MOI.get(optimizer, MOI.ConstraintDual(), c) ≈ 2 atol=1e-2
# @show MOI.get(optimizer, MOI.ConstraintDual(), c)
end
function double_sdp_with_duplicates(optimizer)
MOI.empty!(optimizer)
x = MOI.add_variable(optimizer)
X = [x, x, x]
vov = MOI.VectorOfVariables(X)
cX = MOI.add_constraint(optimizer, vov, MOI.PositiveSemidefiniteConeTriangle(2))
c = MOI.add_constraint(optimizer, MOI.VectorAffineFunction([MOI.VectorAffineTerm(1,MOI.ScalarAffineTerm(1.0, X[2]))], [-1.0]), MOI.Zeros(1))
MOI.set(optimizer, MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(), MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.(1.0, [X[1], X[3]]), 0.0))
MOI.set(optimizer, MOI.ObjectiveSense(), MOI.MIN_SENSE)
MOI.optimize!(optimizer)
@test MOI.get(optimizer, MOI.TerminationStatus()) == MOI.OPTIMAL
@test MOI.get(optimizer, MOI.PrimalStatus()) == MOI.FEASIBLE_POINT
@test MOI.get(optimizer, MOI.DualStatus()) == MOI.FEASIBLE_POINT
@test MOI.get(optimizer, MOI.ObjectiveValue()) ≈ 2 atol=1e-2
Xv = ones(3)
@test MOI.get(optimizer, MOI.VariablePrimal(), X) ≈ Xv atol=1e-2
end
function sdp_wiki(optimizer)
# https://en.wikipedia.org/wiki/Semidefinite_programming
MOI.empty!(optimizer)
@test MOI.is_empty(optimizer)
X = MOI.add_variables(optimizer, 6)
vov = MOI.VectorOfVariables(X)
cX = MOI.add_constraint(optimizer, vov, MOI.PositiveSemidefiniteConeTriangle(3))
cd1 = MOI.add_constraint(optimizer, MOI.VectorAffineFunction([MOI.VectorAffineTerm(1,MOI.ScalarAffineTerm(1.0, X[1]))], [-1.0]), MOI.Zeros(1))
cd2 = MOI.add_constraint(optimizer, MOI.VectorAffineFunction([MOI.VectorAffineTerm(1,MOI.ScalarAffineTerm(1.0, X[3]))], [-1.0]), MOI.Zeros(1))
cd3 = MOI.add_constraint(optimizer, MOI.VectorAffineFunction([MOI.VectorAffineTerm(1,MOI.ScalarAffineTerm(1.0, X[6]))], [-1.0]), MOI.Zeros(1))
c12_ub = MOI.add_constraint(optimizer, MOI.VectorAffineFunction([MOI.VectorAffineTerm(1,MOI.ScalarAffineTerm(1.0, X[2]))], [0.1]), MOI.Nonpositives(1)) # x <= -0.1 -> x + 0.1 <= 0
c12_lb = MOI.add_constraint(optimizer, MOI.VectorAffineFunction([MOI.VectorAffineTerm(1,MOI.ScalarAffineTerm(-1.0, X[2]))], [-0.2]), MOI.Nonpositives(1)) # x >= -0.2 -> -x + -0.2 <= 0
c23_ub = MOI.add_constraint(optimizer, MOI.VectorAffineFunction([MOI.VectorAffineTerm(1,MOI.ScalarAffineTerm(1.0, X[5]))], [-0.5]), MOI.Nonpositives(1)) # x <= 0.5 -> x - 0.5 <= 0
c23_lb = MOI.add_constraint(optimizer, MOI.VectorAffineFunction([MOI.VectorAffineTerm(1,MOI.ScalarAffineTerm(-1.0, X[5]))], [0.4]), MOI.Nonpositives(1)) # x >= 0.4 -> -x + 0.4 <= 0
MOI.set(optimizer, MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(), MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.(1.0, [X[4]]), 0.0))
MOI.set(optimizer, MOI.ObjectiveSense(), MOI.MIN_SENSE)
MOI.optimize!(optimizer)
obj = MOI.get(optimizer, MOI.ObjectiveValue())
@test obj ≈ -0.978 atol=1e-2
MOI.set(optimizer, MOI.ObjectiveSense(), MOI.MAX_SENSE)
MOI.optimize!(optimizer)
obj = MOI.get(optimizer, MOI.ObjectiveValue())
@test obj ≈ 0.872 atol=1e-2
end
simple_lp(optimizer_bridged)
simple_lp_2_1d_sdp(optimizer_bridged)
lp_in_SDP_equality_form(optimizer_bridged)
lp_in_SDP_inequality_form(optimizer_bridged)
sdp_from_moi(optimizer_bridged)
double_sdp_from_moi(optimizer_bridged)
double_sdp_with_duplicates(optimizer_bridged)
sdp_wiki(optimizer_bridged)
# print test
const optimizer_print = MOI.instantiate(
()->ProxSDP.Optimizer(
log_freq = 10, log_verbose = true, timer_verbose = true, extended_log = true, extended_log2 = true,
tol_gap = 1e-4, tol_feasibility = 1e-4),
with_bridge_type = Float64)
sdp_wiki(optimizer_print)
# eig solvers
default_solver = MOI.get(optimizer_bridged, MOI.RawOptimizerAttribute("eigsolver"))
min_size_krylov_eigs = MOI.get(optimizer_bridged, MOI.RawOptimizerAttribute("min_size_krylov_eigs"))
MOI.set(optimizer_bridged, MOI.RawOptimizerAttribute("eigsolver"), 1)
MOI.set(optimizer_bridged, MOI.RawOptimizerAttribute("min_size_krylov_eigs"), 1)
sdp_wiki(optimizer_bridged)
MOI.set(optimizer_bridged, MOI.RawOptimizerAttribute("eigsolver"), 2)
sdp_wiki(optimizer_bridged)
MOI.set(optimizer_bridged, MOI.RawOptimizerAttribute("min_size_krylov_eigs"), min_size_krylov_eigs)
MOI.set(optimizer_bridged, MOI.RawOptimizerAttribute("eigsolver"), default_solver)
MOI.set(optimizer_bridged, MOI.RawOptimizerAttribute("full_eig_decomp"), true)
sdp_wiki(optimizer_bridged)
MOI.set(optimizer_bridged, MOI.RawOptimizerAttribute("full_eig_decomp"), false)
| ProxSDP | https://github.com/mariohsouto/ProxSDP.jl.git |
|
[
"MIT"
] | 1.8.3 | 0ca8d20076eac7fef4ea8394082bd3ffdd3c4733 | code | 3083 | function moi_randsdp(optimizer, seed, n, m; verbose = false, test = false, atol = 1e-2, scalar = false, varbounds = true)
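# Build a random SDP instance from `randsdp_data` (m linear equality
# constraints on an n-by-n PSD matrix, optionally with variable bounds),
# solve it with the given MOI optimizer, and return (objval, stime, rank, status).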
MOI.empty!(optimizer)
if test
@test MOI.is_empty(optimizer)
end
A, b, C = randsdp_data(seed, m, n)
nvars = ProxSDP.sympackedlen(n)
X = MOI.add_variables(optimizer, nvars)
Xsq = Matrix{MOI.VariableIndex}(undef,n,n)
ProxSDP.ivech!(Xsq, X)
Xsq = Matrix(LinearAlgebra.Symmetric(Xsq,:U))
for k in 1:m
ctr_k = vec([MOI.ScalarAffineTerm(A[k][i,j], Xsq[i,j]) for j in 1:n, i in 1:n])
if scalar
MOI.add_constraint(optimizer, MOI.ScalarAffineFunction(ctr_k, 0.0), MOI.EqualTo(b[k]))
else
MOI.add_constraint(optimizer,
MOI.VectorAffineFunction(MOI.VectorAffineTerm.([1], ctr_k), [-b[k]]), MOI.Zeros(1))
end
end
if varbounds
for k in 1:n
ctr_k = [MOI.ScalarAffineTerm(1.0, X[k])]
ctr_k_n = [MOI.ScalarAffineTerm(-1.0, X[k])]
if scalar
MOI.add_constraint(optimizer, MOI.ScalarAffineFunction(ctr_k, 0.0), MOI.GreaterThan(-10.0))
MOI.add_constraint(optimizer, MOI.ScalarAffineFunction(ctr_k, 0.0), MOI.LessThan(10.0))
else
MOI.add_constraint(optimizer,
MOI.VectorAffineFunction(MOI.VectorAffineTerm.([1], ctr_k_n), [-10.0]), MOI.Nonpositives(1))
MOI.add_constraint(optimizer,
MOI.VectorAffineFunction(MOI.VectorAffineTerm.([1], ctr_k), [-10.0]), MOI.Nonpositives(1))
end
end
end
vov = MOI.VectorOfVariables(X)
cX = MOI.add_constraint(optimizer, vov, MOI.PositiveSemidefiniteConeTriangle(n))
objf_t = vec([MOI.ScalarAffineTerm(C[i,j], Xsq[i,j]) for j in 1:n, i in 1:n])
MOI.set(optimizer, MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(), MOI.ScalarAffineFunction(objf_t, 0.0))
MOI.set(optimizer, MOI.ObjectiveSense(), MOI.MIN_SENSE)
MOI.optimize!(optimizer)
status = 0
if MOI.get(optimizer, MOI.TerminationStatus()) == MOI.OPTIMAL
status = 1
end
objval = Inf
stime = -1.0
try
stime = MOI.get(optimizer, MOI.SolveTimeSec())
catch
println("could not query time")
end
if status == 1
objval = MOI.get(optimizer, MOI.ObjectiveValue())
Xsq_s = MOI.get.(optimizer, MOI.VariablePrimal(), Xsq)
minus_rank = length([eig for eig in LinearAlgebra.eigen(Xsq_s).values if eig < -1e-4])
if test
@test minus_rank == 0
end
# rank = length([eig for eig in LinearAlgebra.eigen(XX).values if eig > 1e-10])
# @show rank
if test
@test tr(C * Xsq_s) - objval < atol
for i in 1:m
@test abs(tr(A[i] * Xsq_s)-b[i])/(1+abs(b[i])) < atol
end
end
verbose && randsdp_eval(A,b,C,n,m,Xsq_s)
end
rank = -1
return (objval, stime, rank, status)
end | ProxSDP | https://github.com/mariohsouto/ProxSDP.jl.git |
|
[
"MIT"
] | 1.8.3 | 0ca8d20076eac7fef4ea8394082bd3ffdd3c4733 | code | 2227 | function moi_sdplib(optimizer, path; verbose = false, test = false, scalar = false)
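# Read an SDPLIB instance from `path` via `sdplib_data`, solve it with the
# given MOI optimizer, and return (objval, stime, rank, status).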
if verbose
println("running: $(path)")
end
MOI.empty!(optimizer)
if test
@test MOI.is_empty(optimizer)
end
n, m, F, c = sdplib_data(path)
nvars = ProxSDP.sympackedlen(n)
X = MOI.add_variables(optimizer, nvars)
vov = MOI.VectorOfVariables(X)
cX = MOI.add_constraint(optimizer, vov, MOI.PositiveSemidefiniteConeTriangle(n))
Xsq = Matrix{MOI.VariableIndex}(undef, n,n)
ProxSDP.ivech!(Xsq, X)
Xsq = Matrix(LinearAlgebra.Symmetric(Xsq,:U))
# Objective function
objf_t = [MOI.ScalarAffineTerm(F[0][idx...], Xsq[idx...])
for idx in zip(SparseArrays.findnz(F[0])[1:end-1]...)]
MOI.set(optimizer, MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(), MOI.ScalarAffineFunction(objf_t, 0.0))
MOI.set(optimizer, MOI.ObjectiveSense(), MOI.MIN_SENSE)
# Linear equality constraints
for k in 1:m
ctr_k = [MOI.ScalarAffineTerm(F[k][idx...], Xsq[idx...])
for idx in zip(SparseArrays.findnz(F[k])[1:end-1]...)]
if scalar
MOI.add_constraint(optimizer, MOI.ScalarAffineFunction(ctr_k, 0.0), MOI.EqualTo(c[k]))
else
MOI.add_constraint(optimizer,
MOI.VectorAffineFunction(MOI.VectorAffineTerm.([1], ctr_k), [-c[k]]), MOI.Zeros(1))
end
end
MOI.optimize!(optimizer)
objval = MOI.get(optimizer, MOI.ObjectiveValue())
stime = -1.0
try
stime = MOI.get(optimizer, MOI.SolveTimeSec())
catch
println("could not query time")
end
Xsq_s = MOI.get.(optimizer, MOI.VariablePrimal(), Xsq)
minus_rank = length([eig for eig in LinearAlgebra.eigen(Xsq_s).values if eig < -1e-4])
if test
@test minus_rank == 0
end
# @test tr(F[0] * Xsq_s) - obj < 1e-1
# for i in 1:m
# @test abs(tr(F[i] * Xsq_s)-c[i]) < 1e-1
# end
verbose && sdplib_eval(F,c,n,m,Xsq_s)
rank = -1
status = 0
if MOI.get(optimizer, MOI.TerminationStatus()) == MOI.OPTIMAL
status = 1
end
return (objval, stime, rank, status)
end | ProxSDP | https://github.com/mariohsouto/ProxSDP.jl.git |
|
[
"MIT"
] | 1.8.3 | 0ca8d20076eac7fef4ea8394082bd3ffdd3c4733 | code | 5110 |
function moi_sensorloc(optimizer, seed, n; verbose = false, test = false, scalar = false)
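# Build the sensor-network localization SDP from `sensorloc_data` (anchors `a`,
# anchor-sensor distances `d_bar`, sensor-sensor distances `d`), solve it with
# the given MOI optimizer, and return (objval, stime, rank, status).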
rng = Random.MersenneTwister(seed)
MOI.empty!(optimizer)
if test
@test MOI.is_empty(optimizer)
end
# Generate randomized problem data
m, x_true, a, d, d_bar = sensorloc_data(seed, n)
# Decision variable
nvars = ProxSDP.sympackedlen(n + 2)
X = MOI.add_variables(optimizer, nvars)
Xsq = Matrix{MOI.VariableIndex}(undef, n + 2, n + 2)
ProxSDP.ivech!(Xsq, X)
Xsq = Matrix(LinearAlgebra.Symmetric(Xsq, :U))
vov = MOI.VectorOfVariables(X)
cX = MOI.add_constraint(optimizer, vov, MOI.PositiveSemidefiniteConeTriangle(n + 2))
# Constraint with distances from anchors to sensors
for j in 1:n
for k in 1:m
if scalar
MOI.add_constraint(optimizer,
MOI.ScalarAffineFunction([
MOI.ScalarAffineTerm(a[k][1]*a[k][1], Xsq[1,1]),
MOI.ScalarAffineTerm(a[k][2]*a[k][2], Xsq[2,2]),
MOI.ScalarAffineTerm(-2 * a[k][1], Xsq[1, j+2]),
MOI.ScalarAffineTerm(-2 * a[k][2], Xsq[2, j+2]),
MOI.ScalarAffineTerm(1.0, Xsq[j+2, j+2]),
], 0.0), MOI.EqualTo(d_bar[k, j]^2))
else
MOI.add_constraint(optimizer,
MOI.VectorAffineFunction(
MOI.VectorAffineTerm.([1], [
MOI.ScalarAffineTerm(a[k][1]*a[k][1], Xsq[1,1]),
MOI.ScalarAffineTerm(a[k][2]*a[k][2], Xsq[2,2]),
MOI.ScalarAffineTerm(-2 * a[k][1], Xsq[1, j+2]),
MOI.ScalarAffineTerm(-2 * a[k][2], Xsq[2, j+2]),
MOI.ScalarAffineTerm(1.0, Xsq[j+2, j+2]),
]), -[d_bar[k, j]^2]), MOI.Zeros(1))
end
end
end
# Constraint with distances from sensors to sensors
count, count_all = 0, 0
for i in 1:n
for j in 1:i - 1
count_all += 1
if Random.rand(rng) > 0.9
count += 1
if scalar
MOI.add_constraint(optimizer,
MOI.ScalarAffineFunction([
MOI.ScalarAffineTerm(1.0, Xsq[i+2,i+2] ),
MOI.ScalarAffineTerm(1.0, Xsq[j+2,j+2] ),
MOI.ScalarAffineTerm(-2.0, Xsq[i+2,j+2]),
], 0.0), MOI.EqualTo(d[i, j]^2))
else
MOI.add_constraint(optimizer,
MOI.VectorAffineFunction(
MOI.VectorAffineTerm.([1],[
MOI.ScalarAffineTerm(1.0, Xsq[i+2,i+2] ),
MOI.ScalarAffineTerm(1.0, Xsq[j+2,j+2] ),
MOI.ScalarAffineTerm(-2.0, Xsq[i+2,j+2]),
]), -[d[i, j]^2]), MOI.Zeros(1))
end
end
end
end
if verbose
@show count_all, count
end
if scalar
MOI.add_constraint(optimizer, MOI.SingleVariable(Xsq[1, 1]), MOI.EqualTo(1.0))
MOI.add_constraint(optimizer, MOI.SingleVariable(Xsq[1, 2]), MOI.EqualTo(0.0))
MOI.add_constraint(optimizer, MOI.SingleVariable(Xsq[2, 1]), MOI.EqualTo(0.0))
MOI.add_constraint(optimizer, MOI.SingleVariable(Xsq[2, 2]), MOI.EqualTo(1.0))
else
MOI.add_constraint(optimizer, MOI.VectorAffineFunction(
MOI.VectorAffineTerm.([1],[MOI.ScalarAffineTerm(1.0, Xsq[1, 1])]), [-1.0]), MOI.Zeros(1))
MOI.add_constraint(optimizer, MOI.VectorAffineFunction(
MOI.VectorAffineTerm.([1],[MOI.ScalarAffineTerm(1.0, Xsq[1, 2])]), [ 0.0]), MOI.Zeros(1))
MOI.add_constraint(optimizer, MOI.VectorAffineFunction(
MOI.VectorAffineTerm.([1],[MOI.ScalarAffineTerm(1.0, Xsq[2, 1])]), [ 0.0]), MOI.Zeros(1))
MOI.add_constraint(optimizer, MOI.VectorAffineFunction(
MOI.VectorAffineTerm.([1],[MOI.ScalarAffineTerm(1.0, Xsq[2, 2])]), [-1.0]), MOI.Zeros(1))
end
objf_t = [MOI.ScalarAffineTerm(0.0, Xsq[1, 1])]
if false
objf_t = [MOI.ScalarAffineTerm(1.0, Xsq[i,i]) for i in 1:n + 2]
end
MOI.set(optimizer, MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(), MOI.ScalarAffineFunction(objf_t, 0.0))
MOI.set(optimizer, MOI.ObjectiveSense(), MOI.MIN_SENSE)
MOI.optimize!(optimizer)
objval = MOI.get(optimizer, MOI.ObjectiveValue())
stime = -1.0
try
stime = MOI.get(optimizer, MOI.SolveTimeSec())
catch
println("could not query time")
end
Xsq_s = MOI.get.(optimizer, MOI.VariablePrimal(), Xsq)
verbose && sensorloc_eval(n, m, x_true, Xsq_s)
rank = -1
status = 0
if MOI.get(optimizer, MOI.TerminationStatus()) == MOI.OPTIMAL
status = 1
end
return (objval, stime, rank, status)
end | ProxSDP | https://github.com/mariohsouto/ProxSDP.jl.git |
|
[
"MIT"
] | 1.8.3 | 0ca8d20076eac7fef4ea8394082bd3ffdd3c4733 | code | 5305 | push!(Base.LOAD_PATH, joinpath(dirname(@__FILE__), "..", ".."))
using Test
import ProxSDP
import MathOptInterface
import LinearAlgebra
import Random
import SparseArrays
import DelimitedFiles
const MOI = MathOptInterface
const MOIB = MOI.Bridges
const MOIU = MOI.Utilities
const optimizer_bridged = MOI.instantiate(
()->ProxSDP.Optimizer(
tol_gap = 1e-6, tol_feasibility= 1e-6,
# max_iter = 100_000,
time_limit = 3., #seconds FAST
warn_on_limit = true,
# log_verbose = true, log_freq = 100000
),
with_bridge_type = Float64)
@testset "SolverName" begin
@test MOI.get(optimizer_bridged, MOI.SolverName()) == "ProxSDP"
end
@testset "SolverVersion" begin
ver = readlines(joinpath(@__DIR__, "..", "Project.toml"))[4][12:16]
@test MOI.get(optimizer_bridged, MOI.SolverVersion()) == ver
end
function test_runtests()
config = MOI.Test.Config(
atol = 1e-4,
rtol = 1e-3,
exclude = Any[
MOI.ConstraintBasisStatus,
MOI.VariableBasisStatus,
MOI.ConstraintName,
MOI.VariableName,
MOI.ObjectiveBound,
MOI.ScalarFunctionConstantNotZero, # ignored by UniversalFallback
],
)
opt = ProxSDP.Optimizer(
tol_gap = 1e-6, tol_feasibility= 1e-6,
# max_iter = 100_000,
time_limit = 1.0, #seconds FAST
warn_on_limit = true,
# log_verbose = true, log_freq = 100000
)
model = MOI.instantiate(()->opt, with_bridge_type = Float64)
MOI.set(model, MOI.Silent(), true)
MOI.Test.runtests(
model,
config,
exclude = String[
# Not ProxSDP fail.
# see: https://github.com/jump-dev/MathOptInterface.jl/issues/1665
"test_model_UpperBoundAlreadySet",
"test_model_LowerBoundAlreadySet",
# poorly scaled problem (solved below with higher accuracy)
"test_linear_add_constraints",
# time limit hit
"test_linear_INFEASIBLE",
"test_solve_DualStatus_INFEASIBILITY_CERTIFICATE_VariableIndex_LessThan_max",
],
)
MOI.set(model, MOI.RawOptimizerAttribute("time_limit"), 5.0)
MOI.empty!(model)
MOI.Test.test_solve_DualStatus_INFEASIBILITY_CERTIFICATE_VariableIndex_LessThan_max(
model,
config,
)
MOI.empty!(model)
MOI.Test.test_linear_INFEASIBLE(
model,
config,
)
MOI.set(model, MOI.RawOptimizerAttribute("tol_primal"), 1e-7)
MOI.set(model, MOI.RawOptimizerAttribute("tol_dual"), 1e-7)
MOI.set(model, MOI.RawOptimizerAttribute("tol_gap"), 1e-7)
MOI.set(model, MOI.RawOptimizerAttribute("tol_feasibility"), 1e-7)
MOI.set(model, MOI.Silent(), true)
MOI.empty!(model)
MOI.Test.test_linear_add_constraints(
model,
config,
)
@test MOI.get(model, ProxSDP.PDHGIterations()) >= 0
return
end
@testset "MOI Unit" begin
test_runtests()
end
@testset "ProxSDP MOI Units tests" begin
include("moi_proxsdp_unit.jl")
end
@testset "MIMO Sizes" begin
include("base_mimo.jl")
include("moi_mimo.jl")
for i in 2:5
@testset "MIMO n = $(i)" begin
moi_mimo(optimizer_bridged, 123, i, test = true)
end
end
end
# hitting time limit
# probably infeasible/unbounded
# @testset "RANDSDP Sizes" begin
# include("base_randsdp.jl")
# include("moi_randsdp.jl")
# for n in 10:11, m in 10:11
# @testset "RANDSDP n=$n, m=$m" begin
# moi_randsdp(optimizer, 123, n, m, test = true, atol = 1e-1)
# end
# end
# end
# These problems are too large for Travis
opt_low_acc = ProxSDP.Optimizer(
tol_gap = 1e-3, tol_feasibility = 1e-3,
# log_verbose = true, log_freq = 1000
)
cache = MOI.default_cache(opt_low_acc, Float64)
optimizer_low_acc = MOI.Utilities.CachingOptimizer(cache, opt_low_acc)
@testset "SDPLIB Sizes" begin
datapath = joinpath(dirname(@__FILE__), "data")
include("base_sdplib.jl")
include("moi_sdplib.jl")
@testset "EQPART" begin
# badly conditioned
# moi_sdplib(optimizer_low_acc, joinpath(datapath, "gpp124-1.dat-s"), test = true)
moi_sdplib(optimizer_low_acc, joinpath(datapath, "gpp124-2.dat-s"), test = true)
# moi_sdplib(optimizer, joinpath(datapath, "gpp124-3.dat-s"), test = true)
# moi_sdplib(optimizer, joinpath(datapath, "gpp124-4.dat-s"), test = true)
end
@testset "MAX CUT" begin
moi_sdplib(optimizer_low_acc, joinpath(datapath, "mcp124-1.dat-s"), test = true)
# moi_sdplib(optimizer, joinpath(datapath, "mcp124-2.dat-s"), test = true)
# moi_sdplib(optimizer, joinpath(datapath, "mcp124-3.dat-s"), test = true)
# moi_sdplib(optimizer, joinpath(datapath, "mcp124-4.dat-s"), test = true)
end
end
@testset "Sensor Localization" begin
include("base_sensorloc.jl")
include("moi_sensorloc.jl")
for n in 5:5:10
moi_sensorloc(optimizer_bridged, 0, n, test = true)
end
end
@testset "Unsupported argument" begin
@test_throws ErrorException optimizer_unsupportedarg = ProxSDP.Optimizer(unsupportedarg = 10)
# @test_throws ErrorException MOI.optimize!(optimizer_unsupportedarg)
end
include("test_terminationstatus.jl")
| ProxSDP | https://github.com/mariohsouto/ProxSDP.jl.git |
|
[
"MIT"
] | 1.8.3 | 0ca8d20076eac7fef4ea8394082bd3ffdd3c4733 | code | 1958 | import StatsBase
function quad_knapsack(solver, seed)
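# SDP relaxation of a random k-item quadratic knapsack instance; the capacity
# constraint is left commented out, so only the k-item cardinality constraint
# is active.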
rng = Random.MersenneTwister(seed)
if Base.libblas_name == "libmkl_rt"
model = Model()
else
model = Model(solver=solver)
end
# Instance size
n = 20
# k-item capacity
# k = Int(n / 10)
k = 1
# Frequency of nonzero weights
delta = 0.5
# Build weights and capacity
a = zeros(n)
for i in 1:n
a[i] = Random.rand(rng, 1:50)
end
a = ones(n) # overrides the random weights drawn above with unit weights
b = Random.rand(rng, 100:sum(a)+100)
# Profits
C = zeros((n, n))
for i in 1:n
for j in 1:n
if StatsBase.sample(rng, [1, 0], StatsBase.Weights([delta, 1.0 - delta])) == 1
c_ = - Random.rand(rng, 1:100)
C[i, j] = c_
C[j, i] = c_
end
end
end
# Decision variable
if Base.libblas_name == "libmkl_rt"
@variable(model, X[1:n+1, 1:n+1], PSD)
else
@variable(model, X[1:n+1, 1:n+1], SDP)
end
@objective(model, Min, sum(C[i, j] * X[i+1, j+1] for i in 1:n, j in 1:n))
# Capacity constraint
# @constraint(model, cap, sum(a[i] * X[i+1, i+1] for i in 1:n) <= b)
# k-item constraint
@constraint(model, k_knap, sum(X[i+1, i+1] for i in 1:n) == k)
# @constraint(model, bla, X[1, 1] == 1)
# @constraint(model, lb[i in 1:n+1, j in 1:n+1], X[i, j] >= 0.0)
# @constraint(model, ub[i in 1:n+1, j in 1:n+1], X[i, j] <= 1.0)
if Base.libblas_name == "libmkl_rt"
JuMP.attach(model, solver)
end
@time teste = JuMP.solve(model)
if Base.libblas_name == "libmkl_rt"
XX = getvalue2.(X)
else
XX = getvalue.(X)
end
rank = length([eig for eig in LinearAlgebra.eigen(XX).values if eig > 1e-10])
@show rank
@show diag(XX)
end
getvalue2(var::JuMP.Variable) = (m=var.m;m.solverinstance.primal[m.solverinstance.varmap[m.variabletosolvervariable[var.instanceindex]]]) | ProxSDP | https://github.com/mariohsouto/ProxSDP.jl.git |
|
[
"MIT"
] | 1.8.3 | 0ca8d20076eac7fef4ea8394082bd3ffdd3c4733 | code | 1181 |
#=
Load required libraries
=#
using Random
using JuMP
using LinearAlgebra
#=
select problem types to be tested
=#
sets_to_test = Symbol[]
push!(sets_to_test, :RANDSDP)
push!(sets_to_test, :SENSORLOC)
#=
select solvers to be tested
=#
solvers = Tuple{String, Function}[]
#=
ProxSDP with default parameters
=#
using ProxSDP
push!(solvers, ("ProxSDP", () -> ProxSDP.Optimizer(
log_verbose=true,
timer_verbose=true,
time_limit = 5 * 60.0,
log_freq = 1_000,
)))
#=
Selection of problem instances
=#
RANDSDP_TEST_SET = 1:1
SENSORLOC_TEST_SET = [
50,
]
#=
Load problem testing functions
=#
include("base_randsdp.jl")
include("jump_randsdp.jl")
include("base_sensorloc.jl")
include("jump_sensorloc.jl")
_randsdp = jump_randsdp
_sensorloc = jump_sensorloc
#=
Run benchmarks
=#
for optimizer in solvers
if :RANDSDP in sets_to_test
_randsdp(optimizer[2], 0, 10, 10)
for i in RANDSDP_TEST_SET
sol = _randsdp(optimizer[2], i, 10, 10)
end
end
if :SENSORLOC in sets_to_test
for n in SENSORLOC_TEST_SET
sol = _sensorloc(optimizer[2], 0, n)
end
end
end
| ProxSDP | https://github.com/mariohsouto/ProxSDP.jl.git |
|
[
"MIT"
] | 1.8.3 | 0ca8d20076eac7fef4ea8394082bd3ffdd3c4733 | code | 4840 | #=
Set path data
=#
path = joinpath(dirname(@__FILE__), "..", "..")
push!(Base.LOAD_PATH, path)
datapath = joinpath(dirname(@__FILE__), "data")
#=
Load required libraries
=#
using Test
using Dates
using Random
using DelimitedFiles
using SparseArrays
using JuMP
using LinearAlgebra
#=
select problem types to be tested
=#
sets_to_test = Symbol[]
push!(sets_to_test, :RANDSDP)
push!(sets_to_test, :SENSORLOC)
push!(sets_to_test, :SDPLIB)
push!(sets_to_test, :MIMO)
#=
select solvers to be tested
=#
solvers = Tuple{String, Function}[]
#=
ProxSDP with default parameters
=#
using ProxSDP
push!(solvers, ("ProxSDP", () -> ProxSDP.Optimizer(
log_verbose=true,
timer_verbose=true,
time_limit = 5 * 60.0,
log_freq = 1_000,
)))
#=
ProxSDP with default parameters except for FULL RANK decomposition
=#
# using ProxSDP
# push!(solvers, ("ProxSDPfull", () -> ProxSDP.Optimizer(
# log_verbose=true,
# time_limit = 900.0,
# log_freq = 1_000,
# full_eig_decomp = true,
# )))
#=
First order solvers
=#
# using SCS
# push!(solvers, ("SCS", () -> SCS.Optimizer(eps = 1e-4))) # eps = 1e-4
# using COSMO
# push!(solvers, ("COSMO", () -> COSMO.Optimizer(time_limit = 900.0, max_iter = 100_000, eps_abs = 1e-4))) # eps = 1e-4
# add SDPNAL+
# using SDPNAL
# push!(solvers, ("SDPNAL", () -> SDPNAL.Optimizer())) # eps = 1e-4
#=
Interior point solvers
=#
# using MosekTools
# push!(solvers, ("MOSEK", () -> Mosek.Optimizer(MSK_DPAR_OPTIMIZER_MAX_TIME = 900.0)))
# using CSDP
# push!(solvers, ("CSDP", () -> CSDP.Optimizer(objtol=1e-4, maxiter=100000)))
# using SDPA
# push!(solvers, ("SDPA", () -> SDPA.Optimizer()))
#=
Functions to write results
=#
NOW = replace("$(now())",":"=>"_")
FILE = open(joinpath(dirname(@__FILE__),"proxsdp_bench_$(NOW).log"),"w")
println(FILE, "class, prob_ref, time, obj, rank, lin_feas, sdp_feas")
function println2(FILE, solver::String, class::String, ref::String, sol)
println(FILE, "$solver, $class, $ref, $(sol[2]), $(sol[1]), $(sol[3]), $(sol[4]), $(sol[5]), $(sol[6])")
flush(FILE)
end
#=
Selection of problem instances
=#
RANDSDP_TEST_SET = 1:1
SENSORLOC_TEST_SET = [
100,
200,
300,
400,
]
MIMO_TEST_SET = [
100,
500,
1000,
# 1500,
# 2000,
]
GPP_TEST_SET = [
"gpp124-1.dat-s",
"gpp124-1.dat-s",
"gpp124-2.dat-s",
"gpp124-3.dat-s",
"gpp124-4.dat-s",
"gpp250-1.dat-s",
"gpp250-2.dat-s",
"gpp250-3.dat-s",
"gpp250-4.dat-s",
"gpp500-1.dat-s",
"gpp500-2.dat-s",
"gpp500-3.dat-s",
"gpp500-4.dat-s",
# "equalG11.dat-s",
# "equalG51.dat-s",
]
MAXCUT_TEST_SET = [
"mcp124-1.dat-s",
"mcp124-1.dat-s",
"mcp124-2.dat-s",
"mcp124-3.dat-s",
"mcp124-4.dat-s",
"mcp250-1.dat-s",
"mcp250-2.dat-s",
"mcp250-3.dat-s",
"mcp250-4.dat-s",
"mcp500-1.dat-s",
"mcp500-2.dat-s",
"mcp500-3.dat-s",
"mcp500-4.dat-s",
# "maxG11.dat-s" ,
# "maxG51.dat-s" ,
# "maxG32.dat-s" ,
# "maxG55.dat-s" ,
# "maxG60.dat-s" ,
]
#=
Load problem testing functions
=#
include("base_randsdp.jl")
include("jump_randsdp.jl")
include("base_mimo.jl")
include("jump_mimo.jl")
include("base_sensorloc.jl")
include("jump_sensorloc.jl")
include("base_sdplib.jl")
include("jump_sdplib.jl")
_randsdp = jump_randsdp
_mimo = jump_mimo
_sensorloc = jump_sensorloc
_sdplib = jump_sdplib
#=
Run benchmarks
=#
for optimizer in solvers
if :RANDSDP in sets_to_test
println("RANDSDP")
_randsdp(optimizer[2], 0, 5, 5)
for i in RANDSDP_TEST_SET
@show i
sol = _randsdp(optimizer[2], i, 5, 5)
println2(FILE, optimizer[1], "RANDSDP", "$i", sol)
end
end
if :SENSORLOC in sets_to_test
println("SENSORLOC")
for n in SENSORLOC_TEST_SET
@show n
sol = _sensorloc(optimizer[2], 0, n)
println2(FILE, optimizer[1], "SENSORLOC", "$n", sol)
end
end
if :SDPLIB in sets_to_test
println("gpp")
for name in GPP_TEST_SET
println(name)
sol = _sdplib(optimizer[2], joinpath(datapath, name))
println2(FILE, optimizer[1], "SDPLIB_gp", name, sol)
end
println("max_cut")
for name in MAXCUT_TEST_SET
println(name)
sol = _sdplib(optimizer[2], joinpath(datapath, name))
println2(FILE, optimizer[1], "SDPLIB_mc", name, sol)
end
end
if :MIMO in sets_to_test
println("MIMO")
_mimo(optimizer[2], 0, 100)
for n in MIMO_TEST_SET
@show n
sol = _mimo(optimizer[2], 0, n)
println2(FILE, optimizer[1], "MIMO", "$n", sol)
end
end
end
#=
Finish benchmark file
=#
close(FILE) | ProxSDP | https://github.com/mariohsouto/ProxSDP.jl.git |
|
[
"MIT"
] | 1.8.3 | 0ca8d20076eac7fef4ea8394082bd3ffdd3c4733 | code | 162 | path = joinpath(dirname(@__FILE__), "..", "..")
push!(Base.LOAD_PATH, path)
datapath = joinpath(dirname(@__FILE__), "data")
import ProxSDP
include("moitest.jl") | ProxSDP | https://github.com/mariohsouto/ProxSDP.jl.git |
|
[
"MIT"
] | 1.8.3 | 0ca8d20076eac7fef4ea8394082bd3ffdd3c4733 | code | 2510 | function build_simple_lp!(optim)
MOI.empty!(optim)
@test MOI.is_empty(optim)
# add 2 scalar variables
X = MOI.add_variables(optim, 2)
# add linear equality constraints
c1 = MOI.add_constraint(optim,
MOI.ScalarAffineFunction([
MOI.ScalarAffineTerm(2.0, X[1]),
MOI.ScalarAffineTerm(1.0, X[2])
], 0.0), MOI.EqualTo(4.0))
c2 = MOI.add_constraint(optim,
MOI.ScalarAffineFunction([
MOI.ScalarAffineTerm(1.0, X[1]),
MOI.ScalarAffineTerm(2.0, X[2])
], 0.0), MOI.EqualTo(4.0))
b1 = MOI.add_constraint(optim,
MOI.ScalarAffineFunction([
MOI.ScalarAffineTerm(1.0, X[1])
], 0.0), MOI.GreaterThan(0.0))
b2 = MOI.add_constraint(optim,
MOI.ScalarAffineFunction([
MOI.ScalarAffineTerm(1.0, X[2])
], 0.0), MOI.GreaterThan(0.0))
MOI.set(optim,
MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(),
MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.([-4.0, -3.0], [X[1], X[2]]), 0.0)
)
MOI.set(optim, MOI.ObjectiveSense(), MOI.MIN_SENSE)
end
const optimizer_maxiter = MOI.instantiate(
()->ProxSDP.Optimizer(max_iter = 1, log_verbose = false),
with_bridge_type = Float64)
const optimizer_timelimit = MOI.instantiate(
()->ProxSDP.Optimizer(
tol_gap = 1e-16, tol_feasibility = 1e-16,
tol_primal = 1e-16, tol_dual = 1e-16,
tol_feasibility_dual = 1e-16, tol_psd = 1e-16,
time_limit = 0.0, check_dual_feas = false,
check_dual_feas_freq = 1),
with_bridge_type = Float64)
@testset "MOI status" begin
@testset "MOI.OPTIMAL" begin
build_simple_lp!(optimizer_bridged)
MOI.optimize!(optimizer_bridged)
@test MOI.get(optimizer_bridged, MOI.TerminationStatus()) == MOI.OPTIMAL
MOI.empty!(optimizer_bridged)
end
@testset "MOI.ITERATION_LIMIT" begin
build_simple_lp!(optimizer_maxiter)
MOI.optimize!(optimizer_maxiter)
@test MOI.get(optimizer_maxiter, MOI.TerminationStatus()) == MOI.ITERATION_LIMIT
MOI.empty!(optimizer_maxiter)
end
@testset "MOI.TIME_LIMIT" begin
build_simple_lp!(optimizer_timelimit)
MOI.optimize!(optimizer_timelimit)
@test MOI.get(optimizer_timelimit, MOI.TerminationStatus()) == MOI.TIME_LIMIT
MOI.empty!(optimizer_timelimit)
end
end | ProxSDP | https://github.com/mariohsouto/ProxSDP.jl.git |
|
[
"MIT"
] | 1.8.3 | 0ca8d20076eac7fef4ea8394082bd3ffdd3c4733 | docs | 5161 | 
---
| **Build Status** |
|:-----------------:|
| [![Build Status][build-img]][build-url] [![Codecov branch][codecov-img]][codecov-url] [![][docs-img]][docs-url]|
[build-img]: https://github.com/mariohsouto/ProxSDP.jl/workflows/CI/badge.svg?branch=master
[build-url]: https://github.com/mariohsouto/ProxSDP.jl/actions?query=workflow%3ACI
[codecov-img]: http://codecov.io/github/mariohsouto/ProxSDP.jl/coverage.svg?branch=master
[codecov-url]: http://codecov.io/github/mariohsouto/ProxSDP.jl?branch=master
[docs-img]: https://img.shields.io/badge/docs-latest-blue.svg
[docs-url]: https://mariohsouto.github.io/ProxSDP.jl/latest/
**ProxSDP** is an open-source semidefinite programming ([SDP](https://en.wikipedia.org/wiki/Semidefinite_programming)) solver based on the paper ["Exploiting Low-Rank Structure in Semidefinite Programming by Approximate Operator Splitting"](https://arxiv.org/abs/1810.05231). The main advantage of ProxSDP over other state-of-the-art solvers is the ability to exploit the **low-rank** structure inherent to several SDP problems.
### Overview of problems ProxSDP can solve
* General conic convex optimization problems with the presence of the [positive semidefinite cone](https://web.stanford.edu/~boyd/papers/pdf/semidef_prog.pdf), [second-order cone](https://web.stanford.edu/~boyd/papers/pdf/socp.pdf) and [positive orthant](https://www.math.ucla.edu/~tom/LP.pdf);
* Semidefinite relaxation of nonconvex problems, e.g. [max-cut](http://www-math.mit.edu/~goemans/PAPERS/maxcut-jacm.pdf), [binary MIMO](https://arxiv.org/pdf/cs/0606083.pdf), [optimal power flow](http://authorstest.library.caltech.edu/141/1/TPS_OPF_2_tech.pdf), [sensor localization](https://web.stanford.edu/~boyd/papers/pdf/sensor_selection.pdf), [sum-of-squares](https://en.wikipedia.org/wiki/Sum-of-squares_optimization);
* Control theory problems with [LMI](https://en.wikipedia.org/wiki/Linear_matrix_inequality) constraints;
* Nuclear norm minimization problems, e.g. [matrix completion](https://statweb.stanford.edu/~candes/papers/MatrixCompletion.pdf);
## Installation
You can install **ProxSDP** through the [Julia package manager](https://docs.julialang.org/en/v1/stdlib/Pkg/index.html):
```julia
] add ProxSDP
```
## Usage
Let's consider the semidefinite programming relaxation of the [max-cut](http://www-math.mit.edu/~goemans/PAPERS/maxcut-jacm.pdf) problem as in
```
max  0.25 * W•X
s.t. diag(X) = 1,
     X ≽ 0,
```
### JuMP
This problem can be solved by the following code using **ProxSDP** and [JuMP](https://github.com/JuliaOpt/JuMP.jl).
```julia
# Load packages
using ProxSDP, JuMP, LinearAlgebra
# Number of vertices
n = 4
# Graph weights
W = [18.0 -5.0 -7.0 -6.0
-5.0 6.0 0.0 -1.0
-7.0 0.0 8.0 -1.0
-6.0 -1.0 -1.0 8.0]
# Build Max-Cut SDP relaxation via JuMP
model = Model(ProxSDP.Optimizer)
set_optimizer_attribute(model, "log_verbose", true)
set_optimizer_attribute(model, "tol_gap", 1e-4)
set_optimizer_attribute(model, "tol_feasibility", 1e-4)
@variable(model, X[1:n, 1:n], PSD)
@objective(model, Max, 0.25 * dot(W, X))
@constraint(model, diag(X) .== 1)
# Solve optimization problem with ProxSDP
optimize!(model)
# Retrieve solution
Xsol = value.(X)
```
### Convex.jl
Another alternative is to use **ProxSDP** via [Convex.jl](https://github.com/jump-dev/Convex.jl) as the following
```julia
# Load packages
using Convex, ProxSDP
# Number of vertices
n = 4
# Graph weights
W = [18.0 -5.0 -7.0 -6.0
-5.0 6.0 0.0 -1.0
-7.0 0.0 8.0 -1.0
-6.0 -1.0 -1.0 8.0]
# Define optimization problem
X = Semidefinite(n)
problem = maximize(0.25 * dot(W, X), diag(X) == 1)
# Solve optimization problem with ProxSDP
solve!(problem, ProxSDP.Optimizer(log_verbose=true, tol_gap=1e-4, tol_feasibility=1e-4))
# Get the objective value
problem.optval
# Retrieve solution
evaluate(X)
```
## Citing this package
The published version of the paper can be found [here](https://doi.org/10.1080/02331934.2020.1823387) and the arXiv version [here](https://arxiv.org/pdf/1810.05231.pdf).
We kindly request that you cite the paper as:
```bibtex
@article{souto2020exploiting,
author = {Mario Souto and Joaquim D. Garcia and \'Alvaro Veiga},
title = {Exploiting low-rank structure in semidefinite programming by approximate operator splitting},
journal = {Optimization},
pages = {1-28},
year = {2020},
publisher = {Taylor & Francis},
doi = {10.1080/02331934.2020.1823387},
URL = {https://doi.org/10.1080/02331934.2020.1823387}
}
```
The preprint version of the paper can be found [here](https://arxiv.org/abs/1810.05231).
## Disclaimer
* ProxSDP is a research software, therefore it should not be used in production.
* Please open an issue if you find any problems, developers will try to fix and find alternatives.
* There is no continuous development for 32-bit systems, the package should work, but might reach some issues.
* ProxSDP assumes primal and dual feasibility.
## ROAD MAP
- Support for exponential and power cones;
- Warm start.
| ProxSDP | https://github.com/mariohsouto/ProxSDP.jl.git |
|
[
"MIT"
] | 1.8.3 | 0ca8d20076eac7fef4ea8394082bd3ffdd3c4733 | docs | 1794 | # ProxSDP Documentation
ProxSDP is a semidefinite programming ([SDP](https://en.wikipedia.org/wiki/Semidefinite_programming)) solver based on the paper ["Exploiting Low-Rank Structure in Semidefinite Programming by Approximate Operator Splitting"](https://arxiv.org/abs/1810.05231). ProxSDP solves general SDP problems by means of a first order proximal algorithm based on the [primal-dual hybrid gradient](http://www.cmapx.polytechnique.fr/preprint/repository/685.pdf), also known as Chambolle-Pock method. The main advantage of ProxSDP over other state-of-the-art solvers is the ability of exploit the low-rank property inherent to several SDP problems.
### Overview of problems ProxSDP can solve
* Any semidefinite programming problem in [standard form](http://web.stanford.edu/~boyd/papers/pdf/semidef_prog.pdf);
* Semidefinite relaxations of nonconvex problems, e.g. [max-cut](http://www-math.mit.edu/~goemans/PAPERS/maxcut-jacm.pdf), [binary MIMO](https://arxiv.org/pdf/cs/0606083.pdf), [optimal power flow](http://authorstest.library.caltech.edu/141/1/TPS_OPF_2_tech.pdf), [sensor localization](https://web.stanford.edu/~boyd/papers/pdf/sensor_selection.pdf);
* Nuclear norm minimization problems, e.g. [matrix completion](https://statweb.stanford.edu/~candes/papers/MatrixCompletion.pdf).
### Installing
Currently ProxSDP only works with **Julia 1.0.x**
To add ProxSDP run:
```julia
pkg> add ProxSDP
```
### Referencing
The first version of the paper can be found [here](https://arxiv.org/abs/1810.05231).
```
@article{souto2018exploiting,
title={Exploiting Low-Rank Structure in Semidefinite Programming by Approximate Operator Splitting},
author={Souto, Mario and Garcia, Joaquim D and Veiga, {\'A}lvaro},
journal={arXiv preprint arXiv:1810.05231},
year={2018}
}
``` | ProxSDP | https://github.com/mariohsouto/ProxSDP.jl.git |
|
[
"MIT"
] | 1.8.3 | 0ca8d20076eac7fef4ea8394082bd3ffdd3c4733 | docs | 2887 | # Manual
## Building problems with JuMP.jl
Currently the easiest ways to pass problems to ProxSDP is through [JuMP](https://github.com/JuliaOpt/JuMP.jl) or MathOptInterface.
In the test folder one can find MOI implementations of some problems: MIMO, Sensor Localization, Random SDPs and sdplib problems.
## Solver arguments
Argument | Description | Type | Values (default)
--- | --- | --- | ---
log_verbose | print evolution of the process | `Bool` | `false`
log_freq | print evolution of the process every n iterations | `Int` | `100`
timer_verbose | Outputs a time logger | `Bool` | `false`
time_limit | Maximum time the algorithm can try to solve in seconds | `Float64` | `360000.0`
max_iter | Maximum number of iterations | `Int` | `1000000`
tol_primal | Primal error tolerance | `Float64` | `1e-3`
tol_dual | Dual error tolerance | `Float64` | `1e-3`
tol_psd | Tolerance associated with PSD cone | `Float64` | `1e-15`
tol_soc | Tolerance associated with SOC cone | `Float64` | `1e-15`
initial_theta | Initial over relaxation parameter | `Float64` | `1.0`
initial_beta | Initial primal/dual step ratio | `Float64` | `1.0`
min_beta | Minimum primal/dual step ratio | `Float64` | `1e-4`
max_beta | Maximum primal/dual step ratio | `Float64` | `1e+4`
convergence_window | Minimum number of iterations to update target rank | `Int` | `200`
max\_linsearch\_steps | Maximum number of iterations for linesearch | `Int` | `1000`
full\_eig\_decomp | Flag for using full eigenvalue decomposition | `Bool` | `false`
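These arguments can be passed as keywords to the constructor, e.g. `ProxSDP.Optimizer(log_verbose = true, tol_primal = 1e-4)`, or set through `set_optimizer_attribute` as in the JuMP example below.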
## JuMP example
A quick JuMP example:
```julia
using ProxSDP, JuMP
# Create a JuMP model using ProxSDP as the solver
model = Model(ProxSDP.Optimizer)
set_optimizer_attribute(model, "log_verbose", true)
# Create a Positive Semidefinite variable
@variable(model, X[1:2,1:2], PSD)
x = X[1,1]
y = X[2,2]
@constraint(model, ub_x, x <= 2)
@constraint(model, ub_y, y <= 30)
@constraint(model, con, 1x + 5y <= 3)
# ProxSDP supports maximization or minimization
# of linear functions
@objective(model, Max, 5x + 3 * y)
# Then we can solve the model
JuMP.optimize!(model)
# And ask for results!
JuMP.objective_value(model)
JuMP.value(x)
JuMP.value(y)
```
### Referencing
The first version of the paper can be found [here](https://arxiv.org/abs/1810.05231).
```
@article{souto2018exploiting,
title={Exploiting Low-Rank Structure in Semidefinite Programming by Approximate Operator Splitting},
author={Souto, Mario and Garcia, Joaquim D and Veiga, {\'A}lvaro},
journal={arXiv preprint arXiv:1810.05231},
year={2018}
}
```
### Disclaimer
* ProxSDP is a research software, therefore it should not be used in production.
* Please open an issue if you find any problems, developers will try to fix and find alternatives.
* ProxSDP assumes primal and dual feasibility. Currently, it is not able to reliably identify infeasibility and unboundedness. | ProxSDP | https://github.com/mariohsouto/ProxSDP.jl.git |
|
[
"MIT"
] | 1.3.5 | 9dc60147752098eb1e3a2b2cee884f42fa86ef15 | code | 336 | # docs/make.jl
using Mikrubi
using Documenter
using PyPlot
makedocs(
sitename = "Mikrubi.jl",
pages = [
"Home" => "index.md",
"Manual" => "manual.md",
"Graphics" => "graphics.md",
],
modules = [Mikrubi],
)
deploydocs(
repo = "github.com/Mikumikunisiteageru/Mikrubi.jl.git",
versions = ["stable" => "v^", "v#.#.#"]
)
| Mikrubi | https://github.com/Mikumikunisiteageru/Mikrubi.jl.git |
|
[
"MIT"
] | 1.3.5 | 9dc60147752098eb1e3a2b2cee884f42fa86ef15 | code | 7259 | # examples/alliwalli/workflow.jl
using Mikrubi
cd(joinpath(pkgdir(Mikrubi), "examples", "alliwalli"))
# I am not authorized to redistribute the shapefile in the package due to copyright issues. So I show here how the original materials are processed and converted, and their derivatives are attached here as an instance. Users may skip this block when re-running the script. Instead, the derivates are read anew from the disk.
if false
shptable = readshape("path/to/china/counties.shp");
# Read a shapefile from Gaode containing all 2894 counties of China
layers = readlayers("path/to/worldclim/layers");
# Read 19 climatic factor layers from WorldClim
china, ylayers = makefield(layers, shptable)
writefield("chinafield.mkuf", china)
# Save the Mikrubi field to disk
writelayers("ylayers/ybio*.tif", ylayers)
# Save the extracted layers to disk
end
# The file `database.tsv` is a result of my georeferencing work for specimens from Chinese Virtual Herbarium (https://www.cvh.ac.cn/) of Allium wallichii. Its first column is taken out and constitutes the list of occupied counties. The list is saved as `countylist.tsv`. When necessary, users may re-run this block.
if false
using DelimitedFiles
db, _ = readdlm("database.tsv", '\t', header=true)
ctlist = unique(db[:, 1])
writelist("countylist.txt", ctlist)
end
# Read from disk the Mikrubi field, the extracted layers, and the county list
china = readfield("chinafield.mkuf")
ylayers = readlayers("ylayers")
ctlist = readlist("countylist.txt")
# Train a model for Allium wallichii in China
model = fit(china, ctlist)
# [ Info: Now minimizing the opposite likelihood function...
# Iter Function value √(Σ(yᵢ-ȳ)²)/n
# ------ -------------- --------------
# 0 4.171830e+04 2.450473e+02
# * time: 0.003999948501586914
# 500 1.833867e+02 9.325053e-02
# * time: 2.2709999084472656
# 1000 1.470935e+02 1.524631e-01
# * time: 4.2850000858306885
# 1500 1.388932e+02 4.105145e-02
# * time: 6.365000009536743
# 2000 1.273631e+02 2.085092e-02
# * time: 8.411999940872192
# 2500 1.266571e+02 9.378015e-05
# * time: 10.424999952316284
# [ Info: Maximized log-likeliness: -126.65599400745549
# MikrubiModel{Float64}(3, [1.4842288152354197, -1.3603311815698715, -0.38761691866210646, 1.1231074177981228, 1.2090116395112087, -0.1033479618173679, 14.747024521778938, -14.878922083170924, 11.97056752230023, 30.299436373642205])
# Save the model to disk
writemodel("model.mkum", model)
# Predict the geographic distribution of Allium wallichii in China
geodist = predict(ylayers, model)
# 2160×1080×1 Raster{Float64,3} prob with dimensions:
# X Projected{Float64} LinRange{Float64}(-180.0, 179.833, 2160) ForwardOrdered Regular Intervals crs: WellKnownText,
# Y Projected{Float64} LinRange{Float64}(89.8333, -90.0, 1080) ReverseOrdered Regular Intervals crs: WellKnownText,
# Band Categorical{Int64} 1:1 ForwardOrdered
# extent: Extent(X = (-180.0, 179.99999999999997), Y = (-90.0, 90.0), Band = (1, 1))
# missingval: Inf
# crs: GEOGCS["WGS 84",DATUM["WGS_1984",SPHEROID["WGS 84",6378137,298.257223563,AUTHORITY["EPSG","7030"]],AUTHORITY["EPSG","6326"]],PRIMEM["Greenwich",0,AUTHORITY["EPSG","8901"]],UNIT["degree",0.0174532925199433,AUTHORITY["EPSG","9122"]],AXIS["Latitude",NORTH],AXIS["Longitude",EAST],AUTHORITY["EPSG","4326"]]
# values: [:, :, 1]
# 89.8333 89.6667 89.5 89.3333 89.1667 89.0 … -89.1667 -89.3333 -89.5 -89.6667 -89.8333 -90.0
# -180.0 Inf Inf Inf Inf Inf Inf Inf Inf Inf Inf Inf Inf
# -179.833 Inf Inf Inf Inf Inf Inf Inf Inf Inf Inf Inf Inf
# -179.667 Inf Inf Inf Inf Inf Inf Inf Inf Inf Inf Inf Inf
# -179.5 Inf Inf Inf Inf Inf Inf Inf Inf Inf Inf Inf Inf
# -179.333 Inf Inf Inf Inf Inf Inf … Inf Inf Inf Inf Inf Inf
# -179.167 Inf Inf Inf Inf Inf Inf Inf Inf Inf Inf Inf Inf
# -179.0 Inf Inf Inf Inf Inf Inf Inf Inf Inf Inf Inf Inf
# -178.833 Inf Inf Inf Inf Inf Inf Inf Inf Inf Inf Inf Inf
# ⋮ ⋮ ⋱ ⋮ ⋮
# 178.5 Inf Inf Inf Inf Inf Inf … Inf Inf Inf Inf Inf Inf
# 178.667 Inf Inf Inf Inf Inf Inf Inf Inf Inf Inf Inf Inf
# 178.833 Inf Inf Inf Inf Inf Inf Inf Inf Inf Inf Inf Inf
# 179.0 Inf Inf Inf Inf Inf Inf Inf Inf Inf Inf Inf Inf
# 179.167 Inf Inf Inf Inf Inf Inf Inf Inf Inf Inf Inf Inf
# 179.333 Inf Inf Inf Inf Inf Inf … Inf Inf Inf Inf Inf Inf
# 179.5 Inf Inf Inf Inf Inf Inf Inf Inf Inf Inf Inf Inf
# 179.667 Inf Inf Inf Inf Inf Inf Inf Inf Inf Inf Inf Inf
# 179.833 Inf Inf Inf Inf Inf Inf Inf Inf Inf Inf Inf Inf
# Write the distribution layer to disk
writelayer("geodist.tif", geodist)
# Another application of the function `predict` is to return probabilities organized according to counties
predict(china, model)
# Dict{Int64, Vector{Tuple{Vector{Float64}, Float64}}} with 2893 entries:
# 1144 => [([113.583, 36.0833], 1.66324e-9), ([113.583, 35.9167], 5.05715e-10), ([113.583, 36.25], 1.03843e-10), ([113. …
# 2108 => [([107.583, 35.9167], 1.0858e-13), ([107.583, 35.75], 1.05915e-13), ([107.75, 35.75], 4.37428e-14), ([107.75, …
# 1175 => [([124.917, 46.5833], 0.0), ([124.75, 46.25], 0.0), ([124.917, 46.25], 0.0), ([124.75, 46.0833], 0.0), ([124. …
# ......
# Calculate the probability of all counties being occupied by the Allium species according to the model
probcounties(china, model)
# Dict{Int64, Float64} with 2893 entries:
# 2288 => 0.0
# 1703 => 0.0
# 1956 => 2.01172e-12
# 2350 => 1.88738e-15
# 2841 => 9.8146e-6
# 2876 => 0.00403172
# ......
# For the 39-th county (Xiangcheng County, Ganzi Tibetan Autonomous Prefecture, Sichuan Province, China; type locality of Allium xiangchengense), sort its pixels according to probability of being occupied by Allium wallichii
predictcounty(china, model, 39)
# 30-element Vector{Tuple{Vector{Float64}, Float64}}:
# ([100.08333333333331, 28.583333333333336], 0.014382846487993373)
# ([99.91666666666663, 28.583333333333336], 0.01280767018326856)
# ([99.58333333333331, 29.41666666666667], 0.010613283680767083)
# ([99.41666666666663, 29.41666666666667], 0.010569729644928638)
# ([99.75, 29.25], 0.01015553778372369)
# ([99.75, 29.41666666666667], 0.009674145903628695)
# ......
| Mikrubi | https://github.com/Mikumikunisiteageru/Mikrubi.jl.git |
|
[
"MIT"
] | 1.3.5 | 9dc60147752098eb1e3a2b2cee884f42fa86ef15 | code | 780 | # examples/onedimsim/sim.jl
using Mikrubi
# Prepare inputs for skewed field where counties have different sizes
function county_list(R, hw=240)
sl = 1 # Size of counties on the left size (negative)
sr = 1 + R # Size of counties on the right size (positive)
pixels = -hw+0.5 : hw-0.5
junctions = vcat(-hw, reverse(-sl:-sl:1-hw), 0:sr:hw-1+sr) .+ 0.5
cumsum(in.(pixels, (junctions,))), collect(pixels)
end
# Set the parameter R, and generate required arguments
R = 30
ctids, vars = county_list(R)
# Fix the parameters
params = [0.02, 0, 1]
# Construct the field directly
asym = MikrubiField(ctids, vars, vars)
# Construct the model directly
model = MikrubiModel(1, params)
# Sample the counties for ten times
for _ = 1:10
println(samplecounties(asym, model))
end
| Mikrubi | https://github.com/Mikumikunisiteageru/Mikrubi.jl.git |
|
[
"MIT"
] | 1.3.5 | 9dc60147752098eb1e3a2b2cee884f42fa86ef15 | code | 2419 | # examples/prinsepia/jui.jl
# For Mikrubi.jl, three types of input data are generally required:
# (1) a map describing the regions (often a shapefile or a GeoJSON),
# (2) raster layers of climatic factors, and
# (3) a list of regions occupied by the focal species.
# The district-level map of Nepal is available from GADM, which can be accessed by commands in Julia using package GADM.jl.
import GADM
shppath = GADM.download("NPL") # ISO code of Nepal
# The climatic factors can likewise be downloaded from WorldClim via the package RasterDataSources.jl.
using RasterDataSources
get!(ENV, "RASTERDATASOURCES_PATH", "path/for/data/download")
getraster(WorldClim{BioClim}, res="10m") # 10-min resolution
climpath = RasterDataSources.rasterpath(WorldClim{BioClim})
# The two data sets above should have been prepared in paths `shppath` and `climpath` respectively. Now they can be read and handled with Mikrubi.jl.
using Mikrubi
shptable = readshape(shppath, 3) # District-level
layers = readlayers(climpath)
field, ylayers = makefield(layers, shptable)
# The `makefield` function finishes the map rasterization and the dimensionality reduction of environmental space, and then returns `field`, a structure containing the regionalization and environment information of Nepal, as well as `ylayers`, the low-dimensional environmental factors.
# According to Pendry (2012), the Prinsepia utilis Royle is present in 17 out of 75 districts of Nepal. The occupied districts are transformed into codes using the function `lookup` and afterwards used to build up a model using `fit`. The model is subsequently applied to `ylayers` using `predict`, which yields the predicted distribution of the species.
regions = ["Dadeldhura", "Doti", "Bajhang", "Kalikot",
"Mugu", "Jajarkot", "Jumla", "Rolpa", "Dolpa", "Baglung",
"Mustang", "Manang", "Gorkha", "Nuwakot", "Rasuwa",
"Okhaldhunga", "Solukhumbu"] # 17 occupied districts
regcodes = lookup(shptable, "NAME_3", regions)
# Transformed into codes by looking up the NAME_3 column
model = fit(field, regcodes)
geodist = predict(ylayers, model)
# The output `geodist` is of raster format. It can be written to disk using `writelayer` for downstream analysis.
writelayer("path/to/output/geodist.tif", geodist)
# And it can also be illustrated via the package PyPlot.jl (note: this requires Python and its package `matplotlib`).
using PyPlot
showlayer(geodist)
| Mikrubi | https://github.com/Mikumikunisiteageru/Mikrubi.jl.git |
|
[
"MIT"
] | 1.3.5 | 9dc60147752098eb1e3a2b2cee884f42fa86ef15 | code | 1739 | # src/Mikrubi.jl
module Mikrubi
@doc let
path = joinpath(dirname(@__DIR__), "README.md")
include_dependency(path)
read(path, String)
end Mikrubi
using Rasters
using RecipesBase
using Requires
import ArchGDAL; const AG = ArchGDAL
import GeoInterface; const GI = GeoInterface
import DimensionalData; const DD = DimensionalData
import Logistics: Logistic, logistic, loglogistic, complement
import Printf: @sprintf
import DelimitedFiles: readdlm, writedlm
import Statistics: mean, std, cor
import MultivariateStats # fit, projection, PCA
import Optim: optimize, maximize, maximum, NelderMead, Newton, Options
import StatsBase: tiedrank
import ColorTypes: RGB
export logistic, loglogistic
export readshape, goodcolumns, lookup, rasterize
export readlayers, writelayer, writelayers, makefield
export MikrubiField, readfield, writefield
export MikrubiModel, readmodel, writemodel
export readlist, writelist, fit
export predict, predictcounty, probcounties, samplecounties
export lipschitz
const MAXPCADIM = 4
"""
textwrap(str::AbstractString) :: String
Gobble all linefeeds ("`\\n`") inside `str` and replace them with spaces
("` `"), so that long strings can be wrapped to multiple lines in the code,
like the Python package "textwrap" does. See also [`tw`](@ref).
"""
textwrap(str::AbstractString) = replace(str, r" *\n[ \t]*" => " ")
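# For instance, `textwrap("Hello,\n\t\tJulia!")` yields `"Hello, Julia!"`:
# each linefeed, together with the indentation around it, collapses into a
# single space.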
"""
@tw_str :: String
Macro version of [`textwrap`](@ref), without interpolation and unescaping.
"""
macro tw_str(str)
textwrap(str)
end
include("rasterize.jl")
include("shape.jl")
include("layer.jl")
include("core.jl")
include("recipesbase.jl")
include("deprecated.jl")
function __init__()
@require PyPlot="d330b81b-6aea-500a-939a-2ce795aea3ee" include("pyplot.jl")
end
end # module
| Mikrubi | https://github.com/Mikumikunisiteageru/Mikrubi.jl.git |
|
[
"MIT"
] | 1.3.5 | 9dc60147752098eb1e3a2b2cee884f42fa86ef15 | code | 14496 | # src/core.jl
"""
dvar2dparam(dvar::Int) :: Int
Convert dimensionality of an environmental space to the dimensionality of
the induced parameter space, i.e., compute the degrees of freedom for
positive-definite quadratic functions mapping a `dvar`-dimensional linear space
into real numbers.
# Examples
```julia
julia> dvar2dparam(1)
3
julia> dvar2dparam(3)
10
```
"""
dvar2dparam(dvar::Int) = ((dvar + 1) * (dvar + 2)) >> 1
"""
MikrubiField{T, U <: Real, V <: AbstractFloat}
MikrubiField(ctids, locs, vars)
Construct a Mikrubi field containing a number of pixels or points, using the
following arguments, which must all have the same number of rows:
- `ctids::Vector`: a vector containing the county identifiers
- `locs::Array{<:Real}`: an array of geographic coordinates
- `vars::Matrix{<:AbstractFloat}`: an array of environmental coordinates
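# Example
A minimal field with three pixels in two counties (arbitrary values):
```julia
julia> field = MikrubiField(["a", "b", "b"], [1 1; 1 2; 2 1], [0.1 0.2; 0.3 0.4; 0.5 0.6]);

julia> field.npixel, field.mcounty, field.dvar
(3, 2, 2)
```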
"""
struct MikrubiField{T, U <: Real, V <: AbstractFloat}
ctids::Vector{T}
locs::VecOrMat{U}
vars::Matrix{V}
npixel::Int
mcounty::Int
ids::Vector{T}
starts::Dict{T, Int}
stops::Dict{T, Int}
dvar::Int
function MikrubiField(ctids::AbstractVector{T}, locs::AbstractVecOrMat{U},
vars::AbstractMatrix{V}) where {T, U <: Real, V <: AbstractFloat}
ctids, locs, vars = copy(ctids), copy(locs), copy(vars)
npixel = length(ctids)
npixel == size(locs, 1) == size(vars, 1) ||
throw(DimensionMismatch(tw"Arguments `ctids`, `locs`, and `vars`
should always have the same number of rows!"))
if ! issorted(ctids)
perm = sortperm(ctids)
ctids, locs, vars = ctids[perm], locs[perm, :], vars[perm, :]
end
ids = unique(ctids)
mcounty = length(ids)
size(vars, 2) >= 5 &&
@warn tw"It is strongly recommended that no more than four
principal components are used for Mikrubi, or parameter space
would be highly ill-conditioned!"
starts = Dict(reverse(ctids) .=> npixel:-1:1)
stops = Dict( ctids .=> 1:+1:npixel)
dvar = size(vars, 2)
new{T, U, V}(ctids, locs, vars,
npixel, mcounty, ids, starts, stops, dvar)
end
end
MikrubiField(ctids, locs, vars) =
MikrubiField(ctids, colmatrix(locs), colmatrix(float.(vars)))
function Base.show(io::IO, field::MikrubiField)
print(io, textwrap("Mikrubi Field:
geo_dim = $(size(field.locs, 2)),
env_dim = $(field.dvar),
$(field.npixel) pixels,
and $(field.mcounty) counties"))
end
"""
writefield(path::AbstractString, field::MikrubiField) :: Nothing
Write a Mikrubi field to file at `path`.
"""
function writefield(path::AbstractString, field::MikrubiField)
headerstring = "I" * "L" ^ size(field.locs, 2) * "V" ^ field.dvar
header = string.(hcat(headerstring...))
body = hcat(Array{Any}(field.ctids), field.locs, field.vars)
writedlm(path, vcat(header, body))
end
"""
readfield(path::AbstractString) :: MikrubiField
Read a Mikrubi field from file at `path`.
"""
function readfield(path::AbstractString)
body, header = readdlm(path, Any, header=true)
heads = header[:]
Set(heads) == Set(["I", "L", "V"]) && findlast(heads .== "I") == 1 &&
findfirst(heads .== "V") - findlast(heads .== "L") == 1 ||
error("The file at `path` is not a well-formatted file!")
ctids = [body[:, heads .== "I"]...]
locs = Real.(body[:, heads .== "L"])
vars = AbstractFloat.(body[:, heads .== "V"])
MikrubiField(ctids, locs, vars)
end
"""
MikrubiModel{V <: AbstractFloat}
MikrubiModel(dvar::Int, params::Vector{<:AbstractFloat})
Construct a Mikrubi Model from a dimensionality `dvar` and a parameter vector
`params`. The equation `dvar2dparam(dvar) == length(params)` must hold.
Mikrubi Models can be obtained from the function `fit`, and can be used in the
function `predict`.
"""
struct MikrubiModel{V <: AbstractFloat}
dvar::Int
params::Vector{V}
function MikrubiModel(dvar::Int,
params::Vector{V}) where {V <: AbstractFloat}
dvar2dparam(dvar) != length(params) &&
throw(DimensionMismatch(tw"Length of `params` and value of `dvar`
are incompatible! The following equation must hold:
`(dvar+1) * (dvar+2) / 2 == length(params)`."))
new{V}(dvar, params)
end
end
"""
writemodel(path::AbstractString, model::MikrubiModel) :: Nothing
Write a Mikrubi model to file at `path`.
"""
writemodel(path::AbstractString, model::MikrubiModel) =
writedlm(path, vcat(Any[model.dvar], model.params))
"""
readmodel(path::AbstractString) :: MikrubiModel
Read a Mikrubi model from file at `path`.
"""
function readmodel(path::AbstractString)
vector = readdlm(path, header=false)[:]
isinteger(vector[1]) &&
length(vector)-1 == dvar2dparam(Int(vector[1])) ||
error("The file at `path` is not a well-formatted file!")
MikrubiModel(Int(vector[1]), vector[begin+1:end])
end
"""
writelist(path::AbstractString, list::AbstractVector) :: Nothing
Write any list or vector to file at `path`.
"""
writelist(path::AbstractString, list::AbstractVector) = writedlm(path, list)
"""
readlist(path::AbstractString) :: Vector
Read any list or vector from file at `path`.
"""
readlist(path::AbstractString) = [readdlm(path, Any, header=false)...]
"""
decomparams(p::AbstractVector, d::Int) :: Tuple{Matrix, Vector, Any}
decomparams(model::MikrubiModel) :: Tuple{Matrix, Vector, Any}
Return parameter decomposition `At`, `b`, `c`, where
- `At` is a lower triangular matrix of size `(d, d)`,
- `b` is a column vector of size `d`, and
- `c` is a scalar.
WARNING: The vector `p` must have length `dvar2dparam(d)`.
# Example
```julia
julia> decomparams(collect(1:10), 3)
([1 0 0; 2 3 0; 4 5 6], [7, 8, 9], 10)
```
"""
function decomparams(p::AbstractVector, d::Int)
At = zeros(eltype(p), d, d)
i = j = k = 1
while i <= d
At[i, j] = p[k]
i == j ? i += j = 1 : j += 1
k += 1
end
At, p[k:k+d-1], p[k+d]
end
decomparams(model::MikrubiModel) = decomparams(model.params, model.dvar)
"""
pabsence(vars::AbstractMatrix, params::AbstractVector) :: Logistic
pabsence(field::MikrubiField, params::AbstractVector) :: Logistic
Compute the probability of absence at pixels given `vars`/`field` and `params`.
"""
function pabsence(vars::AbstractMatrix, params::AbstractVector)
At, b, c = decomparams(params, size(vars, 2))
return Logistic.(sum((vars * At) .^ 2, dims=2)[:] .+ vars * b .+ c)
end
pabsence(field::MikrubiField, params::AbstractVector) =
pabsence(field.vars, params)
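# A sketch of the formula implemented above: with (At, b, c) given by
# `decomparams`, a pixel whose environmental coordinates form the row vector z
# gets absence probability logistic(sum((z*At).^2) + z*b + c), that is, a
# logistic-transformed positive-definite quadratic on the environmental space.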
"""
ppresence(vars::AbstractMatrix, params::AbstractVector) :: Logistic
ppresence(field::MikrubiField, params::AbstractVector) :: Logistic
Compute the probability of presence at pixels given `vars`/`field` and `params`.
"""
ppresence(vars::AbstractMatrix, params::AbstractVector) =
complement.(pabsence(vars, params))
ppresence(field::MikrubiField, params::AbstractVector) =
complement.(pabsence(field, params))
"""
mlogL(field::MikrubiField, counties, params::AbstractVector)
:: AbstractFloat
mlogL(vars::AbstractMatrix, params::AbstractVector) :: AbstractFloat
Compute the opposite log-likelihood that the occupied counties or occupied
coordinates are sampled. The opposite is taken for optimization.
"""
function mlogL(field::MikrubiField, counties, params::AbstractVector)
e = pabsence(field, params)
for o = counties
start = field.starts[o]
stop = field.stops[o]
subprod = prod(e[start:stop], init=one(eltype(e)))
e[start:stop] .= one(eltype(e))
e[start] = complement(subprod)
end
return -sum(log.(e))
end
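# A note on the loop in `mlogL` above: `e` starts as per-pixel absence
# probabilities; for each occupied county, the product of its pixels' absence
# probabilities is collapsed into one slot holding its complement (the county
# is known to be occupied), so `-sum(log.(e))` mixes county-level presence
# terms with pixel-level absence terms.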
mlogL(vars::AbstractMatrix, params::AbstractVector) =
-sum(log.(ppresence(vars, params)))
"""
findnearest(loc::AbstractVecOrMat{<:Real}, field::MikrubiField) :: Int
Return the row index in `field.locs` which is the nearest to the given
coordinates.
"""
function findnearest(loc::AbstractVecOrMat{<:Real}, field::MikrubiField)
loc = loc[:]'
length(loc) == size(field.locs, 2) ||
error(textwrap("Dimensionality of `loc` ($(length(loc))) is
incompatible with the geographic dimensionality of `field`
($(size(field.locs, 2)))!"))
eucldist2 = sum((loc .- field.locs) .^ 2, dims=2)[:]
findmin(eucldist2)[2]
end
"""
findnearests(loc::AbstractVector{<:AbstractVecOrMat}, field::MikrubiField)
:: Vector{Int}
findnearests(loc::AbstractMatrix{<:Real}, field::MikrubiField)
:: Vector{Int}
Return the row indices in `field.locs` which are the nearest to each of the
given coordinates. Duplicate results are reduced to one.
"""
findnearests(loc::AbstractVector{<:AbstractVecOrMat}, field::MikrubiField) =
unique(findnearest.(loc, [field]))
findnearests(loc::AbstractMatrix, field::MikrubiField) =
unique(findnearest.(eachrow(float.(loc)), [field]))
"""
	fit(field::MikrubiField, counties, coords=zeros(0, 0);
		optresult=[], iterations=39000, kwargs...) :: MikrubiModel
Numerically find the Mikrubi model maximizing the likelihood that the occupied
counties as well as the occupied coordinates are sampled in the given Mikrubi
field. The optimization result is stored in the container `optresult` for
debugging.
"""
function fit(field::MikrubiField, counties, coords=zeros(0, 0);
optresult=[], iterations=39000, kwargs...)
valcounties = intersect(counties, field.ids)
indcoords = findnearests(coords, field)
isempty(valcounties) && isempty(indcoords) &&
error("No meaningful occupied counties or coordinates!")
cdvars = field.vars[indcoords, :]
fun(params) = mlogL(field, valcounties, params) + mlogL(cdvars, params)
zeroes = zeros(eltype(field.vars), dvar2dparam(field.dvar))
@info "Now minimizing the opposite likelihood function..."
result = optimize(fun, zeroes, NelderMead(), Options(
iterations=iterations, show_trace=true, show_every=500; kwargs...))
result.iteration_converged &&
@warn textwrap("The optimizing routine has reached the maximum
iteration count (`iterations = $iterations`), and thus the
		maximizer may be unreliable. Please try enlarging the parameter
		`iterations`.")
push!(optresult, result)
@info "Maximized log-likeliness: $(-result.minimum)"
MikrubiModel(field.dvar, result.minimizer)
end
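# A typical call, mirroring test/exalliwalli.jl:
# model = fit(field, counties)
# where `counties` is a vector of occupied county identifiers, e.g. as
# obtained from `readlist`.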
"""
predict(matrix::AbstractMatrix, model::MikrubiModel) :: Vector
predict(layers::RasterStack, model::MikrubiModel) :: Raster
predict(field::MikrubiField, model::MikrubiModel)
:: Dict{<:Any, <:Vector{<:Tuple{Vector{<:Real}, AbstractFloat}}}
Predict the probability of presence according to processed climatic factors
(`matrix` / `layers`) or on the Mikrubi `field`.
"""
function predict(matrix::AbstractMatrix, model::MikrubiModel)
size(matrix, 2) == model.dvar ||
throw(DimensionMismatch("The number of columns of the matrix
($(size(matrix, 2))) is different from the dimensionality of the
model ($(model.dvar))!"))
return float.(ppresence(matrix, model.params))
end
function predict(layers::RasterStack, model::MikrubiModel)
matrix, idx = extractlayers(layers)
layer = makelayer(predict(matrix, model), idx, first(layers))
return rebuild(layer; name="prob")
end
predict(field::MikrubiField, model::MikrubiModel) =
Dict(id => predictcounty(field, model, id) for id = field.ids)
"""
predictcounty(field::MikrubiField, model::MikrubiModel, county)
:: Vector{<:Tuple{Vector{<:Real}, AbstractFloat}}
Return the pixels of the county as `(coordinates, probability)` tuples, sorted
in descending order of the probability of being occupied.
"""
function predictcounty(field::MikrubiField, model::MikrubiModel, county)
county in field.ids ||
error("The Mikrubi field has no such county!")
idx = field.starts[county] : field.stops[county]
probs = predict(field.vars[idx, :], model)
locs = collect.(eachrow(field.locs[idx, :]))
sort!(tuple.(locs, probs), rev=true, by=last)
end
"""
probpixels(field::MikrubiField, model::MikrubiModel)
:: Vector{<:AbstractFloat}
Compute the probability for every pixel to be occupied in the `field`.
"""
probpixels(field::MikrubiField, model::MikrubiModel) =
predict(field.vars, model)
"""
probcounties(field::MikrubiField, model::MikrubiModel)
:: Dict{<:Any, <:AbstractFloat}
probcounties(::Type{<:Logistic}, field::MikrubiField, model::MikrubiModel)
:: Dict{<:Any, <:Logistic}
Compute the probability for every county to be occupied in the `field`.
"""
function probcounties(field::MikrubiField{T, U, V},
model::MikrubiModel{V}) where {T, U <: Real, V <: AbstractFloat}
logpabsence = log.(pabsence(field, model.params))
logPabsence = Dict{T, V}()
for i = 1:field.npixel
id = field.ctids[i]
logPabsence[id] = logpabsence[i] + get(logPabsence, id, zero(V))
end
Ppresence = Dict{T, V}()
for id = field.ids
Ppresence[id] = -expm1(logPabsence[id])
end
Ppresence
end
function probcounties(::Type{<:Logistic}, field::MikrubiField{T, U, V},
model::MikrubiModel{V}) where {T, U <: Real, V <: AbstractFloat}
lpabsence = pabsence(field, model.params)
Pabsence = Dict{T, Logistic{V}}()
for i = 1:field.npixel
id = field.ctids[i]
Pabsence[id] = lpabsence[i] * get(Pabsence, id, one(Logistic{V}))
end
Ppresence = Dict{T, Logistic{V}}()
for id = field.ids
Ppresence[id] = complement(Pabsence[id])
end
Ppresence
end
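# A note on the `Logistic`-typed method above: keeping the per-pixel absence
# probabilities as `Logistic` numbers through the products and the final
# `complement` preserves accuracy when probabilities are extremely close to
# zero or one, where plain floating-point probabilities lose precision.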
"""
samplecounties(field::MikrubiField, model::MikrubiModel) :: Vector
Sample counties according to their probability of being occupied.
"""
function samplecounties(field::MikrubiField, model::MikrubiModel)
Ppresence = probcounties(field, model)
@inline bernoulli(p) = rand() <= p
sort!([k for k = keys(Ppresence) if bernoulli(Ppresence[k])])
end
"""
loglipschitz(model::MikrubiModel, field::MikrubiField; wholespace=false)
:: AbstractFloat
Calculate the (logarithmic) maximum gradient (in norm) of the probability of
presence over the `field`. When `wholespace=false` (default), the maximum is
taken among the points contained in `field`; otherwise it is taken over the
whole space.
"""
function loglipschitz(model::MikrubiModel,
field::MikrubiField; wholespace=false)
At, b, c = decomparams(model)
biM = 2 * At * At'
q(z) = sum((z*At) .^ 2) + sum(z * b) + c
llpll(qz) = loglogistic(qz) + loglogistic(-qz)
loglip(z) = llpll(q(z')) + log(sum((biM * z + b).^2))/2
vars = field.vars
m, id = findmax(loglip.(eachrow(vars)))
if wholespace == false
return m
else
return maximum(maximize(loglip, vars[id, :],
model.dvar == 1 ? Newton() : NelderMead()))
end
end
"""
lipschitz(model::MikrubiModel, field::MikrubiField; wholespace=false)
:: AbstractFloat
Calculate the maximum gradient (in norm) of the probability of presence over
the `field`. When `wholespace=false` (default), the maximum is taken among the
points contained in `field`; otherwise it is taken over the whole space.
"""
lipschitz(model::MikrubiModel, field::MikrubiField; wholespace=false) =
exp(loglipschitz(model, field; wholespace=wholespace))
| Mikrubi | https://github.com/Mikumikunisiteageru/Mikrubi.jl.git |
|
[
"MIT"
] | 1.3.5 | 9dc60147752098eb1e3a2b2cee884f42fa86ef15 | code | 163 | # src/deprecated.jl
export Graphics, setplot
module Graphics end
function setplot(x)
@info "`setplot(PyPlot)` is no longer required for plotting."
return
end
| Mikrubi | https://github.com/Mikumikunisiteageru/Mikrubi.jl.git |
|
[
"MIT"
] | 1.3.5 | 9dc60147752098eb1e3a2b2cee884f42fa86ef15 | code | 16374 | # src/layer.jl
"""
colmatrix(vector::AbstractVector) :: AbstractMatrix
colmatrix(matrix::AbstractMatrix) :: AbstractMatrix
Return a one-column matrix if the argument is a vector, or the matrix itself
if the argument is already a matrix.
"""
colmatrix(vector::AbstractVector) = repeat(vector, 1, 1)
colmatrix(matrix::AbstractMatrix) = matrix
"""
allsame(a::AbstractVector) :: Bool
Return `true` if all elements from `a` are identical, or otherwise `false`.
An error is thrown if the vector `a` is empty.
# Examples
```julia
julia> allsame([1, 1, 2])
false
julia> allsame([1, 1, 1])
true
julia> allsame([1])
true
julia> allsame([])
ERROR: BoundsError: attempt to access 0-element Array{Any,1} at index [1]
Stacktrace:
[1] getindex at .\\array.jl:787 [inlined]
[2] allsame(::Array{Any,1}) at .\\REPL[9]:1
[3] top-level scope at REPL[20]:1
```
"""
allsame(a::AbstractVector) = all(a .== first(a))
"""
sortfilenames!(filenames::AbstractVector{<:AbstractString})
Sort `filenames` in place according to the order of the distinctive parts
among them. If all of the distinctive parts are decimal numerals, they are
sorted as integers.
# Examples
```julia
julia> sortfilenames!(["bio_9.tif", "bio_10.tif", "bio_1.tif"])
[ Info: 3 files "bio_*.tif" recognized in the directory, where * = 1, 9, 10.
3-element Array{String,1}:
"bio_1.tif"
"bio_9.tif"
"bio_10.tif"
julia> sortfilenames!(["bio_09.tif", "bio_10.tif", "bio_01.tif"])
[ Info: 3 files "bio_*.tif" recognized in the directory, where * = 01, 09, 10.
3-element Array{String,1}:
"bio_01.tif"
"bio_09.tif"
"bio_10.tif"
```
"""
function sortfilenames!(filenames::AbstractVector{<:AbstractString})
if length(filenames) == 0
error("No file recognized in the directory!")
elseif length(filenames) == 1
@info textwrap("Only one file \"$(splitpath(filenames[1])[end])\"
recognized in the directory.")
return filenames
end
s = 1
while allsame([fn[s] for fn = filenames])
s += 1
end
t = 0
while allsame([fn[end-t] for fn = filenames])
t += 1
end
fileids = [fn[s:end-t] for fn = filenames]
perm = sortperm(fileids)
try
perm = sortperm(parse.(Int, fileids))
catch ; end
@info textwrap("$(length(filenames)) files
\"$(splitpath(filenames[1][1:s-1])[end])*$(filenames[1][end-t+1:end])\"
recognized in the directory, where * = $(join(fileids[perm], ", ")).")
filenames .= filenames[perm]
filenames
end
"""
readlayers(filenames::Vector{<:AbstractString}) :: RasterStack
readlayers(dir::AbstractString; extset=nothing) :: RasterStack
Read all raster layers from the directory `dir` as a `RasterStack`.
`extset` describes the possible extensions of raster files (e.g.,
`Set([".tif"])` or `[".tiff"]`; see also [`readshape`](@ref)). By setting
`extset` to `nothing`, the extension filtering is skipped, i.e., all files are
regarded as raster files.
"""
function readlayers(filenames::AbstractVector{<:AbstractString})
layers = RasterStack(filenames)
keyz = keys(layers)
return RasterStack(map(key -> Float64.(layers[key]), keyz); name=keyz)
# for better numerical stability
end
readlayers(dir::AbstractString; extset=nothing) =
readlayers(sortfilenames!(filterext(dir, extset)))
"""
writelayer(path::AbstractString, layer::Raster) :: Nothing
Write `layer` to disk at `path`. A thin wrapper of `Rasters.write`.
"""
writelayer(path::AbstractString, layer::Raster) = (write(path, layer); return)
"""
writelayers(paths::AbstractVector{<:AbstractString},
layers::RasterStack) :: Nothing
writelayers(pathformula::AbstractString, layers::RasterStack) :: Nothing
Write `layers` to the respective `paths`, or to a series of paths generated
from the `pathformula`, in which an asterisk serves as a wildcard to be
replaced by serial numbers.
"""
function writelayers(paths::AbstractVector{<:AbstractString},
layers::RasterStack)
length(paths) != length(layers) &&
error("`paths` and `layers` must have the same length!")
for (path, layer) = zip(paths, values(layers))
writelayer(path, layer)
end
end
function writelayers(pathformula::AbstractString, layers::RasterStack)
occursin("*", pathformula) ||
error("`pathformula` must contain an asterisk (\"`*`\") as wildcard!")
for (i, layer) = enumerate(values(layers))
path = replace(pathformula, "*" => i)
writelayer(path, layer)
end
end
"""
masklayers!(layers::RasterStack, ctpixels::CtPixels) :: RasterStack
Mask the `layers` in a way that only pixels present in `ctpixels` are kept,
while all other uncovered pixels are set to a missing value.
"""
function masklayers!(layers::RasterStack, ctpixels::CtPixels)
miss = boolmask(layers)
miss[getpixels(ctpixels)] .= false
for layer = values(layers)
layer[miss] .= missingval(layer)
end
layers
end
"""
extractlayers(layers::RasterStack) :: Tuple{Matrix, Vector{Int}}
Extract the non-missing pixels from `layers`, and combine them into a matrix
whose rows represent pixels and whose columns represent variables.
`extractlayers` is the inverse function of [`makelayers`](@ref).
"""
function extractlayers(layers::RasterStack)
nonmiss = boolmask(layers)
hcat(values(layers[nonmiss])...), findall(nonmiss[:])
end
"""
emptylayer!(grid::Raster) :: Raster
Fill the `grid` with missing values in place.
"""
emptylayer!(grid::Raster) = (grid .= missingval(grid); grid)
"""
emptylayer(grid::Raster) :: Raster
Create a new `Raster` full of missing values from the shape of `grid`.
"""
emptylayer(grid::Raster) = emptylayer!(copy(grid))
"""
emptylayers(grid::Raster, m::Int) :: RasterStack
Create a `RasterStack` with `m` empty `Raster`s (full of missing values) from
the shape of `grid`.
"""
emptylayers(grid::Raster, m::Int) =
RasterStack(Tuple(emptylayer(grid) for _ = 1:m); name=string.("pca", 1:m))
"""
makelayers(matrix::AbstractMatrix, idx::AbstractVector, grid::Raster)
:: RasterStack
Make a `RasterStack` from the `grid` and values in columns of `matrix`.
For making a `Raster` from a column vector, see [`makelayer`](@ref).
`makelayers` is the inverse function of [`extractlayers`](@ref).
"""
function makelayers(matrix::AbstractMatrix, idx::AbstractVector, grid::Raster)
npixel, mvar = size(matrix)
npixel == length(idx) || throw(DimensionMismatch(
"The row number of `matrix` is not equal to the length of `idx`!"))
layers = emptylayers(grid, mvar)
for (i, layer) = enumerate(values(layers))
layer[idx] .= matrix[:, i]
end
layers
end
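# The inverse relation stated in the docstrings, as a sketch:
# matrix, idx = extractlayers(layers)
# makelayers(matrix, idx, first(layers))
# rebuilds the stack (with new layer names "pca1", "pca2", ...).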
"""
makelayer(vector::AbstractVector, idx::AbstractVector, grid::Raster)
Make a `Raster` from the `grid` and values in `vector`.
For making a `RasterStack` from a matrix, see [`makelayers`](@ref).
"""
makelayer(vector::AbstractVector, idx::AbstractVector, grid::Raster) =
first(makelayers(colmatrix(vector), idx, grid))
"""
dftraverse!(beststate, bestscore, state, score, depth, maxdepth,
incompat, scoremat) :: Nothing
Find the index combination that
- firstly, contains as many indices as possible, and
- secondly, has the lowest pairwise sum over the corresponding submatrix of `scoremat`,
such that no two indices `i` and `j` coexist whenever `incompat[i][j] == true`.
The result is stored as the only element of `beststate`, with its score
decided by the two criteria above stored as the only element of `bestscore`.
# Example
```julia
julia> beststate = Vector(undef, 1);
julia> bestscore = [(0, 0.0)];
julia> dftraverse!(beststate, bestscore, Int[], (0, 0.0), 1, 3,
Bool[0 0 1; 0 0 0; 1 0 0],
[0.0 0.6 0.3; 0.6 0.0 0.9; 0.3 0.9 0.0]);
julia> beststate
1-element Array{Any,1}:
[1, 2]
julia> bestscore
1-element Array{Tuple{Int64,Float64},1}:
(2, -0.6)
```
"""
function dftraverse!(beststate, bestscore, state, score, depth, maxdepth,
incompat, scoremat)
if depth > maxdepth
if score > bestscore[1]
bestscore[1] = score
beststate[1] = state
end
return
end
if ! any(incompat[state, depth])
dftraverse!(beststate, bestscore, vcat(state, depth),
score .+ (1, -sum(scoremat[state, depth])), depth + 1, maxdepth,
incompat, scoremat)
end
dftraverse!(beststate, bestscore, state, score, depth + 1, maxdepth,
incompat, scoremat)
end
"""
	selectvars(matrix::Matrix; rabsthres=0.8) :: Vector{Int}
Select as many variables as possible from `matrix` such that the absolute
pairwise Pearson correlation coefficient between any two selected variables
never exceeds `rabsthres`, and the sum of these coefficients is minimal.
# Example
```julia
julia> selectvars([1. 4. 7.; 2. 5. 8.; 3. 9. 27.], rabsthres=0.9)
2-element Array{Int64,1}:
1
3
```
"""
function selectvars(matrix::Matrix; rabsthres=0.8)
rabsmat = abs.(cor(matrix, dims=1))
incompat = rabsmat .> rabsthres
beststate = Vector(undef, 1)
bestscore = [(0, zero(eltype(matrix)))]
dftraverse!(beststate, bestscore, Int[], bestscore[1], 1, size(matrix, 2),
incompat, rabsmat)
beststate[1]
end
# """
# rowmeanstd(smatrix::Matrix) :: Tuple{Matrix, Matrix}
#
# Compute the mean and standard variation of a matrix by its rows.
# """
# function rowmeanstd(smatrix::Matrix)
# n = size(smatrix, 1)
# mean = sum(smatrix, dims=1) ./ n
# std = sqrt.(sum((smatrix .- mean) .^ 2, dims=1) ./ (n-1))
# return mean, std
# end
"""
	princompvars(smatrix::Matrix; nprincomp=3) :: Tuple{Matrix, Matrix}
Perform principal component analysis on `smatrix`, whose columns represent
variables, keep the first `nprincomp` principal components, and return the
affine transformation `(colmean, projwstd)`, such that
the dimension-reduced matrix == `(smatrix .- colmean) * projwstd`.
"""
function princompvars(smatrix::AbstractMatrix; nprincomp=3)
nprincomp > MAXPCADIM &&
@warn textwrap("It is strongly recommended that no more than four
principal components are used for Mikrubi, or parameter space would
be highly ill-conditioned!")
colmean = mean(smatrix, dims=1)
colstd = std(smatrix, dims=1)
# colmean, colstd = rowmeanstd(smatrix)
	minimum(colstd) <= 1e-11 &&
error(textwrap("Some variable among the layers has a very small
deviation (in other words, is (nearly) constant), which directly
leads to ill condition of succeeding calculations."))
smatrix01 = (smatrix .- colmean) ./ colstd
pca = MultivariateStats.fit(MultivariateStats.PCA,
smatrix01', maxoutdim=nprincomp)
pcadim = size(pca)[2]
	pcadim < nprincomp &&
		@warn textwrap("Only $pcadim principal component(s) (fewer than
			`nprincomp` = $nprincomp) is/are used to express the selected
			layers in the principal component analysis.")
proj = MultivariateStats.projection(pca)
projwstd = proj ./ colstd[:]
return colmean, projwstd
end
"""
DimLower
DimLower()
A callable container for the dimension-lowering transformation used in
`makefield`. If it is new (`new=true`), the parameters (`colid`, `colmean`,
and `projwstd`) are computed when it is first applied to a `RasterStack`.
"""
mutable struct DimLower{T<:AbstractFloat}
new::Bool
colid::Vector{Int}
colmean::Matrix{T}
projwstd::Matrix{T}
DimLower(layers::RasterStack) = new{eltype(first(layers))}(true)
end
function (f::DimLower)(layers::RasterStack; rabsthres=0.8, nprincomp=3)
matrix, idx = extractlayers(layers)
f.new && (f.colid = selectvars(matrix; rabsthres=rabsthres))
smatrix = matrix[:, f.colid]
f.new &&
((f.colmean, f.projwstd) = princompvars(smatrix; nprincomp=nprincomp))
ematrix = (smatrix .- f.colmean) * f.projwstd
f.new = false
elayers = makelayers(ematrix, idx, first(layers))
return idx, ematrix, elayers
end
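# A sketch of the intended flow (cf. `_makefield` below): the first
# application of a `DimLower` computes and caches `colid`, `colmean`, and
# `projwstd`; any later application (e.g. on `players` in `makefield`) reuses
# the cached transform, so both stacks are projected into the same reduced
# space.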
"""
makefield(layers::RasterStack, ctpixels::CtPixels;
rabsthres=0.8, nprincomp=3) :: Tuple{MikrubiField, RasterStack}
makefield(layers::RasterStack, ctpixels::CtPixels,
players::RasterStack; rabsthres=0.8, nprincomp=3)
:: Tuple{MikrubiField, RasterStack, RasterStack}
makefield(layers::RasterStack, shptable; rabsthres=0.8, nprincomp=3)
:: Tuple{MikrubiField, RasterStack}
makefield(layers::RasterStack, shptable, players::RasterStack;
rabsthres=0.8, nprincomp=3)
:: Tuple{MikrubiField, RasterStack, RasterStack}
Create a `MikrubiField` as well as processed variable layers from `layers`
and `shptable` or `ctpixels`, by
0. (rasterizing the `shptable` to `ctpixels` using `rasterize`,)
1. masking the `layers` with `ctpixels` (using `Mikrubi.masklayers!`),
2. extracting non-missing pixels from `layers` (using `Mikrubi.extractlayers`),
3. selecting less correlated variables (using `Mikrubi.selectvars`), and
4. doing the principal component analysis (using `Mikrubi.princompvars`).
# Optional keyword arguments
- `rabsthres`: threshold of collinearity.
Absolute value of Pearson correlation efficient greater than this threshold
is identified as collinearity and the two variables are thus incompatible.
- `nprincomp`: expected number of principal components of the variables.
# Notes about `players`
When `players` is present in the argument list, the raster layers it contains
undergo the same processing, including masking, variable selection, and
principal component projection, and the results are packed and returned in the
third place. Users must ensure that `players` has the same length as `layers`
and that their elements correspond in order. This is useful when the
prediction concerns another geographic range or another time.
"""
makefield(layers::RasterStack, ctpixels::CtPixels;
rabsthres=0.8, nprincomp=3) =
_makefield(layers, ctpixels, rabsthres, nprincomp)[1:2]
function makefield(layers::RasterStack, ctpixels::CtPixels,
players::RasterStack; rabsthres=0.8, nprincomp=3)
length(layers) == length(players) ||
error("`players` must have the same length as `layers`!")
field, elayers, dimlower =
_makefield(layers, ctpixels, rabsthres, nprincomp)
eplayers = last(dimlower(players))
return field, elayers, eplayers
end
makefield(layers::RasterStack, shptable; rabsthres=0.8, nprincomp=3) =
makefield(layers, rasterize(shptable, first(layers));
rabsthres=rabsthres, nprincomp=nprincomp)
makefield(layers::RasterStack, shptable, players::RasterStack;
rabsthres=0.8, nprincomp=3) =
makefield(layers, rasterize(shptable, first(layers)), players;
rabsthres=rabsthres, nprincomp=nprincomp)
function _makefield(layers::RasterStack, ctpixels::CtPixels,
rabsthres, nprincomp)
layers = deepcopy(layers)
masklayers!(layers, ctpixels)
dimlower = DimLower(layers)
idx, ematrix, elayers =
dimlower(layers; rabsthres=rabsthres, nprincomp=nprincomp)
field = buildfield(ctpixels, idx, ematrix, first(layers))
return field, elayers, dimlower
end
"""
dimpoints(grid::Raster) :: DimPoints
Create a `DimensionalData.DimPoints` from `grid` after shifting all its
dimension loci to `Center()`. Similar to `GeoArrays.coords`.
"""
function dimpoints(grid::Raster)
ds = map(dims(grid)) do d
DD.maybeshiftlocus(DD.Center(), d)
end
return DD.DimPoints(set(grid, ds))
end
"""
	centercoords(grid::Raster, idx::Int)
	centercoords(dp::DimPoints, ci::CartesianIndices, idx::Int)
		:: Tuple{AbstractFloat, AbstractFloat}
Get the center coordinates of a grid cell indexed by `idx` in a `Raster`.
"""
function centercoords(grid::Raster, idx::Int)
dp = dimpoints(grid)
ci = CartesianIndices(grid)
return dp[ci[idx]][1:2]
end
centercoords(dp::DD.DimPoints, ci::CartesianIndices, idx::Int) =
dp[ci[idx]][1:2]
"""
buildfield(ctpixels::CtPixels, idx::Vector,
projmat::Matrix, grid::Raster) :: MikrubiField
Construct a `MikrubiField` from `ctpixels`, `idx`, `projmat`, and `grid`. Used in
[`makefield`](@ref).
"""
function buildfield(ctpixels::CtPixels, idx::AbstractVector,
projmat::AbstractMatrix, grid::Raster)
ctids = getcounties(ctpixels)
npixel = length(ctids)
mvar = size(projmat, 2)
revidx = Dict(idx .=> 1:length(idx))
locs = Matrix{eltype(grid.dims[1])}(undef, npixel, 2)
vars = Matrix{eltype(projmat)}(undef, npixel, mvar)
found = trues(npixel)
dp = dimpoints(grid)
ci = CartesianIndices(grid)
for i = 1:npixel
coord = getpixel(ctpixels, i)
if haskey(revidx, coord)
locs[i, :] .= centercoords(dp, ci, coord)
vars[i, :] .= projmat[revidx[coord], :]
else
found[i] = false
end
end
sumfound = sum(found)
ratiofound = sumfound / npixel
0.9 < ratiofound < 1 &&
@info textwrap("Among the $npixel pixels, $(npixel-sumfound)
($(@sprintf("%.1f", 100 * (1 - ratiofound)))%) is/are discarded
for lacking values.")
ratiofound <= 0.9 &&
@warn textwrap("Among the $npixel pixels, $(npixel-sumfound)
($(@sprintf("%.1f", 100 * (1 - ratiofound)))%) is/are discarded
for lacking values!")
MikrubiField(ctids[found], locs[found, :], vars[found, :])
end
| Mikrubi | https://github.com/Mikumikunisiteageru/Mikrubi.jl.git |
|
[
"MIT"
] | 1.3.5 | 9dc60147752098eb1e3a2b2cee884f42fa86ef15 | code | 6609 | # src/pyplot.jl
export showlayer, showfield, showctpixels, showshptable
function geom2mat(geom)
GI.isgeometry(geom) || error("`geom` is not a geometry!")
trait = GI.geomtrait(geom)
if isa(trait, GI.PolygonTrait)
return geom2mat_polygon(geom)
elseif isa(trait, GI.MultiPolygonTrait)
return geom2mat_multipolygon(geom)
else
error("Trait of `geom` not supported!")
end
end
function geom2mat_polygon(polygon)
parts = Int[]
points = NTuple{2, Float64}[]
for i = 1 : GI.nring(polygon)
ring = GI.getring(polygon, i)
push!(parts, length(points))
for j = 1 : GI.npoint(ring)
point = GI.getpoint(ring, j)
push!(points, (GI.x(point), GI.y(point)))
end
end
n = length(points)
mat = Matrix{Float64}(undef, 2, n)
for i = 1:n
mat[:, i] .= points[i]
end
return parts, mat
end
function geom2mat_multipolygon(multipolygon)
parts = Int[]
points = NTuple{2, Float64}[]
for k = 1 : GI.npolygon(multipolygon)
polygon = GI.getpolygon(multipolygon, k)
for i = 1 : GI.nring(polygon)
ring = GI.getring(polygon, i)
push!(parts, length(points))
for j = 1 : GI.npoint(ring)
point = GI.getpoint(ring, j)
push!(points, (GI.x(point), GI.y(point)))
end
end
end
n = length(points)
mat = Matrix{Float64}(undef, 2, n)
for i = 1:n
mat[:, i] .= points[i][1:2]
end
return parts, mat
end
function coords(grid::Raster)
x, y = dims(grid)[1:2]
xseq = dimseq(x)
yseq = dimseq(y)
# vcat.(xseq, yseq')
xx = repeat(xseq, 1, length(yseq))
yy = repeat(yseq', length(xseq), 1)
return xx, yy
end
"""
showlayer(layer; ax=PyPlot.gca(), f=identity, kwargs...)
Show a layer. Keyword argument `f = identity` is a function applied
elementwise. A possible alternative is `f = x -> x ^ 0.4`.
"""
function showlayer(layer; ax=PyPlot.gca(), f=identity, kwargs...)
xx, yy = coords(layer)
Anan = replace_missing(layer, missingval=eltype(layer)(NaN))
A = Array(Anan)[:, :, 1]
nna = findall(.!isnan.(A))
lim(z) = [1.05 -0.05; -0.05 1.05] *
[minimum(vcat(z[nna], z[nna .+ [CartesianIndex(1,1)]])),
maximum(vcat(z[nna], z[nna .+ [CartesianIndex(1,1)]]))]
handle = ax.pcolor(xx, yy, f.(A); kwargs...)
ax.set_xlim(lim(xx))
ax.set_ylim(lim(yy))
return handle
end
"""
showfield(layer; ax=PyPlot.gca(), f=identity, kwargs...)
showfield(field, layer; ax=PyPlot.gca(), f=tiedrank, kwargs...)
Show the geographic and environmental information of a Mikrubi field. The
first three principal components are reflected as red, green, and blue
channels. Keyword argument `f = tiedrank` is a function applied to the columns
of `field.vars` as a whole. A possible alternative is `f = identity`.
"""
function showfield(field; ax=PyPlot.gca(), f=tiedrank, kwargs...)
u(a) = (a .- minimum(a)) ./ (maximum(a) - minimum(a))
v(a) = u(f(a))
r = field.dvar >= 1 ? v(field.vars[:, 1]) : fill(0.5, field.npixel)
g = field.dvar >= 2 ? v(field.vars[:, 2]) : fill(0.5, field.npixel)
b = field.dvar >= 3 ? v(field.vars[:, 3]) : fill(0.5, field.npixel)
@assert size(field.locs, 2) >= 2
x = field.locs[:, 1]
y = field.locs[:, 2]
ax.scatter(x, y, c=collect(zip(r, g, b)), s=2; kwargs...)
end
function showfield(field, layer; ax=PyPlot.gca(), f=tiedrank, kwargs...)
u(a) = (a .- minimum(a)) ./ (maximum(a) - minimum(a))
v(a) = u(f(a))
r = field.dvar >= 1 ? v(field.vars[:, 1]) : fill(0.5, field.npixel)
g = field.dvar >= 2 ? v(field.vars[:, 2]) : fill(0.5, field.npixel)
b = field.dvar >= 3 ? v(field.vars[:, 3]) : fill(0.5, field.npixel)
xb, yb, _ = size(layer)
matrix = fill(1., yb, xb, 3)
for i = 1:field.npixel
x, y = xy2ij(layer, field.locs[i, 1:2]...)
matrix[y, x, :] .= r[i], g[i], b[i]
end
x, y = dims(layer)[1:2]
x1, x2 = extrema(dimseq(x))
y1, y2 = extrema(dimseq(y))
handle = ax.imshow(matrix;
extent=(x1,x2, y1,y2), zorder=-1, aspect="auto", kwargs...)
nna = minimum(matrix, dims=3) .< 1
xval = any(nna, dims=1)[:]
yval = any(nna, dims=2)[:]
x3, x4 = minmax(dimseq(x)[findfirst(xval)], dimseq(x)[findlast(xval)+1])
y3, y4 = minmax(dimseq(y)[findfirst(yval)], dimseq(y)[findlast(yval)+1])
lim(z3, z4) = [1.05 -0.05; -0.05 1.05] * [z3, z4]
ax.set_xlim(lim(x3, x4))
ax.set_ylim(lim(y3, y4))
return handle
end
"""
showctpixels(ctpixels; ax=PyPlot.gca(), salt=20, kwargs...)
showctpixels(ctpixels, layer; ax=PyPlot.gca(), salt=20, kwargs...)
Show a `Mikrubi.CtPixels`. Every county is assigned a hash-based color (also
influenced by the fixed `salt` value), and every pixel is painted the
composite color of all the counties assigned to it. Empty cells are depicted
white.
"""
showctpixels(ctpixels; ax=PyPlot.gca(), salt=20, kwargs...) =
showctpixels(ctpixels, ctpixels.indices; ax=ax, salt=salt, kwargs...)
function showctpixels(ctpixels, layer; ax=PyPlot.gca(), salt=20, kwargs...)
xb, yb, _ = size(layer)
ci = CartesianIndices((xb, yb))
matrix = fill(1., yb, xb, 3)
for (ct, pixel) = ctpixels.list
x, y = ci[pixel].I
matrix[y, x, :] .*= hashcolor(ct, salt=salt)
end
x, y = dims(layer)[1:2]
x1, x2 = extrema(dimseq(x))
y1, y2 = extrema(dimseq(y))
handle = ax.imshow(matrix;
extent=(x1,x2, y1,y2), zorder=-1, aspect="auto", kwargs...)
nna = minimum(matrix, dims=3) .< 1
xval = any(nna, dims=1)[:]
yval = any(nna, dims=2)[:]
x3, x4 = minmax(dimseq(x)[findfirst(xval)], dimseq(x)[findlast(xval)+1])
y3, y4 = minmax(dimseq(y)[findfirst(yval)], dimseq(y)[findlast(yval)+1])
lim(z3, z4) = [1.05 -0.05; -0.05 1.05] * [z3, z4]
ax.set_xlim(lim(x3, x4))
ax.set_ylim(lim(y3, y4))
return handle
end
function polygonline(geom)
line = Tuple{Float64, Float64}[]
parts, mat = geom2mat(geom)
parts .+= 1
for i = 1:size(mat,2)
pt = (mat[:, i]...,)
i in parts && push!(line, (NaN, NaN))
push!(line, pt)
end
line
end
function shapelines(geoms)
lines = Tuple{Float64, Float64}[]
for geom = geoms
append!(lines, polygonline(geom))
end
n = length(lines)
flag = trues(n)
set = Set([((NaN, NaN), (NaN, NaN))])
for i = 2:n
if (lines[i-1], lines[i]) in set
flag[i] = false
continue
end
push!(set, (lines[i-1], lines[i]))
push!(set, (lines[i], lines[i-1]))
end
line = deepcopy(lines)
line[.!flag] .= [(NaN, NaN)]
shortline = Tuple{Float64, Float64}[]
lastnan = true
for i = 2:n
isnan(line[i][1]) && lastnan && continue
push!(shortline, line[i])
lastnan = isnan(line[i][1])
end
shortline
end
"""
showshptable(shptable; ax=PyPlot.gca(), kwargs...)
Show lines from polygons in `shptable`. Identical segments are reduced as one.
"""
function showshptable(shptable; ax=PyPlot.gca(), kwargs...)
shortline = shapelines(AG.getgeom.(shptable))
ax.plot(first.(shortline), last.(shortline), "-k", lw=0.8; kwargs...)
end
| Mikrubi | https://github.com/Mikumikunisiteageru/Mikrubi.jl.git |
|
[
"MIT"
] | 1.3.5 | 9dc60147752098eb1e3a2b2cee884f42fa86ef15 | code | 3024 | # src/prepare.jl
"""
CtPixels
CtPixels(indices::Raster{Int})
Collector for county-specific rasterization results, whose `list` contains
county-pixel tuples. Can only be instantiated from an index raster
(see [`indicate`](@ref)).
"""
struct CtPixels
indices::Raster{Int}
list::Vector{Tuple{Int,Int}}
CtPixels(indices::Raster{Int}) = new(indices, Tuple{Int,Int}[])
end
Base.length(ctpixels::CtPixels) = length(ctpixels.list)
"""
getpixels(ctpixels::CtPixels) :: Vector{Int}
Get pixel indices from `ctpixels`.
"""
getpixels(ctpixels::CtPixels) = last.(ctpixels.list)
"""
getcounties(ctpixels::CtPixels) :: Vector{Int}
Get county indices from `ctpixels`.
"""
getcounties(ctpixels::CtPixels) = first.(ctpixels.list)
"""
getpixel(ctpixels::CtPixels, i::Int) :: Int
Get the pixel index of the `i`-th county-pixel tuple in `ctpixels`.
"""
getpixel(ctpixels::CtPixels, i::Int) = last(ctpixels.list[i])
"""
getcounty(ctpixels::CtPixels, i::Int) :: Int
Get the county index of the `i`-th county-pixel tuple in `ctpixels`.
"""
getcounty(ctpixels::CtPixels, i::Int) = first(ctpixels.list[i])
"""
indicate(layer::Raster) :: Raster{Int}
Build an index raster `indices` from `layer`. The value of an array element in
`indices` is either (1) `0` for missing, if the corresponding element in
`layer` is missing; or (2) the integer index of the array element, otherwise.
"""
function indicate(layer::Raster)
indices = Int.(zero(layer))
indices[:] .= eachindex(indices)
indices[.!boolmask(layer)] .= 0
return rebuild(indices, missingval=0)
end
"""
register!(ctpixels::CtPixels, ct::Int, pixel::Int) :: Int
Push a county-pixel tuple into `ctpixels`, if `pixel` is not zero. For
convenience, the value of `pixel` is returned.
"""
function register!(ctpixels::CtPixels, ct::Int, pixel::Int)
iszero(pixel) || push!(ctpixels.list, (ct, pixel))
return pixel
end
"""
register!(ctpixels::CtPixels, ct::Int) :: Function
Create a function that accepts a `pixel`, pushes the county-pixel tuple into
`ctpixels`, and finally returns the value of `pixel`.
"""
register!(ctpixels::CtPixels, ct::Int) =
pixel -> register!(ctpixels, ct, pixel)
"""
ispoly(geom) :: Bool
Check if `geom` is a polygon or a multipolygon.
"""
function ispoly(geom)
GI.isgeometry(geom) || return false
trait = GI.geomtrait(geom)
return isa(trait, GI.MultiPolygonTrait) || isa(trait, GI.PolygonTrait)
end
"""
rasterize(geoms, layer::Raster) :: CtPixels
rasterize(shptable::AG.IFeatureLayer, layer::Raster) :: CtPixels
For a collection of (multi)polygons, rasterize each of them and collect the
results in a `CtPixels`.
"""
function rasterize(geoms, layer::Raster)
all(ispoly.(geoms)) ||
throw(ArgumentError("`geoms` are not (multi)polygons!"))
indices = indicate(layer)
ctpixels = CtPixels(indices)
for (i, s) = enumerate(geoms)
rasterize!(indices, s, boundary=:touches, fill=register!(ctpixels,i))
end
return ctpixels
end
rasterize(shptable::AG.IFeatureLayer, layer::Raster) =
rasterize(AG.getgeom.(shptable), layer)
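# A typical call, as in `makefield`:
# ctpixels = rasterize(shptable, first(layers))
# where every feature i of `shptable` registers each pixel it touches as a
# county-pixel tuple (i, pixel).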
| Mikrubi | https://github.com/Mikumikunisiteageru/Mikrubi.jl.git |
|
[
"MIT"
] | 1.3.5 | 9dc60147752098eb1e3a2b2cee884f42fa86ef15 | code | 2929 | # src/recipesbase.jl
pseudorand(x, y) = x % y / y
function hashcolor(x; salt=20, gmin=0.5)
r0, g0, b0 = pseudorand.(hash((salt, x)), [39, 139, 239])
r, g, b = @. 1 - (1 - (r0, g0, b0)) * (1 - gmin)
end
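# Both helpers above are deterministic: `pseudorand(x, y)` maps `x % y` into
# [0, 1), and `hashcolor` turns `hash((salt, x))` into an RGB triple whose
# channels are lifted into [gmin, 1], so a county id always receives the same
# light color for a given `salt`.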
function dyecolors(field::MikrubiField; func=tiedrank)
u(a) = (a .- minimum(a)) ./ (maximum(a) - minimum(a))
v(a) = u(func(a))
r = field.dvar >= 1 ? v(field.vars[:, 1]) : fill(0.5, field.npixel)
g = field.dvar >= 2 ? v(field.vars[:, 2]) : fill(0.5, field.npixel)
b = field.dvar >= 3 ? v(field.vars[:, 3]) : fill(0.5, field.npixel)
colores = RGB.(r, g, b)
end
dye(ctpixels::CtPixels; salt=20) =
dye(ctpixels, ctpixels.indices; salt=salt)
function dye(ctpixels::CtPixels, grid::Raster; salt=20)
n, m, _ = size(grid)
ci = CartesianIndices((n, m))
imaget = fill(RGB(1.0, 1.0, 1.0), n, m)
for (ct, pixel) = ctpixels.list
c = imaget[pixel]
rt, gt, bt = hashcolor(ct, salt=salt)
imaget[pixel] = RGB(c.r * rt, c.g * gt, c.b * bt)
end
Raster(imaget, dims=dims(grid)[1:2])
end
function dye(field::MikrubiField, grid::Raster; func=tiedrank)
colores = dyecolors(field; func=func)
size(field.locs, 2) >= 2 || error("`field.locs` not enough!")
n, m, _ = size(grid)
ci = CartesianIndices((n, m))
imaget = fill(RGB(1.0, 1.0, 1.0), n, m)
for k = 1:field.npixel
i, j = xy2ij(grid, field.locs[k, 1:2]...)
imaget[i, j] = colores[k]
end
Raster(imaget, dims=dims(grid)[1:2])
end
function dimends(dim::DD.Dimension)
dimct = DD.maybeshiftlocus(DD.Center(), dim)
dimsh = step(dimct) / 2
return (first(dimct) - dimsh, last(dimct) + dimsh)
end
function dimseq(dim::DD.Dimension)
dimct = DD.maybeshiftlocus(DD.Center(), dim)
dimsh = step(dimct) / 2
ticks = collect(dimct)
return vcat(ticks .- dimsh, ticks[end] + dimsh)
end
iswhite(rgb::RGB) = isone(rgb)
function xylim(image::AbstractMatrix{<:RGB}, grid::Raster)
x, y, _... = DD.dims(grid)
xg0, xg1 = extrema(dimends(x))
yg0, yg1 = extrema(dimends(y))
colored = .!iswhite.(image)
xcolored = any(colored, dims=1)[:]
ycolored = any(colored, dims=2)[:]
xl0, xl1 = minmax(dimseq(x)[findfirst(xcolored)],
dimseq(x)[findlast(xcolored)+1])
yl0, yl1 = minmax(dimseq(y)[findfirst(ycolored)],
dimseq(y)[findlast(ycolored)+1])
wider = [1.05 -0.05; -0.05 1.05]
return [xg0, xg1], [yg0, yg1], wider * [xl0, xl1], wider * [yl0, yl1]
end
function xy2ij(layer, x, y)
DD.dims2indices.(dims(layer)[1:2], [X(Near(x)), Y(Near(y))])
end
struct MPlot end
@recipe f(shptable::AG.IFeatureLayer) = AG.getgeom.(shptable)
@recipe f(ctpixels::CtPixels; salt=20) = MPlot, dye(ctpixels, salt=salt)
@recipe f(grid::Raster, field::MikrubiField; func=tiedrank) =
MPlot, dye(field, grid; func=func)
@recipe function f(::Type{MPlot}, mosaic)
image = Matrix(mosaic')
xg, yg, xl, yl = xylim(image, mosaic)
# :xguide --> "X"
# :yguide --> "Y"
:xlims --> xl
:ylims --> yl
:yflip --> false
:aspect_ratio --> :auto
xg, yg, reverse(image, dims=1)
end
| Mikrubi | https://github.com/Mikumikunisiteageru/Mikrubi.jl.git |
|
[
"MIT"
] | 1.3.5 | 9dc60147752098eb1e3a2b2cee884f42fa86ef15 | code | 4810 | # src/shape.jl
"""
filterext(dir::AbstractString, extset=nothing) :: Vector{String}
Find all file names in `dir` with extensions in `extset`. When `extset` is set
to `nothing` (by default), all extensions are acceptable.
"""
function filterext(dir::AbstractString, extset=nothing)
if isnothing(extset)
check = isfile
else
check = path -> isfile(path) && last(splitext(path)) in extset
end
filenames = filter(check, readdir(dir, join=true))
return filenames
end
"""
readshape(path::AbstractString, index::Int=-1;
extset = [".shp", ".geojson", ".gpkg"]) :: AG.IFeatureLayer
Read a shape file located at `path`. If `path` refers to a file, the file is
directly read; otherwise, if `path` refers to a directory, a random shape file
inside is read.
`extset` describes the possible extensions of shape files (see also
[`readlayers`](@ref)). By setting `extset` to `nothing`, the extension
filtering is skipped, i.e., all files are regarded as shape files.
`extset` is ignored when `path` refers to a file.
The shape file should contain a dataset. When the dataset consists of multiple
layers, `index` indicates which data layer should be returned.
"""
function readshape(path::AbstractString, index::Int=-1;
extset = [".shp", ".geojson", ".gpkg"])
ispath(path) || error("No such file or directory!")
if isdir(path)
filenames = filterext(path, extset)
if length(filenames) == 0
error("No files with extensions in `extset` in the directory!")
elseif length(filenames) == 1
path = first(filenames)
else
@warn tw"Multiple files with extensions in `extset` exist in the
directory! Now choose a random one to read."
path = first(filenames)
end
end
dataset = AG.read(path)
if AG.nlayer(dataset) == 1
index = 0
elseif AG.nlayer(dataset) > 1 && index == -1
display(dataset)
error(tw"Multiple data layers exist in the dataset!
Please designate `index` for one.")
end
shptable = AG.getlayer(dataset, index)
isempty(shptable) && error(tw"No shapes in the dataset!")
return shptable
end
"""
goodcolumns(shptable::AG.IFeatureLayer) :: Dict{String, Vector}
Find all properties of features in `shptable` where entries are all unique and
either integers or strings (types whose `isequal` is well-defined).
"""
function goodcolumns(shptable::AG.IFeatureLayer)
fields = Dict{String, Vector}()
feature = first(shptable)
for i = 0 : AG.nfield(shptable)-1
list = AG.getfield.(shptable, i)
if eltype(list) <: Union{Integer, AbstractString} && allunique(list)
name = AG.getname(AG.getfielddefn(feature, i))
fields[name] = list
end
end
fields
end
@deprecate goodproperties(shptable) goodcolumns(shptable)
"""
lookup(shptable::AG.IFeatureLayer,
column::Union{AbstractString, Symbol}, entry)
lookup(shptable::AG.IFeatureLayer,
column::Union{AbstractString, Symbol}, entries::AbstractArray)
lookup(shptable::AG.IFeatureLayer)
Find the row(s) in the shape table whose `column` record(s) equal(s) `entry`
or the elements of `entries`. When the third argument is an array, the results
are returned as an array of the same shape by broadcasting.
"""
function lookup(shptable::AG.IFeatureLayer,
column::Union{AbstractString, Symbol}, entry)
feature = first(shptable)
i = AG.findfieldindex(feature, column)
if i == -1 || isnothing(i)
columns = keys(goodcolumns(shptable))
error(textwrap("No column in the shapefile named `$column`!
Recommended alternatives are $(join(repr.(columns), ", ")).
Please select one from them as `column`."))
end
shpcol = AG.getfield.(shptable, i)
index = findall(shpcol .== entry)
if length(index) == 1
return index[1]
elseif length(index) > 1
@warn textwrap("Multiple entries in the column $column equal to
$(repr(entry))! All of these are returned, but please
select exactly one from them for succeeding processing.")
return index
end # length(index) == 0 afterwards
if isa(entry, AbstractString) && eltype(shpcol) <: Integer
@warn tw"Types mismatched: `entry` is a string while the column
designated contains integers. Please try again after using `parse`
or `tryparse` to convert `entry` to an integer. Nothing is
returned here."
return nothing
elseif isa(entry, Integer) && eltype(shpcol) <: AbstractString
@warn tw"Types mismatched: `entry` is an integer while the
column designated contains strings. Please try again after using
`string` to convert `entry`. Nothing is returned here."
return nothing
end
@warn textwrap("No matched record. Please check the input arguments
again. Nothing is returned here.")
return nothing
end
lookup(shptable::AG.IFeatureLayer,
column::Union{AbstractString, Symbol}, entries::AbstractArray) =
lookup.([shptable], [column], entries)
lookup(shptable::AG.IFeatureLayer) = lookup(shptable, "", 0)
| Mikrubi | https://github.com/Mikumikunisiteageru/Mikrubi.jl.git |
|
[
"MIT"
] | 1.3.5 | 9dc60147752098eb1e3a2b2cee884f42fa86ef15 | code | 4380 | # test/exalliwalli.jl
using Mikrubi
using Test
import Rasters
dir = pwd()
cd(joinpath(pkgdir(Mikrubi), "examples", "alliwalli"))
@testset "readfield" begin
global china = readfield("chinafield.mkuf")
@test isa(china, MikrubiField{Int, Float64, Float64})
@test china.mcounty == 2893
@test china.npixel == 62716
@test china.dvar == 3
@test sum(china.ctids) == 84874351
@test isapprox(sum(china.locs), 8.883390833333332e6)
@test isapprox(sum(china.vars), -32830.26854044552)
@test china.ctids[3939] == 199
@test isapprox(china.locs[3939, :],
[105.08333333333331, 25.083333333333343])
@test isapprox(china.vars[3939, :],
[-2.8994803671788847, 0.7973346798555266, -0.5631648131291713])
end
@testset "readlayers" begin
global ylayers = readlayers("ylayers")
@test isa(ylayers, Rasters.RasterStack)
@test size(ylayers) == (2160, 1080, 1)
@test sum(Rasters.boolmask(ylayers)) == 35813
ybio1 = ylayers[:ybio1]
@test isa(ybio1, Rasters.Raster{Float64})
@test size(ybio1) == (2160, 1080, 1)
@test sum(Rasters.boolmask(ybio1)) == 35813
@test isapprox(ybio1[1839, 239, 1], 1.7729962288447991)
@test isapprox(sum(skipmissing(ybio1)), 1.738049704158584e-10)
end
@testset "readlist" begin
global ctlist = readlist("countylist.txt")
@test isa(ctlist, Vector{Int})
@test length(ctlist) == 46
@test ctlist[39] == 72
@test sum(ctlist) == 36844
ctlistfilename = joinpath(tempdir(), "countylist.txt")
isfile(ctlistfilename) && rm(ctlistfilename)
writelist(ctlistfilename, ctlist)
ctlist1 = readlist(ctlistfilename)
@test isa(ctlist1, Vector{Int})
@test ctlist == ctlist1
rm(ctlistfilename)
end
@testset "fit" begin
optresults = []
global model = fit(china, ctlist; optresult=optresults)
optresult = optresults[1]
@test optresult.ls_success
@test isapprox(optresult.minimum, 126.65599400745549)
@test isa(model, MikrubiModel{Float64})
@test model.dvar == 3
@test isapprox(model.params, [1.4842288152354197,
-1.3603311815698715, -0.38761691866210646, 1.1231074177981228,
1.2090116395112087, -0.10334796181736790, 14.747024521778938,
-14.878922083170924, 11.9705675223002300, 30.299436373642205])
model1 = MikrubiModel(3, [1.4,-1.4,-0.4,1.1,1.2,-0.1,14.7,-14.9,12.0,30.3])
e0 = Mikrubi.mlogL(china, ctlist, model.params)
e1 = Mikrubi.mlogL(china, ctlist, model1.params)
@test isapprox(e0, 126.65599400745549)
@test isapprox(e1, 303.59978177848010)
@test e0 < e1
end
@testset "writemodel" begin
modelfilename = joinpath(tempdir(), "model.mkum")
isfile(modelfilename) && rm(modelfilename)
writemodel(modelfilename, model)
model1 = readmodel(modelfilename)
@test model.dvar == model1.dvar
@test isapprox(model.params, model1.params)
rm(modelfilename)
end
@testset "predict" begin
global geodist = predict(ylayers, model)
@test isa(geodist, Rasters.Raster{Float64})
@test length(geodist) == 2332800
@test size(geodist) == (2160, 1080, 1)
@test sum(Rasters.boolmask(geodist)) == 35813
@test isapprox(geodist[39*43, 39*10, 1], 0.013895678502362063)
@test isapprox(sum(skipmissing(geodist)), 28.461626260733837)
end
@testset "writelayer" begin
layerfilename = joinpath(tempdir(), "geodist.tif")
isfile(layerfilename) && rm(layerfilename)
writelayer(layerfilename, geodist)
geodist1 = Rasters.Raster(layerfilename)
@test isa(geodist1, Rasters.Raster{Float64})
@test geodist == geodist1
rm(layerfilename)
end
@testset "predict" begin
p = predict(china, model)
@test isa(p, Dict{Int, Vector{Tuple{Vector{Float64}, Float64}}})
@test length(p) == 2893
@test sum(length.(values(p))) == 62716
p39 = p[39]
@test isa(p39, Vector{Tuple{Vector{Float64}, Float64}})
@test length(p39) == 30
loc, prob = first(p39)
@test isapprox(loc, [100.08333333333331, 28.583333333333336])
@test isapprox(prob, 0.014382846487993373)
end
@testset "probcounties" begin
pc = probcounties(china, model)
@test isa(pc, Dict{Int, Float64})
@test length(pc) == 2893
@test sum(keys(pc)) == 4188157
@test isapprox(sum(values(pc)), 45.27843660370468)
@test isapprox(pc[39], 0.19286274249159574)
end
@testset "predictcounty" begin
p39 = predictcounty(china, model, 39)
@test isa(p39, Vector{Tuple{Vector{Float64}, Float64}})
@test length(p39) == 30
loc, prob = first(p39)
@test isapprox(loc, [100.08333333333331, 28.583333333333336])
@test isapprox(prob, 0.014382846487993373)
@test p39 == predict(china, model)[39]
end
cd(dir)
| Mikrubi | https://github.com/Mikumikunisiteageru/Mikrubi.jl.git |
|
[
"MIT"
] | 1.3.5 | 9dc60147752098eb1e3a2b2cee884f42fa86ef15 | code | 929 | # test/exdemo.jl
using Mikrubi
using Test
@testset "a small example" begin
cty = ["a", "b", "c", "d", "d", "e", "f", "f", "e"]
geo = [1 1; 1 2; 1 3; 2 1; 2 2; 2 3; 3 1; 3 2; 3 3]
env = [1 1; 1 2; 1 3; 2 1; 2 2; 2 3; 3 1; 3 2; 3 3]
field = MikrubiField(cty, geo, env)
@test field.npixel == 9
@test field.mcounty == 6
@test field.dvar == 2
@test field.ids == string.(collect("abcdef"))
@test getindex.([field.starts], field.ids) == [1, 2, 3, 4, 6, 8]
@test getindex.([field.stops], field.ids) == [1, 2, 3, 5, 7, 9]
model = fit(field, ["a", "c", "f"])
@test model.dvar == 2
@test 0 <= length(samplecounties(field, model)) <= 6
@test isapprox(sum(Mikrubi.probpixels(field, model)), 3.137251f0)
@test isapprox(sum(values(probcounties(field, model))), 2.8943691f0)
@test isapprox(lipschitz(model, field, wholespace=false), 0.21698886f0)
@test isapprox(lipschitz(model, field, wholespace=true), 4.4162006f0)
end
| Mikrubi | https://github.com/Mikumikunisiteageru/Mikrubi.jl.git |
|
[
"MIT"
] | 1.3.5 | 9dc60147752098eb1e3a2b2cee884f42fa86ef15 | code | 11150 | # test/exjui.jl
using Mikrubi
using Logistics
using Test
import ArchGDAL; const AG = ArchGDAL
import GADM
import GDAL
import RasterDataSources; const RDS = RasterDataSources
import Rasters
@testset "GADM" begin
global shppath = GADM.download("NPL"; version="4.1")
@test isdir(shppath)
shpfiles = readdir(shppath, join=false, sort=true)
@test length(shpfiles) == 1
global gpkg, = shpfiles
@show shppath
@show gpkg
@test gpkg == "gadm41_NPL.gpkg"
end
@testset "RasterDataSources" begin
get!(ENV, "RASTERDATASOURCES_PATH", tempdir())
RDS.getraster(RDS.WorldClim{RDS.BioClim}, res="10m")
global climpath = RDS.rasterpath(RDS.WorldClim{RDS.BioClim})
@test isdir(climpath)
filenames = readdir(climpath, join=true)
@test length(filenames) == 20
@test length(filter(isfile, filenames)) == 19
@test length(filter(isdir, filenames)) == 1
@test unique(last.(splitext.(filter(isfile, filenames)))) == [".tif"]
@test last(splitpath(first(filenames))) == "wc2.1_10m_bio_1.tif"
end
@testset "readshape" begin
@test_throws ErrorException readshape(joinpath(shppath, gpkg))
@test_throws ErrorException readshape(joinpath(shppath, gpkg), -1)
@test isa(readshape(joinpath(shppath, gpkg), 4), AG.IFeatureLayer)
@test_throws ErrorException readshape(joinpath(shppath, gpkg), 5)
@test_throws ErrorException readshape(shppath; extset=[])
@test_throws ErrorException readshape(shppath)
@test_throws ErrorException readshape(shppath, -1)
body, tail = splitext(gpkg)
gpkg_ = body * "_" * tail
cp(joinpath(shppath, gpkg), joinpath(shppath, gpkg_))
msg = Mikrubi.tw"Multiple files with extensions in `extset` exist
in the directory! Now choose a random one to read."
@test_logs (:warn, msg) readshape(shppath, 3)
rm(joinpath(shppath, gpkg_))
global shptable = readshape(shppath, 3)
@test isa(shptable, AG.IFeatureLayer)
@test AG.getname(shptable) == "ADM_ADM_3"
@test AG.nfeature(shptable) == 75
pt = AG.getgeom(AG.getgeom(AG.getgeom(first(shptable), 0), 0), 0)
@test isapprox(AG.getx(pt, 0), 85.406402588)
@test isapprox(AG.gety(pt, 0), 27.632347107)
end
@testset "goodcolumns" begin
gc = goodcolumns(shptable)
@test sort!(collect(keys(gc))) == ["GID_3", "NAME_3"]
end
@testset "lookup" begin
@test_throws ErrorException lookup(shptable)
@test_throws ErrorException lookup(shptable, "Any", 1)
@test_throws ErrorException lookup(shptable, :Any, 1)
@test nothing === lookup(shptable, "GID_3", 1)
end
@testset "readlayers" begin
global layers = readlayers(climpath)
@test isa(layers, Rasters.RasterStack)
@test length(layers) == 19
@test keys(layers)[1] == Symbol("wc2.1_10m_bio_1")
@test keys(layers)[19] == Symbol("wc2.1_10m_bio_19")
bm = Rasters.boolmask(layers)
@test sum(bm) == 808053
@test length(collect(skipmissing(first(layers)))) == 808053
@test isapprox(sum(collect(skipmissing(first(layers)))), -3.262779f6)
@test isapprox(collect(skipmissing(first(layers)))[1:10], Float32[0.0, 0.0,
-2.5923913, -8.346475, -16.416666, -17.895636,
-6.286264, -6.5873747, -5.2716227, -2.6854463])
end
@testset "rasterize" begin
layer = first(layers)
@test_throws MethodError Mikrubi.CtPixels()
@test_throws MethodError Mikrubi.CtPixels(layer)
indices = Mikrubi.indicate(layer)
@test isa(indices, Rasters.Raster{Int})
@test indices[0393939] == 0
@test indices[1393939] == 1393939
cpx = Mikrubi.CtPixels(indices)
@test isa(cpx, Mikrubi.CtPixels)
@test length(cpx) == 0
global ctpixels = rasterize(shptable, layer)
@test isa(ctpixels, Mikrubi.CtPixels)
@test length(ctpixels) == 1127
@test ctpixels.indices == Mikrubi.indicate(layer) == cpx.indices
@test length(ctpixels.list) == 1127
@test ctpixels.list[39] == (5, 811592)
@test Mikrubi.getcounty(ctpixels, 39) == 5
@test Mikrubi.getpixel(ctpixels, 39) == 811592
@test Mikrubi.getcounties(ctpixels)[39] == 5
@test Mikrubi.getpixels(ctpixels)[39] == 811592
@test isa(Mikrubi.getcounties(ctpixels), Vector{Int})
@test isa(Mikrubi.getpixels(ctpixels), Vector{Int})
@test length(Mikrubi.getcounties(ctpixels)) == 1127
@test length(Mikrubi.getpixels(ctpixels)) == 1127
end
@testset "buildfield" begin
layers_ = deepcopy(layers)
ctpixels_ = deepcopy(ctpixels)
ctpixels_.indices[1, 1, 1] = 1
push!(ctpixels_.list, (76, 1))
Mikrubi.masklayers!(layers_, ctpixels_)
dimlower = Mikrubi.DimLower(layers_)
idx, ematrix, elayers = dimlower(layers_; rabsthres=0.8, nprincomp=3)
field = Mikrubi.buildfield(ctpixels_, idx, ematrix, first(layers_))
@test isa(field, MikrubiField)
end
@testset "DimLower" begin
layers_ = deepcopy(layers)
Mikrubi.masklayers!(layers_, ctpixels)
matrix, idx = Mikrubi.extractlayers(layers_)
@test matrix[:, 1] == collect(skipmissing(first(layers_)))
colid = Mikrubi.selectvars(matrix; rabsthres=0.8)
@test colid == [2, 3, 5, 7, 12, 14, 17, 19]
smatrix = matrix[:, colid]
@test isapprox(smatrix[1:10], Float32[11.106563, 12.474979,
11.979521, 12.020729, 12.416375, 10.518167,
11.936396, 11.022813, 11.316313, 11.497708])
colmean, projwstd = Mikrubi.princompvars(smatrix; nprincomp=3)
@test isapprox(colmean[:], Float32[11.103582, 44.19512,
24.790443, 25.243826, 1304.4342, 7.5794873, 47.97265, 81.61539])
@test isapprox(projwstd, Float32[
0.4013147 0.3629837 0.18809055;
-0.061730698 -0.13327469 -0.108475216;
0.030078402 -0.036740705 0.058986623;
0.13204093 0.17373833 0.11315063;
0.00021478014 -0.00088945474 0.00058913097;
-0.13801464 -0.013789057 0.047764596;
-0.01994192 0.0015393574 0.021920582;
-0.008666443 0.008949931 0.0048636016])
f_, yl_, dimlower = Mikrubi._makefield(layers, ctpixels, 0.8, 3)
@test isa(dimlower, Mikrubi.DimLower{Float64})
@test dimlower.new == false
@test dimlower.colid == [2, 3, 5, 7, 12, 14, 17, 19]
@test isa(dimlower.colmean, Matrix{Float64})
@test size(dimlower.colmean) == (1, 8)
@test isapprox(dimlower.colmean[:], Float32[11.103582, 44.19512,
24.790443, 25.243826, 1304.4342, 7.5794873, 47.97265, 81.61539])
@test isa(dimlower.projwstd, Matrix{Float64})
@test size(dimlower.projwstd) == (8, 3)
@test isapprox(dimlower.projwstd, Float32[
0.4013147 0.3629837 0.18809055;
-0.061730698 -0.13327469 -0.108475216;
0.030078402 -0.036740705 0.058986623;
0.13204093 0.17373833 0.11315063;
0.00021478014 -0.00088945474 0.00058913097;
-0.13801464 -0.013789057 0.047764596;
-0.01994192 0.0015393574 0.021920582;
-0.008666443 0.008949931 0.0048636016])
idx, ematrix, elayers = dimlower(layers; rabsthres=0.8, nprincomp=3)
@test isa(idx, Vector{Int})
@test length(idx) == 808053
@test sum(idx) == 923357905365
@test isa(ematrix, Matrix{Float64})
@test size(ematrix) == (808053, 3)
@test isapprox(sum(ematrix), 3.4257468f6)
end
@testset "makefield" begin
f0, yl0 = makefield(layers, ctpixels)
@test sprint(show, f0) ===
"Mikrubi Field: geo_dim = 2, env_dim = 3, 1127 pixels, and 75 counties"
@test sum(Rasters.boolmask(layers)) == 808053
@test isa(f0, MikrubiField)
@test isa(yl0, Rasters.RasterStack)
path = joinpath(tempdir(), "field.mkuf")
@test nothing === writefield(path, f0)
str = String(read(path))
@test str[1:12] == "I\tL\tL\tV\tV\tV\n"
pathformula(s) = joinpath(tempdir(), "layer_$s.tif")
@test nothing === writelayers(pathformula.(5:7), yl0)
yl0_ = readlayers(pathformula.(5:7))
@test length(yl0_) == 3
@test size(yl0_) == (2160, 1080, 1)
@test nothing === writelayers(pathformula("*"), yl0)
yl0_ = readlayers(pathformula.(1:3))
@test length(yl0_) == 3
@test size(yl0_) == (2160, 1080, 1)
global field, ylayers = makefield(layers, shptable);
@test sum(Rasters.boolmask(layers)) == 808053
@test yl0 == ylayers
@test f0.ctids == field.ctids
@test f0.locs == field.locs
@test f0.vars == field.vars
@test sum(field.ctids) == 45008
@test isapprox(sum(field.locs), 126492.66666666664)
@test isapprox(sum(field.vars), -395.2514f0)
@test length(ylayers) == 3
bm = Rasters.boolmask(ylayers)
@test sum(bm) == 585
@test isapprox(collect(maximum(ylayers)),
Float32[3.5854542, 4.006934, 2.3392994])
@test isapprox(sum.(collect(ylayers[bm])), zeros(3), atol=1e-10)
f1, yl1, pyl1 = makefield(layers, ctpixels, layers)
@test sum(Rasters.boolmask(layers)) == 808053
@test yl0 == yl1
@test isapprox(collect(yl1[bm]), collect(pyl1[bm]))
@test sum(Rasters.boolmask(pyl1)) == 808053
f2, yl2, pyl2 = makefield(layers, shptable, layers)
@test sum(Rasters.boolmask(layers)) == 808053
@test yl0 == yl2
@test isapprox(collect(yl2[bm]), collect(pyl2[bm]))
@test sum(Rasters.boolmask(pyl2)) == 808053
end
@testset "lookup" begin
regions = ["Dadeldhura", "Doti", "Bajhang", "Kalikot",
"Mugu", "Jajarkot", "Jumla", "Rolpa", "Dolpa", "Baglung",
"Mustang", "Manang", "Gorkha", "Nuwakot", "Rasuwa",
"Okhaldhunga", "Solukhumbu"]
@test length(regions) == 17
@test allunique(regions)
@test lookup(shptable, "NAME_3", "Bajhang") == 41
@test AG.getfeature(x -> AG.getfield(x, 9), shptable, 41) == "Bajhang"
@test isnothing(lookup(shptable, "NAME_3", "Bazhang"))
@test lookup(shptable, "NAME_2", "Seti") == [40, 41, 42, 43, 44]
global regcodes = lookup(shptable, "NAME_3", regions)
@test isa(regcodes, Vector{Int})
@test regcodes == [37,43,41,53,54,48,52,57,50,60,61,67,64,6,7,31,34]
end
@testset "findnearest" begin
@test_throws ErrorException Mikrubi.findnearest([85.32], field)
@test_throws ErrorException Mikrubi.findnearest([8, 5, 3, 2], field)
@test Mikrubi.findnearest([85.321201, 27.722903], field) == 19
@test Mikrubi.findnearest([85.321201 27.722903], field) == 19
@test Mikrubi.findnearest([85.321201 27.722903]', field) == 19
@test isapprox(field.locs[19, :], [85.25, 27.75])
end
@testset "findnearests" begin
@test Mikrubi.findnearests(
[85.321201 27.722903; 89.368606 24.85836], field) == [19, 345]
@test Mikrubi.findnearests(
[[85.321201, 27.722903], [89.368606, 24.85836]], field) == [19, 345]
end
@testset "fit" begin
optresults = []
global model = fit(field, regcodes; optresult=optresults)
optresult = optresults[1]
@test optresult.ls_success
@test isapprox(optresult.minimum, 20.805983368146116)
@test isa(model, MikrubiModel{Float64})
@test model.dvar == 3
@test isapprox(model.params, [7.180739724704129,
-0.04075956831789931, -0.54476207315363, 0.8879548516254412,
0.3960510254962835, 0.00011517895691697269, 193.3943652333833,
-1.5361309085483503, 24.225666096714022, 181.11673123077227])
model1 = MikrubiModel(3, model.params .* 1.01f0)
e0 = Mikrubi.mlogL(field, regcodes, model.params)
e1 = Mikrubi.mlogL(field, regcodes, model1.params)
@test isapprox(e0, 20.805983368146116)
@test isapprox(e1, 34.81738959663534)
@test e0 < e1
pp = Mikrubi.ppresence(field, model.params)
@test isa(pp, Vector{Logistic{Float64}})
@test size(pp) == (1127,)
pp1 = Mikrubi.ppresence(field.vars, model.params)
@test pp1 == pp
@test typeof(pp1) == typeof(pp)
end
@testset "predict" begin
global geodist = predict(ylayers, model)
@test isa(geodist, Rasters.Raster{Float64})
@test length(geodist) == 2332800
@test size(geodist) == (2160, 1080, 1)
@test sum(Rasters.boolmask(geodist)) == 585
@test isapprox(geodist[809439], 0.0016682358794606333)
@test isapprox(sum(skipmissing(geodist)), 12.929364196004789)
end
| Mikrubi | https://github.com/Mikumikunisiteageru/Mikrubi.jl.git |
|
[
"MIT"
] | 1.3.5 | 9dc60147752098eb1e3a2b2cee884f42fa86ef15 | code | 2357 | # test/exsim.jl
using Logistics
using Mikrubi
using Test
function county_list(R, hw=240)
sl = 1
sr = 1 + R
pixels = -hw+0.5 : hw-0.5
junctions = vcat(-hw, reverse(-sl:-sl:1-hw), 0:sr:hw-1+sr) .+ 0.5
cumsum(in.(pixels, (junctions,))), collect(pixels)
end
@testset "field" begin
R = 30
ctids, vars = county_list(R)
@test length(ctids) == 480
@test length(vars) == 480
@test issorted(ctids)
@test issorted(vars)
@test isa(ctids, Vector{Int})
@test sum(ctids) == 87572
@test isa(vars, Vector{Float64})
@test sum(vars) == 0.0
global asym = MikrubiField(ctids, vars, vars)
@test isa(asym, MikrubiField{Int, Float64, Float64})
@test asym.mcounty == 248
@test asym.npixel == 480
@test asym.dvar == 1
@test asym.ctids == ctids
@test asym.locs == repeat(vars, 1, 1)
@test asym.vars == repeat(vars, 1, 1)
end
@testset "model" begin
params = [0.02, 0, 1]
@test_throws DimensionMismatch MikrubiModel(2, params)
global model = MikrubiModel(1, params)
@test isa(model, MikrubiModel{Float64})
@test model.dvar == 1
@test model.params == params
pc = probcounties(asym, model)
@test isapprox(pc[39], 3.253660629809474e-8)
@test isapprox(pc[239], 0.2687645074317543)
@test isapprox(pc[248], 1.3446977198405818e-8)
pcl = probcounties(Logistic, asym, model)
@test isapprox(pcl[39], 3.253660629809474e-8)
@test isapprox(pcl[239], 0.2687645074317543)
@test isapprox(pcl[248], 1.3446977198405818e-8)
pp = Mikrubi.probpixels(asym, model)
@test pp == predict(asym.vars, model)
@test isapprox(pp[39], 3.253660629809474e-8)
@test isapprox(pp[239], 0.2687645074317544)
@test isapprox(pp[248], 0.26454071698645776)
c = predictcounty(asym, model, 239)
@test isa(c, Vector{Tuple{Vector{Float64}, Float64}})
@test length(c) == 1
@test length(c[1]) == 2
@test c[1][1] == [-1.5]
@test isapprox(c[1][2], 0.2687645074317544)
sample = samplecounties(asym, model)
@test isa(sample, Vector{Int})
end
@testset "fit" begin
sample = [183,195,196,203,204,206,207,208,222,233,237,240,241,242]
optresults = []
model1 = fit(asym, sample; optresult=optresults)
optresult = optresults[1]
@test optresult.ls_success
@test isapprox(optresult.minimum, 32.021621971313365)
@test isa(model1, MikrubiModel{Float64})
@test model1.dvar == 1
@test all(isapprox.(model1.params,
[0.025545912586333833, 0.013468233505695898, 1.1292564512607859]))
end
| Mikrubi | https://github.com/Mikumikunisiteageru/Mikrubi.jl.git |
|
[
"MIT"
] | 1.3.5 | 9dc60147752098eb1e3a2b2cee884f42fa86ef15 | code | 2702 | # test/functions.jl
using Mikrubi
using Test
@testset "textwrap and @tw_str" begin
@test Mikrubi.textwrap("Very \n\t good") == "Very good"
@test Mikrubi.textwrap("Very
good") == "Very good"
@test Mikrubi.textwrap("Very
good") == "Very good"
@test Mikrubi.tw"Very
good" == "Very good"
@test Mikrubi.tw"Very
good" == "Very good"
end
@testset "allsame" begin
@test Mikrubi.allsame([1, 1, 2]) == false
@test Mikrubi.allsame([1, 1, 1]) == true
@test Mikrubi.allsame([1]) == true
@test_throws BoundsError Mikrubi.allsame([])
end
@testset "colmatrix" begin
@test Mikrubi.colmatrix([1]) == fill(1, 1, 1)
@test Mikrubi.colmatrix([1, 1]) == fill(1, 2, 1)
@test Mikrubi.colmatrix([1 1; 1 1]) == fill(1, 2, 2)
end
@testset "logistic" begin
@test logistic(-Inf) == 0.0
@test logistic(+Inf) == 1.0
@test logistic(0.0) == 0.5
@test logistic(0.39) + logistic(-0.39) == 1.0
end
@testset "loglogistic" begin
@test loglogistic(-Inf) == -Inf
@test loglogistic(+Inf) == 0.0
@test loglogistic(0.0) == -log(2.0)
end
@testset "dftraverse!" begin
beststate = Vector(undef, 1);
bestscore = [(0, 0.0)];
Mikrubi.dftraverse!(beststate, bestscore, Int[], (0, 0.0), 1, 3,
Bool[0 0 1; 0 0 0; 1 0 0], [0.0 0.6 0.3; 0.6 0.0 0.9; 0.3 0.9 0.0])
@test beststate == [[1, 2]]
@test bestscore == [(2, -0.6)]
end
@testset "selectvars" begin
@test Mikrubi.selectvars(
[1. 4. 7.; 2. 5. 8.; 3. 9. 27.], rabsthres=0.9) == [1, 3]
end
@testset "princompvars" begin
matrix = Float64[1 2 3 4; 0 1 2 3; 0 0 1 2; 0 0 0 1]
colmean, projwstd = Mikrubi.princompvars(matrix)
pcamatrix = (matrix .- colmean) * projwstd
@test all(isapprox.((pcamatrix' * pcamatrix)[[2,3,4,6,7,8]], 0, atol=1e-10))
@test isapprox(sum(pcamatrix), 0, atol=1e-10)
@test isapprox(sum(pcamatrix.^2), 12, atol=1e-10)
end
@testset "dvar2dparam" begin
@test Mikrubi.dvar2dparam(1) == 3
@test Mikrubi.dvar2dparam(2) == 6
@test Mikrubi.dvar2dparam(3) == 10
@test Mikrubi.dvar2dparam(19) == 210
end
@testset "decomparams" begin
@test Mikrubi.decomparams([1,2,3], 1) == (fill(1, 1, 1), [2], 3)
@test Mikrubi.decomparams([1,2,3,4,5,6], 2) == ([1 0; 2 3], [4, 5], 6)
@test Mikrubi.decomparams([1,2,3,4,5,6,7,8,9,10], 3) ==
([1 0 0; 2 3 0; 4 5 6], [7, 8, 9], 10)
end
@testset "sortfilenames!" begin
@test_throws ErrorException Mikrubi.sortfilenames!(String[])
@test Mikrubi.sortfilenames!(["123"]) == ["123"]
@test Mikrubi.sortfilenames!(["124", "123"]) == ["123", "124"]
@test Mikrubi.sortfilenames!(["bio_9.tif", "bio_10.tif",
"bio_1.tif"]) == ["bio_1.tif", "bio_9.tif", "bio_10.tif"]
@test Mikrubi.sortfilenames!(["bio_09.tif", "bio_10.tif",
"bio_01.tif"]) == ["bio_01.tif", "bio_09.tif", "bio_10.tif"]
end
| Mikrubi | https://github.com/Mikumikunisiteageru/Mikrubi.jl.git |
|
[
"MIT"
] | 1.3.5 | 9dc60147752098eb1e3a2b2cee884f42fa86ef15 | code | 2042 | # test/pyplot.jl
using Mikrubi
using Test
@testset "pyplot functions undefined" begin
@test ! @isdefined showlayer
@test ! @isdefined showfield
@test ! @isdefined showctpixels
@test ! @isdefined showshptable
end
using PyPlot
using PyCall
@testset "pyplot functions defined" begin
@test @isdefined showlayer
@test @isdefined showfield
@test @isdefined showctpixels
@test @isdefined showshptable
end
import GADM
import RasterDataSources; const RDS = RasterDataSources
import ArchGDAL; const AG = ArchGDAL
shppath = GADM.download("NPL")
get!(ENV, "RASTERDATASOURCES_PATH", tempdir())
RDS.getraster(RDS.WorldClim{RDS.BioClim}, res="10m")
climpath = RDS.rasterpath(RDS.WorldClim{RDS.BioClim})
shptable = readshape(shppath, 3)
layers = readlayers(climpath)
layer = first(layers)
ctpixels = rasterize(shptable, layer)
field, ylayers = makefield(layers, shptable)
@testset "setplot" begin
@test setplot(PyPlot) === nothing
end
@testset "geom2mat" begin
multipolygon = AG.getgeom(first(shptable), 0)
@test isa(multipolygon, AG.IGeometry{AG.wkbMultiPolygon})
parts, mat = Mikrubi.geom2mat(multipolygon)
@test parts == [0]
@test size(mat) == (2, 118)
@test isapprox(last(mat), 27.632347107)
polygon = AG.getgeom(multipolygon, 0)
@test isa(polygon, AG.IGeometry{AG.wkbPolygon})
parts, mat = Mikrubi.geom2mat(polygon)
@test parts == [0]
@test size(mat) == (2, 118)
@test isapprox(last(mat), 27.632347107)
linearring = AG.getgeom(polygon, 0)
@test isa(linearring, AG.IGeometry{AG.wkbLineString})
@test_throws ErrorException Mikrubi.geom2mat(linearring)
end
@testset "showshptable" begin
@test isa(showshptable(shptable), Vector{<:PyObject})
close()
end
@testset "showlayer" begin
@test isa(showlayer(layer), PyObject)
close()
end
@testset "showctpixels" begin
@test isa(showctpixels(ctpixels), PyObject)
close()
@test isa(showctpixels(ctpixels, layer), PyObject)
close()
end
@testset "showfield" begin
@test isa(showfield(field), PyObject)
close()
@test isa(showfield(field, layer), PyObject)
close()
end
| Mikrubi | https://github.com/Mikumikunisiteageru/Mikrubi.jl.git |
|
[
"MIT"
] | 1.3.5 | 9dc60147752098eb1e3a2b2cee884f42fa86ef15 | code | 2787 | # test/rasterizing.jl
using Mikrubi
using Test
import ArchGDAL; const AG = ArchGDAL
using Rasters
using .LookupArrays
sintv(n) = Sampled(1:n; sampling=Intervals(End()))
@testset "allium-shape" begin
raster = Raster(zeros(X(sintv(7)), Y(sintv(9))))
@test isapprox([Mikrubi.centercoords(raster, 39)...], [3.5, 5.5])
indices = Mikrubi.indicate(raster)
wkt = "POLYGON ((0.5 0.5, 2.2 5.3, 1 8, 2.3 8.7, " *
"3.5 6.5, 5.5 8.5, 6.5 7.5, 4.2 4.8, 2.8 0, 0.5 0.5))"
geom = AG.fromWKT(wkt)
ctpixels = Mikrubi.rasterize([geom], raster)
@test isa(ctpixels, Mikrubi.CtPixels)
@test all(isone.(Mikrubi.getcounties(ctpixels)))
pixels = Mikrubi.getpixels(ctpixels)
body = vcat(1:4, 8:11, 16:18, 23:25, 30:33, 37:41, 44:49, 51:56, 58:59, 62)
skin = vcat(50, 57, 61, 63)
@test isempty(setdiff(body, pixels))
@test isempty(setdiff(pixels, union(body, skin)))
end
@testset "ribbon-shape" begin
raster = Raster(zeros(X(sintv(8)), Y(sintv(8))))
@test isapprox([Mikrubi.centercoords(raster, 39)...], [6.5, 4.5])
indices = Mikrubi.indicate(raster)
# wkt = Mikrubi.tw"MULTIPOLYGON
# (((1 4, 3.5 7, 3.9 7, 1.4 4, 3.9 1, 3.5 1, 1 4)),
# ((1.6 4, 4.1 7, 4.5 7, 7 4, 4.5 1, 4.1 1, 1.6 4),
# (2 4, 4 1.6, 6 4, 4 6.4, 2 4),
# (6.4 4, 4.2 1.36, 4.3 1.24, 6.6 4, 4.3 6.76, 4.2 6.64, 6.4 4)))"
wkt = Mikrubi.tw"MULTIPOLYGON
(((1.01 4.01, 3.5 7, 3.9 7, 1.4 4, 3.9 1, 3.5 1.01, 1.01 4.01)),
((1.6 4.01, 4.1 7, 4.5 7, 7 4, 4.5 1, 4.1 1.01, 1.6 4.01),
(2.01 4.01, 4.01 1.6, 6 4, 4 6.4, 2.01 4.01),
(6.4 4, 4.2 1.36, 4.3 1.24, 6.6 4, 4.3 6.76, 4.2 6.64, 6.4 4)))"
# A bug in `_fill_line!` of Rasters v0.5.1, see:
# https://github.com/rafaqz/Rasters.jl/issues/376
geoms = AG.fromWKT(wkt)
ctpixels = Mikrubi.rasterize([geoms], raster)
@test isa(ctpixels, Mikrubi.CtPixels)
@test all(isone.(Mikrubi.getcounties(ctpixels)))
pixels = Mikrubi.getpixels(ctpixels)
body = vcat(11:14, 18:23, 26:27, 30:31, 34:35, 38:39, 42:47, 51:54)
skin = vcat(4, 5, 25, 32, 33, 40, 60, 61)
@test isempty(setdiff(body, pixels))
@test isempty(setdiff(pixels, union(body, skin)))
geom1 = AG.getgeom(geoms, 0)
geom2 = AG.getgeom(geoms, 1)
ctpixels = Mikrubi.rasterize([geom1, geom2], raster)
@test isa(ctpixels, Mikrubi.CtPixels)
@test sort!(unique(Mikrubi.getcounties(ctpixels))) == [1, 2]
pixels1 = last.(filter(x -> x[1] == 1, ctpixels.list))
body1 = vcat(11:12, 18:20, 26:27, 34:35, 42:44, 51:52)
skin1 = vcat(4, 25, 33, 60)
@test isempty(setdiff(body1, pixels1))
@test isempty(setdiff(pixels1, union(body1, skin1)))
pixels2 = last.(filter(x -> x[1] == 2, ctpixels.list))
body2 = vcat(12:14, 19:23, 26:27, 30:31, 34:35, 38:39, 43:47, 52:54)
skin2 = vcat(5, 32, 40, 61)
@test isempty(setdiff(body2, pixels2))
@test isempty(setdiff(pixels2, union(body2, skin2)))
end
| Mikrubi | https://github.com/Mikumikunisiteageru/Mikrubi.jl.git |
|
[
"MIT"
] | 1.3.5 | 9dc60147752098eb1e3a2b2cee884f42fa86ef15 | code | 2032 | # test/recipesbase.jl
using Mikrubi
using Test
using Plots
# using FileIO
import GADM
import RasterDataSources; const RDS = RasterDataSources
shppath = GADM.download("NPL")
get!(ENV, "RASTERDATASOURCES_PATH", tempdir())
RDS.getraster(RDS.WorldClim{RDS.BioClim}, res="10m")
climpath = RDS.rasterpath(RDS.WorldClim{RDS.BioClim})
shptable = readshape(shppath, 3)
layers = readlayers(climpath)
layer = first(layers)
ctpixels = rasterize(shptable, layer)
field, ylayers = makefield(layers, shptable)
white = RGB(1.0, 1.0, 1.0)
# function checksumifrepl(filename, r, g, b)
# savefig(joinpath(tempdir(), filename))
# img = FileIO.load(joinpath(tempdir(), filename))
# sumcolor = sum(white .- img)
# @test isapprox(sumcolor, RGB(r, g, b))
# end
@testset "plot a shptable" begin
@test isa(plot(shptable), Plots.Plot)
# checksumifrepl("shptable.png",
# 41932.725490196084, 39602.3019607843, 41112.08627450979)
@test isa(plot!(shptable), Plots.Plot)
end
@testset "plot a layer" begin
@test isa(plot(layer), Plots.Plot)
# checksumifrepl("layer.png",
# 18255.329411764702, 22052.23921568627, 26050.831372549022)
@test isa(plot!(layer), Plots.Plot)
end
@testset "plot a CtPixels" begin
@test isa(plot(ctpixels), Plots.Plot)
# checksumifrepl("ctpixels.png",
# 26157.454901960777, 24958.325490196075, 23180.349019607835)
@test isa(plot!(ctpixels), Plots.Plot)
@test isa(plot(ctpixels; salt=21), Plots.Plot)
# checksumifrepl("ctpixels2.png",
# 25980.529411764706, 25731.533333333326, 22352.149019607838)
@test isa(plot!(ctpixels; salt=21), Plots.Plot)
end
@testset "plot a Mikrubi field" begin
@test isa(plot(layer, field), Plots.Plot)
# checksumifrepl("field.png",
# 29293.13725490196, 27610.82745098039, 30779.545098039212)
@test isa(plot!(layer, field), Plots.Plot)
@test isa(plot(layer, field; func=identity), Plots.Plot)
# checksumifrepl("field2.png",
# 25963.666666666664, 30761.10588235294, 31637.52549019608)
@test isa(plot!(layer, field; func=identity), Plots.Plot)
end
Plots.closeall()
| Mikrubi | https://github.com/Mikumikunisiteageru/Mikrubi.jl.git |
|
[
"MIT"
] | 1.3.5 | 9dc60147752098eb1e3a2b2cee884f42fa86ef15 | code | 802 | # test/runtests.jl
using Mikrubi
using Aqua
using SafeTestsets
# Aqua.test_ambiguities([Mikrubi, Base, Core])
Aqua.test_unbound_args(Mikrubi)
Aqua.test_undefined_exports(Mikrubi)
Aqua.test_piracy(Mikrubi)
Aqua.test_project_extras(Mikrubi)
Aqua.test_stale_deps(Mikrubi)
Aqua.test_deps_compat(Mikrubi)
Aqua.test_project_toml_formatting(Mikrubi)
@safetestset "functions" begin include("functions.jl") end
@safetestset "rasterizing" begin include("rasterizing.jl") end
@safetestset "exdemo" begin include("exdemo.jl") end
@safetestset "exsim" begin include("exsim.jl") end
@safetestset "exalliwalli" begin include("exalliwalli.jl") end
@safetestset "exjui" begin include("exjui.jl") end
@safetestset "recipesbase" begin include("recipesbase.jl") end
@safetestset "pyplot" begin include("pyplot.jl") end
| Mikrubi | https://github.com/Mikumikunisiteageru/Mikrubi.jl.git |
|
[
"MIT"
] | 1.3.5 | 9dc60147752098eb1e3a2b2cee884f42fa86ef15 | docs | 5220 | # Mikrubi.jl
[](https://Mikumikunisiteageru.github.io/Mikrubi.jl/stable)
[](https://Mikumikunisiteageru.github.io/Mikrubi.jl/dev)
[](https://github.com/Mikumikunisiteageru/Mikrubi.jl/actions/workflows/CI.yml)
[](https://codecov.io/gh/Mikumikunisiteageru/Mikrubi.jl)
[](https://github.com/JuliaTesting/Aqua.jl)
[](https://pkgs.genieframework.com?packages=Mikrubi)
*Mikrubi: a model for species distributions using region-based records*
Many species occurrence records from specimens and publications are based on regions such as administrative units (thus sometimes called `counties` in the codes). These region-based records are accessible and dependable, and sometimes they are the only available data source; however, few species distribution models accept such data as direct input. In [Yang et al. (2023)](https://onlinelibrary.wiley.com/doi/full/10.1111/ecog.06283), we present a method named Mikrubi for robust prediction of species distributions from region-based occurrence data. This is the Julia package implementing the algorithms.
## Installation
Mikrubi.jl currently requires Julia v1.7.0 or higher. This registered package can be installed inside the Julia REPL by typing
```julia
]add Mikrubi
```
## Input data requirements
To estimate the fine-scale distribution of a species using its presence or absence in each region, the package generally requires three types of input data:
- A map describing the shapes of all regions as polygons. For many countries or regions, such an administrative partition map can be found from [Database of Global Administrative Areas](https://gadm.org/) (accessible via [GADM.jl](https://github.com/JuliaGeo/GADM.jl), see examples/prinsepia/jui.jl). Specifically for China, the correct county-level shapefile is available from [National Platform of Common Geospatial Information Services](https://www.tianditu.gov.cn/) and [Gaode Map Open Platform](https://lbs.amap.com/).
- Raster layers of climatic factors of the same size, shape, and resolution. A commonly used dataset is [WorldClim](https://worldclim.org/data/index.html) (accessible via [RasterDataSources.jl](https://github.com/EcoJulia/RasterDataSources.jl)).
- A list of regions occupied by the species.
## Workflow
A typical workflow of the package resembles the following lines, where `shppath` refers to the path to the map file, `climpath` refers to the directory path to the raster files, and `ctlistpath` refers to the path to the list containing lines of integer identifiers representing the regions.
```julia
using Mikrubi
shptable = readshape(shppath)
layers = readlayers(climpath)
ctlist = readlist(ctlistpath)
field, ylayers = makefield(layers, shptable)
model = fit(field, ctlist)
geodist = predict(ylayers, model)
writelayer("path/to/output/geodist.tif", geodist)
```
## Citation
An introduction of this package and the model it implements has been published on [Ecography (10.1111/ecog.06283)](https://onlinelibrary.wiley.com/doi/full/10.1111/ecog.06283).
Thank Michael Krabbe Borregaard @mkborregaard and Rafael Schouten @rafaqz for reviewing the code and the manuscript and giving greatly constructive opinions!
If you apply the package or the model in your research, please cite them via the paper above or as the following after substituting the version:
```
Yang, Y.-C., Zhang, Q. and Chen, Z.-D. 2023. Mikrubi: a model for species distributions using region-based records. – Ecography 2023: e06283 (ver. 1.3.2).
```
The equivalent BibTeX file for citation is available at [CITATION.bib](https://github.com/Mikumikunisiteageru/Mikrubi.jl/blob/master/CITATION.bib). You may also import this file or the following BibTeX code block to your reference management software.
```bibtex
@article{Mikrubi2023,
author = {Yang, Yu-Chang and Zhang, Qian and Chen, Zhi-Duan},
title = {Mikrubi: a model for species distributions using region-based records},
journal = {Ecography},
year = {2023},
volume = {2023},
pages = {e06283},
doi = {https://doi.org/10.1111/ecog.06283},
url = {https://onlinelibrary.wiley.com/doi/abs/10.1111/ecog.06283},
eprint = {https://onlinelibrary.wiley.com/doi/pdf/10.1111/ecog.06283},
}
```
#### Patch to the Ecography paper
Due to [an update of GADM.jl on May 16, 2023](https://github.com/JuliaGeo/GADM.jl/commit/f7bebc9c358a9d00540e42e90e47ad7b6ca145bf) which fetches GADM data v4.1 rather than v3.6, the district-level map of Nepal now has index `3` (no longer `1` as in the paper). The corresponding line in the code example from the paper (see also [jui.jl](https://github.com/Mikumikunisiteageru/Mikrubi.jl/blob/master/examples/prinsepia/jui.jl)) should be changed to
```julia
shptable = readshape(shppath, 3) # District-level
```
| Mikrubi | https://github.com/Mikumikunisiteageru/Mikrubi.jl.git |
|
[
"MIT"
] | 1.3.5 | 9dc60147752098eb1e3a2b2cee884f42fa86ef15 | docs | 7352 | # Graphics
```@contents
Pages = ["graphics.md"]
Depth = 4
```
```@meta
Module = Mikrubi
```
Objects in Mikrubi of various types (shape files, rasters, raster stacks, `Mikrubi.CtPixels` instances, Mikrubi fields) can be plotted by Plots or by PyPlot.
## Plotting with Plots
After loading the Plots package, the objects can be plotted by `plot` or `plot!`.
### Example of *Prinsepia utilis*
Here we use the distribution of *Prinsepia utilis* (Rosaceae) in Nepal as an example. First we get all the objects prepared.
```julia
using Mikrubi
using Plots
import GADM
import RasterDataSources; const RDS = RasterDataSources
shppath = GADM.download("NPL")
get!(ENV, "RASTERDATASOURCES_PATH", tempdir())
RDS.getraster(RDS.WorldClim{RDS.BioClim}, res="10m")
climpath = RDS.rasterpath(RDS.WorldClim{RDS.BioClim})
```
#### Illustrating the shape file
The variable `shptable`, which is read through [`readshape`](@ref) and called a shape file here, can be `plot`ted.
```julia
shptable = readshape(shppath, 3);
plot(shptable)
savefig("plots_shptable.png")
```

#### Illustrating the raster layers
We read the WorldClim raster `layers`, and extract the first raster `layer` from the stack, and then `plot` it.
```julia
layers = readlayers(climpath)
layer = first(layers)
plot(layer)
savefig("plots_layer.png")
```

#### Illustrating the rasterization result
Here `ctpixels` is the object storing the rasterization result, although in many cases the result is hidden in [`makefield`](@ref).
```julia
ctpixels = rasterize(shptable, layer);
plot(ctpixels)
savefig("plots_ctpixels.png")
```

#### Illustrating the Mikrubi field
A Mikrubi field cannot be `plot`ted alone, because it does not contain enough information for illustration; it must follow the corresponding raster grid in the argument list.
```julia
field, ylayers = makefield(layers, shptable);
plot(layer, field)
savefig("plots_field.png")
```

## Plotting with PyPlot
Since the PyPlot plotting engine is essentially a wrapper around the Python package `matplotlib`, users who wish to use PyPlot should install that package manually (e.g. `pip install matplotlib` in a terminal). Mikrubi can use PyPlot after it is loaded in the current session by:
```julia
using PyPlot
```
Four functions are provided to illustrate the objects of different types.
```@docs
showshptable
showlayer
showfield
showctpixels
```
### Example of *Allium wallichii*
In the beginning, here we get the packages, path strings, and decoration functions ready:
```julia
using Mikrubi
using PyPlot
shppath = "path/to/china/counties.shp";
climpath = "path/to/worldclim/layers";
ctlistpath = "path/to/occupied/county/list.txt";
largeaxis() = gca().set_position([0.06, 0.07, 0.9, 0.9])
worldwide() = (xlim(-180, 180); ylim(-90, 90))
```
#### Illustrating the shape file and the raw layers
Now the workflow is disassembled into steps, and we check the outputs by illustrating them.
First of all, a shape file is read into Julia. We can clearly see the boundaries of the counties of China plotted as black lines.
```julia
shptable = readshape(shppath)
figure(figsize=(6.4, 6.4))
showshptable(shptable)
largeaxis()
savefig("pyplot_shptable.png")
close()
```

Then, a series of WorldClim climatic factor layers are read in, and the first layer among them is illustrated.
```julia
layers = readlayers(climpath)
figure(figsize=(6.4, 3.2))
set_cmap("viridis")
showlayer(first(layers))
largeaxis()
worldwide()
savefig("pyplot_rawlayer1.png")
close()
```

#### Illustrating the rasterization result
Later, counties are rasterized using the grid defined by the layers. Every exclusive pixel is assigned the characteristic color of the county it belongs to, while pixels shared by multiple counties are dyed composite (thus always darker) colors.
```julia
ctpixels = rasterize(shptable, first(layers))
figure(figsize=(6.4, 6.4))
showctpixels(ctpixels, first(layers))
showshptable(shptable, lw=0.5)
gca().set_aspect("auto")
largeaxis()
savefig("pyplot_ctpixels.png")
```

Zoom in, and details of the rasterization result are clearer.
```julia
xlim(88, 98)
ylim(30, 40)
savefig("pyplot_ctpixels2.png")
close()
```

#### Illustrating the extracted layers and the Mikrubi field
Then, a Mikrubi field is constructed from the results above. Notably, `layers` appears in the input argument list at both the first and the third places. Layers at the first place are masked by the rasterization result and transformed into fewer (by default, three) layers by principal component analysis, and the results are assigned to `elayers` here. Meanwhile, layers at the third place undergo the same processing but without masking, and their results are assigned to `eplayers`.
Now check the images of `first(elayers)` and `first(eplayers)` under the same `clim`, and we can see that they are identical on their overlapping part, because they are derived from the same input layers and have undergone the same operations.
```julia
field, elayers, eplayers = makefield(layers, ctpixels, layers)
figure(figsize=(6.4, 5.2))
showlayer(first(elayers))
gca().set_aspect("auto")
largeaxis()
clim(-6, 2)
savefig("pyplot_pcalayer1.png")
close()
```

```julia
figure(figsize=(6.4, 3.2))
set_cmap("viridis")
showlayer(first(eplayers))
largeaxis()
worldwide()
clim(-6, 2)
savefig("pyplot_gpcalayer1.png")
close()
```

At the same time, we may check the Mikrubi field just obtained visually in RGB space (skewed by `f = tiedrank` in `showfield` for better image representation). Parts of China are dyed different colors, and the pattern does coincide with our knowledge.
```julia
figure(figsize=(6.4, 5.2))
showfield(field, first(layers))
gca().set_aspect("auto")
largeaxis()
savefig("pyplot_field.png")
close()
```

#### Illustrating the predictions
Finally come the fitting and the prediction steps. Since the model lives in a dimensionality too high to visualize directly, we inspect the images of the predictions instead. Analogously, under the same `clim` values, the predicted regional distribution (`geodist`) and the predicted global distribution (`ggeodist`) are identical over their overlapping area. Using graphics, we can confirm that everything is in accordance with expectation.
```julia
ctlist = readlist(ctlistpath)
model = fit(field, ctlist)
geodist = predict(elayers, model)
figure(figsize=(6.4, 5.2))
set_cmap("CMRmap")
showlayer(geodist, f = x -> x ^ 0.35)
gca().set_aspect("auto")
largeaxis()
clim(0, 0.45)
savefig("pyplot_geodist.png")
close()
```

```julia
ggeodist = predict(eplayers, model)
figure(figsize=(6.4, 3.2))
set_cmap("CMRmap")
showlayer(ggeodist, f = x -> x ^ 0.35)
largeaxis()
worldwide()
clim(0, 0.45)
savefig("pyplot_ggeodist.png")
close()
```

| Mikrubi | https://github.com/Mikumikunisiteageru/Mikrubi.jl.git |
|
[
"MIT"
] | 1.3.5 | 9dc60147752098eb1e3a2b2cee884f42fa86ef15 | docs | 7376 | # Mikrubi.jl
```@docs
Mikrubi
```
## A workflow example
We take *Allium wallichii* in China as an example. For more details, please check `examples/alliwalli/workflow.jl`; for the graphic representation of the variables, please visit [Graphics](@ref).
```julia
julia> using Mikrubi
julia> shptable = readshape(shppath)
Layer: counties
Geometry 0 (): [wkbPolygon], POLYGON ((99.994226 ...), ...
Field 0 (id): [OFTInteger64], 2367, 2368, 2369, 2370, 2371, 2372, 2373, ...
Field 1 (provinceid): [OFTInteger64], 53, 53, 53, 53, 53, 53, 53, 53, ...
Field 2 (cityid): [OFTInteger64], 5305, 5305, 5305, 5305, 5305, 5323, ...
Field 3 (cocode): [OFTString], 530524, 530523, 530502, 530521, 530522, ...
Field 4 (coshname): [OFTString], 昌宁县, 龙陵县, 隆阳区, 施甸县, 腾冲县, 楚雄市, 大姚县, ...
...
Number of Fields: 14
julia> layers = readlayers(climpath)
[ Info: 19 files "wc2.0_bio_10m_*.tif" recognized in the directory, where * = 01, 02, 03, 04, 05, 06, 07, 08, 09, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19.
RasterStack with dimensions:
X Projected{Float64} LinRange{Float64}(-180.0, 179.833, 2160) ForwardOrdered Regular Intervals crs: WellKnownText,
Y Projected{Float64} LinRange{Float64}(89.8333, -90.0, 1080) ReverseOrdered Regular Intervals crs: WellKnownText,
Band Categorical{Int64} 1:1 ForwardOrdered
and 19 layers:
:wc2.0_bio_10m_01 Float64 dims: X, Y, Band (2160×1080×1)
:wc2.0_bio_10m_02 Float64 dims: X, Y, Band (2160×1080×1)
:wc2.0_bio_10m_03 Float64 dims: X, Y, Band (2160×1080×1)
:wc2.0_bio_10m_04 Float64 dims: X, Y, Band (2160×1080×1)
:wc2.0_bio_10m_05 Float64 dims: X, Y, Band (2160×1080×1)
:wc2.0_bio_10m_06 Float64 dims: X, Y, Band (2160×1080×1)
:wc2.0_bio_10m_07 Float64 dims: X, Y, Band (2160×1080×1)
:wc2.0_bio_10m_08 Float64 dims: X, Y, Band (2160×1080×1)
:wc2.0_bio_10m_09 Float64 dims: X, Y, Band (2160×1080×1)
:wc2.0_bio_10m_10 Float64 dims: X, Y, Band (2160×1080×1)
:wc2.0_bio_10m_11 Float64 dims: X, Y, Band (2160×1080×1)
:wc2.0_bio_10m_12 Float64 dims: X, Y, Band (2160×1080×1)
:wc2.0_bio_10m_13 Float64 dims: X, Y, Band (2160×1080×1)
:wc2.0_bio_10m_14 Float64 dims: X, Y, Band (2160×1080×1)
:wc2.0_bio_10m_15 Float64 dims: X, Y, Band (2160×1080×1)
:wc2.0_bio_10m_16 Float64 dims: X, Y, Band (2160×1080×1)
:wc2.0_bio_10m_17 Float64 dims: X, Y, Band (2160×1080×1)
:wc2.0_bio_10m_18 Float64 dims: X, Y, Band (2160×1080×1)
:wc2.0_bio_10m_19 Float64 dims: X, Y, Band (2160×1080×1)
julia> ctlist = readlist(ctlistpath)
46-element Array{Int64,1}:
568
162
364
...
233
2768
2770
julia> field, ylayers = makefield(layers, shptable);
julia> field
Mikrubi Field: geo_dim = 2, env_dim = 3, 62716 pixels, and 2893 counties
julia> ylayers
RasterStack with dimensions:
X Projected{Float64} LinRange{Float64}(-180.0, 179.833, 2160) ForwardOrdered Regular Intervals crs: WellKnownText,
Y Projected{Float64} LinRange{Float64}(89.8333, -90.0, 1080) ReverseOrdered Regular Intervals crs: WellKnownText,
Band Categorical{Int64} 1:1 ForwardOrdered
and 3 layers:
:pca1 Float64 dims: X, Y, Band (2160×1080×1)
:pca2 Float64 dims: X, Y, Band (2160×1080×1)
:pca3 Float64 dims: X, Y, Band (2160×1080×1)
julia> model = fit(field, ctlist)
[ Info: Now minimizing the opposite likelihood function...
Iter Function value √(Σ(yᵢ-ȳ)²)/n
------ -------------- --------------
0 4.171830e+04 2.450473e+02
* time: 0.01399993896484375
500 1.833867e+02 9.325053e-02
* time: 2.998000144958496
1000 1.470935e+02 1.524631e-01
* time: 4.9700000286102295
1500 1.388932e+02 4.105145e-02
* time: 6.976000070571899
2000 1.273631e+02 2.085092e-02
* time: 8.812000036239624
2500 1.266571e+02 9.378015e-05
* time: 10.5239999294281
[ Info: Maximized log-likeliness: -126.65599400745549
MikrubiModel{Float64}(3, [1.4842288152354197, -1.3603311815698715, -0.38761691866210646, 1.1231074177981228, 1.2090116395112087, -0.1033479618173679, 14.747024521778938, -14.878922083170924, 11.97056752230023, 30.299436373642205])
julia> geodist = predict(ylayers, model)
2160×1080×1 Raster{Float64,3} prob with dimensions:
X Projected{Float64} LinRange{Float64}(-180.0, 179.833, 2160) ForwardOrdered Regular Intervals crs: WellKnownText,
Y Projected{Float64} LinRange{Float64}(89.8333, -90.0, 1080) ReverseOrdered Regular Intervals crs: WellKnownText,
Band Categorical{Int64} 1:1 ForwardOrdered
extent: Extent(X = (-180.0, 179.99999999999997), Y = (-90.0, 90.0), Band = (1, 1))
missingval: -1.7e308
crs: GEOGCS["WGS 84",DATUM["WGS_1984",SPHEROID["WGS 84",6378137,298.257223563,AUTHORITY["EPSG","7030"]],AUTHORITY["EPSG","6326"]],PRIMEM["Greenwich",0,AUTHORITY["EPSG","8901"]],UNIT["degree",0.0174532925199433,AUTHORITY["EPSG","9122"]],AXIS["Latitude",NORTH],AXIS["Longitude",EAST],AUTHORITY["EPSG","4326"]]
values: [:, :, 1]
89.8333 89.6667 89.5 89.3333 89.1667 … -89.3333 -89.5 -89.6667 -89.8333 -90.0
-180.0 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308
-179.833 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308
-179.667 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308
-179.5 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308
-179.333 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 … -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308
-179.167 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308
-179.0 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308
-178.833 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308
⋮ ⋮ ⋱ ⋮
178.5 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 … -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308
178.667 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308
178.833 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308
179.0 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308
179.167 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308
179.333 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 … -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308
179.5 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308
179.667 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308
179.833 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308 -1.7e308
julia> writelayer("path/to/output/geodist.tif", geodist)
```
## Outline of [Manual](@ref)
```@contents
Pages = [
"manual.md",
]
Depth = 3
```
## Outline of [Graphics](@ref)
```@contents
Pages = [
"graphics.md",
]
Depth = 4
```
## [Index](@id main-index)
```@index
```
| Mikrubi | https://github.com/Mikumikunisiteageru/Mikrubi.jl.git |
|
[
"MIT"
] | 1.3.5 | 9dc60147752098eb1e3a2b2cee884f42fa86ef15 | docs | 7740 | # Manual
```@contents
Pages = ["manual.md"]
Depth = 3
```
```@meta
Module = Mikrubi
```
## Reading and writing
### Reading and searching shapefile
Since shape files are always read instead of written in this application, only the reading function [`readshape`](@ref) is provided.
```@docs
readshape
```
The function [`lookup`](@ref) is useful when some attribute (e.g. name or code) of a county is known and the row number of the county in a shapefile is wanted (row numbers may act as identifiers in the list of occupied counties; see the syntax of [`fit`](@ref)).
```@docs
lookup
```
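For example, with the district-level map of Nepal used in the package's examples (a sketch; the field names `"NAME_3"` and `"NAME_2"` are specific to that GADM shapefile):
```julia
rowid = lookup(shptable, "NAME_3", "Bajhang")  # a single match returns one row number
rowids = lookup(shptable, "NAME_2", "Seti")    # several matches return a vector of row numbers
```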
#### Internal functions
```@docs
Mikrubi.filterext
Mikrubi.goodcolumns
```
### Reading and writing list file
A list of occupied counties can be prepared explicitly in Julia as a vector or a set. Meanwhile, it is also possible to read such a list from or write it to disk, especially when the list is generated outside Julia.
```@docs
readlist
writelist
```
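A minimal round trip might look as follows (the paths are placeholders):
```julia
ctlist = readlist("path/to/occupied/county/list.txt")
writelist("path/to/output/list_copy.txt", ctlist)
```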
### Reading and writing raster layers
Climatic factors are downloaded and stored as raster layers. Mikrubi reads such layers by [`readlayers`](@ref), performs principal component analysis on them and returns the results as layers also. When the output layers need to be kept for future use, they can be written to disk using [`writelayers`](@ref). Moreover, when the predicted distribution of species is organized in raster format, it can be saved likewise using [`writelayer`](@ref).
```@docs
readlayers
writelayer
writelayers
```
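For example, saving a predicted layer and a stack of extracted layers (the paths are placeholders; as in the package's test suite, a `*` in the path is expanded to one file name per layer):
```julia
writelayer("path/to/output/geodist.tif", geodist)
writelayers("path/to/output/pca_*.tif", ylayers)            # one file per layer via `*`
writelayers(["pca1.tif", "pca2.tif", "pca3.tif"], ylayers)  # or a vector of explicit paths
```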
#### Internal functions
It is worth mentioning that when reading layers from a directory, files are sorted according to their names in a manner similar to the sorting order in Windows OS. Please pay extra attention when two parallel raster stacks are fed into [`makefield`](@ref).
```@docs
Mikrubi.sortfilenames!
Mikrubi.allsame
```
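The ordering behaves as follows (an example taken from the package's test suite):
```julia
Mikrubi.sortfilenames!(["bio_9.tif", "bio_10.tif", "bio_1.tif"])
# returns ["bio_1.tif", "bio_9.tif", "bio_10.tif"]
```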
### Reading and writing Mikrubi fields
[`MikrubiField`](@ref) is a specially designed type where the environmental information of pixels and their county identifiers are nested. It may be necessary to save (by [`writefield`](@ref)) and load (by [`readfield`](@ref)) a Mikrubi field, especially when it is used on multiple species.
```@docs
readfield
writefield
```
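A round trip to disk looks as follows (the path is a placeholder; the `.mkuf` extension follows the package's test suite):
```julia
writefield("path/to/field.mkuf", field)
field = readfield("path/to/field.mkuf")
```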
### Reading and writing Mikrubi models
[`MikrubiModel`](@ref) is a struct containing transformation parameters. It can be read from and written to disk using respectively [`readmodel`](@ref) and [`writemodel`](@ref).
```@docs
readmodel
writemodel
```
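Analogously for models (the path and the file extension here are placeholders):
```julia
writemodel("path/to/model.mkum", model)
model = readmodel("path/to/model.mkum")
```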
## Rasterizing a shapefile
Since v1.3.0, Mikrubi no longer provides its own rasterization routine; the implementation from [Rasters](https://rafaqz.github.io/Rasters.jl/stable/) is applied instead. The function [`rasterize`](@ref) in Mikrubi integrates the rasterization of multiple geometries. The returned value is of an internal type [`Mikrubi.CtPixels`](@ref).
```@docs
rasterize
Mikrubi.CtPixels
```
#### Internal functions
```@docs
Mikrubi.getpixels
Mikrubi.getcounties
Mikrubi.getpixel
Mikrubi.getcounty
Mikrubi.indicate
Mikrubi.register!
Mikrubi.ispoly
```
## [Processing the raster layers](@id makefield)
In Mikrubi, climatic factors after being read in typically undergo some processing steps together with the shapefile inside the function [`makefield`](@ref), which returns a Mikrubi field and a stack of extracted components in raster layers. The two outputs can be used for training and prediction.
Sometimes it is also required to apply a model to another circumstance (different time or different space), in which case another series of parallel climatic factor layers need to be processed in exactly the same way as those used to generate the Mikrubi field (so that their climatic meanings are the same). Such layers need to be put in the third place in the input argument list for [`makefield`](@ref), and those parallelly extracted components are returned in the third place in output as well.
```@docs
makefield
```
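Both call forms in a sketch, where `futurelayers` stands for a hypothetical parallel stack (e.g. projected future climate):
```julia
field, ylayers = makefield(layers, shptable)
field, ylayers, pylayers = makefield(layers, shptable, futurelayers)
```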
#### Internal functions
```@docs
Mikrubi.colmatrix
Mikrubi.masklayers!
Mikrubi.extractlayers
Mikrubi.emptylayer!
Mikrubi.emptylayer
Mikrubi.emptylayers
Mikrubi.makelayer
Mikrubi.makelayers
Mikrubi.dftraverse!
Mikrubi.selectvars
Mikrubi.princompvars
Mikrubi.DimLower
Mikrubi.dimpoints
Mikrubi.centercoords
Mikrubi.buildfield
```
## The Mikrubi core
Two specially designed structs are involved in the core of Mikrubi.
### Mikrubi field
[`MikrubiField`](@ref) is a struct containing mainly three types of information about pixels/points, that is, which counties they belong to (`ctids`), their geographic coordinates (`locs`), and their environmental coordinates (`vars`), along with some derived auxiliary attributes, such as the geographic dimensionality (usually `2`) and the environmental dimensionality (for example, `3`).
[`MikrubiField`](@ref) can be obtained in three ways: as first output argument of [`makefield`](@ref makefield), read from disk, or constructed directly from the three required attributes (this may be useful for simulation analysis).
```@docs
MikrubiField
```
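A direct construction from the three required attributes, following the small example in the package's test suite:
```julia
ctids = ["a", "b", "c", "d", "d", "e", "f", "f", "e"]  # county identifiers
locs = [1 1; 1 2; 1 3; 2 1; 2 2; 2 3; 3 1; 3 2; 3 3]   # geographic coordinates
vars = [1 1; 1 2; 1 3; 2 1; 2 2; 2 3; 3 1; 3 2; 3 3]   # environmental coordinates
field = MikrubiField(ctids, locs, vars)
```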
### Mikrubi model
[`MikrubiModel`](@ref) contains the environmental dimensionality and the model parameters to define a positive-definite quadratic mapping from environmental space to a real number axis.
Like [`MikrubiField`](@ref), a [`MikrubiModel`](@ref) can be obtained in three ways: as the output argument of [`fit`](@ref fitfuncm), read from disk, or constructed directly from attributes. An example of obtaining a Mikrubi field and a Mikrubi model from constructors is available in `examples/onedimsim/sim.jl`.
```@docs
MikrubiModel
```
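A direct construction, as in the one-dimensional simulation example; the parameter vector must have length `Mikrubi.dvar2dparam(dvar)`:
```julia
model = MikrubiModel(1, [0.02, 0.0, 1.0])  # dvar = 1, so dvar2dparam(1) == 3 parameters
```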
### [Fitting a Mikrubi model](@id fitfuncm)
When a Mikrubi field as well as occurrence data as counties and/or coordinates are ready, they can be used to train a Mikrubi model with the function [`fit`](@ref) (county data in `counties`, required; coordinates in `coords`, optional). The result is returned as a [`MikrubiModel`](@ref).
```@docs
fit(field::MikrubiField, counties, coords=zeros(0, 0))
```
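Both signatures in a sketch, where `coords` is an optional matrix of known occurrence coordinates:
```julia
model = fit(field, ctlist)          # county records only
model = fit(field, ctlist, coords)  # county records plus coordinate records
```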
### Predicting from a Mikrubi model
A Mikrubi model can be applied by function [`predict`](@ref) to a matrix with its columns corresponding to extracted variables, a stack of extracted layers, or a Mikrubi field.
- When input argument is a matrix, output argument is a column vector denoting the probability of presence in pixels/points related to rows in the matrix.
- When input argument is a stack of layers, output argument is a single layer denoting the probability of presence.
- When input argument is a Mikrubi field, output argument is a `Dict` which maps every county identifier to probability of presence at pixels inside the county, see also [`predictcounty`](@ref).
```@docs
predict
```
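The three input types in one sketch, reusing objects from the workflow example:
```julia
probs = predict(field.vars, model)  # matrix -> vector of probabilities
geodist = predict(ylayers, model)   # layer stack -> probability layer
ctprobs = predict(field, model)     # Mikrubi field -> Dict over counties
```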
When distribution probability within only one county is concerned, [`predictcounty`](@ref) returns probability of presence at all pixels that constitute the county in descending order. Therefore, the first element represents the most likely occupied pixel of a county.
```@docs
predictcounty
```
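For a single county (the identifier `239` here is illustrative):
```julia
pixels = predictcounty(field, model, 239)  # (coordinates, probability) pairs, descending
coords, prob = first(pixels)               # the most likely occupied pixel
```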
It is also possible to obtain the overall probability that every county is occupied by the function [`probcounties`](@ref).
```@docs
probcounties
```
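In a sketch, reusing `field` and `model` from above:
```julia
pc = probcounties(field, model)  # maps each county identifier to its probability of occupancy
```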
### Sampling counties in a Mikrubi field
For simulation analysis, sometimes it is required to sample a set of counties from a Mikrubi field and a Mikrubi model. [`samplecounties`](@ref) does the trick.
```@docs
samplecounties
```
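For instance:
```julia
sample = samplecounties(field, model)  # identifiers of the counties sampled as occupied
```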
### Detecting overfitting
Overfitting can be detected with the Lipschitz constant, the (logarithmic) maximum gradient (in norm) of the probability of presence in environmental space.
```@docs
lipschitz
```
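Both variants in a sketch (the `wholespace` keyword follows the package's test suite):
```julia
lipschitz(model, field; wholespace=true)   # over the whole environmental space
lipschitz(model, field; wholespace=false)  # restricted to the field's own pixels
```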
#### Internal functions
```@docs
Mikrubi.dvar2dparam
Mikrubi.decomparams
Mikrubi.pabsence
Mikrubi.ppresence
Mikrubi.mlogL
Mikrubi.probpixels
Mikrubi.findnearest
Mikrubi.findnearests
Mikrubi.loglipschitz
Mikrubi.textwrap
Mikrubi.@tw_str
```
| Mikrubi | https://github.com/Mikumikunisiteageru/Mikrubi.jl.git |
|
[
"BSD-3-Clause"
] | 0.2.4 | 108fc03c3419164898540545988d0df13fa5239b | code | 438 | using Documenter
using NoiseRobustDifferentiation
makedocs(;
modules=[NoiseRobustDifferentiation],
sitename="NoiseRobustDifferentiation.jl",
authors="Adrian Hill",
format=Documenter.HTML(; prettyurls=get(ENV, "CI", "false") == "true", assets=String[]),
pages=["Home" => "index.md", "Examples" => "examples.md"],
)
deploydocs(;
repo="github.com/adrhill/NoiseRobustDifferentiation.jl.git",
devbranch="main",
)
| NoiseRobustDifferentiation | https://github.com/adrhill/NoiseRobustDifferentiation.jl.git |
|
[
"BSD-3-Clause"
] | 0.2.4 | 108fc03c3419164898540545988d0df13fa5239b | code | 2478 | using Plots
using LaTeXStrings
function plot_example_abs(f, u, x, data, û_FDM, û)
x_opt = range(x[1], x[end]; length=1000)
f_opt = f.(x_opt)
u_opt = u.(x_opt)
append!(û_FDM, NaN)
pf = plot(x_opt, f_opt; c=:grey, ylabel=L"f", label=L"f")
pf = plot!(x, data; c=:2, ylabel=L"f", label=L"f_{noisy}", legend=:bottomright)
pu = plot(x_opt, u_opt; c=:grey, ylabel=L"u", label=L"u")
pu = plot!(
x,
[û_FDM, û];
c=[:2 :1],
ylabel=L"u",
label=[L"\hat{u}_{FDM}" L"\hat{u}_{tvdiff}"],
legend=:bottomright,
)
return plot(pf, pu; layout=(2, 1), show=true)
end
function plot_FDM(f)
n = length(f)
dx = 1 / (n - 1)
û_FDM = diff(f) / dx # FDM
return plot_FDM(f, û_FDM)
end
function plot_FDM(f, û)
append!(û, NaN)
n = length(f)
x = range(0, 1; length=n)
pf = plot(x, f; c=:1, ylabel=L"f")
pu = plot(x, û; c=:1, ylabel=L"\hat{u}_{FDM}")
return plot(pf, pu; layout=(2, 1), legend=false, show=true)
end
function plot_demo_large_diff(f)
dx = 1
û = diff(f) / dx # FDM
append!(û, NaN) # hide
pf = plot(f; c=:1, ylabel=L"f (L/min)")
pu = plot(û; c=:1, ylabel=L"\hat{u}_{FDM} (L/min/s)", xlabel=L"t (s)")
return plot(pf, pu; layout=(2, 1), legend=false, show=true, dpi=300)
end
function plot_demo_large_tvdiff(f, û)
append!(û, NaN) # hide
pf = plot(f; c=:1, ylabel=latexstring("f (L/min)"))
pu = plot(û; c=:1, ylabel=L"\hat{u}_{tvdiff} (L/min/s)", xlabel=L"t (s)")
return plot(pf, pu; layout=(2, 1), legend=false, show=true, dpi=300)
end
function plot_tvdiff(f, û)
n = length(f)
dx = 1 / (n - 1)
return plot_tvdiff(f, û, dx)
end
"""
Plot data `f` and the total-variation-regularized
numerical derivative `û` over grid spacing `dx`.
"""
function plot_tvdiff(f, û, dx::Real)
n = length(f)
x = range(0; step=dx, length=n)
pf = plot(x, f; ylabel=L"f")
pu = plot(x, û; ylabel=L"\hat{u}_{tvdiff}")
return plot(pf, pu; layout=(2, 1), legend=false, show=true)
end
function plot_tvdiff_all(f, û_TVR)
n = length(f)
dx = 1 / (n - 1)
û_FDM = diff(f) / dx # FDM
return plot_tvdiff_all(f, û_FDM, û_TVR)
end
function plot_tvdiff_all(f, û_FDM, û_TVR)
pf = plot(f; c=:1, ylabel=L"f")
pu1 = plot(û_FDM; c=:1, ylabel=L"\hat{u}_{FDM}")
pu2 = plot(û_TVR; c=:1, ylabel=L"\hat{u}_{tvdiff}")
return plot(pf, pu1, pu2; layout=(3, 1), legend=false, show=true, dpi=300)
end
| NoiseRobustDifferentiation | https://github.com/adrhill/NoiseRobustDifferentiation.jl.git |
|
[
"BSD-3-Clause"
] | 0.2.4 | 108fc03c3419164898540545988d0df13fa5239b | code | 211 | module NoiseRobustDifferentiation
using LinearAlgebra
using SparseArrays: spdiagm
using Preconditioners
using LinearMaps: LinearMap
using IterativeSolvers: cg
include("tvdiff.jl")
export tvdiff
end # module
| NoiseRobustDifferentiation | https://github.com/adrhill/NoiseRobustDifferentiation.jl.git |
|
[
"BSD-3-Clause"
] | 0.2.4 | 108fc03c3419164898540545988d0df13fa5239b | code | 9971 | """
tvdiff(data::AbstractVector, iter::Integer, α::Real; kwargs...)
# Arguments
- `data::AbstractVector`:
Vector of data to be differentiated.
- `iter::Integer`:
Number of iterations to run the main loop. A stopping
condition based on the norm of the gradient vector `g`
below would be an easy modification.
- `α::Real`:
Regularization parameter. This is the main parameter
to fiddle with. Start by varying by orders of
magnitude until reasonable results are obtained. A
value to the nearest power of 10 is usually adequate.
Higher values increase regularization strength
and improve conditioning.
## Keywords
- `u_0::AbstractVector`:
Initialization of the iteration. Default value is the
naive derivative (without scaling), of appropriate
length (this being different for the two methods).
Although the solution is theoretically independent of
the intialization, a poor choice can exacerbate
conditioning issues when the linear system is solved.
- `scale::String`:
Scale of dataset, `\"large\"` or `\"small\"` (case insensitive).
Default is `\"small\"`. `\"small\"` has somewhat better
boundary behavior, but becomes unwieldy for very large datasets.
`\"large\"` has simpler numerics but
is more efficient for large-scale problems. `\"large\"` is
more readily modified for higher-order derivatives,
since the implicit differentiation matrix is square.
- `ε::Real`:
Parameter for avoiding division by zero. Default value
is `1e-6`. Results should not be very sensitive to the
value. Larger values improve conditioning and
therefore speed, while smaller values give more
accurate results with sharper jumps.
- `dx::Real`:
Grid spacing, used in the definition of the derivative
operators. Default is `1 / (length(data) - 1)`.
- `precond::String`:
Select the preconditioner for the conjugate gradient method.
Default is `\"none\"`.
+ `scale = \"small\"`:
While in principle `precond=\"simple\"` should speed things up,
sometimes the preconditioner can cause convergence problems instead,
and should be left to `\"none\"`.
+ `scale = \"large\"`:
The improved preconditioners are one of the main features of the
algorithm, therefore using the default `\"none\"` is discouraged.
Currently, `\"diagonal\"`,`\"amg_rs\"`,`\"amg_sa\"`, `\"cholesky\"` are available.
- `diff_kernel::String`:
Kernel to use in the integral to smooth the derivative. By default it is set to
`\"abs\"`, the absolute value ``|u'|``. However, it can be changed to `\"square\"`,
the square value ``(u')^2``. The latter produces smoother
derivatives, whereas the absolute values tends to make them more blocky.
- `cg_tol::Real`:
Relative tolerance used in conjugate gradient method. Default is `1e-6`.
- `cg_maxiter::Int`:
Maximum number of iterations to use in conjugate gradient optimisation.
Default is `100`.
- `show_diagn::Bool`:
Flag whether to display diagnostics at each iteration. Default is `false`.
Useful for diagnosing preconditioning problems. When tolerance is not met,
an early iterate being best is more worrying than a large relative residual.
# Output
- `u`:
Estimate of the regularized derivative of data with
`length(u) = length(data)`.
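# Example
A minimal sketch on synthetic data; the regularization strength `0.2` and the
iteration count `20` are illustrative and should be tuned as described above.
```julia
using NoiseRobustDifferentiation
x = range(0, 2π; length=100)
data = sin.(x) .+ 0.05 .* randn(100)
û = tvdiff(data, 20, 0.2; dx=step(x))
```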
"""
function tvdiff(
data::AbstractVector,
iter::Integer,
α::Real;
u_0::AbstractVector=[NaN],
scale::String="small",
ε::Real=1e-6,
dx::Real=NaN,
precond::String="none",
diff_kernel::String="abs",
cg_tol::Real=1e-6,
cg_maxiter::Integer=100,
show_diagn::Bool=false,
)::AbstractVector
n = length(data)
if isnan(dx)
dx = 1 / (n - 1)
end
# Make string inputs case insensitive
scale = lowercase(scale)
precond = lowercase(precond)
diff_kernel = lowercase(diff_kernel)
# Run tvdiff for selected method
return tvdiff(
Val(Symbol(scale)),
data,
iter,
α,
u_0,
ε,
dx,
cg_tol,
cg_maxiter,
precond,
diff_kernel,
show_diagn,
)
end
# Total-variation-regularized numerical differentiation for small-scale problems.
function tvdiff(
::Val{:small},
data::AbstractVector,
iter::Integer,
α::Real,
u_0::AbstractVector,
ε::Real,
dx::Real,
cg_tol::Real,
cg_maxiter::Integer,
precond::String,
diff_kernel::String,
show_diagn::Bool,
)::AbstractVector
n = length(data)
#= Use the initialization if provided, otherwise set
the default initialization to the naive derivative =#
if isequal(u_0, [NaN])
u_0 = [0; diff(data); 0] / dx
elseif length(u_0) != (n + 1)
throw(
DimensionMismatch(
"size $(size(u_0)) of u_0 doesn't match size ($(n + 1),) required for scale=\"small\".",
),
)
end
u = copy(u_0)
# Construct differentiation matrix.
D = spdiagm(n, n + 1, 0 => -ones(n), 1 => ones(n)) / dx
Dᵀ = transpose(D)
# Construct antidifferentiation operator and its adjoint.
function A(x)
return (cumsum(x) - 0.5 * (x .- x[1]))[2:end] * dx
end
function Aᵀ(x)
return [sum(x) / 2; (sum(x) .- cumsum(x) .- x / 2)] * dx
end
# Precompute antidifferentiation adjoint on data
# Since A([0]) = 0, we need to adjust.
offset = data[1]
Aᵀb = Aᵀ(offset .- data)
for i in 1:iter
if diff_kernel == "abs"
# Diagonal matrix of weights, for linearizing E-L equation.
Q = Diagonal(1 ./ sqrt.((D * u) .^ 2 .+ ε))
# Linearized diffusion matrix, also approximation of Hessian.
L = dx * Dᵀ * Q * D
elseif diff_kernel == "square"
L = dx * Dᵀ * D
else
throw(
ArgumentError(
"""unexpected diff_kernel "$(diff_kernel)" for scale="small"."""
),
)
end
# Gradient of functional.
g = Aᵀ(A(u)) + Aᵀb + α * L * u
# Select preconditioner.
if precond == "simple"
P = Diagonal(α * diag(L) .+ 1)
elseif precond == "none"
P = I # Identity matrix
else
throw(ArgumentError("""unexpected precond "$(precond)" for scale="small"."""))
end
# Prepare linear operator for linear equation.
# Approximation of Hessian of TVR functional at u
H = LinearMap(u -> Aᵀ(A(u)) + α * L * u, n + 1, n + 1)
# Solve linear equation.
s = cg(H, -g; Pl=P, reltol=cg_tol, maxiter=cg_maxiter)
show_diagn && println(
"Iteration $(i):\trel. change = $(norm(s) / norm(u)),\tgradient norm = $(norm(g))",
)
# Update current solution
u += s
end
return u[1:(end - 1)]
end
# Total variation regularized numerical differentiation for large-scale problems.
function tvdiff(
::Val{:large},
data::AbstractVector,
iter::Integer,
α::Real,
u_0::AbstractVector,
ε::Real,
dx::Real,
cg_tol::Real,
cg_maxiter::Integer,
precond::String,
diff_kernel::String,
show_diagn::Bool,
)::AbstractVector
n = length(data)
# Since (A*u)[0] = 0 by construction, shift the data to start at zero.
data = data .- data[1]
#= Use initialization u_0 if provided, otherwise set the
default initialization to the naive finite-difference derivative =#
if isequal(u_0, [NaN])
u_0 = [0; diff(data)] / dx
elseif length(u_0) != n
throw(
DimensionMismatch(
"size $(size(u_0)) of u_0 doesn't match size ($(n),) required for scale=\"large\".",
),
)
end
u = u_0 * dx # multiplication already allocates a new array
# Construct differentiation matrix.
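# D is the n×n forward-difference operator with an all-zero last row:
# (D * u)[i] = (u[i+1] - u[i]) / dx for i < n.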
D = spdiagm(n, n, 0 => -ones(n - 1), 1 => ones(n - 1)) / dx
Dᵀ = transpose(D)
# Construct antidifferentiation operator and its adjoint.
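# A is the rectangle-rule antiderivative; the dx factor is absorbed into u
# (scaled by dx above, rescaled on return). Aᵀ is its adjoint, so that
# dot(A(x), y) ≈ dot(x, Aᵀ(y)).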
A(x) = cumsum(x)
function Aᵀ(x)
return sum(x) .- [0; cumsum(x[1:(end - 1)])]
end
# Precompute antidifferentiation adjoint on data
Aᵀd = Aᵀ(data)
for i in 1:iter
if diff_kernel == "abs"
# Diagonal matrix of weights, for linearizing E-L equation.
Q = Diagonal(1 ./ sqrt.((D * u) .^ 2 .+ ε))
# Linearized diffusion matrix, also approximation of Hessian.
L = Dᵀ * Q * D
elseif diff_kernel == "square"
L = Dᵀ * D
else
throw(
ArgumentError(
"""unexpected diff_kernel "$(diff_kernel)" for scale="large"."""
),
)
end
# Gradient of functional.
g = Aᵀ(A(u)) - Aᵀd + α * L * u
# Select preconditioner.
B = α * L + Diagonal(reverse(cumsum(n:-1:1)))
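# B is a sparse symmetric stand-in for the Hessian AᵀA + αL, used only to build
# the preconditioner: the Diagonal term holds the row sums of AᵀA, a lumped
# diagonal approximation of the dense AᵀA.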
if precond == "cholesky"
# Incomplete Cholesky preconditioner with cut-off level 2
P = CholeskyPreconditioner(B, 2)
elseif precond == "diagonal"
P = DiagonalPreconditioner(B)
elseif precond == "amg_rs"
# Ruge-Stuben variant
P = AMGPreconditioner{RugeStuben}(B)
elseif precond == "amg_sa"
# Smoothed aggregation
P = AMGPreconditioner{SmoothedAggregation}(B)
elseif precond == "none"
P = I # Identity matrix
else
throw(ArgumentError("""unexpected precond "$(precond)" for scale="large"."""))
end
# Prepare linear operator for linear equation.
# Approximation of Hessian of TVR functional at u
H = LinearMap(u -> Aᵀ(A(u)) + α * L * u, n, n)
# Solve linear equation.
s = cg(H, -g; Pl=P, reltol=cg_tol, maxiter=cg_maxiter)
show_diagn && println(
"Iteration $(i):\trel. change = $(norm(s) / norm(u)),\tgradient norm = $(norm(g))",
)
# Update current solution
u += s
end
return u / dx
end
| NoiseRobustDifferentiation | https://github.com/adrhill/NoiseRobustDifferentiation.jl.git |
|
[
"BSD-3-Clause"
] | 0.2.4 | 108fc03c3419164898540545988d0df13fa5239b | code | 237 | using NoiseRobustDifferentiation
using Test
@testset "tvdiff" begin
@testset "scale=\"small\"" begin
include("runtests_small.jl")
end
@testset "scale=\"large\"" begin
include("runtests_large.jl")
end
end
| NoiseRobustDifferentiation | https://github.com/adrhill/NoiseRobustDifferentiation.jl.git |
|
[
"BSD-3-Clause"
] | 0.2.4 | 108fc03c3419164898540545988d0df13fa5239b | code | 782 | using Random
using Statistics
using Test
# Include testing functions
include("test_dimensions.jl")
include("test_symbolic_functions.jl")
include("test_demo_large.jl")
_testset_output_dim("large")
_testset_symbolic_functions(
"large", ["amg_rs", "amg_sa", "cholesky", "diagonal", "none"], ["abs", "square"]
)
_testset_demo_large("large", ["amg_rs"], ["abs"])
@testset "Broken inputs" begin
@test_throws ArgumentError tvdiff([0, 1, 2], 1, 0.1; scale="large", precond="bad_input")
@test_throws ArgumentError tvdiff(
[0, 1, 2], 1, 0.1; scale="large", diff_kernel="bad_input"
)
@test_throws MethodError tvdiff([0, 1, 2], 1, 0.1; scale="l4rge")
@test_throws DimensionMismatch tvdiff(
[0, 1, 2], 1, 0.1; u_0=[1, 2, 3, 4], scale="large"
)
end
| NoiseRobustDifferentiation | https://github.com/adrhill/NoiseRobustDifferentiation.jl.git |
|
[
"BSD-3-Clause"
] | 0.2.4 | 108fc03c3419164898540545988d0df13fa5239b | code | 695 | using Random
using Statistics
using Test
# Include testing functions
include("test_dimensions.jl")
include("test_symbolic_functions.jl")
include("test_demo_large.jl")
_testset_output_dim("small")
_testset_symbolic_functions("small", ["simple", "none"], ["abs", "square"])
@testset "Broken inputs" begin
@test_throws ArgumentError tvdiff([0, 1, 2], 1, 0.1; scale="small", precond="bad_input")
@test_throws ArgumentError tvdiff(
[0, 1, 2], 1, 0.1; scale="small", diff_kernel="bad_input"
)
@test_throws MethodError tvdiff([0, 1, 2], 1, 0.1; scale="sm4ll")
@test_throws DimensionMismatch tvdiff(
[0, 1, 2], 1, 0.1; u_0=[1, 2, 3, 4, 5], scale="small"
)
end
| NoiseRobustDifferentiation | https://github.com/adrhill/NoiseRobustDifferentiation.jl.git |
|
[
"BSD-3-Clause"
] | 0.2.4 | 108fc03c3419164898540545988d0df13fa5239b | code | 1137 | using CSV
using DataFrames
"""
Perform a primitive check for smoothness of the derivative:
if the derivative û is smooth, the elements of diff(û) should be small.
Uses parameter sets from the MATLAB examples.
"""
function _test_demo_large(data, scale, precond, diff_kernel)
@test sum(
abs.(
diff(
tvdiff(
data,
40,
1e-1;
ε=1e-8,
scale=scale,
precond=precond,
diff_kernel=diff_kernel,
),
),
),
) < 1e3
end
function _testset_demo_large(scale, preconds, diff_kernels)
# Load data
file = CSV.File("./data/demo_large.csv")
data = DataFrame(file).largescaledata
@testset "Demo large" begin
for precond in preconds
@testset "$precond" begin
for diff_kernel in diff_kernels
@testset "$diff_kernel" begin
_test_demo_large(data, scale, precond, diff_kernel)
end
end
end
end
end
end
| NoiseRobustDifferentiation | https://github.com/adrhill/NoiseRobustDifferentiation.jl.git |
|
[
"BSD-3-Clause"
] | 0.2.4 | 108fc03c3419164898540545988d0df13fa5239b | code | 244 | function _testset_output_dim(scale)
n = 20
data = collect(range(-1, 1; length=n))
û = tvdiff(data, 1, 0.2; scale=scale)
@testset "Dimensions" begin
@test length(û) == n # output dim should equal input dim
end
end
| NoiseRobustDifferentiation | https://github.com/adrhill/NoiseRobustDifferentiation.jl.git |
|
[
"BSD-3-Clause"
] | 0.2.4 | 108fc03c3419164898540545988d0df13fa5239b | code | 2289 | function _eval_function(f, u, scale, precond, diff_kernel; n=50, iter=1, α=0.1, rng_seed=0)
x = range(-5, 5; length=n)
dx = x[2] - x[1]
# add noise to data
rng = MersenneTwister(rng_seed)
data = f.(x) + 0.05 * (rand(rng, n) .- 0.5)
# use tvdiff
û = tvdiff(data, iter, α; dx=dx, scale=scale, precond=precond, diff_kernel=diff_kernel)
return √(mean(abs2.(û - u.(x)))) # RMSE against the true derivative
end
function _test_symbolic_functions(scale, precond, diff_kernel)
@testset "abs" begin
@test _eval_function(abs, sign, scale, precond, diff_kernel; n=50, iter=2) < 0.3
@test _eval_function(abs, sign, scale, precond, diff_kernel; n=50, iter=10) < 0.26
@test _eval_function(abs, sign, scale, precond, diff_kernel; n=1000, iter=2) < 0.3
@test _eval_function(abs, sign, scale, precond, diff_kernel; n=1000, iter=10) < 0.14
end
@testset "sigmoid" begin
σ(x) = exp(x) / (1 + exp(x))
dσdx(x) = σ(x) - exp(2 * x) / (1 + exp(x))^2
@test _eval_function(σ, dσdx, scale, precond, diff_kernel; n=50, iter=2) < 0.3
@test _eval_function(σ, dσdx, scale, precond, diff_kernel; n=50, iter=10) < 0.17
@test _eval_function(σ, dσdx, scale, precond, diff_kernel; n=1000, iter=2) < 0.3
@test _eval_function(σ, dσdx, scale, precond, diff_kernel; n=1000, iter=10) < 0.17
end
@testset "sin" begin
α = 0.1 # use lower value for alpha
@test _eval_function(sin, cos, scale, precond, diff_kernel; α=α, n=50, iter=2) < 0.3
@test _eval_function(sin, cos, scale, precond, diff_kernel; α=α, n=50, iter=10) <
0.27
@test _eval_function(sin, cos, scale, precond, diff_kernel; α=α, n=1000, iter=2) <
0.3
@test _eval_function(sin, cos, scale, precond, diff_kernel; α=α, n=1000, iter=10) <
0.19
end
end
function _testset_symbolic_functions(scale, preconds, diff_kernels)
@testset "Symbolic fcs" begin
for precond in preconds
@testset "$precond" begin
for diff_kernel in diff_kernels
@testset "$diff_kernel" begin
_test_symbolic_functions(scale, precond, diff_kernel)
end
end
end
end
end
end
| NoiseRobustDifferentiation | https://github.com/adrhill/NoiseRobustDifferentiation.jl.git |
|
[
"BSD-3-Clause"
] | 0.2.4 | 108fc03c3419164898540545988d0df13fa5239b | docs | 2113 | # NoiseRobustDifferentiation.jl
## Version `v0.2.4`
- ![Documentation][badge-docs] Fix name of keyword argument `cg_maxiter` in docstring ([#23][pr-23])
## Version `v0.2.3`
- ![Maintenance][badge-maintenance] Update dependencies. ([#22][pr-22])
## Version `v0.2.2`
- ![Feature][badge-feature] Add back `CholeskyPreconditioner`. ([#21][pr-21])
- ![Maintenance][badge-maintenance] Update dependencies. ([#20][pr-20])
## Version `v0.2.1`
- ![Bugfix][badge-bugfix] Fix documentation.
## Version `v0.2.0`
- ![BREAKING][badge-breaking] Renamed exported function `TVRegDiff` to `tvdiff`. ([#16][pr-16])
- ![BREAKING][badge-breaking] Removed `CholeskyPreconditioner`. ([#16][pr-16], [#17][pr-17])
- ![Maintenance][badge-maintenance] Update dependencies. ([#16][pr-16])
<!--
# Badges
![BREAKING][badge-breaking]
![Deprecation][badge-deprecation]
![Feature][badge-feature]
![Enhancement][badge-enhancement]
![Bugfix][badge-bugfix]
![Security][badge-security]
![Experimental][badge-experimental]
![Maintenance][badge-maintenance]
![Documentation][badge-docs]
-->
[pr-16]: https://github.com/adrhill/NoiseRobustDifferentiation.jl/pull/16
[pr-17]: https://github.com/adrhill/NoiseRobustDifferentiation.jl/pull/17
[pr-20]: https://github.com/adrhill/NoiseRobustDifferentiation.jl/pull/20
[pr-21]: https://github.com/adrhill/NoiseRobustDifferentiation.jl/pull/21
[pr-22]: https://github.com/adrhill/NoiseRobustDifferentiation.jl/pull/22
[pr-23]: https://github.com/adrhill/NoiseRobustDifferentiation.jl/pull/23
[badge-breaking]: https://img.shields.io/badge/BREAKING-red.svg
[badge-deprecation]: https://img.shields.io/badge/deprecation-orange.svg
[badge-feature]: https://img.shields.io/badge/feature-green.svg
[badge-enhancement]: https://img.shields.io/badge/enhancement-blue.svg
[badge-bugfix]: https://img.shields.io/badge/bugfix-purple.svg
[badge-security]: https://img.shields.io/badge/security-black.svg
[badge-experimental]: https://img.shields.io/badge/experimental-lightgrey.svg
[badge-maintenance]: https://img.shields.io/badge/maintenance-gray.svg
[badge-docs]: https://img.shields.io/badge/docs-orange.svg
| NoiseRobustDifferentiation | https://github.com/adrhill/NoiseRobustDifferentiation.jl.git |
|
[
"BSD-3-Clause"
] | 0.2.4 | 108fc03c3419164898540545988d0df13fa5239b | docs | 2291 | # NoiseRobustDifferentiation.jl
| **Documentation** | **Build Status** | **Code Coverage** |
|:-------------------------------------------------------------------------------:|:-----------------------------------------:|:-------------------------------:|
| [![][docs-stable-img]][docs-stable-url] [![][docs-latest-img]][docs-latest-url] | [![Build Status][ci-img]][ci-url] | [![][codecov-img]][codecov-url] |
Julia reimplementation of *Total Variation Regularized Numerical Differentiation* (TVDiff).
Based on [Rick Chartrand's original Matlab code](https://sites.google.com/site/dnartrahckcir/home/tvdiff-code) and [Simone Sturniolo's Python reimplementation](https://github.com/stur86/tvregdiff).
## Examples
This package exports a single function `tvdiff`.
It works on noisy data without suppressing jump discontinuities:

and also on large datasets:

[More examples can be found in the documentation.](https://adrhill.github.io/NoiseRobustDifferentiation.jl/dev/examples/)
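A minimal sketch of the API (the data and parameter values here are illustrative only):
```julia
using NoiseRobustDifferentiation

# Noisy samples of f(x) = |x| on a uniform grid
n = 100
x = range(-5, 5; length=n)
data = abs.(x) .+ 0.05 .* (rand(n) .- 0.5)

# Regularized derivative: 100 iterations, regularization parameter α = 0.2
û = tvdiff(data, 100, 0.2; dx=step(x))
```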
## Installation
To install this package and its dependencies, open the Julia REPL and run
```julia
julia> ]add NoiseRobustDifferentiation
```
## Citation
Please cite the following paper if you use this code in published work:
> Rick Chartrand, "Numerical differentiation of noisy, nonsmooth data," ISRN Applied Mathematics, Vol. 2011, Article ID 164564, 2011.
[docs-stable-img]: https://img.shields.io/badge/docs-stable-blue.svg
[docs-stable-url]: https://adrhill.github.io/NoiseRobustDifferentiation.jl/stable/
[docs-latest-img]: https://img.shields.io/badge/docs-dev-blue.svg
[docs-latest-url]: https://adrhill.github.io/NoiseRobustDifferentiation.jl/dev/
[ci-img]: https://github.com/adrhill/NoiseRobustDifferentiation.jl/workflows/CI/badge.svg
[ci-url]: https://github.com/adrhill/NoiseRobustDifferentiation.jl/actions?query=workflow%3ACI
[codecov-img]: https://codecov.io/gh/adrhill/NoiseRobustDifferentiation.jl/branch/main/graph/badge.svg
[codecov-url]: https://codecov.io/gh/adrhill/NoiseRobustDifferentiation.jl | NoiseRobustDifferentiation | https://github.com/adrhill/NoiseRobustDifferentiation.jl.git |
|
[
"BSD-3-Clause"
] | 0.2.4 | 108fc03c3419164898540545988d0df13fa5239b | docs | 3546 | # Examples
## Simple example
First we generate a small dataset by adding uniform noise to ``f(x)=|x|``
```@example abs_small
using Random, Distributions
n = 50
x = range(-5, 5, length=n)
dx = x[2] - x[1]
f_noisy = abs.(x) + rand(Uniform(-0.05, 0.05), n)
; nothing # hide
```
then we call `tvdiff` using a regularization parameter of `α=0.2` for 100 iterations.
```@example abs_small
using NoiseRobustDifferentiation
include("plot_examples.jl") # hide
û = tvdiff(f_noisy, 100, 0.2, dx=dx)
nothing # hide
```
We compare the results to the true derivative ``u(x)=sign(x)`` and a naive implementation of finite differences.
```@example abs_small
û_FDM = diff(f_noisy) / dx # FDM
plot_example_abs(abs, sign, x, f_noisy, û_FDM, û) # hide
savefig("abs_small.svg"); nothing # hide
```

## Examples from paper
Let's reconstruct the figures from Rick Chartrand's paper *"Numerical differentiation of noisy, non-smooth data"*.
The corresponding datasets can be found under `/docs/data`.
### Small-scale example
The small-scale example in the paper is a more noisy variant of our first example. We start by loading the data.
```@example paper_small
using NoiseRobustDifferentiation
using CSV, DataFrames
include("plot_examples.jl") # hide
file = CSV.File("../data/demo_small.csv")
df = DataFrame(file)
data = df.noisyabsdata
plot_FDM(data) # hide
savefig("paper_small_fdm.svg"); nothing # hide
```
Applying finite differences amplifies the noise and gives an inaccurate result:

A strongly regularized result is obtained by calling `tvdiff` with `α=0.2`.
```@example paper_small
û = tvdiff(data, 500, 0.2, scale="small", dx=0.01, ε=1e-6)
plot_tvdiff(data, û) # hide
savefig("paper_small.svg"); nothing # hide
```

Because of keyword argument defaults, this is equal to calling
```julia
û = tvdiff(data, 500, 0.2)
```
A better result is obtained after 7000 iterations, though differences are minimal.
```@example paper_small
û = tvdiff(data, 7000, 0.2)
plot_tvdiff(data, û) # hide
savefig("paper_small7000.svg")# hide
plot_tvdiff_all(data, û) # hide
savefig("paper_small_all.svg"); nothing # hide
```

### Large-scale example
The data in this example was obtained from a whole-room calorimeter.
```@example paper_large
using NoiseRobustDifferentiation
using CSV, DataFrames
include("plot_examples.jl") # hide
file = CSV.File("../data/demo_large.csv")
df = DataFrame(file)
data = df.largescaledata
plot_demo_large_diff(data) # hide
savefig("paper_large_fdm.png"); nothing # hide
```
Computing derivates using naive finite differences gives a useless result:

Using `tvdiff` with `ε=1e-9`, we obtain a strongly regularized result. Larger values of ``\varepsilon`` improve conditioning and speed, while smaller values give more accurate results with sharper jumps.
```@example paper_large
û = tvdiff(data, 40, 1e-1, scale="large", precond="amg_rs", ε=1e-9)
plot_demo_large_tvdiff(data, û) # hide
savefig("paper_large_jump.png") # hide
plot_tvdiff_all(data, û) # hide
savefig("paper_large_all.png"); nothing # hide
```

Raising ``\varepsilon`` to `1e-7` therefore gives a smoother result; however, jumps in the derivative are also smoothed away.
```@example paper_large
û = tvdiff(data, 40, 1e-1, scale="large", precond="amg_rs", ε=1e-7)
plot_demo_large_tvdiff(data, û) # hide
savefig("paper_large_smooth.png"); nothing # hide
```

| NoiseRobustDifferentiation | https://github.com/adrhill/NoiseRobustDifferentiation.jl.git |
|
[
"BSD-3-Clause"
] | 0.2.4 | 108fc03c3419164898540545988d0df13fa5239b | docs | 1808 | # NoiseRobustDifferentiation.jl
Julia reimplementation of *Total Variation Regularized Numerical Differentiation* (TVDiff).
Based on [Rick Chartrand's original Matlab code](https://sites.google.com/site/dnartrahckcir/home/tvdiff-code) and [Simone Sturniolo's Python reimplementation](https://github.com/stur86/tvregdiff).
```@contents
Pages = ["index.md", "examples.md"]
```
## Installation
To install this package and its dependencies, open the Julia REPL and run
```julia
julia> ]add NoiseRobustDifferentiation
```
Julia 1.5 is required.
## Functions
```@docs
tvdiff
```
## Differences to MATLAB Code
### Conjugate gradient method
The [original code](https://sites.google.com/site/dnartrahckcir/home/tvdiff-code) uses MATLAB's inbuilt function `pcg()`, which implements the preconditioned conjugate gradients method (PCG). This code uses the conjugate gradients method (CG) from [IterativeSolvers.jl](https://github.com/JuliaMath/IterativeSolvers.jl).
Refer to the [implementation details](https://juliamath.github.io/IterativeSolvers.jl/dev/linear_systems/cg/#Implementation-details-1) for a brief discussion of differences between both methods.
Since the CG method from IterativeSolvers.jl allows for preconditioners, most of the options from [Preconditioners.jl](https://github.com/mohamed82008/Preconditioners.jl) are implemented using default parameters.
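As a minimal sketch of this pattern (the system below is a made-up SPD matrix, not part of `tvdiff` itself):
```julia
using IterativeSolvers, LinearAlgebra, Preconditioners, SparseArrays

# A made-up sparse symmetric positive definite system
A = sprandn(100, 100, 0.05)
A = A * A' + 100I
b = randn(100)

# CG with a diagonal preconditioner, as used by the "diagonal" option
P = DiagonalPreconditioner(A)
x = cg(A, b; Pl=P, reltol=1e-6, maxiter=100)
```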
### New parameters
- `precond`: Method used for preconditioning.
- `cg_tol`: Tolerance used in conjugate gradient method.
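Both can be combined in one call, for example (parameter values are illustrative):
```julia
û = tvdiff(data, 40, 1e-1; scale="large", precond="diagonal", cg_tol=1e-8)
```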
### Other differences
- `diag_flag` has been renamed to `show_diagn`
- removed plotting flag
## Citation
Please cite the following paper if you use this code in published work:
> Rick Chartrand, "Numerical differentiation of noisy, nonsmooth data," ISRN Applied Mathematics, Vol. 2011, Article ID 164564, 2011. | NoiseRobustDifferentiation | https://github.com/adrhill/NoiseRobustDifferentiation.jl.git |