licenses sequencelengths 1 3 | version stringclasses 677 values | tree_hash stringlengths 40 40 | path stringclasses 1 value | type stringclasses 2 values | size stringlengths 2 8 | text stringlengths 25 67.1M | package_name stringlengths 2 41 | repo stringlengths 33 86 |
---|---|---|---|---|---|---|---|---|
[
"MIT"
] | 0.0.5 | 5ba5cdf37fb2104dd6a653b20be82b0ceb13888b | docs | 581 | # CellularAutomata.jl
This package is meant to be a complete open-source reference for everything regarding Cellular Automata.
In it you will find ways to create one- and two-dimensional Cellular Automata models, as well as functions
to analyze them.
## General usage
The main function is `CellularAutomaton`, where `rule` is a function returning the next state of the Cellular Automaton.
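For example, adapted from the Elementary Cellular Automata examples later in these docs, a rule 30 automaton can be built and evolved as follows:
```julia
using CellularAutomata

generations = 50
ncells = 111
starting_val = zeros(Bool, ncells)
starting_val[Int(floor(ncells/2)+1)] = 1   # a single live cell in the middle
ca = CellularAutomaton(DCA(30), starting_val, generations)
ca.evolution   # matrix holding the full history of the automaton
```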
## Contributions
Contributions are more than welcome! I do not have much bandwidth for this package at the moment, but I will try to take care of any contributions should they come.
| CellularAutomata | https://github.com/MartinuzziFrancesco/CellularAutomata.jl.git |
|
[
"MIT"
] | 0.0.5 | 5ba5cdf37fb2104dd6a653b20be82b0ceb13888b | docs | 50 | # General APIs
```@docs
CellularAutomaton
```
| CellularAutomata | https://github.com/MartinuzziFrancesco/CellularAutomata.jl.git |
|
[
"MIT"
] | 0.0.5 | 5ba5cdf37fb2104dd6a653b20be82b0ceb13888b | docs | 164 | # One dimensional Cellular Automata
## Discrete Cellular Automata
```@docs
DCA
```
```@docs
TCA
```
## Continuous Cellular Automata
```@docs
CCA
```
| CellularAutomata | https://github.com/MartinuzziFrancesco/CellularAutomata.jl.git |
|
[
"MIT"
] | 0.0.5 | 5ba5cdf37fb2104dd6a653b20be82b0ceb13888b | docs | 89 | # Two dimensional Cellular Automata
## Life-Like Cellular Automata
```@docs
Life
```
| CellularAutomata | https://github.com/MartinuzziFrancesco/CellularAutomata.jl.git |
|
[
"MIT"
] | 0.0.5 | 5ba5cdf37fb2104dd6a653b20be82b0ceb13888b | docs | 6600 | # One Dimensional Cellular Automata
## Elementary Cellular Automata
Elementary Cellular Automata (ECA) have a radius of one and can be in only two possible states. Here we show a couple of examples:
[Rule 18](http://atlas.wolfram.com/01/01/18/)
```@example eca
using CellularAutomata, Plots
states = 2
radius = 1
generations = 50
ncells = 111
starting_val = zeros(Bool, ncells)
starting_val[Int(floor(ncells/2)+1)] = 1
rule = 18
ca = CellularAutomaton(DCA(rule), starting_val, generations)
heatmap(ca.evolution,
yflip=true,
c=cgrad([:white, :black]),
legend = :none,
axis=false,
ticks=false)
```
[Rule 30](http://atlas.wolfram.com/01/01/30/)
```@example eca
states = 2
radius = 1
generations = 50
ncells = 111
starting_val = zeros(Bool, ncells)
starting_val[Int(floor(ncells/2)+1)] = 1
rule = 30
ca = CellularAutomaton(DCA(rule), starting_val, generations)
heatmap(ca.evolution,
yflip=true,
c=cgrad([:white, :black]),
legend = :none,
axis=false,
ticks=false)
```
## Multiple States Cellular Automata
General Cellular Automata follow the same rules as ECA, but they can have a radius larger than one and/or more than two states. Examples are provided below for every possible combination, starting with a Cellular Automaton with three states.
[Rule 7110222193934](https://www.wolframalpha.com/input/?i=rule+7%2C110%2C222%2C193%2C934+k%3D3&lk=3)
```@example msca
using CellularAutomata, Plots
states = 3
radius = 1
generations = 50
ncells = 111
starting_val = zeros(ncells)
starting_val[Int(floor(ncells/2)+1)] = 2
rule = 7110222193934
ca = CellularAutomaton(DCA(rule,states=states,radius=radius),
starting_val, generations)
heatmap(ca.evolution,
yflip=true,
c=cgrad([:white, :black]),
legend = :none,
axis=false,
ticks=false,
size=(ncells*10, generations*10))
```
## Larger Radius Cellular Automata
The following example shows a Cellular Automaton with a radius of two and only two possible states:
[Rule 1388968789](https://www.wolframalpha.com/input/?i=rule+1%2C388%2C968%2C789+r%3D2&lk=3)
```@example lrca
using CellularAutomata, Plots
states = 2
radius = 2
generations = 30
ncells = 111
starting_val = zeros(ncells)
starting_val[Int(floor(ncells/2)+1)] = 1
rule = 1388968789
ca = CellularAutomaton(DCA(rule,states=states,radius=radius),
starting_val, generations)
heatmap(ca.evolution,
yflip=true,
c=cgrad([:white, :black]),
legend = :none,
axis=false,
ticks=false,
size=(ncells*10, generations*10))
```
And finally, three states with a radius equal to two:
[Rule 914752986721674989234787899872473589234512347899](https://www.wolframalpha.com/input/?i=CA+k%3D3+r%3D2+rule+914752986721674989234787899872473589234512347899&lk=3)
```@example lrca
states = 3
radius = 2
generations = 30
ncells = 111
starting_val = zeros(ncells)
starting_val[Int(floor(ncells/2)+1)] = 2
rule = 914752986721674989234787899872473589234512347899
ca = CellularAutomaton(DCA(rule,states=states,radius=radius),
starting_val, generations)
heatmap(ca.evolution,
yflip=true,
c=cgrad([:white, :black]),
legend = :none,
axis=false,
ticks=false,
size=(ncells*10, generations*10))
```
It is also possible to specify asymmetric neighborhoods by passing a tuple to the `radius` keyword argument, detailing the number of neighbors to consider to the left and right of the cell:
[Rule 1235](https://www.wolframalpha.com/input/?i=radius+3%2F2+rule+1235&lk=3)
```@example lrca
states = 2
radius = (2,1)
generations = 30
ncells = 111
starting_val = zeros(ncells)
starting_val[Int(floor(ncells/2)+1)] = 1
rule = 1235
ca = CellularAutomaton(DCA(rule,states=states,radius=radius),
starting_val, generations)
heatmap(ca.evolution,
yflip=true,
c=cgrad([:white, :black]),
legend = :none,
axis=false,
ticks=false,
size=(ncells*10, generations*10))
```
## Totalistic Cellular Automata
Totalistic Cellular Automata take the sum of the neighborhood to calculate the value of the next step.
[Rule 1635](http://atlas.wolfram.com/01/02/1635/)
```@example tca
using CellularAutomata, Plots
states = 3
radius = 1
generations = 50
ncells = 111
starting_val = zeros(Integer, ncells)
starting_val[Int(floor(ncells/2)+1)] = 1
rule = 1635
ca = CellularAutomaton(DCA(rule, states=states),
starting_val, generations)
heatmap(ca.evolution,
yflip=true,
c=cgrad([:white, :black]),
legend = :none,
axis=false,
ticks=false)
```
[Rule 107398](http://atlas.wolfram.com/01/03/107398/)
```@example tca
states = 4
radius = 1
generations = 50
ncells = 111
starting_val = zeros(Integer, ncells)
starting_val[Int(floor(ncells/2)+1)] = 1
rule = 107398
ca = CellularAutomaton(DCA(rule, states=states),
starting_val, generations)
heatmap(ca.evolution,
yflip=true,
c=cgrad([:white, :black]),
legend = :none,
axis=false,
ticks=false)
```
Here are some results for a larger radius, using a radius of two as an example.
[Rule 53](http://atlas.wolfram.com/01/06/Rules/53/index.html#01_06_9_53)
```julia
states = 2
radius = 2
generations = 50
ncells = 111
starting_val = zeros(Integer, ncells)
starting_val[Int(floor(ncells/2)+1)] = 1
rule = 53
ca = CellularAutomaton(DCA(rule, radius=radius),
starting_val, generations)
heatmap(ca.evolution,
yflip=true,
c=cgrad([:white, :black]),
legend = :none,
axis=false,
ticks=false)
```
## Continuous Cellular Automata
Continuous Cellular Automata work in the same way as the totalistic ones, but with real values. The examples are taken from the book [NKS](https://www.wolframscience.com/nks/p159--continuous-cellular-automata/).
Rule 0.025
```@example cca
using CellularAutomata, Plots
generations = 50
ncells = 111
starting_val = zeros(Float64, ncells)
starting_val[Int(floor(ncells/2)+1)] = 1.0
rule = 0.025
ca = CellularAutomaton(CCA(rule), starting_val, generations)
heatmap(ca.evolution,
yflip=true,
c=cgrad([:white, :black]),
legend = :none,
axis=false,
ticks=false)
```
Rule 0.2
```@example cca
radius = 1
generations = 50
ncells = 111
starting_val = zeros(Float64, ncells)
starting_val[Int(floor(ncells/2)+1)] = 1.0
rule = 0.2
ca = CellularAutomaton(CCA(rule, radius=radius),
starting_val, generations)
heatmap(ca.evolution,
yflip=true,
c=cgrad([:white, :black]),
legend = :none,
axis=false,
ticks=false)
```
| CellularAutomata | https://github.com/MartinuzziFrancesco/CellularAutomata.jl.git |
|
[
"MIT"
] | 0.0.5 | 5ba5cdf37fb2104dd6a653b20be82b0ceb13888b | docs | 1201 | # Two Dimensional Cellular Automata
## Game of Life
This package can also reproduce Conway's Game of Life, and any variation based on it. The ```Life()``` function takes a tuple containing the numbers of neighbors that will give birth to a new cell or make an existing cell survive. (For example, in Conway's Life the tuple (3, (2,3)) indicates that having 3 live neighbors will give birth to an otherwise dead cell, and having either 2 or 3 live neighbors will keep an alive cell alive.) The implementation follows the [Golly](http://golly.sourceforge.net/Help/changes.html) notation.
This script reproduces the famous glider:
```@example life
using CellularAutomata, Plots
glider = [[0, 0, 1, 0, 0] [0, 0, 0, 1, 0] [0, 1, 1, 1, 0]]
space = zeros(Bool, 30, 30)
insert = 1
space[insert:insert+size(glider, 1)-1, insert:insert+size(glider, 2)-1] = glider
gens = 100
space_gliding = CellularAutomaton(Life((3, (2,3))), space, gens)
anim = @animate for i = 1:gens
heatmap(space_gliding.evolution[:,:,i],
yflip=true,
c=cgrad([:white, :black]),
legend = :none,
size=(1080,1080),
axis=false,
ticks=false)
end
gif(anim, "glider.gif", fps = 15)
```
| CellularAutomata | https://github.com/MartinuzziFrancesco/CellularAutomata.jl.git |
|
[
"MIT"
] | 0.1.2 | 46b94ab89b7cfc439e8a34d9022eedcd0897b1fc | code | 660 | using Pkg
Pkg.add("Documenter")
Pkg.add("Dispatcher")
using Documenter, Dispatcher, DispatcherCache
# Make src directory available
push!(LOAD_PATH,"../src/")
# Make documentation
makedocs(
modules = [DispatcherCache],
format = :html,
sitename = "DispatcherCache.jl",
authors = "Corneliu Cofaru, 0x0α Research",
clean = true,
debug = true,
pages = [
"Introduction" => "index.md",
"Usage examples" => "examples.md",
"API Reference" => "api.md",
]
)
# Deploy documentation
deploydocs(
repo = "github.com/zgornel/DispatcherCache.jl.git",
target = "build",
deps = nothing,
make = nothing
)
| DispatcherCache | https://github.com/zgornel/DispatcherCache.jl.git |
|
[
"MIT"
] | 0.1.2 | 46b94ab89b7cfc439e8a34d9022eedcd0897b1fc | code | 1221 | # Dispatcher.jl - Adaptive hash-graph persistency mechanism for computational
# task graphs written at 0x0α Research by Corneliu Cofaru, 2019
"""
DispatcherCache.jl is a `hash-chain` optimizer for Dispatcher delayed
execution graphs. It employs a hashing mechanism to check whether the
state associated to a node in the `DispatchGraph` that is to
be executed has already been `hashed` (and hence, an output is available)
or whether it is new or changed. Depending on the current state (by 'state'
one understands the called function source code, input arguments and
other input node dependencies), the current task becomes a load-from-disk
or an execute-and-store-to-disk operation. This is done in such a manner that
the minimum number of load/execute operations is performed, minimizing
both persistency and computational demands.
"""
module DispatcherCache
using Serialization
using Dispatcher
using IterTools
using JSON
using TranscodingStreams
using CodecBzip2
using CodecZlib
import Dispatcher: run!
export add_hash_cache!
include("constants.jl")
include("utils.jl")
include("wrappers.jl")
include("hash.jl")
include("core.jl")
end # module
| DispatcherCache | https://github.com/zgornel/DispatcherCache.jl.git |
|
[
"MIT"
] | 0.1.2 | 46b94ab89b7cfc439e8a34d9022eedcd0897b1fc | code | 189 | # Useful constants
const DEFAULT_CACHE_DIR = "./__hash_cache__"
const DEFAULT_COMPRESSION = "none"
const DEFAULT_HASHCHAIN_FILENAME = "hashchain.json"
const DEFAULT_HASHCACHE_DIR = "cache"
| DispatcherCache | https://github.com/zgornel/DispatcherCache.jl.git |
|
[
"MIT"
] | 0.1.2 | 46b94ab89b7cfc439e8a34d9022eedcd0897b1fc | code | 8898 | """
add_hash_cache!(graph, endpoints=[], uncacheable=[]
[; compression=DEFAULT_COMPRESSION, cachedir=DEFAULT_CACHE_DIR])
Optimizes a delayed execution graph `graph::DispatchGraph`
by wrapping individual nodes in load-from-disk or execute-and-store
wrappers, depending on the state of the disk cache and of the graph.
The function modifies the input graph in place and should be called each time
on a fresh, unmodified `graph`, *not* on an already modified one.
Once the original graph is modified, calling `run!` on it will,
if the cache is already present, load the topmost consistent key
or alternatively re-run and store the outputs of nodes which have new state.
# Arguments
* `exec::Executor` the `Dispatcher.jl` executor
* `graph::DispatchGraph` input dispatch graph
* `endpoints::AbstractVector` leaf nodes for which caching will occur;
nodes that depend on these will not be cached. The nodes can be specified
either by label or by the node object itself
* `uncacheable::AbstractVector` nodes that will never be cached and
will always be executed (these nodes are still hashed and their hashes
influence upstream node hashes as well)
# Keyword arguments
* `compression::String` enables compression of the node outputs.
Available options are `"none"`, for no compression, `"bz2"` or `"bzip2"`
for BZIP compression and `"gz"` or `"gzip"` for GZIP compression
* `cachedir::String` the cache directory.
Note: This function should be used with care as it modifies the input
dispatch graph. One way to handle this is to write a function that
generates the dispatch graph and to call `add_hash_cache!` each time on
the distinct, functionally identical graphs, as in the sketch below.
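# Examples
A minimal sketch of that pattern; the functions, graph and cache directory below
are illustrative only and not part of the package:
```julia
using Dispatcher, DispatcherCache
foo(x) = x + 1
bar(x, y) = x - y
function make_graph()
    op1 = @op foo(1)
    op2 = @op bar(op1, 10)
    return DispatchGraph(op2), op2
end
graph, endpoint = make_graph()          # always start from a fresh, unmodified graph
add_hash_cache!(graph, [endpoint], cachedir="./__cache__")
run!(AsyncExecutor(), graph)            # loads cached outputs where possible, executes the rest
```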
"""
function add_hash_cache!(graph::DispatchGraph,
endpoints::AbstractVector=[],
uncacheable::AbstractVector=[];
compression::String=DEFAULT_COMPRESSION,
cachedir::String=DEFAULT_CACHE_DIR)
# Checks
if isempty(endpoints)
@warn "No enpoints for graph, will not process dispatch graph."
return nothing
end
# Initializations
_endpoints = map(n->get_node(graph, n), endpoints)
_uncacheable = imap(n->get_node(graph, n), uncacheable)
subgraph = Dispatcher.subgraph(graph, _endpoints)
work = collect(nodes(graph)) # nodes to be traversed
solved = Set{DispatchNode}() # computable tasks
node2hash = Dict{DispatchNode, String}() # node => hash mapping
storable = Set{String}() # hashes of nodes with storable output
updates = Dict{DispatchNode, DispatchNode}()
# Load hashchain
hashchain = load_hashchain(cachedir, compression=compression)
# Traverse dispatch graph
while !isempty(work)
node = popfirst!(work)
deps = dependencies(node)
if isempty(deps) || issubset(deps, solved)
# Node is solvable
push!(solved, node)
_hash_node, _hash_comp = node_hash(node, node2hash)
node2hash[node] = _hash_node
skipcache = node in _uncacheable || !(node in subgraph.nodes)
# Wrap nodes
if _hash_node in keys(hashchain) && !skipcache &&
!(_hash_node in storable)
# Hash match and output cacheable
wrap_to_load!(updates, node, _hash_node,
cachedir=cachedir,
compression=compression)
elseif _hash_node in keys(hashchain) && skipcache
# Hash match and output *non-cachable*
wrap_to_store!(updates, node, _hash_node,
cachedir=cachedir,
compression=compression,
skipcache=skipcache)
else
# Hash miss
hashchain[_hash_node] = _hash_comp
push!(storable, _hash_node)
wrap_to_store!(updates, node, _hash_node,
cachedir=cachedir,
compression=compression,
skipcache=skipcache)
end
else
# Non-solvable node
push!(work, node)
end
end
# Update graph
for i in 1:length(graph.nodes)
graph.nodes[i] = updates[graph.nodes[i]]
end
# Write hashchain
store_hashchain(hashchain, cachedir, compression=compression)
return updates
end
"""
run!(exec, graph, endpoints, uncacheable=[]
[;compression=DEFAULT_COMPRESSION, cachedir=DEFAULT_CACHE_DIR])
Runs the `graph::DispatchGraph` and loads or executes and stores the
outputs of the nodes in the subgraph whose leaf nodes are given by
`endpoints`. Nodes in `uncacheable` are not locally cached.
# Arguments
* `exec::Executor` the `Dispatcher.jl` executor
* `graph::DispatchGraph` input dispatch graph
* `endpoints::AbstractVector` leaf nodes for which caching will occur;
nodes that depend on these will not be cached. The nodes can be specified
either by label or by the node object itself
* `uncacheable::AbstractVector` nodes that will never be cached and will
always be executed (these nodes are still hashed and their hashes influence
upstream node hashes as well)
# Keyword arguments
* `compression::String` enables compression of the node outputs.
Available options are `"none"`, for no compression, `"bz2"` or `"bzip2"`
for BZIP compression and `"gz"` or `"gzip"` for GZIP compression
* `cachedir::String` The cache directory.
# Examples
```julia
julia> using Dispatcher
using DispatcherCache
# Some functions
foo(x) = begin sleep(1); x end
bar(x) = begin sleep(1); x+1 end
baz(x,y) = begin sleep(1); x-y end
# Make a dispatch graph out of some operations
op1 = @op foo(1)
op2 = @op bar(2)
op3 = @op baz(op1, op2)
D = DispatchGraph(op3)
# DispatchGraph({3, 2} directed simple Int64 graph,
# NodeSet(DispatchNode[
# Op(DeferredFuture at (1,1,241),baz,"baz"),
# Op(DeferredFuture at (1,1,239),foo,"foo"),
# Op(DeferredFuture at (1,1,240),bar,"bar")]))
julia> # First run, writes results to disk (lasts 2 seconds)
result_node = [op3] # the node for which we want results
cachedir = "./__cache__" # directory does not exist
@time r = run!(AsyncExecutor(), D,
result_node, cachedir=cachedir)
println("result (first run) = \$(fetch(r[1].result.value))")
# [info | Dispatcher]: Executing 3 graph nodes.
# [info | Dispatcher]: Node 1 (Op<baz, Op<foo>, Op<bar>>): running.
# [info | Dispatcher]: Node 2 (Op<foo, Int64>): running.
# [info | Dispatcher]: Node 3 (Op<bar, Int64>): running.
# [info | Dispatcher]: Node 2 (Op<foo, Int64>): complete.
# [info | Dispatcher]: Node 3 (Op<bar, Int64>): complete.
# [info | Dispatcher]: Node 1 (Op<baz, Op<foo>, Op<bar>>): complete.
# [info | Dispatcher]: All 3 nodes executed.
# 2.029992 seconds (11.53 k allocations: 1.534 MiB)
# result (first run) = -2
julia> # Second run, loads the result directly from ./__cache__
@time r = run!(AsyncExecutor(), D,
[op3], cachedir=cachedir)
println("result (second run) = \$(fetch(r[1].result.value))")
# [info | Dispatcher]: Executing 1 graph nodes.
# [info | Dispatcher]: Node 1 (Op<baz>): running.
# [info | Dispatcher]: Node 1 (Op<baz>): complete.
# [info | Dispatcher]: All 1 nodes executed.
# 0.005257 seconds (2.57 k allocations: 478.359 KiB)
# result (second run) = -2
julia> readdir(cachedir)
# 2-element Array{String,1}:
# "cache"
# "hashchain.json"
```
"""
function run!(exec::Executor,
graph::DispatchGraph,
endpoints::AbstractVector,
uncacheable::AbstractVector=[];
compression::String=DEFAULT_COMPRESSION,
cachedir::String=DEFAULT_CACHE_DIR)
# Make a copy of the input graph that will be modified
# and mappings from the original nodes to the copies
tmp_graph = Base.deepcopy(graph)
node2tmpnode = Dict(graph.nodes[i] => tmp_graph.nodes[i]
for i in 1:length(graph.nodes))
# Construct the endpoints and uncachable node lists
# for the copied dispatch graph
tmp_endpoints = [node2tmpnode[get_node(graph, node)]
for node in endpoints]
tmp_uncacheable = [node2tmpnode[get_node(graph, node)]
for node in uncacheable]
# Modify input graph
updates = add_hash_cache!(tmp_graph,
tmp_endpoints,
tmp_uncacheable,
compression=compression,
cachedir=cachedir)
# Run temporary graph
return run!(exec, [updates[e] for e in tmp_endpoints])
end
| DispatcherCache | https://github.com/zgornel/DispatcherCache.jl.git |
|
[
"MIT"
] | 0.1.2 | 46b94ab89b7cfc439e8a34d9022eedcd0897b1fc | code | 3437 | """
node_hash(node, key2hash)
Calculates and returns the hash corresponding to a `Dispatcher` task graph
node i.e. `DispatchNode` using the hashes of its dependencies, input arguments
and source code of the function associated to the `node`. Any available hashes
are taken from `key2hash`.
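# Examples
```julia
julia> using Dispatcher
using DispatcherCache: node_hash
bar(x) = x                               # illustrative function, not part of the package
bar1 = @op bar(10)
h, parts = node_hash(bar1, Dict{Op, String}())
# `h` is the hexadecimal node hash; `parts` holds the "code", "args" and "deps" hashes
```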
"""
function node_hash(node::DispatchNode, key2hash::Dict{T,String}) where T
hash_code = source_hash(node)
hash_arguments = arg_hash(node)
hash_dependencies = dep_hash(node, key2hash)
node_hash = __hash(join(hash_code, hash_arguments, hash_dependencies))
subgraph_hash = Dict("code" => hash_code,
"args" => hash_arguments,
"deps" => hash_dependencies)
return node_hash, subgraph_hash
end
"""
source_hash(node)
Hashes the lowered representation of the source code of the function
associated with `node`. Useful for `Op` nodes; the other node types
do not have any associated source code.
# Examples
```julia
julia> using DispatcherCache: source_hash
f(x) = x + 1
g(x) = begin
#comment
x + 1
end
node_f = @op f(1)
node_g = @op g(10)
# Test
source_hash(node_f) == source_hash(node_g)
# true
```
"""
source_hash(node::Op) = begin
f = node.func
local code
try
code = join(code_lowered(f)[1].code, "\n")
catch
code = get_label(node)
@warn "Cannot hash code for node $(code) (using label)."
end
return __hash(code)
end
source_hash(node::DispatchNode) = __hash(nothing)
"""
arg_hash(node)
Hash the data arguments (in certain cases configuration fields) of the
dispatch `node`.
# Examples
```julia
julia> using DispatcherCache: arg_hash, __hash
f(x) = println("\$x")
arg = "argument"
node = @op f(arg)
arg_hash(node)
# "d482b7b1b5357c33"
julia> arg_hash(node) == __hash(hash(nothing) + hash(arg) + hash(typeof(arg)))
# true
```
"""
arg_hash(node::Op) = begin
h = hash(nothing)
arguments = (arg for arg in node.args if !(arg isa DispatchNode))
if !isempty(arguments)
h += mapreduce(+, arguments) do x
hash(x) + hash(typeof(x))
end
end
kwarguments = ((k,v) for (k,v) in node.kwargs if !(v isa DispatchNode))
if !isempty(kwarguments)
h += mapreduce(+, kwarguments) do x
k, v = x
hash(k) + hash(v) + hash(typeof(v))
end
end
return __hash(h)
end
arg_hash(node::DataNode) = __hash(node.data)
arg_hash(node::IndexNode) = __hash(node.index)
arg_hash(node::DispatchNode) = __hash(nothing)
"""
dep_hash(node, key2hash)
Hash the dispatch node dependencies of `node` using their existing hashes if possible.
"""
dep_hash(node::DispatchNode, key2hash) = begin
h = __hash(nothing)
nodes = dependencies(node)
if isempty(nodes)
return __hash(h)
else
for node in nodes
h *= get(key2hash, node, node_hash(node, key2hash)[1])
end
return __hash(h)
end
end
"""
__hash(something)
Return a hexadecimal string corresponding to the hash of sum
of the hashes of the value and type of `something`.
# Examples
```julia
julia> using DispatcherCache: __hash
__hash([1,2,3])
# "f00429a0d65eb7cb"
```
"""
function __hash(something)
h = hash(hash(typeof(something)) + hash(something))
return string(h, base=16)
end
| DispatcherCache | https://github.com/zgornel/DispatcherCache.jl.git |
|
[
"MIT"
] | 0.1.2 | 46b94ab89b7cfc439e8a34d9022eedcd0897b1fc | code | 4977 | """
get_node(graph, label)
Returns the node corresponding to `label`.
"""
get_node(graph::DispatchGraph, node::T) where T<:DispatchNode = node
get_node(graph::DispatchGraph, label::T) where T<:AbstractString = begin
found = Set{DispatchNode}()
for node in nodes(graph)
if has_label(node)
get_label(node) == label && push!(found, node)
end
end
length(found) > 1 && throw(ErrorException("Labels in dispatch graph are not unique."))
length(found) < 1 && throw(ErrorException("No nodes with label $label found."))
return pop!(found)
end
get_node(graph::DispatchGraph, node::T) where T = begin
throw(ArgumentError("A node identifier can be either a " *
"::DispatchNode or ::AbstractString."))
end
"""
load_hashchain(cachedir [; compression=DEFAULT_COMPRESSION])
Loads the hashchain file found in the directory `cachedir`. Before
loading, the `compression` value is checked against the one stored
in the hashchain file (both have to match). If the file does not exist,
it is created.
"""
function load_hashchain(cachedir::String=DEFAULT_CACHE_DIR;
compression::String=DEFAULT_COMPRESSION)
cachedir = abspath(expanduser(cachedir))
file = joinpath(cachedir, DEFAULT_HASHCHAIN_FILENAME)
cachedir_outputs = joinpath(cachedir, DEFAULT_HASHCACHE_DIR)
if !ispath(cachedir_outputs)
@debug "Creating the cache directory..."
mkpath(cachedir_outputs)
end
local hashchain
if !isfile(file)
@debug "Creating a new hashchain file $file..."
hashchain = Dict{String, Any}()
store_hashchain(hashchain, cachedir, compression=compression)
else
local data
open(file, "r") do fid # read the whole JSON hashchain file
data = JSON.parse(read(fid, String))
end
if compression != data["compression"]
throw(ErrorException("Compression mismatch: $compression vs. "*
"$(data["compression"])"))
end
hashchain = data["hashchain"]
# Clean up hashchain based on what exists already on disk
# i.e. remove keys not found on disk
on_disk_hashes = map(filename->split(filename, ".")[1],
filter!(!isfile, readdir(cachedir_outputs)))
keys_to_delete = setdiff(keys(hashchain), on_disk_hashes)
for key in keys_to_delete
delete!(hashchain, key)
end
store_hashchain(hashchain, cachedir, compression=compression)
end
return hashchain
end
"""
store_hashchain(hashchain, cachedir=DEFAULT_CACHE_DIR [; compression=DEFAULT_COMPRESSION, version=1])
Stores the `hashchain` object in a file named `DEFAULT_HASHCHAIN_FILENAME`,
in the directory `cachedir`. The values of `compression` and `version` are
stored as well in the file.
"""
function store_hashchain(hashchain::Dict{String, Any},
cachedir::String=DEFAULT_CACHE_DIR;
compression::String=DEFAULT_COMPRESSION,
version::Int=1)
cachedir = abspath(expanduser(cachedir))
if !ispath(cachedir)
@debug "Creating the cache directory..."
mkpath(cachedir)
end
file = joinpath(cachedir, DEFAULT_HASHCHAIN_FILENAME)
hashchain = Dict("version" => version,
"compression" => compression,
"hashchain" => hashchain)
open(file, "w+") do fid
write(fid, JSON.json(hashchain, 4))
end
end
"""
get_compressor(compression, action)
Return a `TranscodingStreams` compatible compressor or decompressor
based on the values of `compression` and `action`.
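# Examples
```julia
julia> using DispatcherCache: get_compressor
get_compressor("gz", "compress")  # as exercised in the package's compression tests
# GzipCompressorStream
```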
"""
function get_compressor(compression::AbstractString, action::AbstractString)
# Checks
if !(compression in ["bz2", "bzip2", "gz", "gzip", "none"])
throw(ErrorException("Unknown compression option,"*
" aborting."))
end
if !(action in ["compress", "decompress"])
throw(ErrorException("The action can only be \"compress\" or \"decompress\"."))
end
# Get compressor/decompressor
if compression == "bz2" || compression == "bzip2"
compressor = ifelse(action == "compress",
Bzip2CompressorStream,
Bzip2DecompressorStream)
elseif compression == "gz" || compression == "gzip"
compressor = ifelse(action == "compress",
GzipCompressorStream,
GzipDecompressorStream)
elseif compression == "none"
compressor = NoopStream # no compression
end
return compressor
end
"""
root_nodes(graph::DispatchGraph)
Return an iterable of all nodes in the graph with no input edges.
"""
function root_nodes(graph::DispatchGraph)
imap(n->graph.nodes[n], filter(1:nv(graph.graph)) do node_index
indegree(graph.graph, node_index) == 0
end)
end
| DispatcherCache | https://github.com/zgornel/DispatcherCache.jl.git |
|
[
"MIT"
] | 0.1.2 | 46b94ab89b7cfc439e8a34d9022eedcd0897b1fc | code | 5949 | """
wrap_to_load!(updates, node, nodehash;
cachedir=DEFAULT_CACHE_DIR,
compression=DEFAULT_COMPRESSION)
Generates a new dispatch node that corresponds to `node::DispatchNode`
and which loads a file from the `cachedir` cache directory whose name and extension
depend on `nodehash` and `compression` and contents are the output of `node`.
The generated node is added to `updates` which maps `node` to the generated node.
"""
function wrap_to_load!(updates::Dict{DispatchNode, DispatchNode},
node::DispatchNode,
nodehash::String;
cachedir::String=DEFAULT_CACHE_DIR,
compression::String=DEFAULT_COMPRESSION)
# Define the loading wrapper function
"""
Simple load wrapper.
"""
function loading_wrapper()
_cachedir = abspath(joinpath(expanduser(cachedir), DEFAULT_HASHCACHE_DIR))
if compression != "none"
extension = ".$compression"
operation = "LOAD-UNCOMPRESS"
else
extension = ".bin"
operation = "LOAD"
end
filepath = joinpath(_cachedir, nodehash * extension)
decompressor = get_compressor(compression, "decompress")
label = _labelize(node, nodehash)
@debug "[$nodehash][$(label)] $operation (compression=$compression)"
if isfile(filepath)
result = open(decompressor, filepath, "r") do fid
deserialize(fid)
end
else
throw(ErrorException("Cache file $filepath is missing."))
end
return result
end
# Add wrapped node to updates (no arguments to update :)
newnode = Op(loading_wrapper)
newnode.label = _labelize(node, nodehash)
push!(updates, node => newnode)
return nothing
end
"""
wrap_to_store!(graph, node, nodehash;
cachedir=DEFAULT_CACHE_DIR,
compression=DEFAULT_COMPRESSION,
skipcache=false)
Generates a new `Op` node that corresponds to `node::DispatchNode`
and which stores the output of the execution of `node` in a file whose
name and extension depend on `nodehash` and `compression`. The generated
node is added to `updates` which maps `node` to the generated node. The
node output is stored in `cachedir`. The caching is skipped if `skipcache`
is `true`.
"""
function wrap_to_store!(updates::Dict{DispatchNode, DispatchNode},
node::DispatchNode,
nodehash::String;
cachedir::String=DEFAULT_CACHE_DIR,
compression::String=DEFAULT_COMPRESSION,
skipcache::Bool=false)
# Define the exec-store wrapper function
"""
Simple exec-store wrapper.
"""
function exec_store_wrapper(args...; kwargs...)
_cachedir = abspath(joinpath(expanduser(cachedir), DEFAULT_HASHCACHE_DIR))
if compression != "none" && !skipcache
operation = "EXEC-STORE-COMPRESS"
elseif compression == "none" && !skipcache
operation = "EXEC-STORE"
else
operation = "EXEC *ONLY*"
end
extension = ifelse(compression != "none", ".$compression", ".bin")
filepath = joinpath(_cachedir, nodehash * extension)
compressor = get_compressor(compression, "compress")
# Get calculation result
result = _run_node(node, args...; kwargs...)
# Store result
label = _labelize(node, nodehash)
@debug "[$nodehash][$(label)] $operation (compression=$compression)"
if !skipcache
if !isfile(filepath)
open(compressor, filepath, "w") do fid
serialize(fid, result)
end
else
@debug "`-->[$nodehash][$(node.label)] * SKIPPING $operation"
end
end
return result
end
# Update arguments and keyword arguments of node using the updates;
# the latter should contain at this point only solved nodes (so the
# dependencies of the current node should be good).
newnode = Op(exec_store_wrapper)
newnode.label = _labelize(node, nodehash)
newnode.args = _arguments(node, updates)
newnode.kwargs = _kwarguments(node, updates)
# Add wrapped node to updates
push!(updates, node=>newnode)
return nothing
end
# Small wrapper that executes a node
_run_node(node::Op, args...; kwargs...) = node.func(args...; kwargs...)
_run_node(node::IndexNode, args...; kwargs...) = getindex(args..., node.index)
_run_node(node::CollectNode, args...; kwargs...) = vcat(args...)
_run_node(node::DataNode, args...; kwargs...) = identity(args...)
# Small wrapper that gets the label for a wrapped node
_labelize(node::Op, args...) = get_label(node)
_labelize(node::CollectNode, args...) = get_label(node)
_labelize(node::IndexNode, nodehash::String) = "IndexNode_$nodehash"
_labelize(node::DataNode, nodehash::String) = "DataNode_$nodehash"
# Small wrapper that generates new arguments for a wrapped node
_arguments(node::Op, updates) = map(node.args) do arg
ifelse(arg isa DispatchNode, get(updates, arg, arg), arg)
end
_arguments(node::IndexNode, updates) =
tuple(get(updates, node.node, node.node))
_arguments(node::CollectNode, updates) =
Tuple(get(updates, n, n) for n in node.nodes)
_arguments(node::DataNode, updates) =
tuple(ifelse(node.data isa DispatchNode,
get(updates, node.data, node.data),
node.data))
# Small wrapper that generates new keyword arguments for a wrapped node
_kwarguments(node::Op, updates) = pairs(
NamedTuple{(node.kwargs.itr...,)}(
((map(node.kwargs.data) do kwarg
ifelse(kwarg isa DispatchNode, get(updates, kwarg, kwarg), kwarg)
end)...,)))
_kwarguments(node::DispatchNode, updates) = pairs(NamedTuple())
| DispatcherCache | https://github.com/zgornel/DispatcherCache.jl.git |
|
[
"MIT"
] | 0.1.2 | 46b94ab89b7cfc439e8a34d9022eedcd0897b1fc | code | 2237 | COMPRESSIONS = ["bz2", "bzip2", "gz", "gzip", "none", "unknown"]
KNOWN_COMPRESSIONS = ["bz2", "bzip2", "gz", "gzip", "none"]
ACTIONS = ["compress", "decompress", "unknown"]
KNOWN_ACTIONS = ["compress", "decompress"]
using DispatcherCache: get_compressor
@testset "Compression" begin
mktempdir() do dir
value = "some value"
for (compression, action) in Iterators.product(COMPRESSIONS, ACTIONS)
file = joinpath(abspath(dir), join("tmp", ".", compression))
compression in ["bz2", "bzip2"] && action=="compress" && begin
compressor = get_compressor(compression, action)
@test compressor == Bzip2CompressorStream
@test open(compressor, file, "w") do fid
write(fid, value)
end > 0
end
compression in ["bz2", "bzip2"] && action=="decompress" && begin
compressor = get_compressor(compression, action)
@test compressor == Bzip2DecompressorStream
@test open(compressor, file, "r") do fid
read(fid, typeof(value))
end == value
end
compression in ["gz", "gzip"] && action=="compress" && begin
compressor = get_compressor(compression, action)
@test compressor == GzipCompressorStream
@test open(compressor, file, "w") do fid
write(fid, value)
end > 0
end
compression in ["gz", "gzip"] && action=="decompress" && begin
compressor = get_compressor(compression, action)
@test compressor == GzipDecompressorStream
@test open(compressor, file, "r") do fid
read(fid, typeof(value))
end == value
end
compression == "none" && action in ["compress", "decompress"] &&
@test get_compressor(compression, action) == NoopStream
# No need to test reading/writing
!(compression in KNOWN_COMPRESSIONS) || !(action in KNOWN_ACTIONS) &&
@test_throws ErrorException get_compressor(compression, action)
end
end
end
| DispatcherCache | https://github.com/zgornel/DispatcherCache.jl.git |
|
[
"MIT"
] | 0.1.2 | 46b94ab89b7cfc439e8a34d9022eedcd0897b1fc | code | 15178 | # Useful constants
const REGEX_EXEC_STORE = r"getfield\(DispatcherCache, Symbol\(\"#exec_store_wrapper#[0-9]+\"\)\)"
const REGEX_LOAD = r"getfield\(DispatcherCache, Symbol\(\"#loading_wrapper#[0-9]+\"\)\)"
const TMPDIR = tempdir()
const COMPRESSION = "none"
const EXECUTOR = AsyncExecutor()
using DispatcherCache: get_node, node_hash, load_hashchain
# Useful functions
function get_indexed_result_value(graph, idx; executor=AsyncExecutor())
_result = run!(executor, graph)
idx > 0 && idx <= length(_result) && return fetch(_result[idx].result.value)
return nothing
end
function get_labeled_result_value(graph, label; executor=AsyncExecutor())
_result = run!(executor, graph)
for r in _result
rlabel = r.result.value.label
rval = r.result.value
rlabel == label && return fetch(rval)
end
return nothing
end
function get_result_value(graph; executor=AsyncExecutor())
_result = run!(executor, graph)
return fetch(_result[1].result.value)
end
raw"""
Generates a Dispatcher task graph of the form below,
which will be used as a basis for the functional
testing of the module:
O top(..)
____|____
/ \
d1 O baz(..)
_________|________
/ \
O boo(...) O goo(...)
_______|_______ ____|____
/ | \ / | \
O O O O | O
foo(.) bar(.) baz(.) foo(.) v6 bar(.)
| | | | |
| | | | |
v1 v2 v3 v4 v5
"""
function example_of_dispatch_graph(modifiers=Dict{String,Function}())
# Default functions
_foo(argument) = argument
_bar(argument) = argument + 2
_baz(args...) = sum(args)
_boo(args...) = length(args) + sum(args)
_goo(args...) = sum(args) + 1
_top(argument, argument2) = argument - argument2
# Apply modifiers if any
local foo, bar, baz, boo, goo, top
for fname in ["foo", "bar", "baz", "boo", "goo", "top"]
foo = get(modifiers, "foo", _foo)
bar = get(modifiers, "bar", _bar)
baz = get(modifiers, "baz", _baz)
boo = get(modifiers, "boo", _boo)
goo = get(modifiers, "goo", _goo)
top = get(modifiers, "top", _top)
end
# Graph (for the function definitions above)
v1 = 1
v2 = 2
v3 = 3
v4 = 0
v5 = -1
v6 = -2
d1 = -3
foo1 = @op foo(v1); set_label!(foo1, "foo1")
foo2 = @op foo(v4); set_label!(foo2, "foo2")
bar1 = @op bar(v2); set_label!(bar1, "bar1")
bar2 = @op bar(v5); set_label!(bar2, "bar2")
baz1 = @op baz(v3); set_label!(baz1, "baz1")
boo1 = @op boo(foo1, bar1, baz1); set_label!(boo1, "boo1")
goo1 = @op goo(foo2, bar2, v6); set_label!(goo1, "goo1")
baz2 = @op baz(boo1, goo1); set_label!(baz2, "baz2")
top1 = @op top(d1, baz2); set_label!(top1, "top1")
graph = DispatchGraph(top1)
top_key = "top1"
return graph, top_key
end
@testset "Dispatch graph generation" begin
graph, top_key = example_of_dispatch_graph()
top_key_idx = [i for i in 1:length(graph.nodes)
if graph.nodes[i].label == top_key][1]
@test graph isa DispatchGraph
@test get_indexed_result_value(graph, top_key_idx) == -14
@test get_labeled_result_value(graph, top_key) == -14
end
@testset "First run" begin
mktempdir(TMPDIR) do cachedir
# Make dispatch graph
graph, top_key = example_of_dispatch_graph()
# Get endpoints
endpoints = [get_node(graph, top_key)]
# Add hash cache and update graph
updates = add_hash_cache!(graph, endpoints,
compression=COMPRESSION,
cachedir=cachedir)
# Test that all nodes have been wrapped (EXEC-STORE)
for i in 1:length(graph.nodes)
node = graph.nodes[i]
node isa Op && @test occursin(REGEX_EXEC_STORE, string(node.func))
end
# Run the task graph
@test get_labeled_result_value(graph, top_key) == -14
# Test that files exist
hcfile = joinpath(cachedir, DispatcherCache.DEFAULT_HASHCHAIN_FILENAME)
hcdir = joinpath(cachedir, DispatcherCache.DEFAULT_HASHCACHE_DIR)
@test isfile(hcfile)
@test isdir(hcdir)
hashchain = open(hcfile, "r") do fid
JSON.parse(fid)
end
# Test the hashchain keys
@test hashchain["compression"] == COMPRESSION
@test hashchain["version"] == 1 # dummy test, version not used so far
# Test that each key corresponds to a cache file name
cachefiles = readdir(hcdir)
nodehashes = keys(hashchain["hashchain"])
@test length(nodehashes) == length(cachefiles)
for file in readdir(hcdir)
_hash = split(file, ".")[1]
@test _hash in nodehashes
end
end
end
@testset "Second run" begin
mktempdir(TMPDIR) do cachedir
# Make dispatch graph
graph, top_key = example_of_dispatch_graph()
# Get endpoints
endpoints = [get_node(graph, top_key)]
# Make a first run (generate cache, do not modify graph)
result = run!(EXECUTOR, graph, endpoints,
compression=COMPRESSION,
cachedir=cachedir)
@test fetch(result[1].result.value) == -14
# Add hash cache and update graph
updates = add_hash_cache!(graph, endpoints,
compression=COMPRESSION,
cachedir=cachedir)
# Test that all nodes have been wrapped (LOAD)
for i in 1:length(graph.nodes)
node = graph.nodes[i]
node isa Op && @test occursin(REGEX_LOAD, string(node.func))
end
# Make a second run
@test get_labeled_result_value(graph, top_key) == -14
end
end
@testset "Node changes" begin
mktempdir(TMPDIR) do cachedir
# Make dispatch graph
graph, top_key = example_of_dispatch_graph()
# Get endpoints
endpoints = [get_node(graph, top_key)]
# Make a first run (generate cache, do not modify graph)
result = run!(EXECUTOR, graph, endpoints,
compression=COMPRESSION,
cachedir=cachedir)
@test fetch(result[1].result.value) == -14
# Create altered versions of initial graph
new_top(argument, argument2) = argument - argument2 - 1
g1, _ = example_of_dispatch_graph(Dict("top" => new_top))
g1data = ("top1", ("top1",), -15)
new_goo(args...) = sum(args) + 2
g2, _ = example_of_dispatch_graph(Dict("goo" => new_goo))
g2data = ("goo1", ("baz2", "top1"), -15)
for (graph, (key, impacted_keys, result)) in zip((g1, g2), (g1data, g2data))
# Get endpoints for the new graphs
endpoints = [get_node(graph, top_key)]
# Add hash cache and update graph
updates = add_hash_cache!(graph, endpoints,
compression=COMPRESSION,
cachedir=cachedir)
# Test that impacted nodes are wrapped in EXEC-STORES,
# non impacted ones in LOADS
for i in 1:length(graph.nodes)
node = graph.nodes[i]
node isa Op && !(node.label in impacted_keys) &&
@test occursin(REGEX_LOAD, string(node.func))
node isa Op && node.label in impacted_keys &&
@test occursin(REGEX_EXEC_STORE, string(node.func))
end
# Make a second run
@test get_labeled_result_value(graph, top_key) == result
end
end
end
@testset "Exec only nodes" begin
mktempdir(TMPDIR) do cachedir
EXEC_ONLY_KEY = "boo1"
# Make dispatch graph
graph, top_key = example_of_dispatch_graph()
# Get endpoints
endpoints = [get_node(graph, top_key)]
# Get uncacheable nodes
uncacheable = [get_node(graph, EXEC_ONLY_KEY)]
# Make a first run (generate cache, do not modify graph)
result = run!(EXECUTOR, graph, endpoints,
uncacheable, # node "boo1" is uncachable
compression=COMPRESSION,
cachedir=cachedir)
@test fetch(result[1].result.value) == -14
hashchain = load_hashchain(cachedir)
# Test that node hash is not in hashchain and no cache
# file exists in the cache directory
nh = node_hash(get_node(graph, EXEC_ONLY_KEY), Dict{String, String}())
@test !(nh in keys(hashchain))
@test !(join(nh, ".bin") in
readdir(joinpath(cachedir, DispatcherCache.DEFAULT_HASHCACHE_DIR)))
# Make a new graph that has the "goo1" node modified
new_goo(args...) = sum(args) + 2
new_graph, top_key = example_of_dispatch_graph(Dict("goo"=>new_goo))
# Check the final result:
# The output of node "boo1" is needed at node "baz2"
# because "goo1" was modified. A matching result indicates
# that the "boo1" dependencies were loaded and the node
# executed correctly which is the desired behaviour
# in such cases.
endpoints = [get_node(new_graph, top_key)]
uncacheable = [get_node(new_graph, EXEC_ONLY_KEY)]
result = run!(EXECUTOR, new_graph, endpoints,
uncacheable,
compression=COMPRESSION,
cachedir=cachedir)
@test fetch(result[1].result.value) == -15
end
end
@testset "Cache deletion" begin
mktempdir(TMPDIR) do cachedir
# Make dispatch graph
graph, top_key = example_of_dispatch_graph()
# Get endpoints
endpoints = [get_node(graph, top_key)]
# Make a first run (generate cache, do not modify graph)
result = run!(EXECUTOR, graph, endpoints,
compression=COMPRESSION,
cachedir=cachedir)
# Remove cache
hashcachedir = joinpath(cachedir, DispatcherCache.DEFAULT_HASHCACHE_DIR)
rm(hashcachedir, recursive=true, force=true)
@test !isdir(hashcachedir)
# Make a second run (no hash cache)
result = run!(EXECUTOR, graph, endpoints,
compression=COMPRESSION,
cachedir=cachedir)
@test fetch(result[1].result.value) == -14
end
end
@testset "Identical nodes" begin
v1 = 1
foo(x) = x + 1
bar(x, y) = x + y
foo1 = @op foo(v1); set_label!(foo1, "foo1")
foo2 = @op foo(v1); set_label!(foo1, "foo2")
bar1 = @op bar(foo1, foo2); set_label!(bar1, "bar1")
g = DispatchGraph(bar1)
mktempdir(TMPDIR) do cachedir
hcfile = joinpath(cachedir, DispatcherCache.DEFAULT_HASHCHAIN_FILENAME)
hcdir = joinpath(cachedir, DispatcherCache.DEFAULT_HASHCACHE_DIR)
# Run the first time
result = run!(EXECUTOR, g, ["bar1"],
compression=COMPRESSION,
cachedir=cachedir)
@test fetch(result[1].result.value) == 4
hashchain = load_hashchain(cachedir)
cachefiles = readdir(hcdir)
nodehashes = keys(hashchain)
# Test that 2 hashes / cache files exist (corresponding to "foo" and "bar")
@test length(nodehashes) == length(cachefiles) == 2
# Run the second time
result = run!(EXECUTOR, g, ["bar1"], cachedir=cachedir)
@test fetch(result[1].result.value) == 4
end
end
raw"""
Generates a Dispatcher task graph of the form below,
which will be used as a basis for the functional
testing of the module. This dispatch graph contains
all subtypes of `DispatchNode`.
O top(..)
____|____
/ \
O O
DataNode(1) IndexNode(...,1)
|
O
CollectNode(...)
_________|________
/ \
O foo(...) O bar(...)
| |
| |
v1 v2
"""
function example_of_dispatch_graph_w_mixed_nodes(modifiers=Dict{String,Function}())
# Default functions
_foo(argument) = argument
_bar(argument) = argument + 2
_top(argument, argument2) = argument - argument2
# Apply modifiers if any
local foo, bar, top
for fname in ["foo", "bar", "top"]
foo = get(modifiers, "foo", _foo)
bar = get(modifiers, "bar", _bar)
top = get(modifiers, "top", _top)
end
# Graph (for the function definitions above)
v1 = 1
v2 = 2
foo1 = @op foo(v1); set_label!(foo1, "foo1")
bar1 = @op bar(v2); set_label!(bar1, "bar1")
d1 = DataNode(1)
col1 = CollectNode([foo1, bar1])
idx1 = IndexNode(col1, 1)
top1 = @op top(d1, idx1); set_label!(top1, "top1")
graph = DispatchGraph(top1)
return graph, top1
end
@testset "Dispatch graph generation (mixed nodes)" begin
graph, top_node = example_of_dispatch_graph_w_mixed_nodes()
@test graph isa DispatchGraph
@test get_result_value(graph) == 0
end
@testset "First run (mixed nodes)" begin
mktempdir(TMPDIR) do cachedir
# Make dispatch graph
graph, top_node = example_of_dispatch_graph_w_mixed_nodes()
# Get endpoints
endpoints = [get_node(graph, top_node)]
# Add hash cache and update graph
updates = add_hash_cache!(graph, endpoints,
compression=COMPRESSION,
cachedir=cachedir)
# Test that all nodes have been wrapped (EXEC-STORE)
for i in 1:length(graph.nodes)
node = graph.nodes[i]
node isa Op && @test occursin(REGEX_EXEC_STORE, string(node.func))
end
# Run the task graph
@test get_result_value(graph) == 0
# The other checks are omitted
# ...
end
end
@testset "Second run" begin
mktempdir(TMPDIR) do cachedir
# Make dispatch graph
graph, top_node = example_of_dispatch_graph_w_mixed_nodes()
# Get endpoints
endpoints = [get_node(graph, top_node)]
# Make a first run (generate cache, do not modify graph)
result = run!(EXECUTOR, graph, endpoints,
compression=COMPRESSION,
cachedir=cachedir)
@test fetch(result[1].result.value) == 0
# Add hash cache and update graph
updates = add_hash_cache!(graph, endpoints,
compression=COMPRESSION,
cachedir=cachedir)
# Test that all nodes have been wrapped (LOAD)
for i in 1:length(graph.nodes)
node = graph.nodes[i]
node isa Op && @test occursin(REGEX_LOAD, string(node.func))
end
# Make a second run
@test get_result_value(graph) == 0
end
end
| DispatcherCache | https://github.com/zgornel/DispatcherCache.jl.git |
|
[
"MIT"
] | 0.1.2 | 46b94ab89b7cfc439e8a34d9022eedcd0897b1fc | code | 1214 | using DispatcherCache: node_hash, source_hash, arg_hash, dep_hash, __hash
@testset "Hashing" begin
# Low level
v = 1
@test __hash(v) == string(hash(hash(typeof(v)) + hash(v)), base=16)
# Generate some functions and ops
foo(x, y) = x + y
bar(x) = x
another_foo(x, y) = begin
# some comment
x+y
end
yet_another_foo(x,y) = x + y + 1 - 1
bar1 = @op bar(10)
foo1 = @op foo(bar1, 10)
foo2 = @op another_foo(1, 2)
foo3 = @op yet_another_foo(1, 2)
# Source
@test source_hash(foo1) == source_hash(foo2)
@test source_hash(foo1) != source_hash(foo3)
# Arguments
@test arg_hash(foo2) == arg_hash(foo3)
@test arg_hash(foo1) == arg_hash(bar1)
@test arg_hash(bar1) == __hash(hash(nothing) + hash(10) + hash(Int))
# Dependencies
k2h = Dict{Op, String}()
hash_bar1, _ = node_hash(bar1, k2h)
@test dep_hash(foo1, k2h) == __hash(__hash(nothing) * hash_bar1)
# Entire node
h_src = source_hash(foo1)
h_arg = arg_hash(foo1)
h_dep = dep_hash(foo1, k2h)
@test node_hash(foo1, k2h) ==
(__hash(join(h_src, h_arg, h_dep)),
Dict("code" => h_src, "args" => h_arg, "deps" => h_dep))
end
| DispatcherCache | https://github.com/zgornel/DispatcherCache.jl.git |
|
[
"MIT"
] | 0.1.2 | 46b94ab89b7cfc439e8a34d9022eedcd0897b1fc | code | 296 | using Test
using Dispatcher
using Memento
using TranscodingStreams
using CodecBzip2
using CodecZlib
using JSON
using DispatcherCache
# Set Dispatcher logging level to warning
setlevel!(getlogger("Dispatcher"), "warn")
# Run tests
include("compression.jl")
include("hash.jl")
include("core.jl")
| DispatcherCache | https://github.com/zgornel/DispatcherCache.jl.git |
|
[
"MIT"
] | 0.1.2 | 46b94ab89b7cfc439e8a34d9022eedcd0897b1fc | docs | 202 | ## DispatcherCache Release Notes
v0.1.2
------
- Urgent bugfix release
v0.1.1
------
- Added caching support for all DispatchNodes
- Minor refactoring and bugfixes
v0.1.0
------
- Initial release
| DispatcherCache | https://github.com/zgornel/DispatcherCache.jl.git |
|
[
"MIT"
] | 0.1.2 | 46b94ab89b7cfc439e8a34d9022eedcd0897b1fc | docs | 1777 | # DispatcherCache.jl
A task persistency mechanism based on hash-graphs for [Dispatcher.jl](https://github.com/invenia/Dispatcher.jl). Based on [graphchain](https://github.com/radix-ai/graphchain), [(commit baa1c3f)](https://github.com/radix-ai/graphchain/tree/baa1c3fa94da86bd6e495c64fe63c12b36d50a1a).
[](LICENSE.md)
[](https://travis-ci.org/zgornel/DispatcherCache.jl)
[](https://coveralls.io/github/zgornel/DispatcherCache.jl?branch=master)
[](https://zgornel.github.io/DispatcherCache.jl/stable)
[](https://zgornel.github.io/DispatcherCache.jl/dev)
## Installation
```bash
git clone "https://zgornel.github.com/DispatcherCache.jl"
```
or, from inside Julia,
```
] add DispatcherCache
```
and for the latest `master` branch,
```
] add https://github.com/zgornel/DispatcherCache.jl#master
```
## Features
To keep track with the latest features, please consult [NEWS.md](https://github.com/zgornel/DispatcherCache.jl/blob/master/NEWS.md) and the [documentation](https://zgornel.github.io/DispatcherCache.jl/dev).
## License
This code has an MIT license and therefore it is free.
## Reporting Bugs
Please [file an issue](https://github.com/zgornel/DispatcherCache.jl/issues/new) to report a bug or request a feature.
## References
[1] [Dispatcher.jl documentation](https://invenia.github.io/Dispatcher.jl/stable/)
[2] [Graphchain documentation](https://graphchain.readthedocs.io/en/latest/)
| DispatcherCache | https://github.com/zgornel/DispatcherCache.jl.git |
|
[
"MIT"
] | 0.1.2 | 46b94ab89b7cfc439e8a34d9022eedcd0897b1fc | docs | 60 | ```@index
```
```@autodocs
Modules = [DispatcherCache]
```
| DispatcherCache | https://github.com/zgornel/DispatcherCache.jl.git |
|
[
"MIT"
] | 0.1.2 | 46b94ab89b7cfc439e8a34d9022eedcd0897b1fc | docs | 2780 | # Usage examples
The following examples will attempt to illustrate the basic functionality of the package and how it can be employed to speed up computationally demanding processing pipelines. Although toy problems are being used, it should be straightforward to apply the concepts illustrated below to real-world applications. More subtle properties of the caching mechanism are exemplified in the [unit tests](https://github.com/zgornel/DispatcherCache.jl/blob/master/test/core.jl) of the package.
## Basics
Let us begin by defining a simple computational task graph with three nodes
```@repl index
using Dispatcher, DispatcherCache
# Some functions
foo(x) = begin sleep(3); x end;
bar(x) = begin sleep(3); x+1 end;
baz(x,y) = begin sleep(2); x-y end;
op1 = @op foo(1);
op2 = @op bar(2);
op3 = @op baz(op1, op2);
G = DispatchGraph(op3)
```
Once the dispatch graph `G` is defined, one can calculate the result for any of the nodes contained in it. For example, for the top or _leaf_ node `op3`,
```@repl index
extract(r) = fetch(r[1].result.value); # gets directly the result value
result = run!(AsyncExecutor(), G); # automatically runs op3
println("result (normal run) = $(extract(result))")
```
Using the `DispatcherCache` `run!` method caches all intermediary node outputs to a specified directory
```@repl index
cachedir = mktempdir() # cache temporary directory
@time result = run!(AsyncExecutor(), G, [op3], cachedir=cachedir);
println("result (caching run) = $(extract(result))")
```
!!! note
The `run!` method with caching support needs explicit specification of the output nodes (the `Dispatcher` one directly executes the leaf nodes of the graph). Through this, one may choose to cache only a subgraph of the full dispatch graph.
After the first _cached_ run, one can verify that the cache-related files exist on disk
```@repl index
readdir(cachedir)
readdir(joinpath(cachedir, "cache"))
```
Running the computation a second time loads the last cached result instead of recomputing it, which is noticeable in the reduced execution time.
```@repl index
@time result = run!(AsyncExecutor(), G, [op3], cachedir=cachedir);
println("result (cached run) = $(extract(result))")
```
The cache can be cleaned up by simply removing the cache directory.
```@repl index
rm(cachedir, recursive=true, force=true)
```
If the cache does not exist anymore, a new call of `run!(::Executor, G, [op3], cachedir=cachedir)` will re-create the cache by running each node.
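As a sketch, reusing the objects defined above, re-creating the cache and then benefiting from it again looks like this:
```julia
cachedir = mktempdir()                                               # a fresh, empty cache
@time result = run!(AsyncExecutor(), G, [op3], cachedir=cachedir);   # executes every node again
@time result = run!(AsyncExecutor(), G, [op3], cachedir=cachedir);   # now loads from the new cache
println("result = $(extract(result))")
```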
!!! note
In the examples above, the functions `foo`, `bar` and `baz` use the `sleep` function to simulate longer running computations. This is useful both to illustrate the concept presented and to overcome the pre-compilation overhead that occurs when calling the `run!` method.
| DispatcherCache | https://github.com/zgornel/DispatcherCache.jl.git |
|
[
"MIT"
] | 0.1.2 | 46b94ab89b7cfc439e8a34d9022eedcd0897b1fc | docs | 1743 | ```@meta
CurrentModule=DispatcherCache
```
# Introduction
DispatcherCache is a task persistency mechanism for [Dispatcher.jl](https://github.com/invenia/Dispatcher.jl) computational task graphs. It is based on [graphchain](https://github.com/radix-ai/graphchain) which is a caching mechanism for [Dask](https://dask.org) task graphs.
## Motivation
[Dispatcher.jl](https://github.com/invenia/Dispatcher.jl) represents a convenient way of organizing, i.e. scheduling, complex computational workflows for asynchronous or parallel execution. Running the same workflow multiple times is not uncommon, albeit with varying input parameters or data. Depending on the type of tasks being executed, in many cases some of the tasks remain unchanged between distinct runs: the same function is being called on identical input arguments.
`DispatcherCache` provides a way of re-using the output of the nodes in the dispatch graph, as long as their state did not change to some unobserved one. By 'state' one understands the source code, arguments and node dependencies associated with the nodes. If the state of some node does change between two consecutive executions of the graph, only that node and the nodes impacted downstream (towards the leaves of the graph) are actually executed. The nodes whose state did not change are in effect (though not in practice) pruned from the graph, with the exception of the ones that are dependencies of nodes that have to be re-executed due to a state change.
## Installation
In the shell of choice, using
```
$ git clone https://github.com/zgornel/DispatcherCache.jl
```
or, inside Julia
```
] add DispatcherCache
```
and for the latest `master` branch,
```
] add https://github.com/zgornel/DispatcherCache.jl#master
```
| DispatcherCache | https://github.com/zgornel/DispatcherCache.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | code | 1093 | using Documenter
using PooksoftAlphaVantageDataStore
makedocs(sitename="Pooksoft",
pages = [
"index.md",
"Utility" =>[
"utility.md"
],
"Stock Time Series (STS)" => [
"execute_sts_daily_api_call.md",
"execute_sts_weekly_api_call.md",
"execute_sts_adjusted_weekly_api_call.md",
"execute_sts_monthly_api_call.md",
"execute_sts_adjusted_monthly_api_call.md",
"execute_quote_api_call.md",
"execute_search_api_call.md"
],
"Technical Indicators (TI)" => [
"execute_ti_simple_moving_average_api_call.md",
"execute_ti_exponential_moving_average_api_call.md",
"execute_ti_rsi_api_call.md"
]
]
)
# Deploy -
# Documenter can also automatically deploy documentation to gh-pages.
# See "Hosting Documentation" and deploydocs() in the Documenter manual
# for more information.
deploydocs(
repo = "github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git",
devbranch = "master",
devurl = "dev",
) | PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | code | 1650 | """
_include_my_codes(base_path::String, code_file_array::Array{String,1})
Function to include Julia files in a directory
"""
function _include_my_codes(base_path::String, code_file_array::Array{String,1})
for code_file in code_file_array
path_to_code = joinpath(base_path,code_file)
include(path_to_code)
end
end
# define constants here -
const alphavantage_api_url_string = "https://www.alphavantage.co/query"
const path_to_package = dirname(pathof(@__MODULE__))
const _PATH_TO_BASE = joinpath(path_to_package,"base")
const _PATH_TO_STS = joinpath(path_to_package, "sts")
const _PATH_TO_TI = joinpath(path_to_package, "ti")
const _PATH_TO_CRYPTO = joinpath(path_to_package, "crypto")
const _PATH_TO_FUNDAMENTALS = joinpath(path_to_package, "fundamentals")
# load official packages here -
using DataFrames
using CSV
using HTTP
using JSON
using Dates
using Logging
using Reexport
@reexport using PooksoftBase
# need to update this syntax -
# load my base codes -
my_base_codes = [
"Types.jl", "Network.jl", "User.jl", "Checks.jl",
"Log.jl", "Handlers.jl", "Filesystem.jl", "Datastore.jl", "General.jl"
];
_include_my_codes(_PATH_TO_BASE, my_base_codes)
# stock time series -
my_sts_codes = [
"STSDaily.jl", "STSWeekly.jl", "STSMonthly.jl", "Quote.jl", "Search.jl", "STSIntraday.jl"
]
_include_my_codes(_PATH_TO_STS, my_sts_codes)
# technical indicators -
my_ti_codes = [
"SMA.jl", "EMA.jl", "RSI.jl"
]
_include_my_codes(_PATH_TO_TI, my_ti_codes)
# fundamentals -
my_fundamental_codes = [
"Overview.jl", "Earnings.jl", "Income.jl"
]
_include_my_codes(_PATH_TO_FUNDAMENTALS, my_fundamental_codes) | PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | code | 1034 | module PooksoftAlphaVantageDataStore
# include -
include("Include.jl")
# export functions -
# low-level functions -
export log_api_call
export execute_sts_intraday_api_call
export execute_sts_daily_api_call
export execute_sts_adjusted_daily_api_call
export execute_sts_weekly_api_call
export execute_sts_adjusted_weekly_api_call
export execute_sts_monthly_api_call
export execute_sts_adjusted_monthly_api_call
export execute_sts_quote_api_call
export execute_sts_search_api_call
export execute_simple_moving_average_api_call
export execute_exponential_moving_average_api_call
export execute_relative_strength_index_api_call
export execute_company_overview_api_call
export execute_company_earnings_api_call
export execute_company_income_statement_api_call
# high-level functions -
export build_api_user_model
export build_datastore_apicall_model
export execute_api_call
# read/write -
export write_data_to_filestore
export read_data_from_filestore
# export types -
export PSUserModel
export PSDataStoreAPICallModel
end # module
| PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | code | 3100 | function is_path_valid(path_to_file::String)::PSResult{Bool}
# the config file should be a json file, and should be reachable -
# TODO: fill me in ...
return PSResult{Bool}(true)
end
function is_string_empty(raw_string::String)::PSResult{Bool}
# if we have an empty string - return true
if isempty(raw_string) == true
return PSResult{Bool}(true)
end
# default return is false -
return PSResult{Bool}(false)
end
function check_missing_api_key(user_model::PSUserModel)::(Union{T, Nothing} where T<:Any)
# do we have the alpha_vantage_api_key -
if (hasfield(PSUserModel, :alphavantage_api_key) == false)
# throw -
return PSResult{PSError}(PSError("user model is missing API key information"))
end
# get the key -
api_key = user_model.alphavantage_api_key
# check -
if (isempty(api_key) == true)
# formulate an error message -
error_message = "the API key is empty in the user model"
# throw -
return PSResult{PSError}(PSError(error_message))
end
# ok -
return nothing
end
function check_missing_symbol(stock_symbol::String)::(Union{T, Nothing} where T<:Any)
if (isempty(stock_symbol) == true)
# formulate an error message -
error_message = "missing stock symbol"
# throw -
return PSResult{PSError}(PSError(error_message))
end
# return nothing -
return nothing
end
function check_json_api_return_data(api_call_raw_data::String)::(Union{T, Nothing} where T<:Any)
# well formed JSON?
if is_valid_json(api_call_raw_data).value == false
return PSError("invalid JSON $(api_call_raw_data)")
end
# need to check to see if legit data is coming back from the service -
api_data_dictionary = JSON.parse(api_call_raw_data)
if (haskey(api_data_dictionary,"Error Message") == true)
# grab the error message -
error_message = api_data_dictionary["Error Message"]
# throw -
return PSResult{PSError}(PSError(error_message))
end
# need to check - are we hitting the API call limit?
if (haskey(api_data_dictionary,"Note") == true)
# grab the error message -
error_message = api_data_dictionary["Note"]
# throw -
return PSResult{PSError}(PSError(error_message))
end
# default -
return nothing
end
function check_user_model(user_model::PSUserModel)::(Union{T, Nothing} where T<:Any)
return PSResult{Bool}(true)
end
"""
is_valid_json(raw_string) -> PSResult{Bool}
Checks whether the input string is a valid JSON structure.
Returns a `PSResult{Bool}` wrapping `true` for valid JSON and `false` otherwise.
"""
function is_valid_json(raw_string::String)::PSResult{Bool}
# check: do we have an empty string?
if (is_string_empty(raw_string).value == true)
return PSResult{Bool}(false)
end
# otherwise, to check to see if the string is valid JSON, try to
# parse it.
try
JSON.parse(raw_string)
return PSResult{Bool}(true)
catch
return PSResult{Bool}(false)
end
end
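# For example (illustrative values): is_valid_json("{\"price\": 1.0}").value is true,
# while is_valid_json("not json").value and is_valid_json("").value are both false.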
| PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | code | 1931 | function build_datastore_apicall_model(callFunction::Function, tickerSymbol::String;
output::Symbol = :compact, datatype::Symbol = :json)::PSResult
# TODO: Checks -
# initialize -
output_type_set = Set{Symbol}()
datatype_set = Set{Symbol}()
# check output types -
push!(output_type_set,:compact)
push!(output_type_set,:full)
if (in(output,output_type_set) == false)
return PSResult(ArgumentError("Incompatible output type. Expected {:compact,:full} but received $(string(output))"))
end
# check datatype -
push!(datatype_set, :json)
push!(datatype_set, :csv)
if (in(datatype,datatype_set) == false)
return PSResult(ArgumentError("Incompatible data type. Expected {:json,:csv} but received $(string(output))"))
end
# check the tickerSymbol -
if (isa(tryparse(Float64,tickerSymbol), Number) == true)
return PSResult(ArgumentError("Incompatible ticker symbol type"))
end
# TODO: how do we check if we have a specific function?
# get stuff -
parameter_object = PSDataStoreAPICallModel(callFunction, tickerSymbol; dataType=datatype, output=output)
# return -
return PSResult(parameter_object)
end
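# A minimal usage sketch (not part of the original package; the configuration path
# and ticker below are made-up examples). `execute_api_call` is defined later in
# this file and `execute_sts_daily_api_call` in STSDaily.jl.
function _example_datastore_api_call()
    user_model = build_api_user_model("path/to/config.json").value
    api_model = build_datastore_apicall_model(execute_sts_daily_api_call, "MSFT";
        output = :compact, datatype = :json).value
    return execute_api_call(user_model, api_model)
end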
"""
execute_api_call(user::PSUserModel, api::PSDataStoreAPICallModel; logger::Union{Nothing,AbstractLogger} = nothing)::PSResult
High-level convenience function to execute a data store application programming interface (API) call.
# Arguments
- `user::PSUserModel`: user model holding the AlphaVantage API key and email.
- `api::PSDataStoreAPICallModel`: model describing the call (ticker, data type, output size, and the API function to invoke).
- `logger`: optional `AbstractLogger` passed through to the underlying API call.
"""
function execute_api_call(usermodel::PSUserModel, apimodel::PSDataStoreAPICallModel;
logger::Union{Nothing,AbstractLogger} = nothing)::PSResult
# get stuff from the parameters -
tickerSymbol = apimodel.ticker
outputsize = apimodel.outputsize
datatype = apimodel.dataType
apicall = apimodel.apicall
# make the call -
return apicall(usermodel,tickerSymbol; data_type = datatype, outputsize = outputsize, logger = logger)
end | PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | code | 1024 | function write_data_to_filestore(base_repository_file_path::String, function_call::Symbol, asset_symbol::String, date_string::String, data_frame::DataFrame)::String
# -
# checks go here ...
# -
# build a path into the data repository -
function_call_string = string(function_call)
final_repo_path = "$(base_repository_file_path)/$(function_call_string)/$(asset_symbol)/$(date_string)"
final_repo_path_with_file_name = "$(base_repository_file_path)/$(function_call_string)/$(asset_symbol)/$(date_string)/data.csv"
# check - do we have the repo path directory structure?
if (isdir(final_repo_path) == false)
mkpath(final_repo_path)
end
# write file to disk -
CSV.write(final_repo_path_with_file_name,data_frame)
# return -
return final_repo_path_with_file_name
end
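# A minimal usage sketch (not part of the original package; the repository path,
# symbol and date below are made-up examples). `read_data_from_filestore` is
# defined next in this file.
function _example_filestore_roundtrip(price_data::DataFrame)
    path = write_data_to_filestore("/tmp/price_repo", :TIME_SERIES_DAILY, "MSFT",
        "2021-01-04", price_data)
    return read_data_from_filestore(path)
end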
function read_data_from_filestore(repository_file_path::String)::DataFrame
# -
# checks go here ...
# -
# read file from repo -
return CSV.read(repository_file_path, DataFrame)
end | PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | code | 229 | function check_for_numerical_none_value(value::String)::Union{Missing,Float64}
return_value = missing
if (value != "None")
return_value = parse(Float64, value)
end
# return -
return return_value
end | PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | code | 35647 |
function process_raw_csv_api_data(api_call_raw_data::String)::PSResult
# check: do we have an empty string?
# check: legit string?
# need to check to see if legit data is coming back from the service -
if (is_valid_json(api_call_raw_data).value == true)
api_data_dictionary = JSON.parse(api_call_raw_data)
if (haskey(api_data_dictionary,"Error Message") == true)
# grab the error message -
error_message = api_data_dictionary["Error Message"]
# throw -
return PSResult{PSError}(PSError(error_message))
else
# formulate an error message -
error_message = "Error: CSV type returns JSON without error message"
# throw -
return PSResult{PSError}(PSError(error_message))
end
end
# create a data table from the CSV data -
tmp_data_table = CSV.read(IOBuffer(api_call_raw_data), DataFrame)
# sort the table according to the timestamps -
idx_sort = sortperm(tmp_data_table[:,1])
# create a sorted data table -
sorted_data_table = tmp_data_table[idx_sort,:]
# return the sorted table -
return PSResult{DataFrame}(sorted_data_table)
end
function process_raw_json_api_data_sts(api_call_raw_data::String, data_series_key::String)::(Union{PSResult{T}, Nothing} where T<:Any)
# is the data coming back well formed, and does it contain valid data?
check_result = check_json_api_return_data(api_call_raw_data)
if check_result != nothing
return check_result
end
# TODO - check for missing data series key -
# ...
# if we get here, we have valid JSON. Build dictionary -
api_data_dictionary = JSON.parse(api_call_raw_data)
# grab the time series data -
time_series_key = data_series_key
if (haskey(api_data_dictionary, time_series_key) == false)
# throw an error -
error_message = "Error: Missing the Time series key = $(time_series_key)"
# throw -
return PSError(error_message)
end
# array of keys -
data_key_label_array = ["1. open", "2. high", "3. low", "4. close", "5. volume"]
number_of_fields = length(data_key_label_array)
# initialize storage for the fields -
timestamp_array = Dates.Date[]
open_price_array = Float64[]
high_price_array = Float64[]
low_price_array = Float64[]
close_price_array = Float64[]
volume_array = Int64[]
# ok, we have the time series key, go through the data and load into the table -
time_series_dictionary = api_data_dictionary[time_series_key]
time_series_key_array = collect(keys(time_series_dictionary))
for timestamp_value in time_series_key_array
# get the local_dictionary -
local_dictionary = time_series_dictionary[timestamp_value]
# cache -
push!(timestamp_array, Dates.Date(timestamp_value,"yyyy-mm-dd"))
# add the price data -
for key_index = 1:number_of_fields
# grab key -
local_key = data_key_label_array[key_index]
value = local_dictionary[local_key]
# populate the array's -
if (key_index == 1)
push!(open_price_array, parse(Float64, value))
elseif (key_index == 2)
push!(high_price_array, parse(Float64, value))
elseif (key_index == 3)
push!(low_price_array, parse(Float64, value))
elseif (key_index == 4)
push!(close_price_array, parse(Float64, value))
else
push!(volume_array, parse(Int64, value))
end
end
end
# we need to sort the timestamps, to make them in reverse order -
idx_sort = sortperm(timestamp_array)
# build the data frame -
data_frame = DataFrame(timestamp=timestamp_array[idx_sort], open=open_price_array[idx_sort], high=high_price_array[idx_sort], low=low_price_array[idx_sort], close=close_price_array[idx_sort], volume=volume_array[idx_sort])
# return the data back to the caller -
return PSResult{DataFrame}(data_frame)
end
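# For reference, the JSON this handler expects has roughly the following shape,
# inferred from the keys parsed above (the date and numbers are made-up examples):
#
#   {
#     "Time Series (Daily)": {
#       "2021-01-04": {
#         "1. open": "133.52", "2. high": "133.61", "3. low": "126.76",
#         "4. close": "129.41", "5. volume": "143301900"
#       },
#       ...
#     }
#   }
#
# `data_series_key` selects the top-level block, and each date maps to the five
# numbered fields that are pushed into the DataFrame.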
function process_raw_json_api_data_sts_adjusted(api_call_raw_data::String, data_series_key::String)::(Union{PSResult{T}, Nothing} where T<:Any)
# is the data coming back well formed, and does it contain valid data?
check_result = check_json_api_return_data(api_call_raw_data)
if check_result !== nothing
return check_result
end
# if we get here, we have valid JSON. Build dictionary -
api_data_dictionary = JSON.parse(api_call_raw_data)
# grab the time series data -
time_series_key = data_series_key
if (haskey(api_data_dictionary, time_series_key) == false)
# throw an error -
error_message = "Error: Missing the Time series key = $(time_series_key)"
# throw -
return PSError(error_message)
end
# all repeated up to here; consider factoring the checks above into a shared helper function
# array of keys -- this is changed per function
data_key_label_array = ["1. open", "2. high", "3. low", "4. close", "5. adjusted close", "6. volume", "7. dividend amount"]
number_of_fields = length(data_key_label_array)
#initialize arrays to hold these fields for all dates
timestamp_array = Dates.Date[]
open_price_array = Float64[]
high_price_array = Float64[]
low_price_array = Float64[]
close_price_array = Float64[]
adjusted_close_array = Float64[]
volume_array = Int64[]
dividend_amount_array = Float64[]
# ok, we have the time series key, go through the data and load into the table -
time_series_dictionary = api_data_dictionary[time_series_key]
time_series_key_array = collect(keys(time_series_dictionary))
for timestamp_value in time_series_key_array
# get the local_dictionary -
local_dictionary = time_series_dictionary[timestamp_value]
# cache -
push!(timestamp_array, Dates.Date(timestamp_value,"yyyy-mm-dd"))
# add the price data -
for key_index = 1:number_of_fields
# grab key -
local_key = data_key_label_array[key_index]
value = local_dictionary[local_key]
# populate the array's -
if (key_index == 1)
push!(open_price_array, parse(Float64, value))
elseif (key_index == 2)
push!(high_price_array, parse(Float64, value))
elseif (key_index == 3)
push!(low_price_array, parse(Float64, value))
elseif (key_index == 4)
push!(close_price_array, parse(Float64, value))
elseif (key_index == 5)
push!(adjusted_close_array, parse(Float64, value))
elseif (key_index == 6)
push!(volume_array, parse(Int64, value))
else
push!(dividend_amount_array, parse(Float64, value))
end
end
end
# we need to sort the timestamps, to make them in reverse order -
idx_sort = sortperm(timestamp_array)
# build the data frame -
data_frame = DataFrame(timestamp=timestamp_array[idx_sort], open=open_price_array[idx_sort], high=high_price_array[idx_sort], low=low_price_array[idx_sort], close=close_price_array[idx_sort], adjusted_close = adjusted_close_array[idx_sort], volume=volume_array[idx_sort], dividend_amount = dividend_amount_array[idx_sort])
# return the data back to the caller -
return PSResult{DataFrame}(data_frame)
end
function process_raw_json_sts_intraday_data(api_call_raw_data::String)::PSResult
# if we get here, we have valid JSON. Build dictionary -
api_data_dictionary = JSON.parse(api_call_raw_data)
overall_intraday_data_dictionary = Dict{String,Any}()
# grab the Meta Data block -
keys_metadata_block = [
"1. Information",
"2. Symbol",
"3. Last Refreshed",
"4. Interval",
"5. Output Size",
"6. Time Zone"
]
metadata_block_dictionary = api_data_dictionary["Meta Data"]
interval_value = metadata_block_dictionary["4. Interval"]
# process the intraday data dictionary -
intraday_data_keys = [
"1. open", "2. high", "3. low", "4. close", "5. volume"
]
# setup date formatter -
F = DateFormat("yyyy-mm-dd HH:MM:SS")
# setup the data frame -
intraday_dataframe = DataFrame(timestamp=Dates.DateTime[],open=Union{Missing,Float64}[],high=Union{Missing,Float64}[],
low=Union{Missing,Float64}[],close=Union{Missing,Float64}[],volume=Union{Missing,Int64}[])
data_array_key_string = "Time Series ($(interval_value))"
list_of_intraday_dictionaries = api_data_dictionary[data_array_key_string]
list_of_timestamp_keys = keys(list_of_intraday_dictionaries)
for intraday_timestamp_key in list_of_timestamp_keys
# create an empty row -
data_row = Array{Union{DateTime,Float64},1}()
# format the timestamp -
timestamp_value = DateTime(intraday_timestamp_key, F)
push!(data_row, timestamp_value)
# from the intraday timestamp => get the data dictionary -
intraday_dictionary = list_of_intraday_dictionaries[intraday_timestamp_key]
for intraday_data_key in intraday_data_keys
data_value = intraday_dictionary[intraday_data_key]
push!(data_row, parse(Float64,data_value))
end
# push into the data frame -
push!(intraday_dataframe, tuple(data_row...))
end
# lets sort by timestamp -
timestamp_array = intraday_dataframe[:,:timestamp]
idx_sort = sortperm(timestamp_array)
sorted_intraday_dataframe = intraday_dataframe[idx_sort,:]
# package -
overall_intraday_data_dictionary["Meta Data"] = metadata_block_dictionary
overall_intraday_data_dictionary[data_array_key_string] = sorted_intraday_dataframe
# return -
return PSResult(overall_intraday_data_dictionary)
end
function process_raw_json_api_data_sts_daily_adjusted(api_call_raw_data::String, data_series_key::String)::(Union{PSResult{T}, Nothing} where T<:Any)
# is the data coming back well formed, and does it contain valid data?
check_result = check_json_api_return_data(api_call_raw_data)
if check_result !== nothing
return check_result
end
# if we get here, we have valid JSON. Build dictionary -
api_data_dictionary = JSON.parse(api_call_raw_data)
# grab the time series data -
time_series_key = data_series_key
if (haskey(api_data_dictionary, time_series_key) == false)
# throw an error -
error_message = "Error: Missing the Time series key = $(time_series_key)"
# throw -
return PSError(error_message)
end
# all repeated up to here; consider factoring the checks above into a shared helper function
# array of keys -- this is changed per function
data_key_label_array = ["1. open", "2. high", "3. low", "4. close", "5. adjusted close", "6. volume", "7. dividend amount", "8. split coefficient"]
number_of_fields = length(data_key_label_array)
#initialize arrays to hold these fields for all dates
timestamp_array = Dates.Date[]
open_price_array = Float64[]
high_price_array = Float64[]
low_price_array = Float64[]
close_price_array = Float64[]
adjusted_close_array = Float64[]
volume_array = Int64[]
dividend_amount_array = Float64[]
split_coefficient = Float64[]
# ok, we have the time series key, go through the data and load into the table -
time_series_dictionary = api_data_dictionary[time_series_key]
time_series_key_array = collect(keys(time_series_dictionary))
for timestamp_value in time_series_key_array
# get the local_dictionary -
local_dictionary = time_series_dictionary[timestamp_value]
# cache -
push!(timestamp_array, Dates.Date(timestamp_value,"yyyy-mm-dd"))
# add the price data -
for key_index = 1:number_of_fields
# grab key -
local_key = data_key_label_array[key_index]
value = local_dictionary[local_key]
# populate the array's -
if (key_index == 1)
push!(open_price_array, parse(Float64, value))
elseif (key_index == 2)
push!(high_price_array, parse(Float64, value))
elseif (key_index == 3)
push!(low_price_array, parse(Float64, value))
elseif (key_index == 4)
push!(close_price_array, parse(Float64, value))
elseif (key_index == 5)
push!(adjusted_close_array, parse(Float64, value))
elseif (key_index == 6)
push!(volume_array, parse(Int64, value))
elseif (key_index == 7)
push!(dividend_amount_array, parse(Float64, value))
else
push!(split_coefficient, parse(Float64, value))
end
end
end
# we need to sort the timestamps, to make them in reverse order -
idx_sort = sortperm(timestamp_array)
# build the data frame -
data_frame = DataFrame(timestamp=timestamp_array[idx_sort], open=open_price_array[idx_sort], high=high_price_array[idx_sort], low=low_price_array[idx_sort], close=close_price_array[idx_sort], adjusted_close = adjusted_close_array[idx_sort], volume=volume_array[idx_sort], dividend_amount = dividend_amount_array[idx_sort], split_coefficient = split_coefficient[idx_sort])
# return the data back to the caller -
return PSResult{DataFrame}(data_frame)
end
function process_raw_json_data_sts_global_quote(api_call_raw_data::String, data_series_key::String)::(Union{PSResult{T}, Nothing} where T<:Any)
# is the data coming back well formed, and does it contain valid data?
check_result = check_json_api_return_data(api_call_raw_data)
if check_result !== nothing
return check_result
end
# if we get here, we have valid JSON. Build dictionary -
api_data_dictionary = JSON.parse(api_call_raw_data)
# grab the time series data -
time_series_key = data_series_key
if (haskey(api_data_dictionary, time_series_key) == false)
# throw an error -
error_message = "Error: Missing the Time series key = $(time_series_key)"
# throw -
return PSError(error_message)
end
# all repeated up to here; consider factoring the checks above into a shared helper function
# array of keys -- this is changed per function
data_key_label_array = ["01. symbol", "02. open", "03. high", "04. low", "05. price", "06. volume", "07. latest trading day", "08. previous close", "09. change", "10. change percent"]
number_of_fields = length(data_key_label_array)
#initialize arrays to hold these fields for all dates
symbol_array = String[]
open_price_array = Float64[]
high_price_array = Float64[]
low_price_array = Float64[]
price_array = Float64[]
volume_array = Int64[]
latest_trading_day = Dates.Date[]
previous_close_array = Float64[]
change_array = Float64[]
change_percentage_array = String[]
# ok, we have the time series key, go through the data and load into the table -
data_dictionary = api_data_dictionary[time_series_key]
for key_index = 1:number_of_fields
# grab key -
local_key = data_key_label_array[key_index]
value = data_dictionary[local_key]
# populate the array's -
if (key_index == 1)
push!(symbol_array, value)
elseif (key_index == 2)
push!(open_price_array, parse(Float64, value))
elseif (key_index == 3)
push!(high_price_array, parse(Float64, value))
elseif (key_index == 4)
push!(low_price_array, parse(Float64, value))
elseif (key_index == 5)
push!(price_array, parse(Float64, value))
elseif (key_index == 6)
push!(volume_array, parse(Int64, value))
elseif (key_index == 7)
push!(latest_trading_day, Dates.Date(value,"yyyy-mm-dd"))
elseif (key_index == 8)
push!(previous_close_array, parse(Float64, value))
elseif (key_index == 9)
push!(change_array, parse(Float64, value))
else
push!(change_percentage_array, value)
end
end
# build the data frame -
data_frame = DataFrame(symbol=symbol_array, open=open_price_array, high=high_price_array, low=low_price_array, price=price_array,
volume=volume_array, timestamp=latest_trading_day, previous_close = previous_close_array, change=change_array, change_percentage = change_percentage_array)
# return the data back to the caller -
return PSResult{DataFrame}(data_frame)
end
function process_raw_json_data_sts_search_data(api_call_raw_data::String, data_series_key::String)::(Union{PSResult{T}, Nothing} where T<:Any)
# is the data coming back well formed, and does it contain valid data?
check_result = check_json_api_return_data(api_call_raw_data)
if check_result !== nothing
return check_result
end
# if we get here, we have valid JSON. Build dictionary -
api_data_dictionary = JSON.parse(api_call_raw_data)
# grab the time series data -
time_series_key = data_series_key
if (haskey(api_data_dictionary, time_series_key) == false)
# throw an error -
error_message = "Error: Missing data series key = $(time_series_key)"
# throw -
return PSError(error_message)
end
# all repeated up to here; consider factoring the checks above into a shared helper function
# array of keys -- this is changed per function
data_key_label_array = ["1. symbol", "2. name", "3. type", "4. region", "5. marketOpen", "6. marketClose", "7. timezone", "8. currency", "9. matchScore"]
number_of_fields = length(data_key_label_array)
#initialize arrays to hold these fields for all dates
symbol_array = String[]
name_array = String[]
type_array = String[]
region_array = String[]
market_open_array = String[]
market_close_array = String[]
timezone_array = String[]
currency_array = String[]
match_score_array = Float64[]
# ok, we have the time series key, go through the data and load into the table -
time_series_dictionary = api_data_dictionary[time_series_key]
time_series_key_array = collect(keys(time_series_dictionary))
for timestamp_value in time_series_key_array
# get the local_dictionary -
local_dictionary = time_series_dictionary[timestamp_value]
# add the price data -
for key_index = 1:number_of_fields
# grab key -
local_key = data_key_label_array[key_index]
value = local_dictionary[local_key]
# populate the array's -
if (key_index == 1)
push!(symbol_array, value)
elseif (key_index == 2)
push!(name_array, value)
elseif (key_index == 3)
push!(type_array, value)
elseif (key_index == 4)
push!(region_array, value)
elseif (key_index == 5)
push!(market_open_array, value)
elseif (key_index == 6)
push!(market_close_array, value)
elseif (key_index == 7)
push!(timezone_array, value)
elseif (key_index == 8)
push!(currency_array, value)
elseif (key_index == 9)
push!(match_score_array, parse(Float64,value))
end
end
end
# build the data frame -
data_frame = DataFrame(symbol=symbol_array, name=name_array, type=type_array, region=region_array, marketOpen=market_open_array, marketClose = market_close_array,
timezone=timezone_array, currency = currency_array, match = match_score_array)
# return the data back to the caller -
return PSResult{DataFrame}(data_frame)
end
# Technical indicators --
function process_raw_json_data_ti_sma_data(api_call_raw_data::String, data_series_key::String)::PSResult
# if we get here, we have valid JSON. Build dictionary -
api_data_dictionary = JSON.parse(api_call_raw_data)
# grab the time series data -
time_series_key = data_series_key
if (haskey(api_data_dictionary, time_series_key) == false)
# throw an error -
error_message = "Error: Missing the series key = $(time_series_key)"
# throw -
return PSError(error_message)
end
# initialize -
data_key_label_array = ["SMA"]
number_of_fields = length(data_key_label_array)
timestamp_array = Dates.Date[]
sma_value_array = Float64[]
# ok, get the data for each time point -
time_series_dictionary = api_data_dictionary[time_series_key]
time_series_key_array = collect(keys(time_series_dictionary))
for timestamp_value in time_series_key_array
# get the local_dictionary -
local_dictionary = time_series_dictionary[timestamp_value]
# cache -
push!(timestamp_array, Dates.Date(timestamp_value,"yyyy-mm-dd"))
# add the price data -
for key_index = 1:number_of_fields
# grab key -
local_key = data_key_label_array[key_index]
value = local_dictionary[local_key]
if (key_index == 1)
push!(sma_value_array, parse(Float64,value))
end
end
end
# we need to sort the timestamps, to make them in reverse order -
idx_sort = sortperm(timestamp_array)
# build the data frame -
data_frame = DataFrame(timestamp=timestamp_array[idx_sort], sma=sma_value_array[idx_sort])
# return the data back to the caller -
return PSResult{DataFrame}(data_frame)
end
function process_raw_json_data_ti_ema_data(api_call_raw_data::String, data_series_key::String)::PSResult
# if we get here, we have valid JSON. Build dictionary -
api_data_dictionary = JSON.parse(api_call_raw_data)
# grab the time series data -
time_series_key = data_series_key
if (haskey(api_data_dictionary, time_series_key) == false)
# throw an error -
error_message = "Error: Missing the series key = $(time_series_key)"
# throw -
return PSError(error_message)
end
# initialize -
data_key_label_array = ["EMA"]
number_of_fields = length(data_key_label_array)
timestamp_array = Dates.Date[]
ema_value_array = Float64[]
# ok, get the data for each time point -
time_series_dictionary = api_data_dictionary[time_series_key]
time_series_key_array = collect(keys(time_series_dictionary))
for timestamp_value in time_series_key_array
# get the local_dictionary -
local_dictionary = time_series_dictionary[timestamp_value]
# cache -
push!(timestamp_array, Dates.Date(timestamp_value,"yyyy-mm-dd"))
# add the price data -
for key_index = 1:number_of_fields
# grab key -
local_key = data_key_label_array[key_index]
value = local_dictionary[local_key]
if (key_index == 1)
push!(ema_value_array, parse(Float64,value))
end
end
end
# we need to sort the timestamps, to make them in reverse order -
idx_sort = sortperm(timestamp_array)
# build the data frame -
data_frame = DataFrame(timestamp=timestamp_array[idx_sort], ema=ema_value_array[idx_sort])
# return the data back to the caller -
return PSResult{DataFrame}(data_frame)
end
function process_raw_json_data_ti_rsi_data(api_call_raw_data::String,
data_series_key::String)::PSResult
# if we get here, we have valid JSON. Build dictionary -
api_data_dictionary = JSON.parse(api_call_raw_data)
# grab the time series data -
time_series_key = data_series_key
if (haskey(api_data_dictionary, time_series_key) == false)
# throw an error -
error_message = "Error: Missing the series key = $(time_series_key)"
# throw -
return PSError(error_message)
end
# initialize -
data_key_label_array = ["RSI"]
number_of_fields = length(data_key_label_array)
timestamp_array = Dates.Date[]
rsi_value_array = Float64[]
# ok, get the data for each time point -
time_series_dictionary = api_data_dictionary[time_series_key]
time_series_key_array = collect(keys(time_series_dictionary))
for timestamp_value in time_series_key_array
# get the local_dictionary -
local_dictionary = time_series_dictionary[timestamp_value]
# cache -
push!(timestamp_array, Dates.Date(timestamp_value,"yyyy-mm-dd"))
# add the price data -
for key_index = 1:number_of_fields
# grab key -
local_key = data_key_label_array[key_index]
value = local_dictionary[local_key]
if (key_index == 1)
push!(rsi_value_array, parse(Float64,value))
end
end
end
# we need to sort the timestamps, to make them in reverse order -
idx_sort = sortperm(timestamp_array)
# build the data frame -
data_frame = DataFrame(timestamp=timestamp_array[idx_sort], rsi=rsi_value_array[idx_sort])
# return the data back to the caller -
return PSResult{DataFrame}(data_frame)
end
# Fundamentals -
function process_raw_json_fundamentals_earnings_data(api_call_raw_data::String)::PSResult
# if we get here, we have valid JSON. Build dictionary -
api_data_dictionary = JSON.parse(api_call_raw_data)
earnings_data_dictionary = Dict{String,Any}()
# ok, so there is a symbol key that comes back -
earnings_data_dictionary["symbol"] = api_data_dictionary["symbol"]
# grab: annualEarnings -
list_of_annual_earnings_dictionaries = api_data_dictionary["annualEarnings"]
earnings_dataframe = DataFrame(fiscalDateEnding=Dates.Date[],reportedEPS=Union{Float64,Missing}[])
for earnings_dictionary in list_of_annual_earnings_dictionaries
# grab -
timestamp_value = earnings_dictionary["fiscalDateEnding"]
eps_value = earnings_dictionary["reportedEPS"]
# convert -
converted_timestamp_value = Dates.Date(timestamp_value,"yyyy-mm-dd")
converted_eps_value = parse(Float64, eps_value)
# package into the df -
push!(earnings_dataframe, (converted_timestamp_value, converted_eps_value))
end
# grab: quarterlyEarnings -
list_of_quaterly_earnings_dictionaries = api_data_dictionary["quarterlyEarnings"]
quarterly_dataframe = DataFrame(fiscalDateEnding=Dates.Date[],
reportedDate=Dates.Date[],
reportedEPS=Union{Float64,Missing}[],
estimatedEPS=Union{Float64,Missing}[],
surprise=Union{Float64,Missing}[],
surprisePercentage=Union{Float64,Missing}[])
for earnings_dictionary in list_of_quaterly_earnings_dictionaries
# grab data from dictionary -
fiscalDateEnding = Dates.Date(earnings_dictionary["fiscalDateEnding"], "yyyy-mm-dd")
reportedDate = Dates.Date(earnings_dictionary["reportedDate"], "yyyy-mm-dd")
# check reported EPS -
reportedEPS_value = earnings_dictionary["reportedEPS"]
reportedEPS = missing
if (reportedEPS_value != "None")
reportedEPS = parse(Float64, reportedEPS_value)
end
# check estimated EPS -
estimatedEPS_value = earnings_dictionary["estimatedEPS"]
estimatedEPS = missing
if (estimatedEPS_value != "None")
estimatedEPS = parse(Float64, estimatedEPS_value)
end
# check suprise -
surprise_value = earnings_dictionary["surprise"]
surprise = missing
if (surprise_value != "None")
surprise = parse(Float64, surprise_value)
end
# check surprisePercentage -
surprisePercentage_value = earnings_dictionary["surprisePercentage"]
surprisePercentage = missing
if (surprisePercentage_value != "None")
surprisePercentage = parse(Float64, surprisePercentage_value)
end
# package -
push!(quarterly_dataframe,(fiscalDateEnding, reportedDate, reportedEPS, estimatedEPS, surprise, surprisePercentage))
end
# package -
earnings_data_dictionary["annualEarnings"] = earnings_dataframe
earnings_data_dictionary["quarterlyEarnings"] = quarterly_dataframe
# return -
return PSResult(earnings_data_dictionary)
end
function process_raw_json_fundamentals_income_statement_data(api_call_raw_data::String)::PSResult
# if we get here, we have valid JSON. Build dictionary -
api_data_dictionary = JSON.parse(api_call_raw_data)
income_statement_data_dictionary = Dict{String,Any}()
# ok, so there is a symbol key that comes back -
income_statement_data_dictionary["symbol"] = api_data_dictionary["symbol"]
# process the annualReports -
# what keys are we looking for in the annual report dictionary -
annual_report_key_array = [
"fiscalDateEnding" ;
"reportedCurrency" ;
"grossProfit" ;
"totalRevenue" ;
"costOfRevenue" ;
"costofGoodsAndServicesSold" ;
"operatingIncome" ;
"sellingGeneralAndAdministrative" ;
"researchAndDevelopment" ;
"operatingExpenses" ;
"investmentIncomeNet" ;
"netInterestIncome" ;
"interestIncome" ;
"interestExpense" ;
"nonInterestIncome" ;
"otherNonOperatingIncome" ;
"depreciation" ;
"depreciationAndAmortization" ;
"incomeBeforeTax" ;
"incomeTaxExpense" ;
"interestAndDebtExpense" ;
"netIncomeFromContinuingOperations" ;
"comprehensiveIncomeNetOfTax" ;
"ebit" ;
"ebitda" ;
"netIncome" ;
]
# get the array of annual report dictionaries -
list_of_annual_report_dictionaries = api_data_dictionary["annualReports"]
annual_reports_dataframe = DataFrame(fiscalDateEnding=Dates.Date[],reportedCurrency=String[],grossProfit=Union{Float64,Missing}[],
totalRevenue=Union{Float64,Missing}[],costOfRevenue=Union{Float64,Missing}[],costofGoodsAndServicesSold=Union{Float64,Missing}[],operatingIncome=Union{Float64,Missing}[],
sellingGeneralAndAdministrative=Union{Float64,Missing}[],researchAndDevelopment=Union{Float64,Missing}[],operatingExpenses=Union{Float64,Missing}[],investmentIncomeNet=Union{Float64,Missing}[],
netInterestIncome=Union{Float64,Missing}[],interestIncome=Union{Float64,Missing}[],interestExpense=Union{Float64,Missing}[],nonInterestIncome=Union{Float64,Missing}[],otherNonOperatingIncome=Union{Float64,Missing}[],
depreciation=Union{Float64,Missing}[],depreciationAndAmortization=Union{Float64,Missing}[],incomeBeforeTax=Union{Float64,Missing}[],incomeTaxExpense=Union{Float64,Missing}[],interestAndDebtExpense=Union{Float64,Missing}[],
netIncomeFromContinuingOperations=Union{Float64,Missing}[],comprehensiveIncomeNetOfTax=Union{Float64,Missing}[],ebit=Union{Float64,Missing}[],ebitda=Union{Float64,Missing}[],netIncome=Union{Float64,Missing}[])
# loop through, and package the annual report values into the dataframe -
for annual_report in list_of_annual_report_dictionaries
# init the row of data -
data_row = Array{Any,1}()
# go through all the keys -
for annual_report_key in annual_report_key_array
# ok, so if we have fiscalDateEnding or reportedCurrency we have a non-numeric value
clean_value = 0.0
if annual_report_key == "fiscalDateEnding"
clean_value = Dates.Date(annual_report["fiscalDateEnding"], "yyyy-mm-dd")
elseif annual_report_key == "reportedCurrency"
clean_value = annual_report["reportedCurrency"]
else
# grab the value, check if none -
value = annual_report[annual_report_key]
clean_value = check_for_numerical_none_value(value)
end
# cache -
push!(data_row, clean_value)
end
# push the data row into the data frame -
push!(annual_reports_dataframe, tuple(data_row...))
end
income_statement_data_dictionary["annualReports"] = annual_reports_dataframe
# process the quarterly reports -
list_of_quaterly_report_dictionaries = api_data_dictionary["quarterlyReports"]
quaterly_reports_dataframe = DataFrame(fiscalDateEnding=Dates.Date[],reportedCurrency=String[],grossProfit=Union{Float64,Missing}[],
totalRevenue=Union{Float64,Missing}[],costOfRevenue=Union{Float64,Missing}[],costofGoodsAndServicesSold=Union{Float64,Missing}[],operatingIncome=Union{Float64,Missing}[],
sellingGeneralAndAdministrative=Union{Float64,Missing}[],researchAndDevelopment=Union{Float64,Missing}[],operatingExpenses=Union{Float64,Missing}[],investmentIncomeNet=Union{Float64,Missing}[],
netInterestIncome=Union{Float64,Missing}[],interestIncome=Union{Float64,Missing}[],interestExpense=Union{Float64,Missing}[],nonInterestIncome=Union{Float64,Missing}[],otherNonOperatingIncome=Union{Float64,Missing}[],
depreciation=Union{Float64,Missing}[],depreciationAndAmortization=Union{Float64,Missing}[],incomeBeforeTax=Union{Float64,Missing}[],incomeTaxExpense=Union{Float64,Missing}[],interestAndDebtExpense=Union{Float64,Missing}[],
netIncomeFromContinuingOperations=Union{Float64,Missing}[],comprehensiveIncomeNetOfTax=Union{Float64,Missing}[],ebit=Union{Float64,Missing}[],ebitda=Union{Float64,Missing}[],netIncome=Union{Float64,Missing}[])
# loop through, and package the quarterly report values into the dataframe -> same keys as annualReports
for quaterly_report in list_of_quaterly_report_dictionaries
# init the row of data -
data_row = Array{Any,1}()
# go through all the keys -
for annual_report_key in annual_report_key_array
# ok, so if we have fiscalDateEnding or reportedCurrency we have a non-numeric value
clean_value = 0.0
if annual_report_key == "fiscalDateEnding"
clean_value = Dates.Date(quaterly_report["fiscalDateEnding"], "yyyy-mm-dd")
elseif annual_report_key == "reportedCurrency"
clean_value = quaterly_report["reportedCurrency"]
else
# grab the value, check if none -
value = quaterly_report[annual_report_key]
clean_value = check_for_numerical_none_value(value)
end
# cache -
push!(data_row, clean_value)
end
# push the data row into the data frame -
push!(quaterly_reports_dataframe, tuple(data_row...))
end
income_statement_data_dictionary["quarterlyReports"] = quaterly_reports_dataframe
# return -
return PSResult(income_statement_data_dictionary)
end
| PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | code | 445 | function log_api_call(logger::AbstractLogger, user_model::PSUserModel, message::String)
with_logger(logger) do
# get user email -
alphavantage_api_email = user_model.alphavantage_api_email
# current timestamp -
now_stamp = now()
# formulate log message -
log_message = "$(now_stamp)::$(alphavantage_api_email)::$(message)"
# log -
@debug(log_message)
end
end | PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | code | 630 | function http_get_call_with_url(url::String)::PSResult
# check: is the URL string empty?
# check: is it a legit URL string?
# ok, so we are going to make a HTTP GET call with the URL that was passed in -
response = HTTP.request("GET",url)
# ok, so let's check if we are getting a 200 back -
if (response.status == 200)
return PSResult{String}(String(response.body))
else
# create an error, and throw it back to the caller -
error_message = "http status flag $(response.status) was returned from url $(url)"
return PSResult{PSError}(PSError(error_message))
end
end | PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | code | 552 | struct PSUserModel
# data for user -
alphavantage_api_email::String
alphavantage_api_key::String
# constructor -
function PSUserModel(api_email::String, api_key::String)
new(api_email,api_key)
end
end
struct PSDataStoreAPICallModel
# data -
ticker::String
dataType::Symbol
outputsize::Symbol
apicall::Function
function PSDataStoreAPICallModel(apicall::Function, ticker::String; dataType::Symbol=:csv, output::Symbol = :compact)
new(ticker,dataType,output,apicall)
end
end
| PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | code | 2205 | # -- PRIVATE METHODS HERE ------------------------------------------------------- #
function check_if_user_dictionary_contains_correct_keys(user_data_dictionary::Dict{String,Any})::PSResult{Bool}
# the user dictionary should contain a user_data root, and a alpha_vantage_api_key child -
# TODO: fill me in ...
return PSResult{Bool}(true)
end
# -- PUBLIC METHODS HERE ------------------------------------------------------- #
"""
build_api_user_model(path)
Returns either a PSResult{PSError} if something went wrong, or a PSResult{PSUserModel} object holding the user email and AlphaVantage API key.
The PSError and PSUserModel can be accessed using the `value` field on the Result return wrapper
"""
function build_api_user_model(path_to_configuration_file::String)::(Union{PSResult{T}, Nothing} where T<:Any)
# some user checks -
# did the user pass in a legit path?
check_result = is_path_valid(path_to_configuration_file)
if (typeof(check_result.value) == Bool && check_result.value == false)
error_message = "error: $(path_to_configuration_file) in not a valid path"
return PSResult{PSError}(PSError(error_message))
end
# ok, path seems legit - load the default user information from the config.json file -
user_json_dictionary = JSON.parsefile(path_to_configuration_file)
# does the user dictionary contain the correct keys?
if (check_if_user_dictionary_contains_correct_keys(user_json_dictionary).value == false)
error_message = "error: missing keys in user configuration dictionary"
return PSResult{PSError}(PSError(error_message))
end
# -- DO NOT EDIT BELOW THIS LINE ------------------------------------------#
# grab the user data -
alpha_vantage_api_key = user_json_dictionary["user_data"]["alpha_vantage_api_key"]
alpha_vantage_api_email = user_json_dictionary["user_data"]["alpha_vantage_api_email"]
# build APIUserModel -
api_user_model = PSUserModel(alpha_vantage_api_email, alpha_vantage_api_key)
# return the user_data_dictionary -
return PSResult{PSUserModel}(api_user_model)
# -------------------------------------------------------------------------#
end
| PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | code | 1242 | """
execute_company_earnings_api_call(requestDictionary::Dict{String,Any};
logger::Union{Nothing,AbstractLogger} = nothing) -> PSResult
Executes the AlphaVantage company earnings API call described by `requestDictionary` (requires the `symbol`, `function` and `apikey` keys). Returns a `PSResult` wrapping a `Dict` holding the ticker symbol and annual/quarterly earnings `DataFrame`s, or wrapping the error if the call fails.
"""
function execute_company_earnings_api_call(requestDictionary::Dict{String,Any};
logger::Union{Nothing,AbstractLogger} = nothing)::PSResult
# initialize -
api_call_url_string = ""
required_api_keys = [
"symbol" ; # ticker symbol -
"function" ; # what function are we doing?
"apikey" ; # the API key
]
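# A hypothetical request dictionary for this call (symbol and key are made-up
# examples; the AlphaVantage function name is assumed to be "EARNINGS"):
# Dict("symbol" => "MSFT", "function" => "EARNINGS", "apikey" => "<your-api-key>")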
try
# get stuff from the requestDictionary and build the url used in the call -
api_call_url_string = "$(alphavantage_api_url_string)?"
for key in required_api_keys
value = requestDictionary[key]
api_call_url_string*="&$(key)=$(value)"
end
# make the API call -
api_call_result = http_get_call_with_url(api_call_url_string)
response_body_string = checkresult(api_call_result)
# if we get here, we have valid JSON. Build dictionary -
return process_raw_json_fundamentals_earnings_data(response_body_string)
catch error
return PSResult(error)
end
end | PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | code | 1272 | """
execute_company_income_statement_api_call(requestDictionary::Dict{String,Any};
logger::Union{Nothing,AbstractLogger} = nothing) -> PSResult
Executes the AlphaVantage company income statement API call described by `requestDictionary` (requires the `symbol`, `function` and `apikey` keys). Returns a `PSResult` wrapping a `Dict` holding the ticker symbol and annual/quarterly report `DataFrame`s, or wrapping the error if the call fails.
"""
function execute_company_income_statement_api_call(requestDictionary::Dict{String,Any};
logger::Union{Nothing,AbstractLogger} = nothing)::PSResult
# initialize -
api_call_url_string = ""
required_api_keys = [
"symbol" ; # ticker symbol -
"function" ; # what function are we doing?
"apikey" ; # the API key
]
try
# get stuff from the requestDictionary and build the url used in the call -
api_call_url_string = "$(alphavantage_api_url_string)?"
for key in required_api_keys
value = requestDictionary[key]
api_call_url_string*="&$(key)=$(value)"
end
# make the API call -
api_call_result = http_get_call_with_url(api_call_url_string)
response_body_string = checkresult(api_call_result)
# if we get here, we have valid JSON. Build dictionary -
return process_raw_json_fundamentals_income_statement_data(response_body_string)
catch error
return PSResult(error)
end
end | PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | code | 1296 | """
execute_company_overview_api_call(requestDictionary::Dict{String,Any};
logger::Union{Nothing,AbstractLogger} = nothing) -> PSResult
Executes the AlphaVantage company overview API call described by `requestDictionary` (requires the `symbol`, `function` and `apikey` keys). Returns a `PSResult` wrapping the parsed JSON response as a `Dict{String,Any}`, or wrapping the error if the call fails.
"""
function execute_company_overview_api_call(requestDictionary::Dict{String,Any};
logger::Union{Nothing,AbstractLogger} = nothing)::PSResult
# initialize -
api_call_url_string = ""
required_api_keys = [
"symbol" ; # ticker symbol -
"function" ; # what function are we doing?
"apikey" ; # the API key
]
try
# get stuff from the requestDictionary and build the url used in the call -
api_call_url_string = "$(alphavantage_api_url_string)?"
for key in required_api_keys
value = requestDictionary[key]
api_call_url_string*="&$(key)=$(value)"
end
# make the API call -
api_call_result = http_get_call_with_url(api_call_url_string)
response_body_string = checkresult(api_call_result)
# if we get here, we have valid JSON. Build dictionary -
api_data_dictionary = JSON.parse(response_body_string)
# return -
return PSResult(api_data_dictionary)
catch error
return PSResult(error)
end
end | PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | code | 1636 | """
execute_sts_quote_api_call(user_model::PSUserModel, stock_symbol::String;
data_type::Symbol = :json, logger::Union{Nothing,AbstractLogger} = nothing) -> PSResult
Calls the AlphaVantage `GLOBAL_QUOTE` endpoint for `stock_symbol` using the API key in `user_model`. Returns a `PSResult` wrapping a single-row `DataFrame` with the latest quote (or the raw CSV table when `data_type = :csv`), or wrapping the error if the call fails.
"""
function execute_sts_quote_api_call(user_model::PSUserModel, stock_symbol::String;
data_type::Symbol = :json, logger::Union{Nothing,AbstractLogger} = nothing)::PSResult
try
# get key's and id -
api_key = user_model.alphavantage_api_key
# call to alpha_vantage_api to get data
url = "$(alphavantage_api_url_string)?function=GLOBAL_QUOTE&symbol=$(stock_symbol)&apikey=$(api_key)&datatype=$(string(data_type))"
api_call_raw_data = http_get_call_with_url(url) |> checkresult
# call to logger
if (isnothing(logger) == false)
log_api_call(logger, user_model, url)
end
# parse json if data called as a .json
if (data_type == :json)
# process json
data_series_key = "Global Quote"
return process_raw_json_data_sts_global_quote(api_call_raw_data, data_series_key)
elseif (data_type == :csv)
#return process .csv
return process_raw_csv_api_data(api_call_raw_data)
else
# tell user they requested an unsupported type of data
error_message = "$(data_type) isn't supported by AlphaVantage. Supported values are {:json, :csv}"
return PSResult{PSError}(PSError(error_message))
end
catch error
return PSResult(error)
end
end | PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | code | 3537 | """
execute_sts_daily_api_call(user_model::PSUserModel, ticker_symbol::String;
data_type::Symbol = :json, outputsize::Symbol = :compact, logger::Union{Nothing,AbstractLogger} = nothing) -> PSResult
Calls the AlphaVantage `TIME_SERIES_DAILY` endpoint for `ticker_symbol` using the API key in `user_model`. Returns a `PSResult` wrapping a `DataFrame` of open, high, low, close and volume values sorted by timestamp (or the raw CSV table when `data_type = :csv`), or wrapping the error if the call fails.
"""
function execute_sts_daily_api_call(user_model::PSUserModel, ticker_symbol::String;
data_type::Symbol = :json, outputsize::Symbol = :compact, logger::Union{Nothing,AbstractLogger} = nothing)::PSResult
try
# get key's and id -
api_key = user_model.alphavantage_api_key
# use the alpha_vantage_api to download the data -
url = "$(alphavantage_api_url_string)?function=TIME_SERIES_DAILY&symbol=$(ticker_symbol)&apikey=$(api_key)&datatype=$(string(data_type))&outputsize=$(string(outputsize))"
api_call_raw_data = http_get_call_with_url(url) |> checkresult
# log if we have one -
if (isnothing(logger) == false)
log_api_call(logger, user_model, url)
end
# make the handler calls, depending upon the data type -
if (data_type == :json)
# process -
data_series_key = "Time Series (Daily)"
return process_raw_json_api_data_sts(api_call_raw_data, data_series_key)
elseif (data_type == :csv)
# return the data back to the caller -
return process_raw_csv_api_data(api_call_raw_data)
else
# formulate the error message -
error_message = "$(data_type) is not supported. Supported values are {:json,:csv}"
# throw -
return PSResult{PSError}(PSError(error_message))
end
catch error
return PSResult(error)
end
end
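# A minimal usage sketch (not part of the original package; the configuration path
# and ticker are made-up examples) showing a direct call to this low-level API,
# as opposed to going through the high-level `execute_api_call` in Datastore.jl:
function _example_sts_daily_call()
    user_model = build_api_user_model("path/to/config.json").value
    return execute_sts_daily_api_call(user_model, "MSFT"; data_type = :json,
        outputsize = :compact)
end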
"""
execute_sts_adjusted_daily_api_call(user_model::PSUserModel, ticker_symbol::String;
data_type::Symbol = :json, outputsize::Symbol = :compact, logger::Union{Nothing,AbstractLogger} = nothing) -> PSResult
Calls the AlphaVantage `TIME_SERIES_DAILY_ADJUSTED` endpoint for `ticker_symbol` using the API key in `user_model`. Returns a `PSResult` wrapping a `DataFrame` that also includes adjusted close, dividend amount and split coefficient data, or wrapping the error if the call fails.
"""
function execute_sts_adjusted_daily_api_call(user_model::PSUserModel, ticker_symbol::String;
data_type::Symbol = :json, outputsize::Symbol = :compact, logger::Union{Nothing,AbstractLogger} = nothing)::PSResult
try
# get key's and id -
api_key = user_model.alphavantage_api_key
# call to alpha_vantage_api to get data
url = "$(alphavantage_api_url_string)?function=TIME_SERIES_DAILY_ADJUSTED&symbol=$(ticker_symbol)&apikey=$(api_key)&datatype=$(string(data_type))&outputsize=$(string(outputsize))"
api_call_raw_data = http_get_call_with_url(url) |> checkresult
# call to logger
if (isnothing(logger) == false)
log_api_call(logger, user_model, url)
end
# parse json if data called as a .json
if (data_type == :json)
# process json
data_series_key = "Time Series (Daily)"
return process_raw_json_api_data_sts_daily_adjusted(api_call_raw_data, data_series_key)
elseif (data_type == :csv)
# return process .csv
return process_raw_csv_api_data(api_call_raw_data)
else
# tell user they requested an unsupported type of data
error_message = "$(data_type) isn't supported by AlphaVantage. Supported values are {:json, :csv}"
return PSResult{PSError}(PSError(error_message))
end
catch error
return PSResult(error)
end
end
| PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | code | 2672 | """
execute_sts_intraday_api_call(requestDictionary::Dict{String,Any};
logger::Union{Nothing,AbstractLogger} = nothing) -> PSResult
Calls the AlphaVantage intraday time series endpoint described by `requestDictionary` (requires the `symbol`, `function`, `interval` and `apikey` keys; `adjusted`, `outputsize` and `datatype` are optional). Returns a `PSResult` wrapping the processed intraday data, or wrapping the error if the call fails.
"""
function execute_sts_intraday_api_call(requestDictionary::Dict{String,Any};
logger::Union{Nothing,AbstractLogger} = nothing)::PSResult
# initialize -
api_call_url_string = ""
required_api_keys = [
"symbol" ; # ticker symbol -
"function" ; # what function are we doing?
"interval" ; # what interval do we need?
"apikey" ; # the API key
"adjusted" ; # whether data is adjsuted or raw (default: true)
"outputsize" ; # Strings compact (default) and full are accepted: compact returns only the latest 100 data points in the intraday time series; full returns the full-length intraday time series.
"datatype" ; # what type of data is returned from API. Default: json
]
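# A hypothetical request dictionary for this call (symbol, interval and key are
# made-up examples; the AlphaVantage function name is assumed to be "TIME_SERIES_INTRADAY"):
# Dict("symbol" => "MSFT", "function" => "TIME_SERIES_INTRADAY",
#      "interval" => "5min", "apikey" => "<your-api-key>")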
try
# ok, before we get too far, we need to check for the optional data in the requestDictionary -
if (haskey(requestDictionary,"adjusted") == false)
requestDictionary["adjusted"] = "true"
end
# outputsize -
if (haskey(requestDictionary,"outputsize") == false)
requestDictionary["outputsize"] = "compact"
end
# datatype -
if (haskey(requestDictionary,"datatype") == false)
requestDictionary["datatype"] = "json"
end
# get stuff from the requestDictionary and build the url used in the call -
api_call_url_string = "$(alphavantage_api_url_string)?"
for (index,key) in enumerate(required_api_keys)
value = requestDictionary[key]
if (index == 1)
api_call_url_string*="$(key)=$(value)"
else
api_call_url_string*="&$(key)=$(value)"
end
end
# make the API call -
api_call_result = http_get_call_with_url(api_call_url_string)
response_body_string = checkresult(api_call_result)
# ok, we process the response body differently depending upon json -or- csv type -
data_type = requestDictionary["datatype"]
if (Symbol(data_type) == :csv)
return process_raw_csv_api_data(response_body_string)
elseif (Symbol(data_type) == :json)
return process_raw_json_sts_intraday_data(response_body_string)
else
throw(PSError("unsupported datatype in request dictionary"))
end
catch error
return PSResult(error)
end
end | PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | code | 3686 | """
execute_sts_monthly_api_call(user_model::PSUserModel, stock_symbol::String;
data_type::Symbol = :json, outputsize::Symbol = :compact, logger::Union{Nothing,AbstractLogger} = nothing) -> PSResult
Calls the AlphaVantage `TIME_SERIES_MONTHLY` endpoint for `stock_symbol` using the API key in `user_model`. Returns a `PSResult` wrapping a `DataFrame` of monthly open, high, low, close and volume values, or wrapping the error if the call fails.
"""
function execute_sts_monthly_api_call(user_model::PSUserModel, stock_symbol::String;
data_type::Symbol = :json, outputsize::Symbol = :compact, logger::Union{Nothing,AbstractLogger} = nothing)::PSResult
try
# get key's and id -
api_key = user_model.alphavantage_api_key
# use alpha_vantage_api to access data change function to monthly in this case
# change this line for different time periods
url = "$(alphavantage_api_url_string)?function=TIME_SERIES_MONTHLy&symbol=$(stock_symbol)&apikey=$(api_key)&datatype=$(string(data_type))&outputsize=$(string(outputsize))"
api_call_raw_data = http_get_call_with_url(url) |> checkresult
# call to logger if we have one -
if (isnothing(logger) == false)
log_api_call(logger, user_model, url)
end
# call a different method depending upon data type -
if (data_type == :json)
# process json
data_series_key = "Monthly Time Series"
return process_raw_json_api_data_sts(api_call_raw_data, data_series_key)
elseif (data_type == :csv)
# process csv results -
return process_raw_csv_api_data(api_call_raw_data)
else
# formulate the error message -
error_message = "$(data_type) is not supported. Supported values are {:json,:csv}"
return PSResult{PSError}(PSError(error_message))
end
catch error
return PSResult(error)
end
end
"""
execute_sts_adjusted_monthly_api_call(user_model::PSUserModel, stock_symbol::String;
data_type::Symbol = :json, outputsize::Symbol = :compact, logger::Union{Nothing,AbstractLogger} = nothing) -> PSResult
Calls the AlphaVantage `TIME_SERIES_MONTHLY_ADJUSTED` endpoint for `stock_symbol` using the API key in `user_model`. Returns a `PSResult` wrapping a `DataFrame` that also includes adjusted close and dividend amount data, or wrapping the error if the call fails.
"""
function execute_sts_adjusted_monthly_api_call(user_model::PSUserModel, stock_symbol::String;
data_type::Symbol = :json, outputsize::Symbol = :compact, logger::Union{Nothing,AbstractLogger} = nothing)::PSResult
try
# get key's and id -
api_key = user_model.alphavantage_api_key
# use alpha_vantage_api to access data change function to monthly in this case
# change this line for different time periods
url = "$(alphavantage_api_url_string)?function=TIME_SERIES_MONTHLY_ADJUSTED&symbol=$(stock_symbol)&apikey=$(api_key)&datatype=$(string(data_type))&outputsize=$(string(outputsize))"
api_call_raw_data = http_get_call_with_url(url) |> checkresult
# call to logger
if (isnothing(logger) == false)
log_api_call(logger, user_model, url)
end
if (data_type == :json)
# process json
# this string line should match the .json of the type we are calling
data_series_key = "Monthly Adjusted Time Series"
return process_raw_json_api_data_sts_adjusted(api_call_raw_data, data_series_key)
elseif (data_type == :csv)
return process_raw_csv_api_data(api_call_raw_data)
else
# formulate the error message -
error_message = "$(data_type) is not supported. Supported values are {:json,:csv}"
return PSResult{PSError}(PSError(error_message))
end
catch error
return PSResult(error)
end
end | PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | code | 3462 | """
execute_sts_weekly_api_call(user_model::PSUserModel, stock_symbol::String;
data_type::Symbol = :json, outputsize::Symbol = :compact, logger::Union{Nothing, AbstractLogger} = nothing) -> (Union{PSResult{T}, Nothing} where T<:Any)
Calls the AlphaVantage `TIME_SERIES_WEEKLY` endpoint for `stock_symbol` using the API key in `user_model`. Returns a `PSResult` wrapping a `DataFrame` of weekly open, high, low, close and volume values, or wrapping the error if the call fails.
"""
function execute_sts_weekly_api_call(user_model::PSUserModel, stock_symbol::String;
data_type::Symbol = :json, outputsize::Symbol = :compact, logger::Union{Nothing, AbstractLogger} = nothing)::PSResult
try
# get key's and id -
api_key = user_model.alphavantage_api_key
# use alpha_vantage_api to access data change function to monthly in this case
url = "$(alphavantage_api_url_string)?function=TIME_SERIES_WEEKLY&symbol=$(stock_symbol)&apikey=$(api_key)&datatype=$(string(data_type))&outputsize=$(string(outputsize))"
api_call_raw_data = http_get_call_with_url(url) |> checkresult
# call to logger -
if (isnothing(logger) == false)
log_api_call(logger, user_model, url)
end
if (data_type == :json)
# process json data -
data_series_key = "Weekly Time Series"
return process_raw_json_api_data_sts(api_call_raw_data, data_series_key)
elseif (data_type == :csv)
# process csv data -
return process_raw_csv_api_data(api_call_raw_data)
else
# formulate the error message -
error_message = "$(data_type) is not supported. Supported values are {:json,:csv}"
return PSResult{PSError}(PSError(error_message))
end
catch error
        return PSResult(error)
end
end
"""
    execute_sts_adjusted_weekly_api_call(user_model::PSUserModel, stock_symbol::String;
        data_type::Symbol = :json, outputsize::Symbol = :compact, logger::Union{Nothing, AbstractLogger} = nothing) -> PSResult
Call the AlphaVantage `TIME_SERIES_WEEKLY_ADJUSTED` endpoint for `stock_symbol` and return a `PSResult` wrapping either the processed weekly adjusted price data (a `DataFrame`) or a `PSError`.
"""
function execute_sts_adjusted_weekly_api_call(user_model::PSUserModel, stock_symbol::String;
data_type::Symbol = :json, outputsize::Symbol = :compact, logger::Union{Nothing, AbstractLogger} = nothing)::PSResult
try
# get key's and id -
api_key = user_model.alphavantage_api_key
# use alpha_vantage_api to get data
url = "$(alphavantage_api_url_string)?function=TIME_SERIES_WEEKLY_ADJUSTED&symbol=$(stock_symbol)&apikey=$(api_key)&datatype=$(string(data_type))&outputsize=$(string(outputsize))"
api_call_raw_data = http_get_call_with_url(url) |> checkresult
# call to logger -
if (isnothing(logger) == false)
log_api_call(logger, user_model, url)
end
# call different handlers depending upon type -
if (data_type == :json)
# process json -
data_series_key = "Weekly Adjusted Time Series"
return process_raw_json_api_data_sts_adjusted(api_call_raw_data, data_series_key)
elseif (data_type == :csv)
return process_raw_csv_api_data(api_call_raw_data)
else
# formulate the error message -
error_message = "$(data_type) is not supported by AlphaVantage. Supported values are {:json,:csv}"
return PSResult{PSError}(PSError(error_message))
end
catch error
return PSResult(error)
end
end
| PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | code | 1621 | """
execute_sts_search_api_call(user_model::PSUserModel, keyword::String;
data_type::Symbol = :json, logger::Union{Nothing,AbstractLogger} = nothing) -> PSResult
Call the AlphaVantage `SYMBOL_SEARCH` endpoint with the given `keyword` and return a `PSResult` wrapping either the best-matching symbols (a `DataFrame`) or a `PSError`.
"""
function execute_sts_search_api_call(user_model::PSUserModel, keyword::String;
data_type::Symbol = :json, logger::Union{Nothing,AbstractLogger} = nothing)::PSResult
try
# get key's and id -
api_key = user_model.alphavantage_api_key
# call to alpha_vantage_api to get data
url = "$(alphavantage_api_url_string)?function=SYMBOL_SEARCH&keywords=$(keyword)&apikey=$(api_key)&datatype=$(string(data_type))"
api_call_raw_data = http_get_call_with_url(url) |> checkresult
# call to logger
if (isnothing(logger) == false)
log_api_call(logger, user_model, url)
end
# parse json if data called as a .json
if (data_type == :json)
# process json
data_series_key = "bestMatches"
return process_raw_json_data_sts_search_data(api_call_raw_data, data_series_key)
elseif (data_type == :csv)
# return process .csv
return process_raw_csv_api_data(api_call_raw_data)
else
# tell user they requested an unsupported type of data
error_message = "$(data_type) isn't supported by AlphaVantage. Supported values are {:json, :csv}"
return PSResult{PSError}(PSError(error_message))
end
catch error
return PSResult(error)
end
end | PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | code | 2315 | """
execute_exponential_moving_average_api_call(requestDictionary::Dict{String,Any};
logger::Union{Nothing,AbstractLogger} = nothing) -> PSResult
Call the AlphaVantage `EMA` (exponential moving average) endpoint. `requestDictionary` must supply the keys `symbol`, `function`, `interval`, `time_period`, `series_type`, `datatype`, and `apikey`. Returns a `PSResult` wrapping either the processed indicator data (a `DataFrame`) or a `PSError`.
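# Example
A minimal sketch (the dictionary values are illustrative; `user_model` is a valid `PSUserModel`):
```julia
requestDictionary = Dict{String,Any}(
    "apikey" => user_model.alphavantage_api_key, "symbol" => "ALLY",
    "function" => "EMA", "interval" => "daily", "time_period" => "14",
    "series_type" => "close", "datatype" => "json")
df = checkresult(execute_exponential_moving_average_api_call(requestDictionary))
```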
"""
function execute_exponential_moving_average_api_call(requestDictionary::Dict{String,Any};
logger::Union{Nothing,AbstractLogger} = nothing)::PSResult
# initialize -
api_call_url_string = ""
required_api_keys = [
"symbol" ; # ticker symbol -
"function" ; # what function are we doing?
"interval" ; # what interval: 1min, 5min, 15min, 30min, 60min, daily, weekly, monthly
"time_period" ; # Number of data points used to calculate each moving average value. Positive integers are accepted (e.g., time_period=60, time_period=200)
"series_type" ; # The desired price type in the time series. Four types are supported: close, open, high, low
"datatype" ; # By default, datatype=json. Strings json and csv are accepted with the following specifications: json returns the daily time series in JSON format; csv returns the time series as a CSV (comma separated value) file.
"apikey" ; # the API key
]
try
# get stuff from the requestDictionary and build the url used in the call -
api_call_url_string = "$(alphavantage_api_url_string)?"
        for (index, key) in enumerate(required_api_keys)
            value = requestDictionary[key]
            # skip the leading '&' for the first query parameter -
            api_call_url_string *= (index == 1 ? "$(key)=$(value)" : "&$(key)=$(value)")
        end
# make the API call -
api_call_result = http_get_call_with_url(api_call_url_string)
response_body_string = checkresult(api_call_result)
# ok, we process the response body differently depending upon json -or- csv type -
data_type = requestDictionary["datatype"]
if (Symbol(data_type) == :csv)
return process_raw_csv_api_data(response_body_string)
elseif (Symbol(data_type) == :json)
data_series_key = "Technical Analysis: EMA"
return process_raw_json_data_ti_ema_data(response_body_string, data_series_key)
else
throw(PSError("unsupported datatype in request dictionary"))
end
catch error
return PSResult(error)
end
end | PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | code | 2464 | """
execute_relative_strength_index_api_call(requestDictionary::Dict{String,Any};
logger::Union{Nothing,AbstractLogger} = nothing) -> PSResult
Call the AlphaVantage `RSI` (relative strength index) endpoint. `requestDictionary` must supply the keys `symbol`, `function`, `interval`, `time_period`, `series_type`, `datatype`, and `apikey`. Returns a `PSResult` wrapping either the processed indicator data (a `DataFrame`) or a `PSError`.
"""
function execute_relative_strength_index_api_call(requestDictionary::Dict{String,Any};
logger::Union{Nothing,AbstractLogger} = nothing)::PSResult
# initialize -
api_call_url_string = ""
required_api_keys = [
"symbol" ; # ticker symbol -
"function" ; # what function are we doing?
"interval" ; # what interval: 1min, 5min, 15min, 30min, 60min, daily, weekly, monthly
"time_period" ; # Number of data points used to calculate each moving average value. Positive integers are accepted (e.g., time_period=60, time_period=200)
"series_type" ; # The desired price type in the time series. Four types are supported: close, open, high, low
"datatype" ; # By default, datatype=json. Strings json and csv are accepted with the following specifications: json returns the daily time series in JSON format; csv returns the time series as a CSV (comma separated value) file.
"apikey" ; # the API key
]
try
# get stuff from the requestDictionary and build the url used in the call -
api_call_url_string = "$(alphavantage_api_url_string)?"
for (index,key) in enumerate(required_api_keys)
value = requestDictionary[key]
if (index == 1)
api_call_url_string*="$(key)=$(value)"
else
api_call_url_string*="&$(key)=$(value)"
end
end
# make the API call -
api_call_result = http_get_call_with_url(api_call_url_string)
response_body_string = checkresult(api_call_result)
# ok, we process the response body differently depending upon json -or- csv type -
data_type = requestDictionary["datatype"]
if (Symbol(data_type) == :csv)
return process_raw_csv_api_data(response_body_string)
elseif (Symbol(data_type) == :json)
data_series_key = "Technical Analysis: RSI"
return process_raw_json_data_ti_rsi_data(response_body_string, data_series_key)
else
throw(PSError("unsupported datatype in request dictionary"))
end
catch error
return PSResult(error)
end
end | PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | code | 2305 | """
execute_simple_moving_average_api_call(requestDictionary::Dict{String,Any};
logger::Union{Nothing,AbstractLogger} = nothing) -> PSResult
Call the AlphaVantage `SMA` (simple moving average) endpoint. `requestDictionary` must supply the keys `symbol`, `function`, `interval`, `time_period`, `series_type`, `datatype`, and `apikey`. Returns a `PSResult` wrapping either the processed indicator data (a `DataFrame`) or a `PSError`.
"""
function execute_simple_moving_average_api_call(requestDictionary::Dict{String,Any};
logger::Union{Nothing,AbstractLogger} = nothing)::PSResult
# initialize -
api_call_url_string = ""
required_api_keys = [
"symbol" ; # ticker symbol -
"function" ; # what function are we doing?
"interval" ; # what interval: 1min, 5min, 15min, 30min, 60min, daily, weekly, monthly
"time_period" ; # Number of data points used to calculate each moving average value. Positive integers are accepted (e.g., time_period=60, time_period=200)
"series_type" ; # The desired price type in the time series. Four types are supported: close, open, high, low
"datatype" ; # By default, datatype=json. Strings json and csv are accepted with the following specifications: json returns the daily time series in JSON format; csv returns the time series as a CSV (comma separated value) file.
"apikey" ; # the API key
]
try
# get stuff from the requestDictionary and build the url used in the call -
api_call_url_string = "$(alphavantage_api_url_string)?"
        for (index, key) in enumerate(required_api_keys)
            value = requestDictionary[key]
            # skip the leading '&' for the first query parameter -
            api_call_url_string *= (index == 1 ? "$(key)=$(value)" : "&$(key)=$(value)")
        end
# make the API call -
api_call_result = http_get_call_with_url(api_call_url_string)
response_body_string = checkresult(api_call_result)
# ok, we process the response body differently depending upon json -or- csv type -
data_type = requestDictionary["datatype"]
if (Symbol(data_type) == :csv)
return process_raw_csv_api_data(response_body_string)
elseif (Symbol(data_type) == :json)
data_series_key = "Technical Analysis: SMA"
return process_raw_json_data_ti_sma_data(response_body_string, data_series_key)
else
throw(PSError("unsupported datatype in request dictionary"))
end
catch error
return PSResult(error)
end
end | PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | code | 8036 | using PooksoftAlphaVantageDataStore
using Test
using JSON
using DataFrames
# -- User creation/low level download tests --------------------------------------------- #
function build_api_user_model_test()
# initialize -
my_current_dir = pwd() # where am I?
path_to_config_file = my_current_dir*"/configuration/Configuration.json"
# build the api user model -
user_model_result = build_api_user_model(path_to_config_file)
if (typeof(user_model_result.value) == PSError)
return false
end
user_model = user_model_result.value
user_email = user_model.alphavantage_api_email
# is the user email the same as in Configuration.json?
user_json_dictionary = JSON.parsefile(path_to_config_file)
alpha_vantage_api_email = user_json_dictionary["user_data"]["alpha_vantage_api_email"]
if (user_email == alpha_vantage_api_email)
return true
end
# return -
return false
end
function download_daily_appl_sts_test_low_level()
# initialize -
my_current_dir = pwd() # where am I?
path_to_config_file = my_current_dir*"/configuration/Configuration.json"
# build the api user model -
user_model_result = build_api_user_model(path_to_config_file)
if (typeof(user_model_result.value) == PSError)
return false
end
# get the user model, we'll need this to make an API call -
user_model = user_model_result.value
# make an API call -
stock_symbol = "aapl"
data_type = :json
outputsize = :compact
api_call_result = execute_sts_daily_api_call(user_model, stock_symbol; data_type = data_type, outputsize = outputsize, logger=nothing)
# check -
if (typeof(api_call_result.value) == DataFrame)
return true
end
# return -
return false
end
function download_daily_adjusted_appl_sts_test_low_level()
# initialize -
my_current_dir = pwd() # where am I?
path_to_config_file = my_current_dir*"/configuration/Configuration.json"
# build the api user model -
user_model_result = build_api_user_model(path_to_config_file)
if (typeof(user_model_result.value) == PSError)
return false
end
# get the user model, we'll need this to make an API call -
user_model = user_model_result.value
# make an API call -
stock_symbol = "aapl"
data_type = :json
outputsize = :compact
api_call_result = execute_sts_adjusted_daily_api_call(user_model, stock_symbol; data_type = data_type, outputsize = outputsize, logger=nothing)
# check -
if (typeof(api_call_result.value) == DataFrame)
return true
end
# return -
return false
end
function download_weekly_adjusted_appl_sts_test_low_level()
# initialize -
my_current_dir = pwd() # where am I?
path_to_config_file = my_current_dir*"/configuration/Configuration.json"
# build the api user model -
user_model_result = build_api_user_model(path_to_config_file)
if (typeof(user_model_result.value) == PSError)
return false
end
# get the user model, we'll need this to make an API call -
user_model = user_model_result.value
# make an API call -
stock_symbol = "aapl"
data_type = :json
outputsize = :compact
api_call_result = execute_sts_adjusted_weekly_api_call(user_model, stock_symbol; data_type = data_type, outputsize = outputsize, logger=nothing)
# check -
if (typeof(api_call_result.value) == DataFrame)
return true
end
# return -
return false
end
function download_weekly_appl_sts_test_low_level()
# initialize -
my_current_dir = pwd() # where am I?
path_to_config_file = my_current_dir*"/configuration/Configuration.json"
# build the api user model -
user_model_result = build_api_user_model(path_to_config_file)
if (typeof(user_model_result.value) == PSError)
return false
end
# get the user model, we'll need this to make an API call -
user_model = user_model_result.value
# make an API call -
stock_symbol = "aapl"
data_type = :json
outputsize = :compact
api_call_result = execute_sts_weekly_api_call(user_model, stock_symbol; data_type = data_type, outputsize = outputsize, logger=nothing)
# check -
if (typeof(api_call_result.value) == DataFrame)
return true
end
# return -
return false
end
function download_monthly_adjusted_appl_sts_test_low_level()
# initialize -
my_current_dir = pwd() # where am I?
path_to_config_file = my_current_dir*"/configuration/Configuration.json"
# build the api user model -
user_model_result = build_api_user_model(path_to_config_file)
if (typeof(user_model_result.value) == PSError)
return false
end
# get the user model, we'll need this to make an API call -
user_model = user_model_result.value
# make an API call -
stock_symbol = "aapl"
data_type = :json
outputsize = :compact
api_call_result = execute_sts_adjusted_monthly_api_call(user_model, stock_symbol; data_type = data_type, outputsize = outputsize, logger=nothing)
# check -
if (typeof(api_call_result.value) == DataFrame)
return true
end
# return -
return false
end
function download_monthly_appl_sts_test_low_level()
# initialize -
my_current_dir = pwd() # where am I?
path_to_config_file = my_current_dir*"/configuration/Configuration.json"
# build the api user model -
user_model_result = build_api_user_model(path_to_config_file)
if (typeof(user_model_result.value) == PSError)
return false
end
# get the user model, we'll need this to make an API call -
user_model = user_model_result.value
# make an API call -
stock_symbol = "aapl"
data_type = :json
outputsize = :compact
api_call_result = execute_sts_monthly_api_call(user_model, stock_symbol; data_type = data_type, outputsize = outputsize, logger=nothing)
# check -
if (typeof(api_call_result.value) == DataFrame)
return true
end
# return -
return false
end
#---------------------------------------------------------------------------------------- #
# -- User creation/high level download tests -------------------------------------------- #
function download_daily_appl_sts_test_high_level()
# initialize -
my_current_dir = pwd() # where am I?
path_to_config_file = my_current_dir*"/configuration/Configuration.json"
# build the api user model -
user_model_result = build_api_user_model(path_to_config_file)
if (typeof(user_model_result.value) == PSError)
return false
end
# get the user model, we'll need this to make an API call -
user_model = user_model_result.value
# setup the API call -
stock_symbol = "aapl"
data_type = :json
outputsize = :compact
call = execute_sts_adjusted_daily_api_call
apicall_model = build_datastore_apicall_model(call,stock_symbol; output=outputsize, datatype=data_type)
if (isa(apicall_model.value, PSDataStoreAPICallModel) == false)
return false
end
# make the apicall -
api_call_result = execute_api_call(user_model, apicall_model.value)
# check -
if (isa(api_call_result.value,DataFrame) == true)
return true
end
# return -
return false
end
#---------------------------------------------------------------------------------------- #
@testset "user_test_set" begin
# @test build_api_user_model_test_low_level() == true
# @test download_daily_appl_sts_test_low_level() == true
# @test download_daily_adjusted_appl_sts_test_low_level() == true
@test download_weekly_appl_sts_test_low_level() == true
@test download_weekly_adjusted_appl_sts_test_low_level() == true
@test download_monthly_appl_sts_test_low_level() == true
# sleep(60) # wait 1 min -
# @test download_monthly_adjusted_appl_sts_test_low_level() == true
@test download_daily_appl_sts_test_high_level() == true
end
| PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | code | 1660 | using PooksoftAlphaVantageDataStore
# ok, so let's build the user object -
my_current_dir = pwd() # where am I?
path_to_config_file = my_current_dir*"/test/configuration/Configuration.json"
# build the api user model -
user_model_result = build_api_user_model(path_to_config_file)
user_model = checkresult(user_model_result)
# now that we have the userModel - lets populate the request dictionary -
required_api_keys = [
"symbol" ; # ticker symbol -
"function" ; # what function are we doing?
"interval" ; # what interval: 1min, 5min, 15min, 30min, 60min, daily, weekly, monthly
"time_period" ; # Number of data points used to calculate each moving average value. Positive integers are accepted (e.g., time_period=60, time_period=200)
"series_type" ; # The desired price type in the time series. Four types are supported: close, open, high, low
"datatype" ; # By default, datatype=json. Strings json and csv are accepted with the following specifications: json returns the daily time series in JSON format; csv returns the time series as a CSV (comma separated value) file.
"apikey" ; # the API key
]
requestDictionary = Dict{String,Any}()
requestDictionary["apikey"] = user_model.alphavantage_api_key
requestDictionary["symbol"] = "ALLY"
requestDictionary["function"] = "EMA"
requestDictionary["interval"] = "daily"
requestDictionary["time_period"] = "14"
requestDictionary["series_type"] = "close"
requestDictionary["datatype"] = "json"
api_call_result = execute_exponential_moving_average_api_call(requestDictionary)
df = checkresult(api_call_result)
| PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | code | 857 | using PooksoftAlphaVantageDataStore
# ok, so let's build the user object -
my_current_dir = pwd() # where am I?
path_to_config_file = my_current_dir*"/test/configuration/Configuration.json"
# build the api user model -
user_model_result = build_api_user_model(path_to_config_file)
user_model = checkresult(user_model_result)
# now that we have the userModel - lets populate the request dictionary -
required_api_keys = [
"symbol" ; # ticker symbol -
"function" ; # what function are we doing?
"apikey" ; # the API key
]
requestDictionary = Dict{String,Any}()
requestDictionary["apikey"] = user_model.alphavantage_api_key
requestDictionary["symbol"] = "ALLY"
requestDictionary["function"] = "EARNINGS"
api_call_result = execute_company_earnings_api_call(requestDictionary)
df = checkresult(api_call_result) | PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | code | 873 | using PooksoftAlphaVantageDataStore
# ok, so let's build the user object -
my_current_dir = pwd() # where am I?
path_to_config_file = my_current_dir*"/test/configuration/Configuration.json"
# build the api user model -
user_model_result = build_api_user_model(path_to_config_file)
user_model = checkresult(user_model_result)
# now that we have the userModel - lets populate the request dictionary -
required_api_keys = [
"symbol" ; # ticker symbol -
"function" ; # what function are we doing?
"apikey" ; # the API key
]
requestDictionary = Dict{String,Any}()
requestDictionary["apikey"] = user_model.alphavantage_api_key
requestDictionary["symbol"] = "ALLY"
requestDictionary["function"] = "INCOME_STATEMENT"
api_call_result = execute_company_income_statement_api_call(requestDictionary)
df = checkresult(api_call_result) | PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | code | 857 | using PooksoftAlphaVantageDataStore
# ok, so let's build the user object -
my_current_dir = pwd() # where am I?
path_to_config_file = my_current_dir*"/test/configuration/Configuration.json"
# build the api user model -
user_model_result = build_api_user_model(path_to_config_file)
user_model = checkresult(user_model_result)
# now that we have the userModel - lets populate the request dictionary -
required_api_keys = [
"symbol" ; # ticker symbol -
"function" ; # what function are we doing?
"apikey" ; # the API key
]
requestDictionary = Dict{String,Any}()
requestDictionary["apikey"] = user_model.alphavantage_api_key
requestDictionary["symbol"] = "ALLY"
requestDictionary["function"] = "OVERVIEW"
api_call_result = execute_company_overview_api_call(requestDictionary)
df = checkresult(api_call_result) | PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | code | 262 | using PooksoftAlphaVantageDataStore
using JSON
# ok, so let's load the sample data file -
my_current_dir = pwd() # where am I?
path_to_data_file = my_current_dir*"/test/data/IBM-Intraday-04-14-2021.json"
# load -
json_dictionary = JSON.parsefile(path_to_data_file) | PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | code | 1657 | using PooksoftAlphaVantageDataStore
# ok, so let's build the user object -
my_current_dir = pwd() # where am I?
path_to_config_file = my_current_dir*"/test/configuration/Configuration.json"
# build the api user model -
user_model_result = build_api_user_model(path_to_config_file)
user_model = checkresult(user_model_result)
# now that we have the userModel - lets populate the request dictionary -
required_api_keys = [
"symbol" ; # ticker symbol -
"function" ; # what function are we doing?
"interval" ; # what interval: 1min, 5min, 15min, 30min, 60min, daily, weekly, monthly
"time_period" ; # Number of data points used to calculate each moving average value. Positive integers are accepted (e.g., time_period=60, time_period=200)
"series_type" ; # The desired price type in the time series. Four types are supported: close, open, high, low
"datatype" ; # By default, datatype=json. Strings json and csv are accepted with the following specifications: json returns the daily time series in JSON format; csv returns the time series as a CSV (comma separated value) file.
"apikey" ; # the API key
]
requestDictionary = Dict{String,Any}()
requestDictionary["apikey"] = user_model.alphavantage_api_key
requestDictionary["symbol"] = "ALLY"
requestDictionary["function"] = "RSI"
requestDictionary["interval"] = "daily"
requestDictionary["time_period"] = "14"
requestDictionary["series_type"] = "close"
requestDictionary["datatype"] = "json"
api_call_result = execute_relative_strength_index_api_call(requestDictionary)
df = checkresult(api_call_result)
| PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | code | 1658 | using PooksoftAlphaVantageDataStore
# ok, so let's build the user object -
my_current_dir = pwd() # where am I?
path_to_config_file = my_current_dir*"/test/configuration/Configuration.json"
# build the api user model -
user_model_result = build_api_user_model(path_to_config_file)
user_model = checkresult(user_model_result)
# now that we have the userModel - lets populate the request dictionary -
required_api_keys = [
"symbol" ; # ticker symbol -
"function" ; # what function are we doing?
"interval" ; # what interval: 1min, 5min, 15min, 30min, 60min, daily, weekly, monthly
"time_period" ; # Number of data points used to calculate each moving average value. Positive integers are accepted (e.g., time_period=60, time_period=200)
"series_type" ; # The desired price type in the time series. Four types are supported: close, open, high, low
"datatype" ; # By default, datatype=json. Strings json and csv are accepted with the following specifications: json returns the daily time series in JSON format; csv returns the time series as a CSV (comma separated value) file.
"apikey" ; # the API key
]
requestDictionary = Dict{String,Any}()
requestDictionary["apikey"] = user_model.alphavantage_api_key
requestDictionary["symbol"] = "ALLY"
requestDictionary["function"] = "SMA"
requestDictionary["interval"] = "daily"
requestDictionary["time_period"] = "14"
requestDictionary["series_type"] = "close"
requestDictionary["datatype"] = "json"
api_call_result = execute_simple_moving_average_api_call(requestDictionary)
df = checkresult(api_call_result)
| PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | code | 520 | using PooksoftAlphaVantageDataStore
# ok, so let's build the user object -
my_current_dir = pwd() # where am I?
path_to_config_file = my_current_dir*"/test/configuration/Configuration.json"
# build the api user model -
user_model = build_api_user_model(path_to_config_file) |> checkresult
# make an API call -
stock_symbol = "aapl"
data_type = :json
outputsize = :compact
df = execute_sts_adjusted_daily_api_call(user_model, stock_symbol; data_type = data_type, outputsize = outputsize, logger=nothing) |> checkresult | PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | code | 522 | using PooksoftAlphaVantageDataStore
# ok, so let's build the user object -
my_current_dir = pwd() # where am I?
path_to_config_file = my_current_dir*"/test/configuration/Configuration.json"
# build the api user model -
user_model = build_api_user_model(path_to_config_file) |> checkresult
# make an API call -
stock_symbol = "aapl"
data_type = :json
outputsize = :compact
df = execute_sts_adjusted_monthly_api_call(user_model, stock_symbol; data_type = data_type, outputsize = outputsize, logger=nothing) |> checkresult | PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | code | 521 | using PooksoftAlphaVantageDataStore
# ok, so let's build the user object -
my_current_dir = pwd() # where am I?
path_to_config_file = my_current_dir*"/test/configuration/Configuration.json"
# build the api user model -
user_model = build_api_user_model(path_to_config_file) |> checkresult
# make an API call -
stock_symbol = "ally"
data_type = :json
outputsize = :compact
df = execute_sts_adjusted_weekly_api_call(user_model, stock_symbol; data_type = data_type, outputsize = outputsize, logger=nothing) |> checkresult | PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | code | 511 | using PooksoftAlphaVantageDataStore
# ok, so let's build the user object -
my_current_dir = pwd() # where am I?
path_to_config_file = my_current_dir*"/test/configuration/Configuration.json"
# build the api user model -
user_model = build_api_user_model(path_to_config_file) |> checkresult
# make an API call -
stock_symbol = "aapl"
data_type = :json
outputsize = :compact
df = execute_sts_daily_api_call(user_model, stock_symbol; data_type = data_type, outputsize = outputsize, logger=nothing) |> checkresult | PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | code | 763 | using PooksoftAlphaVantageDataStore
# ok, so let's build the user object -
my_current_dir = pwd() # where am I?
path_to_config_file = my_current_dir*"/test/configuration/Configuration.json"
# build the api user model -
user_model = build_api_user_model(path_to_config_file) |> checkresult
# setup call -
requestDictionary = Dict{String,Any}()
requestDictionary["apikey"] = user_model.alphavantage_api_key
requestDictionary["symbol"] = "ALLY"
requestDictionary["function"] = "TIME_SERIES_INTRADAY"
requestDictionary["interval"] = "30min"
requestDictionary["adjusted"] = "true"
requestDictionary["outputsize"] = "compact"
requestDictionary["datatype"] = "json"
api_call_result = execute_sts_intraday_api_call(requestDictionary)
df = checkresult(api_call_result)
| PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | code | 513 | using PooksoftAlphaVantageDataStore
# ok, so let's build the user object -
my_current_dir = pwd() # where am I?
path_to_config_file = my_current_dir*"/test/configuration/Configuration.json"
# build the api user model -
user_model = build_api_user_model(path_to_config_file) |> checkresult
# make an API call -
stock_symbol = "aapl"
data_type = :json
outputsize = :compact
df = execute_sts_monthly_api_call(user_model, stock_symbol; data_type = data_type, outputsize = outputsize, logger=nothing) |> checkresult | PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | code | 466 | using PooksoftAlphaVantageDataStore
# ok, so let's build the user object -
my_current_dir = pwd() # where am I?
path_to_config_file = my_current_dir*"/test/configuration/Configuration.json"
# build the api user model -
user_model = build_api_user_model(path_to_config_file) |> checkresult
# make an API call -
ticker_symbol = "ally"
data_type = :json
df = execute_sts_quote_api_call(user_model, ticker_symbol; data_type = data_type, logger=nothing) |> checkresult | PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | code | 455 | using PooksoftAlphaVantageDataStore
# ok, so let's build the user object -
my_current_dir = pwd() # where am I?
path_to_config_file = my_current_dir*"/test/configuration/Configuration.json"
# build the api user model -
user_model = build_api_user_model(path_to_config_file) |> checkresult
# make an API call -
keyword = "ally"
data_type = :json
df = execute_sts_search_api_call(user_model, keyword; data_type = data_type, logger=nothing) |> checkresult | PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | code | 512 | using PooksoftAlphaVantageDataStore
# ok, so let's build the user object -
my_current_dir = pwd() # where am I?
path_to_config_file = my_current_dir*"/test/configuration/Configuration.json"
# build the api user model -
user_model = build_api_user_model(path_to_config_file) |> checkresult
# make an API call -
stock_symbol = "ally"
data_type = :json
outputsize = :compact
df = execute_sts_weekly_api_call(user_model, stock_symbol; data_type = data_type, outputsize = outputsize, logger=nothing) |> checkresult | PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | docs | 4803 | 
[](https://pooksoft.github.io/PooksoftAlphaVantageDataStore.jl/dev/)

## Introduction
``PooksoftAlphaVantageDataStore.jl`` is an application programming interface (API) for [AlphaVantage](https://www.alphavantage.co), a leading provider of realtime and historical stock, forex (FX), and digital/crypto currency data feeds, written in the [Julia](https://julialang.org) programming language.
## Installation and Requirements
``PooksoftAlphaVantageDataStore.jl`` can be installed, updated, or removed using the [Julia package management system](https://docs.julialang.org/en/v1/stdlib/Pkg/). To access the package management interface, open the [Julia REPL](https://docs.julialang.org/en/v1/stdlib/REPL/), and start the package mode by pressing `]`.
While in package mode, to install ``PooksoftAlphaVantageDataStore.jl``, issue the command:
(@v1.6) pkg> add PooksoftAlphaVantageDataStore
To use ``PooksoftAlphaVantageDataStore.jl`` in your project issue the command:
julia> using PooksoftAlphaVantageDataStore
## Utility functions
The utility functions construct two important composite data types, [PSUserModel](https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl/blob/master/src/base/Types.jl) which encapsulates information about your [AlphaVantage](https://www.alphavantage.co) account and [PSDataStoreAPICallModel](https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl/blob/master/src/base/Types.jl) which can be used along with the high-level interface to make [AlphaVantage](https://www.alphavantage.co) application programming interface (API) calls.
Utility functions:
* [build_api_user_model](https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl/blob/master/src/base/User.jl) | Function to build a user model object which requires an [AlphaVantage API key](https://www.alphavantage.co/support/#api-key)
* [build_datastore_apicall_model](https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl/blob/master/src/base/Datastore.jl) | Utility function to build an api call model which is required for the high-level api call interface. This function returns a [PSDataStoreAPICallModel](https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl/blob/master/src/base/Types.jl) object wrapped in a ``PSResult`` type; the [PSDataStoreAPICallModel](https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl/blob/master/src/base/Types.jl) object can be accessed from the ``value`` field of the ``PSResult`` type.
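For example, a minimal sketch for building the user model (it assumes a `Configuration.json` file holding your [AlphaVantage API key](https://www.alphavantage.co/support/#api-key); adjust the path to wherever you keep that file):

    using PooksoftAlphaVantageDataStore
    path_to_config_file = joinpath(pwd(), "configuration", "Configuration.json")
    user_model = build_api_user_model(path_to_config_file) |> checkresult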
## Stock Time Series (STS) functions (low-level interface)
STS functions allow you to download daily, weekly, or monthly stock price data (or adjusted data), at a request frequency that depends upon your [AlphaVantage](https://www.alphavantage.co/support/#support) account privileges. These functions take the form:
execute_sts_{*}_api_call
where `{*}` denotes the time frame for the data.
* [execute_sts_daily_api_call](https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl/blob/master/src/sts/STSDaily.jl) | Download daily stock price information
* [execute_sts_adjusted_daily_api_call](https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl/blob/master/src/sts/STSDaily.jl) | Download adjusted daily stock price information
* [execute_sts_weekly_api_call](https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl/blob/master/src/sts/STSWeekly.jl) | Download weekly stock price information
* [execute_sts_adjusted_weekly_api_call](https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl/blob/master/src/sts/STSWeekly.jl) | Download adjusted weekly stock price information
* [execute_sts_monthly_api_call](https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl/blob/master/src/sts/STSMonthly.jl) | Download monthly stock price information
* [execute_sts_adjusted_monthly_api_call](https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl/blob/master/src/sts/STSMonthly.jl) | Download adjusted monthly stock price information
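A minimal sketch of a low-level call (the ticker symbol and keyword values are illustrative; `user_model` is the object built in the sketch above):

    result = execute_sts_daily_api_call(user_model, "aapl"; data_type = :json, outputsize = :compact, logger = nothing)
    price_df = checkresult(result) # unwrap the PSResult; a DataFrame on success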
## Stock Time Series (STS) functions (high-level interface)
There is also a high-level interface that calls the low-level functions. The high-level interface has the convenience function:
execute_api_call
which takes a [PSUserModel](https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl/blob/master/src/base/Types.jl) and a [PSDataStoreAPICallModel](https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl/blob/master/src/base/Types.jl) object, along with an optional [logger](https://github.com/kmsquire/Logging.jl) instance. Use the help system at the julia prompt for additional information on the ``execute_api_call`` function.
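A minimal sketch of the high-level route (again illustrative; it reuses the `user_model` built earlier):

    call = execute_sts_adjusted_daily_api_call
    apicall_model = build_datastore_apicall_model(call, "aapl"; output = :compact, datatype = :json) |> checkresult
    price_df = execute_api_call(user_model, apicall_model) |> checkresult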
| PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | docs | 211 | # Quote Endpoint
A lightweight alternative to the time series functions. The `quote` function returns the price and volume information for a ticker symbol of your choice.
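A minimal usage sketch (the ticker symbol is illustrative, and a `PSUserModel` named `user_model` is assumed to have been built with `build_api_user_model`):
```julia
df = execute_sts_quote_api_call(user_model, "ally"; data_type = :json) |> checkresult
```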
```@docs
execute_sts_quote_api_call
``` | PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | docs | 292 | # Search Endpoint
The Search Endpoint returns the best-matching symbols and market information based on keywords. The search results also contain match scores that provide you with the full flexibility to develop your own search and filtering logic.
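A minimal usage sketch (the keyword is illustrative, and a `PSUserModel` named `user_model` is assumed to have been built with `build_api_user_model`):
```julia
matches = execute_sts_search_api_call(user_model, "ally"; data_type = :json) |> checkresult
```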
```@docs
execute_sts_search_api_call
``` | PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | docs | 78 | # Adjusted Time Series Daily
```@docs
execute_sts_adjusted_daily_api_call
``` | PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | docs | 82 | # Adjusted Time Series Monthly
```@docs
execute_sts_adjusted_monthly_api_call
``` | PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | docs | 80 | # Adjusted Time Series Weekly
```@docs
execute_sts_adjusted_weekly_api_call
``` | PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | docs | 60 | # Time Series Daily
```@docs
execute_sts_daily_api_call
``` | PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | docs | 64 | # Time Series Monthly
```@docs
execute_sts_monthly_api_call
``` | PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | docs | 62 | # Time Series Weekly
```@docs
execute_sts_weekly_api_call
``` | PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | docs | 92 | # Exponential Moving Average (EMA)
```@docs
execute_exponential_moving_average_api_call
``` | PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | docs | 471 | # Relative Strength Index (RSI)
The Relative Strength Index (RSI) is a momentum indicator that measures the magnitude of recent asset price changes to evaluate whether that asset is overbought or oversold. For more information on the RSI, please see the [RSI article on Investopedia](https://www.investopedia.com/articles/active-trading/042114/overbought-or-oversold-use-relative-strength-index-find-out.asp).
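A minimal usage sketch (the request values are illustrative, and a `PSUserModel` named `user_model` is assumed):
```julia
requestDictionary = Dict{String,Any}(
    "apikey"      => user_model.alphavantage_api_key,
    "symbol"      => "ALLY",
    "function"    => "RSI",
    "interval"    => "daily",
    "time_period" => "14",
    "series_type" => "close",
    "datatype"    => "json",
)
df = execute_relative_strength_index_api_call(requestDictionary) |> checkresult
```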
```@docs
execute_relative_strength_index_api_call
``` | PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | docs | 82 | # Simple Moving Average (SMA)
```@docs
execute_simple_moving_average_api_call
``` | PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 0.1.1 | 35fb2ed71630b39a91e25bc63704af3d28b672f1 | docs | 2439 | # Introduction
This package provides a set of [Julia](https://julialang.org) functions to download data from the [AlphaVantage](https://www.alphavantage.co) financial data application programming interface (API). This package assumes you have login credentials for the [AlphaVantage](https://www.alphavantage.co) API.
### Installation Requirements
`PooksoftAlphaVantageDataStore.jl` can be installed, updated, or removed using the [Julia package management system](https://docs.julialang.org/en/v1/stdlib/Pkg/). To access the package management interface, open the [Julia REPL](https://docs.julialang.org/en/v1/stdlib/REPL/), and start the package mode by pressing `]`.
While in package mode, to install `PooksoftAlphaVantageDataStore.jl`, issue the command:
(@v1.6) pkg> add PooksoftAlphaVantageDataStore
Required packages will be downloaded and installed for you automatically.
To use `PooksoftAlphaVantageDataStore.jl` in your projects issue the command:
julia> using PooksoftAlphaVantageDataStore
### Package Organization
Functions of `PooksoftAlphaVantageDataStore.jl` are organized around each of the category areas of the
[AlphaVantage](https://www.alphavantage.co) application programming interface. The current version of
`PooksoftAlphaVantageDataStore.jl` implements functions that wrap the Stock Time Series (STS) methods of
[AlphaVantage](https://www.alphavantage.co).
#### Disclaimer
`PooksoftAlphaVantageDataStore.jl` is offered solely for training and informational purposes. No offer or solicitation to buy or sell securities or securities derivative products of any kind, or any type of investment or trading advice or strategy, is made, given or in any manner endorsed by Pooksoft.
Trading involves risk. Carefully review your financial situation before investing in securities, futures contracts, options or commodity interests. Past performance, whether actual or indicated by historical tests of strategies, is no guarantee of future performance or success. Trading is generally not appropriate for someone with limited resources, investment or trading experience, or a low risk tolerance. Use only risk capital that will not be needed for living expenses.
You are fully responsible for any investment or trading decisions you make, and such decisions should be based solely on your evaluation of your financial circumstances, investment or trading objectives, risk tolerance and liquidity needs.
| PooksoftAlphaVantageDataStore | https://github.com/Pooksoft/PooksoftAlphaVantageDataStore.jl.git |
|
[
"MIT"
] | 1.6.0 | c6ec94d2aaba1ab2ff983052cf6a606ca5985902 | code | 128 | const TEST_NAME = "AESNI1x128"
include("common.jl")
using RandomNumbers.Random123
r = AESNI1x(123)
test_all(r, 100_000_000)
| RandomNumbers | https://github.com/JuliaRandom/RandomNumbers.jl.git |
|
[
"MIT"
] | 1.6.0 | c6ec94d2aaba1ab2ff983052cf6a606ca5985902 | code | 124 | const TEST_NAME = "ARS1x128"
include("common.jl")
using RandomNumbers.Random123
r = ARS1x(123)
test_all(r, 100_000_000)
| RandomNumbers | https://github.com/JuliaRandom/RandomNumbers.jl.git |
|
[
"MIT"
] | 1.6.0 | c6ec94d2aaba1ab2ff983052cf6a606ca5985902 | code | 106 | const TEST_NAME = "BaseMT19937"
include("common.jl")
r = MersenneTwister(123)
test_all(r, 100_000_000)
| RandomNumbers | https://github.com/JuliaRandom/RandomNumbers.jl.git |
|
[
"MIT"
] | 1.6.0 | c6ec94d2aaba1ab2ff983052cf6a606ca5985902 | code | 1086 | try
using RNGTest
catch
@warn "No RNGTest package found. Only speed benchmarks can be run."
end
using RandomNumbers
import Printf: @printf
using Random
function bigcrush(rng::RandomNumbers.AbstractRNG{T}) where {T<:Number}
    p = RNGTest.wrap(rng, T)
RNGTest.bigcrushTestU01(p)
end
function speed_test(rng::RandomNumbers.AbstractRNG{T}, n=100_000_000) where {T<:Number}
A = Array{T}(undef, n)
rand!(rng, A)
elapsed = @elapsed rand!(rng, A)
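    # report throughput as nanoseconds per 64 bits of output:
    # elapsed * 1e9 / n is ns per sample of type T; 8 / sizeof(T) rescales that to 8 bytes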
elapsed * 1e9 / n * 8 / sizeof(T)
end
function bigcrush(rng::MersenneTwister)
    p = RNGTest.wrap(rng, UInt64)
RNGTest.bigcrushTestU01(p)
end
function speed_test(rng::MersenneTwister, n=100_000_000)
T = UInt64
A = Array{T}(undef, n)
rand!(rng, A)
elapsed = @elapsed rand!(rng, A)
elapsed * 1e9 / n * 8 / sizeof(T)
end
function test_all(rng::Random.AbstractRNG, n=100_000_000)
fo = open("$TEST_NAME.log", "w")
redirect_stdout(fo)
println(TEST_NAME)
speed = speed_test(rng, n)
@printf "Speed Test: %.3f ns/64 bits\n" speed
flush(fo)
bigcrush(rng)
close(fo)
end
| RandomNumbers | https://github.com/JuliaRandom/RandomNumbers.jl.git |
|
[
"MIT"
] | 1.6.0 | c6ec94d2aaba1ab2ff983052cf6a606ca5985902 | code | 132 | const TEST_NAME = "MT19937"
include("common.jl")
using RandomNumbers.MersenneTwisters
r = MT19937(123)
test_all(r, 100_000_000)
| RandomNumbers | https://github.com/JuliaRandom/RandomNumbers.jl.git |
|
[
"MIT"
] | 1.6.0 | c6ec94d2aaba1ab2ff983052cf6a606ca5985902 | code | 196 | const TEST_NAME = "PCG_RXS_M_XS_64_64"
include("common.jl")
using RandomNumbers.PCG
r = PCGStateSetseq(UInt64, PCG_RXS_M_XS, (0x018cd83e277674ac, 0x436cd6f2434be066))
test_all(r, 100_000_000)
| RandomNumbers | https://github.com/JuliaRandom/RandomNumbers.jl.git |
|
[
"MIT"
] | 1.6.0 | c6ec94d2aaba1ab2ff983052cf6a606ca5985902 | code | 229 | const TEST_NAME = "PCG_XSH_RS_128_64"
include("common.jl")
using RandomNumbers.PCG
r = PCGStateSetseq(UInt64, PCG_XSH_RS,
(0x4e17a5abd5d47402ce332459f69eacfd, 0x96856028d0dc791c176537f21a77ab67))
test_all(r, 100_000_000)
| RandomNumbers | https://github.com/JuliaRandom/RandomNumbers.jl.git |
|
[
"MIT"
] | 1.6.0 | c6ec94d2aaba1ab2ff983052cf6a606ca5985902 | code | 192 | const TEST_NAME = "PCG_XSH_RS_64_32"
include("common.jl")
using RandomNumbers.PCG
r = PCGStateSetseq(UInt64, PCG_XSH_RS, (0x018cd83e277674ac, 0x436cd6f2434be066))
test_all(r, 100_000_000)
| RandomNumbers | https://github.com/JuliaRandom/RandomNumbers.jl.git |
|
[
"MIT"
] | 1.6.0 | c6ec94d2aaba1ab2ff983052cf6a606ca5985902 | code | 137 | const TEST_NAME = "Philox2x64"
include("common.jl")
using RandomNumbers.Random123
r = Philox2x(UInt64, 123)
test_all(r, 100_000_000)
| RandomNumbers | https://github.com/JuliaRandom/RandomNumbers.jl.git |
|
[
"MIT"
] | 1.6.0 | c6ec94d2aaba1ab2ff983052cf6a606ca5985902 | code | 2673 | using RandomNumbers.MersenneTwisters
import Random123
using RandomNumbers.Random123
using RandomNumbers.PCG
using RandomNumbers.Xorshifts
import Random: rand
import Printf: @printf, @sprintf
include("common.jl")
function rank_timing_results(results)
sort!(results, by = last)
mtidx = findfirst(t -> endswith(t[1], "MersenneTwister"), results)
timemt = results[mtidx][2]
println("\n\nRanked:")
for i in 1:length(results)
rngstr, timetaken = results[i]
@printf "%4d. %s: %6.3f ns/64 bits" i rngstr timetaken
p = 100.0 * (timetaken-timemt)/timemt
c = (p < 0.0 ? :green : (p > 0.0 ? :red : :normal))
if p != 0.0
signstr = (p > 0.0 ? "+" : "")
printstyled(" ", signstr, @sprintf("%.2f%%", p); color=c)
end
println("")
end
end
macro rngtest(rngs...)
results = []
for rng in rngs
if rng.head == :call
r = @eval $rng
name = rng.args[1]
else
r = @eval $(rng.args[1])
name = rng.args[2]
end
timetaken = speed_test(r)
@printf "%20s: %6.3f ns/64 bits\n" name timetaken
push!(results, (@sprintf("%20s", name), timetaken))
end
rank_timing_results(results)
end
@rngtest(
Xorshift64(123),
Xorshift64Star(123),
Xorshift128Star(123),
Xorshift128Plus(123),
Xorshift1024Star(123),
Xorshift1024Plus(123),
Xoroshiro128Plus(123),
Xoroshiro128Star(123),
MersenneTwister(123),
MT19937(123),
(Threefry2x(UInt64, (123, 321)), Threefry2x64),
(Threefry4x(UInt64, (123, 321, 456, 654)), Threefry4x64),
(Threefry2x(UInt32, (123, 321)), Threefry2x32),
(Threefry4x(UInt32, (123, 321, 456, 654)), Threefry4x32),
(Philox2x(UInt64, 123), Philox2x64),
(Philox4x(UInt64, (123, 321)), Philox4x64),
(Philox2x(UInt32, 123), Philox2x32),
(Philox4x(UInt32, (123, 321)), Philox4x32),
(AESNI1x(123), AESNI1x128),
(ARS1x(123), ARS1x128),
(AESNI4x((123, 321, 456, 654)), AESNI4x32),
(ARS4x((123, 321, 456, 654)), ARS4x32),
(PCGStateOneseq(UInt64, PCG_XSH_RS, 123), PCG_XSH_RS_128),
(PCGStateOneseq(UInt64, PCG_XSH_RR, 123), PCG_XSH_RR_128),
(PCGStateOneseq(UInt64, PCG_RXS_M_XS, 123), PCG_RXS_M_XS_64),
(PCGStateOneseq(UInt64, PCG_XSL_RR, 123), PCG_XSL_RR_128),
(PCGStateOneseq(UInt64, PCG_XSL_RR_RR, 123), PCG_XSL_RR_RR_64),
(PCGStateOneseq(UInt32, PCG_XSH_RS, 123), PCG_XSH_RS_64),
(PCGStateOneseq(UInt32, PCG_XSH_RR, 123), PCG_XSH_RR_64),
(PCGStateOneseq(UInt32, PCG_RXS_M_XS, 123), PCG_RXS_M_XS_32),
(PCGStateOneseq(UInt32, PCG_XSL_RR, 123), PCG_XSL_RR_64),
(PCGStateOneseq(UInt128, PCG_RXS_M_XS, 123), PCG_RXS_M_XS_128),
(PCGStateOneseq(UInt128, PCG_XSL_RR_RR, 123), PCG_XSL_RR_RR_128),
)
| RandomNumbers | https://github.com/JuliaRandom/RandomNumbers.jl.git |
|
[
"MIT"
] | 1.6.0 | c6ec94d2aaba1ab2ff983052cf6a606ca5985902 | code | 148 | const TEST_NAME = "Threefry2x64"
include("common.jl")
using RandomNumbers.Random123
r = Threefry2x(UInt64, (123, 321))
test_all(r, 100_000_000)
| RandomNumbers | https://github.com/JuliaRandom/RandomNumbers.jl.git |
|
[
"MIT"
] | 1.6.0 | c6ec94d2aaba1ab2ff983052cf6a606ca5985902 | code | 143 | const TEST_NAME = "Xoroshiro128Plus"
include("common.jl")
using RandomNumbers.Xorshifts
r = Xoroshiro128Plus(123)
test_all(r, 100_000_000)
| RandomNumbers | https://github.com/JuliaRandom/RandomNumbers.jl.git |
|
[
"MIT"
] | 1.6.0 | c6ec94d2aaba1ab2ff983052cf6a606ca5985902 | code | 143 | const TEST_NAME = "Xorshift1024Star"
include("common.jl")
using RandomNumbers.Xorshifts
r = Xorshift1024Star(123)
test_all(r, 100_000_000)
| RandomNumbers | https://github.com/JuliaRandom/RandomNumbers.jl.git |
|
[
"MIT"
] | 1.6.0 | c6ec94d2aaba1ab2ff983052cf6a606ca5985902 | code | 191 | using Pkg
Pkg.instantiate()
Pkg.develop(PackageSpec(path=pwd()))
Pkg.develop(PackageSpec(url="https://github.com/JuliaRandom/Random123.jl"))
Pkg.build("Random123")
Pkg.build("RandomNumbers")
| RandomNumbers | https://github.com/JuliaRandom/RandomNumbers.jl.git |
|
[
"MIT"
] | 1.6.0 | c6ec94d2aaba1ab2ff983052cf6a606ca5985902 | code | 598 | using Documenter, RandomNumbers, DocumenterMarkdown
import Random123
DocMeta.setdocmeta!(RandomNumbers, :DocTestSetup, quote
using RandomNumbers
using RandomNumbers.PCG
using RandomNumbers.Xorshifts
using RandomNumbers.MersenneTwisters
using Random123
using Test
end; recursive=true)
makedocs(
modules = [RandomNumbers],
format = Markdown()
)
deploydocs(
repo = "github.com/JuliaRandom/RandomNumbers.jl.git",
deps = Deps.pip("pygments", "mkdocs", "python-markdown-math", "mkdocs-material"),
make = () -> run(`mkdocs build`),
target = "site"
)
| RandomNumbers | https://github.com/JuliaRandom/RandomNumbers.jl.git |
|
[
"MIT"
] | 1.6.0 | c6ec94d2aaba1ab2ff983052cf6a606ca5985902 | code | 730 | __precompile__(true)
"""
Main module for `RandomNumbers.jl` -- a random number generator package for the Julia language.
This module exports two types and four submodules:
- [`AbstractRNG`](@ref)
- [`WrappedRNG`](@ref)
- [`PCG`](@ref)
- [`MersenneTwisters`](@ref)
- [`Xorshifts`](@ref)
"""
module RandomNumbers
export AbstractRNG
export WrappedRNG
export output_type, seed_type
export PCG, MersenneTwisters, Xorshifts
include("common.jl")
include("utils.jl")
include("wrapped_rng.jl")
include(joinpath("PCG", "PCG.jl"))
include(joinpath("MersenneTwisters", "MersenneTwisters.jl"))
include(joinpath("Xorshifts", "Xorshifts.jl"))
export randfloat
include("randfloat.jl")
end
| RandomNumbers | https://github.com/JuliaRandom/RandomNumbers.jl.git |
|
[
"MIT"
] | 1.6.0 | c6ec94d2aaba1ab2ff983052cf6a606ca5985902 | code | 1275 | import Random
import Random: rand, UInt52, rng_native_52
"""
```julia
AbstractRNG{T} <: Random.AbstractRNG
```
The abstract type of random number generators. `T` indicates the original output type of an RNG.
"""
abstract type AbstractRNG{T<:Number} <: Random.AbstractRNG end
const BitTypes = Union{Bool, UInt8, UInt16, UInt32, UInt64, UInt128, Int8, Int16, Int32, Int64, Int128}
# For compatibility with functions in Random stdlib.
rng_native_52(::AbstractRNG) = UInt64
rand(rng::AbstractRNG, ::Random.SamplerType{T}) where {T<:BitTypes} = rand(rng, T)
# see https://github.com/JuliaRandom/RandomNumbers.jl/issues/8
# TODO: find a better approach.
@inline function rand(rng::AbstractRNG{T}, ::Type{Float64}=Float64) where {T<:Union{UInt64, UInt128}}
reinterpret(Float64, Base.exponent_one(Float64) | rand(rng, UInt52())) - 1.0
end
@inline function rand(rng::AbstractRNG{T}, ::Type{Float64}=Float64) where {T<:Union{UInt8, UInt16, UInt32}}
rand(rng, T) * exp2(-sizeof(T) << 3)
end
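# Fallback conversion between bit types: when T1 is at least as wide as T2 a single
# truncated draw suffices; otherwise concatenate s2 ÷ s1 draws, shifting each into place.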
@inline function rand(rng::AbstractRNG{T1}, ::Type{T2}) where {T1<:BitTypes, T2<:BitTypes}
s1 = sizeof(T1)
s2 = sizeof(T2)
t = rand(rng, T1) % T2
s1 > s2 && return t
for i in 2:(s2 ÷ s1)
t |= (rand(rng, T1) % T2) << ((s1 << 3) * (i - 1))
end
t
end
| RandomNumbers | https://github.com/JuliaRandom/RandomNumbers.jl.git |
|
[
"MIT"
] | 1.6.0 | c6ec94d2aaba1ab2ff983052cf6a606ca5985902 | code | 4167 | import Random: GLOBAL_RNG
"""Random number generator for Float32 in [0,1) that samples from
42*2^23 float32s in [0,1) compared to 2^23 for rand(Float32)."""
function randfloat(rng::Random.AbstractRNG,::Type{Float32})
# create exponent bits in 0000_0000 to 0111_1110
# at following chances
# e=01111110 at 50.0% for [0.5,1.0)
# e=01111101 at 25.0% for [0.25,0.5)
# e=01111100 at 12.5% for [0.125,0.25)
# ...
ui = rand(rng,UInt64)
# count leading zeros of random UInt64
# 0 leading zeros at 50% chance
# 1 leading zero at 25% chance
# 2 leading zeros at 12.5% chance etc.
# then convert leading zeros to exponent bits of Float32
lz = leading_zeros(ui)
e = ((126 - lz) % UInt32) << 23
# for 64 leading zeros the smallest float32 that can be created is 2.7105054f-20
    # use last 23 bits for significand, only when they are not part of the leading zeros
# to sample from all floats in 2.7105054f-20 to prevfloat(1f0)
ui = lz > 40 ? rand(rng,UInt64) : ui
    # combine exponent and significand
return reinterpret(Float32,e | ((ui % UInt32) & 0x007f_ffff))
end
"""Random number generator for Float64 in [0,1) that samples from
64*2^52 floats compared to 2^52 for rand(Float64)."""
function randfloat(rng::Random.AbstractRNG,::Type{Float64})
# create exponent bits in 000_0000_0000 to 011_1111_1110
# at following chances
# e=01111111110 at 50.0% for [0.5,1.0)
# e=01111111101 at 25.0% for [0.25,0.5)
# e=01111111100 at 12.5% for [0.125,0.25)
# ...
ui = rand(rng,UInt64)
# count leading zeros of random UInt64 in several steps
# 0 leading zeros at 50% chance
# 1 leading zero at 25% chance
# 2 leading zeros at 12.5% chance etc.
# then convert leading zeros to exponent bits of Float64
lz = leading_zeros(ui)
e = ((1022 - lz) % UInt64) << 52
# for 64 leading zeros the smallest float64 that
# can be created is 2.710505431213761e-20
# draw another UInt64 for significant bits in case the leading
# zeros reach into the bits that would be used for the significand
    # (in which case the first significant bits would always be zero)
ui = lz > 11 ? rand(rng,UInt64) : ui
# combine exponent and significand (sign always 0)
return reinterpret(Float64,e | (ui & 0x000f_ffff_ffff_ffff))
end
"""Random number generator for Float16 in [0,1) that samples from
all 15360 float16s in that range."""
function randfloat(rng::Random.AbstractRNG,::Type{Float16})
# create exponent bits in 00000 to 01110
# at following chances
# e=01110 at 50.0% for [0.5,1.0)
# e=01101 at 25.0% for [0.25,0.5)
# e=01100 at 12.5% for [0.125,0.25)
# ...
ui = rand(rng,UInt32) | 0x0002_0000
# set 15th bit to 1 to have at most 14 leading zeros.
# count leading zeros of random UInt64 in several steps
# 0 leading zeros at 50% chance
# 1 leading zero at 25% chance
# 2 leading zeros at 12.5% chance etc.
# then convert leading zeros to exponent bits of Float16
lz = leading_zeros(ui)
e = ((14 - lz) % UInt32) << 10
# combine exponent and significand (sign always 0)
return reinterpret(Float16,(e | (ui & 0x0000_03ff)) % UInt16)
end
# use stdlib default RNG as a default here too
randfloat(::Type{T}=Float64) where {T<:Base.IEEEFloat} = randfloat(GLOBAL_RNG,T)
randfloat(rng::Random.AbstractRNG) = randfloat(rng,Float64)
# randfloat for arrays - in-place
function randfloat!(rng::Random.AbstractRNG, A::AbstractArray{T}) where T
for i in eachindex(A)
@inbounds A[i] = randfloat(rng, T)
end
A
end
# randfloat for arrays with memory allocation
randfloat(rng::Random.AbstractRNG, ::Type{T}, dims::Integer...) where {T<:Base.IEEEFloat} = randfloat!(rng, Array{T}(undef,dims))
randfloat(rng::Random.AbstractRNG, dims::Integer...) = randfloat!(rng, Array{Float64}(undef,dims))
randfloat(::Type{T}, dims::Integer...) where {T<:Base.IEEEFloat} = randfloat!(GLOBAL_RNG, Array{T}(undef,dims))
randfloat( dims::Integer...) = randfloat!(GLOBAL_RNG, Array{Float64}(undef,dims)) | RandomNumbers | https://github.com/JuliaRandom/RandomNumbers.jl.git |
|
[
"MIT"
] | 1.6.0 | c6ec94d2aaba1ab2ff983052cf6a606ca5985902 | code | 2089 | # import Random
import Random: RandomDevice
import RandomNumbers: AbstractRNG
"""
```julia
gen_seed(T[, n])
```
Generate a tuple of `n` truly random numbers in type `T`. If `n` is missing, return only one number.
The "truly" random numbers are provided by the random device of system. See
[`Random.RandomDevice`](https://github.com/JuliaLang/julia/blob/master/base/random.jl#L29).
# Examples
```julia
julia> RandomNumbers.gen_seed(UInt64, 2) # The output differs between runs, since it comes from the system's random device.
(0x26aa3fe5e306f725,0x7b9dc3c227d8acc9)
julia> RandomNumbers.gen_seed(UInt32)
0x9ba60fdc
```
"""
gen_seed(::Type{T}) where {T<:Number} = rand(RandomDevice(), T)
gen_seed(::Type{T}, n) where {T<:Number} = Tuple(rand(RandomDevice(), T, n))
"Get the original output type of a RNG."
@inline output_type(::AbstractRNG{T}) where {T} = T
# For 1.0
# Random.gentype(::Type{T}) where {T<:AbstractRNG} = output_type(T)
"Get the default seed type of a RNG."
@inline seed_type(r::AbstractRNG) = seed_type(typeof(r))
@inline split_uint(x::UInt128, ::Type{UInt64}=UInt64) = (x % UInt64, (x >> 64) % UInt64)
@inline split_uint(x::UInt128, ::Type{UInt32}) = (x % UInt32, (x >> 32) % UInt32,
(x >> 64) % UInt32, (x >> 96) % UInt32)
@inline split_uint(x::UInt64, ::Type{UInt32}=UInt32) = (x % UInt32, (x >> 32) % UInt32)
@inline union_uint(x::NTuple{2, UInt32}) = unsafe_load(Ptr{UInt64}(pointer(collect(x))), 1)
@inline union_uint(x::NTuple{2, UInt64}) = unsafe_load(Ptr{UInt128}(pointer(collect(x))), 1)
@inline union_uint(x::NTuple{4, UInt32}) = unsafe_load(Ptr{UInt128}(pointer(collect(x))), 1)
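# Round-trip sketch (illustrative, not part of the original source): on
# little-endian hardware, splitting a UInt64 into two UInt32 halves (low half
# first) and re-joining them recovers the original value.
@assert union_uint(split_uint(0x1122334455667788)) == 0x1122334455667788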
@inline function unsafe_copyto!(r1::R, r2::R, ::Type{T}, len) where {R, T}
arr1 = Ptr{T}(pointer_from_objref(r1))
arr2 = Ptr{T}(pointer_from_objref(r2))
for i = 1:len
unsafe_store!(arr1, unsafe_load(arr2, i), i)
end
r1
end
@inline function unsafe_compare(r1::R, r2::R, ::Type{T}, len) where {R, T}
arr1 = Ptr{T}(pointer_from_objref(r1))
arr2 = Ptr{T}(pointer_from_objref(r2))
all(unsafe_load(arr1, i) == unsafe_load(arr2, i) for i in 1:len)
end
| RandomNumbers | https://github.com/JuliaRandom/RandomNumbers.jl.git |
|
[
"MIT"
] | 1.6.0 | c6ec94d2aaba1ab2ff983052cf6a606ca5985902 | code | 2728 | import Base: copy, copyto!, ==
import RandomNumbers: AbstractRNG, seed_type
"""
```julia
WrappedRNG{R, T1, T2} <: AbstractRNG{T2}
WrappedRNG(base_rng, T2)
WrappedRNG(R, T2, args...)
```
Wrap a RNG which originally provides output in T1 into a RNG that provides output in T2.
# Examples
```jldoctest
julia> r = Xorshifts.Xorshift128Star(123);
julia> RandomNumbers.output_type(r)
UInt64
julia> r1 = WrappedRNG(r, UInt32);
julia> RandomNumbers.output_type(r1)
UInt32
julia> r2 = WrappedRNG(Xorshifts.Xorshift128Star, UInt32, 123);
julia> RandomNumbers.output_type(r2)
UInt32
julia> @Test.test rand(r1, UInt32, 3) == rand(r2, UInt32, 3)
Test Passed
```
"""
mutable struct WrappedRNG{R<:AbstractRNG, T1<:BitTypes, T2<:BitTypes} <: AbstractRNG{T2}
base_rng::R
x::T1
p::Int
WrappedRNG{R, T1, T2}() where {R <: AbstractRNG, T1 <: BitTypes, T2 <: BitTypes} =
(@assert T1 ≠ T2; new())
end
function WrappedRNG(base_rng::AbstractRNG{T1}, ::Type{T2}) where {T1 <: BitTypes, T2 <: BitTypes}
wr = WrappedRNG{typeof(base_rng), T1, T2}()
wr.base_rng = copy(base_rng)
if sizeof(T1) > sizeof(T2)
wr.x = rand(wr.base_rng, T1)
end
wr.p = 0
wr
end
function WrappedRNG(::Type{R}, ::Type{T2}, args...) where {R <: AbstractRNG, T2 <: BitTypes}
base_rng = R(args...)
WrappedRNG(base_rng, T2)
end
WrappedRNG(base_rng::WrappedRNG{R, T1, T2}, ::Type{T3}) where
{R <: AbstractRNG, T1 <: BitTypes, T2 <: BitTypes, T3 <: BitTypes} = WrappedRNG(base_rng.base_rng, T3)
seed_type(::Type{WrappedRNG{R, T1, T2}}) where {R, T1, T2} = seed_type(R)
function copyto!(dest::R, src::R) where R <: WrappedRNG
copyto!(dest.base_rng, src.base_rng)
dest.x = src.x
dest.p = src.p
dest
end
function copy(src::R) where R <: WrappedRNG
wr = R()
wr.base_rng = copy(src.base_rng)
wr.x = src.x
wr.p = src.p
wr
end
==(r1::R, r2::R) where R <: WrappedRNG = r1.base_rng == r2.base_rng && r1.x == r2.x && r1.p == r2.p
function seed!(wr::WrappedRNG{R, T1, T2}, seed...) where {R <: AbstractRNG, T1 <: BitTypes, T2 <: BitTypes}
seed!(wr.base_rng, seed...)
if sizeof(T1) > sizeof(T2)
wr.x = rand(wr.base_rng, T1)
end
wr.p = 0
wr
end
@inline function rand(rng::WrappedRNG{R, T1, T2}, ::Type{T2}) where
{R <: AbstractRNG, T1 <: BitTypes, T2 <: BitTypes}
s1 = sizeof(T1)
s2 = sizeof(T2)
if s2 >= s1
t = rand(rng.base_rng, T1) % T2
for i in 2:(s2 ÷ s1)
t |= (rand(rng.base_rng, T1) % T2) << ((s1 << 3) * (i - 1))
end
else
t = rng.x % T2
rng.p += 1
if rng.p == s1 ÷ s2
rng.p = 0
rng.x = rand(rng.base_rng, T1)
else
rng.x >>= s2 << 3
end
end
return t
end
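# Behaviour sketch (illustrative, not part of the original source): when the
# requested type is wider than the base type (s2 > s1), chunks are OR-ed
# together low-first, e.g. two UInt32 words a and b yield (UInt64(b) << 32) | a;
# when it is narrower, one base draw is served out in s1 ÷ s2 chunks, tracked
# by the fields `x` and `p`. The arithmetic check below is independent of any RNG:
@assert (0x22222222 % UInt64) << 32 | (0x11111111 % UInt64) == 0x2222222211111111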
| RandomNumbers | https://github.com/JuliaRandom/RandomNumbers.jl.git |
|
[
"MIT"
] | 1.6.0 | c6ec94d2aaba1ab2ff983052cf6a606ca5985902 | code | 216 | __precompile__(true)
"""
The module for [Mersenne Twisters](@ref).
Currently it provides only one RNG type:
- [`MT19937`](@ref)
"""
module MersenneTwisters
export MT19937
include("bases.jl")
include("main.jl")
end
| RandomNumbers | https://github.com/JuliaRandom/RandomNumbers.jl.git |
|
[
"MIT"
] | 1.6.0 | c6ec94d2aaba1ab2ff983052cf6a606ca5985902 | code | 2208 | import RandomNumbers: AbstractRNG, gen_seed
"""
```julia
MersenneTwister{T} <: AbstractRNG{T}
```
The base type of Mersenne Twisters.
"""
abstract type MersenneTwister{T<:Number} <: AbstractRNG{T} end
const N = 624
const M = 397
const UPPER_MASK = 0x80000000
const LOWER_MASK = 0x7fffffff
"""
```julia
MT19937 <: MersenneTwister{UInt32}
MT19937([seed])
```
MT19937 RNG. The `seed` is a `Tuple` of $N `UInt32` numbers, or an `Integer` that will be automatically
converted to a `UInt32` number.
"""
mutable struct MT19937 <: MersenneTwister{UInt32}
mt::Vector{UInt32}
mti::Int
function MT19937(x::Vector{UInt32}, i::Int)
@assert length(x) == N
new(x, i)
end
end
MT19937(seed::Integer) = seed!(MT19937(Vector{UInt32}(undef, N), 1), seed % UInt32)
MT19937(seed::NTuple{N, UInt32}=gen_seed(UInt32, N)) = seed!(MT19937(Vector{UInt32}(undef, N), 1), seed)
"Set up a `MT19937` RNG object using a `Tuple` of $N `UInt32` numbers."
@inline function mt_set!(r::MT19937, s::NTuple{N, UInt32})
@inbounds for i in 1:N
r.mt[i] = s[i]
end
r.mti = N + 1
r
end
"Set up a `MT19937` RNG object using an `UInt32` number."
@inline function mt_set!(r::MT19937, s::UInt32)
r.mt[1] = s
@inbounds for i in 2:N
r.mt[i] = 0x6c078965 * (r.mt[i-1] ⊻ (r.mt[i-1] >> 30)) + (i - 1) % UInt32
end
r.mti = N + 1
r
end
@inline mt_magic(y) = ((y % Int32) << 31 >> 31) & 0x9908b0df
"Get a random `UInt32` number from a `MT19937` object."
@inline function mt_get(r::MT19937)
mt = r.mt
if r.mti > N
@inbounds for i in 1:N-M
y = (mt[i] & UPPER_MASK) | (mt[i+1] & LOWER_MASK)
mt[i] = mt[i + M] ⊻ (y >> 1) ⊻ mt_magic(y)
end
@inbounds for i in N-M+1:N-1
y = (mt[i] & UPPER_MASK) | (mt[i+1] & LOWER_MASK)
mt[i] = mt[i + M - N] ⊻ (y >> 1) ⊻ mt_magic(y)
end
@inbounds begin
y = (mt[N] & UPPER_MASK) | (mt[1] & LOWER_MASK)
mt[N] = mt[M] ⊻ (y >> 1) ⊻ mt_magic(y)
end
r.mti = 1
end
k = mt[r.mti]
k ⊻= (k >> 11)
k ⊻= (k << 7) & 0x9d2c5680
k ⊻= (k << 15) & 0xefc60000
k ⊻= (k >> 18)
r.mti += 1
k
end
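# Usage sketch (not part of the original source): seed the state array from a
# single UInt32 with `mt_set!` and pull raw tempered words with `mt_get`; the
# helper name is made up for illustration.
function _mt19937_raw_demo(seed::UInt32=0x00000001, n::Int=4)
    r = mt_set!(MT19937(Vector{UInt32}(undef, N), 1), seed)
    return [mt_get(r) for _ in 1:n]   # n tempered 32-bit outputs
end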
| RandomNumbers | https://github.com/JuliaRandom/RandomNumbers.jl.git |
|
[
"MIT"
] | 1.6.0 | c6ec94d2aaba1ab2ff983052cf6a606ca5985902 | code | 578 | import Base: copy, copyto!, ==
import Random: rand, seed!
import RandomNumbers: gen_seed, seed_type
@inline rand(r::MT19937, ::Type{UInt32}) = mt_get(r)
seed!(r::MT19937, seed::Integer) = mt_set!(r, seed % UInt32)
seed!(r::MT19937, seed::NTuple{N, UInt32}=gen_seed(UInt32, N)) = mt_set!(r, seed)
seed_type(::Type{MT19937}) = NTuple{N, UInt32}
function copyto!(dest::MT19937, src::MT19937)
copyto!(dest.mt, src.mt)
dest.mti = src.mti
dest
end
copy(src::MT19937) = MT19937(copy(src.mt), src.mti)
==(r1::MT19937, r2::MT19937) = r1.mt == r2.mt && r1.mti == r2.mti
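# Usage sketch (not part of the original source): once this file is loaded, an
# MT19937 can be seeded, copied and compared; a copy reproduces the same stream.
# The helper name is made up for illustration.
function _mt19937_copy_demo(seed::Integer=1234)
    r = MT19937(seed)
    r2 = copy(r)
    return rand(r, UInt32) == rand(r2, UInt32)   # true: identical state, identical output
end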
| RandomNumbers | https://github.com/JuliaRandom/RandomNumbers.jl.git |
|
[
"MIT"
] | 1.6.0 | c6ec94d2aaba1ab2ff983052cf6a606ca5985902 | code | 1622 | __precompile__(true)
"The module for [PCG Family](@ref)."
module PCG
# PCG
export PCGStateMCG, PCGStateOneseq, PCGStateUnique, PCGStateSetseq
# PCG Methods
export PCG_XSH_RS, PCG_XSH_RR, PCG_RXS_M_XS, PCG_XSL_RR, PCG_XSL_RR_RR
export PCGUInt, PCGMethod, PCG_LIST
export bounded_rand, advance!
const pcg_uints = (UInt8, UInt16, UInt32, UInt64, UInt128)
const PCGUInt = Union{pcg_uints...}
"""
One of the PCG output methods: high xorshift, followed by a random shift.
Fast.
"""
const PCG_XSH_RS = Val{:XSH_RS}
"""
One of the PCG output methods: high xorshift, followed by a random rotate.
Fast. Slightly better statistically than `PCG_XSH_RS`.
"""
const PCG_XSH_RR = Val{:XSH_RR}
"""
One of the PCG output methods: random xorshift, MCG multiply, fixed xorshift.
The most statistically powerful generator, but slower than some of the others.
"""
const PCG_RXS_M_XS = Val{:RXS_M_XS}
"""
One of the PCG output methods: fixed xorshift (to low bits), random rotate.
Useful for 128-bit types that are split across two CPU registers.
"""
const PCG_XSL_RR = Val{:XSL_RR}
"""
One of the PCG output methods: fixed xorshift (to low bits), random rotate (both parts).
Useful for 128-bit types that are split across two CPU registers. Use this if you need an invertible 128-bit
RNG.
"""
const PCG_XSL_RR_RR = Val{:XSL_RR_RR}
const pcg_methods = (PCG_XSH_RS, PCG_XSH_RR, PCG_RXS_M_XS, PCG_XSL_RR, PCG_XSL_RR_RR)
"""
The `Union` of all the PCG method types: `PCG_XSH_RS`, `PCG_XSH_RR`, `PCG_RXS_M_XS`, `PCG_XSL_RR`, and `PCG_XSL_RR_RR`.
"""
const PCGMethod = Union{pcg_methods...}
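# Worked sketch (illustrative, not part of this module): the XSH_RR output step
# on a 64-bit state xorshifts the high bits down to 32 bits and then rotates
# right by the top five bits, mirroring the reference PCG output function.
# The helper name and the manual rotate below are made up for this example.
function _pcg_xsh_rr_sketch(state::UInt64)
    xorshifted = (((state >> 18) ⊻ state) >> 27) % UInt32
    rot = (state >> 59) % UInt32
    return (xorshifted >> rot) | (xorshifted << ((-rot) & 31))
end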
include("pcg_list.jl")
include("bases.jl")
include("main.jl")
end
| RandomNumbers | https://github.com/JuliaRandom/RandomNumbers.jl.git |