licenses | version | tree_hash | path | type | size | text | package_name | repo
---|---|---|---|---|---|---|---|---|
[
"MIT"
] | 1.1.3 | fb6df4003cb65831e8b4603fda10eb136fda2631 | code | 224 | using TriangleMesh
using Test, DelimitedFiles
include("Test_Polygon.jl")
include("Test_doc_examples.jl")
include("Test_create_mesh.jl")
include("Test_create_mesh_o2.jl")
include("Test_refine.jl")
include("Test_write.jl")
| TriangleMesh | https://github.com/konsim83/TriangleMesh.jl.git |
|
[
"MIT"
] | 1.1.3 | fb6df4003cb65831e8b4603fda10eb136fda2631 | docs | 2237 | # TriangleMesh.jl
*TriangleMesh* provides a convenient mesh generation and refinement tool for Delaunay and constrained Delaunay meshes in Julia. Please see the documentation.
| **Documentation** | **Build Status** | **Code Coverage**| **Windows Build** |
|:-----------------:|:----------------:|:----------------:|:-----------------:|
| [![][docs-latest-img]][docs-latest-url] | [![][travis-img]][travis-url] | [![][codecov-img]][codecov-url] | [![Build status][appveyor-img]][appveyor-url] |
### Installation
*TriangleMesh* is now officially registered. To install, run
```julia
] add TriangleMesh
```
After the build has finished successfully, type
```julia
using TriangleMesh
```
to use the package. If you are having trouble, please open an [issue](https://github.com/konsim83/TriangleMesh.jl/issues).
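A minimal usage sketch (using the convenience interface with its default keyword arguments; see the documentation for details):

```julia
using TriangleMesh

# mesh an L-shaped domain using one of the built-in standard polygons
poly = polygon_Lshape()
mesh = create_mesh(poly, info_str = "L-shape mesh")
```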
### TriangleMesh on Windows
To build TriangleMesh you need Visual Studio Community Edition 2017, which you can get [here](https://www.techspot.com/downloads/6278-visual-studio.html). It is free and easy to install.
You can also use a newer version of Visual Studio (in any case, only the build tools need to be installed), but then you will have to modify the environment variable in the 'compile.bat' script in the 'deps/src/' directory:
> set VS150COMNTOOLS=C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\Common7\Tools\
for example becomes
> set VS150COMNTOOLS=C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\Common7\Tools\
[docs-latest-img]: https://img.shields.io/badge/docs-latest-blue.svg
[docs-latest-url]: https://konsim83.github.io/TriangleMesh.jl/latest
[docs-stable-img]: https://img.shields.io/badge/docs-stable-blue.svg
[docs-stable-url]: https://konsim83.github.io/TriangleMesh.jl/stable
[travis-img]: https://travis-ci.org/konsim83/TriangleMesh.jl.svg?branch=master
[travis-url]: https://travis-ci.org/konsim83/TriangleMesh.jl
[codecov-img]: https://codecov.io/gh/konsim83/TriangleMesh.jl/branch/master/graph/badge.svg
[codecov-url]: https://codecov.io/gh/konsim83/TriangleMesh.jl
[appveyor-url]: https://ci.appveyor.com/project/konsim83/trianglemesh-jl
[appveyor-img]: https://ci.appveyor.com/api/projects/status/79ww082lilsp21re?svg=true
| TriangleMesh | https://github.com/konsim83/TriangleMesh.jl.git |
|
[
"MIT"
] | 1.1.3 | fb6df4003cb65831e8b4603fda10eb136fda2631 | docs | 2253 | # TriangleMesh.jl
*Generate and refine 2D unstructured triangular meshes with Julia.*
!!! note
*TriangleMesh* provides a Julia interface to
[Triangle](https://www.cs.cmu.edu/~quake/triangle.html) written by J.R.
Shewchuk. *TriangleMesh* does not come with any warranties. Also note that *TriangleMesh* is distributed under the terms of the MIT license, whereas Triangle is not. If you want to use Triangle for commercial purposes, please contact the
[author](http://www.cs.cmu.edu/~jrs/).
*TriangleMesh* is written to provide a convenient **mesh generation** and (local) **refinement** tool for **Delaunay** and **constrained Delaunay meshes** for people working in numerical mathematics and related areas. So far we are not aware of another convenient tool in Julia that does this using polygons as input. For mesh generation from distance fields see the [Meshing.jl](https://github.com/JuliaGeometry/Meshing.jl) package.
This tool covers large parts of the full functionality of Triangle but not all
of it. If you have the impression that there is important functionality missing
or if you found bugs, have any suggestions or criticism please open an issue on [GitHub](https://github.com/konsim83/TriangleMesh.jl) or [contact me](https://www.clisap.de/de/forschung/a:-dynamik-und-variabilitaet-des-klimasystems/crg-numerische-methoden-in-den-geowissenschaften/gruppenmitglieder/konrad-simon/).
The convenience methods can be used without knowledge of Triangle, but the interface also provides more advanced methods that allow passing Triangle's command line switches directly. For their use, the user is referred to [Triangle's documentation](https://www.cs.cmu.edu/~quake/triangle.html).
## Features
- Generate 2D unstructured triangular meshes from polygons and point sets
- Refine 2D unstructured triangular meshes
- Convenient and intuitive interface (no need to know the command line switches of Triangle)
- Possibility to use Triangle's command line switches directly (for advanced use)
- Generate Voronoi diagrams
- Write meshes to disk
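A minimal sketch of both interfaces listed above (the switch string is only an illustration; see Triangle's documentation for the full list of switches):

```julia
using TriangleMesh

poly = polygon_unitSquare()             # a standard polygon from a convenience method
mesh = create_mesh(poly)                # convenience method, no switches needed
mesh2 = create_mesh(poly, "penva0.01")  # advanced use: pass Triangle's switches directly
```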
```@meta
CurrentModule = TriangleMesh
```
## Contents
```@contents
Pages = [
"index.md",
"man/examples.md",
"man/mtm.md",
"man/mtm_idx.md"
]
```
| TriangleMesh | https://github.com/konsim83/TriangleMesh.jl.git |
|
[
"MIT"
] | 1.1.3 | fb6df4003cb65831e8b4603fda10eb136fda2631 | docs | 8782 | # Workflow
This section is intended to give you an idea of how to use `TriangleMesh`. The workflow is very simple, and we demonstrate it here with a few examples.
## Create a Polygon
*Create a polygon to be meshed manually.*
First we need to create a polygon - a planar straight-line graph (PSLG) - that describes a bounded area in the plane. A PSLG consists of nodes and (optional) segments. Each node can (but does not need to) have a marker indicating that it belongs to a certain set and a number of real attributes. Each segment can have a marker as well. If the set of segments (and the set of segment markers) is empty the polygon is simply a set of unconnected points.
We will create a polygon that describes a rhombus with a squared hole in the middle from the node set
```julia
# size is number_points x 2
node = [1.0 0.0 ; 0.0 1.0 ; -1.0 0.0 ; 0.0 -1.0 ;
0.25 0.25 ; -0.25 0.25 ; -0.25 -0.25 ; 0.25 -0.25]
```
and the segments
```julia
# size is number_segments x 2
seg = [1 2 ; 2 3 ; 3 4 ; 4 1 ; 5 6 ; 6 7 ; 7 8 ; 8 5]
```
We now have two boundaries - an inner and an outer. We will also give each point and each segment a marker according to the boundary
```julia
# outer boundary points get marker 1, inner boundary points get marker 2
node_marker = [ones(Int,4,1) ; 2*ones(Int,4,1)]
# outer boundary segments get marker 1, inner boundary segments get marker 2
seg_marker = [ones(Int,4) ; 2*ones(Int,4)]
```
as well as 2 random attributes for each point
```julia
# size is number_points x number_attr
node_attr = rand(8,2)
```
We now have to specify that the segments 5``\rightarrow``6, 6``\rightarrow``7, 7``\rightarrow``8 and 8``\rightarrow``5 enclose a hole. This is done by providing a point in the interior of the hole:
```julia
# size is number_holes x 2 (the point must lie inside the hole)
hole = [0.0 0.0]
```
!!! tip
- Not every point provided for a PSLG needs to be part of a segment.
- Segments will be present in the triangular mesh (although they might be subdivided).
- Do not place hole points on a segment. Each hole point must lie in the interior of a region enclosed by segments.
The first step is to set up a [`Polygon_pslg`](@ref) struct that holds the polygon data:
```julia
poly = Polygon_pslg(8, 1, 2, 8, 1)
```
Now we have to pass nodes, segments and markers etc. manually:
```julia
set_polygon_point!(poly, node)
set_polygon_point_marker!(poly, node_marker)
set_polygon_point_attribute!(poly, node_attr)
set_polygon_segment!(poly, seg)
set_polygon_segment_marker!(poly, seg_marker)
set_polygon_hole!(poly, hole)
```
Now the polygon is created!
!!! note
Segment markers are set to one by default.
## Standard Polygons
*Create a standard polygon from convenience methods.*
`TriangleMesh` provides some frequently used standard polygons through convenience methods. Their code can of course be copied out and customized according to your needs (different markers, attributes etc.).
- [`polygon_unitSimplex`](@ref) creates a polygon describing the unit simplex
- [`polygon_unitSquare`](@ref) creates a polygon describing the unit square ``[0,1]\times[0,1]``
- [`polygon_unitSquareWithHole`](@ref) creates a polygon describing the unit square with a hole, ``[0,1]\times[0,1]\setminus [1/4,3/4]\times[1/4,3/4]``
- [`polygon_regular`](@ref) creates a regular polygon whose corner points lie on the unit circle
- [`polygon_Lshape`](@ref) creates a polygon describing an L-shaped domain
- [`polygon_struct_from_points`](@ref) creates a polygon struct without segments (needed only internally, usually not necessary to use)
## Meshing a Polygon
*Convenience method for polygons.*
Here we show how to create a mesh from a PSLG using a convenience method. To demonstrate this we create a mesh of an L-shaped domain. It is actually a fairly simple procedure.
First we create an L-shaped domain using one of the above methods to construct standard polygons:
```julia
poly = polygon_Lshape()
```
Then we create the mesh by calling `create_mesh` with `poly` as its first argument. The remaining arguments are optional but usually necessary to adjust the behavior of the meshing algorithm to your needs. An example could be:
```julia
mesh = create_mesh(poly, info_str="my mesh", voronoi=true, delaunay=true, set_area_max=true)
```
The argument `set_area_max=true` will make Julia ask the user for a maximum area of triangles in the mesh. Provide a reasonable input and the mesh will be created. For details of what the optional arguments are see [`create_mesh`](@ref).
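If you want to avoid the interactive prompt, the maximum triangle area can also be fixed directly by passing Triangle's `a` switch (a sketch; the area value 0.01 is arbitrary):

```julia
# non-interactive variant: `p` reads the polygon, `a0.01` caps the triangle area at 0.01
mesh = create_mesh(poly, "pa0.01")
```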
## Meshing a Point Cloud
*Convenience method for point clouds.*
Meshing a point cloud with `TriangleMesh` is easy. As an example we will create a mesh of the convex hull of a number of random points and tell the algorithm that it is not allowed to add more points (which could be done, though, to improve the mesh quality).
Let's create a point cloud:
```julia
p = rand(10,2)
```
And now the mesh:
```julia
mesh = create_mesh(p, info_str="my mesh", prevent_steiner_points=true)
```
For details of what the optional arguments are see [`create_mesh`](@ref).
## Using Triangle's switches
*Direct passing of command line switches.*
If you are familiar with the Triangle library then `TriangleMesh` leaves you the option to pass Triangle's command line switches to the meshing algorithm directly.
As an example we create a mesh of the unit square with a square hole in its middle
```julia
poly = polygon_unitSquareWithHole()
```
Then define the switches:
```julia
switches = "penvVa0.01D"
```
The `p` switch tells Triangle to read a polygon, `e` outputs the edges, `n` the cell neighbors, `v` a Voronoi diagram and so on. For details see the [documentation of Triangle](https://www.cs.cmu.edu/~quake/triangle.html). Now create the mesh with the command
```julia
mesh = create_mesh(poly, switches)
```
In the same way one can create a mesh of a point cloud.
## Refining a Mesh
*Refine an existing mesh.*
Mesh refinement can, for example, be necessary to improve the quality of a finite element solution. `TriangleMesh` offers methods to refine a mesh.
Suppose an a-posteriori error estimator suggests refining the triangles with indices 1, 4 and 9 of our mesh. Suppose also that we would like to keep the edges of the original mesh (but we allow subdivision). No triangle in the new mesh that is a subtriangle of one of the refined triangles should have an area larger than 1/10 of its "parent" triangle.
We use the convenience method for doing this refinement:
```julia
mesh_refined = refine(mesh, ind_cell=[1;4;9], divide_cell_into=10, keep_edges=true)
```
We could also pass Triangle's command line switches. Suppose we would like to refine the entire mesh and only keep segments (not edges). No triangle should have a larger area than 0.0001. This can be done, for example by:
```julia
switches = "rpenva0.0001q"
mesh_refined = refine(mesh, switches)
```
The `r` switch stands for refinement. For proper use of the switches we again refer to [Triangle](https://www.cs.cmu.edu/~quake/triangle.html).
`TriangleMesh` also offers a simple method to divide each triangle in a list into 4 triangles. For each triangle to be refined this creates 4 similar triangles and hence preserves the Delaunay property of the mesh.
```julia
mesh_refined = refine_rg(mesh, ind_cell=[1;4;9])
```
Omitting the second argument will simply refine the entire mesh.
!!! note
The [`refine_rg`](@ref) method is very slow for large meshes (>100000 triangles) and should be avoided. For smaller meshes it can still be used to create a simple hierarchy of meshes, which can be advantageous if one wants to compare numerical solutions on successively refined meshes.
## Visualization
There are of course many ways to visualize a triangular mesh. A very simple way is to run this script:
```julia
using TriangleMesh, PyPlot
function plot_TriMesh(m :: TriMesh;
linewidth :: Real = 1,
marker :: String = "None",
markersize :: Real = 10,
linestyle :: String = "-",
color :: String = "red")
    fig = figure("2D Mesh Plot", figsize = (10,10))
    ax = axes()
    ax.set_aspect("equal")
    # shift the connectivity list by -1 for Python's 0-based indexing
    tri = ax.triplot(m.point[1,:], m.point[2,:], m.cell' .- 1)
    setp(tri, linestyle = linestyle,
              linewidth = linewidth,
              marker = marker,
              markersize = markersize,
              color = color)
    fig.canvas.draw()
return fig
end
```
Note that you need to have the `PyPlot` package (well... actually only the `PyCall` package and Python's `matplotlib`) installed. Now you can call the function `plot_TriMesh` and you should see the mesh in a figure window. This file can also be found in the examples folder on GitHub.
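For example, the function above can be used like this (a sketch; any mesh created as in the earlier sections works):

```julia
# create a small mesh non-interactively and plot it
poly = polygon_Lshape()
mesh = create_mesh(poly, "pa0.01")
fig  = plot_TriMesh(mesh, color = "blue")
```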
| TriangleMesh | https://github.com/konsim83/TriangleMesh.jl.git |
|
[
"MIT"
] | 1.1.3 | fb6df4003cb65831e8b4603fda10eb136fda2631 | docs | 68 | # Modules, Types, Methods
```@autodocs
Modules = [TriangleMesh]
``` | TriangleMesh | https://github.com/konsim83/TriangleMesh.jl.git |
|
[
"MIT"
] | 0.1.1 | 344bb46e35400f5b6155cfb8624c13c2e64c8445 | code | 18936 | ### A Pluto.jl notebook ###
# v0.19.46
using Markdown
using InteractiveUtils
# ╔═╡ 120250f8-5689-443d-bf99-dca7e1b41a82
begin
import Pkg
# Pkg.develop(path = "/Users/gagebonner/Desktop/Repositories/Kneedle.jl/")
Pkg.add(url="https://github.com/70Gage70/Kneedle.jl", rev="master")
Pkg.add([
"CairoMakie",
"PlutoUI"])
using PlutoUI
import Random
end
# ╔═╡ a70f7664-47b1-43c2-93aa-4a8e3886e38f
using Kneedle
# ╔═╡ efaedc36-961b-4476-9a72-cca18ac7f385
using CairoMakie
# ╔═╡ 1d5ffaa0-224b-4587-b7ae-859a26512cc3
md"""
# Kneedle.jl
"""
# ╔═╡ 9b5823a0-2f97-464a-929a-0596dc6a99a2
md"""
!!! info ""
This is the documentation for the package [Kneedle.jl](https://github.com/70Gage70/Kneedle.jl). The documentation is powered by a [Pluto.jl](https://plutojl.org/) notebook.
Note that executed code shows the result *above* the code!
Kneedle.jl is a [Julia](https://julialang.org/) implementation of the Kneedle[^1] knee-finding algorithm. This detects "corners" (or "knees", "elbows", ...) in a dataset `(x, y)`.
**Features**
- Exports one main function `kneedle` with the ability to select the shape and number of knees to search for.
- Built-in data smoothing from [Loess.jl](https://github.com/JuliaStats/Loess.jl).
- [Makie](https://docs.makie.org/stable/) extension for quick visualization.
"""
# ╔═╡ 966454cc-11a6-486f-b124-d3f9d3ee0591
TableOfContents(title = "Contents", depth = 2, aside= false)
# ╔═╡ 74fc8395-26cf-4595-ac96-abecf7273d6c
md"""
# Installation
"""
# ╔═╡ ec2763fd-6d65-4d61-832c-cc683e3607b2
md"""
This package is in the Julia General Registry. In the Julia REPL, run the following code and follow the prompts:
"""
# ╔═╡ 99842416-3d0a-4def-aab9-731e2768b8c4
md"""
```julia
import Pkg
Pkg.add("Kneedle")
```
"""
# ╔═╡ 6de9b334-c487-4823-a08c-4166698967b8
md"""
Access the functionality of the package in your code by including the following line:
"""
# ╔═╡ ba5912d5-a2e7-4604-9cc5-28fbb6241ca1
md"""
# Quick Start
"""
# ╔═╡ 239cacee-2597-435f-add5-3aa057d291ec
md"""
Find a knee automatically using `kneedle(x, y)`:
"""
# ╔═╡ c3b80fa6-4ccc-4a2d-8e5c-7211814c1485
begin
x, y = Testers.CONCAVE_INC # an example data set
kr = kneedle(x, y) # kr is a `KneedleResult`
knees(kr) # [2], therefore a knee is detected at x = 2
end
# ╔═╡ 1e80ecea-7d79-4059-9a87-858c0c935dea
md"""
In order to use the plotting functionality, a Makie backend is required. For this example, this amounts to including the line `using CairoMakie`. This provides access to the function `viz(x, y, kr; kwargs...)`:
"""
# ╔═╡ bd7ee494-4952-48d4-945a-51ae69750757
md"""
We can then make the plot:
"""
# ╔═╡ fb718bd8-454e-44ae-b17b-ba20e89a09ed
viz(x, y, kr, show_data_smoothed = false)
# ╔═╡ b62b3349-bece-4c4a-8d90-d9f2b4ed14d7
md"""
# Tutorial
"""
# ╔═╡ 4e08d4bd-38d4-422b-a9a6-8757e549ff1f
md"""
## Automated detection of a single knee
"""
# ╔═╡ 630b4f66-1b23-4349-a95f-bca9548b45ba
md"""
Kneedle.jl is capable of automatically detecting the shape of the input knee. This works best when there is only one knee. The submodule `Testers` contains a dataset for each of the four possible knee shapes. We can use `kneedle` to find the knees and the `viz!` function to plot them all in the same Makie figure.
"""
# ╔═╡ 1d6d6dcf-b326-4853-9327-2d8d4b49a30d
let
x1, y1 = Testers.CONCAVE_INC; kr1 = kneedle(x1, y1)
x2, y2 = Testers.CONCAVE_DEC; kr2 = kneedle(x2, y2)
x3, y3 = Testers.CONVEX_INC; kr3 = kneedle(x3, y3)
x4, y4 = Testers.CONVEX_DEC; kr4 = kneedle(x4, y4)
set_theme!(theme_latexfonts())
fig = Figure()
ax1 = Axis(fig[1, 1], xlabel = L"x", ylabel = L"y", title = "Concave Increasing")
viz!(ax1, x1, y1, kr1, show_data_smoothed=false)
ax2 = Axis(fig[1, 2], xlabel = L"x", ylabel = L"y", title = "Concave Decreasing")
viz!(ax2, x2, y2, kr2, show_data_smoothed=false)
ax3 = Axis(fig[2, 1], xlabel = L"x", ylabel = L"y", title = "Convex Increasing")
viz!(ax3, x3, y3, kr3, show_data_smoothed=false)
ax4 = Axis(fig[2, 2], xlabel = L"x", ylabel = L"y", title = "Convex Decreasing")
viz!(ax4, x4, y4, kr4, show_data_smoothed=false)
fig
end
# ╔═╡ 6362117c-9c75-45fb-93cb-c90cdddde3e9
md"""
## Detection of knees of a given shape
"""
# ╔═╡ 59ad1ba7-a5c9-41d7-ad9b-14e775a54e09
md"""
We may use `kneedle(x, y, shape)` to attempt to detect knees of the given `shape`.
There are four possible knee/elbow shapes in consideration. If a kneedle function takes `shape` as an argument, it should be one of these.
- concave increasing: `"|¯"` or `"concave_inc"`
- convex decreasing: `"|_"` or `"convex_dec"`
- concave decreasing: `"¯|"` or `"concave_dec"`
- convex increasing: `"_|"` or `"convex_inc"`
Note that the symbol `¯` is entered by typing `\highminus<TAB>`
"""
# ╔═╡ 6772952c-f55d-43f9-af23-bd55e3abc9ca
let
# Using the shape specification:
x, y = Testers.CONCAVE_INC
kneedle(x, y, "concave_inc") |> knees
end
# ╔═╡ 30d677a1-d224-4361-b239-cb9a5ef80a3f
let
# Using the pictoral specification:
x, y = Testers.CONCAVE_INC
kneedle(x, y, "|¯") |> knees
end
# ╔═╡ 01ffc4e6-0eb1-458d-a06e-10041745dac8
let
# This finds no knees of the requested shape because there are none!
x, y = Testers.CONCAVE_INC
kneedle(x, y, "concave_dec") |> knees
end
# ╔═╡ 653ab1fc-fbb4-4c5f-b649-06a6c83233dc
md"""
## Dealing with noisy data
"""
# ╔═╡ 15825daf-ee71-4e08-9a6c-858372475de1
md"""
To simulate a noisy data source, we will use the function `Testers.double_bump()`. This generates a data set from a sum of two Normal CDFs. We will set the amplitude of one of them to zero first so that we only have to deal with a single knee. First we examine the noiseless data and find a knee of shape `"|¯"`.
"""
# ╔═╡ 3e987a4b-b9ec-4aa7-993d-60f1cf32f60e
let
x_1, y_1 = Testers.double_bump(A2 = 0.0, noise_level = 0.0);
kr = kneedle(x_1, y_1, "|¯")
@info knees(kr)
viz(x_1, y_1, kr, show_data_smoothed=false)
end
# ╔═╡ 556517bc-6747-4c03-b796-3d735ad583e3
md"""
Now let us add noise and try the same calculation
"""
# ╔═╡ 72f19479-05f5-41ab-9875-8f971ddec619
begin
Random.seed!(1234) # reproducible randomness
x_1noise, y_1noise = Testers.double_bump(A2 = 0.0, noise_level = 0.05)
nothing
end
# ╔═╡ 49b7238e-c04e-46ba-8f94-878bc91297e6
let
viz(x_1noise, y_1noise, kneedle(x_1noise, y_1noise, "|¯"), show_data_smoothed=false)
end
# ╔═╡ e542ce7d-0030-4492-b287-a68b6c53f119
md"""
The algorithm detects far too many knees. This is controlled by the `S` kwarg to `kneedle`. The higher `S` is, the less sensitive the detection. We will increase `S` to see the effect:
"""
# ╔═╡ ad86a047-4995-4002-8d81-c28748e56487
let
kr = kneedle(x_1noise, y_1noise, "|¯", S = 10.0)
@info knees(kr)
viz(x_1noise, y_1noise, kr, show_data_smoothed=false)
end
# ╔═╡ 4dfc03de-d242-4ab0-abf5-98f77d6cfa65
md"""
We find one knee in a sensible position but still with some additional artifacts. Increasing `S` further "settles" into an incorrect location:
"""
# ╔═╡ 200aca58-2b61-4ab5-a63b-a9375122ca64
let
kr = kneedle(x_1noise, y_1noise, "|¯", S = 18.0)
@info knees(kr)
viz(x_1noise, y_1noise, kr, show_data_smoothed=false)
end
# ╔═╡ 32c75005-2884-40bf-9e70-737f79b40bbb
md"""
To handle this, we can use the `smoothing` kwarg to `kneedle`. `smoothing` refers to the amount of smoothing via interpolation that is applied to the data before knee detection. If `smoothing == nothing`, it is bypassed entirely. If `smoothing ∈ [0, 1]`, this parameter is passed directly to Loess.jl via its `span` parameter. Generally, higher `smoothing` results in less detection.
"""
# ╔═╡ fa2d353c-1160-4810-8dfd-851d192c5ac2
let
kr = kneedle(x_1noise, y_1noise, "|¯", smoothing = 0.5)
@info knees(kr)
viz(x_1noise, y_1noise, kr, show_data_smoothed=true)
end
# ╔═╡ 2f944526-26bc-4224-8253-9a2101245889
md"""
This is much closer to the original knee location, and we can still obtain reasonable results even with very noisy data:
"""
# ╔═╡ e0b8d187-db13-4230-927b-c7f9da340062
begin
Random.seed!(1234) # reproducible randomness
x_1noise_high, y_1noise_high = Testers.double_bump(A2 = 0.0, noise_level = 0.20)
nothing
end
# ╔═╡ f3d50c40-78f0-4560-b3c3-05e0b1179e6d
let
kr = kneedle(x_1noise_high, y_1noise_high, "|¯", smoothing = 0.7)
@info knees(kr)
viz(x_1noise_high, y_1noise_high, kr, show_data_smoothed=true)
end
# ╔═╡ 801bb8ec-d605-4c54-a913-469e0d40a43d
md"""
## Finding a given number of knees
"""
# ╔═╡ a2785447-64c0-4d0d-93a7-ac599e3e14fa
md"""
To avoid the tedious process of guessing `S` or `smoothing`, one can provide the exact number of knees to search for. This works by bisecting either `S` (if `scan_type == :S`) or `smoothing` (if `scan_type == :smoothing`). Instead of finding `smoothing = 0.7` manually as in the last example, we can simply pass `1` knee and `scan_type = :smoothing` to `kneedle`:
"""
# ╔═╡ 8e4755cf-f2aa-41d0-a589-d2653a5154ca
let
kr = kneedle(x_1noise_high, y_1noise_high, "|¯", 1, scan_type = :smoothing)
@info knees(kr)
viz(x_1noise_high, y_1noise_high, kr, show_data_smoothed=true)
end
# ╔═╡ 43c68b71-cd5c-4185-ac6a-3926ca357fa4
md"""
The results are not identical as the bisection does not guarantee the minimum `smoothing` (and less smoothing does not always mean greater accuracy in knee location).
"""
# ╔═╡ 9d4e03a5-c9fc-4fc5-b121-1f88710f2780
md"""
We demonstrate this functionality further with a true double knee:
"""
# ╔═╡ a2592b4d-24c7-48f6-9ee8-b9207a63bd46
begin
Random.seed!(1234) # reproducible randomness
x_2, y_2 = Testers.double_bump(noise_level = 0.0)
x_2noise, y_2noise = Testers.double_bump(noise_level = 0.3)
nothing
end
# ╔═╡ 3b0080ef-6398-4106-960a-8dc0ec074864
let
kr = kneedle(x_2, y_2, "|¯", 2)
viz(x_2, y_2, kr)
end
# ╔═╡ 7dd9931e-8b3e-4928-b4c4-ffc9cbe6efd3
md"""
And again with noise. Keep in mind that for very noisy data we still might have to set the parameter that we aren't scanning according to the source.
"""
# ╔═╡ 0ef236c7-4cf8-424e-b32b-187ee5b71f25
let
kr = kneedle(x_2noise, y_2noise, "|¯", 2, scan_type = :S, smoothing = 0.4)
viz(x_2noise, y_2noise, kr)
end
# ╔═╡ ab8a3ae9-3c3f-4f6d-a4cc-ce23f734e66f
let
kr = kneedle(x_2noise, y_2noise, "|¯", 2, scan_type = :smoothing, S = 0.1)
viz(x_2noise, y_2noise, kr)
end
# ╔═╡ f1374349-bcfc-40dc-9dd5-20b2e1deea4f
md"""
## Application to sparse regression
"""
# ╔═╡ 94256cdc-1f6e-45e5-8674-17feac55b044
md"""
Linear regression finds coefficients $\xi$ such that $y ≈ X ξ$ for a feature matrix $X$ and target $y$. *Sparse* linear regression further asks that the coefficient vector $\xi$ be sparse, i.e that it contain as many zeros as possible. This can be thought of as asking to represent $y$ by just the most relevant features of $X$.
Many sparse regression algorithms involve a thresholding parameter, say $\lambda$, that determines the minimum size of allowed coefficients. The idea is that small $\lambda$ implies that $X \xi$ is accurate but not sparse, whereas large $\lambda$ implies that $X \xi$ is sparse but not accurate. The goal is therefore to pick the largest $\lambda$ possible while still having a sufficiently accurate solution. On a plot of model error vs. $\lambda$, this is equivalent to asking for the `_|` knee.
The following data come from a sparse regression problem solved offline.
"""
# ╔═╡ 8cc49a7c-3fbc-4271-b486-39d4043bf339
details("Data", md"""
```julia
λs = [0.1, 0.14384498882876628, 0.20691380811147897, 0.29763514416313186, 0.4281332398719394, 0.6158482110660264, 0.8858667904100826, 1.2742749857031337, 1.8329807108324359, 2.636650898730358, 3.79269019073225, 5.455594781168519, 7.847599703514613, 11.28837891684689, 16.237767391887218, 23.357214690901223, 33.59818286283783, 48.32930238571752, 69.51927961775606, 100.0]
errs = [10.934290106930245, 10.83733683826938, 11.017433337213713, 11.124893613300706, 11.119853842082296, 11.099548594837454, 11.03789948291263, 16.205849291523638, 16.206016168361213, 16.204933621558062, 16.205591636093704, 16.205684051603743, 16.20568226625062, 16.228735841057354, 16.23024949820953, 16.23017241763306, 16.229714642370798, 16.22987500373123, 16.229503091439568, 16.230329157796838]
```
""")
# ╔═╡ e8db1bdd-872f-4b83-b6f9-5d491e5b9b36
md"""
We see that `kneedle` correctly finds the largest $\lambda$ that has small error.
"""
# ╔═╡ b60e5d82-8ba6-4bd3-a471-5fec6353da69
md"""
# Docstrings
"""
# ╔═╡ 1a6cfb27-b5d6-46b3-8f48-399fae9247c3
details("kneedle", @doc kneedle)
# ╔═╡ faaf42c4-e817-441c-b3db-8ec562e15323
details("KneedleResult", @doc KneedleResult)
# ╔═╡ 975a39fa-f4ad-418a-b2c2-98af962a1037
details("knees", @doc knees)
# ╔═╡ 98a9b4cb-8fd9-480c-b1ce-418672e0441a
details("viz", @doc viz)
# ╔═╡ 671cfa97-f929-4cf4-8790-221d1ff1c6bc
details("viz!", @doc viz!)
# ╔═╡ 09000615-a7d7-421b-8444-113821058b96
details("Testers.double_bump", @doc Testers.double_bump)
# ╔═╡ c598e8c8-3d74-4946-ac16-ab757942f2da
details("Testers.CONCAVE_INC", @doc Testers.CONCAVE_INC)
# ╔═╡ 0ba861bf-0908-427b-8f2e-48be6d93842b
details("Testers.CONCAVE_DEC", @doc Testers.CONCAVE_DEC)
# ╔═╡ c50f5165-5d33-4479-801d-4925b60a84a6
details("Testers.CONVEX_INC", @doc Testers.CONVEX_INC)
# ╔═╡ 5428c5da-0111-4d97-8b3d-71a78b6b9d7d
details("Testers.CONVEX_DEC", @doc Testers.CONVEX_DEC)
# ╔═╡ 9ec18ec5-1aae-4689-b7f1-693d52cdec5b
md"""
# References
"""
# ╔═╡ 90ce7418-2a74-433b-9ae9-ca87e2b48027
md"""
[^1]: Satopaa, Ville, et al. *Finding a "kneedle" in a haystack: Detecting knee points in system behavior.* 2011 31st international conference on distributed computing systems workshops. IEEE, 2011.
"""
# ╔═╡ 384711a6-d7cb-4d69-a790-2f3d808aa5d8
md"""
---
---
---
"""
# ╔═╡ ae55f28c-7aac-11ef-0320-f11cdad35bfe
md"""
# Utilities
"""
# ╔═╡ 53873f99-598e-4136-affa-572f4ee2d4d3
md"""
This section contains tools to ensure the documentation works correctly; it is not part of the documentation itself.
"""
# ╔═╡ edd49b23-42bc-41cf-bfef-e1538fcdd924
begin
@info "Setting notebook width."
html"""
<style>
main {
margin: 0 auto;
max-width: 2000px;
padding-left: 5%;
padding-right: 5%;
}
</style>
"""
end
# ╔═╡ 269ad618-ce01-4d06-b0ce-e01a60dedfde
HTML("""
<!-- the wrapper span -->
<div>
<button id="myrestart" href="#">Restart</button>
<script>
const div = currentScript.parentElement
const button = div.querySelector("button#myrestart")
const cell= div.closest('pluto-cell')
console.log(button);
button.onclick = function() { restart_nb() };
function restart_nb() {
console.log("Restarting Notebook");
cell._internal_pluto_actions.send(
"restart_process",
{},
{
notebook_id: editor_state.notebook.notebook_id,
}
)
};
</script>
</div>
""")
# ╔═╡ a35b1234-a722-442b-8969-7635a28556ff
begin
@info "Defining data"
λs = [0.1, 0.14384498882876628, 0.20691380811147897, 0.29763514416313186, 0.4281332398719394, 0.6158482110660264, 0.8858667904100826, 1.2742749857031337, 1.8329807108324359, 2.636650898730358, 3.79269019073225, 5.455594781168519, 7.847599703514613, 11.28837891684689, 16.237767391887218, 23.357214690901223, 33.59818286283783, 48.32930238571752, 69.51927961775606, 100.0]
errs = [10.934290106930245, 10.83733683826938, 11.017433337213713, 11.124893613300706, 11.119853842082296, 11.099548594837454, 11.03789948291263, 16.205849291523638, 16.206016168361213, 16.204933621558062, 16.205591636093704, 16.205684051603743, 16.20568226625062, 16.228735841057354, 16.23024949820953, 16.23017241763306, 16.229714642370798, 16.22987500373123, 16.229503091439568, 16.230329157796838]
nothing
end
# ╔═╡ b3563637-f40d-4c11-ab15-5c60c365162c
let
fig = Figure(); ax = Axis(fig[1, 1], xlabel = L"\log_{10} \, \lambda", ylabel = "error")
log10λs = log10.(λs)
kr = kneedle(log10λs, errs, "_|")
viz!(ax, log10λs, errs, kr, show_data_smoothed=false)
Legend(fig[2, 1], ax, orientation = :horizontal)
fig
end
# ╔═╡ Cell order:
# ╟─1d5ffaa0-224b-4587-b7ae-859a26512cc3
# ╟─9b5823a0-2f97-464a-929a-0596dc6a99a2
# ╟─966454cc-11a6-486f-b124-d3f9d3ee0591
# ╟─74fc8395-26cf-4595-ac96-abecf7273d6c
# ╟─ec2763fd-6d65-4d61-832c-cc683e3607b2
# ╟─99842416-3d0a-4def-aab9-731e2768b8c4
# ╟─6de9b334-c487-4823-a08c-4166698967b8
# ╠═a70f7664-47b1-43c2-93aa-4a8e3886e38f
# ╟─ba5912d5-a2e7-4604-9cc5-28fbb6241ca1
# ╟─239cacee-2597-435f-add5-3aa057d291ec
# ╠═c3b80fa6-4ccc-4a2d-8e5c-7211814c1485
# ╟─1e80ecea-7d79-4059-9a87-858c0c935dea
# ╠═efaedc36-961b-4476-9a72-cca18ac7f385
# ╟─bd7ee494-4952-48d4-945a-51ae69750757
# ╠═fb718bd8-454e-44ae-b17b-ba20e89a09ed
# ╟─b62b3349-bece-4c4a-8d90-d9f2b4ed14d7
# ╟─4e08d4bd-38d4-422b-a9a6-8757e549ff1f
# ╟─630b4f66-1b23-4349-a95f-bca9548b45ba
# ╠═1d6d6dcf-b326-4853-9327-2d8d4b49a30d
# ╟─6362117c-9c75-45fb-93cb-c90cdddde3e9
# ╟─59ad1ba7-a5c9-41d7-ad9b-14e775a54e09
# ╠═6772952c-f55d-43f9-af23-bd55e3abc9ca
# ╠═30d677a1-d224-4361-b239-cb9a5ef80a3f
# ╠═01ffc4e6-0eb1-458d-a06e-10041745dac8
# ╟─653ab1fc-fbb4-4c5f-b649-06a6c83233dc
# ╟─15825daf-ee71-4e08-9a6c-858372475de1
# ╠═3e987a4b-b9ec-4aa7-993d-60f1cf32f60e
# ╟─556517bc-6747-4c03-b796-3d735ad583e3
# ╠═72f19479-05f5-41ab-9875-8f971ddec619
# ╠═49b7238e-c04e-46ba-8f94-878bc91297e6
# ╟─e542ce7d-0030-4492-b287-a68b6c53f119
# ╠═ad86a047-4995-4002-8d81-c28748e56487
# ╟─4dfc03de-d242-4ab0-abf5-98f77d6cfa65
# ╠═200aca58-2b61-4ab5-a63b-a9375122ca64
# ╟─32c75005-2884-40bf-9e70-737f79b40bbb
# ╠═fa2d353c-1160-4810-8dfd-851d192c5ac2
# ╟─2f944526-26bc-4224-8253-9a2101245889
# ╠═e0b8d187-db13-4230-927b-c7f9da340062
# ╠═f3d50c40-78f0-4560-b3c3-05e0b1179e6d
# ╟─801bb8ec-d605-4c54-a913-469e0d40a43d
# ╟─a2785447-64c0-4d0d-93a7-ac599e3e14fa
# ╠═8e4755cf-f2aa-41d0-a589-d2653a5154ca
# ╟─43c68b71-cd5c-4185-ac6a-3926ca357fa4
# ╟─9d4e03a5-c9fc-4fc5-b121-1f88710f2780
# ╠═a2592b4d-24c7-48f6-9ee8-b9207a63bd46
# ╠═3b0080ef-6398-4106-960a-8dc0ec074864
# ╟─7dd9931e-8b3e-4928-b4c4-ffc9cbe6efd3
# ╠═0ef236c7-4cf8-424e-b32b-187ee5b71f25
# ╠═ab8a3ae9-3c3f-4f6d-a4cc-ce23f734e66f
# ╟─f1374349-bcfc-40dc-9dd5-20b2e1deea4f
# ╟─94256cdc-1f6e-45e5-8674-17feac55b044
# ╟─8cc49a7c-3fbc-4271-b486-39d4043bf339
# ╠═b3563637-f40d-4c11-ab15-5c60c365162c
# ╟─e8db1bdd-872f-4b83-b6f9-5d491e5b9b36
# ╟─b60e5d82-8ba6-4bd3-a471-5fec6353da69
# ╟─1a6cfb27-b5d6-46b3-8f48-399fae9247c3
# ╟─faaf42c4-e817-441c-b3db-8ec562e15323
# ╟─975a39fa-f4ad-418a-b2c2-98af962a1037
# ╟─98a9b4cb-8fd9-480c-b1ce-418672e0441a
# ╟─671cfa97-f929-4cf4-8790-221d1ff1c6bc
# ╟─09000615-a7d7-421b-8444-113821058b96
# ╟─c598e8c8-3d74-4946-ac16-ab757942f2da
# ╟─0ba861bf-0908-427b-8f2e-48be6d93842b
# ╟─c50f5165-5d33-4479-801d-4925b60a84a6
# ╟─5428c5da-0111-4d97-8b3d-71a78b6b9d7d
# ╟─9ec18ec5-1aae-4689-b7f1-693d52cdec5b
# ╟─90ce7418-2a74-433b-9ae9-ca87e2b48027
# ╟─384711a6-d7cb-4d69-a790-2f3d808aa5d8
# ╟─ae55f28c-7aac-11ef-0320-f11cdad35bfe
# ╟─53873f99-598e-4136-affa-572f4ee2d4d3
# ╟─edd49b23-42bc-41cf-bfef-e1538fcdd924
# ╟─120250f8-5689-443d-bf99-dca7e1b41a82
# ╟─269ad618-ce01-4d06-b0ce-e01a60dedfde
# ╟─a35b1234-a722-442b-8969-7635a28556ff
| Kneedle | https://github.com/70Gage70/Kneedle.jl.git |
|
[
"MIT"
] | 0.1.1 | 344bb46e35400f5b6155cfb8624c13c2e64c8445 | code | 1851 | module KneedleMakieExt
using Kneedle
using Makie
using PrecompileTools: @compile_workload
import Random
function Kneedle.viz(
x::AbstractVector{<:Real},
y::AbstractVector{<:Real},
kr::KneedleResult;
show_data::Bool = true,
show_data_smoothed::Bool = true,
show_knees::Bool = true,
linewidth::Real = 2.0)
set_theme!(theme_latexfonts())
fig = Figure()
ax = Axis(fig[1, 1], xlabel = L"x", ylabel = L"y")
show_data && lines!(ax, x, y, color = :black, label = "Data", linewidth = linewidth)
show_data_smoothed && lines!(ax, kr.x_smooth, kr.y_smooth, color = :red, linewidth = linewidth, label = "Smoothed Data")
if show_knees
for knee_x in knees(kr)
vlines!(ax, knee_x, color = :blue, linewidth = linewidth, label = "Knee")
end
end
Legend(fig[2, 1], ax, orientation = :horizontal, merge = true)
rowsize!(fig.layout, 1, Aspect(1, 0.5))
resize_to_layout!(fig)
fig
end
function Kneedle.viz!(
ax::Makie.Axis,
x::AbstractVector{<:Real},
y::AbstractVector{<:Real},
kr::KneedleResult;
show_data::Bool = true,
show_data_smoothed::Bool = true,
show_knees::Bool = true,
linewidth::Real = 2.0)
show_data && lines!(ax, x, y, color = :black, label = "Data", linewidth = linewidth)
show_data_smoothed && lines!(ax, kr.x_smooth, kr.y_smooth, color = :red, linewidth = linewidth, label = "Smoothed Data")
if show_knees
for knee_x in knees(kr)
vlines!(ax, knee_x, color = :blue, linewidth = linewidth, label = "Knee")
end
end
return nothing
end
@compile_workload begin
Random.seed!(1234)
x, y = Testers.double_bump(noise_level = 0.1)
kr = kneedle(x, y, smoothing = 0.1)
viz(x, y, kr)
fig = Figure(); ax = Axis(fig[1, 1]);
viz!(ax, x, y, kr)
end
end # module | Kneedle | https://github.com/70Gage70/Kneedle.jl.git |
|
[
"MIT"
] | 0.1.1 | 344bb46e35400f5b6155cfb8624c13c2e64c8445 | code | 561 | module Kneedle
using ArgCheck, Loess
import Random
using PrecompileTools: @compile_workload
include("testers.jl")
export Testers
include("main.jl")
export KneedleResult
export kneedle, knees
include("viz.jl")
export viz, viz!
@compile_workload begin
Random.seed!(1234)
x, y = Testers.double_bump(noise_level = 0.1)
kneedle(x, y, smoothing = 0.1)
kneedle(x, y, "|¯", S = 0.1)
kneedle(x, y, "|¯", 2, scan_type = :S)
kneedle(x, y, "|¯", 2, scan_type = :smoothing)
x, y = Testers.CONCAVE_INC
kneedle(x, y)
end
end # module
| Kneedle | https://github.com/70Gage70/Kneedle.jl.git |
|
[
"MIT"
] | 0.1.1 | 344bb46e35400f5b6155cfb8624c13c2e64c8445 | code | 10263 | """
struct KneedleResult{X}
A container for the output of the Kneedle algorithm.
Use `knees` to access the `knees` field.
Refer to `viz` for visualization.
See the `kneedle` function for further general information.
### Fields
- `x_smooth`: The smoothed `x` points.
- `y_smooth`: The smoothed `y` points.
- `knees`: The vector of `x` coordinates of the computed knees/elbows.
"""
struct KneedleResult{X<:Real}
x_smooth::Vector{Float64}
y_smooth::Vector{Float64}
knees::Vector{X}
end
"""
knees(kr::KneedleResult)
Return `kr.knees`.
Refer to `KneedleResult` or `kneedle` for more information.
"""
knees(kr::KneedleResult) = kr.knees
"""
kneedle(args...)
There are several methods for the `kneedle` function as detailed below; each returns a `KneedleResult`.
Use `knees(kr::KneedleResult)` to obtain the computed knees/elbows as a list of `x` coordinates.
Refer to `viz` for visualization.
Each `kneedle` function contains the args `x` and `y` which refer to the input data. It is required that `x` is sorted.
Each `kneedle` function contains the kwargs `S` and `smoothing`. `S > 0` refers to the sensitivity of the knee/elbow \
detection algorithm in the sense that higher `S` results in fewer detections. `smoothing` refers to the amount of \
smoothing via interpolation that is applied to the data before knee detection. If `smoothing == nothing`, it will \
be bypassed entirely. If `smoothing ∈ [0, 1]`, this parameter is passed directly to \
[Loess.jl](https://github.com/JuliaStats/Loess.jl) via its `span` parameter. Generally, higher `smoothing` results \
in fewer detections.
## Shapes
There are four possible knee/elbow shapes in consideration. If a `kneedle` function takes `shape` as an argument, it \
should be one of these.
- concave increasing: `"|¯"` or `"concave_inc"`
- convex decreasing: `"|_"` or `"convex_dec"`
- concave decreasing: `"¯|"` or `"concave_dec"`
- convex increasing: `"_|"` or `"convex_inc"`
Note that the symbol `¯` is entered by typing `\\highminus<TAB>`
## Methods
### Fully automated kneedle
kneedle(x, y; S = 1.0, smoothing = nothing, verbose = false)
This function attempts to determine the shape of the knee automatically. Toggle `verbose` to get a printout of \
the guessed shape.
### Kneedle with a specific shape
kneedle(x, y, shape; S = 1.0, smoothing = nothing)
This function finds knees/elbows with the given `shape`.
### Kneedle with a specific shape and number of knees
kneedle(x, y, shape, n_knees; scan_type = :S, S = 1.0, smoothing = nothing)
This function finds exactly `n_knees` knees/elbows with the given `shape`.
This works by bisecting either `S` (if `scan_type == :S`) or `smoothing` (if `scan_type == :smoothing`).
## Examples
Find a knee:
```julia-repl
julia> x, y = Testers.CONCAVE_INC
julia> kr1 = kneedle(x, y)
julia> knees(kr1) # 2, meaning that there is a knee at `x = 2`
```
Find a knee with a specific shape:
```julia-repl
julia> kr2 = kneedle(x, y, "concave_inc")
julia> knees(kr1) == knees(kr2) # true
```
Use the pictoral arguments:
```julia-repl
julia> kr3 = kneedle(x, y, "|¯")
julia> knees(kr3) == knees(kr1) # true
```
Find a given number of knees:
```julia-repl
julia> x, y = Testers.double_bump(noise_level = 0.3)
julia> kr4 = kneedle(x, y, "|¯", 2)
julia> length(knees(kr4)) # 2, meaning that the algorithm found 2 knees
```
"""
function kneedle end
# Compute the main Kneedle algorithm and return a KneedleResult
# Assumes that data is concave increasing, i.e. `|¯` and that `x` is sorted.
function _kneedle(
x::AbstractVector{<:Real},
y::AbstractVector{<:Real};
S::Real = 1.0,
smoothing::Union{Real, Nothing} = nothing)
n = length(x)
### STEP 1: SMOOTH IF NEEDED
if smoothing === nothing
x_s = x
y_s = y
else
model = loess(x, y, span = smoothing)
x_s = range(extrema(x)..., length = length(x))
y_s = predict(model, x_s)
end
### STEP 2: NORMALIZE
x_min, x_max = extrema(x_s)
x_sn = (x_s .- x_min)/(x_max - x_min)
y_min, y_max = extrema(y_s)
y_sn = (y_s .- y_min)/(y_max - y_min)
### STEP 3: DIFFERENCES
x_d = @view x_sn[:]
y_d = y_sn - x_sn
### STEP 4: CANDIDATE LOCAL MAXIMA
lmx = [i for i in 2:n-1 if (y_d[i - 1] < y_d[i]) && (y_d[i] > y_d[i + 1])]
if length(lmx) == 1
return KneedleResult(collect(float(x_s)), collect(float(y_s)), [x[lmx[1]]])
end
knees_res = eltype(x)[]
x_lmx = x_d[lmx]
y_lmx = y_d[lmx]
T_lmx = y_lmx .- (S/(n - 1))*sum(x_sn[i + 1] - x_sn[i] for i = 1:n - 1)
### STEP 5: THRESHOLD
for i = 1:length(x_lmx)
for j = lmx[i] + 1:(i == length(x_lmx) ? n : lmx[i + 1] - 1)
if y_d[j] < T_lmx[i]
push!(knees_res, x[lmx[i]])
break
end
end
end
return KneedleResult(collect(float(x_s)), collect(float(y_s)), knees_res)
end
# find knees with a particular shape
function kneedle(
x::AbstractVector{<:Real},
y::AbstractVector{<:Real},
shape::String;
S::Real = 1.0,
smoothing::Union{Real, Nothing} = nothing)
@argcheck length(x) == length(y) > 0
@argcheck S > 0
smoothing !== nothing && @argcheck 0 <= smoothing <= 1
@argcheck issorted(x)
@argcheck shape ∈ ["|¯", "|_", "¯|", "_|"] || shape ∈ ["concave_inc", "convex_dec", "concave_dec", "convex_inc"]
max_x, max_y = maximum(x), maximum(y)
if shape ∈ ["|¯", "concave_inc"]
# default, so no transformation required
return _kneedle(x, y, S = S, smoothing = smoothing)
elseif shape ∈ ["|_", "convex_dec"]
# flip vertically
kn = _kneedle(x, max_y .- y, S = S, smoothing = smoothing)
return KneedleResult(kn.x_smooth, max_y .- kn.y_smooth, kn.knees)
elseif shape ∈ ["¯|", "concave_dec"]
# flip horizontally; reverse to ensure x increasing
kn = _kneedle(reverse(max_x .- x), reverse(y), S = S, smoothing = smoothing)
return KneedleResult(reverse(max_x .- kn.x_smooth), reverse(kn.y_smooth), max_x .- kn.knees)
elseif shape ∈ ["_|", "convex_inc"]
# flip horizontally and vertically; reverse to ensure x increasing
kn = _kneedle(reverse(max_x .- x), reverse(max_y .- y), S = S, smoothing = smoothing)
return KneedleResult(reverse(max_x .- kn.x_smooth), reverse(max_y .- kn.y_smooth), max_x .- kn.knees)
end
end
# try to guess the shape automatically
function kneedle(
x::AbstractVector{<:Real},
y::AbstractVector{<:Real};
S::Real = 1.0,
smoothing::Union{Real, Nothing} = nothing,
verbose::Bool = false)
_line(X) = y[1] + (y[end] - y[1])*(X - x[1])/(x[end] - x[1])
concave = sum(y .> _line.(x)) >= length(x)/2
increasing = _line(x[end]) > _line(x[1])
if concave && increasing
verbose && @info "Found concave and increasing |¯"
return kneedle(x, y, "|¯", S = S, smoothing = smoothing)
elseif !concave && !increasing
verbose && @info "Found convex and decreasing |_"
return kneedle(x, y, "|_", S = S, smoothing = smoothing)
elseif concave && !increasing
verbose && @info "Found concave and decreasing ¯|"
return kneedle(x, y, "¯|", S = S, smoothing = smoothing)
elseif !concave && increasing
verbose && @info "Found convex and increasing _|"
return kneedle(x, y, "_|", S = S, smoothing = smoothing)
end
end
# use bisection to find a given number of knees by varying the sensitivity
function _kneedle_scan_S(
x::AbstractVector{<:Real},
y::AbstractVector{<:Real},
shape::String,
n_knees::Integer;
smoothing::Union{Real, Nothing} = nothing)
# usually, higher S means fewer knees; define 1/s here since it is easier to keep track of
_n_knees(s) = kneedle(x, y, shape, S = 1/s, smoothing = smoothing) |> x -> length(knees(x))
lb, ub = 0.1, 10.0
n_iters = 0
while _n_knees(lb) > n_knees
lb = lb/2
n_iters += 1
if n_iters == 10
error("Could not find the requested number of knees (requested too few).")
end
end
n_iters = 0
while _n_knees(ub) < n_knees
ub = ub*2
n_iters += 1
if n_iters == 10
error("Could not find the requested number of knees (requested too many.)")
end
end
a, b = lb, ub
c = (a + b)/2
n_iter = 0
# bisection
while _n_knees(c) != n_knees
if _n_knees(c) - n_knees > 0
b = c
else
a = c
end
c = (a + b)/2
n_iter += 1
if n_iter >= 20
break
end
end
return kneedle(x, y, shape, S = 1/c, smoothing = smoothing)
end
# use bisection to find a given number of knees by varying the smoothing
function _kneedle_scan_smooth(
x::AbstractVector{<:Real},
y::AbstractVector{<:Real},
shape::String,
n_knees::Integer;
S::Real = 1.0)
# usually, higher smoothing means fewer knees; define 1/s here since it is easier to keep track of
_n_knees(s) = kneedle(x, y, shape, S = S, smoothing = 1/s) |> x -> length(knees(x))
ub = 10.0
n_iters = 0
while _n_knees(ub) < n_knees
ub = ub*2
n_iters += 1
if n_iters == 10
error("Could not find the requested number of knees (requested too many).")
end
end
a, b = 1.0, ub
c = (a + b)/2
n_iter = 0
# bisection
while _n_knees(c) != n_knees
if _n_knees(c) - n_knees > 0
b = c
else
a = c
end
c = (a + b)/2
n_iter += 1
if n_iter >= 20
break
end
end
return kneedle(x, y, shape, S = S, smoothing = 1/c)
end
# find a given number of knees by searching either the sensitivity or the smoothing
function kneedle(
x::AbstractVector{<:Real},
y::AbstractVector{<:Real},
shape::String,
n_knees::Integer;
scan_type::Symbol = :S,
S::Real = 1.0,
smoothing::Union{Real, Nothing} = nothing)
@argcheck n_knees >= 1
@argcheck n_knees < length(x)/3 "Too many knees!"
@argcheck scan_type ∈ [:S, :smoothing]
if scan_type == :S
return _kneedle_scan_S(x, y, shape, n_knees, smoothing = smoothing)
elseif scan_type == :smoothing
return _kneedle_scan_smooth(x, y, shape, n_knees, S = S)
end
end | Kneedle | https://github.com/70Gage70/Kneedle.jl.git |
|
[
"MIT"
] | 0.1.1 | 344bb46e35400f5b6155cfb8624c13c2e64c8445 | code | 1502 | module Testers
using ArgCheck
using SpecialFunctions: erf
"""
const CONVEX_INC
A small set of convex, increasing data `(x, y)` with one knee.
"""
const CONVEX_INC = (0:9, [1, 2, 3, 4, 5, 10, 15, 20, 40, 100])
"""
const CONVEX_DEC
A small set of convex, decreasing data `(x, y)` with one knee.
"""
const CONVEX_DEC = (0:9, [100, 40, 20, 15, 10, 5, 4, 3, 2, 1])
"""
const CONCAVE_DEC
A small set of concave, decreasing data `(x, y)` with one knee.
"""
const CONCAVE_DEC = (0:9, [99, 98, 97, 96, 95, 90, 85, 80, 60, 0])
"""
const CONCAVE_INC
A small set of concave, increasing data `(x, y)` with one knee.
"""
const CONCAVE_INC = (0:9, [0, 60, 80, 85, 90, 95, 96, 97, 98, 99])
"""
double_bump(; μ1 = -1, μ2 = 5, A1 = 1, A2 = 2, σ1 = 1, σ2 = 1, n_points = 100, noise_level = 0.0)
Return a dataset `(x, y)` with `n_points` points with two knees generated from
`y(x) = A1*Φ(x; μ1, σ1) + A2*Φ(x; μ2, σ2) + noise_level*randn()`
where `Φ(x; μ, σ)` is the CDF of a Normal distribution with mean `μ` and standard deviation `σ`.
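### Example

A minimal sketch using only the keyword documented above (no noise, default shape parameters):

```julia-repl
julia> x, y = double_bump(n_points = 50)
julia> length(x) == length(y) == 50
true
```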
"""
function double_bump(;
μ1::Real = -1,
μ2::Real = 5,
A1::Real = 1,
A2::Real = 2,
σ1::Real = 1,
σ2::Real = 1,
n_points::Integer = 100,
noise_level::Real = 0.0)
@argcheck μ2 > μ1
_Ncdf(x, μ, σ) = (1/2)*(1 + erf((x - μ)/(sqrt(2)*σ)))
x = range(μ1 - 3*σ1, μ2 + 3*σ2, length = n_points)
_y(x) = A1*_Ncdf(x, μ1, σ1) + A2*_Ncdf(x, μ2, σ2) + noise_level*randn()
return (x, _y.(x))
end
end # module | Kneedle | https://github.com/70Gage70/Kneedle.jl.git |
|
[
"MIT"
] | 0.1.1 | 344bb46e35400f5b6155cfb8624c13c2e64c8445 | code | 1538 | """
viz(x, y, kneedle_result; show_data = true, show_data_smoothed = true, show_knees = true, linewidth = 2.0)
Visualize the computed knees in `kneedle_result` from data `x`, `y`. Optionally show various \
elements based on keyword arguments and set the line width.
This function requires a Makie backend to function, e.g. `import CairoMakie`.
### Example
Install a Makie backend such as `CairoMakie` if you haven't already via the following
```julia-repl
julia> import Pkg
julia> Pkg.add("CairoMakie")
julia> import CairoMakie
```
Once the backend is loaded, we have
```julia-repl
julia> x, y = Testers.CONVEX_INC
julia> kr = kneedle(x, y);
julia> viz(x, y, kr)
```
"""
function viz end
"""
viz!(ax, x, y, kneedle_result; show_data = true, show_data_smoothed = true, show_knees = true, linewidth = 2.0)
Identical to `viz`, but the plots are added to `ax::Makie.Axis`.
- The plot of `(x, y)` is labeled "Data"
- The plot of `(x_smooth, y_smooth)` is labeled "Smoothed Data"
- The plot of `knees(kneedle_result)` is labeled `Knees`.
This function requires a Makie backend to function, e.g. `import CairoMakie`.
### Example
Install a Makie backend such as `CairoMakie` if you haven't already via the following
```julia-repl
julia> import Pkg
julia> Pkg.add("CairoMakie")
julia> using CairoMakie
```
Once the backend is loaded, we have
```julia-repl
julia> fig = Figure(); ax = Axis(fig[1, 1]);
julia> x, y = Testers.CONVEX_INC
julia> kr = kneedle(x, y);
julia> viz!(ax, x, y, kr)
julia> fig
```
"""
function viz! end | Kneedle | https://github.com/70Gage70/Kneedle.jl.git |
|
[
"MIT"
] | 0.1.1 | 344bb46e35400f5b6155cfb8624c13c2e64c8445 | code | 899 | using Kneedle
using Test
import Random
Random.seed!(1234)
@testset "CONCAVE_DEC" begin
x, y = Testers.CONCAVE_DEC
@test knees(kneedle(x, y)) == [7]
@test knees(kneedle(x, y, "concave_dec")) == [7]
@test knees(kneedle(x, y, "concave_dec", 1)) == [7]
end
@testset "CONCAVE_INC" begin
x, y = Testers.CONCAVE_INC
@test knees(kneedle(x, y)) == [2]
@test knees(kneedle(x, y, "concave_inc")) == [2]
@test knees(kneedle(x, y, "concave_inc", 1)) == [2]
end
@testset "CONVEX_INC" begin
x, y = Testers.CONVEX_INC
@test knees(kneedle(x, y)) == [7]
@test knees(kneedle(x, y, "convex_inc")) == [7]
@test knees(kneedle(x, y, "convex_inc", 1)) == [7]
end
@testset "CONVEX_DEC" begin
x, y = Testers.CONVEX_DEC
@test knees(kneedle(x, y)) == [2]
@test knees(kneedle(x, y, "convex_dec")) == [2]
@test knees(kneedle(x, y, "convex_dec", 1)) == [2]
end
| Kneedle | https://github.com/70Gage70/Kneedle.jl.git |
|
[
"MIT"
] | 0.1.1 | 344bb46e35400f5b6155cfb8624c13c2e64c8445 | docs | 2068 | # Kneedle.jl
[](https://70gage70.github.io/Kneedle.jl/docs/kneedle-docs.html)
This is a [Julia](https://julialang.org/) implementation of the Kneedle[^1] knee-finding algorithm. This detects "corners" (or "knees", "elbows", ...) in a dataset `(x, y)`.
# Features
- Exports one main function `kneedle` with the ability to select the shape and number of knees to search for.
- Built-in data smoothing from [Loess.jl](https://github.com/JuliaStats/Loess.jl).
- [Makie](https://docs.makie.org/stable/) extension for quick visualization.
# Installation
This package is in the Julia General Registry. In the Julia REPL, run the following code and follow the prompts:
```julia
import Pkg
Pkg.add("Kneedle")
```
Access the functionality of the package in your code by including the following line:
```julia
using Kneedle
```
# Quick Start
Find a knee automatically using `kneedle(x, y)`:
```julia
using Kneedle
x, y = Testers.CONCAVE_INC
kr = kneedle(x, y) # kr is a `KneedleResult`
knees(kr) # [2], therefore a knee is detected at x = 2
```
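Smoothing and a target knee count can be combined. The sketch below mirrors the package's own precompile workload (same seed, noise level, and call signature), so treat the exact knee count as illustrative:

```julia
using Kneedle
import Random

Random.seed!(1234)
# Noisy data with two knees: ask for exactly two concave-increasing ("|¯")
# knees, found by bisecting the amount of Loess smoothing.
x, y = Testers.double_bump(noise_level = 0.1)
kr = kneedle(x, y, "|¯", 2, scan_type = :smoothing)
length(knees(kr))  # 2, as requested
```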
In order to use the plotting functionality, a Makie backend is required. For this example, this amounts to including the line `import CairoMakie`. This provides access to the function `viz(x, y, kr; kwargs...)`:
```julia
import CairoMakie
viz(x, y, kr, show_data_smoothed = false) # we didn't use any smoothing here, so no need to show it
```
[](https://70gage70.github.io/Kneedle.jl/)
# Documentation
[Documentation](https://70gage70.github.io/Kneedle.jl/docs/kneedle-docs.html)
# See also
- [kneed](https://github.com/arvkevi/kneed): Knee-finding in Python.
- [Yellowbrick](https://www.scikit-yb.org/en/latest/api/cluster/elbow.html?highlight=knee): Machine learning visualization.
# References
[^1]: Satopaa, Ville, et al. *Finding a "kneedle" in a haystack: Detecting knee points in system behavior.* 2011 31st international conference on distributed computing systems workshops. IEEE, 2011. | Kneedle | https://github.com/70Gage70/Kneedle.jl.git |
|
[
"MIT"
] | 1.0.0 | 226a80838f99e0b9c92ad2b322141f5a8ded6a0f | code | 308 | # Generate documentation with this command:
# (cd docs && julia make.jl)
push!(LOAD_PATH, "..")
using Documenter
using ShortFFTs
makedocs(; sitename="ShortFFTs", format=Documenter.HTML(), modules=[ShortFFTs])
deploydocs(; repo="github.com/eschnett/ShortFFTs.jl.git", devbranch="main", push_preview=true)
| ShortFFTs | https://github.com/eschnett/ShortFFTs.jl.git |
|
[
"MIT"
] | 1.0.0 | 226a80838f99e0b9c92ad2b322141f5a8ded6a0f | code | 3383 | module ShortFFTs
using Primes
export short_fft
# Cooley-Tukey <https://en.wikipedia.org/wiki/Cooley–Tukey_FFT_algorithm>
# Unzip a vector of tuples
unzip(xs::AbstractVector{<:Tuple}) = ([x[1] for x in xs], [x[2] for x in xs])
# Find a common type for a tuple
promote_tuple(X::Tuple) = promote_type((typeof(x) for x in X)...)
cw(x) = Complex(imag(x), -real(x)) # -im * x
# Apply a phase to an expression
function phase(::Type{T}, k::Rational, expr) where {T<:Complex}
k = mod(k, 1)
# Ensure that simple cases do not lead to arithmetic operations
k == 0 && return expr
k >= 1//2 && return phase(T, k - 1//2, :(-$expr))
k >= 1//4 && return phase(T, k - 1//4, :(cw($expr)))
k == 1//8 && return :($((1 - im) * sqrt(T(1//2))) * $expr)
# Prevent round-off by evaluating the phase constants with very high precision
return :($(T(cispi(BigFloat(-2 * k)))) * $expr)
end
phase(::Type{T}, k::Integer, expr) where {T<:Complex} = phase(T, Rational(k), expr)
# Generate a short FFT
function gen_fft(::Type{T}, X::AbstractVector) where {T<:Complex}
N = length(X)
# Trivial case
if N == 0
code = quote end
res = []
return code, res
end
# Base case
if N == 1
Y = gensym(:Y)
code = quote
$Y = T($(X[1]))
end
res = [Y]
return code, res
end
# Handle prime lengths directly
if isprime(N)
Y = [gensym(Symbol(:Y, n - 1)) for n in 1:N]
# term(i, n) = :($(phase(T, ((i - 1) * (n - 1))//N)) * $(X[i]))
term(i, n) = phase(T, ((i - 1) * (n - 1))//N, X[i])
stmts = [
quote
$(Y[n]) = +($([term(i, n) for i in 1:N]...))
end for n in 1:N
]
code = quote
$(stmts...)
end
return code, Y
end
# TODO: Use split-radix FFT for N % 4 = 0 <https://en.wikipedia.org/wiki/Split-radix_FFT_algorithm>
# Apply Cooley-Tukey <https://en.wikipedia.org/wiki/Cooley–Tukey_FFT_algorithm> with the smallest prime factor
(N1, _), _ = iterate(eachfactor(N))
@assert N % N1 == 0
N2 = N ÷ N1
# First step: N1 FFTs of size N2
codeYs, Ys = unzip([gen_fft(T, [X[i] for i in n1:N1:N]) for n1 in 1:N1])
# twiddle factors
# twiddle(n1, n2) = phase(T, ((n1 - 1) * (n2 - 1))//(N1 * N2))
twiddle(n1, n2, expr) = phase(T, ((n1 - 1) * (n2 - 1))//(N1 * N2), expr)
# Second step: N2 FFTs of size N1
codeZs, Zs = unzip([gen_fft(T, [twiddle(n1, n2, Ys[n1][n2]) for n1 in 1:N1]) for n2 in 1:N2])
# Combine results
code = quote
$(codeYs...)
$(codeZs...)
end
Z = [Zs[n2][n1] for n1 in 1:N1 for n2 in 1:N2]
return code, Z
end
@generated function short_fft(::Type{T}, X::Tuple) where {T<:Complex}
N = length(fieldnames(X))
code, res = gen_fft(T, [:(X[$n]) for n in 1:N])
return quote
Base.@_inline_meta
begin
$code
end
return tuple($(res...))
end
end
@inline short_fft(X::Tuple) = short_fft(complex(promote_tuple(X)), X)
@inline short_fft(X::NTuple{N,T}) where {N,T<:Complex} = short_fft(T, X)
short_fft(X::AbstractVector{T}) where {T<:Complex} = short_fft(T, Tuple(X))
@inline short_fft(X::NTuple{N,T}) where {N,T<:Real} = short_fft(complex(T), X)
short_fft(X::AbstractVector{T}) where {T<:Real} = short_fft(complex(T), Tuple(X))
end
| ShortFFTs | https://github.com/eschnett/ShortFFTs.jl.git |
|
[
"MIT"
] | 1.0.0 | 226a80838f99e0b9c92ad2b322141f5a8ded6a0f | code | 2940 | using CUDA
using CUDA.CUFFT
using FFTW
using ShortFFTs
using Test
# What we test
const types = [Float16, Float32, Float64, BigFloat, Complex{Float16}, Complex{Float32}, Complex{Float64}, Complex{BigFloat}]
const lengths = [collect(1:31); collect(32:8:100)]
# How to map to the types that the RNG supports
const rng_types = Dict(
Float16 => Float16,
Float32 => Float32,
Float64 => Float64,
BigFloat => Float64,
Complex{Float16} => Complex{Float16},
Complex{Float32} => Complex{Float32},
Complex{Float64} => Complex{Float64},
Complex{BigFloat} => Complex{Float64},
)
# How to map to the types that FFTW supports
const fftw_types = Dict(
Float16 => Float32,
Float32 => Float32,
Float64 => Float64,
BigFloat => Float64,
Complex{Float16} => Complex{Float32},
Complex{Float32} => Complex{Float32},
Complex{Float64} => Complex{Float64},
Complex{BigFloat} => Complex{Float64},
)
@testset "short_fft T=$T N=0" for T in filter(T -> T <: Complex, types)
@test short_fft(T, ()) == ()
end
@testset "short_fft T=$T N=$N" for T in types, N in lengths
RT = rng_types[T]
input = T.(randn(RT, N))
FT = fftw_types[T]
want = complex(T).(fft(FT.(input)))
rtol = max(sqrt(eps(real(T))), sqrt(eps(real(FT))))
have = [short_fft(Tuple(input))...]
if !isapprox(have, want; rtol=rtol)
@show T N have want
@show have - want
@show maximum(abs, have - want)
end
@test have ≈ want rtol = rtol
end
@testset "Realistic example (CPU)" begin
input = randn(Complex{Float32}, (32, 4))
# Use FFTW: We need to allocate a temporary array and copy the input
input2 = zeros(Complex{Float32}, (32, 8))
input2[:, 1:4] = input
want = fft(input2, 2)
# Use ShortFFTs: We can write an efficient loop kernel
have = Array{Complex{Float32}}(undef, (32, 8))
for i in 1:size(input, 1)
X = (input[i, 1], input[i, 2], input[i, 3], input[i, 4], 0, 0, 0, 0)
Y = short_fft(X)
for j in 1:8
have[i, j] = Y[j]
end
end
@test have ≈ want
end
if CUDA.functional()
@inbounds function fft2!(have::CuDeviceArray{T,2}, input::CuDeviceArray{T,2}) where {T}
i = threadIdx().x
X = (input[i, 1], input[i, 2], input[i, 3], input[i, 4], 0.0f0, 0.0f0, 0.0f0, 0.0f0)
Y = short_fft(X)
for j in 1:8
have[i, j] = Y[j]
end
end
@testset "Realistic example (CUDA)" begin
input = CUDA.randn(Complex{Float32}, (32, 4))
# Use CUFFT: We need to allocate a temporary array and copy the input
input2 = CUDA.zeros(Complex{Float32}, (32, 8))
input2[:, 1:4] = input
want = fft(input2, 2)
# Use ShortFFTs: We can write an efficient loop kernel
have = similar(input, (32, 8))
@cuda threads = 32 blocks = 1 fft2!(have, input)
@test Array(have) ≈ Array(want)
end
end
| ShortFFTs | https://github.com/eschnett/ShortFFTs.jl.git |
|
[
"MIT"
] | 1.0.0 | 226a80838f99e0b9c92ad2b322141f5a8ded6a0f | docs | 3426 | # ShortFFTs.jl
Efficient and inlineable short Fast Fourier Transforms
* [](https://eschnett.github.io/ShortFFTs.jl/dev/)
* [](https://github.com/eschnett/ShortFFTs.jl/actions)
* [](https://codecov.io/gh/eschnett/ShortFFTs.jl)
* [](https://juliaci.github.io/NanosoldierReports/pkgeval_badges/S/ShortFFTs.html)
The `ShortFFTs.jl` package provides a single function, `short_fft`,
that performs a Fast Fourier Transform (FFT) on its input. Different
from the `fft` functions provided via the `AbstractFFTs.jl` interface
(e.g. by `FFTW.jl`), `short_fft` is designed to be called for a single
transform at a time.
One major advantage of `short_fft` is that it can be efficiently
called from within a loop kernel, or from device code (e.g. a CUDA
kernel). This allows combining several operations into a single loop,
e.g. pre-processing input data by applying a window function.
The term "short" in the name refers to FFT lengths less than about
1000.
## API
The function `short_fft` expects its input as a tuple. The complex
output element type can be specified explicitly or inferred from the
input. The output of the FFT is returned as a tuple as well. This
makes sense because the call to `short_fft` is expected to be inlined,
and using arrays would introduce overhead.
```julia
using ShortFFTs
# short_fft(::Type{T<:Complex}, input::Tuple)::Tuple
output = short_fft(T, input)
# short_fft(input::Tuple)::Tuple
output = short_fft(input)
# short_fft(input::AbstractVector)::Tuple
output = short_fft(input)
```
`short_fft` can be called by device code, e.g. in a CUDA kernel.
`short_fft` is a "generated function" that is auto-generated at run
time for each output type and tuple length. The first call to
`short_fft` for a particular combination of argument types is much
more expensive than subsequent calls because Julia needs to generate
the optimized code.
Since `short_fft` is implemented in pure Julia, it can work with any
(complex) type. The implementation uses local variables as scratch
storage, and very large transforms are thus likely to be inefficient.
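As a concrete sketch of the API above, here is a 4-point transform; the expected values are the textbook DFT of `(1, 2, 3, 4)`:

```julia
using ShortFFTs

# Real input is promoted to a complex output tuple.
Y = short_fft((1.0, 2.0, 3.0, 4.0))
# Y ≈ (10 + 0im, -2 + 2im, -2 + 0im, -2 - 2im)
```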
## Algorithm
`ShortFFTs.jl` implements the [Cooley-Tukey FFT
algorithm](https://en.wikipedia.org/wiki/Cooley–Tukey_FFT_algorithm)
as described on Wikipedia. It has been tested against `FFTW.jl`.
Transforms of any length are supported. Lengths without large prime
factors (especially powers of two) are significantly more efficient.
`256` is a very good length, `257` is a very bad one (since it is a
prime number).
`short_fft` is not parallelized at all. It is expected that the
surrounding code is parallelized if that makes sense. For long
transforms `AbstractFFTs.jl` will provide better performance.
## Example
This example first expands its input from 4 to 8 points by setting the
additional points to zero and then applies an FFT.
```julia
input = randn(Complex{Float32}, (32, 4))
# Use ShortFFTs: We can write an efficient loop kernel
output = Array{Complex{Float32}}(undef, (32, 8))
for i in 1:32
X = (input[i, 1], input[i, 2], input[i, 3], input[i, 4], 0, 0, 0, 0)
Y = short_fft(X)
for j in 1:8
output[i, j] = Y[j]
end
end
```
| ShortFFTs | https://github.com/eschnett/ShortFFTs.jl.git |
|
[
"MIT"
] | 1.0.0 | 226a80838f99e0b9c92ad2b322141f5a8ded6a0f | docs | 55 | # ShortFFTs.jl
```@autodocs
Modules = [ShortFFTs]
```
| ShortFFTs | https://github.com/eschnett/ShortFFTs.jl.git |
|
[
"MIT"
] | 0.0.2 | fb8d934ca061f0df10ef1d127a75f7b7020021f2 | code | 4273 | # This file is part of the Steganography package
# (http://github.com/davidssmith/Steganography.jl).
#
# The MIT License (MIT)
#
# Copyright (c) 2017 David Smith
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
module Steganography
using Compat
export embed, extract, setlastbits, getlastbits, setlast7, setlast8,
getlast7, getlast8
const version = v"0.0.2"
# Interleave the real and imaginary parts of a complex array into a real array of twice the length
function flattencomplex{T<:Complex,N}(x::Array{T,N})
if T == Complex128
T2 = Float64
elseif T == Complex64
T2 = Float32
elseif T == Complex32
T2 = Float16
else
error("type $T unhandled in Steganography.flattencomplex()")
end
y = Array{T2}(2length(x))
for k in 1:length(x)
y[2k-1] = real(x[k])
y[2k] = imag(x[k])
end
return y
end
# Inverse of `flattencomplex`: pair consecutive reals back into complex numbers
function unflattencomplex{T<:Real,N}(y::Array{T,N})
if T == Float64
T2 = Complex128
elseif T == Float32
T2 = Complex64
elseif T == Float16
T2 = Complex32
else
error("type $T unhandled in Steganography.unflattencomplex()")
end
x = Array{T2}(div(length(y),2))
for k in 1:length(x)
x[k] = complex(y[2k-1],y[2k])
end
return x
end
# Replace the lowest `nbits` bits of the integer `i` with the low bits of `n`
@compat function setlastbits{T<:Integer}(i::T, n::UInt8, nbits::UInt8)
S = typeof(i)
j = (i >> nbits) << nbits
j | (n & ((S(1) << nbits) - S(1)))
end
@compat setlastbits{T<:AbstractFloat}(x::T, n::UInt8, nbits::UInt8) = reinterpret(T, setlastbits(reinterpret(Unsigned, x), n, nbits))
@compat setlast8{T}(x::T, n::UInt8) = setlastbits(x, n, UInt8(8))
@compat setlast7{T}(x::T, n::UInt8) = setlastbits(x, n, UInt8(7))
# Extract the lowest `nbits` bits of `i` as a `UInt8`
@compat function getlastbits{T}(i::T, nbits::UInt8)
S = typeof(i)
return UInt8(i & ((S(1) << nbits) - S(1)))
end
@compat getlastbits{T<:AbstractFloat}(x::T, nbits::UInt8) = getlastbits(reinterpret(Unsigned, x), nbits)
@compat getlast8{T<:AbstractFloat}(x::T) = UInt8(reinterpret(Unsigned, x) & 0xff)
@compat getlast8{T}(x::T) = UInt8(x & 0xff) # catch all
@compat getlast7{T<:AbstractFloat}(x::T) = UInt8(reinterpret(Unsigned, x) & 0x7f)
@compat getlast7{T}(x::T) = UInt8(x & 0x7f) # catch all
# Embed the bytes of `text` into the lowest 7 bits of successive elements of `data`,
# appending an ASCII 0x04 (end of transmission) terminator when there is room
@compat function embed{T,N}(data::Array{T,N}, text::Array{UInt8,1}; ignorenonascii::Bool=true)
if T <: Complex
data = flattencomplex(data)
end
@assert length(text) <= length(data)
y = copy(data) # work on a copy so the input array is left unmodified
for j in 1:length(text)
@assert text[j] != 0x04
if !ignorenonascii
@assert text[j] <= 0x7f
end
if text[j] > 0x7f
println(text[j], " ", Char(text[j]), " ", hex(text[j]))
y[j] = setlast7(data[j], UInt8(0))
else
y[j] = setlast7(data[j], text[j])
end
end
if length(text) < length(data)
y[length(text)+1] = setlast7(data[length(text)+1], 0x04) # ASCII 0x04 means 'end of transmission'
end
if T <: Complex
return unflattencomplex(y)
else
return y
end
end
@compat function extract{T,N}(s::Array{T,N})
if T <: Complex
s = flattencomplex(s)
end
t = Array{UInt8}(length(s))
k = 1
while k <= length(t)   # stop at the 0x04 terminator or, failing that, the end of the data
t[k] = getlast7(s[k])
if t[k] == 0x04
break
end
k += 1
end
t[1:k-1]
end
end
| Steganography | https://github.com/davidssmith/Steganography.jl.git |
|
[
"MIT"
] | 0.0.2 | fb8d934ca061f0df10ef1d127a75f7b7020021f2 | code | 1238 |
include("../src/Steganography.jl")
VERSION > v"0.7-" ? using Test : using Base.Test
@testset "steganography" begin
U = read("reliance.txt")
n = length(U)
@testset "$S in $T" for S in [UInt8], T in [Int32, Int64, UInt32, UInt64,
Float16, Float32, Float64]
A = rand(T, n)
for j in 1:n
r = Steganography.setlast8(A[j], U[j])
s = Steganography.setlastbits(A[j], U[j], UInt8(8))
u = Steganography.setlast7(A[j], U[j])
v = Steganography.setlastbits(A[j], U[j], UInt8(7))
@test r == s
@test u == v
@test Steganography.getlast8(u) == Steganography.getlastbits(u, UInt8(8))
@test Steganography.getlast7(u) == Steganography.getlastbits(u, UInt8(7))
@test U[j] == Steganography.getlast7(u)
@test U[j] == Steganography.getlast8(r)
end
A = rand(T, n + 2)
B = Steganography.embed(A, U)
V = Steganography.extract(B)
@test U == V
end
@testset "$S in $T" for S in [UInt8], T in [Complex32, Complex64, Complex128]
A = rand(T, n + 2)
B = Steganography.embed(A, U)
V = Steganography.extract(B)
@test U == V
end
end
| Steganography | https://github.com/davidssmith/Steganography.jl.git |
|
[
"MIT"
] | 0.0.2 | fb8d934ca061f0df10ef1d127a75f7b7020021f2 | docs | 1071 | # Steganography.jl
[](https://travis-ci.org/davidssmith/Steganography.jl)
[](https://ci.appveyor.com/project/davidssmith/steganography-jl/branch/master)
[](https://coveralls.io/github/davidssmith/Steganography.jl?branch=master)
Introduction
------------
Steganography methods for embedding text into the least significant bits of arbitrary data.
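A minimal sketch of the workflow (the carrier array and message here are arbitrary; the carrier must be at least one element longer than the message so the ASCII `0x04` terminator fits):

```julia
using Steganography

data = rand(Float64, 64)                 # carrier signal
msg  = Vector{UInt8}("hello")            # payload bytes

stego = Steganography.embed(data, msg)   # hide one 7-bit character per element
String(Steganography.extract(stego))     # -> "hello"
```

The carrier values change only in their lowest bits, so `stego ≈ data` still holds element-wise.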
Getting Help
------------
For help, file an issue on the [bug tracker](http://github.com/davidssmith/Steganography.jl/issues).
Third party help is welcome and can be contributed through pull requests.
Authors
-------
David S. Smith [<[email protected]>](mailto:[email protected])
Disclaimer
----------
This code comes with no warranty. Use at your own risk. If it breaks, let us know, and we'll try to help you fix it.
| Steganography | https://github.com/davidssmith/Steganography.jl.git |
|
[
"MIT"
] | 3.0.0 | d13633f779365415c877f5ffe07db8803ad73fba | code | 157 | module SnoopCompileCore
using Core: MethodInstance, CodeInfo
include("snoop_inference.jl")
include("snoop_invalidations.jl")
include("snoop_llvm.jl")
end
| SnoopCompile | https://github.com/timholy/SnoopCompile.jl.git |
|
[
"MIT"
] | 3.0.0 | d13633f779365415c877f5ffe07db8803ad73fba | code | 5346 | export @snoop_inference
struct InferenceTiming
mi_info::Core.Compiler.Timings.InferenceFrameInfo
inclusive_time::Float64
exclusive_time::Float64
end
"""
inclusive(frame)
Return the time spent inferring `frame` and its callees.
"""
inclusive(it::InferenceTiming) = it.inclusive_time
"""
exclusive(frame)
Return the time spent inferring `frame`, not including the time needed for any of its callees.
"""
exclusive(it::InferenceTiming) = it.exclusive_time
struct InferenceTimingNode
mi_timing::InferenceTiming
start_time::Float64
children::Vector{InferenceTimingNode}
bt
parent::InferenceTimingNode
# Root constructor
InferenceTimingNode(mi_timing::InferenceTiming, start_time, @nospecialize(bt)) =
new(mi_timing, start_time, InferenceTimingNode[], bt)
# Child constructor
function InferenceTimingNode(mi_timing::InferenceTiming, start_time, @nospecialize(bt), parent::InferenceTimingNode)
child = new(mi_timing, start_time, InferenceTimingNode[], bt, parent)
push!(parent.children, child)
return child
end
end
inclusive(node::InferenceTimingNode) = inclusive(node.mi_timing)
exclusive(node::InferenceTimingNode) = exclusive(node.mi_timing)
InferenceTiming(node::InferenceTimingNode) = node.mi_timing
function InferenceTimingNode(t::Core.Compiler.Timings.Timing)
ttree = timingtree(t)
it, start_time, ttree_children = ttree::Tuple{InferenceTiming, Float64, Vector{Any}}
root = InferenceTimingNode(it, start_time, t.bt)
addchildren!(root, t, ttree_children)
return root
end
# Compute inclusive times and store as a temporary tree.
# To allow InferenceTimingNode to be both bidirectional and immutable, we need to create the parent node before its child nodes.
# However, each node stores its inclusive time, which can only be computed efficiently from the leaves up (children before parents).
# This performs the inclusive-time computation, storing the result as a "temporary tree" that can be used during
# InferenceTimingNode creation (see `addchildren!`).
function timingtree(t::Core.Compiler.Timings.Timing)
time, start_time = t.time/10^9, t.start_time/10^9
incl_time = time
tchildren = []
for child in t.children
tchild = timingtree(child)
push!(tchildren, tchild)
incl_time += inclusive(tchild[1])
end
return (InferenceTiming(t.mi_info, incl_time, time), start_time, tchildren)
end
function addchildren!(parent::InferenceTimingNode, t::Core.Compiler.Timings.Timing, ttrees)
for (child, ttree) in zip(t.children, ttrees)
it, start_time, ttree_children = ttree::Tuple{InferenceTiming, Float64, Vector{Any}}
node = InferenceTimingNode(it, start_time, child.bt, parent)
addchildren!(node, child, ttree_children)
end
end
function start_deep_timing()
Core.Compiler.Timings.reset_timings()
Core.Compiler.__set_measure_typeinf(true)
end
function stop_deep_timing()
Core.Compiler.__set_measure_typeinf(false)
Core.Compiler.Timings.close_current_timer()
end
function finish_snoop_inference()
return InferenceTimingNode(Core.Compiler.Timings._timings[1])
end
function _snoop_inference(cmd::Expr)
return quote
start_deep_timing()
try
$(esc(cmd))
finally
stop_deep_timing()
end
finish_snoop_inference()
end
end
"""
tinf = @snoop_inference commands;
Produce a profile of julia's type inference, recording the amount of time spent
inferring every `MethodInstance` processed while executing `commands`. Each
fresh entrance to type inference (whether executed directly in `commands` or
because a call was made by runtime-dispatch) also collects a backtrace so the
caller can be identified.
`tinf` is a tree, each node containing data on a particular inference "frame"
(the method, argument-type specializations, parameters, and even any
constant-propagated values). Each reports the
[`exclusive`](@ref)/[`inclusive`](@ref) times, where the exclusive time
corresponds to the time spent inferring this frame in and of itself, whereas the
inclusive time includes the time needed to infer all the callees of this frame.
The top-level node in this profile tree is `ROOT`. Uniquely, its exclusive time
corresponds to the time spent _not_ in julia's type inference (codegen,
llvm_opt, runtime, etc).
Working with `tinf` effectively requires loading `SnoopCompile`.
!!! warning
Note the semicolon `;` at the end of the `@snoop_inference` macro call.
Because `SnoopCompileCore` is not permitted to invalidate any code, it cannot define
the `Base.show` methods that pretty-print `tinf`. Defer inspection of `tinf`
until `SnoopCompile` has been loaded.
# Example
```jldoctest; setup=:(using SnoopCompileCore), filter=r"([0-9]*\\.?[0-9]+([eE][-+]?[0-9]+)?/[0-9]*\\.?[0-9]+([eE][-+]?[0-9]+)?|\\d direct)"
julia> tinf = @snoop_inference begin
sort(rand(100)) # Evaluate some code and profile julia's type inference
end;
```
"""
macro snoop_inference(cmd)
return _snoop_inference(cmd)
end
# These are okay to come at the top-level because we're only measuring inference, and
# inference results will be cached in a `.ji` file.
precompile(start_deep_timing, ())
precompile(stop_deep_timing, ())
precompile(finish_snoop_inference, ())
| SnoopCompile | https://github.com/timholy/SnoopCompile.jl.git |
|
[
"MIT"
] | 3.0.0 | d13633f779365415c877f5ffe07db8803ad73fba | code | 1328 | export @snoop_invalidations
"""
invs = @snoop_invalidations expr
Capture method cache invalidations triggered by evaluating `expr`.
`invs` is a sequence of invalidated `Core.MethodInstance`s together with "explanations," consisting
of integers (encoding depth) and strings (documenting the source of an invalidation).
Unless you are working at a low level, you essentially always want to pass `invs`
directly to [`SnoopCompile.invalidation_trees`](@ref).
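A typical usage pattern (`SomePackage` here is a placeholder for whatever package you want to analyze):

```julia
using SnoopCompileCore
invs = @snoop_invalidations using SomePackage
using SnoopCompile            # load the analysis code after collecting `invs`
trees = invalidation_trees(invs)
```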
# Extended help
`invs` is in a format where the "reason" comes after the items.
Method deletion results in the sequence
[zero or more (mi, "invalidate_mt_cache") pairs..., zero or more (depth1 tree, loctag) pairs..., method, loctag] with loctag = "jl_method_table_disable"
where `mi` means a `MethodInstance`. `depth1` means a sequence starting at `depth=1`.
Method insertion results in the sequence
[zero or more (depth0 tree, sig) pairs..., same info as with delete_method except loctag = "jl_method_table_insert"]
The authoritative reference is Julia's own `src/gf.c` file.
"""
macro snoop_invalidations(expr)
quote
local invs = ccall(:jl_debug_method_invalidation, Any, (Cint,), 1)
Expr(:tryfinally,
$(esc(expr)),
ccall(:jl_debug_method_invalidation, Any, (Cint,), 0)
)
invs
end
end
| SnoopCompile | https://github.com/timholy/SnoopCompile.jl.git |
|
[
"MIT"
] | 3.0.0 | d13633f779365415c877f5ffe07db8803ad73fba | code | 2092 | export @snoop_llvm
using Serialization
"""
@snoop_llvm "func_names.csv" "llvm_timings.yaml" begin
# Commands to execute, in a new process
end
causes the julia compiler to log timing information for LLVM optimization during the
provided commands to the files "func_names.csv" and "llvm_timings.yaml". These files can
be used for the input to `SnoopCompile.read_snoop_llvm("func_names.csv", "llvm_timings.yaml")`.
The logs contain the amount of time spent optimizing each "llvm module", and information
about each module, where a module is a collection of functions being optimized together.
"""
macro snoop_llvm(flags, func_file, llvm_file, commands)
return :(snoop_llvm($(esc(flags)), $(esc(func_file)), $(esc(llvm_file)), $(QuoteNode(commands))))
end
macro snoop_llvm(func_file, llvm_file, commands)
return :(snoop_llvm(String[], $(esc(func_file)), $(esc(llvm_file)), $(QuoteNode(commands))))
end
function snoop_llvm(flags, func_file, llvm_file, commands)
println("Launching new julia process to run commands...")
# addprocs will run the unmodified version of julia, so we
# launch it as a command.
code_object = """
using Serialization
while !eof(stdin)
Core.eval(Main, deserialize(stdin))
end
"""
process = open(`$(Base.julia_cmd()) $flags --eval $code_object --project=$(Base.active_project())`, stdout, write=true)
serialize(process, quote
let func_io = open($func_file, "w"), llvm_io = open($llvm_file, "w")
ccall(:jl_dump_emitted_mi_name, Nothing, (Ptr{Nothing},), func_io.handle)
ccall(:jl_dump_llvm_opt, Nothing, (Ptr{Nothing},), llvm_io.handle)
try
$commands
finally
ccall(:jl_dump_emitted_mi_name, Nothing, (Ptr{Nothing},), C_NULL)
ccall(:jl_dump_llvm_opt, Nothing, (Ptr{Nothing},), C_NULL)
close(func_io)
close(llvm_io)
end
end
exit()
end)
wait(process)
println("done.")
nothing
end
| SnoopCompile | https://github.com/timholy/SnoopCompile.jl.git |
|
[
"MIT"
] | 3.0.0 | d13633f779365415c877f5ffe07db8803ad73fba | code | 1052 | using Documenter
using SnoopCompileCore
using SnoopCompile
import PyPlot # so that the visualizations.jl file is loaded
makedocs(
sitename = "SnoopCompile",
format = Documenter.HTML(
prettyurls = true,
),
modules = [SnoopCompileCore, SnoopCompile],
linkcheck = true, # the link check is slow, set to false if you're building frequently
# doctest = :fix,
warnonly=true, # delete when https://github.com/JuliaDocs/Documenter.jl/issues/2541 is fixed
pages = ["index.md",
"Basic tutorials" => ["tutorials/invalidations.md", "tutorials/snoop_inference.md", "tutorials/snoop_llvm.md", "tutorials/pgdsgui.md", "tutorials/jet.md"],
"Advanced tutorials" => ["tutorials/snoop_inference_analysis.md", "tutorials/snoop_inference_parcel.md"],
"Explanations" => ["explanations/tools.md", "explanations/gotchas.md", "explanations/fixing_inference.md"],
"reference.md",
]
)
deploydocs(
repo = "github.com/timholy/SnoopCompile.jl.git",
push_preview=true
)
| SnoopCompile | https://github.com/timholy/SnoopCompile.jl.git |
|
[
"MIT"
] | 3.0.0 | d13633f779365415c877f5ffe07db8803ad73fba | code | 808 | """
OptimizeMe is a module used to demonstrate how to make code more precompilable
and more resistant to invalidation. It has deliberate weaknesses in its design,
and the analysis and resolution of these weaknesses via `@snoop_inference` is
discussed in the documentation.
"""
module OptimizeMe
struct Container{T}
value::T
end
function lotsa_containers()
list = [1, 0x01, 0xffff, 2.0f0, 'a', [0], ("key", 42)]
cs = Container.(list)
println("lotsa containers:")
display(cs)
end
howbig(str::AbstractString) = length(str)
howbig(x::Char) = 1
howbig(x::Unsigned) = x
howbig(x::Real) = abs(x)
function abmult(r::Int, ys)
if r < 0
r = -r
end
return map(x -> howbig(r * x), ys)
end
function main()
lotsa_containers()
return abmult(rand(-5:5), rand(3))
end
end
| SnoopCompile | https://github.com/timholy/SnoopCompile.jl.git |
|
[
"MIT"
] | 3.0.0 | d13633f779365415c877f5ffe07db8803ad73fba | code | 1259 | """
OptimizeMeFixed is the "improved" version of OptimizeMe. See the file in this same directory for details.
"""
module OptimizeMeFixed
using PrecompileTools
struct Container{T}
value::T
end
Base.show(io::IO, c::Container) = print(io, "Container(", c.value, ")")
function lotsa_containers(io::IO)
list = [1, 0x01, 0xffff, 2.0f0, 'a', [0], ("key", 42)]
cs = Container{Any}.(list)
println(io, "lotsa containers:")
show(io, MIME("text/plain"), cs)
end
howbig(str::AbstractString) = length(str)
howbig(x::Char) = 1
howbig(x::Unsigned) = x
howbig(x::Real) = abs(x)
function abmult(r::Int, ys)
if r < 0
r = -r
end
let r = r # Julia #15276
return map(x -> howbig(r * x), ys)
end
end
function main()
lotsa_containers(stdout)
return abmult(rand(-5:5), rand(3))
end
@compile_workload begin
lotsa_containers(devnull) # use `devnull` to suppress output
abmult(rand(-5:5), rand(3))
end
# since `devnull` is not a `Base.TTY`--the standard type of `stdout`--let's also
# use an explicit `precompile` directive. (Note this does not trigger any visible output).
# This doesn't "follow" runtime dispatch but at least it precompiles the entry point.
precompile(lotsa_containers, (Base.TTY,))
end
| SnoopCompile | https://github.com/timholy/SnoopCompile.jl.git |
|
[
"MIT"
] | 3.0.0 | d13633f779365415c877f5ffe07db8803ad73fba | code | 1716 | using SnoopCompile
using SnoopCompile: countchildren
function hastv(typ)
isa(typ, UnionAll) && return true
if isa(typ, DataType)
for p in typ.parameters
hastv(p) && return true
end
end
return false
end
trees = invalidation_trees(@snoop_invalidations using Revise)
function summary(trees)
npartial = ngreater = nlesser = nambig = nequal = 0
for methinvs in trees
method = methinvs.method
for fn in (:mt_backedges, :backedges)
list = getfield(methinvs, fn)
for item in list
sig = nothing
if isa(item, Pair)
sig = item.first
root = item.second
else
sig = item.mi.def.sig
root = item
end
# if hastv(sig)
# npartial += countchildren(invtree)
# else
ms1, ms2 = method.sig <: sig, sig <: method.sig
if ms1 && !ms2
ngreater += countchildren(root)
elseif ms2 && !ms1
nlesser += countchildren(root)
elseif ms1 && ms2
nequal += countchildren(root)
else
# if hastv(sig)
# npartial += countchildren(root)
# else
nambig += countchildren(root)
# end
end
# end
end
end
end
@assert nequal == 0
println("$ngreater | $nlesser | $nambig |") # $npartial |")
end
summary(trees)
| SnoopCompile | https://github.com/timholy/SnoopCompile.jl.git |
|
[
"MIT"
] | 3.0.0 | d13633f779365415c877f5ffe07db8803ad73fba | code | 1236 | module CthulhuExt
import Cthulhu
using Core: MethodInstance
using SnoopCompile: InstanceNode, TriggerNode, Suggested, InferenceTrigger, countchildren, callingframe, callerinstance
# Originally from invalidations.jl
Cthulhu.backedges(node::InstanceNode) = sort(node.children; by=countchildren, rev=true)
Cthulhu.method(node::InstanceNode) = Cthulhu.method(node.mi)
Cthulhu.specTypes(node::InstanceNode) = Cthulhu.specTypes(node.mi)
Cthulhu.instance(node::InstanceNode) = node.mi
# Originally from parcel_snoop_inference.jl
Cthulhu.descend(itrig::InferenceTrigger; kwargs...) = Cthulhu.descend(callerinstance(itrig); kwargs...)
Cthulhu.instance(itrig::InferenceTrigger) = MethodInstance(itrig.node)
Cthulhu.method(itrig::InferenceTrigger) = Method(itrig.node)
Cthulhu.specTypes(itrig::InferenceTrigger) = Cthulhu.specTypes(Cthulhu.instance(itrig))
Cthulhu.backedges(itrig::InferenceTrigger) = (itrig.callerframes,)
Cthulhu.nextnode(itrig::InferenceTrigger, edge) = (ret = callingframe(itrig); return isempty(ret.callerframes) ? nothing : ret)
Cthulhu.ascend(node::TriggerNode) = Cthulhu.ascend(node.itrig)
Cthulhu.ascend(s::Suggested) = Cthulhu.ascend(s.itrig)
end
| SnoopCompile | https://github.com/timholy/SnoopCompile.jl.git |
|
[
"MIT"
] | 3.0.0 | d13633f779365415c877f5ffe07db8803ad73fba | code | 3095 | module JETExt
@static if isdefined(Base, :get_extension)
import SnoopCompile: report_callee, report_caller, report_callees
using SnoopCompile: SnoopCompile, InferenceTrigger, callerinstance
using Cthulhu: specTypes
using JET: report_call, get_reports
else
import ..SnoopCompile: report_callee, report_caller, report_callees
using ..SnoopCompile: SnoopCompile, InferenceTrigger, callerinstance
using ..Cthulhu: specTypes
using ..JET: report_call, get_reports
end
"""
report_callee(itrig::InferenceTrigger)
Return the `JET.report_call` for the callee in `itrig`.
"""
SnoopCompile.report_callee(itrig::InferenceTrigger; jetconfigs...) = report_call(specTypes(itrig); jetconfigs...)
"""
report_caller(itrig::InferenceTrigger)
Return the `JET.report_call` for the caller in `itrig`.
"""
SnoopCompile.report_caller(itrig::InferenceTrigger; jetconfigs...) = report_call(specTypes(callerinstance(itrig)); jetconfigs...)
"""
report_callees(itrigs)
Filter `itrigs` for those with a non-passing `JET` report, returning the list of `itrig => report` pairs.
# Examples
```jldoctest jetfib; setup=(using SnoopCompile, JET), filter=[r"\\d direct children", r"[0-9]*\\.?[0-9]+([eE][-+]?[0-9]+)?/[0-9]*\\.?[0-9]+([eE][-+]?[0-9]+)?"]
julia> fib(n::Integer) = n ≤ 2 ? n : fib(n-1) + fib(n-2);
julia> function fib(str::String)
n = length(str)
return fib(m) # error is here
end
fib (generic function with 2 methods)
julia> fib(::Dict) = 0; fib(::Vector) = 0;
julia> list = [5, "hello"];
julia> mapfib(list) = map(fib, list)
mapfib (generic function with 1 method)
julia> tinf = @snoop_inference try mapfib(list) catch end
InferenceTimingNode: 0.049825/0.071476 on Core.Compiler.Timings.ROOT() with 5 direct children
julia> @report_call mapfib(list)
No errors detected
```
JET did not catch the error because the call to `fib` is hidden behind runtime dispatch.
However, when captured by `@snoop_inference`, we get
```jldoctest jetfib; filter=[r"@ .*", r"REPL\\[\\d+\\]|none"]
julia> report_callees(inference_triggers(tinf))
1-element Vector{Pair{InferenceTrigger, JET.JETCallResult{JET.JETAnalyzer{JET.BasicPass{typeof(JET.basic_function_filter)}}, Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}}}:
Inference triggered to call fib(::String) from iterate (./generator.jl:47) inlined into Base.collect_to!(::Vector{Int64}, ::Base.Generator{Vector{Any}, typeof(fib)}, ::Int64, ::Int64) (./array.jl:782) => ═════ 1 possible error found ═════
┌ @ none:3 fib(m)
│ variable `m` is not defined
└──────────
```
"""
function SnoopCompile.report_callees(itrigs; jetconfigs...)
function rr(itrig)
rpt = try
report_callee(itrig; jetconfigs...)
catch err
@warn "skipping $itrig due to report_callee error" exception=err
nothing
end
return itrig => rpt
end
hasreport((itrig, report)) = report !== nothing && !isempty(get_reports(report))
return [itrigrpt for itrigrpt in map(rr, itrigs) if hasreport(itrigrpt)]
end
end # module JETExt
| SnoopCompile | https://github.com/timholy/SnoopCompile.jl.git |
|
[
"MIT"
] | 3.0.0 | d13633f779365415c877f5ffe07db8803ad73fba | code | 1990 | module SCPrettyTablesExt
using SnoopCompile
using SnoopCompile: countchildren
import PrettyTables
function SnoopCompile.report_invalidations(io::IO = stdout;
invalidations,
n_rows::Int = 10,
process_filename::Function = x -> x,
)
@assert n_rows ≥ 0
trees = reverse(invalidation_trees(invalidations))
n_total_invalidations = length(uinvalidated(invalidations))
# TODO: Merge `@info` statement with one below
invs_per_method = map(trees) do methinvs
countchildren(methinvs)
end
n_invs_total = length(invs_per_method)
if n_invs_total == 0
@info "Zero invalidations! 🎉"
return nothing
end
nr = n_rows == 0 ? n_invs_total : n_rows
truncated_invs = nr < n_invs_total
sum_invs = sum(invs_per_method)
invs_per_method = invs_per_method[1:min(nr, n_invs_total)]
trees = trees[1:min(nr, n_invs_total)]
trunc_msg = truncated_invs ? " (showing $nr functions) " : ""
@info "$n_total_invalidations methods invalidated for $n_invs_total functions$trunc_msg"
n_invalidations_percent = map(invs_per_method) do inv
inv_perc = inv / sum_invs
Int(round(inv_perc*100, digits = 0))
end
meth_name = map(trees) do inv
"$(inv.method.name)"
end
fileinfo = map(trees) do inv
"$(process_filename(string(inv.method.file))):$(inv.method.line)"
end
header = (
["<file name>:<line number>", "Function Name", "Invalidations", "Invalidations %"],
["", "", "", "(xᵢ/∑x)"],
)
table_data = hcat(
fileinfo,
meth_name,
invs_per_method,
n_invalidations_percent,
)
PrettyTables.pretty_table(
io,
table_data;
header,
formatters = PrettyTables.ft_printf("%s", 2:2),
header_crayon = PrettyTables.crayon"yellow bold",
subheader_crayon = PrettyTables.crayon"green bold",
crop = :none,
alignment = [:l, :c, :c, :c],
)
end
end
| SnoopCompile | https://github.com/timholy/SnoopCompile.jl.git |
|
[
"MIT"
] | 3.0.0 | d13633f779365415c877f5ffe07db8803ad73fba | code | 2736 | module SCPyPlotExt
using SnoopCompile
using SnoopCompile: MethodLoc, InferenceTimingNode, PGDSData, lookups, lookup_firstip!
using PyPlot: PyPlot, plt, PyCall
using Profile
get_bystr(@nospecialize(by)) = by === inclusive ? "Inclusive" :
by === exclusive ? "Exclusive" : error("unknown ", by)
function pgdsgui(ax::PyCall.PyObject, ridata::AbstractVector{Pair{Union{Method,MethodLoc},PGDSData}}; bystr, consts, markersz=25, linewidth=0.5, t0 = 0.001, interactive::Bool=true, kwargs...)
methodref = Ref{Union{Method,MethodLoc}}() # returned to the user for inspection of clicked methods
function onclick(event)
xc, yc = event.xdata, event.ydata
(xc === nothing || yc === nothing) && return
# Find dot closest to the click position
idx = argmin((log.(rts .+ t0) .- log(xc)).^2 + (log.(its .+ t0) .- log(yc)).^2)
m = meths[idx]
methodref[] = m # store the clicked method
println(m, " ($(nspecs[idx]) specializations)")
end
# Unpack the inputs into a form suitable for plotting
meths, rts, its, nspecs, ecols = Union{Method,MethodLoc}[], Float64[], Float64[], Int[], Tuple{Float64,Float64,Float64}[]
for (m, d) in ridata # (rt, trtd, it, nspec)
push!(meths, m)
push!(rts, d.trun)
push!(its, d.tinf)
push!(nspecs, d.nspec)
push!(ecols, (d.trun > 0 ? d.trtd/d.trun : 0.0, 0.0, 0.0))
end
sp = sortperm(nspecs)
meths, rts, its, nspecs, ecols = meths[sp], rts[sp], its[sp], nspecs[sp], ecols[sp]
# Plot
# Add t0 to each entry to handle times of zero in the log-log plot
smap = ax.scatter(rts .+ t0, its .+ t0, markersz, nspecs; norm=plt.matplotlib.colors.LogNorm(), edgecolors=ecols, linewidths=linewidth, kwargs...)
ax.set_xscale("log")
ax.set_yscale("log")
ax.set_xlabel("Run time (self) + $t0 (s)")
ax.set_ylabel("$bystr inference time + $t0 (s)")
# ax.set_aspect("equal")
ax.axis("square")
constsmode = consts ? "incl." : "excl."
plt.colorbar(smap, label = "# specializations ($constsmode consts)", ax=ax)
if interactive
ax.get_figure().canvas.mpl_connect("button_press_event", onclick)
end
return methodref
end
function pgdsgui(ax::PyCall.PyObject, args...; consts::Bool=true, by=inclusive, kwargs...)
pgdsgui(ax, prep_ri(args...; consts, by, kwargs...); bystr=get_bystr(by), consts, kwargs...)
end
function pgdsgui(args...; kwargs...)
fig, ax = plt.subplots()
pgdsgui(ax, args...; kwargs...), ax
end
function prep_ri(tinf::InferenceTimingNode, pdata=Profile.fetch(); lidict=lookups, consts, by, kwargs...)
lookup_firstip!(lookups, pdata)
return runtime_inferencetime(tinf, pdata; lidict, consts, by)
end
end
| SnoopCompile | https://github.com/timholy/SnoopCompile.jl.git |
|
[
"MIT"
] | 3.0.0 | d13633f779365415c877f5ffe07db8803ad73fba | code | 4845 | """
SnoopCompile allows you to collect and analyze data on actions of Julia's compiler.
The capabilities depend on your version of Julia; in general, the capabilities that
require more recent Julia versions are also the most powerful and useful. When possible,
you should prefer them above the more limited tools available on earlier versions.
### Invalidations
- `@snoop_invalidations`: record invalidations
- `uinvalidated`: collect unique method invalidations from `@snoop_invalidations`
- `invalidation_trees`: organize invalidation data into trees
- `filtermod`: select trees that invalidate methods in particular modules
- `findcaller`: find a path through invalidation trees reaching a particular method
- `ascend`: interactive analysis of an invalidation tree (with Cthulhu.jl)
### LLVM
- `@snoop_llvm`: record data about the actions of LLVM, the library used to generate native code
- `read_snoop_llvm`: parse data collected by `@snoop_llvm`
### "Deep" data on inference
- `@snoop_inference`: record more extensive data about type-inference (`parcel` and `write` work on these data, too)
- `flamegraph`: prepare a visualization from `@snoop_inference`
- `flatten`: reduce the tree format recorded by `@snoop_inference` to list format
- `accumulate_by_source`: aggregate list items by their source
- `inference_triggers`: extract data on the triggers of inference
- `callerinstance`, `callingframe`, `skiphigherorder`, and `InferenceTrigger`: manipulate stack frames from `inference_triggers`
- `ascend`: interactive analysis of an inference-triggering call chain (with Cthulhu.jl)
- `runtime_inferencetime`: profile-guided deoptimization
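For example, a typical inference-profiling session looks like this (`my_workload` is a placeholder for your own code):

```julia
using SnoopCompileCore, SnoopCompile
tinf = @snoop_inference my_workload();   # note the trailing semicolon
fg = flamegraph(tinf)                    # visualize, e.g., with ProfileView.view(fg)
itrigs = inference_triggers(tinf)        # see what triggered fresh inference
```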
"""
module SnoopCompile
using SnoopCompileCore
# More exports are defined below in the conditional loading sections
using Core: MethodInstance, CodeInfo
using InteractiveUtils
using Serialization
using Printf
using OrderedCollections
import YAML # For @snoop_llvm
using Base: specializations
# Parcel Regex
const anonrex = r"#{1,2}\d+#{1,2}\d+" # detect anonymous functions
const kwrex = r"^#kw##(.*)$|^#([^#]*)##kw$" # detect keyword-supplying functions (prior to Core.kwcall)
const kwbodyrex = r"^##(\w[^#]*)#\d+" # detect keyword body methods
const genrex = r"^##s\d+#\d+$" # detect generators for @generated functions
const innerrex = r"^#[^#]+#\d+" # detect inner functions
include("utils.jl")
# This is for SnoopCompile's own directives. You don't want to call this from packages because then
# SnoopCompile becomes a dependency of your package. Instead, make sure that `writewarnpcfail` is set to `true`
# in `SnoopCompile.write` and a copy of this macro will be placed at the top
# of your precompile files.
macro warnpcfail(ex::Expr)
modl = __module__
file = __source__.file === nothing ? "?" : String(__source__.file)
line = __source__.line
quote
$(esc(ex)) || @warn """precompile directive
$($(Expr(:quote, ex)))
failed. Please report an issue in $($modl) (after checking for duplicates) or remove this directive.""" _file=$file _line=$line
end
end
# Parcel
include("parcel_snoop_inference.jl")
include("inference_demos.jl")
export exclusive, inclusive, flamegraph, flatten, accumulate_by_source, collect_for, runtime_inferencetime, staleinstances
export InferenceTrigger, inference_triggers, callerinstance, callingframe, skiphigherorder, trigger_tree, suggest, isignorable
export report_callee, report_caller, report_callees
include("parcel_snoop_llvm.jl")
export read_snoop_llvm
include("invalidations.jl")
export uinvalidated, invalidation_trees, filtermod, findcaller
include("invalidation_and_inference.jl")
export precompile_blockers
# Write
include("write.jl")
# For PyPlot extension
"""
methodref, ax = pgdsgui(tinf::InferenceTimingNode; consts::Bool=true, by=inclusive)
methodref = pgdsgui(ax, tinf::InferenceTimingNode; kwargs...)
Create a scatter plot comparing:
- (vertical axis) the inference time for all instances of each Method, as captured by `tinf`;
- (horizontal axis) the run time cost, as estimated by capturing a `@profile` before calling this function.
Each dot corresponds to a single method. The face color encodes the number of times that method was inferred,
and the edge color corresponds to the fraction of the runtime spent on runtime dispatch (black is 0%, bright red is 100%).
Clicking on a dot prints the method (or location, if inlined) to the REPL, and sets `methodref[]` to
that method.
`ax` is the pyplot axis of the scatterplot.
!!! compat
`pgdsgui` is made available by a package extension; you must load both SnoopCompile and PyPlot for this function to be defined.
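A sketch of a typical session (`my_workload` is a placeholder; run `@profile` on the workload before calling `pgdsgui` so the runtime data are available):

```julia
using SnoopCompile, PyPlot, Profile
tinf = @snoop_inference my_workload();
@profile my_workload()
methodref, ax = pgdsgui(tinf)   # click a dot to print its method and store it in methodref[]
```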
"""
function pgdsgui end
export pgdsgui
# For PrettyTables extension
function report_invalidations end
export report_invalidations
end # module
| SnoopCompile | https://github.com/timholy/SnoopCompile.jl.git |
|
[
"MIT"
] | 3.0.0 | d13633f779365415c877f5ffe07db8803ad73fba | code | 4809 | """
tinf = SnoopCompile.flatten_demo()
A simple demonstration of [`@snoop_inference`](@ref). This demo defines a module
```julia
module FlattenDemo
struct MyType{T} x::T end
extract(y::MyType) = y.x
function packintype(x)
y = MyType{Int}(x)
return dostuff(y)
end
function domath(x)
y = x + x
return y*x + 2*x + 5
end
dostuff(y) = domath(extract(y))
end
```
It then "warms up" (forces inference on) all of Julia's `Base` methods needed for `domath`,
to ensure that these MethodInstances do not need to be inferred when we collect the data.
It then returns the results of
```julia
@snoop_inference FlattenDemo.packintype(1)
```
See [`flatten`](@ref) for an example usage.
"""
function flatten_demo()
eval(:(
module FlattenDemo
struct MyType{T} x::T end
extract(y::MyType) = y.x
function packintype(x)
y = MyType{Int}(x)
return dostuff(y)
end
function domath(x)
y = x + x
return y*x + 2*x + 5
end
dostuff(y) = domath(extract(y))
end
))
z = (1 + 1)*1 + 2*1 + 5
return @snoop_inference Base.invokelatest(FlattenDemo.packintype, 1)
end
"""
tinf = SnoopCompile.itrigs_demo()
A simple demonstration of collecting inference triggers. This demo defines a module
```julia
module ItrigDemo
@noinline double(x) = 2x
@inline calldouble1(c) = double(c[1])
calldouble2(cc) = calldouble1(cc[1])
calleach(ccs) = (calldouble2(ccs[1]), calldouble2(ccs[2]))
end
```
It then "warms up" (forces inference on) `calldouble2(::Vector{Vector{Any}})`, `calldouble1(::Vector{Any})`, `double(::Int)`:
```julia
cc = [Any[1]]
ItrigDemo.calleach([cc,cc])
```
Then it collects and returns inference data using
```julia
cc1, cc2 = [Any[0x01]], [Any[1.0]]
@snoop_inference ItrigDemo.calleach([cc1, cc2])
```
This does not require any new inference for `calldouble2` or `calldouble1`, but it does force inference on `double` with two new types.
See [`inference_triggers`](@ref) to see what gets collected and returned.
"""
function itrigs_demo()
eval(:(
module ItrigDemo
@noinline double(x) = 2x
@inline calldouble1(c) = double(c[1])
calldouble2(cc) = calldouble1(cc[1])
calleach(ccs) = (calldouble2(ccs[1]), calldouble2(ccs[2]))
end
))
# Call once to infer `calldouble2(::Vector{Vector{Any}})`, `calldouble1(::Vector{Any})`, `double(::Int)`
cc = [Any[1]]
Base.invokelatest(ItrigDemo.calleach, [cc,cc])
# Now use UInt8 & Float64 elements to force inference on double, without forcing new inference on its callers
cc1, cc2 = [Any[0x01]], [Any[1.0]]
return @snoop_inference Base.invokelatest(ItrigDemo.calleach, [cc1, cc2])
end
"""
tinf = SnoopCompile.itrigs_higherorder_demo()
A simple demonstration of handling higher-order methods with inference triggers. This demo defines a module
```julia
module ItrigHigherOrderDemo
double(x) = 2x
@noinline function mymap!(f, dst, src)
for i in eachindex(dst, src)
dst[i] = f(src[i])
end
return dst
end
@noinline mymap(f::F, src) where F = mymap!(f, Vector{Any}(undef, length(src)), src)
callmymap(src) = mymap(double, src)
end
```
The key feature of this set of definitions is that the function `double` gets passed as an argument
through `mymap` and `mymap!` (the latter are [higher-order functions](https://en.wikipedia.org/wiki/Higher-order_function)).
It then "warms up" (forces inference on) `callmymap(::Vector{Any})`, `mymap(::typeof(double), ::Vector{Any})`,
`mymap!(::typeof(double), ::Vector{Any}, ::Vector{Any})` and `double(::Int)`:
```julia
ItrigHigherOrderDemo.callmymap(Any[1, 2])
```
Then it collects and returns inference data using
```julia
@snoop_inference ItrigHigherOrderDemo.callmymap(Any[1.0, 2.0])
```
which forces inference for `double(::Float64)`.
See [`skiphigherorder`](@ref) for an example using this demo.
"""
function itrigs_higherorder_demo()
eval(:(
module ItrigHigherOrderDemo
double(x) = 2x
@noinline function mymap!(f, dst, src)
for i in eachindex(dst, src)
dst[i] = f(src[i])
end
return dst
end
@noinline mymap(f::F, src) where F = mymap!(f, Vector{Any}(undef, length(src)), src)
callmymap(src) = mymap(double, src)
end
))
# Call once to infer `callmymap(::Vector{Any})`, `mymap(::typeof(double), ::Vector{Any})`,
# `mymap!(::typeof(double), ::Vector{Any}, ::Vector{Any})` and `double(::Int)`
Base.invokelatest(ItrigHigherOrderDemo.callmymap, Any[1, 2])
src = Any[1.0, 2.0] # double not yet inferred for Float64
return @snoop_inference Base.invokelatest(ItrigHigherOrderDemo.callmymap, src)
end
# Combining invalidation and snoop_inference data
struct StaleTree
method::Method
reason::Symbol # :inserting or :deleting
mt_backedges::Vector{Tuple{Any,Union{InstanceNode,MethodInstance},Vector{InferenceTimingNode}}} # sig=>root
backedges::Vector{Tuple{InstanceNode,Vector{InferenceTimingNode}}}
end
StaleTree(method::Method, reason) = StaleTree(method, reason, staletree_storage()...)
staletree_storage() = (
Tuple{Any,Union{InstanceNode,MethodInstance},Vector{InferenceTimingNode}}[],
Tuple{InstanceNode,Vector{InferenceTimingNode}}[])
function Base.show(io::IO, tree::StaleTree)
iscompact = get(io, :compact, false)::Bool
print(io, tree.reason, " ")
printstyled(io, tree.method, color = :light_magenta)
println(io, " invalidated:")
indent = iscompact ? "" : " "
if !isempty(tree.mt_backedges)
print(io, indent, "mt_backedges: ")
showlist(io, tree.mt_backedges, length(indent)+length("mt_backedges")+2)
end
if !isempty(tree.backedges)
print(io, indent, "backedges: ")
showlist(io, tree.backedges, length(indent)+length("backedges")+2)
end
iscompact && print(io, ';')
end
function printdata(io, tnodes::AbstractVector{InferenceTimingNode})
if length(tnodes) == 1
print(io, tnodes[1])
else
print(io, sum(inclusive, tnodes), " inclusive time for $(length(tnodes)) nodes")
end
end
"""
staletrees = precompile_blockers(invalidations, tinf::InferenceTimingNode)
Select just those invalidations that contribute to "stale nodes" in `tinf`, and link them together.
This can allow one to identify specific blockers of precompilation for particular MethodInstances.
# Example
```julia
using SnoopCompileCore
invalidations = @snoop_invalidations using PkgA, PkgB;
using SnoopCompile
trees = invalidation_trees(invalidations)
tinf = @snoop_inference begin
some_workload()
end
staletrees = precompile_blockers(trees, tinf)
```
In many cases, this reduces the number of invalidations that require analysis by one or more orders of magnitude.
!!! info
`precompile_blockers` is experimental and has not yet been thoroughly vetted by real-world use.
Users are encouraged to try it and report any "misses" or unnecessary "hits."
"""
function precompile_blockers(trees::Vector{MethodInvalidations}, tinf::InferenceTimingNode; kwargs...)
sig2node = nodedict!(IdDict{Type,InferenceTimingNode}(), tinf)
snodes = stalenodes(tinf; kwargs...)
mi2stalenode = Dict(MethodInstance(node) => i for (i, node) in enumerate(snodes))
# Prepare "thinned trees" focusing just on those invalidations that blocked precompilation
staletrees = StaleTree[]
for tree in trees
mt_backedges, backedges = staletree_storage()
for (sig, root) in tree.mt_backedges
triggers = Set{MethodInstance}()
thinned = filterbranch(node -> haskey(mi2stalenode, convert(MethodInstance, node)), root, triggers)
if thinned !== nothing
push!(mt_backedges, (sig, thinned, [snodes[mi2stalenode[mi]] for mi in triggers]))
end
end
for root in tree.backedges
triggers = Set{MethodInstance}()
thinned = filterbranch(node -> haskey(mi2stalenode, convert(MethodInstance, node)), root, triggers)
if thinned !== nothing
push!(backedges, (thinned, [snodes[mi2stalenode[mi]] for mi in triggers]))
end
end
if !isempty(mt_backedges) || !isempty(backedges)
sort!(mt_backedges; by=suminclusive)
sort!(backedges; by=suminclusive)
push!(staletrees, StaleTree(tree.method, tree.reason, mt_backedges, backedges))
end
end
sort!(staletrees; by=inclusive)
return staletrees
end
precompile_blockers(invalidations, tinf::InferenceTimingNode; kwargs...) =
precompile_blockers(invalidation_trees(invalidations)::Vector{MethodInvalidations}, tinf; kwargs...)
function nodedict!(d, tinf::InferenceTimingNode)
for child in tinf.children
sig = MethodInstance(child).specTypes
oldinf = get(d, sig, nothing)
if oldinf === nothing || inclusive(child) > inclusive(oldinf)
d[sig] = child
end
nodedict!(d, child)
end
return d
end
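`nodedict!` records, for each signature, only the costliest `InferenceTimingNode` encountered so far. That "keep the maximum per key" pattern can be sketched standalone (the `keep_max!` helper below is hypothetical, not part of SnoopCompile):

```julia
# Hypothetical helper illustrating the pattern in `nodedict!`:
# a later entry replaces an existing one only when its value is larger.
function keep_max!(d::AbstractDict, pairs)
    for (k, v) in pairs
        old = get(d, k, nothing)
        if old === nothing || v > old
            d[k] = v
        end
    end
    return d
end

d = keep_max!(Dict{String,Float64}(), [("f", 1.0), ("g", 2.0), ("f", 3.0)])
# d["f"] == 3.0: the larger duplicate for "f" wins
```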
suminclusive(t::Tuple) = sum(inclusive, last(t))
SnoopCompileCore.inclusive(tree::StaleTree) =
sum(suminclusive, tree.mt_backedges; init=0.0) +
sum(suminclusive, tree.backedges; init=0.0)
export uinvalidated, invalidation_trees, filtermod, findcaller
function from_corecompiler(mi::MethodInstance)
fn = fullname(mi.def.module)
length(fn) < 2 && return false
fn[1] === :Core || return false
return fn[2] === :Compiler
end
"""
umis = uinvalidated(invlist)
Return the unique invalidated MethodInstances. `invlist` is obtained from [`SnoopCompileCore.@snoop_invalidations`](@ref).
This is similar to `filter`ing for `MethodInstance`s in `invlist`, except that it discards any that are tagged
`"invalidate_mt_cache"`. These can typically be ignored because they are nearly inconsequential:
they do not invalidate any compiled code; they only transiently affect an optimization of runtime dispatch.
"""
function uinvalidated(invlist; exclude_corecompiler::Bool=true)
umis = Set{MethodInstance}()
i, ilast = firstindex(invlist), lastindex(invlist)
while i <= ilast
item = invlist[i]
if isa(item, Core.MethodInstance)
if i < lastindex(invlist)
if invlist[i+1] == "invalidate_mt_cache"
i += 2
continue
end
if invlist[i+1] == "verify_methods"
# Skip over the cause, which can also be a MethodInstance
# These may be superseded, but they aren't technically invalidated
# (e.g., could still be called via `invoke`)
i += 2
end
end
if !exclude_corecompiler || !from_corecompiler(item)
push!(umis, item)
end
end
i += 1
end
return umis
end
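The loop above walks a heterogeneous log and skips entries that are immediately followed by particular string tags. The same skip-ahead scan, reduced to a self-contained sketch over `Symbol`s (purely illustrative; the tag name `"skip"` is made up):

```julia
# Collect unique Symbols from a mixed log, skipping any Symbol whose
# next entry is the tag "skip" (mirrors the `i += 2` pattern in `uinvalidated`).
function unique_untagged(entries)
    out = Set{Symbol}()
    i, ilast = firstindex(entries), lastindex(entries)
    while i <= ilast
        item = entries[i]
        if isa(item, Symbol)
            if i < ilast && entries[i+1] == "skip"
                i += 2        # jump past the item and its tag
                continue
            end
            push!(out, item)
        end
        i += 1
    end
    return out
end

entries = Any[:a, "skip", :b, :a, "keep", :c]
unique_untagged(entries)   # Set([:a, :b, :c]) — the first :a was tag-skipped, the second kept
```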
# Variable names:
# - `node`, `root`, `leaf`, `parent`, `child`: all `InstanceNode`s, a.k.a. nodes in a MethodInstance tree
# - `methinvs::MethodInvalidations`: the set of invalidations that occur from inserting or deleting a method
# - `trees`: a list of `methinvs`
dummy() = nothing
dummy()
const dummyinstance = first(specializations(which(dummy, ())))
mutable struct InstanceNode
mi::MethodInstance
depth::Int32
children::Vector{InstanceNode}
parent::InstanceNode
# Create a new tree. Creates the `root`, but returns the `leaf` holding `mi`.
# `root == leaf` if `depth = 0`, otherwise parent "dummy" nodes are inserted until
# the root is created at `depth` 0.
function InstanceNode(mi::MethodInstance, depth)
leaf = new(mi, depth, InstanceNode[])
child = leaf
while depth > 0
depth -= 1
parent = new(dummyinstance, depth, InstanceNode[])
push!(parent.children, child)
child.parent = parent
child = parent
end
return leaf
end
InstanceNode(mi::MethodInstance, children::Vector{InstanceNode}) = new(mi, 0, children)
# Create child with a given `parent`. Checks that the depths are consistent.
function InstanceNode(mi::MethodInstance, parent::InstanceNode, depth=parent.depth+Int32(1))
depth !== nothing && @assert parent.depth + Int32(1) == depth
child = new(mi, depth, InstanceNode[], parent)
push!(parent.children, child)
return child
end
# Creating a tree, starting with the leaves (omits `parent`)
function InstanceNode(node::InstanceNode, newchildren::Vector{InstanceNode})
return new(node.mi, node.depth, newchildren)
end
end
Core.MethodInstance(node::InstanceNode) = node.mi
Base.convert(::Type{MethodInstance}, node::InstanceNode) = node.mi
AbstractTrees.children(node::InstanceNode) = node.children
function getroot(node::InstanceNode)
while isdefined(node, :parent)
node = node.parent
end
return node
end
function Base.any(f, node::InstanceNode)
f(node) && return true
return any(f, node.children)
end
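The `Base.any` method above is a short-circuiting depth-first search: it stops as soon as the predicate holds for any node. The same recursion pattern on a minimal stand-in tree (a hypothetical `TinyNode`, not `InstanceNode`):

```julia
struct TinyNode
    val::Int
    children::Vector{TinyNode}
end
TinyNode(v::Int) = TinyNode(v, TinyNode[])

# Same shape as the `Base.any(f, node::InstanceNode)` method above.
anynode(f, n::TinyNode) = f(n) || any(c -> anynode(f, c), n.children)

tree = TinyNode(1, [TinyNode(2), TinyNode(3, [TinyNode(4)])])
anynode(n -> n.val == 4, tree)   # true
anynode(n -> n.val == 5, tree)   # false
```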
# TODO: deprecate this in favor of `AbstractTrees.print_tree`, and limit it to one layer (e.g., like `:showchildren=>false`)
function Base.show(io::IO, node::InstanceNode; methods=false, maxdepth::Int=5, minchildren::Int=round(Int, sqrt(countchildren(node))))
if get(io, :limit, false)
print(io, node.mi, " at depth ", node.depth, " with ", countchildren(node), " children")
else
nc = map(countchildren, node.children)
s = sum(nc) + length(node.children)
indent = " "^Int(node.depth)
print(io, indent, methods ? node.mi.def : node.mi)
println(io, " (", s, " children)")
if get(io, :showchildren, true)::Bool
p = sortperm(nc)
skipped = false
for i in p
child = node.children[i]
if child.depth <= maxdepth && nc[i] >= minchildren
show(io, child; methods=methods, maxdepth=maxdepth, minchildren=minchildren)
else
skipped = true
end
end
if skipped
println(io, indent, "⋮")
return nothing
end
end
end
end
Base.show(node::InstanceNode; kwargs...) = show(stdout, node; kwargs...)
function copybranch(node::InstanceNode)
children = InstanceNode[copybranch(child) for child in node.children]
newnode = InstanceNode(node, children)
for child in children
child.parent = newnode
end
return newnode
end
function countchildren(node::InstanceNode)
n = length(node.children)
for child in node.children
n += countchildren(child)
end
return n
end
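`countchildren` totals all descendants of a node, not just the immediate children. The same recursion on a minimal hypothetical tree type:

```julia
struct CNode
    children::Vector{CNode}
end
CNode() = CNode(CNode[])

# Count every descendant, mirroring `countchildren(::InstanceNode)`.
function ndescendants(node::CNode)
    n = length(node.children)
    for child in node.children
        n += ndescendants(child)
    end
    return n
end

croot = CNode([CNode(), CNode([CNode(), CNode()])])
ndescendants(croot)   # 4: two direct children plus two grandchildren
```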
struct MethodInvalidations
method::Method
reason::Symbol # :inserting or :deleting
mt_backedges::Vector{Pair{Type,Union{InstanceNode,MethodInstance}}} # sig=>root
backedges::Vector{InstanceNode}
mt_cache::Vector{MethodInstance}
mt_disable::Vector{MethodInstance}
end
methinv_storage() = Pair{Type,InstanceNode}[], InstanceNode[], MethodInstance[], MethodInstance[]
function MethodInvalidations(method::Method, reason::Symbol)
MethodInvalidations(method, reason, methinv_storage()...)
end
Base.isempty(methinvs::MethodInvalidations) = isempty(methinvs.mt_backedges) && isempty(methinvs.backedges) # ignore mt_cache
function Base.:(==)(methinvs1::MethodInvalidations, methinvs2::MethodInvalidations)
methinvs1.method == methinvs2.method || return false
methinvs1.reason == methinvs2.reason || return false
methinvs1.mt_backedges == methinvs2.mt_backedges || return false
methinvs1.backedges == methinvs2.backedges || return false
methinvs1.mt_cache == methinvs2.mt_cache || return false
methinvs1.mt_disable == methinvs2.mt_disable || return false
return true
end
countchildren(sigtree::Pair{<:Any,Union{InstanceNode,MethodInstance}}) = countchildren(sigtree.second)
countchildren(::MethodInstance) = 1
function countchildren(methinvs::MethodInvalidations)
n = 0
for list in (methinvs.mt_backedges, methinvs.backedges)
for root in list
n += countchildren(root)
end
end
return n
end
function Base.sort!(methinvs::MethodInvalidations)
sort!(methinvs.mt_backedges; by=countchildren)
sort!(methinvs.backedges; by=countchildren)
return methinvs
end
# We could use AbstractTrees here, but typically one is not interested in the full tree,
# just the top method and the number of children it has
function Base.show(io::IO, methinvs::MethodInvalidations)
iscompact = get(io, :compact, false)::Bool
print(io, methinvs.reason, " ")
printstyled(io, methinvs.method, color = :light_magenta)
println(io, " invalidated:")
indent = iscompact ? "" : " "
if !isempty(methinvs.mt_backedges)
print(io, indent, "mt_backedges: ")
showlist(io, methinvs.mt_backedges, length(indent)+length("mt_backedges")+2)
end
if !isempty(methinvs.backedges)
print(io, indent, "backedges: ")
showlist(io, methinvs.backedges, length(indent)+length("backedges")+2)
end
if !isempty(methinvs.mt_disable)
print(io, indent, "mt_disable: ")
println(io, first(methinvs.mt_disable))
if length(methinvs.mt_disable) > 1
println(io, indent, " "^12, "+", length(methinvs.mt_disable)-1, " more")
end
end
if !isempty(methinvs.mt_cache)
println(io, indent, length(methinvs.mt_cache), " mt_cache")
end
iscompact && print(io, ';')
end
function showlist(io::IO, treelist, indent::Int=0)
iscompact = get(io, :compact, false)::Bool
n = length(treelist)
nd = ndigits(n)
for i = 1:n
print(io, lpad(i, nd), ": ")
root = treelist[i]
sig = nothing
if isa(root, Pair)
print(io, "signature ")
sig = root.first
if isa(sig, MethodInstance)
# "insert_backedges_callee"/"insert_backedges" (delayed) invalidations
printstyled(io, try which(sig.specTypes) catch _ "(unavailable)" end, color = :light_cyan)
print(io, " (formerly ", sig.def, ')')
else
# `sig` (immediate) invalidations
printstyled(io, sig, color = :light_cyan)
end
print(io, " triggered ")
sig = root.first
root = root.second
elseif isa(root, Tuple)
printstyled(IOContext(io, :showchildren=>false), root[end-1], color = :light_yellow)
print(io, " blocked ")
printdata(io, root[end])
root = nothing
else
print(io, "superseding ")
printstyled(io, convert(MethodInstance, root).def, color = :light_cyan)
print(io, " with ")
sig = root.mi.def.sig
end
if root !== nothing
printstyled(io, convert(MethodInstance, root), color = :light_yellow)
print(io, " (", countchildren(root), " children)")
end
if iscompact
i < n && print(io, ", ")
else
print(io, '\n')
i < n && print(io, " "^indent)
end
end
end
new_backedge_table() = Dict{Union{Int32,MethodInstance},Union{Tuple{Any,Vector{Any}},InstanceNode}}()
"""
report_invalidations(
io::IO = stdout;
invalidations,
n_rows::Int = 10,
process_filename::Function = x -> x,
)
Print a tabular summary of invalidations given:
- `invalidations` the output of [`SnoopCompileCore.@snoop_invalidations`](@ref)
and (optionally)
- `io::IO` IO stream. Defaults to `stdout`
- `n_rows::Int` the number of rows to be displayed in the
truncated table. A value of 0 indicates no truncation.
A positive value will truncate the table to the specified
number of rows.
- `process_filename(::String)::String` a function to post-process
each filename in which invalidations are found
# Example usage
```julia
import SnoopCompileCore
invalidations = SnoopCompileCore.@snoop_invalidations begin
# load packages & define any additional methods
end;
using SnoopCompile
using PrettyTables # to load report_invalidations
report_invalidations(;invalidations)
```
Using `report_invalidations` requires that you first load the `PrettyTables.jl` package.
"""
function report_invalidations end
"""
trees = invalidation_trees(list)
Parse `list`, as captured by [`SnoopCompileCore.@snoop_invalidations`](@ref), into a set of invalidation trees, where parent nodes
were called by their children.
# Example
```julia
julia> f(x::Int) = 1
f (generic function with 1 method)
julia> f(x::Bool) = 2
f (generic function with 2 methods)
julia> applyf(container) = f(container[1])
applyf (generic function with 1 method)
julia> callapplyf(container) = applyf(container)
callapplyf (generic function with 1 method)
julia> c = Any[1]
1-element Array{Any,1}:
1
julia> callapplyf(c)
1
julia> trees = invalidation_trees(@snoop_invalidations f(::AbstractFloat) = 3)
1-element Array{SnoopCompile.MethodInvalidations,1}:
inserting f(::AbstractFloat) in Main at REPL[36]:1 invalidated:
mt_backedges: 1: signature Tuple{typeof(f),Any} triggered MethodInstance for applyf(::Array{Any,1}) (1 children) more specific
```
See the documentation for further details.
"""
function invalidation_trees(list; exclude_corecompiler::Bool=true)
function handle_insert_backedges(list, i, callee)
key, causes = list[i+=1], list[i+=1]
backedge_table[key] = (callee, causes)
return i
end
methodinvs = MethodInvalidations[]
delayed = Pair{Vector{Any},Vector{MethodInstance}}[] # from "insert_backedges" invalidations
leaf = nothing
mt_backedges, backedges, mt_cache, mt_disable = methinv_storage()
reason = nothing
backedge_table = new_backedge_table()
inserted_backedges = false
i = 0
while i < length(list)
item = list[i+=1]
if isa(item, MethodInstance)
mi = item
item = list[i+=1]
if isa(item, Int32)
depth = item
if leaf === nothing
leaf = InstanceNode(mi, depth)
else
# Recurse back up the tree until we find the right parent
node = leaf
while node.depth >= depth
node = node.parent
end
leaf = InstanceNode(mi, node, depth)
end
elseif isa(item, String)
loctag = item
if loctag ∉ ("insert_backedges_callee", "verify_methods") && inserted_backedges
# The integer index resets between packages; clear all entries with integer keys
ikeys = collect(Iterators.filter(x -> isa(x, Integer), keys(backedge_table)))
for key in ikeys
delete!(backedge_table, key)
end
inserted_backedges = false
end
if loctag == "invalidate_mt_cache"
push!(mt_cache, mi)
leaf = nothing
elseif loctag == "jl_method_table_insert"
if leaf !== nothing # we logged without actually invalidating anything (issue #354)
root = getroot(leaf)
root.mi = mi
if !exclude_corecompiler || !from_corecompiler(mi)
push!(backedges, root)
end
leaf = nothing
end
elseif loctag == "jl_insert_method_instance"
@assert leaf !== nothing
root = getroot(leaf)
root = only(root.children)
push!(mt_backedges, mi=>root)
elseif loctag == "jl_insert_method_instance caller"
if leaf === nothing
leaf = InstanceNode(mi, 1)
else
push!(leaf.children, mi)
end
elseif loctag == "jl_method_table_disable"
if leaf === nothing
push!(mt_disable, mi)
else
root = getroot(leaf)
root.mi = mi
if !exclude_corecompiler || !from_corecompiler(mi)
push!(backedges, root)
end
leaf = nothing
end
elseif loctag == "insert_backedges_callee"
i = handle_insert_backedges(list, i, mi)
inserted_backedges = true
elseif loctag == "verify_methods"
next = list[i+=1]
if isa(next, Integer)
ret = get(backedge_table, next, nothing)
ret === nothing && (@warn "$next not found in `backedge_table`"; continue)
trig, causes = ret
if isa(trig, MethodInstance)
newnode = InstanceNode(trig, 1)
push!(backedges, newnode)
newchild = InstanceNode(mi, 2)
push!(newnode.children, newchild)
backedge_table[trig] = newnode
backedge_table[mi] = newchild
else
newnode = InstanceNode(mi, 1)
push!(mt_backedges, trig => newnode)
backedge_table[mi] = newnode
end
for cause in causes
add_method_trigger!(methodinvs, cause, :inserting, mt_backedges, backedges, mt_cache, mt_disable)
end
mt_backedges, backedges, mt_cache, mt_disable = methinv_storage()
leaf = nothing
reason = nothing
else
@assert isa(next, MethodInstance) "unexpected logging format"
parent = get(backedge_table, next, nothing)
parent === nothing && (@warn "$next not found in `backedge_table`"; continue)
found = false
for child in parent.children
if child.mi == mi
found = true
break
end
end
if !found
newnode = InstanceNode(mi, parent)
if !haskey(backedge_table, mi)
backedge_table[mi] = newnode
end
end
end
elseif loctag == "insert_backedges"
key = (list[i+=1], list[i+=1])
trig, causes = backedge_table[key]
if leaf !== nothing
root = getroot(leaf)
root.mi = mi
if trig isa MethodInstance
oldroot = root
root = InstanceNode(trig, [root])
oldroot.parent = root
push!(backedges, root)
else
push!(mt_backedges, trig=>root)
end
end
for cause in causes
add_method_trigger!(methodinvs, cause, :inserting, mt_backedges, backedges, mt_cache, mt_disable)
end
mt_backedges, backedges, mt_cache, mt_disable = methinv_storage()
leaf = nothing
reason = nothing
else
error("unexpected loctag ", loctag, " at ", i)
end
else
error("unexpected item ", item, " at ", i)
end
elseif isa(item, Method)
method = item
isassigned(list, i+1) || @show i
item = list[i+=1]
if isa(item, String)
reason = checkreason(reason, item)
add_method_trigger!(methodinvs, method, reason, mt_backedges, backedges, mt_cache, mt_disable)
mt_backedges, backedges, mt_cache, mt_disable = methinv_storage()
leaf = nothing
reason = nothing
else
error("unexpected item ", item, " at ", i)
end
elseif isa(item, String)
# This shouldn't happen
reason = checkreason(reason, item)
push!(backedges, getroot(leaf))
leaf = nothing
reason = nothing
elseif isa(item, Type)
if length(list) > i && list[i+1] == "insert_backedges_callee"
i = handle_insert_backedges(list, i+1, item)
else
root = getroot(leaf)
if !exclude_corecompiler || !from_corecompiler(root.mi)
push!(mt_backedges, item=>root)
end
leaf = nothing
end
elseif isa(item, Core.TypeMapEntry) && list[i+1] == "invalidate_mt_cache"
i += 1
else
error("unexpected item ", item, " at ", i)
end
end
@assert all(isempty, Any[mt_backedges, backedges, #= mt_cache, =# mt_disable])
# Handle the delayed invalidations
callee2idx = Dict{Method,Int}()
for (i, methinvs) in enumerate(methodinvs)
for (sig, root) in methinvs.mt_backedges
for node in PreOrderDFS(root)
callee2idx[MethodInstance(node).def] = i
end
end
for root in methinvs.backedges
for node in PreOrderDFS(root)
callee2idx[MethodInstance(node).def] = i
end
end
end
solved = Int[]
for (i, (callees, callers)) in enumerate(delayed)
for callee in callees
if isa(callee, MethodInstance)
idx = get(callee2idx, callee.def, nothing)
if idx !== nothing
for caller in callers
join_invalidations!(methodinvs[idx].mt_backedges, [callee => caller])
end
push!(solved, i)
break
end
end
end
end
deleteat!(delayed, solved)
if !isempty(delayed)
@warn "Could not attribute the following delayed invalidations:"
for (callees, callers) in delayed
@assert !isempty(callees) # this shouldn't ever happen
printstyled(length(callees) == 1 ? callees[1] : callees; color = :light_cyan)
print(" invalidated ")
printstyled(length(callers) == 1 ? callers[1] : callers; color = :light_yellow)
println()
end
end
return sort!(methodinvs; by=countchildren)
end
function add_method_trigger!(methodinvs, method::Method, reason::Symbol, mt_backedges, backedges, mt_cache, mt_disable)
found = false
for tree in methodinvs
if tree.method == method && tree.reason == reason
join_invalidations!(tree.mt_backedges, mt_backedges)
append!(tree.backedges, backedges)
append!(tree.mt_cache, mt_cache)
append!(tree.mt_disable, mt_disable)
found = true
break
end
end
found || push!(methodinvs, sort!(MethodInvalidations(method, reason, mt_backedges, backedges, mt_cache, mt_disable)))
return methodinvs
end
function join_invalidations!(list::AbstractVector{<:Pair}, items::AbstractVector{<:Pair})
for (key, root) in items
found = false
node, mi = isa(root, MethodInstance) ? (InstanceNode(root, 0), root) : (root, root.mi)
for (key2, root2) in list
key2 == key || continue
mi2 = root2.mi
if mi2 == mi
# Find the first branch that isn't shared
join_branches!(node, root2)
found = true
break
end
end
found || push!(list, key => node)
end
return list
end
function join_branches!(to, from)
for cfrom in from.children
found = false
for cto in to.children
if cfrom.mi == cto.mi
join_branches!(cto, cfrom)
found = true
break
end
end
found || push!(to.children, cfrom)
end
return to
end
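`join_branches!` merges one tree into another: children present in both (matched by their `MethodInstance`) are merged recursively, while unmatched branches are appended. A standalone sketch using a hypothetical `MNode` keyed by an integer id:

```julia
mutable struct MNode
    id::Int
    children::Vector{MNode}
end
MNode(id::Int) = MNode(id, MNode[])

function joinbranches!(to::MNode, from::MNode)
    for cf in from.children
        i = findfirst(ct -> ct.id == cf.id, to.children)
        if i === nothing
            push!(to.children, cf)             # new branch: append it
        else
            joinbranches!(to.children[i], cf)  # shared branch: merge deeper
        end
    end
    return to
end

a = MNode(0, [MNode(1, [MNode(2)])])
b = MNode(0, [MNode(1, [MNode(3)]), MNode(4)])
joinbranches!(a, b)
# a now has children [1, 4], and node 1 has children [2, 3]
```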
function checkreason(reason, loctag)
if loctag == "jl_method_table_disable"
@assert reason === nothing || reason === :deleting
reason = :deleting
elseif loctag == "jl_method_table_insert"
@assert reason === nothing || reason === :inserting
reason = :inserting
else
error("unexpected reason ", loctag)
end
return reason
end
"""
thinned = filtermod(module, trees::AbstractVector{MethodInvalidations}; recursive=false)
Select just the cases of invalidating a method defined in `module`.
If `recursive` is false, only the roots of trees are examined (i.e., the proximal source of
the invalidation must be in `module`). If `recursive` is true, then `thinned` contains
all routes to a method in `module`.
"""
function filtermod(mod::Module, trees::AbstractVector{MethodInvalidations}; kwargs...)
# We don't just broadcast because we want to filter at all levels
thinned = MethodInvalidations[]
for methinvs in trees
_invs = filtermod(mod, methinvs; kwargs...)
isempty(_invs) || push!(thinned, _invs)
end
return sort!(thinned; by=countchildren)
end
hasmod(mod::Module, node::InstanceNode) = hasmod(mod, MethodInstance(node))
hasmod(mod::Module, mi::MethodInstance) = mi.def.module === mod
function filtermod(mod::Module, methinvs::MethodInvalidations; recursive::Bool=false)
if recursive
out = MethodInvalidations(methinvs.method, methinvs.reason)
for (sig, node) in methinvs.mt_backedges
newnode = filtermod(mod, node)
if newnode !== nothing
push!(out.mt_backedges, sig => newnode)
end
end
for node in methinvs.backedges
newnode = filtermod(mod, node)
if newnode !== nothing
push!(out.backedges, newnode)
end
end
return out
end
mt_backedges = filter(pr->hasmod(mod, pr.second), methinvs.mt_backedges)
backedges = filter(root->hasmod(mod, root), methinvs.backedges)
return MethodInvalidations(methinvs.method, methinvs.reason, mt_backedges, backedges,
copy(methinvs.mt_cache), copy(methinvs.mt_disable))
end
function filterbranch(f, node::InstanceNode, storage=nothing)
if !isempty(node.children)
newchildren = InstanceNode[]
for child in node.children
newchild = filterbranch(f, child, storage)
if newchild !== nothing
push!(newchildren, newchild)
end
end
if !isempty(newchildren)
newnode = InstanceNode(node, newchildren)
for child in newchildren
child.parent = newnode
end
return newnode
end
end
if f(node)
storage !== nothing && push!(storage, convert(eltype(storage), node))
return copybranch(node)
end
return nothing
end
function filterbranch(f, node::MethodInstance, storage=nothing)
if f(node)
storage !== nothing && push!(storage, convert(eltype(storage), node))
return node
end
return nothing
end
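`filterbranch` prunes a tree down to the branches that lead to nodes satisfying `f`. A simplified standalone version (hypothetical `PNode`; unlike the original it reuses nodes rather than copying them, and does not collect matches into `storage`):

```julia
struct PNode
    val::Int
    children::Vector{PNode}
end
PNode(v::Int) = PNode(v, PNode[])

function prune(f, node::PNode)
    kept = PNode[]
    for child in node.children
        p = prune(f, child)
        p === nothing || push!(kept, p)
    end
    isempty(kept) || return PNode(node.val, kept)  # keep node for its surviving branches
    return f(node) ? node : nothing                # otherwise keep it only if it matches
end

proot = PNode(1, [PNode(2), PNode(3, [PNode(4)])])
pruned = prune(n -> iseven(n.val), proot)
# both branches survive: node 2 matches, and node 3 leads to the matching node 4
```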
filtermod(mod::Module, node::InstanceNode) = filterbranch(n -> hasmod(mod, n), node)
function filtermod(mod::Module, mi::MethodInstance)
m = mi.def
if isa(m, Method)
return m.module == mod ? mi : nothing
end
return m == mod ? mi : nothing
end
"""
methinvs = findcaller(method::Method, trees)
Find a path through `trees` that reaches `method`. Returns a single `MethodInvalidations` object.
# Examples
Suppose you know that loading package `SomePkg` triggers invalidation of `f(data)`.
You can find the specific source of invalidation as follows:
```
f(data) # run once to force compilation
m = @which f(data)
using SnoopCompile
trees = invalidation_trees(@snoop_invalidations using SomePkg)
methinvs = findcaller(m, trees)
```
If you don't know which method to look for, but you know an operation whose latency has increased,
you can look for methods using `@snoopi`. For example, suppose that loading `SomePkg` makes the
next `using` statement slow. You can find the source of trouble with
```
julia> using SnoopCompile
julia> trees = invalidation_trees(@snoop_invalidations using SomePkg);
julia> tinf = @snoopi using SomePkg # this second `using` will need to recompile code invalidated above
1-element Array{Tuple{Float64,Core.MethodInstance},1}:
(0.08518409729003906, MethodInstance for require(::Module, ::Symbol))
julia> m = tinf[1][2].def
require(into::Module, mod::Symbol) in Base at loading.jl:887
julia> findcaller(m, trees)
inserting ==(x, y::SomeType) in SomeOtherPkg at /path/to/code:100 invalidated:
backedges: 1: superseding ==(x, y) in Base at operators.jl:83 with MethodInstance for ==(::Symbol, ::Any) (16 children) more specific
```
"""
function findcaller(meth::Method, trees::AbstractVector{MethodInvalidations})
for methinvs in trees
ret = findcaller(meth, methinvs)
ret === nothing || return ret
end
return nothing
end
function findcaller(meth::Method, methinvs::MethodInvalidations)
function newtree(vectree)
root0 = pop!(vectree)
root = InstanceNode(root0.mi, root0.depth)
return newtree!(root, vectree)
end
function newtree!(parent, vectree)
isempty(vectree) && return getroot(parent)
child = pop!(vectree)
newchild = InstanceNode(child.mi, parent, child.depth) # prune all branches except the one leading through child.mi
return newtree!(newchild, vectree)
end
for (sig, node) in methinvs.mt_backedges
ret = findcaller(meth, node)
ret === nothing && continue
return MethodInvalidations(methinvs.method, methinvs.reason, [Pair{Any,InstanceNode}(sig, newtree(ret))], InstanceNode[],
copy(methinvs.mt_cache), copy(methinvs.mt_disable))
end
for node in methinvs.backedges
ret = findcaller(meth, node)
ret === nothing && continue
return MethodInvalidations(methinvs.method, methinvs.reason, Pair{Any,InstanceNode}[], [newtree(ret)],
copy(methinvs.mt_cache), copy(methinvs.mt_disable))
end
return nothing
end
function findcaller(meth::Method, node::InstanceNode)
meth === node.mi.def && return [node]
for child in node.children
ret = findcaller(meth, child)
if ret !== nothing
push!(ret, node)
return ret
end
end
return nothing
end
findcaller(meth::Method, mi::MethodInstance) = mi.def == meth ? mi : nothing
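`findcaller` builds the call path from leaf to root by pushing each ancestor as the recursion unwinds. The same pattern, standalone (hypothetical `FNode`):

```julia
struct FNode
    val::Int
    children::Vector{FNode}
end
FNode(v::Int) = FNode(v, FNode[])

# Return the path [leaf, ..., root] to the first node matching `target`.
function findpath(target, node::FNode)
    node.val == target && return [node]
    for child in node.children
        ret = findpath(target, child)
        if ret !== nothing
            push!(ret, node)   # append this ancestor while unwinding
            return ret
        end
    end
    return nothing
end

froot = FNode(1, [FNode(2), FNode(3, [FNode(4)])])
path = findpath(4, froot)
[n.val for n in path]   # [4, 3, 1]: leaf-to-root order
```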
import FlameGraphs
using Base.StackTraces: StackFrame
using FlameGraphs.LeftChildRightSiblingTrees: Node, addchild
using AbstractTrees
using Core.Compiler.Timings: InferenceFrameInfo
using SnoopCompileCore: InferenceTiming, InferenceTimingNode, inclusive, exclusive
using Profile
const InferenceNode = Union{InferenceFrameInfo,InferenceTiming,InferenceTimingNode}
const flamegraph = FlameGraphs.flamegraph # For re-export
const rextest = r"Test\.jl$" # for detecting calls from a @testset
# While it might be nice to put some of these in SnoopCompileCore,
# SnoopCompileCore guarantees that it doesn't extend any Base function.
Core.MethodInstance(mi_info::InferenceFrameInfo) = mi_info.mi
Core.MethodInstance(t::InferenceTiming) = MethodInstance(t.mi_info)
Core.MethodInstance(t::InferenceTimingNode) = MethodInstance(t.mi_timing)
Core.Method(x::InferenceNode) = MethodInstance(x).def::Method # deliberately throw an error if this is a module
Base.convert(::Type{InferenceTiming}, node::InferenceTimingNode) = node.mi_timing
isROOT(mi::MethodInstance) = mi === Core.Compiler.Timings.ROOTmi
isROOT(m::Method) = m === Core.Compiler.Timings.ROOTmi.def
isROOT(mi_info::InferenceNode) = isROOT(MethodInstance(mi_info))
isROOT(node::InferenceTimingNode) = isROOT(node.mi_timing)
getroot(node::InferenceTimingNode) = isdefined(node.parent, :parent) ? getroot(node.parent) : node
# Record instruction pointers we've already looked up (performance optimization)
const lookups = Dict{Union{UInt, Core.Compiler.InterpreterIP}, Vector{StackTraces.StackFrame}}()
lookups_key(ip) = ip
lookups_key(ip::Ptr{Nothing}) = UInt(ip)
# These should be in SnoopCompileCore, except that it promises not to specialize Base methods
Base.show(io::IO, t::InferenceTiming) = (print(io, "InferenceTiming: "); _show(io, t))
function _show(io::IO, t::InferenceTiming)
print(io, @sprintf("%8.6f", exclusive(t)), "/", @sprintf("%8.6f", inclusive(t)), " on ")
print(io, stripifi(t.mi_info))
end
function Base.show(io::IO, node::InferenceTimingNode)
print(io, "InferenceTimingNode: ")
_show(io, node.mi_timing)
print(io, " with ", string(length(node.children)), " direct children")
end
"""
flatten(tinf; tmin = 0.0, sortby=exclusive)
Flatten the execution graph of `InferenceTimingNode`s returned from `@snoop_inference` into a Vector of `InferenceTiming`
frames, each encoding the time needed for inference of a single `MethodInstance`.
By default, results are sorted by `exclusive` time (the time for inferring the `MethodInstance` itself, not including
any inference of its callees); other options are `sortby=inclusive` which includes the time needed for the callees,
or `nothing` to obtain them in the order they were inferred (depth-first order).
# Example
We'll use [`SnoopCompile.flatten_demo`](@ref), which runs `@snoop_inference` on a workload designed to yield reproducible results:
```jldoctest flatten; setup=:(using SnoopCompile), filter=r"([0-9]*\\.?[0-9]+([eE][-+]?[0-9]+)?/[0-9]*\\.?[0-9]+([eE][-+]?[0-9]+)?|WARNING: replacing module FlattenDemo\\.\\n)"
julia> tinf = SnoopCompile.flatten_demo()
InferenceTimingNode: 0.002148974/0.002767166 on Core.Compiler.Timings.ROOT() with 1 direct children
julia> using AbstractTrees; print_tree(tinf)
InferenceTimingNode: 0.00242354/0.00303526 on Core.Compiler.Timings.ROOT() with 1 direct children
└─ InferenceTimingNode: 0.000150891/0.000611721 on SnoopCompile.FlattenDemo.packintype(::$Int) with 2 direct children
├─ InferenceTimingNode: 0.000105318/0.000105318 on SnoopCompile.FlattenDemo.MyType{$Int}(::$Int) with 0 direct children
└─ InferenceTimingNode: 9.43e-5/0.000355512 on SnoopCompile.FlattenDemo.dostuff(::SnoopCompile.FlattenDemo.MyType{$Int}) with 2 direct children
├─ InferenceTimingNode: 6.6458e-5/0.000124716 on SnoopCompile.FlattenDemo.extract(::SnoopCompile.FlattenDemo.MyType{$Int}) with 2 direct children
│ ├─ InferenceTimingNode: 3.401e-5/3.401e-5 on getproperty(::SnoopCompile.FlattenDemo.MyType{$Int}, ::Symbol) with 0 direct children
│ └─ InferenceTimingNode: 2.4248e-5/2.4248e-5 on getproperty(::SnoopCompile.FlattenDemo.MyType{$Int}, x::Symbol) with 0 direct children
└─ InferenceTimingNode: 0.000136496/0.000136496 on SnoopCompile.FlattenDemo.domath(::$Int) with 0 direct children
```
Note the printing of `getproperty(::SnoopCompile.FlattenDemo.MyType{$Int}, x::Symbol)`: it shows the specific Symbol, here `:x`,
that `getproperty` was inferred with. This reflects constant-propagation in inference.
Then:
```jldoctest flatten; setup=:(using SnoopCompile), filter=[r"[0-9]*\\.?[0-9]+([eE][-+]?[0-9]+)?/[0-9]*\\.?[0-9]+([eE][-+]?[0-9]+)?", r"WARNING: replacing module FlattenDemo.*"]
julia> flatten(tinf; sortby=nothing)
8-element Vector{SnoopCompileCore.InferenceTiming}:
InferenceTiming: 0.002423543/0.0030352639999999998 on Core.Compiler.Timings.ROOT()
InferenceTiming: 0.000150891/0.0006117210000000001 on SnoopCompile.FlattenDemo.packintype(::$Int)
InferenceTiming: 0.000105318/0.000105318 on SnoopCompile.FlattenDemo.MyType{$Int}(::$Int)
InferenceTiming: 9.43e-5/0.00035551200000000005 on SnoopCompile.FlattenDemo.dostuff(::SnoopCompile.FlattenDemo.MyType{$Int})
InferenceTiming: 6.6458e-5/0.000124716 on SnoopCompile.FlattenDemo.extract(::SnoopCompile.FlattenDemo.MyType{$Int})
InferenceTiming: 3.401e-5/3.401e-5 on getproperty(::SnoopCompile.FlattenDemo.MyType{$Int}, ::Symbol)
InferenceTiming: 2.4248e-5/2.4248e-5 on getproperty(::SnoopCompile.FlattenDemo.MyType{$Int}, x::Symbol)
InferenceTiming: 0.000136496/0.000136496 on SnoopCompile.FlattenDemo.domath(::$Int)
```
```
julia> flatten(tinf; tmin=1e-4) # sorts by exclusive time (the time before the '/')
4-element Vector{SnoopCompileCore.InferenceTiming}:
InferenceTiming: 0.000105318/0.000105318 on SnoopCompile.FlattenDemo.MyType{$Int}(::$Int)
InferenceTiming: 0.000136496/0.000136496 on SnoopCompile.FlattenDemo.domath(::$Int)
InferenceTiming: 0.000150891/0.0006117210000000001 on SnoopCompile.FlattenDemo.packintype(::$Int)
InferenceTiming: 0.002423543/0.0030352639999999998 on Core.Compiler.Timings.ROOT()
julia> flatten(tinf; sortby=inclusive, tmin=1e-4) # sorts by inclusive time (the time after the '/')
6-element Vector{SnoopCompileCore.InferenceTiming}:
InferenceTiming: 0.000105318/0.000105318 on SnoopCompile.FlattenDemo.MyType{$Int}(::$Int)
InferenceTiming: 6.6458e-5/0.000124716 on SnoopCompile.FlattenDemo.extract(::SnoopCompile.FlattenDemo.MyType{$Int})
InferenceTiming: 0.000136496/0.000136496 on SnoopCompile.FlattenDemo.domath(::$Int)
InferenceTiming: 9.43e-5/0.00035551200000000005 on SnoopCompile.FlattenDemo.dostuff(::SnoopCompile.FlattenDemo.MyType{$Int})
InferenceTiming: 0.000150891/0.0006117210000000001 on SnoopCompile.FlattenDemo.packintype(::$Int)
InferenceTiming: 0.002423543/0.0030352639999999998 on Core.Compiler.Timings.ROOT()
```
As you can see, `sortby` affects not just the order but also the selection of frames; with exclusive times, `dostuff` did
not on its own rise above threshold, but it does when using inclusive times.
See also: [`accumulate_by_source`](@ref).
"""
function flatten(tinf::InferenceTimingNode; tmin = 0.0, sortby::Union{typeof(exclusive),typeof(inclusive),Nothing}=exclusive)
out = InferenceTiming[]
flatten!(sortby === nothing ? exclusive : sortby, out, tinf, tmin)
return sortby===nothing ? out : sort!(out; by=sortby)
end
function flatten!(gettime::Union{typeof(exclusive),typeof(inclusive)}, out, node, tmin)
time = gettime(node)
if time >= tmin
push!(out, node.mi_timing)
end
for child in node.children
flatten!(gettime, out, child, tmin)
end
return out
end
"""
accumulate_by_source(flattened; tmin = 0.0, by=exclusive)
Add the inference timings for all `MethodInstance`s of a single `Method` together.
`flattened` is the output of [`flatten`](@ref).
Returns a list of `(t, method)` tuples.
When the accumulated time for a `Method` is large, but each instance is small, it indicates
that it is being inferred for many specializations (which might include specializations with different constants).
# Example
We'll use [`SnoopCompile.flatten_demo`](@ref), which runs `@snoop_inference` on a workload designed to yield reproducible results:
```jldoctest accum1; setup=:(using SnoopCompile), filter=[r"(in|@)", r"([0-9]*\\.?[0-9]+([eE][-+]?[0-9]+)?|:[0-9]+\\)|at .*/inference_demos.jl:\\d+|at Base\\.jl:\\d+|at compiler/typeinfer\\.jl:\\d+|WARNING: replacing module FlattenDemo\\.\\n)"]
julia> tinf = SnoopCompile.flatten_demo()
InferenceTimingNode: 0.004978/0.005447 on Core.Compiler.Timings.ROOT() with 1 direct children
julia> accumulate_by_source(flatten(tinf))
7-element Vector{Tuple{Float64, Union{Method, Core.MethodInstance}}}:
(4.6294999999999996e-5, getproperty(x, f::Symbol) @ Base Base.jl:37)
(5.8965e-5, dostuff(y) @ SnoopCompile.FlattenDemo ~/.julia/dev/SnoopCompile/src/inference_demos.jl:45)
(6.4141e-5, extract(y::SnoopCompile.FlattenDemo.MyType) @ SnoopCompile.FlattenDemo ~/.julia/dev/SnoopCompile/src/inference_demos.jl:36)
(8.9997e-5, (var"#ctor-self#"::Type{SnoopCompile.FlattenDemo.MyType{T}} where T)(x) @ SnoopCompile.FlattenDemo ~/.julia/dev/SnoopCompile/src/inference_demos.jl:35)
(9.2256e-5, domath(x) @ SnoopCompile.FlattenDemo ~/.julia/dev/SnoopCompile/src/inference_demos.jl:41)
(0.000117514, packintype(x) @ SnoopCompile.FlattenDemo ~/.julia/dev/SnoopCompile/src/inference_demos.jl:37)
(0.004977755, ROOT() @ Core.Compiler.Timings compiler/typeinfer.jl:79)
```
Compared to the output from [`flatten`](@ref), the two inference passes on `getproperty` have been consolidated into a single aggregate call.
"""
function accumulate_by_source(::Type{M}, flattened::Vector{InferenceTiming}; tmin = 0.0, by::Union{typeof(exclusive),typeof(inclusive)}=exclusive) where M<:Union{Method,MethodInstance}
tmp = Dict{Union{M,MethodInstance},Float64}()
for frame in flattened
mi = MethodInstance(frame)
m = mi.def
if M === Method && isa(m, Method)
tmp[m] = get(tmp, m, 0.0) + by(frame)
else
tmp[mi] = by(frame) # module-level thunks are stored verbatim
end
end
return sort(Tuple{Float64,Union{M,MethodInstance}}[(t, m) for (m, t) in tmp if t >= tmin]; by=first)
end
accumulate_by_source(flattened::Vector{InferenceTiming}; kwargs...) = accumulate_by_source(Method, flattened; kwargs...)
"""
list = collect_for(m::Method, tinf::InferenceTimingNode)
list = collect_for(m::MethodInstance, tinf::InferenceTimingNode)
Collect all `InferenceTimingNode`s (descendants of `tinf`) that match `m`.
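# Example

A hypothetical session (illustrative, not a doctest; uses the [`SnoopCompile.flatten_demo`](@ref) workload):

```julia
julia> tinf = SnoopCompile.flatten_demo();

julia> m = which(SnoopCompile.FlattenDemo.domath, (Int,));

julia> nodes = collect_for(m, tinf);   # all inference nodes for this Method

julia> inclusive.(nodes)               # their inclusive inference times
```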
"""
collect_for(target::Union{Method,MethodInstance}, tinf::InferenceTimingNode) = collect_for!(InferenceTimingNode[], target, tinf)
function collect_for!(out, target, tinf)
matches(mi::MethodInstance, node) = MethodInstance(node) == mi
matches(m::Method, node) = (mi = MethodInstance(node); mi.def == m)
matches(target, tinf) && push!(out, tinf)
for child in tinf.children
collect_for!(out, target, child)
end
return out
end
"""
staleinstances(tinf::InferenceTimingNode)
Return a list of `InferenceTimingNode`s corresponding to `MethodInstance`s that have "stale" code
(specifically, `CodeInstance`s with outdated `max_world` world ages).
These may be a hint that invalidation occurred while running the workload provided to `@snoop_inference`,
and consequently an important origin of (re)inference.
!!! warning
`staleinstances` only looks *retrospectively* for stale code; it does not distinguish whether the code became
stale while running `@snoop_inference` from whether it was already stale before execution commenced.
While `staleinstances` is recommended as a useful "sanity check" to run before performing a detailed analysis of inference,
any serious examination of invalidation should use [`@snoop_invalidations`](@ref).
For more information about world age, see https://docs.julialang.org/en/v1/manual/methods/#Redefining-Methods.
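# Example

A sketch of typical usage (`workload` is a placeholder for your own code):

```julia
julia> tinf = @snoop_inference workload();

julia> stale = staleinstances(tinf);   # ideally empty

julia> isempty(stale) || @warn "stale code detected; consider analyzing with @snoop_invalidations"
```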
"""
staleinstances(root::InferenceTimingNode; min_world_exclude = UInt(1)) = staleinstances!(InferenceTiming[], root, Base.get_world_counter(), UInt(min_world_exclude)::UInt)
stalenodes(root::InferenceTimingNode; min_world_exclude = UInt(1)) = staleinstances!(InferenceTimingNode[], root, Base.get_world_counter(), UInt(min_world_exclude)::UInt)
function staleinstances!(out, node::InferenceTimingNode, world::UInt, min_world_exclude::UInt)
if hasstaleinstance(MethodInstance(node), world, min_world_exclude)
push!(out, node)
last(out) == node && return out # don't check children if we collected the whole branch
end
for child in node.children
staleinstances!(out, child, world, min_world_exclude)
end
return out
end
# Tip: the following is useful in conjunction with MethodAnalysis.methodinstances() to discover pre-existing stale code
function hasstaleinstance(mi::MethodInstance, world::UInt = Base.get_world_counter(), min_world_exclude::UInt = UInt(1))
m = mi.def
mod = isa(m, Module) ? m : m.module
if Base.parentmodule(mod) !== Core # Core runs in an old world
if isdefined(mi, :cache)
# Check all CodeInstances
ci = mi.cache
while true
if min_world_exclude <= ci.max_world < world # 0 indicates a CodeInstance loaded from precompile cache
return true
end
if isdefined(ci, :next)
ci = ci.next
else
break
end
end
end
end
return false
end
## parcel and supporting infrastructure
function isprecompilable(mi::MethodInstance; excluded_modules=Set([Main::Module]))
m = mi.def
if isa(m, Method)
mod = m.module
can_eval = excluded_modules === nothing || mod ∉ excluded_modules
if can_eval
params = Base.unwrap_unionall(mi.specTypes)::DataType
for p in params.parameters
if p isa Type
if !known_type(mod, p)
can_eval = false
break
end
end
end
end
return can_eval
end
return false
end
struct Precompiles
mi_info::InferenceFrameInfo # entrance point to inference (the "root")
total_time::Float64 # total time for the root
precompiles::Vector{Tuple{Float64,MethodInstance}} # list of precompilable child MethodInstances with their times
end
Precompiles(node::InferenceTimingNode) = Precompiles(InferenceTiming(node).mi_info, inclusive(node), Tuple{Float64,MethodInstance}[])
Core.MethodInstance(pc::Precompiles) = MethodInstance(pc.mi_info)
SnoopCompileCore.inclusive(pc::Precompiles) = pc.total_time
precompilable_time(precompiles::Vector{Tuple{Float64,MethodInstance}}) = sum(first, precompiles; init=0.0)
precompilable_time(precompiles::Dict{MethodInstance,T}) where T = sum(values(precompiles); init=zero(T))
precompilable_time(pc::Precompiles) = precompilable_time(pc.precompiles)
function Base.show(io::IO, pc::Precompiles)
tpc = precompilable_time(pc)
print(io, "Precompiles: ", pc.total_time, " for ", MethodInstance(pc),
" had ", length(pc.precompiles), " precompilable roots reclaiming ", tpc,
" ($(round(Int, 100*tpc/pc.total_time))%)")
end
function precompilable_roots!(pc, node::InferenceTimingNode, tthresh; excluded_modules=Set([Main::Module]))
(t = inclusive(node)) >= tthresh || return pc
mi = MethodInstance(node)
if isprecompilable(mi; excluded_modules)
push!(pc.precompiles, (t, mi))
return pc
end
foreach(node.children) do c
precompilable_roots!(pc, c, tthresh; excluded_modules=excluded_modules)
end
return pc
end
function precompilable_roots(node::InferenceTimingNode, tthresh; kwargs...)
pcs = [precompilable_roots!(Precompiles(child), child, tthresh; kwargs...) for child in node.children if inclusive(child) >= tthresh]
t_grand_total = sum(inclusive, node.children)
tpc = precompilable_time.(pcs)
p = sortperm(tpc)
return (t_grand_total, pcs[p])
end
function parcel((t_grand_total, pcs)::Tuple{Float64,Vector{Precompiles}})
# Because the same MethodInstance can be compiled multiple times for different Const values,
# we just keep the largest time observed per MethodInstance.
pcdict = Dict{Module,Dict{MethodInstance,Float64}}()
for pc in pcs
for (t, mi) in pc.precompiles
m = mi.def
mod = isa(m, Method) ? m.module : m
pcmdict = get!(Dict{MethodInstance,Float64}, pcdict, mod)
pcmdict[mi] = max(t, get(pcmdict, mi, zero(Float64)))
end
end
pclist = [mod => (precompilable_time(pcmdict), sort!([(t, mi) for (mi, t) in pcmdict]; by=first)) for (mod, pcmdict) in pcdict]
sort!(pclist; by = pr -> pr.second[1])
return t_grand_total, pclist
end
"""
ttot, pcs = SnoopCompile.parcel(tinf::InferenceTimingNode)
Parcel the "root-most" precompilable MethodInstances into separate modules.
These can be used to generate `precompile` directives to cache the results of type-inference,
reducing latency on first use.
Loosely speaking, a MethodInstance is precompilable if the module that owns the method also
has access to all the types it needs to precompile the instance.
When the root node of an entrance to inference is not itself precompilable, `parcel` examines the
children (and possibly, children's children...) until it finds the first node on each branch that
is precompilable. `MethodInstances` are then assigned to the module that owns the method.
`ttot` is the total inference time; `pcs` is a list of `module => (tmod, pclist)` pairs. For each module,
`tmod` is the amount of inference time affiliated with methods owned by that module; `pclist` is a list
of `(t, mi)` time/MethodInstance tuples.
See also: [`SnoopCompile.write`](@ref).
# Example
We'll use [`SnoopCompile.itrigs_demo`](@ref), which runs `@snoop_inference` on a workload designed to yield reproducible results:
```jldoctest parceltree; setup=:(using SnoopCompile), filter=r"([0-9]*\\.?[0-9]+([eE][-+]?[0-9]+)?|WARNING: replacing module ItrigDemo\\.\\n|UInt8|Float64|SnoopCompile\\.ItrigDemo\\.)"
julia> tinf = SnoopCompile.itrigs_demo()
InferenceTimingNode: 0.004490576/0.004711168 on Core.Compiler.Timings.ROOT() with 2 direct children
julia> ttot, pcs = SnoopCompile.parcel(tinf);
julia> ttot
0.000220592
julia> pcs
1-element Vector{Pair{Module, Tuple{Float64, Vector{Tuple{Float64, Core.MethodInstance}}}}}:
SnoopCompile.ItrigDemo => (0.000220592, [(9.8986e-5, MethodInstance for double(::Float64)), (0.000121606, MethodInstance for double(::UInt8))])
```
Since there was only one module, `ttot` is the same as `tmod`. The `ItrigDemo` module had two precompilable MethodInstances,
each listed with its corresponding inclusive time.
"""
parcel(tinf::InferenceTimingNode; tmin=0.0, kwargs...) = parcel(precompilable_roots(tinf, tmin; kwargs...))
### write
function get_reprs(tmi::Vector{Tuple{Float64,MethodInstance}}; tmin=0.001, kwargs...)
strs = OrderedSet{String}()
modgens = Dict{Module, Vector{Method}}()
tmp = String[]
twritten = 0.0
for (t, mi) in reverse(tmi)
if t >= tmin
if add_repr!(tmp, modgens, mi; check_eval=false, time=t, kwargs...)
str = pop!(tmp)
if !any(rex -> occursin(rex, str), default_exclusions)
push!(strs, str)
twritten += t
end
end
end
end
return strs, twritten
end
function write(io::IO, tmi::Vector{Tuple{Float64,MethodInstance}}; indent::AbstractString=" ", kwargs...)
strs, twritten = get_reprs(tmi; kwargs...)
for str in strs
println(io, indent, str)
end
return twritten, length(strs)
end
function write(prefix::AbstractString, pc::Vector{Pair{Module,Tuple{Float64,Vector{Tuple{Float64,MethodInstance}}}}}; ioreport::IO=stdout, header::Bool=true, always::Bool=false, kwargs...)
if !isdir(prefix)
mkpath(prefix)
end
for (mod, ttmi) in pc
tmod, tmi = ttmi
v, twritten = get_reprs(tmi; kwargs...)
if isempty(v)
println(ioreport, "$mod: no precompile statements out of $tmod")
continue
end
open(joinpath(prefix, "precompile_$(mod).jl"), "w") do io
if header
if any(str->occursin("__lookup", str), v)
println(io, lookup_kwbody_str)
end
println(io, "function _precompile_()")
!always && println(io, " ccall(:jl_generating_output, Cint, ()) == 1 || return nothing")
end
for ln in v
println(io, " ", ln)
end
header && println(io, "end")
end
println(ioreport, "$mod: precompiled $twritten out of $tmod")
end
end
## Profile-guided de-optimization
# These tools can help balance the need for specialization (to achieve good runtime performance)
# against the desire to reduce specialization to reduce latency.
struct MethodLoc
func::Symbol
file::Symbol
line::Int
end
MethodLoc(sf::StackTraces.StackFrame) = MethodLoc(sf.func, sf.file, sf.line)
Base.show(io::IO, ml::MethodLoc) = print(io, ml.func, " at ", ml.file, ':', ml.line, " [inlined and pre-inferred]")
struct PGDSData
trun::Float64 # runtime cost
trtd::Float64 # runtime dispatch cost
tinf::Float64 # inference time (either exclusive/inclusive depending on settings)
nspec::Int # number of specializations
end
PGDSData() = PGDSData(0.0, 0.0, 0.0, 0)
"""
ridata = runtime_inferencetime(tinf::InferenceTimingNode; consts=true, by=inclusive)
ridata = runtime_inferencetime(tinf::InferenceTimingNode, profiledata; lidict, consts=true, by=inclusive)
Compare runtime and inference-time on a per-method basis. The result is a list of `method => PGDSData` pairs, sorted by
estimated savings; each `PGDSData` records the approximate runtime cost, runtime-dispatch cost, inference time, and number of type-specializations for that method.
`trun` is estimated from profiling data, which the user is responsible for capturing before the call.
Typically `tinf` is collected via `@snoop_inference` on the first call (in a fresh session) to a workload,
and the profiling data collected on a subsequent call. In some cases you may need to repeat the workload
several times to collect enough profiling samples.
`profiledata` and `lidict` are obtained from `Profile.retrieve()`.
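# Example

A sketch of the intended workflow (`workload` is a placeholder; run the first line in a fresh session):

```julia
julia> tinf = @snoop_inference workload();   # capture inference timing on the first call

julia> using Profile

julia> @profile workload();                  # capture runtime samples on a later call

julia> ridata = runtime_inferencetime(tinf);

julia> ridata[end]                           # entry with the largest estimated savings
```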
"""
function runtime_inferencetime(tinf::InferenceTimingNode; kwargs...)
pdata = Profile.fetch()
lookup_firstip!(lookups, pdata)
return runtime_inferencetime(tinf, pdata; lidict=lookups, kwargs...)
end
function runtime_inferencetime(tinf::InferenceTimingNode, pdata;
lidict, consts::Bool=true,
by::Union{typeof(exclusive),typeof(inclusive)}=inclusive,
delay::Float64=ccall(:jl_profile_delay_nsec, UInt64, ())/10^9)
tf = flatten(tinf)
tm = accumulate_by_source(Method, tf; by=by) # this `by` is actually irrelevant, but less confusing this way
# MethodInstances that get inlined don't have the linfo field. Guess the method from the name/line/file.
# Filenames are complicated because of variations in how paths are encoded, especially for methods in Base & stdlibs.
methodlookup = Dict{Tuple{Symbol,Int},Vector{Pair{String,Method}}}() # (func, line) => [file => method]
for (_, m) in tm
isa(m, Method) || continue
fm = get!(Vector{Pair{String,Method}}, methodlookup, (m.name, Int(m.line)))
push!(fm, string(m.file) => m)
end
function matchloc(loc::MethodLoc)
fm = get(methodlookup, (loc.func, Int(loc.line)), nothing)
fm === nothing && return loc
meths = Set{Method}()
locfile = string(loc.file)
for (f, m) in fm
endswith(locfile, f) && push!(meths, m)
end
length(meths) == 1 && return pop!(meths)
return loc
end
matchloc(sf::StackTraces.StackFrame) = matchloc(MethodLoc(sf))
ridata = Dict{Union{Method,MethodLoc},PGDSData}()
# Insert the profiling data
lilists, nselfs, nrtds = select_firstip(pdata, lidict)
for (sfs, nself, nrtd) in zip(lilists, nselfs, nrtds)
for sf in sfs
mi = sf.linfo
m = isa(mi, MethodInstance) ? mi.def : matchloc(sf)
if isa(m, Method) || isa(m, MethodLoc)
d = get(ridata, m, PGDSData())
ridata[m] = PGDSData(d.trun + nself*delay, d.trtd + nrtd*delay, d.tinf, d.nspec)
else
@show typeof(m) m
error("whoops")
end
end
end
# Now add inference times & specialization counts. To get the counts we go back to tf rather than using tm.
if !consts
for (t, mi) in accumulate_by_source(MethodInstance, tf; by=by)
isROOT(mi) && continue
m = mi.def
if isa(m, Method)
d = get(ridata, m, PGDSData())
ridata[m] = PGDSData(d.trun, d.trtd, d.tinf + t, d.nspec + 1)
end
end
else
for frame in tf
isROOT(frame) && continue
t = by(frame)
m = MethodInstance(frame).def
if isa(m, Method)
d = get(ridata, m, PGDSData())
ridata[m] = PGDSData(d.trun, d.trtd, d.tinf + t, d.nspec + 1)
end
end
end
# Sort the outputs to try to prioritize opportunities for the developer. Because we have multiple objectives (fast runtime
# and fast compile time), there's no unique sorting order, nor can we predict the cost to runtime performance of reducing
# the method specialization. Here we use the following approximation: we naively estimate "what the inference time could be" if
# there were only one specialization of each method, and the answers are sorted by the estimated savings. This does not
# even attempt to account for any risk to the runtime. For any serious analysis, looking at the scatter plot with
# [`specialization_plot`](@ref) is recommended.
savings(d::PGDSData) = d.tinf * (d.nspec - 1)
savings(pr::Pair) = savings(pr.second)
return sort(collect(ridata); by=savings)
end
function lookup_firstip!(lookups, pdata)
isfirst = true
for (i, ip) in enumerate(pdata)
if isfirst
sfs = get!(()->Base.StackTraces.lookup(ip), lookups, ip)
if !all(sf -> sf.from_c, sfs)
isfirst = false
end
end
if ip == 0
isfirst = true
end
end
return lookups
end
function select_firstip(pdata, lidict)
counter = Dict{eltype(pdata),Tuple{Int,Int}}()
isfirst = true
isrtd = false
for ip in pdata
if isfirst
sfs = lidict[ip]
if !all(sf -> sf.from_c, sfs)
n, nrtd = get(counter, ip, (0, 0))
counter[ip] = (n + 1, nrtd + isrtd)
isfirst = isrtd = false
else
for sf in sfs
isrtd |= FlameGraphs.status(sf) & FlameGraphs.runtime_dispatch
end
end
end
if ip == 0
isfirst = true
isrtd = false
end
end
lilists, nselfs, nrtds = valtype(lidict)[], Int[], Int[]
for (ip, (n, nrtd)) in counter
push!(lilists, lidict[ip])
push!(nselfs, n)
push!(nrtds, nrtd)
end
return lilists, nselfs, nrtds
end
## Analysis of inference triggers
"""
InferenceTrigger(callee::MethodInstance, callerframes::Vector{StackFrame}, btidx::Int, bt)
Organize information about the "triggers" of inference. `callee` is the `MethodInstance` requiring inference,
`callerframes`, `btidx` and `bt` contain information about the caller.
`callerframes` holds the frame(s) of the call site that triggered inference; it is a `Vector{StackFrame}`, rather than a
single `StackFrame`, due to the possibility that the caller was inlined into something else, in which case the first entry
is the direct caller and the last entry corresponds to the MethodInstance into which it was ultimately inlined.
`btidx` is the index in `bt`, the backtrace collected upon entry into inference, corresponding to `callerframes`.
`InferenceTrigger`s are created by calling [`inference_triggers`](@ref).
See also: [`callerinstance`](@ref) and [`callingframe`](@ref).
"""
struct InferenceTrigger
node::InferenceTimingNode
callerframes::Vector{StackTraces.StackFrame}
btidx::Int # callerframes = StackTraces.lookup(bt[btidx])
end
function Base.show(io::IO, itrig::InferenceTrigger)
print(io, "Inference triggered to call ")
printstyled(io, stripmi(MethodInstance(itrig.node)); color=:yellow)
if !isempty(itrig.callerframes)
sf = first(itrig.callerframes)
print(io, " from ")
printstyled(io, sf.func; color=:red, bold=true)
print(io, " (", sf.file, ':', sf.line, ')')
caller = itrig.callerframes[end].linfo
if isa(caller, MethodInstance)
length(itrig.callerframes) == 1 ? print(io, " with specialization ") : print(io, " inlined into ")
printstyled(io, stripmi(caller); color=:blue)
if length(itrig.callerframes) > 1
sf = itrig.callerframes[end]
print(io, " (", sf.file, ':', sf.line, ')')
end
elseif isa(caller, Core.CodeInfo)
print(io, " called from toplevel code ", caller)
end
else
print(io, " called from toplevel")
end
end
"""
mi = callerinstance(itrig::InferenceTrigger)
Return the MethodInstance `mi` of the caller in the selected stackframe in `itrig`.
"""
callerinstance(itrig::InferenceTrigger) = itrig.callerframes[end].linfo
function callerinstances(itrigs::AbstractVector{InferenceTrigger})
callers = Set{MethodInstance}()
for itrig in itrigs
!isempty(itrig.callerframes) && push!(callers, callerinstance(itrig))
end
return callers
end
function callermodule(itrig::InferenceTrigger)
if !isempty(itrig.callerframes)
m = callerinstance(itrig).def
return isa(m, Module) ? m : m.module
end
return nothing
end
# Select the next (caller) frame that's a Julia (as opposed to C) frame; returns the stackframe and its index in bt, or nothing
function next_julia_frame(bt, idx, Δ=1; methodinstanceonly::Bool=true, methodonly::Bool=true)
while 1 <= idx+Δ <= length(bt)
ip = lookups_key(bt[idx+=Δ])
sfs = get!(()->Base.StackTraces.lookup(ip), lookups, ip)
sf = sfs[end]
sf.from_c && continue
mi = sf.linfo
methodinstanceonly && (isa(mi, Core.MethodInstance) || continue)
if isa(mi, MethodInstance)
m = mi.def
methodonly && (isa(m, Method) || continue)
# Exclude frames that are in Core.Compiler
isa(m, Method) && m.module === Core.Compiler && continue
end
return sfs, idx
end
return nothing
end
SnoopCompileCore.exclusive(itrig::InferenceTrigger) = exclusive(itrig.node)
SnoopCompileCore.inclusive(itrig::InferenceTrigger) = inclusive(itrig.node)
StackTraces.stacktrace(itrig::InferenceTrigger) = stacktrace(itrig.node.bt)
isprecompilable(itrig::InferenceTrigger) = isprecompilable(MethodInstance(itrig.node))
"""
itrigs = inference_triggers(tinf::InferenceTimingNode; exclude_toplevel=true)
Collect the "triggers" of inference, each a fresh entry into inference via a call dispatched at runtime.
Each entry in `itrigs` corresponds to a `MethodInstance` that was previously uninferred, or that was freshly inferred for specific constant inputs.
`exclude_toplevel` determines whether calls made from the REPL, `include`, or test suites are excluded.
# Example
We'll use [`SnoopCompile.itrigs_demo`](@ref), which runs `@snoop_inference` on a workload designed to yield reproducible results:
```jldoctest triggers; setup=:(using SnoopCompile), filter=r"([0-9]*\\.?[0-9]+([eE][-+]?[0-9]+)?|.*/inference_demos\\.jl:\\d+|WARNING: replacing module ItrigDemo\\.\\n)"
julia> tinf = SnoopCompile.itrigs_demo()
InferenceTimingNode: 0.004490576/0.004711168 on Core.Compiler.Timings.ROOT() with 2 direct children
julia> itrigs = inference_triggers(tinf)
2-element Vector{InferenceTrigger}:
Inference triggered to call MethodInstance for double(::UInt8) from calldouble1 (/pathto/SnoopCompile/src/inference_demos.jl:86) inlined into MethodInstance for calldouble2(::Vector{Vector{Any}}) (/pathto/SnoopCompile/src/inference_demos.jl:87)
Inference triggered to call MethodInstance for double(::Float64) from calldouble1 (/pathto/SnoopCompile/src/inference_demos.jl:86) inlined into MethodInstance for calldouble2(::Vector{Vector{Any}}) (/pathto/SnoopCompile/src/inference_demos.jl:87)
```
```
julia> edit(itrigs[1]) # opens an editor at the spot in the caller
julia> using Cthulhu
julia> ascend(itrigs[2]) # use Cthulhu to inspect the stacktrace (caller is the second item in the trace)
Choose a call for analysis (q to quit):
> double(::Float64)
calldouble1 at /pathto/SnoopCompile/src/inference_demos.jl:86 => calldouble2(::Vector{Vector{Any}}) at /pathto/SnoopCompile/src/inference_demos.jl:87
calleach(::Vector{Vector{Vector{Any}}}) at /pathto/SnoopCompile/src/inference_demos.jl:88
...
```
"""
function inference_triggers(tinf::InferenceTimingNode; exclude_toplevel::Bool=true)
function first_julia_frame(bt)
ret = next_julia_frame(bt, 1)
if ret === nothing
return StackTraces.StackFrame[], 0
end
return ret
end
itrigs = map(tinf.children) do child
bt = child.bt
bt === nothing && throw(ArgumentError("it seems you've supplied a child node, but backtraces are collected only at the entrance to inference"))
InferenceTrigger(child, first_julia_frame(bt)...)
end
if exclude_toplevel
filter!(maybe_internal, itrigs)
end
return itrigs
end
function maybe_internal(itrig::InferenceTrigger)
for sf in itrig.callerframes
linfo = sf.linfo
if isa(linfo, MethodInstance)
m = linfo.def
if isa(m, Method)
if m.module === Base
m.name === :include_string && return false
m.name === :_include_from_serialized && return false
m.name === :return_types && return false # from `@inferred`
end
m.name === :eval && return false
end
end
match(rextest, string(sf.file)) !== nothing && return false
end
return true
end
"""
itrigcaller = callingframe(itrig::InferenceTrigger)
"Step out" one layer of the stacktrace, referencing the caller of the current frame of `itrig`.
You can retrieve the proximal trigger of inference with `InferenceTrigger(itrigcaller)`.
# Example
We collect data using the [`SnoopCompile.itrigs_demo`](@ref):
```julia
julia> itrig = inference_triggers(SnoopCompile.itrigs_demo())[1]
Inference triggered to call MethodInstance for double(::UInt8) from calldouble1 (/pathto/SnoopCompile/src/parcel_snoop_inference.jl:762) inlined into MethodInstance for calldouble2(::Vector{Vector{Any}}) (/pathto/SnoopCompile/src/parcel_snoop_inference.jl:763)
julia> itrigcaller = callingframe(itrig)
Inference triggered to call MethodInstance for double(::UInt8) from calleach (/pathto/SnoopCompile/src/parcel_snoop_inference.jl:764) with specialization MethodInstance for calleach(::Vector{Vector{Vector{Any}}})
```
"""
function callingframe(itrig::InferenceTrigger)
idx = itrig.btidx
if idx < length(itrig.node.bt)
ret = next_julia_frame(itrig.node.bt, idx)
if ret !== nothing
return InferenceTrigger(itrig.node, ret...)
end
end
return InferenceTrigger(itrig.node, StackTraces.StackFrame[], length(itrig.node.bt)+1)
end
"""
itrig0 = InferenceTrigger(itrig::InferenceTrigger)
Reset an inference trigger to point to the stackframe that triggered inference.
This can be useful to undo the actions of [`callingframe`](@ref) and [`skiphigherorder`](@ref).
"""
InferenceTrigger(itrig::InferenceTrigger) = InferenceTrigger(itrig.node, next_julia_frame(itrig.node.bt, 1)...)
"""
itrignew = skiphigherorder(itrig; exact::Bool=true)
Attempt to skip over frames of higher-order functions that take the callee as a function-argument.
This can be useful if you're analyzing inference triggers for an entire package and would prefer to assign
triggers to package-code rather than Base functions like `map!`, `broadcast`, etc.
# Example
We collect data using the [`SnoopCompile.itrigs_higherorder_demo`](@ref):
```julia
julia> itrig = inference_triggers(SnoopCompile.itrigs_higherorder_demo())[1]
Inference triggered to call MethodInstance for double(::Float64) from mymap! (/pathto/SnoopCompile/src/parcel_snoop_inference.jl:706) with specialization MethodInstance for mymap!(::typeof(SnoopCompile.ItrigHigherOrderDemo.double), ::Vector{Any}, ::Vector{Any})
julia> callingframe(itrig) # step out one (non-inlined) frame
Inference triggered to call MethodInstance for double(::Float64) from mymap (/pathto/SnoopCompile/src/parcel_snoop_inference.jl:710) with specialization MethodInstance for mymap(::typeof(SnoopCompile.ItrigHigherOrderDemo.double), ::Vector{Any})
julia> skiphigherorder(itrig) # step out to frame that doesn't have `double` as a function-argument
Inference triggered to call MethodInstance for double(::Float64) from callmymap (/pathto/SnoopCompile/src/parcel_snoop_inference.jl:711) with specialization MethodInstance for callmymap(::Vector{Any})
```
!!! warning
By default `skiphigherorder` is conservative, and insists on being sure that it's the callee being passed to the higher-order function.
Higher-order functions that do not get specialized (e.g., with `::Function` argument types) will not be skipped over.
You can pass `exact=false` to allow `::Function` to also be passed over, but keep in mind that this may falsely skip some frames.
"""
function skiphigherorder(itrig::InferenceTrigger; exact::Bool=true)
ft = Base.unwrap_unionall(Base.unwrap_unionall(MethodInstance(itrig.node).specTypes).parameters[1])
sfs, idx = itrig.callerframes, itrig.btidx
while idx < length(itrig.node.bt)
if !isempty(sfs)
callermi = sfs[end].linfo
if !hasparameter(callermi.specTypes, ft, exact)
return InferenceTrigger(itrig.node, sfs, idx)
end
end
ret = next_julia_frame(itrig.node.bt, idx)
ret === nothing && return InferenceTrigger(itrig.node, sfs, idx)
sfs, idx = ret
end
return itrig
end
function hasparameter(@nospecialize(typ), @nospecialize(ft), exact::Bool)
isa(typ, Type) || return false
typ = Base.unwrap_unionall(typ)
typ === ft && return true
exact || (typ === Function && return true)
typ === Union{} && return false
if isa(typ, Union)
hasparameter(typ.a, ft, exact) && return true
hasparameter(typ.b, ft, exact) && return true
return false
end
for p in typ.parameters
hasparameter(p, ft, exact) && return true
end
return false
end
"""
ncallees, ncallers = diversity(itrigs::AbstractVector{InferenceTrigger})
Count the number of distinct MethodInstances among the callees and callers, respectively, among the triggers in `itrigs`.
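# Example
A hypothetical sketch (assumes `itrigs` was obtained from [`inference_triggers`](@ref)):

```julia
ncallees, ncallers = diversity(itrigs)
# many callees per caller suggests despecializing the callees;
# many callers per callee suggests improving inferrability in the callers
```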
"""
function diversity(itrigs)
# Analyze caller => callee argument type diversity
callees, callers, ncextra = Set{MethodInstance}(), Set{MethodInstance}(), 0
for itrig in itrigs
push!(callees, MethodInstance(itrig.node))
caller = itrig.callerframes[end].linfo
if isa(caller, MethodInstance)
push!(callers, caller)
else
ncextra += 1
end
end
return length(callees), length(callers) + ncextra
end
# Integrations
AbstractTrees.children(tinf::InferenceTimingNode) = tinf.children
InteractiveUtils.edit(itrig::InferenceTrigger) = edit(Location(itrig.callerframes[end]))
# JET integrations are implemented lazily
"To use `report_caller` do `using JET`"
function report_caller end
"To use `report_callee` do `using JET`"
function report_callee end
"To use `report_callees` do `using JET`"
function report_callees end
filtermod(mod::Module, itrigs::AbstractVector{InferenceTrigger}) = filter(==(mod) ∘ callermodule, itrigs)
### inference trigger trees
# good for organizing into "events"
struct TriggerNode
itrig::Union{Nothing,InferenceTrigger}
children::Vector{TriggerNode}
parent::TriggerNode
TriggerNode() = new(nothing, TriggerNode[])
TriggerNode(parent::TriggerNode, itrig::InferenceTrigger) = new(itrig, TriggerNode[], parent)
end
function Base.show(io::IO, node::TriggerNode)
print(io, "TriggerNode for ")
AbstractTrees.printnode(io, node)
print(io, " with ", length(node.children), " direct children")
end
AbstractTrees.children(node::TriggerNode) = node.children
function AbstractTrees.printnode(io::IO, node::TriggerNode)
if node.itrig === nothing
print(io, "root")
else
print(io, stripmi(MethodInstance(node.itrig.node)))
end
end
function addchild!(node, itrig)
newnode = TriggerNode(node, itrig)
push!(node.children, newnode)
return newnode
end
truncbt(itrig::InferenceTrigger) = itrig.node.bt[max(1, itrig.btidx):end]
function findparent(node::TriggerNode, bt)
node.itrig === nothing && return node # this is the root
btnode = truncbt(node.itrig)
lbt, lbtnode = length(bt), length(btnode)
if lbt > lbtnode && view(bt, lbt - lbtnode + 1 : lbt) == btnode
return node
end
return findparent(node.parent, bt)
end
"""
root = trigger_tree(itrigs)
Organize inference triggers `itrigs` in tree format, grouping items via the call tree.
It is a tree rather than a more general graph because caching of inference results means that each node
is visited only once.
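# Example
A hypothetical sketch (assumes `itrigs` was obtained from [`inference_triggers`](@ref)):

```julia
using AbstractTrees: print_tree
root = trigger_tree(itrigs)
print_tree(root)          # display the call-tree organization of the triggers
itrigs2 = flatten(root)   # recover a flat list in depth-first order
```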
"""
function trigger_tree(itrigs::AbstractVector{InferenceTrigger})
root = node = TriggerNode()
for itrig in itrigs
thisbt = truncbt(itrig)
node = findparent(node, thisbt)
node = addchild!(node, itrig)
end
return root
end
flatten(node::TriggerNode) = flatten!(InferenceTrigger[], node)
function flatten!(itrigs, node::TriggerNode)
if node.itrig !== nothing
push!(itrigs, node.itrig)
end
for child in node.children
flatten!(itrigs, child)
end
return itrigs
end
InteractiveUtils.edit(node::TriggerNode) = edit(node.itrig)
Base.stacktrace(node::TriggerNode) = stacktrace(node.itrig)
### tagged trigger lists
# good for organizing a collection of related triggers
struct TaggedTriggers{TT}
tag::TT
itrigs::Vector{InferenceTrigger}
end
const MethodTriggers = TaggedTriggers{Method}
"""
mtrigs = accumulate_by_source(Method, itrigs::AbstractVector{InferenceTrigger})
Consolidate inference triggers by the method of their caller. `mtrigs` is a vector of `MethodTriggers`,
each pairing a `Method` with the list of `InferenceTrigger`s attributable to it, sorted so that the method
with the most triggers comes last.
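# Example
A hypothetical sketch (assumes `itrigs` was obtained from [`inference_triggers`](@ref)):

```julia
mtrigs = accumulate_by_source(Method, itrigs)
mtrigs[end]           # the result is sorted, so the last entry has the most triggers
summary(mtrigs[end])  # print a per-line breakdown of the triggering calls for that method
```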
"""
function accumulate_by_source(::Type{Method}, itrigs::AbstractVector{InferenceTrigger})
cs = Dict{Method,Vector{InferenceTrigger}}()
for itrig in itrigs
isempty(itrig.callerframes) && continue
mi = callerinstance(itrig)
m = mi.def
if isa(m, Method)
list = get!(Vector{InferenceTrigger}, cs, m)
push!(list, itrig)
end
end
return sort!([MethodTriggers(m, list) for (m, list) in cs]; by=methtrig->length(methtrig.itrigs))
end
function Base.show(io::IO, methtrigs::MethodTriggers)
ncallees, ncallers = diversity(methtrigs.itrigs)
print(io, methtrigs.tag, " (", ncallees, " callees from ", ncallers, " callers)")
end
"""
modtrigs = filtermod(mod::Module, mtrigs::AbstractVector{MethodTriggers})
Select just the method-based triggers arising from a particular module.
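# Example
A hypothetical sketch; `MyPkg` stands in for a module you are analyzing:

```julia
mtrigs = accumulate_by_source(Method, itrigs)
modtrigs = filtermod(MyPkg, mtrigs)   # keep only triggers whose caller method is defined in MyPkg
```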
"""
filtermod(mod::Module, mtrigs::AbstractVector{MethodTriggers}) = filter(mtrig -> mtrig.tag.module === mod, mtrigs)
"""
modtrigs = SnoopCompile.parcel(mtrigs::AbstractVector{MethodTriggers})
Split method-based triggers into collections organized by the module in which the methods were defined.
Returns a `module => list` vector, with the module having the most `MethodTriggers` last.
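# Example
A hypothetical sketch (assumes `mtrigs` was obtained from [`accumulate_by_source`](@ref)):

```julia
bymod = SnoopCompile.parcel(mtrigs)
mod, mtrigs_mod = bymod[end]   # the module with the most MethodTriggers comes last
```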
"""
function parcel(mtrigs::AbstractVector{MethodTriggers})
bymod = Dict{Module,Vector{MethodTriggers}}()
for mtrig in mtrigs
m = mtrig.tag
modlist = get!(valtype(bymod), bymod, m.module)
push!(modlist, mtrig)
end
sort!(collect(bymod); by=pr->length(pr.second))
end
InteractiveUtils.edit(mtrigs::MethodTriggers) = edit(mtrigs.tag)
### inference trigger locations
# useful for analyzing patterns at the level of Methods rather than MethodInstances
struct Location # essentially a LineNumberNode + function name
func::Symbol
file::Symbol
line::Int
end
Location(sf::StackTraces.StackFrame) = Location(sf.func, sf.file, sf.line)
function Location(itrig::InferenceTrigger)
isempty(itrig.callerframes) && return Location(:from_c, :from_c, 0)
return Location(itrig.callerframes[1])
end
Base.show(io::IO, loc::Location) = print(io, loc.func, " at ", loc.file, ':', loc.line)
InteractiveUtils.edit(loc::Location) = edit(Base.fixup_stdlib_path(string(loc.file)), loc.line)
const LocationTriggers = TaggedTriggers{Location}
diversity(loctrigs::LocationTriggers) = diversity(loctrigs.itrigs)
function Base.show(io::IO, loctrigs::LocationTriggers)
ncallees, ncallers = diversity(loctrigs)
print(io, loctrigs.tag, " (", ncallees, " callees from ", ncallers, " callers)")
end
InteractiveUtils.edit(loctrig::LocationTriggers) = edit(loctrig.tag)
"""
loctrigs = accumulate_by_source(itrigs::AbstractVector{InferenceTrigger})
Aggregate inference triggers by location (function, file, and line number) of the caller.
# Example
We collect data using the [`SnoopCompile.itrigs_demo`](@ref):
```julia
julia> itrigs = inference_triggers(SnoopCompile.itrigs_demo())
2-element Vector{InferenceTrigger}:
Inference triggered to call MethodInstance for double(::UInt8) from calldouble1 (/pathto/SnoopCompile/src/parcel_snoop_inference.jl:762) inlined into MethodInstance for calldouble2(::Vector{Vector{Any}}) (/pathto/SnoopCompile/src/parcel_snoop_inference.jl:763)
Inference triggered to call MethodInstance for double(::Float64) from calldouble1 (/pathto/SnoopCompile/src/parcel_snoop_inference.jl:762) inlined into MethodInstance for calldouble2(::Vector{Vector{Any}}) (/pathto/SnoopCompile/src/parcel_snoop_inference.jl:763)
julia> accumulate_by_source(itrigs)
1-element Vector{SnoopCompile.LocationTriggers}:
calldouble1 at /pathto/SnoopCompile/src/parcel_snoop_inference.jl:762 (2 callees from 1 callers)
```
"""
function accumulate_by_source(itrigs::AbstractVector{InferenceTrigger}; bycallee::Bool=true)
cs = IdDict{Any,Vector{InferenceTrigger}}()
for itrig in itrigs
lockey = bycallee ? location_key(itrig) : Location(itrig)
itrigs_loc = get!(Vector{InferenceTrigger}, cs, lockey)
push!(itrigs_loc, itrig)
end
loctrigs = [LocationTriggers(lockey isa Location ? lockey : lockey[1], itrigs_loc) for (lockey, itrigs_loc) in cs]
return sort!(loctrigs; by=loctrig->length(loctrig.itrigs))
end
function location_key(itrig::InferenceTrigger)
# Identify a trigger by both its location and what it calls, since some lines can have multiple callees
loc = Location(itrig)
callee = MethodInstance(itrig.node)
tt = Base.unwrap_unionall(callee.specTypes)
isempty(tt.parameters) && return loc, callee.def # MethodInstance thunk
ft = tt.parameters[1]
return loc, ft
end
filtermod(mod::Module, loctrigs::AbstractVector{LocationTriggers}) = filter(loctrigs) do loctrig
any(==(mod) ∘ callermodule, loctrig.itrigs)
end
function linetable_match(linetable::Vector{Core.LineInfoNode}, sffile::String, sffunc::String, sfline::Int)
idxs = Int[]
for (idx, line) in enumerate(linetable)
(line.line == sfline && String(line.method) == sffunc) || continue
# filename matching is a bit troublesome because of differences in naming of Base & stdlibs, defer it
push!(idxs, idx)
end
length(idxs) == 1 && return idxs
# Look at the filename too
delidxs = Int[]
for (i, idx) in enumerate(idxs)
endswith(sffile, String(linetable[idx].file)) || push!(delidxs, i)
end
deleteat!(idxs, delidxs)
return idxs
end
linetable_match(linetable::Vector{Core.LineInfoNode}, sf::StackTraces.StackFrame) =
linetable_match(linetable, String(sf.file)::String, String(sf.func)::String, Int(sf.line)::Int)
### suggestions
@enum Suggestion begin
UnspecCall # a call with unspecified argtypes
UnspecType # type-call (constructor) that is not fully specified
Invoke # an "invoked" call, i.e., what should normally be an inferrable call
CalleeVariable # for f(args...) when f is a runtime variable
CallerVararg # the caller is a varargs function
CalleeVararg # the callee is a varargs function
InvokedCalleeVararg # callee is varargs and it was an invoked call
ErrorPath # inference aborted because this is on a path guaranteed to throw an exception
FromTestDirect # directly called from a @testset
FromTestCallee # one step removed from @testset
CallerInlineable # the caller is inlineworthy
NoCaller # no caller could be determined (e.g., @async)
FromInvokeLatest # called via `Base.invokelatest`
FromInvoke # called via `invoke`
MaybeFromC # no plausible Julia caller could be identified, but possibly due to a @ccall (e.g., finalizers)
HasCoreBox # has a Core.Box slot or ssavalue
end
struct Suggested
itrig::InferenceTrigger
categories::Vector{Suggestion}
end
Suggested(itrig::InferenceTrigger) = Suggested(itrig, Suggestion[])
function Base.show(io::IO, s::Suggested)
if !isempty(s.itrig.callerframes)
sf = s.itrig.callerframes[1]
print(io, sf.file, ':', sf.line, ": ")
sf = s.itrig.callerframes[end]
else
sf = "<none>"
end
rtcallee = MethodInstance(s.itrig.node)
show_suggest(io, s.categories, rtcallee, sf)
end
Base.haskey(s::Suggested, k::Suggestion) = k in s.categories
function show_suggest(io::IO, categories, rtcallee, sf)
showcaller = true
showvahint = showannotate = false
handled = false
if HasCoreBox ∈ categories
coreboxmsg(io)
return nothing
end
if categories == [FromTestDirect]
printstyled(io, "called by Test"; color=:cyan)
print(io, " (ignore)")
return nothing
end
if ErrorPath ∈ categories
printstyled(io, "error path"; color=:cyan)
print(io, " (deliberately uninferred, ignore)")
showcaller = false
elseif NoCaller ∈ categories
printstyled(io, "unknown caller"; color=:cyan)
print(io, ", possibly from a Task")
showcaller = false
elseif FromInvokeLatest ∈ categories
printstyled(io, "called by invokelatest"; color=:cyan)
print(io, " (ignore)")
showcaller = false
elseif FromInvoke ∈ categories
printstyled(io, "called by invoke"; color=:cyan)
print(io, " (ignore)")
showcaller = false
elseif MaybeFromC ∈ categories
printstyled(io, "no plausible Julia caller could be identified, but possibly due to a @ccall"; color=:cyan)
print(io, " (ignore)")
showcaller = false
else
if FromTestCallee ∈ categories && CallerInlineable ∈ categories && CallerVararg ∈ categories && !any(unspec, categories)
printstyled(io, "inlineable varargs called from Test"; color=:cyan)
print(io, " (ignore, it's likely to be inferred from a function)")
showcaller = false
handled = true
elseif categories == [FromTestCallee, CallerInlineable, UnspecType]
printstyled(io, "inlineable type-specialization called from Test"; color=:cyan)
print(io, " (ignore, it's likely to be inferred from a function)")
showcaller = false
handled = true
elseif CallerVararg ∈ categories && CalleeVararg ∈ categories
printstyled(io, "vararg caller and callee"; color=:cyan)
any(unspec, categories) && printstyled(io, " (uninferred)"; color=:cyan)
showvahint = true
showcaller = false
handled = true
elseif CallerInlineable ∈ categories && CallerVararg ∈ categories && any(unspec, categories)
printstyled(io, "uninferred inlineable vararg caller"; color=:cyan)
print(io, " (options: add relevant specialization, ignore)")
handled = true
elseif InvokedCalleeVararg ∈ categories
printstyled(io, "invoked callee is varargs"; color=:cyan)
showvahint = true
end
if !handled
if UnspecCall ∈ categories
printstyled(io, "non-inferrable or unspecialized call"; color=:cyan)
CallerVararg ∈ categories && printstyled(io, " with vararg caller"; color=:cyan)
CalleeVararg ∈ categories && printstyled(io, " with vararg callee"; color=:cyan)
showannotate = true
end
if UnspecType ∈ categories
printstyled(io, "partial type call"; color=:cyan)
CallerVararg ∈ categories && printstyled(io, " with vararg caller"; color=:cyan)
CalleeVararg ∈ categories && printstyled(io, " with vararg callee"; color=:cyan)
showannotate = true
end
if Invoke ∈ categories
printstyled(io, "invoked callee"; color=:cyan)
# if FromTestCallee ∈ categories || FromTestDirect ∈ categories
# print(io, " (consider precompiling ", sf, ")")
# else
print(io, " (", sf, " may fail to precompile)")
# end
showcaller = false
end
if CalleeVariable ∈ categories
printstyled(io, "variable callee"; color=:cyan)
print(io, ", if possible avoid assigning function to variable;\n perhaps use `cond ? f(a) : g(a)` rather than `func = cond ? f : g; func(a)`")
end
if isempty(categories) || categories ⊆ [FromTestDirect, FromTestCallee, CallerVararg, CalleeVararg, CallerInlineable]
printstyled(io, "Unspecialized or unknown"; color=:cyan)
print(io, " for ", stripmi(rtcallee), " consider `stacktrace(itrig)` or `ascend(itrig)` to investigate more deeply")
showcaller = false
end
end
end
if showvahint
print(io, " (options: ignore, homogenize the arguments, declare an umbrella type, or force-specialize the callee ", rtcallee, " in the caller)")
end
if showannotate
if CallerVararg ∈ categories
print(io, ", ignore or perhaps annotate ", sf, " with result type of ", stripmi(rtcallee))
else
print(io, ", perhaps annotate ", sf, " with result type of ", stripmi(rtcallee))
end
print(io, "\nIf a noninferrable argument is a type or function, Julia's specialization heuristics may be responsible.")
end
# if showcaller
# idx = s.itrig.btidx
# ret = next_julia_frame(s.itrig.node.bt, idx; methodonly=false)
# if ret !== nothing
# sfs, idx = ret
# # if categories != [Inlineable]
# # println(io, "\nimmediate caller(s):")
# # show(io, MIME("text/plain"), sfs)
# # end
# # if categories == [Inlineable]
# # print(io, "inlineable (ignore this one)")
# # if (UnspecCall ∈ categories || UnspecType ∈ categories || CallerVararg ∈ categories) && Inlineable ∈ categories
# # print(io, "\nNote: all callers were inlineable and this was called from a Test. You should be able to ignore this.")
# # end
# end
# # See if we can extract a Test line
# ret = next_julia_frame(s.itrig.node.bt, idx; methodonly=false)
# while ret !== nothing
# sfs, idx = ret
# itest = findfirst(sf -> match(rextest, String(sf.file)) !== nothing, sfs)
# if itest !== nothing && itest > 1
# print(io, "\nFrom test at ", sfs[itest-1])
# break
# end
# ret = next_julia_frame(s.itrig.node.bt, idx; methodonly=false)
# end
# end
end
function coreboxmsg(io::IO)
printstyled(io, "has Core.Box"; color=:red)
print(io, " (fix this before tackling other problems, see https://timholy.github.io/SnoopCompile.jl/stable/snoop_invalidations/#Fixing-Core.Box)")
end
"""
isignorable(s::Suggested)
Returns `true` if `s` is unlikely to be an inference problem in need of fixing.
"""
isignorable(s::Suggestion) = !unspec(s)
isignorable(s::Suggested) = all(isignorable, s.categories)
unspec(s::Suggestion) = s ∈ (UnspecCall, UnspecType, CalleeVariable)
unspec(s::Suggested) = any(unspec, s.categories)
Base.stacktrace(s::Suggested) = stacktrace(s.itrig)
InteractiveUtils.edit(s::Suggested) = edit(s.itrig)
"""
suggest(itrig::InferenceTrigger)
Analyze `itrig` and attempt to suggest an interpretation or remedy. This returns a structure of type `Suggested`;
the easiest thing to do with the result is to `show` it; however, you can also filter a list of suggestions.
# Example
```julia
julia> itrigs = inference_triggers(tinf);
julia> sugs = suggest.(itrigs);
julia> sugs_important = filter(!isignorable, sugs) # discard the ones that probably don't need to be addressed
```
!!! warning
Suggestions are approximate at best; most often, the proposed fixes should not be taken literally,
but instead taken as a hint about the "outcome" of a particular runtime dispatch incident.
The suggestions target calls made with non-inferrable arguments, but often the best place to fix the problem
is at an earlier stage in the code, where the argument was first computed.
You can get much deeper insight via `ascend` (and Cthulhu generally), and even `stacktrace` is often useful.
Suggestions are intended to be a quick and easier-to-comprehend first pass at analyzing an inference trigger.
"""
function suggest(itrig::InferenceTrigger)
s = Suggested(itrig)
# Did this call come from a `@testset`?
fromtest = false
ret = next_julia_frame(itrig.node.bt, 1; methodinstanceonly=false, methodonly=false)
if ret !== nothing
sfs, idx = ret
itest = findfirst(sf -> match(rextest, String(sf.file)) !== nothing, sfs)
if itest !== nothing && itest > 1
fromtest = true
push!(s.categories, FromTestDirect)
end
end
if !fromtest
# Also keep track of inline-worthy caller from Test---these would have been OK had they been called from a function
ret = next_julia_frame(itrig.node.bt, itrig.btidx; methodinstanceonly=false, methodonly=false)
if ret !== nothing
sfs, idx = ret
itest = findfirst(sf -> match(rextest, String(sf.file)) !== nothing, sfs)
if itest !== nothing && itest > 1
push!(s.categories, FromTestCallee)
# It's not clear that the following is useful
tt = Base.unwrap_unionall(itrig.callerframes[end].linfo.specTypes)::DataType
cts = Base.code_typed_by_type(tt; debuginfo=:source)
if length(cts) == 1 && (cts[1][1]::CodeInfo).inlineable
push!(s.categories, CallerInlineable)
end
end
end
end
if isempty(itrig.callerframes)
push!(s.categories, NoCaller)
return s
end
if any(frame -> frame.func === :invokelatest, itrig.callerframes)
push!(s.categories, FromInvokeLatest)
end
sf = itrig.callerframes[end]
tt = Base.unwrap_unionall(sf.linfo.specTypes)::DataType
cts = Base.code_typed_by_type(tt; debuginfo=:source)
rtcallee = MethodInstance(itrig.node)
if Base.isvarargtype(tt.parameters[end])
push!(s.categories, CallerVararg)
end
maybec = false
for (ct::CodeInfo, _) in cts
# Check for Core.Box
if hascorebox(ct)
push!(s.categories, HasCoreBox)
end
ltidxs = linetable_match(ct.linetable, itrig.callerframes[1])
stmtidxs = findall(∈(ltidxs), ct.codelocs)
rtcalleename = isa(rtcallee.def, Method) ? (rtcallee.def::Method).name : nothing
for stmtidx in stmtidxs
stmt = ct.code[stmtidx]
if isa(stmt, Expr)
if stmt.head === :invoke
mi = stmt.args[1]::MethodInstance
if mi == MethodInstance(itrig.node)
if mi.def.isva
push!(s.categories, InvokedCalleeVararg)
else
push!(s.categories, Invoke)
end
end
elseif stmt.head === :call
callee = stmt.args[1]
if isa(callee, Core.SSAValue)
callee = unwrapconst(ct.ssavaluetypes[callee.id])
if callee === Any
push!(s.categories, CalleeVariable)
# return s
end
elseif isa(callee, Core.Argument)
callee = unwrapconst(ct.slottypes[callee.n])
if callee === Any
push!(s.categories, CalleeVariable)
# return s
end
end
# argtyps = stmt.args[2]
# First, check if this is an error path
skipme = false
if stmtidx + 2 <= length(ct.code)
chkstmt = ct.code[stmtidx + 2]
if isa(chkstmt, Core.ReturnNode) && !isdefined(chkstmt, :val)
push!(s.categories, ErrorPath)
unique!(s.categories)
return s
end
end
calleef = nothing
rtm = rtcallee.def::Method
isssa = false
if isa(callee, GlobalRef) && isa(rtcallee.def, Method)
calleef = getfield(callee.mod, callee.name)
if calleef === Core._apply_iterate
callee = stmt.args[3]
calleef, isssa = getcalleef(callee, ct)
# argtyps = stmt.args[4]
elseif calleef === Base.invoke
push!(s.categories, FromInvoke)
callee = stmt.args[2]
calleef, isssa = getcalleef(callee, ct)
end
elseif isa(callee, Function) || isa(callee, UnionAll)
calleef = callee
end
if calleef === Any
push!(s.categories, CalleeVariable)
end
if isa(calleef, Function)
nameof(calleef) == rtcalleename || continue
# if isa(argtyps, Core.Argument)
# argtyps = unwrapconst(ct.slottypes[argtyps.n])
# elseif isa(argtyps, Core.SSAValue)
# argtyps = unwrapconst(ct.ssavaluetypes[argtyps.id])
# end
meths = methods(calleef)
if rtm ∈ meths
if rtm.isva
push!(s.categories, CalleeVararg)
end
push!(s.categories, UnspecCall)
elseif isempty(meths) && isssa
push!(s.categories, CalleeVariable)
elseif isssa
error("unhandled ssa condition on ", itrig)
elseif isempty(meths)
if isa(calleef, Core.Builtin)
else
error("unhandled meths are empty with calleef ", calleef, " on ", itrig)
end
end
elseif isa(calleef, UnionAll)
tt = Base.unwrap_unionall(calleef)
if tt <: Type
T = tt.parameters[1]
else
T = tt
end
if (Base.unwrap_unionall(T)::DataType).name.name === rtcalleename
push!(s.categories, UnspecType)
end
end
elseif stmt.head === :foreigncall
maybec = true
end
end
end
end
if isempty(s.categories) && maybec
push!(s.categories, MaybeFromC)
end
unique!(s.categories)
return s
end
function unwrapconst(@nospecialize(arg))
if isa(arg, Core.Const)
return arg.val
elseif isa(arg, Core.PartialStruct)
return arg.typ
elseif @static isdefined(Core.Compiler, :MaybeUndef) ? isa(arg, Core.Compiler.MaybeUndef) : false
return arg.typ
end
return arg
end
function getcalleef(@nospecialize(callee), ct)
if isa(callee, GlobalRef)
return getfield(callee.mod, callee.name), false
elseif isa(callee, Function) || isa(callee, Type)
return callee, false
elseif isa(callee, Core.SSAValue)
return unwrapconst(ct.ssavaluetypes[callee.id]), true
elseif isa(callee, Core.Argument)
return unwrapconst(ct.slottypes[callee.n]), false
end
error("unhandled callee ", callee, " with type ", typeof(callee))
end
function hascorebox(@nospecialize(typ))
if isa(typ, CodeInfo)
ct = typ
for typlist in (ct.slottypes, ct.ssavaluetypes)
for typ in typlist
if hascorebox(typ)
return true
end
end
end
end
typ = unwrapconst(typ)
isa(typ, Type) || return false
typ === Core.Box && return true
typ = Base.unwrap_unionall(typ)
typ === Union{} && return false
if isa(typ, Union)
return hascorebox(typ.a) | hascorebox(typ.b)
end
for p in typ.parameters
hascorebox(p) && return true
end
return false
end
function Base.summary(io::IO, mtrigs::MethodTriggers)
callers = callerinstances(mtrigs.itrigs)
m = mtrigs.tag
println(io, m, " had ", length(callers), " specializations")
hascb = false
for mi in callers
tt = Base.unwrap_unionall(mi.specTypes)::DataType
mlist = Base._methods_by_ftype(tt, -1, Base.get_world_counter())
if length(mlist) < 10
cts = Base.code_typed_by_type(tt; debuginfo=:source)
for (ct::CodeInfo, _) in cts
if hascorebox(ct)
hascb = true
print(io, mi, " ")
coreboxmsg(io)
println(io)
break
end
end
else
@warn "not checking $mi for Core.Box, too many methods"
end
hascb && break
end
loctrigs = accumulate_by_source(mtrigs.itrigs)
sort!(loctrigs; by=loctrig->loctrig.tag.line)
println(io, "Triggering calls:")
for loctrig in loctrigs
itrig = loctrig.itrigs[1]
ft = (Base.unwrap_unionall(MethodInstance(itrig.node).specTypes)::DataType).parameters[1]
loc = loctrig.tag
if loc.func == m.name
print(io, "Line ", loctrig.tag.line)
else
print(io, "Inlined ", loc)
end
println(io, ": calling ", ft2f(ft), " (", length(loctrig.itrigs), " instances)")
end
end
Base.summary(mtrigs::MethodTriggers) = summary(stdout, mtrigs)
struct ClosureF
ft
end
function Base.show(io::IO, cf::ClosureF)
lnns = [LineNumberNode(Int(m.line), m.file) for m in Base.MethodList(cf.ft.name.mt)]
print(io, "closure ", cf.ft, " at ")
if length(lnns) == 1
print(io, lnns[1])
else
sort!(lnns; by=lnn->(lnn.file, lnn.line))
# avoid the repr with #= =#
print(io, '[')
for (i, lnn) in enumerate(lnns)
print(io, lnn.file, ':', lnn.line)
i < length(lnns) && print(io, ", ")
end
print(io, ']')
end
end
function ft2f(@nospecialize(ft))
if isa(ft, DataType)
return ft <: Type ? #= Type{T} =# ft.parameters[1] :
isdefined(ft, :instance) ? #= Function =# ft.instance : #= closure =# ClosureF(ft)
end
error("unhandled: ", ft)
end
function Base.summary(io::IO, loctrig::LocationTriggers)
ncallees, ncallers = diversity(loctrig)
if ncallees > ncallers
callees = unique([Method(itrig.node) for itrig in loctrig.itrigs])
println(io, ncallees, " callees from ", ncallers, " callers, consider despecializing the callee(s):")
show(io, MIME("text/plain"), callees)
println(io, "\nor improving inferrability of the callers")
else
cats_callee_sfs = unique(first, [(suggest(itrig).categories, MethodInstance(itrig.node), itrig.callerframes) for itrig in loctrig.itrigs])
println(io, ncallees, " callees from ", ncallers, " callers, consider improving inference in the caller(s). Recommendations:")
for (catg, callee, sfs) in cats_callee_sfs
show_suggest(io, catg, callee, isempty(sfs) ? "<none>" : sfs[end])
end
end
end
Base.summary(loctrig::LocationTriggers) = summary(stdout, loctrig)
struct SuggestNode
s::Union{Nothing,Suggested}
children::Vector{SuggestNode}
end
SuggestNode(s::Union{Nothing,Suggested}) = SuggestNode(s, SuggestNode[])
AbstractTrees.children(node::SuggestNode) = node.children
function suggest(node::TriggerNode)
stree = node.itrig === nothing ? SuggestNode(nothing) : SuggestNode(suggest(node.itrig))
suggest!(stree, node)
end
function suggest!(stree, node)
for child in node.children
newnode = SuggestNode(suggest(child.itrig))
push!(stree.children, newnode)
suggest!(newnode, child)
end
return stree
end
function Base.show(io::IO, node::SuggestNode)
if node.s === nothing
print(io, "no inference trigger")
else
show(io, node.s)
end
print(io, " (", length(node.children), " children)")
end
function strip_prefix(io::IO, obj, prefix)
print(io, obj)
str = String(take!(io))
return startswith(str, prefix) ? str[length(prefix)+1:end] : str
end
strip_prefix(obj, prefix) = strip_prefix(IOBuffer(), obj, prefix)
stripmi(args...) = strip_prefix(args..., "MethodInstance for ")
stripifi(args...) = strip_prefix(args..., "InferenceFrameInfo for ")
## Flamegraph creation
"""
flamegraph(tinf::InferenceTimingNode; tmin=0.0, excluded_modules=Set([Main]), mode=nothing)
Convert the call tree of inference timings returned from `@snoop_inference` into a FlameGraph.
Returns a FlameGraphs.FlameGraph structure that represents the timing trace recorded for
type inference.
Frames that take less than `tmin` seconds of inclusive time (i.e., total time including the frame
and all of its children) will not be included in the resulting FlameGraph.
This can be helpful if you have a very big profile, to save on processing time.
Non-precompilable frames are marked in reddish colors. `excluded_modules` can be used to mark methods
defined in modules to which you cannot or do not wish to add precompiles.
`mode` controls how frames are named in tools like ProfileView.
`nothing` uses the default of just the qualified function name; alternatively, supplying a
`mode=Dict(method => count)` that counts the number of specializations of each method
causes the specialization count to be included in the frame name.
# Example
We'll use [`SnoopCompile.flatten_demo`](@ref), which runs `@snoop_inference` on a workload designed to yield reproducible results:
```jldoctest flamegraph; setup=:(using SnoopCompile), filter=r"([0-9]*\\.?[0-9]+([eE][-+]?[0-9]+)?/[0-9]*\\.?[0-9]+([eE][-+]?[0-9]+)?|at.*typeinfer\\.jl:\\d+|0:\\d+|WARNING: replacing module FlattenDemo\\.\\n)"
julia> tinf = SnoopCompile.flatten_demo()
InferenceTimingNode: 0.002148974/0.002767166 on Core.Compiler.Timings.ROOT() with 1 direct children
julia> fg = flamegraph(tinf)
Node(FlameGraphs.NodeData(ROOT() at typeinfer.jl:75, 0x00, 0:3334431))
```
```
julia> ProfileView.view(fg); # Display the FlameGraph in a package that supports it
```
You should be able to reconcile the resulting flamegraph to `print_tree(tinf)` (see [`flatten`](@ref)).
The empty horizontal periods in the flamegraph correspond to times when something other than inference is running.
The total width of the flamegraph is set from the `ROOT` node.
"""
function FlameGraphs.flamegraph(tinf::InferenceTimingNode; tmin = 0.0, excluded_modules=Set([Main::Module]), mode=nothing)
isROOT(tinf) && isempty(tinf.children) && @warn "Empty profile: no compilation was recorded."
io = IOBuffer()
# Compute a "root" frame for the top-level node, to cover the whole profile
node_data, _ = _flamegraph_frame(io, tinf, tinf.start_time, true, excluded_modules, mode; toplevel=true)
root = Node(node_data)
if !isROOT(tinf)
node_data, child_check_precompilable = _flamegraph_frame(io, tinf, tinf.start_time, true, excluded_modules, mode; toplevel=false)
root = addchild(root, node_data)
end
return _build_flamegraph!(root, io, tinf, tinf.start_time, tmin, true, excluded_modules, mode)
end
function _build_flamegraph!(root, io::IO, node::InferenceTimingNode, start_secs, tmin, check_precompilable, excluded_modules, mode)
for child in node.children
if inclusive(child) > tmin
node_data, child_check_precompilable = _flamegraph_frame(io, child, start_secs, check_precompilable, excluded_modules, mode; toplevel=false)
node = addchild(root, node_data)
_build_flamegraph!(node, io, child, start_secs, tmin, child_check_precompilable, excluded_modules, mode)
end
end
return root
end
# Create a profile frame for this node
function _flamegraph_frame(io::IO, node::InferenceTimingNode, start_secs, check_precompilable::Bool, excluded_modules, mode; toplevel)
function func_name(mi::MethodInstance, ::Nothing)
m = mi.def
return isa(m, Method) ? string(m.module, '.', m.name) : string(m, '.', "thunk")
end
function func_name(mi::MethodInstance, methcounts::AbstractDict{Method})
str = func_name(mi, nothing)
m = mi.def
if isa(m, Method)
n = get(methcounts, m, nothing)
if n !== nothing
str = string(str, " (", n, ')')
end
end
return str
end
function func_name(io::IO, mi_info::InferenceFrameInfo, mode)
if mode === :slots
show(io, mi_info)
str = String(take!(io))
startswith(str, "InferenceFrameInfo for ") && (str = str[length("InferenceFrameInfo for ")+1:end])
return str
elseif mode === :spec
return frame_name(io, mi_info)
else
return func_name(MethodInstance(mi_info), mode)
end
end
mistr = Symbol(func_name(io, InferenceTiming(node).mi_info, mode))
mi = MethodInstance(node)
m = mi.def
sf = isa(m, Method) ? StackFrame(mistr, mi.def.file, mi.def.line, mi, false, false, UInt64(0x0)) :
StackFrame(mistr, :unknown, 0, mi, false, false, UInt64(0x0))
status = 0x0 # "default" status -- see FlameGraphs.jl
if check_precompilable
mod = isa(m, Method) ? m.module : m
ispc = isprecompilable(mi; excluded_modules)
check_precompilable = !ispc
if !ispc
status |= FlameGraphs.runtime_dispatch
end
end
# Check for const-propagation
if hasconstprop(InferenceTiming(node))
status |= FlameGraphs.gc_event
end
start = node.start_time - start_secs
if toplevel
# Compute a range over the whole profile for the top node.
stop_secs = isROOT(node) ? max_end_time(node) : max_end_time(node, true)
range = round(Int, start*1e9) : round(Int, (stop_secs - start_secs)*1e9)
else
range = round(Int, start*1e9) : round(Int, (start + inclusive(node))*1e9)
end
return FlameGraphs.NodeData(sf, status, range), check_precompilable
end
hasconstprop(f::InferenceTiming) = hasconstprop(f.mi_info)
hasconstprop(mi_info::Core.Compiler.Timings.InferenceFrameInfo) = any(isconstant, mi_info.slottypes)
isconstant(@nospecialize(t)) = isa(t, Core.Const) && !isa(t.val, Union{Type,Function})
function frame_name(io::IO, mi_info::InferenceFrameInfo)
frame_name(io, mi_info.mi::MethodInstance)
end
function frame_name(io::IO, mi::MethodInstance)
m = mi.def
isa(m, Module) && return "thunk"
return frame_name(io, m.name, mi.specTypes)
end
# Special printing for Type Tuples so they're less ugly in the FlameGraph
function frame_name(io::IO, name, @nospecialize(tt::Type{<:Tuple}))
try
Base.show_tuple_as_call(io, name, tt)
v = String(take!(io))
return v
catch e
e isa InterruptException && rethrow()
@warn "Error displaying frame: $e"
return name
end
end
# NOTE: The "root" node doesn't cover the whole profile, because it's only the _complement_
# of the inference times (so it's missing the _overhead_ from the measurement).
# So we need to manually create a root node that covers the whole thing.
function max_end_time(node::InferenceTimingNode, recursive::Bool=false, tmax=-one(node.start_time))
# It's possible that node is already the longest-reaching node.
t_end = node.start_time + inclusive(node)
# It's also possible that the last child extends past the end of node. (I think this is
# possible because of the small unmeasured overhead in computing these measurements.)
last_node = isempty(node.children) ? node : node.children[end]
child_end = last_node.start_time + inclusive(last_node)
# Return the maximum end time to make sure the top node covers the entire graph.
tmax = max(t_end, child_end, tmax)
if recursive
for child in node.children
tmax = max_end_time(child, true, tmax)
end
end
return tmax
end
for IO in (IOContext{Base.TTY}, IOContext{IOBuffer}, IOBuffer)
for T = (InferenceTimingNode, InferenceTrigger, Precompiles, MethodLoc, MethodTriggers, Location, LocationTriggers)
@warnpcfail precompile(show, (IO, T))
end
end
@warnpcfail precompile(flamegraph, (InferenceTimingNode,))
@warnpcfail precompile(inference_triggers, (InferenceTimingNode,))
@warnpcfail precompile(flatten, (InferenceTimingNode,))
@warnpcfail precompile(accumulate_by_source, (Vector{InferenceTiming},))
@warnpcfail precompile(isprecompilable, (MethodInstance,))
@warnpcfail precompile(parcel, (InferenceTimingNode,))
"""
times, info = SnoopCompile.read_snoop_llvm("func_names.csv", "llvm_timings.yaml"; tmin_secs=0.0)
Reads the log file produced by the compiler and returns the structured representations.
The results will only contain modules that took longer than `tmin_secs` to optimize.
## Return value
- `times` contains the time spent optimizing each module, as a Pair from the time to an
array of Strings, one for every MethodInstance in that llvm module.
- `info` is a Dict containing statistics for each MethodInstance encountered, from before
and after optimization, including number of instructions and number of basicblocks.
## Example
```julia
julia> @snoop_llvm "func_names.csv" "llvm_timings.yaml" begin
using InteractiveUtils
@eval InteractiveUtils.peakflops()
end
Launching new julia process to run commands...
done.
julia> times, info = SnoopCompile.read_snoop_llvm("func_names.csv", "llvm_timings.yaml", tmin_secs = 0.025);
julia> times
3-element Vector{Pair{Float64, Vector{String}}}:
0.028170923 => ["Tuple{typeof(LinearAlgebra.copy_transpose!), Array{Float64, 2}, Base.UnitRange{Int64}, Base.UnitRange{Int64}, Array{Float64, 2}, Base.UnitRange{Int64}, Base.UnitRange{Int64}}"]
0.031356962 => ["Tuple{typeof(Base.copyto!), Array{Float64, 2}, Base.UnitRange{Int64}, Base.UnitRange{Int64}, Array{Float64, 2}, Base.UnitRange{Int64}, Base.UnitRange{Int64}}"]
0.149138788 => ["Tuple{typeof(LinearAlgebra._generic_matmatmul!), Array{Float64, 2}, Char, Char, Array{Float64, 2}, Array{Float64, 2}, LinearAlgebra.MulAddMul{true, true, Bool, Bool}}"]
julia> info
Dict{String, NamedTuple{(:before, :after), Tuple{NamedTuple{(:instructions, :basicblocks), Tuple{Int64, Int64}}, NamedTuple{(:instructions, :basicblocks), Tuple{Int64, Int64}}}}} with 3 entries:
"Tuple{typeof(LinearAlgebra.copy_transpose!), Ar… => (before = (instructions = 651, basicblocks = 83), after = (instructions = 348, basicblocks = 40…
"Tuple{typeof(Base.copyto!), Array{Float64, 2}, … => (before = (instructions = 617, basicblocks = 77), after = (instructions = 397, basicblocks = 37…
"Tuple{typeof(LinearAlgebra._generic_matmatmul!)… => (before = (instructions = 4796, basicblocks = 824), after = (instructions = 1421, basicblocks =…
```
"""
function read_snoop_llvm(func_csv_file, llvm_yaml_file; tmin_secs=0.0)
func_csv = _read_snoop_llvm_csv(func_csv_file)
llvm_yaml = YAML.load_file(llvm_yaml_file)
jl_names = Dict(r[1]::String => r[2]::String for r in func_csv)
try_get_jl_name(name) = if name in keys(jl_names)
jl_names[name]
else
@warn "Couldn't find $name"
name
end
time_secs(llvm_module) = llvm_module["time_ns"] / 1e9
times = [
time_secs(llvm_module) => [
try_get_jl_name(name)
for (name,_) in llvm_module["before"]
] for llvm_module in llvm_yaml
if time_secs(llvm_module) > tmin_secs
]
info = Dict(
try_get_jl_name(name) => (;
before = (;
instructions = before_stats["instructions"],
basicblocks = before_stats["basicblocks"],
),
after = (;
instructions = after_stats["instructions"],
basicblocks = after_stats["basicblocks"],
),
)
for llvm_module in llvm_yaml
for (name, before_stats) in llvm_module["before"]
        for (name2, after_stats) in llvm_module["after"]
        if name == name2 && time_secs(llvm_module) > tmin_secs
)
# sort times so that the most costly items are displayed last
return (sort(times), info)
end
"""
`SnoopCompile._read_snoop_llvm_csv("compiledata.csv")` reads the log file produced by the
compiler and returns the function names as an array of pairs.
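
For illustration (the file name and entries here are hypothetical), the returned value
might look like:

```julia
pairs = SnoopCompile._read_snoop_llvm_csv("compiledata.csv")
# Vector{Pair{String,String}}, e.g. entries such as
#  "julia_sum_1234" => "Tuple{typeof(Base.sum), Vector{Int64}}"
```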
"""
function _read_snoop_llvm_csv(filename)
data = Vector{Pair{String,String}}()
# Format is [^\t]+\t[^\t]+. That is, tab-separated entries. No quotations or other
# whitespace are considered.
for line in eachline(filename)
c_name, jl_type = split2(line, '\t')
(length(c_name) < 2 || length(jl_type) < 2) && continue
push!(data, c_name => jl_type)
end
return data
end

function reprcontext(mod::Module, @nospecialize(T))
# First check whether supplying module context allows evaluation
rplain = repr(T; context=:module=>mod)
try
ex = Meta.parse(rplain)
Core.eval(mod, ex)
return rplain
catch
# Add full module context
return repr(T; context=:module=>nothing)
end
end
let known_type_cache = IdDict{Tuple{Module,Tuple{Vararg{Symbol}},Symbol},Bool}()
global known_type
function known_type(mod::Module, @nospecialize(T::Union{Type,TypeVar}))
function startswith(@nospecialize(a::Tuple{Vararg{Symbol}}), @nospecialize(b::Tuple{Vararg{Symbol}}))
length(b) >= length(a) || return false
for i = 1:length(a)
a[i] == b[i] || return false
end
return true
end
function firstname(@nospecialize(tpath::Tuple{Vararg{Symbol}}))
i = 1
while i <= length(tpath)
sym = tpath[i]
sym === :Main || return sym
i += 1
end
return :ISNOTAMODULENAME
end
strippedname(tn::Core.TypeName) = Symbol(string(tn.name)[2:end])
if isa(T, TypeVar)
return known_type(mod, T.ub) && known_type(mod, T.lb)
end
T === Union{} && return true
T = Base.unwrap_unionall(T)
if isa(T, Union)
return known_type(mod, T.a) & known_type(mod, T.b)
end
T = T::DataType
tn = T.name
tpath = fullname(tn.module)
key = (mod, tpath, tn.name)
kt = get(known_type_cache, key, nothing)
if kt === nothing
kt = startswith(fullname(mod), tpath) ||
ccall(:jl_get_module_of_binding, Ptr{Cvoid}, (Any, Any), mod, firstname(tpath)) != C_NULL ||
(isdefined(mod, tn.name) && (T2 = getfield(mod, tn.name); isa(T2, Type) && Base.unwrap_unionall(T2) === T)) ||
(T <: Function && isdefined(mod, strippedname(tn)) && (f = getfield(mod, strippedname(tn)); typeof(f) === T))
known_type_cache[key] = kt
end
kt === false && return false
for p in T.parameters
isa(p, Type) || continue
known_type(mod, p) || return false
end
return true
end
end
function add_repr!(list, modgens::Dict{Module, Vector{Method}}, mi::MethodInstance, topmod::Module=mi.def.module; check_eval::Bool, time=nothing, kwargs...)
# Create the string representation of the signature
# Use special care with keyword functions, anonymous functions
tt = Base.unwrap_unionall(mi.specTypes)
m = mi.def
p = tt.parameters[1] # the portion of the signature related to the function itself
paramrepr = map(T->reprcontext(topmod, T), Iterators.drop(tt.parameters, 1)) # all the rest of the args
if any(str->occursin('#', str), paramrepr)
@debug "Skipping $tt due to argument types having anonymous bindings"
return false
end
mname, mmod = String(Base.unwrap_unionall(p).name.name), m.module # m.name strips the kw identifier
mkw = match(kwrex, mname)
mkwbody = match(kwbodyrex, mname)
isgen = match(genrex, mname) !== nothing
isanon = match(anonrex, mname) !== nothing || match(innerrex, mname) !== nothing
isgen && (mkwbody = nothing)
if mkw !== nothing
# Keyword function
fname = mkw.captures[1] === nothing ? mkw.captures[2] : mkw.captures[1]
fkw = "Core.kwftype(typeof($fname))"
return add_if_evals!(list, topmod, fkw, paramrepr, tt; check_eval=check_eval, time=time)
elseif mkwbody !== nothing
ret = handle_kwbody(topmod, m, paramrepr, tt; check_eval = check_eval, kwargs...)
if ret !== nothing
push!(list, append_time(ret, time))
return true
end
elseif isgen
# Generator for a @generated function
if !haskey(modgens, m.module)
callers = modgens[m.module] = methods_with_generators(m.module)
else
callers = modgens[m.module]
end
for caller in callers
if nameof(caller.generator.gen) == m.name
# determine whether the generator is being called from a kwbody method
sig = Base.unwrap_unionall(caller.sig)
cname, cmod = String(sig.parameters[1].name.name), caller.module
cparamrepr = map(repr, Iterators.drop(sig.parameters, 1))
csigstr = tuplestring(cparamrepr)
mkwc = match(kwbodyrex, cname)
if mkwc === nothing
getgen = "typeof(which($(caller.name),$csigstr).generator.gen)"
return add_if_evals!(list, topmod, getgen, paramrepr, tt; check_eval=check_eval, time=time)
else
getgen = "which(Core.kwfunc($(mkwc.captures[1])),$csigstr).generator.gen"
ret = handle_kwbody(topmod, caller, cparamrepr, tt; check_eval = check_eval, kwargs...) #, getgen)
if ret !== nothing
push!(list, append_time(ret, time))
return true
end
end
break
end
end
elseif isanon
# Anonymous function, wrap in an `isdefined`
prefix = "isdefined($mmod, Symbol(\"$mname\")) && "
        fstr = "getfield($mmod, Symbol(\"$mname\"))" # this is universal; the var"name" syntax requires Julia 1.3+
return add_if_evals!(list, topmod, fstr, paramrepr, tt; prefix=prefix, check_eval = check_eval, time=time)
end
return add_if_evals!(list, topmod, reprcontext(topmod, p), paramrepr, tt, check_eval = check_eval, time=time)
end
function handle_kwbody(topmod::Module, m::Method, paramrepr, tt, fstr="fbody"; check_eval = true)
nameparent = Symbol(match(r"^#([^#]*)#", String(m.name)).captures[1])
if !isdefined(m.module, nameparent)
@debug "Module $topmod: skipping $m due to inability to look up kwbody parent" # see example related to issue #237
return nothing
end
fparent = getfield(m.module, nameparent)
pttstr = tuplestring(paramrepr[m.nkw+2:end])
whichstr = "which($nameparent, $pttstr)"
can1, exc1 = can_eval(topmod, whichstr, check_eval)
if can1
ttstr = tuplestring(paramrepr)
pcstr = """
let fbody = try Base.bodyfunction($whichstr) catch missing end
if !ismissing(fbody)
precompile($fstr, $ttstr)
end
end"""
can2, exc2 = can_eval(topmod, pcstr, check_eval)
if can2
return pcstr
else
@debug "Module $topmod: skipping $tt due to kwbody lookup failure" exception=exc2 _module=topmod _file="precompile_$topmod.jl"
end
else
@debug "Module $topmod: skipping $tt due to kwbody caller lookup failure" exception=exc1 _module=topmod _file="precompile_$topmod.jl"
end
return nothing
end
tupletypestring(params) = "Tuple{" * join(params, ',') * '}'
tupletypestring(fstr::AbstractString, params::AbstractVector{<:AbstractString}) =
tupletypestring([fstr; params])
tuplestring(params) = isempty(params) ? "()" : '(' * join(params, ',') * ",)"
"""
can_eval(mod::Module, str::AbstractString, check_eval::Bool=true)
Checks if the precompilation statement can be evaled.
In some cases, you may want to bypass this check by passing `check_eval=false` to increase the snooping performance.
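
## Example

A minimal sketch (the module and signature string here are illustrative):

```julia
ok, exc = can_eval(Main, "Tuple{typeof(Base.sum),Vector{Int}}")
# ok === true and exc === nothing if the string parses and evaluates in `Main`;
# otherwise ok === false and exc holds the caught exception
```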
"""
function can_eval(mod::Module, str::AbstractString, check_eval::Bool=true)
if check_eval
try
ex = Meta.parse(str)
if mod === Core
#https://github.com/timholy/SnoopCompile.jl/issues/76
Core.eval(Main, ex)
else
Core.eval(mod, ex)
end
catch e
return false, e
end
end
return true, nothing
end
"""
add_if_evals!(pclist, mod::Module, fstr, params, tt; prefix = "", check_eval::Bool=true)
Adds the precompilation statements only if they can be evaled. It uses [`can_eval`](@ref) internally.
In some cases, you may want to bypass this check by passing `check_eval=false` to increase the snooping performance.
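
## Example

A sketch with illustrative arguments; on success `pclist` gains a `Base.precompile(...)` statement:

```julia
pclist = String[]
add_if_evals!(pclist, Main, "typeof(Base.sum)", ["Vector{Int}"],
              Tuple{typeof(sum),Vector{Int}})
# pclist now contains "Base.precompile(Tuple{typeof(Base.sum),Vector{Int}})"
# (possibly with a time annotation if `time` was supplied)
```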
"""
function add_if_evals!(pclist, mod::Module, fstr, params, tt; prefix = "", check_eval::Bool=true, time=nothing)
ttstr = tupletypestring(fstr, params)
can, exc = can_eval(mod, ttstr, check_eval)
if can
push!(pclist, append_time(prefix*wrap_precompile(ttstr), time))
return true
else
@debug "Module $mod: skipping $tt due to eval failure" exception=exc _module=mod _file="precompile_$mod.jl"
end
return false
end
append_time(str, ::Nothing) = str
append_time(str, t::AbstractFloat) = str * " # time: " * string(Float32(t))
wrap_precompile(ttstr::AbstractString) = "Base.precompile(" * ttstr * ')' # use `Base.` to avoid conflict with Core and Pkg
const default_exclusions = Set([
r"\bMain\b",
])
function split2(str, on)
i = findfirst(isequal(on), str)
i === nothing && return str, ""
return (SubString(str, firstindex(str), prevind(str, first(i))),
SubString(str, nextind(str, last(i))))
end
# Write precompiles for userimg.jl
const warnpcfail_str = """
# Use
# @warnpcfail precompile(args...)
# if you want to be warned when a precompile directive fails
macro warnpcfail(ex::Expr)
modl = __module__
file = __source__.file === nothing ? "?" : String(__source__.file)
line = __source__.line
quote
\$(esc(ex)) || @warn \"\"\"precompile directive
\$(\$(Expr(:quote, ex)))
failed. Please report an issue in \$(\$modl) (after checking for duplicates) or remove this directive.\"\"\" _file=\$file _line=\$line
end
end
"""
function write(io::IO, pc::Vector{<:AbstractString}; writewarnpcfail::Bool=true)
writewarnpcfail && println(io, warnpcfail_str, '\n')
for ln in pc
println(io, ln)
end
end
function write(filename::AbstractString, pc::Vector; kwargs...)
path, fn = splitdir(filename)
if !isdir(path)
mkpath(path)
end
return open(filename, "w") do io
        write(io, pc; kwargs...)
end
end
"""
write(prefix::AbstractString, pc::Dict; always::Bool = false)
Write each module's precompiles to a separate file. If `always` is
true, the generated function will always run the precompile statements
when called, otherwise the statements will only be called during
package precompilation.
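
## Example

A sketch with an illustrative package name and output directory:

```julia
pc = Dict("MyPkg" => ["Base.precompile(Tuple{typeof(MyPkg.f),Int})"])
write("/tmp/precompiles", pc)  # creates /tmp/precompiles/precompile_MyPkg.jl
```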
"""
function write(prefix::AbstractString, pc::Dict; always::Bool = false)
if !isdir(prefix)
mkpath(prefix)
end
for (k, v) in pc
open(joinpath(prefix, "precompile_$k.jl"), "w") do io
println(io, warnpcfail_str, '\n')
if any(str->occursin("__lookup", str), v)
println(io, lookup_kwbody_str)
end
println(io, "function _precompile_()")
!always && println(io, " ccall(:jl_generating_output, Cint, ()) == 1 || return nothing")
for ln in sort(v)
println(io, " ", ln)
end
println(io, "end")
end
end
end
using Test
using SnoopCompile
include("snoop_inference.jl")
include("snoop_llvm.jl")
include("snoop_invalidations.jl")
# otherwise-untested demos
retflat = SnoopCompile.flatten_demo()
@test !isempty(retflat.children)
using SnoopCompile
using SnoopCompile.SnoopCompileCore
using SnoopCompile.FlameGraphs
using Test
using InteractiveUtils
using Random
using Profile
using MethodAnalysis
using Core: MethodInstance
using Pkg
# using PyPlot: PyPlot, plt # uncomment to test visualizations
using SnoopCompile.FlameGraphs.AbstractTrees # For FlameGraphs tests
# Constant-prop works differently on different Julia versions.
# This utility lets you strip frames that const-prop a number.
hasconstpropnumber(f::SnoopCompileCore.InferenceTiming) = hasconstpropnumber(f.mi_info)
hasconstpropnumber(mi_info::Core.Compiler.Timings.InferenceFrameInfo) = any(t -> isa(t, Core.Const) && isa(t.val, Number), mi_info.slottypes)
@testset "@snoop_inference" begin
# WARMUP (to compile all the small, reachable methods)
M = Module()
@eval M begin # Example with some functions that include type instability
i(x) = x+5
h(a::Array) = i(a[1]::Integer) + 2
g(y::Integer) = h(Any[y])
end
M.g(2) # Warmup all deeply reachable functions
M.g(true)
# Redefine the module, so the snoop will only show these functions:
M = Module()
@eval M begin # Example with some functions that include type instability
i(x) = x+5
h(a::Array) = i(a[1]::Integer) + 2
g(y::Integer) = h(Any[y])
end
tinf = @snoop_inference begin
M.g(2)
M.g(true)
end
@test SnoopCompile.isROOT(Core.MethodInstance(tinf))
@test SnoopCompile.isROOT(Method(tinf))
child = tinf.children[1]
@test convert(SnoopCompile.InferenceTiming, child).inclusive_time > 0
@test SnoopCompile.getroot(child.children[1]) == child
@test SnoopCompile.getroot(child.children[1].children[1]) == child
@test isempty(staleinstances(tinf))
frames = filter(!hasconstpropnumber, flatten(tinf))
@test length(frames) == 7 # ROOT, g(::Int), g(::Bool), h(...), i(::Integer), i(::Int), i(::Bool)
@test issorted(frames; by=exclusive)
names = [Method(frame).name for frame in frames]
@test sort(names) == [:ROOT, :g, :g, :h, :i, :i, :i]
mg = which(M.g, (Int,))
tinfsg = collect_for(mg, tinf)
@test length(tinfsg) == 2
@test all(node -> Method(node) == mg, tinfsg)
mig = MethodInstance(first(tinfsg))
tinfsg1 = collect_for(mig, tinf)
@test length(tinfsg1) == 1
@test MethodInstance(tinfsg1[1]) == mig
@test all(node -> Method(node) == mg, tinfsg)
longest_frame_time = exclusive(frames[end])
@test length(filter(!hasconstpropnumber, flatten(tinf, tmin=longest_frame_time))) == 1
frames_unsorted = filter(!hasconstpropnumber, flatten(tinf; sortby=nothing))
ifi = frames_unsorted[1].mi_info
@test SnoopCompile.isROOT(Core.MethodInstance(ifi))
@test SnoopCompile.isROOT(Method(ifi))
names = [Method(frame).name for frame in frames_unsorted]
argtypes = [MethodInstance(frame).specTypes.parameters[2] for frame in frames_unsorted[2:end]]
@test names == [:ROOT, :g, :h, :i, :i, :g, :i]
@test argtypes == [ Int, Vector{Any}, Integer, Int, Bool, Bool]
timesm = accumulate_by_source(frames)
@test length(timesm) == 4
names = [m.name for (time, m) in timesm]
@test sort(names) == [:ROOT, :g, :h, :i]
longest_method_time = timesm[end][1]
@test length(accumulate_by_source(frames; tmin=longest_method_time)) == 1
@test SnoopCompile.isROOT(Core.MethodInstance(tinf))
@test SnoopCompile.isROOT(Method(tinf))
iframes = flatten(tinf; sortby=inclusive)
@test issorted(iframes; by=inclusive)
t = map(inclusive, frames_unsorted)
@test t[2] >= t[3] >= t[4]
ifi = frames_unsorted[2].mi_info
@test Core.MethodInstance(ifi).def == Method(ifi) == which(M.g, (Int,))
names = [Method(frame).name for frame in frames_unsorted]
argtypes = [MethodInstance(frame).specTypes.parameters[2] for frame in frames_unsorted[2:end]]
@test names == [:ROOT, :g, :h, :i, :i, :g, :i]
@test argtypes == [ Int, Vector{Any}, Integer, Int, Bool, Bool]
# Also check module-level thunks
@eval module M # Example with some functions that include type instability
i(x) = x+5
h(a::Array) = i(a[1]::Integer) + 2
g(y::Integer) = h(Any[y])
end
tinfmod = @snoop_inference begin
@eval @testset "Outer" begin
@testset "Inner" begin
for i = 1:2 M.g(2) end
end
end
end
frames = flatten(tinfmod)
timesm = accumulate_by_source(frames)
timesmod = filter(pr -> isa(pr[2], Core.MethodInstance), timesm)
@test length(timesmod) == 1
end
# For the higher-order function attribution test, we need to prevent `f2`
# from being passed via closure, so we define these globally.
fdouble(x) = 2x
@testset "inference_triggers" begin
myplus(x, y) = x + y # freshly redefined even if tests are re-run
function f(x)
x < 0.25 ? 1 :
x < 0.5 ? 1.0 :
x < 0.75 ? 0x01 : Float16(1)
end
g(c) = myplus(f(c[1]), f(c[2]))
tinf = @snoop_inference g([0.7, 0.8])
@test isempty(staleinstances(tinf))
itrigs = inference_triggers(tinf; exclude_toplevel=false)
@test length(itrigs) == 2
@test suggest(itrigs[1]).categories == [SnoopCompile.FromTestDirect]
s = suggest(itrigs[2])
@test SnoopCompile.FromTestCallee ∈ s.categories
@test SnoopCompile.UnspecCall ∈ s.categories
@test occursin("myplus", string(MethodInstance(itrigs[2].node).def.name))
itrigs = inference_triggers(tinf)
itrig = only(itrigs)
@test filtermod(@__MODULE__, itrigs) == [itrig]
@test isempty(filtermod(Base, itrigs))
io = IOBuffer()
show(io, itrig)
str = String(take!(io))
@test occursin(r".*myplus.*\(::UInt8, ::Float16\)", str)
@test occursin("from g", str)
@test occursin(r"with specialization .*::Vector\{Float64\}", str)
mis = callerinstance.(itrigs)
@test only(mis).def == which(g, (Any,))
@test callingframe(itrig).callerframes[1].func === :eval
@test_throws ArgumentError("it seems you've supplied a child node, but backtraces are collected only at the entrance to inference") inference_triggers(tinf.children[1])
@test stacktrace(itrig) isa Vector{StackTraces.StackFrame}
itrig0 = itrig
counter = 0
while !isempty(itrig.callerframes) && counter < 1000 # defensively prevent infinite loop
itrig = callingframe(itrig)
counter += 1
end
@test counter < 1000
show(io, itrig)
str = String(take!(io))
@test occursin("called from toplevel", str)
@test itrig != itrig0
@test InferenceTrigger(itrig) == itrig0
# Tree generation
itree = trigger_tree(itrigs)
@test length(itree.children) == 1
@test isempty(itree.children[1].children)
print_tree(io, itree)
@test occursin(r"myplus.*UInt8.*Float16", String(take!(io)))
# Where the caller is inlined into something else
callee(x) = 2x
@inline caller(c) = callee(c[1])
callercaller(cc) = caller(cc[1])
callercaller([Any[1]])
cc = [Any[0x01]]
tinf = @snoop_inference callercaller(cc)
@test isempty(staleinstances(tinf))
itrigs = inference_triggers(tinf)
itrig = only(itrigs)
show(io, itrig)
str = String(take!(io))
@test occursin(r"to call .*callee.*\(::UInt8\)", str)
@test occursin("from caller", str)
@test occursin(r"inlined into .*callercaller.*\(::Vector{Vector{Any}}\)", str)
s = suggest(itrig)
@test !isignorable(s)
print(io, s)
@test occursin(r"snoop_inference\.jl:\d+: non-inferrable or unspecialized call.*::UInt8", String(take!(io)))
mysqrt(x) = sqrt(x)
c = Any[1, 1.0, 0x01, Float16(1)]
tinf = @snoop_inference map(mysqrt, c)
@test isempty(staleinstances(tinf))
itrigs = inference_triggers(tinf)
itree = trigger_tree(itrigs)
io = IOBuffer()
print_tree(io, itree)
@test occursin(r"mysqrt.*Float64", String(take!(io)))
print(io, itree)
@test String(take!(io)) == "TriggerNode for root with 2 direct children"
@test length(flatten(itree)) > length(c)
    @test length(suggest(itree).children) == 2
loctrigs = accumulate_by_source(itrigs)
show(io, loctrigs)
@test any(str->occursin("4 callees from 2 callers", str), split(String(take!(io)), '\n'))
@test filtermod(Base, loctrigs) == loctrigs
@test isempty(filtermod(@__MODULE__, loctrigs))
# This actually tests the suggest framework a bit, but...
for loctrig in loctrigs
summary(io, loctrig)
end
str = String(take!(io))
@test occursin("1 callees from 1 callers, consider improving inference", str)
@test occursin("4 callees from 2 callers, consider despecializing the callee", str)
@test occursin("non-inferrable or unspecialized call", str)
@test occursin("partial type call", str)
mtrigs = accumulate_by_source(Method, itrigs)
for mtrig in mtrigs
show(io, mtrig)
end
str = String(take!(io))
@test occursin(r"map\(f, A::AbstractArray\) (in|@) Base", str)
@test occursin("2 callees from 1 caller", str)
for mtrig in mtrigs
summary(io, mtrig)
end
str = String(take!(io))
@test occursin(r"map.*had 1 specialization", str)
@test occursin(r"calling Base\.Generator", str)
@test occursin("calling mysqrt (3 instances)", str)
modtrigs = SnoopCompile.parcel(mtrigs)
@test only(modtrigs).first === Base
@test filtermod(Base, mtrigs) == mtrigs
@test isempty(filtermod(@__MODULE__, mtrigs))
# Multiple callees on the same line
fline(x) = 2*x[]
gline(x) = x[]
fg(x) = fline(gline(x[]))
cc = Ref{Any}(Ref{Base.RefValue}(Ref(3)))
tinf = @snoop_inference fg(cc)
itrigs = inference_triggers(tinf)
loctrigs = accumulate_by_source(itrigs)
@test length(loctrigs) == 2
    @test loctrigs[1].tag == loctrigs[2].tag
# Higher order function attribution
@noinline function mymap!(f, dst, src)
for i in eachindex(dst, src)
dst[i] = f(src[i])
end
return dst
end
@noinline mymap(f::F, src) where F = mymap!(f, Vector{Any}(undef, length(src)), src)
callmymap(x) = mymap(fdouble, x)
callmymap(Any[1, 2]) # compile all for one set of types
x = Any[1.0, 2.0] # fdouble not yet inferred for Float64
tinf = @snoop_inference callmymap(x)
@test isempty(staleinstances(tinf))
itrigs = inference_triggers(tinf)
itrig = only(itrigs)
@test occursin(r"with specialization .*mymap!.*\(::.*fdouble.*, ::Vector{Any}, ::Vector{Any}\)", string(itrig))
@test occursin(r"with specialization .*mymap.*\(::.*fdouble.*, ::Vector{Any}\)", string(callingframe(itrig)))
@test occursin(r"with specialization .*callmymap.*\(::Vector{Any}\)", string(skiphigherorder(itrig)))
# Ensure we don't skip non-higher order calls
callfdouble(c) = fdouble(c[1])
callfdouble(Any[1])
c = Any[Float16(1)]
tinf = @snoop_inference callfdouble(c)
@test isempty(staleinstances(tinf))
itrigs = inference_triggers(tinf)
itrig = only(itrigs)
@test skiphigherorder(itrig) == itrig
# With a closure
M = Module()
@eval M begin
function f(c, name)
stringx(x) = string(x) * name
stringx(x::Int) = string(x) * name
stringx(x::Float64) = string(x) * name
stringx(x::Bool) = string(x) * name
n = 0
for x in c
n += length(stringx(x))
end
return n
end
end
c = Any["hey", 7]
tinf = @snoop_inference M.f(c, " there")
itrigs = inference_triggers(tinf)
@test length(itrigs) > 1
mtrigs = accumulate_by_source(Method, itrigs)
summary(io, only(mtrigs))
str = String(take!(io))
@test occursin(r"closure.*stringx.*\{String\} at", str)
end
@testset "suggest" begin
categories(tinf) = suggest(only(inference_triggers(tinf))).categories
io = IOBuffer()
# UnspecCall and relation to Test
M = Module()
@eval M begin
callee(x) = 2x
caller(c) = callee(c[1])
end
tinf = @snoop_inference M.caller(Any[1])
itrigs = inference_triggers(tinf; exclude_toplevel=false)
@test length(itrigs) == 2
s = suggest(itrigs[1])
@test s.categories == [SnoopCompile.FromTestDirect]
show(io, s)
@test occursin(r"called by Test.*ignore", String(take!(io)))
s = suggest(itrigs[2])
@test s.categories == [SnoopCompile.FromTestCallee, SnoopCompile.UnspecCall]
show(io, s)
@test occursin(r"non-inferrable or unspecialized call.*annotate caller\(c::Vector\{Any\}\) at snoop_inference.*callee\(::Int", String(take!(io)))
# Same test, but check the test harness & inlineable detection
M = Module()
@eval M begin
callee(x) = 2x
@inline caller(c) = callee(c[1])
end
cats = categories(@snoop_inference M.caller(Any[1]))
@test cats == [SnoopCompile.FromTestCallee, SnoopCompile.CallerInlineable, SnoopCompile.UnspecCall]
SnoopCompile.show_suggest(io, cats, nothing, nothing)
@test occursin("non-inferrable or unspecialized call", String(take!(io)))
M = Module()
@eval M begin
struct Typ end
struct Container{N,T} x::T end
Container{N}(x::T) where {N,T} = Container{N,T}(x)
typeconstruct(c) = Container{3}(c[])
end
c = Ref{Any}(M.Typ())
cats = categories(@snoop_inference M.typeconstruct(c))
@test cats == [SnoopCompile.FromTestCallee, SnoopCompile.UnspecType]
SnoopCompile.show_suggest(io, cats, nothing, nothing)
# println(String(take!(io)))
@test occursin("partial type call", String(take!(io)))
# Invoke
M = Module()
@eval M begin
@noinline callf(@nospecialize(f::Function), x) = f(x)
g(x) = callf(sqrt, x)
end
tinf = @snoop_inference M.g(3)
itrigs = inference_triggers(tinf)
if !isempty(itrigs)
cats = categories(tinf)
@test cats == [SnoopCompile.FromTestCallee, SnoopCompile.CallerInlineable, SnoopCompile.Invoke]
SnoopCompile.show_suggest(io, cats, nothing, nothing)
@test occursin(r"invoked callee.*may fail to precompile", String(take!(io)))
else
@warn "Skipped Invoke test due to improvements in inference"
end
# FromInvokeLatest
M = Module()
@eval M begin
f(::Int) = 1
g(x) = Base.invokelatest(f, x)
end
cats = categories(@snoop_inference M.g(3))
@test SnoopCompile.FromInvokeLatest ∈ cats
@test isignorable(cats[1])
# CalleeVariable
mysin(x) = 1
mycos(x) = 2
docall(ref, x) = ref[](x)
function callvar(x)
fref = Ref{Any}(rand() < 0.5 ? mysin : mycos)
return docall(fref, x)
end
cats = categories(@snoop_inference callvar(0.2))
@test cats == [SnoopCompile.CalleeVariable]
SnoopCompile.show_suggest(io, cats, nothing, nothing)
@test occursin(r"variable callee.*avoid assigning function to variable", String(take!(io)))
# CalleeVariable as an Argument
M = Module()
@eval M begin
mysin(x) = 1
mycos(x) = 2
mytan(x) = 3
mycsc(x) = 4
getfunc(::Int) = mysin
getfunc(::Float64) = mycos
getfunc(::Char) = mytan
getfunc(::String) = mycsc
docall(@nospecialize(f), x) = f(x)
function callvar(ref, f=nothing)
x = ref[]
if f === nothing
f = getfunc(x)
end
return docall(f, x)
end
end
tinf = @snoop_inference M.callvar(Ref{Any}(0.2))
cats = suggest(inference_triggers(tinf)[end]).categories
@test cats == [SnoopCompile.CalleeVariable]
# CalleeVariable & varargs
M = Module()
@eval M begin
f1va(a...) = 1
f2va(a...) = 2
docallva(ref, x) = ref[](x...)
function callsomething(args)
fref = Ref{Any}(rand() < 0.5 ? f1va : f2va)
docallva(fref, args)
end
end
cats = categories(@snoop_inference M.callsomething(Any['a', 2]))
@test cats == [SnoopCompile.FromTestCallee, SnoopCompile.CallerInlineable, SnoopCompile.CalleeVariable]
M = Module()
@eval M begin
f1va(a...) = 1
f2va(a...) = 2
@noinline docallva(ref, x) = ref[](x...)
function callsomething(args)
fref = Ref{Any}(rand() < 0.5 ? f1va : f2va)
docallva(fref, args)
end
end
cats = categories(@snoop_inference M.callsomething(Any['a', 2]))
@test cats == [SnoopCompile.CalleeVariable]
# CallerVararg
M = Module()
@eval M begin
f1(x) = 2x
c1(x...) = f1(x[2])
end
c = Any['c', 1]
cats = categories(@snoop_inference M.c1(c...))
@test SnoopCompile.CalleeVararg ∉ cats
@test SnoopCompile.CallerVararg ∈ cats
@test SnoopCompile.UnspecCall ∈ cats
SnoopCompile.show_suggest(io, cats, nothing, nothing)
@test occursin(r"non-inferrable or unspecialized call with vararg caller.*annotate", String(take!(io)))
# CalleeVararg
M = Module()
@eval M begin
f2(x...) = 2*x[2]
c2(x) = f2(x...)
end
cats = categories(@snoop_inference M.c2(c))
@test SnoopCompile.CallerVararg ∉ cats
@test SnoopCompile.CalleeVararg ∈ cats
@test SnoopCompile.UnspecCall ∈ cats
SnoopCompile.show_suggest(io, cats, nothing, nothing)
@test occursin(r"non-inferrable or unspecialized call with vararg callee.*annotate", String(take!(io)))
# InvokeCalleeVararg
M = Module()
@eval M begin
struct AType end
struct BType end
Base.show(io::IO, ::AType) = print(io, "A")
Base.show(io::IO, ::BType) = print(io, "B")
@noinline doprint(ref) = print(IOBuffer(), "a", ref[], 3.2)
end
tinf = @snoop_inference M.doprint(Ref{Union{M.AType,M.BType}}(M.AType()))
if !isempty(inference_triggers(tinf))
cats = categories(tinf)
@test cats == [SnoopCompile.FromTestCallee, SnoopCompile.InvokedCalleeVararg]
SnoopCompile.show_suggest(io, cats, nothing, nothing)
@test occursin(r"invoked callee is varargs.*homogenize", String(take!(io)))
else
@warn "Skipped InvokeCalleeVararg test due to improvements in inference"
end
# Vararg that resolves to a UnionAll
M = Module()
@eval M begin
struct ArrayWrapper{T,N,A,Args} <: AbstractArray{T,N}
data::A
args::Args
end
ArrayWrapper{T}(data, args...) where T = ArrayWrapper{T,ndims(data),typeof(data),typeof(args)}(data, args)
@noinline makewrapper(data::AbstractArray{T}, args) where T = ArrayWrapper{T}(data, args...)
end
# run and redefine for reproducible results
M.makewrapper(rand(2,2), ["a", 'b', 5])
M = Module()
@eval M begin
struct ArrayWrapper{T,N,A,Args} <: AbstractArray{T,N}
data::A
args::Args
end
ArrayWrapper{T}(data, args...) where T = ArrayWrapper{T,ndims(data),typeof(data),typeof(args)}(data, args)
@noinline makewrapper(data::AbstractArray{T}, args) where T = ArrayWrapper{T}(data, args...)
end
tinf = @snoop_inference M.makewrapper(rand(2,2), ["a", 'b', 5])
itrigs = inference_triggers(tinf)
@test length(itrigs) == 2
s = suggest(itrigs[1])
@test s.categories == [SnoopCompile.FromTestCallee, SnoopCompile.UnspecType]
print(io, s)
@test occursin("partial type call", String(take!(io)))
s = suggest(itrigs[2])
@test s.categories == [SnoopCompile.CallerVararg, SnoopCompile.UnspecType]
print(io, s)
@test occursin(r"partial type call with vararg caller.*ignore.*annotate", String(take!(io)))
mtrigs = accumulate_by_source(Method, itrigs)
for mtrig in mtrigs
summary(io, mtrig)
end
str = String(take!(io))
@test occursin("makewrapper(data", str)
@test occursin("ArrayWrapper{Float64", str)
@test occursin("Tuple{String", str)
# ErrorPath
M = Module()
@eval M begin
struct MyType end
struct MyException <: Exception
info::Vector{MyType}
end
MyException(obj::MyType) = MyException([obj])
@noinline function checkstatus(b::Bool, info)
if !b
throw(MyException(info))
end
return nothing
end
end
tinf = @snoop_inference try M.checkstatus(false, M.MyType()) catch end
if !isempty(inference_triggers(tinf))
# Exceptions do not trigger a fresh entry into inference on Julia 1.8+
cats = categories(tinf)
@test cats == [SnoopCompile.FromTestCallee, SnoopCompile.ErrorPath]
SnoopCompile.show_suggest(io, cats, nothing, nothing)
@test occursin(r"error path.*ignore", String(take!(io)))
end
# Core.Box
@test !SnoopCompile.hascorebox(AbstractVecOrMat{T} where T) # test Union handling
M = Module()
@eval M begin
struct MyInt <: Integer end
Base.:(*)(::MyInt, r::Int) = 7*r
function abmult(r::Int, z) # from https://docs.julialang.org/en/v1/manual/performance-tips/#man-performance-captured
if r < 0
r = -r
end
f = x -> x * r
return f(z)
end
end
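# A minimal sketch (an illustrative addition, not part of the original test
# suite) of the standard fix for the `Core.Box` capture exercised above:
# re-binding `r` in a `let` block lets the closure capture a fixed local
# value, so inference no longer needs a boxed variable (see the Julia
# performance tips on captured variables). The name `abmult_fixed` is
# hypothetical.
function abmult_fixed(r::Int, z)
    if r < 0
        r = -r
    end
    f = let r = r        # re-bind so the closure captures a constant
        x -> x * r
    end
    return f(z)
end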
z = M.MyInt()
tinf = @snoop_inference M.abmult(3, z)
itrigs = inference_triggers(tinf)
itrig = only(itrigs)
s = suggest(itrig)
@test SnoopCompile.HasCoreBox ∈ s.categories
print(io, s)
@test occursin(r"Core\.Box.*fix this.*http", String(take!(io)))
mtrigs = accumulate_by_source(Method, itrigs)
summary(io, only(mtrigs))
@test occursin(r"Core\.Box.*fix this.*http", String(take!(io)))
# Test one called from toplevel
fromtask() = (while false end; 1)
tinf = @snoop_inference wait(@async fromtask())
@test isempty(staleinstances(tinf))
itrigs = inference_triggers(tinf)
itrig = only(itrigs)
s = suggest(itrig)
@test s.categories == [SnoopCompile.NoCaller]
itree = trigger_tree(itrigs)
print_tree(io, itree)
@test occursin(r"{var\"#fromtask", String(take!(io)))
print(io, s)
occursin(r"unknown caller.*Task", String(take!(io)))
mtrigs = accumulate_by_source(Method, itrigs)
@test isempty(mtrigs)
# Empty
SnoopCompile.show_suggest(io, SnoopCompile.Suggestion[], nothing, nothing)
@test occursin("Unspecialized or unknown for", String(take!(io)))
# Printing says *something* for any set of categories
annots = instances(SnoopCompile.Suggestion)
iter = [1:2 for _ in 1:length(annots)]
cats = SnoopCompile.Suggestion[]
for state in Iterators.product(iter...)
empty!(cats)
for (i, s) in enumerate(state)
if s == 2
push!(cats, annots[i])
end
end
SnoopCompile.show_suggest(io, cats, nothing, nothing)
@test !isempty(String(take!(io)))
end
end
@testset "flamegraph_export" begin
M = Module()
@eval M begin # Take another tinf
i(x) = x+5
h(a::Array) = i(a[1]::Integer) + 2
g(y::Integer) = h(Any[y])
end
tinf = @snoop_inference begin
M.g(2)
end
@test isempty(staleinstances(tinf))
frames = flatten(tinf; sortby=inclusive)
fg = SnoopCompile.flamegraph(tinf)
fgnodes = collect(AbstractTrees.PreOrderDFS(fg))
for tgtname in (:h, :i, :+)
@test mapreduce(|, fgnodes; init=false) do node
node.data.sf.linfo.def.name == tgtname
end
end
# Test that the span covers the whole tree, and check for const-prop
has_constprop = false
for leaf in AbstractTrees.PreOrderDFS(fg)
@test leaf.data.span.start in fg.data.span
@test leaf.data.span.stop in fg.data.span
has_constprop |= leaf.data.status & FlameGraphs.gc_event != 0x0
end
@test has_constprop
frame1, frame2 = frames[1], frames[2]
t1, t2 = inclusive(frame1), inclusive(frame2)
# Ensure there's a tinf gap, and that cutting off the fastest-to-infer won't leave the tree headless
if t1 != t2 && Method(frame1).name !== :g
cutoff_bottom_frame = (t1 + t2) / 2
fg2 = SnoopCompile.flamegraph(tinf, tmin = cutoff_bottom_frame)
@test length(collect(AbstractTrees.PreOrderDFS(fg2))) == (length(collect(AbstractTrees.PreOrderDFS(fg))) - 1)
end
fg1 = flamegraph(tinf.children[1])
@test endswith(string(fg.child.data.sf.func), ".g") && endswith(string(fg1.child.data.sf.func), ".h")
fg2 = flamegraph(tinf.children[2])
@test endswith(string(fg2.child.data.sf.func), ".i")
# Printing
M = Module()
@eval M begin
i(x) = x+5
h(a::Array) = i(a[1]::Integer) + 2
g(y::Integer) = h(Any[y])
end
tinf = @snoop_inference begin
M.g(2)
M.g(true)
end
@test isempty(staleinstances(tinf))
fg = SnoopCompile.flamegraph(tinf)
@test endswith(string(fg.child.data.sf.func), ".g")
counter = Dict{Method,Int}()
visit(M) do item
if isa(item, Core.MethodInstance)
m = item.def
if isa(m, Method)
counter[m] = get(counter, m, 0) + 1
end
return false
end
return true
end
fg = SnoopCompile.flamegraph(tinf; mode=counter)
@test endswith(string(fg.child.data.sf.func), ".g (2)")
# Non-precompilability
M = Module()
@eval M begin
struct MyFloat x::Float64 end
Base.isless(x::MyFloat, y::Float64) = isless(x.x, y)
end
tinf = @snoop_inference begin
z = M.MyFloat(2.0)
z < 3.0
end
fg = SnoopCompile.flamegraph(tinf)
nonpc = false
for leaf in AbstractTrees.PreOrderDFS(fg)
nonpc |= leaf.data.sf.func === Symbol("Base.<") && leaf.data.status & FlameGraphs.runtime_dispatch != 0x0
end
@test nonpc
end
@testset "demos" begin
# Just ensure they run
@test SnoopCompile.itrigs_demo() isa SnoopCompile.InferenceTimingNode
@test SnoopCompile.itrigs_higherorder_demo() isa SnoopCompile.InferenceTimingNode
end
include("testmodules/SnoopBench.jl")
@testset "parcel" begin
a = SnoopBench.A()
tinf = @snoop_inference SnoopBench.f1(a)
@test isempty(staleinstances(tinf))
ttot, prs = SnoopCompile.parcel(tinf)
mod, (tmod, tmis) = only(prs)
@test mod === SnoopBench
t, mi = only(tmis)
@test ttot == tmod == t # since there is only one
@test mi.def.name === :f1
ttot2, prs = SnoopCompile.parcel(tinf; tmin=10.0)
@test isempty(prs)
@test ttot2 == ttot
tinf = @snoop_inference begin
fn = SnoopBench.sv()
rm(fn)
end
ttot, prs = SnoopCompile.parcel(tinf.children[1].children[1])
mod, (tmod, tmis) = only(prs)
io = IOBuffer()
SnoopCompile.write(io, tmis; tmin=0.0)
str = String(take!(io))
@test occursin("bodyfunction", str)
A = [a]
tinf = @snoop_inference SnoopBench.mappushes(identity, A)
@test isempty(staleinstances(tinf))
ttot, prs = SnoopCompile.parcel(tinf)
mod, (tmod, tmis) = only(prs)
@test mod === SnoopBench
@test ttot == tmod # since there is only one
@test length(tmis) == 2
io = IOBuffer()
SnoopCompile.write(io, tmis; tmin=0.0)
str = String(take!(io))
@test occursin(r"typeof\(mappushes\),Any,Vector\{A\}", str)
@test occursin(r"typeof\(mappushes!\),typeof\(identity\),Vector\{Any\},Vector\{A\}", str)
list = Any[1, 1.0, Float16(1.0), a]
tinf = @snoop_inference SnoopBench.mappushes(isequal(Int8(1)), list)
@test isempty(staleinstances(tinf))
ttot, prs = SnoopCompile.parcel(tinf)
@test length(prs) == 2
_, (tmodBase, tmis) = prs[findfirst(pr->pr.first === Base, prs)]
tw, nw = SnoopCompile.write(io, tmis; tmin=0.0)
@test 0.0 <= tw <= tmodBase * (1+10*eps())
@test 0 <= nw <= length(tmis)
str = String(take!(io))
@test !occursin(r"Base.Fix2\{typeof\(isequal\).*SnoopBench.A\}", str)
@test length(split(chomp(str), '\n')) == nw
_, (tmodBench, tmis) = prs[findfirst(pr->pr.first === SnoopBench, prs)]
@test sum(inclusive, tinf.children[1:end-1]) <= tmodBench + tmodBase # last child is not precompilable
tw, nw = SnoopCompile.write(io, tmis; tmin=0.0)
@test nw == 2
str = String(take!(io))
@test occursin(r"typeof\(mappushes\),Any,Vector\{Any\}", str)
@test occursin(r"typeof\(mappushes!\),Base.Fix2\{typeof\(isequal\).*\},Vector\{Any\},Vector\{Any\}", str)
td = joinpath(tempdir(), randstring(8))
SnoopCompile.write(td, prs; tmin=0.0, ioreport=io)
str = String(take!(io))
@test occursin(r"Base: precompiled [\d\.]+ out of [\d\.]+", str)
@test occursin(r"SnoopBench: precompiled [\d\.]+ out of [\d\.]+", str)
file_base = joinpath(td, "precompile_Base.jl")
@test isfile(file_base)
@test occursin("ccall(:jl_generating_output", read(file_base, String))
rm(td, recursive=true, force=true)
SnoopCompile.write(td, prs; ioreport=io, header=false)
str = String(take!(io)) # just to clear it in case we use it again
@test !occursin("ccall(:jl_generating_output", read(file_base, String))
rm(td, recursive=true, force=true)
# issue #197
f197(::Vector{T}) where T<:Integer = zero(T)
g197(@nospecialize(x::Vector{<:Number})) = f197(x)
g197([1,2,3])
@test SnoopCompile.get_reprs([(rand(), mi) for mi in methodinstances(f197)])[1] isa AbstractSet
end
@testset "Specialization" begin
Ts = subtypes(Any)
tinf_unspec = @snoop_inference SnoopBench.mappushes(SnoopBench.spell_unspec, Ts)
tf_unspec = flatten(tinf_unspec)
tinf_spec = @snoop_inference SnoopBench.mappushes(SnoopBench.spell_spec, Ts)
tf_spec = flatten(tinf_spec)
@test length(tf_unspec) < length(Ts) ÷ 5
@test any(tmi -> occursin("spell_unspec(::Any)", repr(MethodInstance(tmi))), tf_unspec)
@test length(tf_spec) >= length(Ts)
@test !any(tmi -> occursin("spell_spec(::Any)", repr(MethodInstance(tmi))), tf_unspec)
@test !any(tmi -> occursin("spell_spec(::Type)", repr(MethodInstance(tmi))), tf_unspec)
# fig, axs = plt.subplots(1, 2)
nruns = 10^3
SnoopBench.mappushes(SnoopBench.spell_spec, Ts)
@profile for i = 1:nruns
SnoopBench.mappushes(SnoopBench.spell_spec, Ts)
end
rit = runtime_inferencetime(tinf_spec)
@test !any(rit) do (ml, _)
endswith(string(ml.file), ".c") # attribute all costs to Julia code, not C code
end
m = @which SnoopBench.spell_spec(first(Ts))
dspec = rit[findfirst(pr -> pr.first == m, rit)].second
@test dspec.tinf > dspec.trun # more time is spent on inference than on runtime
@test dspec.nspec >= length(Ts)
# Check that much of the time in `mappushes!` is spent on runtime dispatch
mp = @which SnoopBench.mappushes!(SnoopBench.spell_spec, [], first(Ts))
dmp = rit[findfirst(pr -> pr.first == mp, rit)].second
@test dmp.trtd >= 0.5*dmp.trun
# pgdsgui(axs[1], rit; bystr="Inclusive", consts=true, interactive=false)
Profile.clear()
SnoopBench.mappushes(SnoopBench.spell_unspec, Ts)
@profile for i = 1:nruns
SnoopBench.mappushes(SnoopBench.spell_unspec, Ts)
end
rit = runtime_inferencetime(tinf_unspec)
m = @which SnoopBench.spell_unspec(first(Ts))
dunspec = rit[findfirst(pr -> pr.first == m, rit)].second # trunspec, trdtunspec, tiunspec, nunspec
@test dunspec.tinf < dspec.tinf/10
@test dunspec.trun < 10*dspec.trun
@test dunspec.nspec == 1
# Test that no runtime dispatch occurs in mappushes!
dmp = rit[findfirst(pr -> pr.first == mp, rit)].second
@test dmp.trtd == 0
# pgdsgui(axs[2], rit; bystr="Inclusive", consts=true, interactive=false)
end
@testset "Stale" begin
cproj = Base.active_project()
cd(joinpath("testmodules", "Stale")) do
Pkg.activate(pwd())
Pkg.precompile()
end
invalidations = @snoop_invalidations begin
using StaleA, StaleC
using StaleB
end
smis = filter(SnoopCompile.hasstaleinstance, methodinstances(StaleA))
@test length(smis) == 2
stalenames = [mi.def.name for mi in smis]
@test :build_stale ∈ stalenames
@test :use_stale ∈ stalenames
trees = invalidation_trees(invalidations)
tree = length(trees) == 1 ? only(trees) : trees[findfirst(tree -> !isempty(tree.backedges), trees)]
@test tree.method == which(StaleA.stale, (String,)) # defined in StaleC
@test all(be -> Core.MethodInstance(be).def == which(StaleA.stale, (Any,)), tree.backedges)
root = only(filter(tree.backedges) do be
Core.MethodInstance(be).specTypes.parameters[end] === String
end)
@test convert(Core.MethodInstance, root.children[1]).def == which(StaleB.useA, ())
m2 = which(StaleB.useA2, ())
if any(item -> isa(item, Core.MethodInstance) && item.def == m2, invalidations) # requires julia#49449
@test convert(Core.MethodInstance, root.children[1].children[1]).def == m2
end
tinf = @snoop_inference begin
StaleB.useA()
StaleC.call_buildstale("hi")
end
@test isempty(SnoopCompile.StaleTree(first(smis).def, :noreason).backedges) # constructor test
healed = true
# If we don't discount ones left in an invalidated state,
# we get mt_backedges with a MethodInstance middle entry too
strees2 = precompile_blockers(invalidations, tinf; min_world_exclude=0)
root, hits = only(only(strees2).backedges)
mi_stale = only(filter(mi -> endswith(String(mi.def.file), "StaleA.jl"), methodinstances(StaleA.stale, (String,))))
@test Core.MethodInstance(root) == mi_stale
@test Core.MethodInstance(only(hits)) == methodinstance(StaleB.useA, ())
# What happens when we can't find it in the tree?
if any(isequal("verify_methods"), invalidations)
# The 1.9+ format
invscopy = copy(invalidations)
idx = findlast(==("verify_methods"), invscopy)
invscopy[idx+1] = 22
redirect_stderr(devnull) do
broken_trees = invalidation_trees(invscopy)
@test isempty(precompile_blockers(broken_trees, tinf))
end
else
# The older format
idx = findfirst(isequal("jl_method_table_insert"), invalidations)
redirect_stdout(devnull) do
broken_trees = invalidation_trees(invalidations[idx+1:end])
@test isempty(precompile_blockers(broken_trees, tinf))
end
end
# IO
io = IOBuffer()
print(io, trees)
@test occursin(r"stale\(x::String\) (in|@) StaleC", String(take!(io)))
if !healed
print(io, strees)
str = String(take!(io))
@test occursin(r"inserting stale.* in StaleC.*invalidated:", str)
@test occursin(r"blocked.*InferenceTimingNode: .*/.* on StaleA.use_stale", str)
@test endswith(str, "\n]")
print(IOContext(io, :compact=>true), strees)
str = String(take!(io))
@test endswith(str, ";]")
SnoopCompile.printdata(io, [hits; hits])
@test occursin("inclusive time for 2 nodes", String(take!(io)))
end
print(io, only(strees2))
str = String(take!(io))
@test occursin(r"inserting stale\(.* (in|@) StaleC.*invalidated:", str)
@test !occursin("mt_backedges", str)
@test occursin(r"blocked.*InferenceTimingNode: .*/.* on StaleB.useA", str)
Pkg.activate(cproj)
end
using JET, Cthulhu
@testset "JET integration" begin
function mysum(c) # vendor a simple version of `sum`
isempty(c) && return zero(eltype(c))
s = first(c)
for x in Iterators.drop(c, 1)
s += x
end
return s
end
call_mysum(cc) = mysum(cc[1])
cc = Any[Any[1,2,3]]
tinf = @snoop_inference call_mysum(cc)
rpt = @report_call call_mysum(cc)
@test isempty(JET.get_reports(rpt))
itrigs = inference_triggers(tinf)
irpts = report_callees(itrigs)
@test only(irpts).first == last(itrigs)
@test !isempty(JET.get_reports(only(irpts).second))
@test isempty(JET.get_reports(report_caller(itrigs[end])))
end
# SnoopCompile, https://github.com/timholy/SnoopCompile.jl.git (MIT license)

using SnoopCompile, InteractiveUtils, MethodAnalysis, Pkg, Test
import PrettyTables # so that the report_invalidations.jl file is loaded
module SnooprTests
f(x::Int) = 1
f(x::Bool) = 2
applyf(container) = f(container[1])
callapplyf(container) = applyf(container)
# "multi-caller". Mimics invalidations triggered by defining ==(::SomeType, ::Any)
mc(x, y) = false
mc(x::Int, y::Int) = x === y
mc(x::Symbol, y::Symbol) = x === y
function mcc(container, y)
x = container[1]
return mc(x, y)
end
function mcc(container, y, extra)
x = container[1]
return mc(x, y) + extra
end
mccc1(container, y) = mcc(container, y)
mccc2(container, y) = mcc(container, y, 10)
struct MyInt <: Integer
x::Int
end
end
# For recursive filtermod
module Inner
op(a, b) = a + b
end
module Outer
using ..Inner
function runop(list)
acc = 0
for item in list
acc = Inner.op(acc, item)
end
return acc
end
end
@testset "@snoop_invalidations" begin
prefix = "$(@__MODULE__).SnooprTests."
c = Any[1]
@test SnooprTests.callapplyf(c) == 1
mi1 = methodinstance(SnooprTests.applyf, (Vector{Any},))
mi2 = methodinstance(SnooprTests.callapplyf, (Vector{Any},))
@test length(uinvalidated([mi1])) == 1 # issue #327
@test length(uinvalidated([mi1, "verify_methods", mi2])) == 1
@test length(uinvalidated([mi1, "invalidate_mt_cache"])) == 0
invs = @snoop_invalidations SnooprTests.f(::AbstractFloat) = 3
@test !isempty(invs)
umis = uinvalidated(invs)
@test !isempty(umis)
trees = invalidation_trees(invs)
methinvs = only(trees)
m = which(SnooprTests.f, (AbstractFloat,))
@test methinvs.method == m
@test methinvs.reason === :inserting
sig, root = only(methinvs.mt_backedges)
@test sig === Tuple{typeof(SnooprTests.f), Any}
@test root.mi == mi1
@test SnoopCompile.getroot(root) === root
@test root.depth == 0
child = only(root.children)
@test child.mi == mi2
@test SnoopCompile.getroot(child) === root
@test child.depth == 1
@test isempty(child.children)
@test isempty(methinvs.backedges)
io = IOBuffer()
print(io, methinvs)
str = String(take!(io))
@test startswith(str, "inserting f(::AbstractFloat)")
@test occursin("mt_backedges: 1: signature", str)
@test occursin("triggered MethodInstance for $(prefix)applyf(::$(Vector{Any})) (1 children)", str)
cf = Any[1.0f0]
@test SnooprTests.callapplyf(cf) == 3
mi1 = methodinstance(SnooprTests.applyf, (Vector{Any},))
mi2 = methodinstance(SnooprTests.callapplyf, (Vector{Any},))
@test mi1.backedges == [mi2]
mi3 = methodinstance(SnooprTests.f, (AbstractFloat,))
invs = @snoop_invalidations SnooprTests.f(::Float32) = 4
@test !isempty(invs)
trees = invalidation_trees(invs)
methinvs = only(trees)
m = which(SnooprTests.f, (Float32,))
# These next are identical to the above
@test methinvs.method == m
@test methinvs.reason === :inserting
have_backedges = !isempty(methinvs.backedges)
if have_backedges
root = only(methinvs.backedges)
@test root.mi == mi3
@test SnoopCompile.getroot(root) === root
@test root.depth == 0
child = only(root.children)
@test child.mi == mi1
@test SnoopCompile.getroot(child) === root
@test child.depth == 1
end
if isempty(methinvs.backedges) || isempty(child.children)
# the mt_backedges got invalidated first
sig, root = only(methinvs.mt_backedges)
@test sig === Tuple{typeof(Main.SnooprTests.f), Any}
@test root.mi == mi1
cchild = only(root.children)
targetdepth = 1
else
cchild = only(child.children)
targetdepth = 2
end
@test cchild.mi == mi2
@test SnoopCompile.getroot(cchild) === root
@test cchild.depth == targetdepth
@test isempty(cchild.children)
@test any(nd->nd.mi == mi1, root)
@test !any(nd->nd.mi == mi3, child)
print(io, methinvs)
str = String(take!(io))
@test startswith(str, "inserting f(::Float32)")
if !isempty(methinvs.backedges)
@test occursin("backedges: 1: superseding f(::AbstractFloat)", str)
@test occursin("with MethodInstance for $(prefix)f(::AbstractFloat) ($targetdepth children)", str)
else
@test occursin("signature Tuple{typeof($(prefix)f), Any} triggered", str)
@test occursin("for $(prefix)applyf(::Vector{Any}) ($targetdepth children)", str)
end
show(io, root; minchildren=1)
str = String(take!(io))
lines = split(chomp(str), '\n')
@test length(lines) == 1+targetdepth
if targetdepth == 2
@test lines[1] == "MethodInstance for $(prefix)f(::AbstractFloat) (2 children)"
@test lines[2] == " MethodInstance for $(prefix)applyf(::$(Vector{Any})) (1 children)"
else
@test lines[1] == "MethodInstance for $(prefix)applyf(::$(Vector{Any})) (1 children)"
end
show(io, root; minchildren=2)
str = String(take!(io))
lines = split(chomp(str), '\n')
@test length(lines) == 2
@test lines[1] == (targetdepth == 2 ? "MethodInstance for $(prefix)f(::AbstractFloat) (2 children)" :
"MethodInstance for $(prefix)applyf(::$(Vector{Any})) (1 children)")
@test lines[2] == "⋮"
if have_backedges
ftrees = filtermod(@__MODULE__, trees)
ftree = only(ftrees)
@test ftree.backedges == methinvs.backedges
@test isempty(ftree.mt_backedges)
else
ftrees = filtermod(SnooprTests, trees)
@test ftrees == trees
end
cai = Any[1]
cas = Any[:sym]
@test SnooprTests.mccc1(cai, 1)
@test !SnooprTests.mccc1(cai, 2)
@test !SnooprTests.mccc1(cas, 1)
@test SnooprTests.mccc2(cai, 1) == 11
@test SnooprTests.mccc2(cai, 2) == 10
@test SnooprTests.mccc2(cas, 1) == 10
trees = invalidation_trees(@snoop_invalidations SnooprTests.mc(x::AbstractFloat, y::Int) = x == y)
root = only(trees).backedges[1]
@test length(root.children) == 2
m = which(SnooprTests.mccc1, (Any, Any))
ft = findcaller(m, trees)
fnode = only(ft.backedges)
while !isempty(fnode.children)
fnode = only(fnode.children)
end
@test fnode.mi.def === m
# Method deletion
m = which(SnooprTests.f, (Bool,))
invs = @snoop_invalidations Base.delete_method(m)
trees = invalidation_trees(invs)
tree = only(trees)
@test tree.reason === :deleting
@test tree.method == m
# Method overwriting
invs = @snoop_invalidations begin
@eval Module() begin
Base.@irrational twoπ 6.2831853071795864769 2*big(π)
Base.@irrational twoπ 6.2831853071795864769 2*big(π)
end
end
# Tabulate invalidations:
io = IOBuffer()
SnoopCompile.report_invalidations(io;
invalidations = invs,
)
str = String(take!(io))
@test occursin("Invalidations %", str)
trees = invalidation_trees(invs)
@test length(trees) >= 3
io = IOBuffer()
show(io, trees)
str = String(take!(io))
@test occursin(r"deleting Float64\(::Irrational{:twoπ}\).*invalidated:\n.*mt_disable: MethodInstance for Float64\(::Irrational{:twoπ}\)", str)
@test occursin(r"deleting Float32\(::Irrational{:twoπ}\).*invalidated:\n.*mt_disable: MethodInstance for Float32\(::Irrational{:twoπ}\)", str)
@test occursin(r"deleting BigFloat\(::Irrational{:twoπ}; precision\).*invalidated:\n.*backedges: 1: .*with MethodInstance for BigFloat\(::Irrational{:twoπ}\) \(1 children\)", str)
# Exclusion of Core.Compiler methods
invs = @snoop_invalidations (::Type{T})(x::SnooprTests.MyInt) where T<:Integer = T(x.x)
umis1 = uinvalidated(invs)
umis2 = uinvalidated(invs; exclude_corecompiler=false)
# recursive filtermod
list = Union{Int,String}[1,2]
Outer.runop(list)
invs = @snoop_invalidations Inner.op(a, b::String) = a + length(b)
trees = invalidation_trees(invs)
@test length(trees) == 1
@test length(filtermod(Inner, trees)) == 1
@test isempty(filtermod(Outer, trees))
@test length(filtermod(Outer, trees; recursive=true)) == 1
end
@testset "Delayed invalidations" begin
cproj = Base.active_project()
cd(joinpath(@__DIR__, "testmodules", "Invalidation")) do
Pkg.activate(pwd())
Pkg.develop(path="./PkgC")
Pkg.develop(path="./PkgD")
Pkg.precompile()
invalidations = @snoop_invalidations begin
@eval begin
using PkgC
PkgC.nbits(::UInt8) = 8
using PkgD
end
end
tree = only(invalidation_trees(invalidations))
@test tree.reason == :inserting
@test tree.method.file == Symbol(@__FILE__)
@test isempty(tree.backedges)
sig, root = only(tree.mt_backedges)
@test sig.parameters[1] === typeof(PkgC.nbits)
@test sig.parameters[2] === Integer
@test root.mi == first(SnoopCompile.specializations(only(methods(PkgD.call_nbits))))
end
Pkg.activate(cproj)
end
# This needs to come after "Delayed invalidations", as redefining `throw_boundserror` invalidates a lot of stuff
@testset "throw_boundserror" begin
# #268
invs = @snoop_invalidations begin
@eval Module() begin
@noinline Base.throw_boundserror(A, I) = throw(BoundsError(A, I))
end
end
trees = invalidation_trees(invs)
lines = split(sprint(show, trees), '\n')
idx = findfirst(str -> occursin("mt_disable", str), lines)
if idx !== nothing
@test occursin("throw_boundserror", lines[idx])
@test occursin(r"\+\d+ more", lines[idx+1])
end
end
using Test
using SnoopCompile
@testset "@snoop_llvm" begin
@snoop_llvm "func_names.csv" "llvm_timings.yaml" begin
@eval module M
i(x) = x+5
h(a::Array) = i(a[1]::Integer) + 2
g(y::Integer) = h(Any[y])
end;
@eval M.g(3)
end;
times, info = SnoopCompile.read_snoop_llvm("func_names.csv", "llvm_timings.yaml")
@test length(times) == 3 # i(), h(), g()
@test length(info) == 3 # i(), h(), g()
rm("func_names.csv")
rm("llvm_timings.yaml")
end
# dummy module for testing assignment in parcel
module A
module B
module C
struct CT
x::Int
end
end
module D
end
end
f(a) = 1
myjoin(arg::String, args::String...) = arg * " " * join(args, ' ')
end
# dummy module for testing assignment in parcel
module E
struct ET
x::Int
end
end
module FuncKinds
# Many of these are written elaborately to defeat inference
function hasfoo(list) # a test for anonymous functions
hf = false
hf = map(list) do item
if isa(item, AbstractString)
(str->occursin("foo", str))(item)
else
false
end
end
return any(hf)
end
## A test for keyword functions
const randnums = Any[1, 1.0, UInt(14), 2.0f0]
@noinline returnsrandom() = randnums[rand(1:length(randnums))]
@noinline function haskw(x, y; a="hello", b=1, c=returnsrandom())
if isa(b, Integer)
return cos(rand()) + c + x + y
elseif isa(b, AbstractFloat)
s = 0.0
for i = 1:rand(1:10)
s += log(rand())
end
return s + c + x + y
end
return "string"
end
function callhaskw()
ret = Any[]
for i = 1:5
push!(ret, haskw(returnsrandom(), returnsrandom()))
end
push!(ret, haskw(returnsrandom(), returnsrandom(); b = 2.0))
return ret
end
@generated function gen(x::T) where T
Tbigger = T == Float32 ? Float64 : BigFloat
:(convert($Tbigger, x))
end
function gen2(x::Int, y)
if @generated
return y <: Integer ? :(x*y) : :(x+y)
else
return 2x+3y
end
end
function hasinner(x, y)
inner(z) = 2z
s = 0
for i = 1:10
s += inner(returnsrandom())
end
return s
end
# Two kwarg generated functions; one will be called from the no-kw call, the other from a kwcall
@generated function genkw1(; b=2)
:(string(typeof($b)))
end
@generated function genkw2(; b=2)
:(string(typeof($b)))
end
# Function styles from JuliaInterpreter
f1(x::Int) = 1
f1(x) = 2
# where signatures
f2(x::T) where T = -1
f2(x::T) where T<:Integer = T
f2(x::T) where Unsigned<:T<:Real = 0
f2(x::V) where V<:SubArray{T} where T = 2
f2(x::V) where V<:Array{T,N} where {T,N} = 3
f2(x::V) where V<:Base.ReshapedArray{T,N} where T where N = 4
# Varargs
f3(x::Int, y...) = 1
f3(x::Int, y::Symbol...) = 2
f3(x::T, y::U...) where {T<:Integer,U} = U
f3(x::Array{Float64,K}, y::Vararg{Symbol,K}) where K = K
# Default args
f4(x, y::Int=0) = 2
f4(x::UInt, y="hello", z::Int=0) = 3
f4(x::Array{Float64,K}, y::Int=0) where K = K
# Keyword args
f5(x::Int8; y=0) = y
f5(x::Int16; y::Int=0) = 2
f5(x::Int32; y="hello", z::Int=0) = 3
f5(x::Int64;) = 4
f5(x::Array{Float64,K}; y::Int=0) where K = K
# Default and keyword args
f6(x, y="hello"; z::Int=0) = 1
fsort() = @eval sortperm(rand(5); rev=true, by=sin)
fsort2() = @eval sortperm(rand(5); rev=true, by=x->x^2)
fsort3() = sortperm(rand(Float16, 5))
end
module Reachable
module ModuleA
export RchA
struct RchA end
end # ModuleA
module ModuleB
using Reachable.ModuleA
export rchb
rchb(::RchA) = "hello"
f(a) = 1
end # ModuleB
end # Reachable
module Reachable2
struct B end
end
module SnoopBench
# Assignment of parcel to modules
struct A end
f3(::A) = 1
f2(a::A) = f3(a)
f1(a::A) = f2(a)
function savevar(x; fn=joinpath(tempdir(), string(x)*".txt"))
open(fn, "w") do io
println(io, x)
end
return fn
end
sv() = savevar("horse")
# Like map! except it uses push!
# With a single call site
mappushes!(f, dest, src) = (for item in src push!(dest, f(item)) end; return dest)
mappushes(@nospecialize(f), src) = mappushes!(f, [], src)
function mappushes3!(f, dest, src)
# A version with multiple call sites
item1 = src[1]
push!(dest, item1)
item2 = src[2]
push!(dest, item2)
item3 = src[3]
push!(dest, item3)
return dest
end
mappushes3(@nospecialize(f), src) = mappushes3!(f, [], src)
# Useless specialization
function spell_spec(::Type{T}) where T
name = Base.unwrap_unionall(T).name.name
str = ""
for c in string(name)
str *= c
end
return str
end
function spell_unspec(@nospecialize(T))
name = (Base.unwrap_unionall(T)::DataType).name.name
str = ""
for c in string(name)
str *= c
end
return str
end
end
module PkgC
nbits(::Int8) = 8
nbits(::Int16) = 16
end # module PkgC
module PkgD
using PkgC
call_nbits(x::Integer) = PkgC.nbits(x)
map_nbits(list) = map(call_nbits, list)
nbits_list() = map_nbits(Integer[Int8(1), Int16(1)])
nbits_list()
end # module PkgD
module StaleA
stale(x) = rand(1:8)
stale(x::Int) = length(digits(x))
not_stale(x::String) = first(x)
use_stale(c) = stale(c[1]) + not_stale("hello")
build_stale(x) = use_stale(Any[x])
# force precompilation
build_stale(37)
stale('c')
end
module StaleB
# StaleB does not know about StaleC when it is being built.
# However, if StaleC is loaded first, we get `"insert_backedges"`
# invalidations.
using StaleA
# This will be invalidated if StaleC is loaded
useA() = StaleA.stale("hello")
useA2() = useA()
useA3() = useA2()
# force precompilation
useA3()
end
module StaleC
using StaleA
StaleA.stale(x::String) = length(x)
call_buildstale(x) = StaleA.build_stale(x)
call_buildstale("hey")
end # module
using Pkg
rootdir = dirname(@__DIR__)
Pkg.develop([
PackageSpec(path=joinpath(rootdir,"SnoopCompileCore")),
PackageSpec(path=rootdir),
])
using LocalRegistry, Pkg
const __topdir__ = dirname(@__DIR__)
"""
bump_version(version::VersionNumber)
Bump the version number of all packages in the repository to `version`.
"""
function bump_version(version::VersionNumber)
for dir in (__topdir__,
joinpath(__topdir__, "SnoopCompileCore"))
projfile = joinpath(dir, "Project.toml")
lines = readlines(projfile)
idxs = findall(str->startswith(str, "version"), lines)
idx = only(idxs)
lines[idx] = "version = \"$version\""
# Find any dependencies on versions within the repo
idxs = findall(str->str == "[compat]", lines)
if !isempty(idxs)
idxcompat = only(idxs)
idxend = findnext(str->!isempty(str) && str[1] == '[', lines, idxcompat+1) # first line of next TOML section
if idxend === nothing
idxend = length(lines) + 1
end
for i = idxcompat:idxend-1
if startswith(lines[i], "SnoopCompile")
strs = split(lines[i], '=')
lines[i] = strs[1] * "= \"~" * string(version) * '"'
end
end
end
open(projfile, "w") do io
for line in lines
println(io, line)
end
end
end
return nothing
end
function register_all()
for pkg in ("SnoopCompileCore", "SnoopCompile")
pkgsym = Symbol(pkg)
@eval Main using $pkgsym
register(getfield(Main, pkgsym)::Module)
end
end
# NEWS.md
This file contains information about major updates since 1.0.
## Version 3
Version 3 is a greatly slimmed repository, focusing just on those tools that are relevant for modern Julia development.
If you are still using old tools, you should stick with SnoopCompile 2.x.
Major changes:
- `@snoopi_deep` has been renamed `@snoop_inference`
- `@snoopr` has been renamed `@snoop_invalidations`.
- `@snoopl` has been renamed `@snoop_llvm`.
- The old `@snoopc` has been deleted. Its functionality was largely subsumed by `julia --trace-compile`.
- `@snoopi` has been deleted, as `@snoopi_deep` provided more comprehensive information and is available on all modern Julia versions.
- SnoopCompileBot was deleted in favor of [CompileBot](https://github.com/aminya/CompileBot.jl)
- SnoopPrecompile was deleted because it is now [PrecompileTools](https://github.com/JuliaLang/PrecompileTools.jl)
- JET, Cthulhu, PrettyTables, and PyPlot are all integrated via package extensions. As a consequence, users now have to load them manually.
## Version 1.1
This version implements major changes in how `parcel` works on the output of `@snoopi`:
- keyword-supplying methods are now looked up as `Core.kwftype(f)` rather than by
the gensymmed name (`var"#f##kw"` or `getfield(Base, Symbol("#kw##sum"))`)
- On Julia 1.4+, SnoopCompile will write precompile directives for keyword-body functions
(sometimes called the "implementation" function) that look them up by an algorithm that
doesn't depend on the gensymmed name. For example,
```julia
let fbody = try __lookup_kwbody__(which(printstyled, (String,Vararg{String,N} where N,))) catch missing end
if !ismissing(fbody)
precompile(fbody, (Bool,Symbol,typeof(printstyled),String,Vararg{String,N} where N,))
end
end
```
For files that have precompile statements for keyword-body functions, the `__lookup_kwbody__`
method is defined at the top of the file generated by `SnoopCompile.write`.
- Function generators are looked up in a gensym-invariant manner with statements like
```julia
precompile(Tuple{typeof(which(FuncKinds.gen2,(Int64,Any,)).generator.gen),Any,Any,Any})
```
- Anonymous functions and inner functions are looked up inside an `isdefined` check, to
prevent errors when the precompile file is used on a different version of Julia with
different gensym numbering.
Other changes:
- A convenience utility, `timesum`, was introduced (credit to aminya)
# SnoopCompile
[](https://github.com/timholy/SnoopCompile.jl/actions?query=workflow%3A%22CI%22+branch%3Amaster)
[](https://codecov.io/gh/timholy/SnoopCompile.jl)
SnoopCompile observes the Julia compiler, causing it to record the
functions and argument types it's compiling. From these lists of methods,
you can generate lists of `precompile` directives that may reduce the latency between
loading packages and using them to do "real work."
See the documentation:
[](https://timholy.github.io/SnoopCompile.jl/dev/)
# SnoopCompileCore
SnoopCompileCore is a component of [SnoopCompile](https://github.com/timholy/SnoopCompile.jl).
# SnoopCompile.jl
Julia is fast, but its execution speed depends on optimizing code through *compilation*. Code must be compiled before you can use it, and unfortunately compilation is slow. This can cause *latency* the first time you use code: this latency is often called *time-to-first-plot* (TTFP) or more generally *time-to-first-execution* (TTFX). If something feels slow the first time you use it, and fast thereafter, you're probably experiencing the latency of compilation. Note that TTFX is distinct from time-to-load (TTL, which refers to the time you spend waiting for `using MyPkg` to finish), even though both contribute to latency.
Modern versions of Julia can store compiled code to disk (*precompilation*) to reduce or eliminate latency. Users and developers who are interested in reducing TTFX should first head to [PrecompileTools](https://github.com/JuliaLang/PrecompileTools.jl), read its documentation thoroughly, and try using it to solve latency problems.
This package, **SnoopCompile**, should be considered when:
- precompilation doesn't reduce TTFX as much as you wish
- precompilation "works," but only in isolation: as soon as you load (certain) additional packages, TTFX is bad again
- you're wondering if you can reduce the amount of time needed to precompile your package and/or the size of the precompilation cache files
In other words, SnoopCompile is a diagnostic package that helps reveal the causes of latency. Historically, it preceded PrecompileTools, and indeed PrecompileTools was split out from SnoopCompile. Today, SnoopCompile is generally needed only when PrecompileTools fails to deliver the desired benefits.
## SnoopCompile analysis modes
SnoopCompile "snoops" on the Julia compiler, collecting information that may be useful to developers. Here are some of the things you can do with SnoopCompile:
- diagnose *invalidations*, cases where Julia must throw away previously-compiled code (see [Tutorial on `@snoop_invalidations`](@ref))
- trace *inference*, to learn what code is being newly (or freshly) analyzed in an early stage of the compilation pipeline ([Tutorial on `@snoop_inference`](@ref))
- trace *code generation by LLVM*, a late stage in the compilation pipeline ([Tutorial on `@snoop_llvm`](@ref))
- reveal methods with excessive numbers of compiler-generated specializations, a.k.a. *profile-guided despecialization* ([Tutorial on PGDS](@ref pgds))
- integrate with tools like [JET](https://github.com/aviatesk/JET.jl) to further reduce the risk that your lovingly-precompiled code will be invalidated by loading other packages ([Tutorial on JET integration](@ref))
## Background information
If nothing else, you should know this:
- invalidations occur when you *load* code (e.g., `using MyPkg`) or otherwise define new methods
- inference and other stages of compilation occur the first time you *run* code for a particular combination of input types
The individual tutorials briefly explain core concepts. More detail can be found in [Understanding SnoopCompile and Julia's compilation pipeline](@ref).
## Who should use this package
SnoopCompile is intended primarily for package *developers* who want to improve the
experience for their users. It is also recommended for users who are willing to "dig deep" and understand why packages they depend on have high latency. **Your experience with latency may be personal, as it can depend on the specific combination of packages you load.** If latency troubles you, don't make the assumption that it must be unfixable: you might be the first person affected by that specific cause of latency.
# Reference
## Data collection
```@docs
SnoopCompileCore.@snoop_invalidations
SnoopCompileCore.@snoop_inference
SnoopCompileCore.@snoop_llvm
```
## GUIs
```@docs
flamegraph
pgdsgui
```
## Analysis of invalidations
```@docs
uinvalidated
invalidation_trees
precompile_blockers
filtermod
findcaller
report_invalidations
```
## Analysis of `@snoop_inference`
```@docs
flatten
exclusive
inclusive
accumulate_by_source
collect_for
staleinstances
inference_triggers
trigger_tree
suggest
isignorable
callerinstance
callingframe
skiphigherorder
InferenceTrigger
runtime_inferencetime
SnoopCompile.parcel
SnoopCompile.write
report_callee
report_callees
report_caller
```
## Analysis of LLVM
```@docs
SnoopCompile.read_snoop_llvm
```
## Demos
```@docs
SnoopCompile.flatten_demo
SnoopCompile.itrigs_demo
SnoopCompile.itrigs_higherorder_demo
```
# Understanding SnoopCompile and Julia's compilation pipeline
Julia uses
[Just-in-time (JIT) compilation](https://en.wikipedia.org/wiki/Just-in-time_compilation) to
generate the code that runs on your CPU.
Broadly speaking, there are two major compilation steps: *inference* and *code generation*.
Inference is the process of determining the type of each object, which in turn
determines which specific methods get called; once type inference is complete,
code generation performs optimizations and ultimately generates the assembly
language (native code) used on CPUs.
Some aspects of this process are documented [here](https://docs.julialang.org/en/v1/devdocs/eval/).
Using code that has never been compiled requires that it first be JIT-compiled, and this contributes to the latency of using the package.
In some circumstances, you can cache (store) the results of compilation to files to
reduce the latency when your package is used. These files are the `*.ji` and
`*.so` files that live in the `compiled` directory of your Julia depot, usually
located at `~/.julia/compiled`. However, if these files become large, loading
them can be another source for latency. Julia needs time both to load and
validate the cached compiled code. Minimizing the latency of using a package
involves focusing on caching the compilation of code that is both commonly used
and takes time to compile.
Caching code for later use is called *precompilation*. Julia has had some forms of precompilation almost since the very first packages. However, it was [Julia
1.9](https://julialang.org/blog/2023/04/julia-1.9-highlights/#caching_of_native_code) that first supported "complete" precompilation, including the ability to store native code in shared-library cache files.
SnoopCompile is designed to try to allow you to analyze the costs of JIT-compilation, identify
key bottlenecks that contribute to latency, and set up `precompile` directives to see whether
it produces measurable benefits.
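For concreteness, a bare `precompile` directive can be sketched as follows (the module and function names here are hypothetical, for illustration only):

```julia
# Hypothetical module and function, for illustration only:
module DemoPrecompile
frobnicate(v::Vector{Float64}) = sum(abs2, v)
# Ask Julia to compile `frobnicate(::Vector{Float64})` ahead of first use;
# during package precompilation, the result can be cached to disk.
precompile(frobnicate, (Vector{Float64},))
end
```

`precompile` returns `true` when compilation of the requested signature succeeds.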
## Package precompilation
When a package is precompiled, here's what happens under the hood:
- Julia loads all of the package's dependencies (the ones in the `[deps]` section of the `Project.toml` file), typically from precompile cache files
- Julia evaluates the source code (text files) that define the package module(s). Evaluating `function foo(args...) ... end` creates a new method `foo`. Note that:
+ the source code might also contain statements that create "data" (e.g., `const`s). In some cases this can lead to some subtle precompilation ["gotchas"](@ref running-during-pc)
+ the source code might also contain a precompile workload, which forces compilation and tracking of package methods.
- Julia iterates over the module contents and writes the *result* to disk. Note that the module contents might include compiled code, and if so it is written along with everything else to the cache file.
When Julia loads your package, it just loads the "snapshot" stored in the cache file: it does not re-evaluate the source-text files that defined your package! It is appropriate to think of the source files of your package as "build scripts" that create your module; once the "build scripts" are executed, it's the module itself that gets cached, and the job of the build scripts is done.
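The "build script" view above can be sketched with a toy module (hypothetical names): evaluating the source creates a method and a constant, and it is that resulting module state, not the source text, that precompilation caches.

```julia
# Toy module with hypothetical names:
module BuildDemo
const table = Dict(:a => 1)   # "data" created when the module is built
lookup(k) = table[k]          # a method created when the module is built
end
BuildDemo.lookup(:a)  # 1
```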
# Techniques for fixing inference problems
Here we assume you've dug into your code with a tool like Cthulhu, and want to know how to fix some of the problems that you discover. Below is a collection of specific cases and some tricks for handling them.
Note that there is also a [tutorial on fixing inference](@ref inferrability) that delves into advanced topics.
## Adding type annotations
### Using concrete types
Defining variables like `list = []` can be convenient, but it creates a `list` of type `Vector{Any}`. This prevents inference from knowing the type of items extracted from `list`. Using `list = String[]` for a container of strings, etc., is an excellent fix. When in doubt, check the type with `isconcretetype`: a common mistake is to think that `list_of_lists = Array{Int}[]` gives you a vector-of-vectors, but
```jldoctest
julia> isconcretetype(Array{Int})
false
```
reminds you that `Array` requires a second parameter indicating the dimensionality of the array. (Or use `list_of_lists = Vector{Int}[]` instead, as `Vector{Int} === Array{Int, 1}`.)
Many valuable tips can be found among [Julia's performance tips](https://docs.julialang.org/en/v1/manual/performance-tips/), and readers are encouraged to consult that page.
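As a quick check of the difference, you can ask inference what it knows about items extracted from each kind of container (a sketch using `Base.return_types`, an unexported introspection utility):

```julia
# Sketch: the container's element type determines what inference knows
# about items extracted from it.
list_any = []          # Vector{Any}
list_str = String[]    # Vector{String}
push!(list_any, "hello")
push!(list_str, "hello")
only(Base.return_types(first, (Vector{Any},)))     # Any
only(Base.return_types(first, (Vector{String},)))  # String
```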
### Working with non-concrete types
In cases where invalidations occur, but you can't use concrete types (there are indeed many valid uses of `Vector{Any}`),
you can often prevent the invalidation using some additional knowledge.
One common example is extracting information from an [`IOContext`](https://docs.julialang.org/en/v1/manual/networking-and-streams/#IO-Output-Contextual-Properties-1) structure, which is roughly defined as
```julia
struct IOContext{IO_t <: IO} <: AbstractPipe
io::IO_t
dict::ImmutableDict{Symbol, Any}
end
```
There are good reasons that `dict` uses a value-type of `Any`, but that makes it impossible for the compiler to infer the type of any object looked up in an `IOContext`.
Fortunately, you can help!
For example, the documentation specifies that the `:color` setting should be a `Bool`, and since it appears in documentation it's something we can safely enforce.
Changing
```
iscolor = get(io, :color, false)
```
to
```
iscolor = get(io, :color, false)::Bool # assert that the rhs is Bool-valued
```
will throw an error if it isn't a `Bool`, and this allows the compiler to take advantage of the type being known in subsequent operations.
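A sketch of the effect (the `getcolor*` helper names are hypothetical):

```julia
# Sketch: a type-assert narrows what inference can conclude from an
# `IOContext` lookup.
getcolor(io::IO) = get(io, :color, false)            # inferred value type is not concrete
getcolor_assert(io::IO) = get(io, :color, false)::Bool
io = IOContext(IOBuffer(), :color => true)
getcolor_assert(io)   # true
only(Base.return_types(getcolor_assert, (IOContext{IOBuffer},)))  # Bool
```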
If the return type is one of a small number of possibilities (generally three or fewer), you can annotate the return type with `Union{...}`. This is generally advantageous only when the intersection of what inference already knows about the types of a variable and the types in the `Union` results in a concrete type.
As a more detailed example, suppose you're writing code that parses Julia's `Expr` type:
```julia
julia> ex = :(Array{Float32,3})
:(Array{Float32, 3})
julia> dump(ex)
Expr
  head: Symbol curly
  args: Array{Any}((3,))
    1: Symbol Array
    2: Symbol Float32
    3: Int64 3
```
`ex.args` is a `Vector{Any}`.
However, for a `:curly` expression only certain types will be found among the arguments; you could write key portions of your code as
```julia
a = ex.args[2]
if a isa Symbol
# inside this block, Julia knows `a` is a Symbol, and so methods called on `a` will be resistant to invalidation
foo(a)
elseif a isa Expr && length((a::Expr).args) > 2
a::Expr # sometimes you have to help inference by adding a type-assert
x = bar(a) # `bar` is now resistant to invalidation
elseif a isa Integer
# even though you've not made this fully-inferrable, you've at least reduced the scope for invalidations
# by limiting the subset of `foobar` methods that might be called
y = foobar(a)
end
```
Other tricks include replacing broadcasting on `v::Vector{Any}` with `Base.mapany(f, v)`--`mapany` avoids trying to narrow the type of `f(v[i])` and just assumes it will be `Any`, thereby avoiding invalidations of many `convert` methods.
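For example (a sketch; `mapany` is an unexported helper in `Base`):

```julia
# Sketch: `Base.mapany` maps over a Vector{Any} without trying to
# narrow the result's element type, sidestepping `convert` invalidations.
v = Any[1, 2.0, "three"]
strs = Base.mapany(string, v)   # Vector{Any} containing strings
```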
Adding type-assertions and fixing inference problems are the most common approaches for fixing invalidations.
You can discover these manually, but using Cthulhu is highly recommended.
## Inferrable field access for abstract types
When invalidations happen for methods that manipulate fields of abstract types, often there is a simple solution: create an "interface" for the abstract type specifying that certain fields must have certain types.
Here's an example:
```
abstract type AbstractDisplay end
struct Monitor <: AbstractDisplay
height::Int
width::Int
maker::String
end
struct Phone <: AbstractDisplay
height::Int
width::Int
maker::Symbol
end
function Base.show(@nospecialize(d::AbstractDisplay), x)
str = string(x)
w = d.width
if length(str) > w # do we have to truncate to fit the display width?
...
```
In this `show` method, we've deliberately chosen to prevent specialization on the specific type of `AbstractDisplay` (to reduce the total number of times we have to compile this method).
As a consequence, Julia's inference may not realize that `d.width` returns an `Int`.
Fortunately, you can help by defining an interface for generic `AbstractDisplay` objects:
```
function Base.getproperty(d::AbstractDisplay, name::Symbol)
if name === :height
return getfield(d, :height)::Int
elseif name === :width
return getfield(d, :width)::Int
elseif name === :maker
return getfield(d, :maker)::Union{String,Symbol}
end
return getfield(d, name)
end
```
Julia's [constant propagation](https://en.wikipedia.org/wiki/Constant_folding) will ensure that most accesses of those fields will be determined at compile-time, so this simple change robustly fixes many inference problems.
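Putting it together, here is a minimal runnable sketch (with hypothetical type names chosen to avoid clashing with `Base.AbstractDisplay`):

```julia
# Minimal sketch with hypothetical names:
abstract type ScreenDevice end
struct Monitor2 <: ScreenDevice
    height::Int
    width::Int
    maker::String
end
function Base.getproperty(d::ScreenDevice, name::Symbol)
    name === :height && return getfield(d, :height)::Int
    name === :width && return getfield(d, :width)::Int
    return getfield(d, name)
end
# Despite the @nospecialize, `d.width` is typically inferred as Int
# thanks to constant propagation through `getproperty`.
devwidth(@nospecialize(d::ScreenDevice)) = d.width
devwidth(Monitor2(1080, 1920, "Acme"))  # 1920
```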
## Fixing `Core.Box`
[Julia issue 15276](https://github.com/JuliaLang/julia/issues/15276) is one of the more surprising forms of inference failure; it is the most common cause of a `Core.Box` annotation.
If other variables depend on the `Box`ed variable, then a single `Core.Box` can lead to widespread inference problems.
For this reason, these are also among the first inference problems you should tackle.
Read [this explanation of why this happens and what you can do to fix it](https://docs.julialang.org/en/v1/manual/performance-tips/#man-performance-captured).
If you are directed to find `Core.Box` inference triggers via [`suggest`](@ref), you may need to explore around the call site a bit--
the inference trigger may be in the closure itself, but the fix needs to go in the method that creates the closure.
Use of `ascend` is highly recommended for fixing `Core.Box` inference failures.
## Handling edge cases
You can sometimes get invalidations from failing to handle "formal" possibilities.
For example, operations with regular expressions might return a `Union{Nothing, RegexMatch}`.
You can sometimes get poor type inference by writing code that fails to take account of the possibility that `nothing` might be returned.
For example, a comprehension
```julia
ms = [m.match for m in match.((rex,), my_strings)]
```
might be replaced with
```julia
ms = [m.match for m in match.((rex,), my_strings) if m !== nothing]
```
and return a better-typed result.
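A runnable sketch of the difference (hypothetical data): filtering out `nothing` yields a concretely-typed result instead of one that risks `Nothing` in its element type.

```julia
rex = r"\d+"
my_strings = ["a1", "bb", "c23"]
ms = [m.match for m in match.((rex,), my_strings) if m !== nothing]
# ms == ["1", "23"], with eltype SubString{String}
```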
# Precompilation "gotcha"s
## [Running code during module definition](@id running-during-pc)
Suppose you're working on an astronomy package and your source code has a line
```
const planets = map(makeplanet, ["Mercury", ...])
```
Julia will dutifully create `planets` and store it in the package's precompile cache file. This also runs `makeplanet`, and if this is the first time it gets run, it will compile `makeplanet`. Assuming that `makeplanet` is a method defined in the package, the compiled code for `makeplanet` will be stored in the cache file.
However, two circumstances can lead to puzzling omissions from the cache files:
- if `makeplanet` is a method defined in a dependency of your package, it will *not* be cached in your package. You'd want to add precompilation of `makeplanet` to the package that creates that method.
- if `makeplanet` is poorly-inferred and uses runtime dispatch, any such callees that are not owned by your package will not be cached. For example, suppose `makeplanet` ends up calling methods in Base Julia or its standard libraries that are not precompiled into Julia itself: the compiled code for those methods will not be added to the cache file.
One option to ensure this dependent code gets cached is to create `planets` inside `PrecompileTools.@compile_workload`:
```
@compile_workload begin
global planets
const planets = map(makeplanet, ["Mercury", ...])
end
```
Note that your package definition can have multiple `@compile_workload` blocks.
# Package roles and alternatives
## SnoopCompileCore
SnoopCompileCore is a tiny package with no dependencies; it's used for collecting data, and it has been designed in such a way that it cannot cause any invalidations of its own. Collecting data on invalidations and inference with SnoopCompileCore is the only way you can be sure you are observing the "native state" of your code.
## SnoopCompile
SnoopCompile is a much larger package that performs analysis on the data collected by SnoopCompileCore; loading SnoopCompile can (and does) trigger invalidations.
Consequently, you're urged to always collect data with just SnoopCompileCore loaded,
and wait to load SnoopCompile until after you've finished collecting the data.
## Cthulhu
[Cthulhu](https://github.com/JuliaDebug/Cthulhu.jl) is a companion package that gives deep insights into the origin of invalidations or inference failures.
## AbstractTrees
[AbstractTrees](https://github.com/JuliaCollections/AbstractTrees.jl) is the one package in this list that can be both a "workhorse" and a developer tool. SnoopCompile uses it mostly for pretty-printing.
## JET
[JET](https://github.com/aviatesk/JET.jl) is a powerful developer tool that in some ways is an alternative to SnoopCompile. While the two have different goals, the packages have some overlap in what they can tell you about your code. However, their mechanisms of action are fundamentally different:
- JET is a "static analyzer," which means that it analyzes the code itself. JET can tell you about inference failures (runtime dispatch) much like SnoopCompile, with a major advantage: SnoopCompileCore omits information about any callees that are already compiled, but JET's `@report_opt` provides *exhaustive* information about the entire *inferable* callgraph (i.e., the part of the callgraph that inference can predict from the initial call) regardless of whether it has been previously compiled. With JET, you don't have to remember to run each analysis in a fresh session.
- SnoopCompileCore collects data by watching normal inference at work. On code that hasn't been compiled previously, this can yield results similar to JET's, with a different major advantage: JET can't "see through" runtime dispatch, but SnoopCompileCore can. With SnoopCompile, you can immediately get a holistic view of your entire callgraph.
Combining JET and SnoopCompile can provide insights that are difficult to obtain with either package in isolation. See the [Tutorial on JET integration](@ref).
# Tutorial on `@snoop_invalidations`
## What are invalidations?
In this context, *invalidation* means discarding previously-compiled code. Invalidations occur because of interactions between independent pieces of code. Invalidations are essential to make Julia fast, interactive, and correct: you *need* invalidations if you want to be able to define some methods, run (compile) some code, and then in the same session define new methods that might lead to different answers if you were to recompile the code in the presence of the new methods.
Invalidations can happen just from loading packages. Packages are precompiled in isolation, but you can load many packages into a single interactive session. It's impossible for the individual packages to anticipate the full "world of methods" in your interactive session, so sometimes Julia has to discard code that was compiled in a smaller world because it's at risk for being incorrect in the larger world.
The downside of invalidations is that they make latency worse, as code must be recompiled when you first run it. The benefits of precompilation are partially lost, and the work done during precompilation is partially wasted.
While some invalidations are unavoidable, in practice a good developer can often design packages to minimize the number and/or impact of invalidations. Invalidation-resistant code is often faster, with smaller binary size, than code that is vulnerable to invalidation.
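The mechanism can be sketched with a toy example (hypothetical functions `f` and `g`):

```julia
# Hypothetical functions `f` and `g`:
f(x) = 1
g(x) = f(x[1])
g(Any[1.0])        # compiles `g`, which calls the only matching `f`
f(x::Float64) = 2  # a more specific method: the compiled `g` is now invalid
g(Any[1.0])        # 2 (Julia recompiled `g` against the new method table)
```

Because `g` was compiled when `f(::Float64)` did not exist, its compiled code must be discarded once the new method could change the answer.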
A good first step is to measure what's being invalidated, and why.
## Learning to observe, diagnose, and fix invalidations
We'll illustrate invalidations by creating two packages, where loading the second package invalidates some code that was compiled in the first one. We'll then go over approaches for "fixing" invalidations (i.e., preventing them from occuring).
!!! tip
Since SnoopCompile's tools are interactive, you are strongly encouraged to try these examples yourself as you read along.
### Add SnoopCompileCore, SnoopCompile, and helper packages to your environment
Here, we'll add these packages to your [default environment](https://pkgdocs.julialang.org/v1/environments/). (With the exception of `AbstractTrees`, these "developer tool" packages should not be added to the Project file of any real packages unless you're extending the tool itself.) From your default environment (i.e., in package mode you should see something like `(@v1.10) pkg>`), do
```
using Pkg
Pkg.add(["SnoopCompileCore", "SnoopCompile", "AbstractTrees", "Cthulhu"]);
```
### Create the demonstration packages
We're going to implement a toy version of the card game [blackjack](https://www.wikihow.com/Play-Blackjack), where players take cards with the aim of collecting 21 points. The higher you go the better, *unless* you go over 21 points, in which case you "go bust" (i.e., you lose). Because our real goal is to illustrate invalidations, we'll create a "blackjack ecosystem" that involves an interaction between two packages.
While [PkgTemplates](https://github.com/JuliaCI/PkgTemplates.jl) is recommended for creating packages, here we'll just use the basic capabilities in `Pkg`.
To create the (empty) packages, the code below executes the following steps:
- navigate to a temporary directory and create both packages
- make the first package (`Blackjack`) depend on [PrecompileTools](https://github.com/JuliaLang/PrecompileTools.jl) (we're interested in reducing latency!)
- make the second package (`BlackjackFacecards`) depend on the first one (`Blackjack`)
```@repl tutorial-invalidations
oldproj = Base.active_project() # hide
cd(mktempdir())
using Pkg
Pkg.generate("Blackjack");
Pkg.activate("Blackjack")
Pkg.add("PrecompileTools");
Pkg.generate("BlackjackFacecards");
Pkg.activate("BlackjackFacecards")
Pkg.develop(PackageSpec(path=joinpath(pwd(), "Blackjack")));
```
Now it's time to create the code for `Blackjack`. Normally, you'd do this with an editor, but to make it reproducible here we'll use code to create these packages. The package code we'll create below defines the following:
- a `score` function to assign a numeric value to a card
- `tallyscore` which adds the total score for a hand of cards
- `playgame` which uses a simple strategy to decide whether to take another card from the deck and add it to the hand
To reduce latency on first use, we then precompile `playgame`. In a real application, we'd also want a function to manage the `deck` of cards, but for brevity we'll omit this and do it manually.
```@repl tutorial-invalidations
write(joinpath("Blackjack", "src", "Blackjack.jl"), """
module Blackjack
using PrecompileTools
export playgame
const deck = [] # the deck of cards that can be dealt
# Compute the score of one card
score(card::Int) = card
# Add up the score in a hand of cards
function tallyscores(cards)
s = 0
for card in cards
s += score(card)
end
return s
end
# Play the game! We use a simple strategy to decide whether to draw another card.
function playgame()
myhand = []
while tallyscores(myhand) <= 14 && !isempty(deck)
push!(myhand, pop!(deck)) # "Hit me!"
end
myscore = tallyscores(myhand)
return myscore <= 21 ? myscore : "Busted"
end
# Precompile `playgame`:
@setup_workload begin
push!(deck, 8, 10) # initialize the deck
@compile_workload begin
playgame()
end
end
end
""")
```
Suppose you use `Blackjack` and like it, but you notice it doesn't support face cards. Perhaps you're nervous about contributing to the `Blackjack` package (you shouldn't be!), and so you decide to start your own package that extends its functionality. You create `BlackjackFacecards` to add scoring of the jack, queen, king, and ace (for simplicity we'll make the ace always worth 11):
```@repl tutorial-invalidations
write(joinpath("BlackjackFacecards", "src", "BlackjackFacecards.jl"), """
module BlackjackFacecards
using Blackjack
# Add a new `score` method:
Blackjack.score(card::Char) = card ∈ ('J', 'Q', 'K') ? 10 :
card == 'A' ? 11 : error(card, " not known")
end
""")
```
!!! warning
Because `BlackjackFacecards` "owns" neither `Char` nor `score`, this is [piracy](https://docs.julialang.org/en/v1/manual/style-guide/#Avoid-type-piracy-1) and should generally be avoided. Piracy is one way to cause invalidations, but it's not the only one. `BlackjackFacecards` could avoid committing piracy by defining a `struct Facecard ... end` and defining `score(card::Facecard)` instead of `score(card::Char)`. However, this would *not* fix the invalidations--all the factors described below are unchanged.
Now we're ready!
### Recording invalidations
Here are the steps executed by the code below
- load `SnoopCompileCore`
- load `Blackjack` and `BlackjackFacecards` while *recording invalidations* with the `@snoop_invalidations` macro.
- load `SnoopCompile` and `AbstractTrees` for analysis
```@repl tutorial-invalidations
using SnoopCompileCore
invs = @snoop_invalidations using Blackjack, BlackjackFacecards;
using SnoopCompile, AbstractTrees
```
!!! tip
If you get errors like `Package SnoopCompileCore not found in current path`, a likely explanation is that
you didn't add it to your default environment. In the example above, we're in the `BlackjackFacecards` environment
so we can develop the package, but you also need access to `SnoopCompile` and `SnoopCompileCore`. Having these in your [default environment](https://docs.julialang.org/en/v1/manual/code-loading/#Environment-stacks) lets them be found even if they aren't part of the current environment.
### Analyzing invalidations
Now we're ready to see what, if anything, got invalidated:
```@repl tutorial-invalidations
trees = invalidation_trees(invs)
```
This has only one "tree" of invalidations. `trees` is a `Vector` so we can index it:
```@repl tutorial-invalidations
tree = trees[1]
```
Each tree stems from a single *cause* described in the top line. For this tree, the cause was adding the new method `score(::Char)` in `BlackjackFacecards`.
Each *cause* is associated with one or more *victims* of invalidation, a list here named `mt_backedges`. Let's extract the final (and in this case, only) victim:
```@repl tutorial-invalidations
sig, victim = tree.mt_backedges[end];
```
!!! note
`mt_backedges` stands for "MethodTable backedges." In other cases you may see a second type of invalidation, just called `backedges`. With these, there is no `sig`, and so you'll use just `victim = tree.backedges[i]`.
First, let's look at the problematic method `sig`nature:
```@repl tutorial-invalidations
sig
```
This is a type-tuple, i.e., `Tuple{typeof(f), typesof(args)...}`. We see that `score` was called on an object of (inferred) type `Any`. **Calling a function with unknown argument types makes code vulnerable to invalidation, and insertion of the new `score` method "exploited" this vulnerability.**
`victim` shows which compiled code got invalidated:
```@repl tutorial-invalidations
victim
```
But this is not the full extent of what got invalidated:
```@repl tutorial-invalidations
print_tree(victim)
```
Invalidations propagate throughout entire call trees, here up to `playgame()`: anything that calls code that may no longer be correct is itself at risk for being incorrect.
In general, victims with lots of "children" deserve the greatest attention.
While `print_tree` can be useful, Cthulhu's `ascend` is a far more powerful tool for gaining deeper insight:
```julia
julia> using Cthulhu
julia> ascend(victim)
Choose a call for analysis (q to quit):
> tallyscores(::Vector{Any})
playgame()
```
This is an interactive REPL-menu, described more completely (via text and video) at [ascend](https://github.com/JuliaDebug/Cthulhu.jl?tab=readme-ov-file#usage-ascend).
There are quite a few other tools for working with `invs` and `trees`, see the [Reference](@ref). If your list of invalidations is dauntingly large, you may be interested in [precompile_blockers](@ref).
### Why the invalidations occur
`tallyscores` and `playgame` were compiled in `Blackjack`, a "world" where the `score` method defined in `BlackjackFacecards` does not yet exist. When you load the `BlackjackFacecards` package, Julia must ask itself: now that this new `score` method exists, am I certain that I would compile `tallyscores` the same way? If the answer is "no," Julia invalidates the old compiled code, and compiles a fresh version with full awareness of the new `score` method in `BlackjackFacecards`.
Why would the compilation of `tallyscores` change? Evidently, `cards` is a `Vector{Any}`, and this means that `tallyscores` can't guess what kind of object `card` might be, and thus it can't guess what kind of objects are passed into `score`. The crux of the invalidation is thus:
- when `Blackjack` is compiled, inference does not know which `score` method will be called. However, at the time of compilation the only `score` method is for `Int`. Thus Julia will reason that anything that isn't an `Int` is going to trigger an error anyway, and so you might as well optimize `tallyscores` expecting all cards to be `Int`s.
- however, when `BlackjackFacecards` is loaded, suddenly there are two `score` methods supporting both `Int` and `Char`. Now Julia's guess that all `cards` will probably be `Int`s doesn't seem so likely to be true, and thus `tallyscores` should be recompiled.
Thus, invalidations arise from optimization based on what methods and types are "in the world" at the time of compilation (sometimes called *world-splitting*). This form of optimization can have performance benefits, but it also leaves your code vulnerable to invalidation.
### Fixing invalidations
In broad strokes, there are three ways to prevent invalidation.
#### Method 1: defer compilation until the full world is known
The first and simplest technique is to ensure that the full range of possibilities (the entire "world of code") is present before any compilation occurs. In this case, probably the best approach would be to merge the `BlackjackFacecards` package into `Blackjack` itself. Or, if you are a maintainer of the "Blackjack ecosystem" and have reasons for thinking that keeping the packages separate makes sense, you could alternatively move the `PrecompileTools` workload to `BlackjackFacecards`. Either approach should prevent the invalidations from occurring.
#### Method 2: improve inferability
The second way to prevent invalidations is to improve the inferability of the victim(s). If `Int` and `Char` really are the only possible kinds of cards, then in `playgame` it would be better to declare
```julia
myhand = Union{Int,Char}[]
```
and similarly for `deck` itself. That untyped `[]` is what makes `myhand` (and thus `cards`, when passed to `tallyscores`) a `Vector{Any}`, and the possibilities for `card` are endless. By constraining the possible types, we allow inference to know more clearly what methods might be called. More tips on fixing invalidations through improving inference can be found in [Techniques for fixing inference problems](@ref).
In this particular case, just annotating `Union{Int,Char}[]` isn't sufficient on its own, because the `score` method for `Char` doesn't yet exist, so Julia doesn't know what to call. However, in most real-world cases this change alone would be sufficient: usually all the needed methods exist, it's just a question of reassuring Julia that no other options are even possible.
!!! note
This fix leverages [union-splitting](https://julialang.org/blog/2018/08/union-splitting/), which is conceptually related to "world-splitting." However, union-splitting is far more effective at fixing inference problems, as it guarantees that no other possibilities will *ever* exist, no matter how many other methods get defined.
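As a self-contained illustration of this guarantee (using hypothetical `score2`/`sumscores` names, independent of the tutorial's packages):

```julia
# Both methods exist before compilation, and the container's element type
# is a closed Union, so calls to `score2` can be union-split:
score2(x::Int) = x
score2(c::Char) = c ∈ ('J', 'Q', 'K') ? 10 :
                  c == 'A' ? 11 : error(c, " not known")
sumscores(cards::Vector{Union{Int,Char}}) = sum(score2, cards; init=0)

sumscores(Union{Int,Char}[10, 'K', 'A'])  # fully inferable; branches on Int vs Char
```

Defining additional `score2` methods later (say, for `Float64`) cannot invalidate the compiled `sumscores`, because the container's element type rules those calls out.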
!!! tip
Many vulnerabilities can be fixed by improving inference. In complex code, it's easy to unwittingly write things in ways that defeat Julia's type inference. Tools that help you discover inference problems, like SnoopCompile and [JET](@ref), help you discover these unwitting "mistakes."
While in real life it's usually a bad idea to "blame the victim," it's typically the right attitude for fixing invalidations. Keep in mind, though, that the source of the problem may not be the immediate victim: in this case, it was a poor container choice in `playgame` that put `tallyscores` in the bad position of having to operate on a `Vector{Any}`.
Improving inferability is probably the most broadly-applicable technique, and when applicable it usually gives the best outcomes: not only is your code more resistant to invalidation, but it's likely faster and compiles to smaller binaries. However, of the three approaches it is also the one that requires the deepest understanding of Julia's type system, and thus may be difficult for some coders to use.
There are cases where there is no good way to make the code inferable, in which case other strategies are needed.
#### Method 3: disable Julia's speculative optimization
The third option is to prevent Julia's speculative optimization: one could replace `score(card)` with `invokelatest(score, card)`:
```julia
function tallyscores(cards)
s = 0
for card in cards
s += invokelatest(score, card)
end
return s
end
```
This forces Julia to always look up the appropriate method of `score` while the code is running, and thus prevents the speculative optimizations that leave the code vulnerable to invalidation. However, the cost is that your code may run somewhat more slowly, particularly here where the call is inside a loop.
If you plan to define at least two `score` methods, another way to turn off this optimization would be to declare
```julia
Base.Experimental.@max_methods 1 function score end
```
before defining any `score` methods. You can read the documentation on `@max_methods` to learn more about how it works.
!!! tip
Most of us learn best by doing. Try at least one of these methods of fixing the invalidation, and use SnoopCompile to verify that it works.
### Undoing the damage from invalidations
If you can't prevent the invalidation, an alternative approach is to recompile the invalidated code. For example, one could repeat the precompile workload from `Blackjack` in `BlackjackFacecards`. While this will mean that the whole "stack" will be compiled twice and cached twice (which is wasteful), it should be effective in reducing latency for users.
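For example, a hypothetical version of `BlackjackFacecards` that repeats the workload might look like this (assuming `deck` and `playgame` are available from `Blackjack` as in the workload shown earlier, and that `deck` accepts `Char` cards):

```julia
module BlackjackFacecards
using Blackjack, PrecompileTools

Blackjack.score(card::Char) = card ∈ ('J', 'Q', 'K') ? 10 :
                              card == 'A' ? 11 : error(card, " not known")

# Re-run a representative workload so code invalidated by the new `score`
# method is recompiled and cached with this package:
@setup_workload begin
    push!(deck, 8, 'A')
    @compile_workload begin
        playgame()
    end
end

end
```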
PrecompileTools also provides `@recompile_invalidations`. This isn't generally recommended for use in packages (you can end up with long compile times for things you don't need), but it can be useful in personal "startup packages" where you want to reduce latency for a particular project you're working on. See the PrecompileTools documentation for details.
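For instance, a hypothetical startup package might wrap the loads like this (see the PrecompileTools documentation for authoritative usage):

```julia
# Sketch of a personal "startup package"; not recommended inside ordinary packages.
module MyStartup
using PrecompileTools

# Recompile whatever gets invalidated by loading these packages,
# caching the fresh code with MyStartup:
@recompile_invalidations begin
    using Blackjack, BlackjackFacecards
end

end
```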
```@repl tutorial-invalidations
Pkg.activate(oldproj) # hide
```
| SnoopCompile | https://github.com/timholy/SnoopCompile.jl.git |
# Tutorial on JET integration
[JET](https://github.com/aviatesk/JET.jl) is a powerful tool for analyzing your code.
As described [elsewhere](@ref JET), some of its functionality overlaps SnoopCompile, but its mechanism of action is very different. The combination JET and SnoopCompile provides capabilities that neither package has on its own.
Specifically, one can use SnoopCompile to collect data on the full callgraph and JET to perform the exhaustive analysis of individual nodes.
The integration between the two packages is bundled into SnoopCompile, specifically [`report_callee`](@ref),
[`report_callees`](@ref), and [`report_caller`](@ref). These take [`InferenceTrigger`](@ref) (see the page on [inference failures](@ref inferrability)) and use them to generate JET reports. These tools focus on error-analysis rather than optimization, as SnoopCompile can already identify runtime dispatch.
We can demonstrate both the need and use of these tools with a simple extended example.
## JET usage
As a basic introduction to JET, let's analyze the following call from JET's own documentation:
```jldoctest jet; filter=[r"@ reduce.*", r"(in|@)", r"(REPL\[\d+\]|none)"]
julia> using JET
julia> list = Any[1,2,3];
julia> sum(list)
6
julia> @report_call sum(list)
═════ 1 possible error found ═════
┌ sum(a::Vector{Any}) @ Base ./reducedim.jl:1010
│┌ sum(a::Vector{Any}; dims::Colon, kw::@Kwargs{}) @ Base ./reducedim.jl:1010
││┌ _sum(a::Vector{Any}, ::Colon) @ Base ./reducedim.jl:1014
│││┌ _sum(a::Vector{Any}, ::Colon; kw::@Kwargs{}) @ Base ./reducedim.jl:1014
││││┌ _sum(f::typeof(identity), a::Vector{Any}, ::Colon) @ Base ./reducedim.jl:1015
│││││┌ _sum(f::typeof(identity), a::Vector{Any}, ::Colon; kw::@Kwargs{}) @ Base ./reducedim.jl:1015
││││││┌ mapreduce(f::typeof(identity), op::typeof(Base.add_sum), A::Vector{Any}) @ Base ./reducedim.jl:357
│││││││┌ mapreduce(f::typeof(identity), op::typeof(Base.add_sum), A::Vector{Any}; dims::Colon, init::Base._InitialValue) @ Base ./reducedim.jl:357
││││││││┌ _mapreduce_dim(f::typeof(identity), op::typeof(Base.add_sum), ::Base._InitialValue, A::Vector{Any}, ::Colon) @ Base ./reducedim.jl:365
│││││││││┌ _mapreduce(f::typeof(identity), op::typeof(Base.add_sum), ::IndexLinear, A::Vector{Any}) @ Base ./reduce.jl:432
││││││││││┌ mapreduce_empty_iter(f::typeof(identity), op::typeof(Base.add_sum), itr::Vector{Any}, ItrEltype::Base.HasEltype) @ Base ./reduce.jl:380
│││││││││││┌ reduce_empty_iter(op::Base.MappingRF{typeof(identity), typeof(Base.add_sum)}, itr::Vector{Any}, ::Base.HasEltype) @ Base ./reduce.jl:384
││││││││││││┌ reduce_empty(op::Base.MappingRF{typeof(identity), typeof(Base.add_sum)}, ::Type{Any}) @ Base ./reduce.jl:361
│││││││││││││┌ mapreduce_empty(::typeof(identity), op::typeof(Base.add_sum), T::Type{Any}) @ Base ./reduce.jl:372
││││││││││││││┌ reduce_empty(::typeof(Base.add_sum), ::Type{Any}) @ Base ./reduce.jl:352
│││││││││││││││┌ reduce_empty(::typeof(+), ::Type{Any}) @ Base ./reduce.jl:343
││││││││││││││││┌ zero(::Type{Any}) @ Base ./missing.jl:106
│││││││││││││││││ MethodError: no method matching zero(::Type{Any}): Base.throw(Base.MethodError(zero, tuple(Base.Any)::Tuple{DataType})::MethodError)
││││││││││││││││└────────────────────
```
The final line reveals that while `sum` happened to work for the specific `list` we provided, it nevertheless has a "gotcha" for the types we supplied: if `list` happens to be empty, `sum` depends on the ability to generate `zero(T)` for the element-type `T` of `list`, but because we constructed `list` to have an element-type of `Any`, there is no such method and `sum(Any[])` throws an error:
```jldoctest
julia> sum(Int[])
0
julia> sum(Any[])
ERROR: MethodError: no method matching zero(::Type{Any})
[...]
```
(This can be circumvented with `sum(Any[]; init=0)`.)
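For reference, supplying `init` avoids the need for `zero(::Type{Any})` entirely:

```julia
julia> sum(Any[]; init=0)
0

julia> sum(Any[1, 2, 3]; init=0)
6
```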
This is the kind of bug that can lurk undetected for a long time, and JET excels at exposing them.
## JET limitations
JET is a *static* analyzer, meaning that it works from the argument types provided, and that has an important consequence: if a particular callee can't be inferred, JET can't analyze it. We can illustrate that quite easily:
```jldoctest jet
julia> callsum(listcontainer) = sum(listcontainer[1])
callsum (generic function with 1 method)
julia> lc = Any[list]; # "hide" `list` inside a Vector{Any}
julia> callsum(lc)
6
julia> @report_call callsum(lc)
No errors detected
```
Because we "hid" the type of `list` from inference, JET couldn't tell what specific instance of `sum` was going to be called, so it was unable to detect any errors.
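One manual workaround is a type assertion at the call site, which restores the static visibility that JET needs (a sketch; the SnoopCompile integration described next handles the general case automatically):

```julia
# Hypothetical variant: the assertion tells inference which `sum` is called.
callsum2(listcontainer) = sum(listcontainer[1]::Vector{Any})
# `@report_call callsum2(lc)` should now surface the `zero(::Type{Any})` report,
# because `sum(::Vector{Any})` is statically reachable from `callsum2`.
```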
## JET/SnoopCompile integration
A resolution to this problem is to use SnoopCompile to do the "data collection" and JET to do the analysis.
The key reason is that SnoopCompile is a dynamic analyzer, and is capable of bridging across runtime dispatch.
As always, you need to do the data collection in a fresh session where the calls have not previously been inferred.
After restarting Julia, we can do this:
```julia
julia> using SnoopCompileCore
julia> list = Any[1,2,3];
julia> lc = Any[list]; # "hide" `list` inside a Vector{Any}
julia> callsum(listcontainer) = sum(listcontainer[1]);
julia> tinf = @snoop_inference callsum(lc);
julia> using SnoopCompile, JET, Cthulhu
julia> tinf.children
2-element Vector{SnoopCompileCore.InferenceTimingNode}:
InferenceTimingNode: 0.000869/0.000869 on callsum(::Vector{Any}) with 0 direct children
InferenceTimingNode: 0.000196/0.006685 on sum(::Vector{Any}) with 1 direct children
julia> report_callees(inference_triggers(tinf))
1-element Vector{Pair{InferenceTrigger, JET.JETCallResult{JET.JETAnalyzer{JET.BasicPass}, Base.Pairs{Symbol, Union{}, Tuple{}, @NamedTuple{}}}}}:
Inference triggered to call sum(::Vector{Any}) from callsum (./REPL[5]:1) with specialization callsum(::Vector{Any}) => ═════ 1 possible error found ═════
┌ sum(a::Vector{Any}) @ Base ./reducedim.jl:1010
│┌ sum(a::Vector{Any}; dims::Colon, kw::@Kwargs{}) @ Base ./reducedim.jl:1010
││┌ _sum(a::Vector{Any}, ::Colon) @ Base ./reducedim.jl:1014
│││┌ _sum(a::Vector{Any}, ::Colon; kw::@Kwargs{}) @ Base ./reducedim.jl:1014
││││┌ _sum(f::typeof(identity), a::Vector{Any}, ::Colon) @ Base ./reducedim.jl:1015
│││││┌ _sum(f::typeof(identity), a::Vector{Any}, ::Colon; kw::@Kwargs{}) @ Base ./reducedim.jl:1015
││││││┌ mapreduce(f::typeof(identity), op::typeof(Base.add_sum), A::Vector{Any}) @ Base ./reducedim.jl:357
│││││││┌ mapreduce(f::typeof(identity), op::typeof(Base.add_sum), A::Vector{Any}; dims::Colon, init::Base._InitialValue) @ Base ./reducedim.jl:357
││││││││┌ _mapreduce_dim(f::typeof(identity), op::typeof(Base.add_sum), ::Base._InitialValue, A::Vector{Any}, ::Colon) @ Base ./reducedim.jl:365
│││││││││┌ _mapreduce(f::typeof(identity), op::typeof(Base.add_sum), ::IndexLinear, A::Vector{Any}) @ Base ./reduce.jl:432
││││││││││┌ mapreduce_empty_iter(f::typeof(identity), op::typeof(Base.add_sum), itr::Vector{Any}, ItrEltype::Base.HasEltype) @ Base ./reduce.jl:380
│││││││││││┌ reduce_empty_iter(op::Base.MappingRF{typeof(identity), typeof(Base.add_sum)}, itr::Vector{Any}, ::Base.HasEltype) @ Base ./reduce.jl:384
││││││││││││┌ reduce_empty(op::Base.MappingRF{typeof(identity), typeof(Base.add_sum)}, ::Type{Any}) @ Base ./reduce.jl:361
│││││││││││││┌ mapreduce_empty(::typeof(identity), op::typeof(Base.add_sum), T::Type{Any}) @ Base ./reduce.jl:372
││││││││││││││┌ reduce_empty(::typeof(Base.add_sum), ::Type{Any}) @ Base ./reduce.jl:352
│││││││││││││││┌ reduce_empty(::typeof(+), ::Type{Any}) @ Base ./reduce.jl:343
││││││││││││││││┌ zero(::Type{Any}) @ Base ./missing.jl:106
│││││││││││││││││ MethodError: no method matching zero(::Type{Any}): Base.throw(Base.MethodError(zero, tuple(Base.Any)::Tuple{DataType})::MethodError)
││││││││││││││││└────────────────────
```
Because SnoopCompileCore collected the runtime-dispatched `sum` call, we can pass it to JET.
`report_callees` filters those calls which generate JET reports, allowing you to focus on potential errors.
!!! note
    JET integration is enabled only if JET.jl _and_ Cthulhu.jl have been loaded into your main session.
    This is why the example above includes `using SnoopCompile, JET, Cthulhu`.
# [Profile-guided despecialization](@id pgds)
Julia's multiple dispatch allows developers to create methods for specific argument types. On top of this, the Julia compiler performs *automatic specialization*:
```
function countnonzeros(A::AbstractArray)
...
end
```
will be compiled separately for `Vector{Int}`, `Matrix{Float64}`, `SubArray{...}`, and so on, if it gets called for each of these types.
Each specialization (each `MethodInstance` with different argument types) costs extra inference and code-generation time,
so while specialization often improves runtime performance, that has to be weighed against the cost in latency.
There are also cases in which [overspecialization can hurt both run-time and compile-time performance](https://docs.julialang.org/en/v1/manual/performance-tips/#The-dangers-of-abusing-multiple-dispatch-(aka,-more-on-types-with-values-as-parameters)).
Consequently, an analysis of specialization can be a powerful tool for improving package quality.
`SnoopCompile` ships with an interactive tool, [`pgdsgui`](@ref), short for "Profile-guided despecialization."
The name is a reference to a related technique, [profile-guided optimization](https://en.wikipedia.org/wiki/Profile-guided_optimization) (PGO).
Both PGO and PGDS use runtime profiling to help guide decisions about code optimization.
PGO is often used in languages whose default mode is to avoid specialization, whereas PGDS seems more appropriate for
a language like Julia which specializes by default.
While PGO is sometimes an automatic part of the compiler that optimizes code midstream during execution, SnoopCompile's PGDS is a tool for making static changes (edits) to code.
Again, this seems appropriate for a language where specialization typically happens prior to the first execution of the code.
### Add SnoopCompileCore, SnoopCompile, and helper packages to your environment
We'll add these packages to your [default environment](https://pkgdocs.julialang.org/v1/environments/) so you can use them while in the package environment:
```
using Pkg
Pkg.add(["SnoopCompileCore", "SnoopCompile", "PyPlot"]);
```
PyPlot is used for the PGDS interface in part to reduce interference with native-Julia plotting packages like Makie--it's a little awkward to depend on a package that you might be simultaneously modifying!
## Using the PGDS graphical user interface
To illustrate the use of PGDS, we'll examine an example in which some methods get specialized for hundreds of types.
To keep this example short, we'll create functions that operate on types themselves.
!!! note
As background to this example, for a `DataType` `T`, `T.name` returns a `Core.TypeName`, and `T.name.name` returns the name as a `Symbol`.
`Base.unwrap_unionall(T)` preserves `DataType`s as-is, but converts a `UnionAll` type into a `DataType`.
```julia
"""
spelltype(T::Type)
Spell out a type's name, one character at a time.
"""
function spelltype(::Type{T}) where T
name = Base.unwrap_unionall(T).name.name
str = ""
for c in string(name)
str *= c
end
return str
end
"""
mappushes!(f, dest, src)
Like `map!` except it grows `dest` by one for each element in `src`.
"""
function mappushes!(f, dest, src)
for item in src
push!(dest, f(item))
end
return dest
end
mappushes(f, src) = mappushes!(f, [], src)
```
There are two stages to PGDS: first (and preferably starting in a fresh Julia session), we profile type-inference:
```julia
julia> using SnoopCompileCore
julia> Ts = subtypes(Any); # get a long list of different types
julia> tinf = @snoop_inference mappushes(spelltype, Ts);
```
Then, *in the same session*, profile the runtime:
```
julia> using Profile
julia> @profile mappushes(spelltype, Ts);
```
Typically, it's best if the workload here is reflective of a "real" workload (test suites often are not), so that you
get a realistic view of where your code spends its time during actual use.
Now let's launch the PDGS GUI:
```julia
julia> using SnoopCompile
julia> import PyPlot # the GUI is dependent on PyPlot, must load it before the next line
julia> mref, ax = pgdsgui(tinf);
```
You should see something like this:

In this graph, each dot corresponds to a single method; for this method, we plot inference time (vertical axis) against the run time (horizontal axis).
The coloration of each dot encodes the number of specializations (the number of distinct `MethodInstance`s) for that method;
by default it even includes the number of times the method was inferred for specific constants ([constant propagation](https://en.wikipedia.org/wiki/Constant_folding)), although you can exclude those cases using the `consts=false` keyword.
Finally, the edge of each dot encodes the fraction of time spent on runtime dispatch (aka, type-instability), with black indicating
0% and bright red indicating 100%.
In this plot, we can see that no method runs for more than 0.01 seconds, whereas some methods have an aggregate inference time of up to 1s.
Overall, inference-time dominates this plot.
Moreover, for the most expensive cases, the number of specializations is in the hundreds or thousands.
To learn more about *what* is being specialized, just click on one of the dots; if you choose the upper-left dot (the one with highest inference time), you should see something like this in your REPL:
```julia
spelltype(::Type{T}) where T in Main at REPL[1]:6 (586 specializations)
```
This tells you the method corresponding to this dot. Moreover, `mref` (one of the outputs of `pgdsgui`) holds this method:
```julia
julia> mref[]
spelltype(::Type{T}) where T in Main at REPL[1]:6
```
What are the specializations, and how costly was each?
```julia
julia> collect_for(mref[], tinf)
586-element Vector{SnoopCompileCore.InferenceTimingNode}:
InferenceTimingNode: 0.003486/0.020872 on InferenceFrameInfo for spelltype(::Type{T}) where T with 7 direct children
InferenceTimingNode: 0.003281/0.003892 on InferenceFrameInfo for spelltype(::Type{AbstractArray}) with 2 direct children
InferenceTimingNode: 0.003349/0.004023 on InferenceFrameInfo for spelltype(::Type{AbstractChannel}) with 2 direct children
InferenceTimingNode: 0.000827/0.001154 on InferenceFrameInfo for spelltype(::Type{AbstractChar}) with 5 direct children
InferenceTimingNode: 0.003326/0.004070 on InferenceFrameInfo for spelltype(::Type{AbstractDict}) with 2 direct children
InferenceTimingNode: 0.000833/0.001159 on InferenceFrameInfo for spelltype(::Type{AbstractDisplay}) with 5 direct children
⋮
InferenceTimingNode: 0.000848/0.001160 on InferenceFrameInfo for spelltype(::Type{YAML.Span}) with 5 direct children
InferenceTimingNode: 0.000838/0.001148 on InferenceFrameInfo for spelltype(::Type{YAML.Token}) with 5 direct children
InferenceTimingNode: 0.000833/0.001150 on InferenceFrameInfo for spelltype(::Type{YAML.TokenStream}) with 5 direct children
InferenceTimingNode: 0.000809/0.001126 on InferenceFrameInfo for spelltype(::Type{YAML.YAMLDocIterator}) with 5 direct children
```
So we can see that one `MethodInstance` for each type in `Ts` was generated.
If you see a list of `MethodInstance`s, and the first is extremely costly in terms of inclusive time, but all the rest are not, then you might not need to worry much about over-specialization:
your inference time will be dominated by that one costly method (often, the first time the method was called), and the fact that lots of additional specializations were generated may not be anything to worry about.
However, in this case, the distribution of time is fairly flat, each contributing a small portion to the overall time.
In such cases, over-specialization may be a problem.
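One way to quantify how flat the distribution is (a sketch assuming the `mref` and `tinf` from the session above, using SnoopCompile's `inclusive` accessor):

```julia
nodes = collect_for(mref[], tinf)
times = sort(inclusive.(nodes); rev=true)  # inclusive inference time per specialization

times[1] / sum(times)   # fraction contributed by the single costliest node
count(>(0.001), times)  # how many specializations are individually non-trivial
```

If the first fraction is close to 1, one expensive call dominates and despecialization may not help much; if it is small, as here, many cheap specializations add up.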
### Reducing specialization with `@nospecialize`
How might we change this? To reduce the number of specializations of `spelltype`, we use `@nospecialize` in its definition:
```julia
function spelltype(@nospecialize(T::Type))
name = Base.unwrap_unionall(T).name.name
str = ""
for c in string(name)
str *= c
end
return str
end
```
!!! warning
`where` type-parameters force specialization: in `spelltype(@nospecialize(::Type{T})) where T`, the `@nospecialize` has no impact and you'll get full specialization on `T`.
Instead, use `@nospecialize(T::Type)` (without the `where` statement) as shown.
If we now rerun that demo, you should see a plot of the same kind as shown above, but with different costs for each dot.
The differences are best appreciated comparing them side-by-side ([`pgdsgui`](@ref) allows you to specify a particular axis into
which to plot):

The results with `@nospecialize` are shown on the right. You can see that:
- Now, the most expensive-to-infer method is <0.01s (formerly it was ~1s)
- No method has more than 2 specializations
Moreover, our runtimes (post-compilation) really aren't very different, both in the ballpark of a few milliseconds (you can check with `@btime` from BenchmarkTools to be sure).
In total, we've reduced compilation time approximately 50× without appreciably hurting runtime performance.
Reducing specialization, when appropriate, can often yield your biggest reductions in latency.
!!! tip
When you add `@nospecialize`, sometimes it's beneficial to compensate for the loss of inferrability by adding some type assertions.
This topic will be discussed in greater detail in the next section, but for the example above we can improve runtime performance by annotating the return type of `Base.unwrap_unionall(T)`: `name = (Base.unwrap_unionall(T)::DataType).name.name`.
Then, later lines in `spelltype` know that `name` is a `Symbol`.
With this change, the unspecialized variant outperforms the specialized variant in *both compile-time and run-time*.
The reason is that the specialized variant of `spell` needs to be called by runtime dispatch, whereas for the unspecialized variant there's only one `MethodInstance`, so its dispatch is handled at compile time.
### Blocking inference: `Base.@nospecializeinfer`
Perhaps surprisingly, `@nospecialize` doesn't prevent Julia's type-inference from inspecting a method. The reason is that it's sometimes useful if the *caller* knows what type will be returned, even if the *callee* doesn't exploit this information. In our `mappushes` example, this isn't an issue, because `Ts` is a `Vector{Any}` and this already defeats inference. But in other cases, the caller may be inferable but (to save inference time) you'd prefer to block inference from inspecting the method.
Beginning with Julia 1.10, you can prevent even inference from "looking at" `@nospecialize`d arguments with `Base.@nospecializeinfer`:
```
Base.@nospecializeinfer function spelltype(@nospecialize(T::Type))
name = (Base.unwrap_unionall(T)::DataType).name.name
str = ""
for c in string(name)
str *= c
end
return str
end
```
Note that the `::DataType` annotation described in the tip above is still effective and recommended. `@nospecializeinfer` directly affects only arguments that are marked with `@nospecialize`, and in this case the type-assertion prevents type uncertainty from propagating to the remainder of the function.
### Argument standardization
While not immediately relevant to the example above, a very important technique that falls within the domain of reducing specialization is *argument standardization*: instead of
```julia
function foo(x, y)
# some huge function, slow to compile, and you'd prefer not to compile it many times for different types of x and y
end
```
consider whether you can safely write this as
```julia
function foo(x::X, y::Y) # X and Y are concrete types
# some huge function, but the concrete typing ensures you only compile it once
end
foo(x, y) = foo(convert(X, x)::X, convert(Y, y)::Y) # this allows you to still call it with any argument types
```
The "standardizing method" `foo(x, y)` is short and therefore quick to compile, so it doesn't really matter if you compile many different instances.
!!! tip
In `convert(X, x)::X`, the final `::X` guards against a broken `convert` method that fails to return an object of type `X`.
Without it, `foo(x, y)` might call itself in an infinite loop, ultimately triggering a StackOverflowError.
StackOverflowErrors are a particularly nasty form of error, and the typeassert ensures that you get a simple `TypeError` instead.
In other contexts, such typeasserts would also have the effect of fixing inference problems even if the type of `x` is not well-inferred, but in this case dispatch to `foo(x::X, y::Y)` would have ensured the same outcome.
There are of course cases where you can't implement your code in this way: after all, part of the power of Julia is the ability of generic methods to "do the right thing" for a wide variety of types. But in cases where you're doing a standard task, e.g., writing some data to a file, there's really no good reason to recompile your `save` method for a filename encoded as a `String` and again for a `SubString{String}` and again for a `SubstitutionString` and again for an `AbstractString` and ...: after all, the core of the `save` method probably isn't sensitive to the precise encoding of the filename. In such cases, it should be safe to convert all filenames to `String`, thereby reducing the diversity of input arguments for expensive-to-compile methods.
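For the `save` scenario, a hedged sketch of this pattern (the method signatures are assumptions for illustration):

```julia
# Expensive core, compiled for exactly one filename type:
function save(filename::String, data)
    # ... write `data` to `filename` ...
end

# Thin standardizing method; `String(filename)::String` guards against a
# broken conversion, and dispatch prefers the `String` method above:
save(filename::AbstractString, data) = save(String(filename)::String, data)
```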
If you're using `pgdsgui`, the cost of inference and the number of specializations may guide you to click on specific dots; `collect_for(mref[], tinf)` then allows you to detect and diagnose cases where argument standardization might be helpful.
You can do the same analysis without `pgdsgui`. The opportunity for argument standardization is often facilitated by looking at, e.g.,
```julia
julia> tms = accumulate_by_source(flatten(tinf)); # collect all MethodInstances that belong to the same Method
julia> t, m = tms[end-1] # the ones towards the end take the most time, maybe they are over-specialized?
(0.4138147, save(filename::AbstractString, data) in SomePkg at /pathto/SomePkg/src/SomePkg.jl:23)
julia> methodinstances(m) # let's see what specializations we have
7-element Vector{Core.MethodInstance}:
MethodInstance for save(::String, ::Vector{SomePkg.SomeDataType})
MethodInstance for save(::SubString{String}, ::Vector{SomePkg.SomeDataType})
MethodInstance for save(::AbstractString, ::Vector{SomePkg.SomeDataType})
MethodInstance for save(::String, ::Vector{SomePkg.SomeDataType{SubString{String}}})
MethodInstance for save(::SubString{String}, ::Array)
MethodInstance for save(::String, ::Vector{var"#s92"} where var"#s92"<:SomePkg.SomeDataType)
MethodInstance for save(::String, ::Array)
```
In this case we have 7 `MethodInstance`s (some of which are clearly due to poor inferrability of the caller) when one might suffice.
| SnoopCompile | https://github.com/timholy/SnoopCompile.jl.git |

# Tutorial on `@snoop_inference`
Inference may occur when you *run* code. Inference is the first step of *type-specialized* compilation. `@snoop_inference` collects data on what inference is doing, giving you greater insight into what is being inferred and how long it takes.
Compilation is needed only for "fresh" code; running the demos below on code you've already used will yield misleading results. When analyzing inference, you're advised to always start from a fresh session. See also the [comparison between SnoopCompile and JET](@ref JET).
### Add SnoopCompileCore, SnoopCompile, and helper packages to your environment
Here, we'll add these packages to your [default environment](https://pkgdocs.julialang.org/v1/environments/). (With the exception of `AbstractTrees`, these "developer tool" packages should not be added to the Project file of any real packages unless you're extending the tool itself.)
```
using Pkg
Pkg.add(["SnoopCompileCore", "SnoopCompile", "AbstractTrees", "ProfileView"]);
```
## Setting up the demo
To see `@snoop_inference` in action, we'll use the following demo:
```jldoctest flatten-demo; filter=r"Main\.var\"Main\"\."
module FlattenDemo
struct MyType{T} x::T end
extract(y::MyType) = y.x
function domath(x)
y = x + x
return y*x + 2*x + 5
end
dostuff(y) = domath(extract(y))
function packintype(x)
y = MyType{Int}(x)
return dostuff(y)
end
end
# output
FlattenDemo
```
The main call, `packintype`, stores the input in a `struct`, and then calls functions that extract the field value and performs arithmetic on the result.
## [Collecting the data](@id sccshow)
To profile inference on this call, do the following:
```jldoctest flatten-demo; filter=r"([0-9]*\.?[0-9]+([eE][-+]?[0-9]+)?|WARNING: replacing module FlattenDemo\.\n)"
julia> using SnoopCompileCore
julia> tinf = @snoop_inference FlattenDemo.packintype(1);
julia> using SnoopCompile
julia> tinf
InferenceTimingNode: 0.002712/0.003278 on Core.Compiler.Timings.ROOT() with 1 direct children
```
!!! tip
Don't omit the semicolon on the `tinf = @snoop_inference ...` line, or you may get an enormous amount of output. The compact display on the final line is possible only because `SnoopCompile` defines nice `Base.show` methods for the data returned by `@snoop_inference`. These methods cannot be defined in `SnoopCompileCore` because it has a fundamental design constraint: loading `SnoopCompileCore` is not allowed to invalidate any code. Moving those `Base.show` methods to `SnoopCompileCore` would violate that guarantee.
This may not look like much, but there's a wealth of information hidden inside `tinf`.
## A quick check for potential invalidations
After running `@snoop_inference`, it's generally recommended to check the output of [`staleinstances`](@ref):
```julia
julia> staleinstances(tinf)
SnoopCompileCore.InferenceTiming[]
```
If you see this, all's well.
A non-empty list might indicate method invalidations, which can be checked (in a fresh session) using the tools described in [Tutorial on `@snoop_invalidations`](@ref).
If you do have a lot of invalidations, [`precompile_blockers`](@ref) may be an effective way to reveal those invalidations that affect your particular package and workload.
## [Viewing the results](@id flamegraph)
Let's start unpacking the output of `@snoop_inference` and see how to get more insight.
First, notice that the output is an `InferenceTimingNode`: it's the root element of a tree of such nodes, all connected by caller-callee relationships.
Indeed, this particular node is for `Core.Compiler.Timings.ROOT()`, a "dummy" node that is the root of all such trees.
You may have noticed that this `ROOT` node prints with two numbers.
It will be easier to understand their meaning if we first display the whole tree.
We can do that with the [AbstractTrees](https://github.com/JuliaCollections/AbstractTrees.jl) package:
```jldoctest flatten-demo; filter=[r"[0-9]*\.?[0-9]+([eE][-+]?[0-9]+)?", r"Main\.var\"Main\"\."]
julia> using AbstractTrees
julia> print_tree(tinf)
InferenceTimingNode: 0.002712/0.003278 on Core.Compiler.Timings.ROOT() with 1 direct children
└─ InferenceTimingNode: 0.000133/0.000566 on FlattenDemo.packintype(::Int64) with 2 direct children
├─ InferenceTimingNode: 0.000094/0.000094 on FlattenDemo.MyType{Int64}(::Int64) with 0 direct children
└─ InferenceTimingNode: 0.000089/0.000339 on FlattenDemo.dostuff(::FlattenDemo.MyType{Int64}) with 2 direct children
├─ InferenceTimingNode: 0.000064/0.000122 on FlattenDemo.extract(::FlattenDemo.MyType{Int64}) with 2 direct children
│ ├─ InferenceTimingNode: 0.000034/0.000034 on getproperty(::FlattenDemo.MyType{Int64}, ::Symbol) with 0 direct children
│ └─ InferenceTimingNode: 0.000024/0.000024 on getproperty(::FlattenDemo.MyType{Int64}, x::Symbol) with 0 direct children
└─ InferenceTimingNode: 0.000127/0.000127 on FlattenDemo.domath(::Int64) with 0 direct children
```
This tree structure reveals the caller-callee relationships, showing the specific types that were used for each `MethodInstance`.
Indeed, as the calls to `getproperty` reveal, it goes beyond the types and even shows the results of [constant propagation](https://en.wikipedia.org/wiki/Constant_folding);
the `getproperty(::MyType{Int64}, x::Symbol)` corresponds to `y.x` in the definition of `extract`.
!!! note
Generally we speak of [call graphs](https://en.wikipedia.org/wiki/Call_graph) rather than call trees.
But because inference results are cached (a.k.a., we only "visit" each node once), we obtain a tree as a depth-first-search of the full call graph.
You can extract the `MethodInstance` with
```jldoctest flatten-demo
julia> Core.MethodInstance(tinf)
MethodInstance for Core.Compiler.Timings.ROOT()
julia> Core.MethodInstance(tinf.children[1])
MethodInstance for FlattenDemo.packintype(::Int64)
```
Each node in this tree is accompanied by a pair of numbers.
The first number is the *exclusive* inference time (in seconds), meaning the time spent inferring the particular `MethodInstance`, not including the time spent inferring its callees.
The second number is the *inclusive* time, which is the exclusive time plus the time spent on the callees.
Therefore, the inclusive time is always at least as large as the exclusive time.
The `ROOT` node is a bit different: its exclusive time measures the time spent on all operations *except* inference.
In this case, we see that the entire call took approximately 3.3ms, of which about 2.7ms was spent on activities besides inference.
Almost all of that was code-generation, but it also includes the time needed to run the code.
Just about 0.6ms was needed to run type-inference on this entire series of calls.
As you will quickly discover, inference takes much more time on more complicated code.
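If you want these numbers programmatically, SnoopCompile documents `exclusive` and `inclusive` accessors for `InferenceTimingNode`s; a sketch (exact values vary from run to run):

```julia
using SnoopCompile
texcl = exclusive(tinf)   # ROOT's exclusive time: everything *except* inference
tincl = inclusive(tinf)   # inclusive time: the whole call
tincl - texcl             # ≈ total time spent on type-inference
```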
We can also display this tree as a flame graph, using the [ProfileView.jl](https://github.com/timholy/ProfileView.jl) package:
```jldoctest flatten-demo; filter=r":\d+"
julia> fg = flamegraph(tinf)
Node(FlameGraphs.NodeData(ROOT() at typeinfer.jl:75, 0x00, 0:10080857))
```
```julia
julia> using ProfileView
julia> ProfileView.view(fg)
```
You should see something like this:

Users are encouraged to read the ProfileView documentation to understand how to interpret this, but briefly:
- the horizontal axis is time (wide boxes take longer than narrow ones), the vertical axis is call depth
- hovering over a box displays the method that was inferred
- left-clicking on a box causes the full `MethodInstance` to be printed in your REPL session
- right-clicking on a box opens the corresponding method in your editor
- ctrl-click can be used to zoom in
- empty horizontal spaces correspond to activities other than type-inference
- any boxes colored red (there are none in this particular example, but you'll see some later) correspond to *naively non-precompilable* `MethodInstance`s, in which the method is owned by one module but the types are from another unrelated module. Such `MethodInstance`s are omitted from the precompile cache file unless they've been "marked" by `PrecompileTools.@compile_workload` or an explicit `precompile` directive.
- any boxes colored orange-yellow (there is one in this demo) correspond to methods inferred for specific constants (constant propagation)
You can explore this flamegraph and compare it to the output from `print_tree`.
Finally, [`flatten`](@ref), on its own or together with [`accumulate_by_source`](@ref), allows you to get a sense of the cost of individual `MethodInstance`s or `Method`s.
The tools here allow you to get an overview of where inference is spending its time.
This gives you insight into the major contributors to latency.
# [Using `@snoop_inference` results to improve inferrability](@id inferrability)
Throughout this page, we'll use the `OptimizeMe` demo, which ships with `SnoopCompile`.
!!! note
To understand what follows, it's essential to refer to [`OptimizeMe` source code](https://github.com/timholy/SnoopCompile.jl/blob/master/examples/OptimizeMe.jl) as you follow along.
```@repl fix-inference
using SnoopCompileCore, SnoopCompile # here we need the SnoopCompile path for the next line (normally you should wait until after data collection is complete)
include(joinpath(pkgdir(SnoopCompile), "examples", "OptimizeMe.jl"))
tinf = @snoop_inference OptimizeMe.main();
fg = flamegraph(tinf)
```
If you visualize `fg` with ProfileView, you may see something like this:

From the standpoint of precompilation, this has some obvious problems:
- even though we called a single method, `OptimizeMe.main()`, there are many distinct flames separated by blank spaces. This indicates that many calls are being made by runtime dispatch: each separate flame is a fresh entrance into inference.
- several of the flames are marked in red, indicating that they are not naively precompilable (see the [Tutorial on `@snoop_inference`](@ref)). While `@compile_workload` can handle these flames, an even more robust solution is to eliminate them altogether.
Our goal will be to improve the design of `OptimizeMe` to make it more readily precompilable.
## Analyzing inference triggers
We'll first extract the "triggers" of inference, which is just a repackaging of part of the information contained within `tinf`.
Specifically an [`InferenceTrigger`](@ref) captures callee/caller relationships that straddle a fresh entrance to type-inference, allowing you to identify which calls were made by runtime dispatch and what `MethodInstance` they called.
```@repl fix-inference
itrigs = inference_triggers(tinf)
```
The number of elements in this `Vector{InferenceTrigger}` tells you how many calls were (1) made by runtime dispatch and (2) the callee had not previously been inferred.
!!! tip
In the REPL, `SnoopCompile` displays `InferenceTrigger`s with yellow coloration for the callee, red for the caller method, and blue for the caller specialization. This makes it easier to quickly identify the most important information.
In some cases, this might indicate that you'll need to fix each case separately; fortunately, in many cases fixing one problem addresses many others.
### [Method triggers](@id methtrigs)
Most often, it's most convenient to organize them by the method triggering the need for inference:
```@repl fix-inference
mtrigs = accumulate_by_source(Method, itrigs)
```
The methods triggering the largest number of inference runs are shown at the bottom.
You can also select methods from a particular module:
```@repl fix-inference
modtrigs = filtermod(OptimizeMe, mtrigs)
```
Rather than filter by a single module, you can alternatively call `SnoopCompile.parcel(mtrigs)` to split them out by module.
In this case, most of the triggers came from `Base`, not `OptimizeMe`.
However, many of the failures in `Base` were nevertheless indirectly due to `OptimizeMe`: our methods in `OptimizeMe` call `Base` methods with arguments that trigger internal inference failures.
Fortunately, we'll see that using more careful design in `OptimizeMe` can avoid many of those problems.
!!! tip
If you have a longer list of inference triggers than you feel comfortable tackling, filtering by your package's module or using [`precompile_blockers`](@ref) can be a good way to start.
Fixing issues in the package itself can end up resolving many of the "indirect" triggers too.
Also be sure to note the ability to filter out likely "noise" from [test suites](@ref test-suites).
You can get an overview of each Method trigger with `summary`:
```@repl fix-inference
mtrig = modtrigs[1]
summary(mtrig)
```
You can also say `edit(mtrig)` and be taken directly to the method you're analyzing in your editor.
You can still "dig deep" into individual triggers:
```@repl fix-inference
itrig = mtrig.itrigs[1]
```
This is useful if you want to analyze with `Cthulhu.ascend`.
`Method`-based triggers, which may aggregate many different individual triggers, can be useful because tools like [Cthulhu.jl](https://github.com/JuliaDebug/Cthulhu.jl) show you the inference results for the entire `MethodInstance`, allowing you to fix many different inference problems at once.
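A sketch of that workflow (interactive, so the output is not shown; it assumes Cthulhu is installed in your environment):

```julia
using Cthulhu
ascend(itrig)   # walk up the call chain from the triggering call, inspecting inferred types
```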
### Trigger trees
While method triggers are probably the most useful way of organizing these inference triggers, for learning purposes here we'll use a more detailed scheme, which organizes inference triggers in a tree:
```@repl fix-inference
itree = trigger_tree(itrigs)
using AbstractTrees
print_tree(itree)
```
This gives you a big-picture overview of how the inference failures arose.
The parent-child relationships are based on the backtraces at the entrance to inference,
and the nodes are organized in the order in which inference occurred.
Inspection of these trees can be informative; for example, here we notice a lot of method specializations for `Container{T}` for different `T`.
We're going to march through these systematically.
### `suggest` and fixing `Core.Box`
You may have noticed above that `summary(mtrig)` generated a red `has Core.Box` message. Assuming that `itrig` is still the first (and it turns out, only) trigger from this method, let's look at this again, explicitly using [`suggest`](@ref), the tool that generated this hint:
```@repl fix-inference
suggest(itrig)
```
You can see that SnoopCompile recommends tackling this first; depending on how much additional code is affected, fixing a `Core.Box` allows inference to work better and may resolve other triggers.
This message also directs readers to a section of [this documentation](@ref Fixing-Core.Box) that links to a page of the Julia manual describing the underlying problem. The Julia documentation suggests a couple of fixes, of which the best (in this case) is to use the `let` statement to rebind the variable and end any "conflict" with the closure:
```
function abmult(r::Int, ys)
if r < 0
r = -r
end
let r = r # Julia #15276
return map(x -> howbig(r * x), ys)
end
end
```
### `suggest` and a fix involving manual `eltype` specification
Let's look at the other Method-trigger rooted in `OptimizeMe`:
```@repl fix-inference
mtrig = modtrigs[2]
summary(mtrig)
itrig = mtrig.itrigs[1]
```
If you use Cthulhu's `ascend(itrig)` you might see something like this:

The first thing to note here is that `cs` is inferred as an `AbstractVector`--fixing this to make it a concrete type should be our next goal. There's a second, more subtle hint: in the call menu at the bottom, the selected call is marked `< semi-concrete eval >`. This is a hint that a method is being called with a non-concrete type.
What might that non-concrete type be?
```@repl fix-inference
isconcretetype(OptimizeMe.Container)
```
The statement `Container.(list)` is thus creating an `AbstractVector` with a non-concrete element type.
You can see in greater detail what happens, inference-wise, in this snippet from `print_tree(itree)`:
```
├─ similar(::Base.Broadcast.Broadcasted{Base.Broadcast.DefaultArrayStyle{1}, Tuple{Base.OneTo{Int64}}, Type{Main.OptimizeMe.Container}, Tuple{Base.Broadcast.Extruded{Vector{Any}, Tuple{Bool}, Tuple{Int64}}}}, ::Type{Main.OptimizeMe.Container{Int64}})
├─ setindex!(::Vector{Main.OptimizeMe.Container{Int64}}, ::Main.OptimizeMe.Container{Int64}, ::Int64)
├─ Base.Broadcast.copyto_nonleaf!(::Vector{Main.OptimizeMe.Container{Int64}}, ::Base.Broadcast.Broadcasted{Base.Broadcast.DefaultArrayStyle{1}, Tuple{Base.OneTo{Int64}}, Type{Main.OptimizeMe.Container}, Tuple{Base.Broadcast.Extruded{Vector{Any}, Tuple{Bool}, Tuple{Int64}}}}, ::Base.OneTo{Int64}, ::Int64, ::Int64)
│ ├─ similar(::Base.Broadcast.Broadcasted{Base.Broadcast.DefaultArrayStyle{1}, Tuple{Base.OneTo{Int64}}, Type{Main.OptimizeMe.Container}, Tuple{Base.Broadcast.Extruded{Vector{Any}, Tuple{Bool}, Tuple{Int64}}}}, ::Type{Main.OptimizeMe.Container})
│ └─ Base.Broadcast.restart_copyto_nonleaf!(::Vector{Main.OptimizeMe.Container}, ::Vector{Main.OptimizeMe.Container{Int64}}, ::Base.Broadcast.Broadcasted
```
In rough terms, what this means is the following:
- since the first item in `list` is an `Int`, the output initially gets created as a `Vector{Container{Int}}`
- however, `copyto_nonleaf!` runs into trouble when it goes to copy the second item, which is a `Container{UInt8}`
- hence, `copyto_nonleaf!` re-allocates the output array to be a generic `Vector{Container}` and then calls `restart_copyto_nonleaf!`.
We can prevent all this hassle with one simple change: rewrite that line as
```
cs = Container{Any}.(list)
```
We use `Container{Any}` here because there is no more specific element type--other than an unreasonably-large `Union`--that can hold all the items in `list`.
If you make these edits manually, you'll see that we've gone from dozens of `itrigs` (38 on Julia 1.10, you may get a different number on other Julia versions) down to about a dozen (13 on Julia 1.10). Real progress!
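A quick way to sanity-check the fix at the REPL (a sketch, assuming `OptimizeMe.Container` and the heterogeneous `list` are in scope):

```julia
cs = Container{Any}.(list)     # broadcast now produces a concretely-typed array
typeof(cs)                     # Vector{Container{Any}}
isconcretetype(eltype(cs))     # true: no re-allocation or restart_copyto_nonleaf! needed
```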
### Replacing hard-to-infer calls with lower-level APIs
We note that many of the remaining triggers are somehow related to `show`, for example:
```
Inference triggered to call show(::IOContext{Base.TTY}, ::MIME{Symbol("text/plain")}, ::Vector{Main.OptimizeMe.Container{Any}}) from #55 (/cache/build/builder-amdci4-0/julialang/julia-release-1-dot-10/usr/share/julia/stdlib/v1.10/REPL/src/REPL.jl:273) with specialization (::REPL.var"#55#56"{REPL.REPLDisplay{REPL.LineEditREPL}, MIME{Symbol("text/plain")}, Base.RefValue{Any}})(::Any)
```
In this case we see that the calling method is `#55`. This is a `gensym`, or generated symbol, indicating that the method was generated during Julia's lowering pass, and might indicate a macro, a `do` block or other anonymous function, the generator for a `@generated` function, etc.
`edit(itrig)` (or equivalently, `edit(node)` where `node` is a child of `itree`) takes us to this method in `Base`, for which key lines are
```julia
function display(d::REPLDisplay, mime::MIME"text/plain", x)
x = Ref{Any}(x)
with_repl_linfo(d.repl) do io
⋮
show(io, mime, x[])
⋮
end
```
The generated method corresponds to the `do` block here.
The call to `show` comes from `show(io, mime, x[])`.
This implementation uses a clever trick, wrapping `x` in a `Ref{Any}(x)`, to prevent specialization of the method defined by the `do` block on the specific type of `x`.
This trick is designed to limit the number of `MethodInstance`s inferred for this `display` method.
A great option is to replace the call to `display` with an explicit
```
show(stdout, MIME("text/plain"), cs)
```
There's one extra detail: the type of `stdout` is not fixed (and therefore not known), because one can use a terminal, a file, `devnull`, etc., as `stdout`. If you want to prevent all runtime dispatch from this call, you'd need to supply an `io::IO` object of known type as the first argument. It could, for example, be passed in to `lotsa_containers` from `main`:
```
function lotsa_containers(io::IO)
⋮
println(io, "lotsa containers:")
show(io, MIME("text/plain"), cs)
end
```
However, if you want it to go to `stdout`--and to allow users to redirect `stdout` to a target of their choosing--then an `io` argument may have to be of unknown type when called from `main`.
### When you need to rely on `@compile_workload`
Most of the remaining triggers are difficult to fix because they occur in deliberately-`@nospecialize`d portions of Julia's internal code for displaying arrays. In such cases, adding a `PrecompileTools.@compile_workload` is your best option. Here we use an interesting trick:
```
@compile_workload begin
lotsa_containers(devnull) # use `devnull` to suppress output
abmult(rand(-5:5), rand(3))
end
precompile(lotsa_containers, (Base.TTY,))
```
During the workload, we pass `devnull` as the `io` object to `lotsa_containers`: this suppresses the output so you don't see anything during precompilation. However, `devnull` is not a `Base.TTY`, the standard type of `stdout`. Nevertheless, this is effective because we can see that many of the callees in the remaining inference-triggers do not depend on the `io` object.
To really ice the cake, we also add a manual `precompile` directive. (`precompile` doesn't execute the method, it just compiles it.) This doesn't "step through" runtime dispatch, but at least it precompiles the entry point.
Thus, at least `lotsa_containers` will be precompiled for the most likely `IO` type encountered in practice.
With these changes, we've fixed nearly all the latency problems in `OptimizeMe`, and made it much less vulnerable to invalidation as well. You can see the final code in the [`OptimizeMeFixed` source code](https://github.com/timholy/SnoopCompile.jl/blob/master/examples/OptimizeMeFixed.jl). Note that this would have to be turned into a real package for the `@compile_workload` to have any effect.
## [A note on analyzing test suites](@id test-suites)
If you're doing a package analysis, it's convenient to use the package's `runtests.jl` script as a way to cover much of the package's functionality.
SnoopCompile has a couple of enhancements designed to make it easier to ignore inference triggers that come from the test suite itself.
First, `suggest.(itrigs)` may show something like this:
```
./broadcast.jl:1315: inlineable (ignore this one)
./broadcast.jl:1315: inlineable (ignore this one)
./broadcast.jl:1315: inlineable (ignore this one)
./broadcast.jl:1315: inlineable (ignore this one)
```
This indicates a broadcasting operation in the `@testset` itself.
Second, while it's a little dangerous (because `suggest` cannot entirely be trusted), you can filter these out:
```julia
julia> itrigsel = [itrig for itrig in itrigs if !isignorable(suggest(itrig))];
julia> length(itrigs)
222
julia> length(itrigsel)
71
```
While there is some risk of discarding triggers that provide clues about the origin of other triggers (e.g., they would have shown up in the same branch of the `trigger_tree`), the shorter list may help direct your attention to the "real" issues.
# [Using `@snoop_inference` to emit manual precompile directives](@id precompilation)
In a few cases, it may be inconvenient or impossible to precompile using a [workload](https://julialang.github.io/PrecompileTools.jl/stable/#Tutorial:-forcing-precompilation-with-workloads). Some examples might be:
- an application that opens graphical windows
- an application that connects to a database
- an application that creates, deletes, or rewrites files on disk
In such cases, one alternative is to create a manual list of precompile directives using Julia's `precompile(f, argtypes)` function.
!!! warning
Manual precompile directives are much more likely to "go stale" as the package is developed---`precompile` does not throw an error if a method for the given `argtypes` cannot be found. They are also more likely to be dependent on the Julia version, operating system, or CPU architecture. Whenever possible, it's safer to use a workload.
`precompile` directives have to be emitted by the module that owns the method and/or types.
SnoopCompile comes with a tool, `parcel`, that splits out the "root-most" precompilable MethodInstances into their constituent modules.
This will typically correspond to the bottom row of boxes in the [flame graph](@ref flamegraph).
In cases where you have some that are not naively precompilable, they will include MethodInstances from higher up in the call tree.
Let's use `SnoopCompile.parcel` on our [`OptimizeMe`](@ref inferrability) demo:
```@repl parcel-inference
using SnoopCompileCore, SnoopCompile # here we need the SnoopCompile path for the next line (normally you should wait until after data collection is complete)
include(joinpath(pkgdir(SnoopCompile), "examples", "OptimizeMe.jl"))
tinf = @snoop_inference OptimizeMe.main();
ttot, pcs = SnoopCompile.parcel(tinf);
ttot
pcs
```
`ttot` shows the total amount of time spent on type-inference.
`parcel` discovered precompilable MethodInstances for four modules, `Core`, `Base.Multimedia`, `Base`, and `OptimizeMe` that might benefit from precompile directives.
These are listed in increasing order of inference time.
Let's look specifically at `OptimizeMe`, since that's under our control:
```@repl parcel-inference
pcmod = pcs[end]
tmod, tpcs = pcmod.second;
tmod
tpcs
```
This indicates the amount of time spent specifically on `OptimizeMe`, plus the list of calls that could be precompiled in that module.
We could look at the other modules (packages) similarly.
## SnoopCompile.write
You can generate files that contain ready-to-use `precompile` directives using `SnoopCompile.write`:
```@repl parcel-inference
SnoopCompile.write("/tmp/precompiles_OptimizeMe", pcs)
```
You'll now find a directory `/tmp/precompiles_OptimizeMe`, and inside you'll find files for modules that could have precompile directives added manually.
The contents of the last of these should be recognizable:
```julia
function _precompile_()
ccall(:jl_generating_output, Cint, ()) == 1 || return nothing
Base.precompile(Tuple{typeof(main)}) # time: 0.4204474
end
```
The first `ccall` line ensures we only pay the cost of running these `precompile` directives if we're building the package; this is relevant mostly if you're running Julia with `--compiled-modules=no`, which can be a convenient way to disable precompilation and examine packages in their "native state."
(It would also matter if you've set `__precompile__(false)` at the top of your module, but if so why are you reading this?)
This file is ready to be moved into the `OptimizeMe` repository and `include`d into your module definition.
You might also consider submitting some of the other files (or their `precompile` directives) to the packages you depend on.
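Concretely, wiring the generated file into the package might look like this (a hypothetical sketch; the file name depends on what `SnoopCompile.write` produced and where you copied it):

```julia
module OptimizeMe

# ... the package's existing definitions ...

include("precompiles_OptimizeMe.jl")  # the file generated by SnoopCompile.write
_precompile_()

end
```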
# Tutorial on `@snoop_llvm`
Julia uses the [LLVM compiler](https://llvm.org/) to generate machine code. Typically, the two main contributors to the overall compile time are inference and LLVM, and thus together `@snoop_inference` and `@snoop_llvm` collect fairly comprehensive data on the compiler.
`@snoop_llvm` has a somewhat different design than `@snoop_inference`: while `@snoop_inference` runs in the same session that you'll be using for analysis (and thus requires that you remember to do the data gathering in a fresh session), `@snoop_llvm` spawns a fresh process to collect the data. The downside is that you get less interactivity, as the data have to be written out in intermediate forms as a text file.
### Add SnoopCompileCore and SnoopCompile to your environment
Here, we'll add these packages to your [default environment](https://pkgdocs.julialang.org/v1/environments/).
```
using Pkg
Pkg.add(["SnoopCompileCore", "SnoopCompile"]);
```
## Collecting the data
Here's a simple demonstration of usage:
```@repl tutorial-llvm
using SnoopCompileCore
@snoop_llvm "func_names.csv" "llvm_timings.yaml" begin
using InteractiveUtils
@eval InteractiveUtils.peakflops()
end
using SnoopCompile
times, info = SnoopCompile.read_snoop_llvm("func_names.csv", "llvm_timings.yaml", tmin_secs = 0.025);
```
This will write two files, `"func_names.csv"` and `"llvm_timings.yaml"`, in your current working directory. Let's look at what was read from these files:
```@repl tutorial-llvm
times
info
```
using Documenter, SyntheticObjects
makedocs(sitename = "SyntheticObjects.jl",
    #format="html"
    doctest = false,
    modules = [SyntheticObjects],
    pages = Any[
        "SyntheticObjects.jl" => "index.md",
    ],
    #make = Documenter.make_julia_cmd()
)
deploydocs(repo = "github.com/hzarei4/SyntheticObjects.jl.git",
target = "gh-pages",
    )

using SyntheticObjects
using View5D   # provides the @ve viewer
sz = (128, 128, 128)
pollen = pollen3D(sz);
filaments = filaments3D(sz);
@ve pollen filaments       # inspect both volumes interactively in View5D
volume(pollen)             # `volume` requires a 3D plotting backend such as GLMakie
volume(filaments)
arr = zeros(sz)

module SyntheticObjects
include("utils.jl")
include("pollen3D.jl")
include("hollow_sphere.jl")
include("filaments3D.jl")
include("spokes_object.jl")
include("annotations.jl")
end
using Cairo
# using FourierTools:filter_gaussian
export annotation_3D!, annotation_3D, init_annonate, annotate_string!, matrix_read, resolution_offset
"""
annotation_3D!(sz=(128,128, 1); numbers_or_alphabets="alphabets", font_size=Float64.(minimum(sz[1:2]))-10.0, bkg=0.9)
Create a 3D array of alphabets or numbers with varying font sizes and background levels.
# Arguments
- `sz::Tuple`: A tuple of three integers representing the size of the volume. Default is (128, 128, 1).
- `numbers_or_alphabets::String`: A string representing whether to use alphabets or numbers. Default is "alphabets".
- `font_size::Float64`: A float representing the font size. Default is the minimum of the first two elements of `sz` minus 10.0.
- `bkg::Float64`: A float representing the background level. Default is 0.9.
# Returns
A 3D array of alphabets or numbers with varying font sizes and background levels.
"""
function annotation_3D!(arr; numbers_or_alphabets="alphabets", font_size=Float64.(minimum(size(arr)[1:2]))-10.0, bkg=0.9)
sz = size(arr);
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
cr, c = init_annonate(sz)
if numbers_or_alphabets == "alphabets"
if size(arr)[3] > length(alphabet)
            println("Number of slices is greater than the $(length(alphabet)) available characters. Please choose numbers instead of alphabets.")
return
end
for i1 in 1:sz[3]
modi = mod(i1-1, length(alphabet)) +1
annotate_string!(cr, c, arr, string(alphabet[modi]), font_size, i1, bkg) # arr[:, :, i1] =
end
elseif numbers_or_alphabets == "numbers"
for i1 in 1:sz[3]
annotate_string!(cr, c, arr, string(i1), font_size, i1, bkg)
end
end
return (arr./maximum(arr))
end
function annotation_3D(::Type{T}, sz=(128,128, 1); numbers_or_alphabets="alphabets", font_size=Float64.(minimum(sz[1:2]))-10.0, bkg=0.9) where {T}
arr = zeros(T, sz)
annotation_3D!(arr; numbers_or_alphabets=numbers_or_alphabets, font_size=font_size, bkg=bkg)
return (arr./maximum(arr))
end
function annotation_3D(sz=(128,128, 1); numbers_or_alphabets="alphabets", font_size=Float64.(minimum(sz[1:2]))-10.0, bkg=0.9)
return annotation_3D(Float32, sz; numbers_or_alphabets=numbers_or_alphabets, font_size=font_size, bkg=bkg)
end
"""
resolution_offset([DataType], sz = (512, 512, 1); divisions = (8, 8), numbers_or_alphabets="alphabets")
Create a 3D array tiled into `divisions` sections, each containing an annotation with a different font size and background level.
# Arguments
- `DataType`: The optional datatype of the output array. Default is Float32.
- `sz::Tuple`: A tuple of three integers representing the size of the volume. Default is (512, 512, 1).
The first two dimensions are split into `divisions` tiles; each slice along the third dimension contains a different alphabet letter or number.
- `divisions::Tuple`: The number of tiles along the first two dimensions. The font size varies along the first dimension and the background level along the second. Default is (8, 8).
- `numbers_or_alphabets::String`: A string representing whether to use alphabets or numbers. Default is "alphabets".
# Returns
A 3D array of alphabets or numbers with varying font sizes and background levels.
# Example
```jldoctest
julia> arr = resolution_offset((512, 512, 1); divisions=(8, 8), numbers_or_alphabets="alphabets");
```
"""
function resolution_offset(::Type{T}, sz = (512, 512, 1); divisions = (8, 8), numbers_or_alphabets="alphabets") where {T}
arr_final = zeros(T, sz)
size_per_division = sz[1:2].÷divisions
letter_space = (size_per_division..., sz[3])
for font in 1:divisions[1]
for bkg_lvl in 1:divisions[2]
my_bg = (bkg_lvl-1.0)/divisions[2]
my_fs = Float64(size_per_division[1]*font/divisions[1])
arr_final[(font-1)*size_per_division[1]+1:font*size_per_division[1],
(bkg_lvl-1)*size_per_division[2]+1:bkg_lvl*size_per_division[2], :] .= annotation_3D(letter_space, numbers_or_alphabets=numbers_or_alphabets, font_size=my_fs, bkg=my_bg)
end
end
return arr_final
end
function resolution_offset(sz = (512, 512, 1); divisions = (8, 8), numbers_or_alphabets="alphabets")
return resolution_offset(Float32, sz; divisions=divisions, numbers_or_alphabets=numbers_or_alphabets)
end
function init_annonate(sz)
c = CairoRGBSurface(sz[1:2]...);
cr = CairoContext(c);
save(cr);
return cr, c
end
function annotate_string!(cr, c, arr, string_to_write::AbstractString, font_size::Float64, i1, bkg)
# println("annotating $string_to_write at fs $font_size and bg $bkg")
save(cr);
set_source_rgb(cr, bkg, bkg, bkg); # background color
rectangle(cr, 0.0, 0.0, c.height, c.width); # background boundary
fill(cr);
restore(cr);
save(cr);
select_font_face(cr, "Sans", Cairo.FONT_SLANT_NORMAL, Cairo.FONT_WEIGHT_NORMAL);
set_source_rgb(cr, 1.0, 1.0, 1.0);
set_font_size(cr, font_size);
extents = text_extents(cr, string_to_write);
xy = ((c.height, c.width) .÷ 2 ) .- (extents[3]/2 + extents[1], (extents[4]/2 + extents[2]));
move_to(cr, xy...);
show_text(cr, string_to_write);
arr[:, :, i1] = matrix_read(c)#./maximum(matrix_read(c))
set_source_rgb(cr, 0.0, 0.0, 0.0); # black, clears the surface for the next slice
rectangle(cr, 0.0, 0.0, c.height, c.width); # background
fill(cr);
restore(cr);
save(cr);
#return cr, c, arr
end
"""
function matrix_read(surface)
paint the input surface into a matrix image of the same size to access
the pixels.
"""
function matrix_read(surface)
w = Int(surface.width)
h = Int(surface.height)
z = zeros(UInt32,w,h)
surf = CairoImageSurface(z, Cairo.FORMAT_RGB24)
cr = CairoContext(surf)
set_source_surface(cr,surface,0,0)
paint(cr)
r = surf.data
return r #(r .- minimum(r))./maximum((r .- minimum(r)))
end
using IndexFunArrays: gaussian, rr
using Random
export filaments3D, filaments3D!
"""
filaments3D!(obj; radius = 0.8, rand_offset=0.05, rel_theta=1.0, num_filaments=50, apply_seed=true, thickness=0.8)
Create a 3D representation of filaments.
# Arguments
- `obj`: A 3D array representing the volume into which the filaments will be added.
- `radius`: A tuple of real numbers (or a single real number) representing the relative radius of the volume in which the filaments will be created.
Default is 0.8. If a tuple is used, the filaments will be created in a corresponding elliptical volume.
Note that the radius is only enforced in the version `filaments3D` which creates the array rather than adding.
- `intensity`: The intensity (brightness) added along each filament. Default is 1.0.
- `rand_offset`: A tuple of real numbers representing the random offsets of the filaments in relation to the size. Default is 0.05.
- `rel_theta`: A real number representing the relative theta range of the filaments. Default is 1.0.
- `num_filaments`: An integer representing the number of filaments to be created. Default is 50.
- `apply_seed`: A boolean representing whether to apply a seed to the random number generator. Default is true.
- `thickness`: A real number representing the thickness of the filaments in pixels. Default is 0.8.
The result is added to the obj input array
# Example
```julia
# create a 100x100x100 volume with 10 filaments where only the central slice has a random arrangement of filaments
julia> obj = rand(100,100,100); # create an array of random numbers
julia> filaments3D!(obj; num_filaments=10, rel_theta=0, rand_offset=(0.1,0.1,0), intensity=2.0);
```
"""
function filaments3D!(obj; intensity = 1.0, radius = 0.8, rand_offset=0.05, rel_theta=1.0, num_filaments=50, apply_seed=true, thickness=0.8)
# save the state of the rng to reset it after the function is done
rng = copy(Random.default_rng());
if apply_seed
Random.seed!(42)
end
sz = size(obj)
mid = sz .÷ 2 .+1
# draw random lines equally distributed over the 3D sphere
for n in 1:num_filaments
phi = 2π*rand()
#theta should be scaled such that the distribution over the unit sphere is uniform
theta = acos(rel_theta*(1 - 2 * rand()));
pos = (sz.*radius./2) .* (sin(theta) * cos(phi), sin(theta) * sin(phi), cos(theta));
pos_offset = Tuple(rand_offset.*sz.*(rand(3).-0.5))
# println("Drawing line $n at theta = $theta and phi = $phi")
draw_line!(obj, pos.+pos_offset.+mid, mid.+pos_offset.-pos, thickness=thickness, intensity=intensity)
end
# reset the rng to the state before this function was called
copy!(Random.default_rng(), rng);
return obj
end
"""
filaments3D([DataType], sz= (128, 128, 128), rand_offset=0.05, rel_theta=1.0, num_filaments=50, apply_seed=true, thickness=0.8)
Create a 3D representation of filaments.
# Arguments
- `DataType`: The datatype of the output array. Default is Float32.
- `sz`: A tuple of integers representing the size of the volume into which the filaments will be created. Default is (128, 128, 128).
- `radius`: A tuple of real numbers (or a single real number) representing the relative radius of the volume in which the filaments will be created.
Default is 0.8. If a tuple is used, the filaments will be created in a corresponding elliptical volume.
Note that the radius is only enforced in the version `filaments3D` which creates the array rather than adding.
- `intensity`: The intensity (brightness) added along each filament. Default is 1.0.
- `rand_offset`: A tuple of real numbers representing the random offsets of the filaments in relation to the size. Default is 0.05.
- `rel_theta`: A real number representing the relative theta range of the filaments. Default is 1.0.
- `num_filaments`: An integer representing the number of filaments to be created. Default is 50.
- `apply_seed`: A boolean representing whether to apply a seed to the random number generator. Default is true.
- `thickness`: A real number representing the thickness of the filaments in pixels. Default is 0.8.
The result is added to the obj input array
# Example
```jldoctest
# create a 100x100x100 volume with 50 filaments (the default) confined near the central slice
julia> obj = filaments3D((100,100,100); rel_theta=0, rand_offset=(0.2, 0.2, 0));
# create a 100x100x100 volume with 50 filaments (the default) arranged in 3D
julia> obj = filaments3D((100,100,100));
```
"""
function filaments3D(::Type{T}, sz= (128, 128, 128); intensity=one(T), radius = 0.8, rand_offset=0.2, num_filaments=50, rel_theta=1.0, apply_seed=true, thickness=0.8) where {T}
obj = zeros(T, sz)
filaments3D!(obj; intensity=intensity, radius=radius, rand_offset=rand_offset,
num_filaments=num_filaments, apply_seed=apply_seed, rel_theta=rel_theta, thickness)
obj .*= (rr(eltype(obj), size(obj), scale=1 ./(sz .* radius./2)) .< 1.0)
return obj
end
function filaments3D(sz= (128, 128, 128); intensity=one(Float32), radius = 0.8, rand_offset=0.2, num_filaments=50, rel_theta=1.0, apply_seed=true, thickness=0.8)
return filaments3D(Float32, sz; intensity=intensity, radius=radius, rand_offset=rand_offset,
num_filaments=num_filaments, apply_seed=apply_seed, thickness, rel_theta=rel_theta)
end
# function filaments_rand!(arr; num_filaments=10, seeding=true)
# if seeding
# Random.seed!(42)
# end
# for i in 1:num_filaments
# #println("Drawing line $i")
# draw_line!(arr, Tuple(rand(10.0:size(arr, 1)-10, (1, 3))), Tuple(rand(10.0:size(arr, 1)-10, (1, 3))), thickness= rand(0.0:2.0))
# end
# end
export hollow_sphere!, hollow_sphere, object_3D
"""
hollow_sphere!(obj, radius=0.8, center=size(obj).÷2 .+1; thickness=0.8)
Create a 3D representation of a hollow sphere.
# Arguments
- `obj`: A 3D array representing the object into which the sphere will be added.
- `radius`: A float representing the radius of the sphere.
- `thickness`: A float representing the thickness of the sphere in pixels. Default is 0.8.
- `center`: A tuple representing the center of the sphere. Default is the center of the object.
# Returns
- `sph::Array{Float64}`: A 3D array representing the hollow sphere.
# Example
```jldoctest
# create a centered sphere of 80% of the object size with a thickness of 0.8 pixels
julia> obj = zeros(Float64, (128, 128, 128));
julia> hollow_sphere!(obj, 0.8)
```
"""
function hollow_sphere!(obj, radius=0.8, center=size(obj).÷2 .+1; thickness=0.8)
draw_sphere!(obj, size(obj).*radius./2, center, thickness=thickness)
return obj
end
"""
hollow_sphere([DataType], sz= (128, 128, 128), radius=0.8, center=sz.÷2 .+1; thickness=0.8)
Create a 3D representation of a hollow sphere.
# Arguments
- `DataType`: The optional datatype of the output array. Default is Float32.
- `sz`: A vector of integers representing the size of the sphere.
- `radius`: A float representing the radius of the sphere.
- `thickness`: A float representing the thickness of the sphere in pixels. Default is 0.8.
- `center`: A tuple representing the center of the sphere. Default is the center of the object.
# Returns
- `sph::Array{Float64}`: A 3D array representing the hollow sphere.
# Example
```jldoctest
# create a centered sphere of 80% of the object size with a thickness of 0.8 pixels
julia> hollow_sphere()
```
"""
function hollow_sphere(::Type{T}, sz= (128, 128, 128), radius=0.8, center=sz.÷2 .+1; thickness=0.8) where {T}
obj = zeros(T, sz)
hollow_sphere!(obj, radius, center, thickness=thickness)
return obj
end
function hollow_sphere(sz= (128, 128, 128), radius=0.8, center=sz.÷2 .+1; thickness=0.8)
return hollow_sphere(Float32, sz, radius, center, thickness=thickness)
end
"""
object_3D([DataType], sz, radius=0.8, center=sz.÷2 .+1; thickness=0.8)
Create a 3D object with a hollow sphere (cell membrane), another hollow sphere (nucleus), a hollow small sphere (vesicle), a filled small sphere (), and a line.
# Arguments
- `DataType`: The optional datatype of the output array. Default is Float32.
- `sz`: A vector of integers representing the size of the object. Default is (128, 128, 128).
- `radius`: A float (or tuple) representing the relative radius of the cell sphere.
- `center`: A tuple representing the center of the object.
- `thickness`: A float representing the thickness of the membranes in pixels. Default is 0.8.
"""
function object_3D(::Type{T}, sz= (128, 128, 128), radius=0.8, center=sz.÷2 .+1; thickness=0.8) where {T}
obj = hollow_sphere(T, sz, radius, center, thickness=thickness)
nucleus_offset = ntuple((d)-> (d==1) ? - radius*(sz[1] ./ 4) : sz[d]/20 ,length(sz))
hollow_sphere!(obj, radius./2, center.+nucleus_offset, thickness=thickness)
vesicle1_offset = ntuple((d)-> (d==1) ? sz[1] .÷ 3 : 0 ,length(sz))
hollow_sphere!(obj, radius./10, center.+vesicle1_offset, thickness=thickness)
vesicle2_offset = ntuple((d)-> (d==2) ? sz[1] .÷ 3 : 0 ,length(sz))
draw_sphere!(obj, sz.*radius./20, center.+vesicle2_offset, thickness=thickness, filled=true)
line_dir = ntuple((d)-> (d==1) ? sz[1] .÷ 2.2 : 0 ,length(sz))
draw_line!(obj, center .- line_dir, center .+ line_dir, thickness=thickness)
return obj
end
function object_3D(sz= (128, 128, 128), radius=0.8, center=sz.÷2 .+1; thickness=0.8)
return object_3D(Float32, sz, radius, center, thickness=thickness)
end
using IndexFunArrays: rr, xx, yy, zz, phiphi
export pollen3D, pollen3D!
"""
pollen3D([DataType], sv = (128, 128, 128); dphi=0.0, dtheta=0.0, thickness=0.8)
Create a 3D representation of a pollen grain.
# Arguments
- `sv::Tuple`: A tuple of three integers representing the size of the volume in which the pollen grain will be created. Default is (128, 128, 128).
- `dphi::Float64`: A float representing the phi angle offset in radians. Default is 0.0.
- `dtheta::Float64`: A float representing the theta angle offset in radians. Default is 0.0.
# Returns
- `ret::Array{Float64}`: A 3D array representing the pollen grain.
# Example
```jldoctest
pollen3D((256, 256, 256); dphi=0.0, dtheta=0.0)
```
"""
function pollen3D(sv = (128, 128, 128); dphi=0.0, dtheta=0.0, thickness=0.8)
return pollen3D(Float32, sv; dphi=dphi, dtheta=dtheta, thickness=thickness)
end
function pollen3D(::Type{T}, sv = (128, 128, 128); dphi=0.0, dtheta=0.0, thickness=0.8) where {T}
obj = zeros(T, sv)
pollen3D!(obj; dphi=dphi, dtheta=dtheta, thickness=thickness)
return obj
end
function pollen3D!(arr; dphi=0.0, dtheta=0.0, thickness = 0.8, intensity=1.0, filled=false)
sv = size(arr)
x = xx(eltype(arr), sv)
y = yy(eltype(arr),sv)
z = zz(eltype(arr),sv)
phi = phiphi(eltype(arr), sv)
m = (z .!= 0)
theta = zeros(eltype(arr), size(x)) # Allocate
theta[m] = asin.(z[m] ./ sqrt.(x[m].^2 + y[m].^2 + z[m].^2)) .+ dtheta
a = abs.(cos.(theta .* 20))
b = abs.(sin.((phi .+ dphi) .* sqrt.(max.(zero(eltype(arr)), 20^2 .* cos.(theta))) .- theta .+ pi/2))
# calculate the relative distance to the surface of the pollen grain
dc = ((0.4*sv[1] .+ (a .* b).^5 * sv[1]/20.0) .+ cos.(phi .+ dphi) .* sv[1]/20) .- rr(sv)
sigma2 = 2*thickness^2
arr .+= gauss_value.(dc, sigma2, intensity, thickness, filled)
return arr
end
using IndexFunArrays: xx, yy, zz, phiphi, rr
export spokes_object
"""
spokes_object(imgSize = (256, 256), numSpikes = 21, continuous = true, makeRound = true)
Generates a 2D or 3D representation of a spokes object.
# Arguments
- `imgSize::Tuple{Int, Int}`: A tuple of integers representing the size of the image. Default is (256, 256).
- `numSpikes::Int`: An integer representing the number of spikes. Default is 21.
- `continuous::Bool`: A boolean indicating whether the spokes are continuous. Default is true.
- `makeRound::Bool`: A boolean indicating whether the object is round. Default is true.
# Returns
- `obj2::Array{Float64}`: A 2D or 3D array representing the spokes object.
# Example
```jldoctest
spokes_object((512, 512), 30, false, false)
```
"""
function spokes_object(imgSize = (256, 256), numSpikes = 21, continuous = true, makeRound = true)
if makeRound
obj = zeros(imgSize)
rr_coords = rr(imgSize) #, "freq")
obj[rr_coords .< 100.0] .= 1.0
else
xx_coords = xx(imgSize) #, "freq")
yy_coords = yy(imgSize) #, "freq")
obj = (abs.(xx_coords) .< 0.4) .* (abs.(yy_coords) .< 0.4)
end
zchange = 1
if length(imgSize) > 2 && imgSize[3] > 1
zchange = zz(imgSize) #, "freq")
end
myphiphi = length(imgSize) > 2 ? repeat(phiphi(imgSize[1:2]), outer=(1, 1, imgSize[3])) : phiphi(imgSize)
if !continuous
obj[mod.(myphiphi .+ pi .+ 2 * pi * zchange, 2 * pi / numSpikes) .< 2 * pi / numSpikes / 2] .= 0
obj2 = obj
else
obj2 = obj .* (1 .+ cos.(numSpikes * (myphiphi .+ 2 * pi * zchange)))
end
return Array{Float64}(obj2)
end
export draw_line!, draw_sphere!, gauss_value
function sqr_dist_to_line(p::CartesianIndex, start, n)
# Implementations for is_on_line
d = Tuple(p) .- start
return sum(abs2.(d .- sum(d.*n).*n)), sum(d.*n), sqrt(sum(abs2.(sum(d.*n).*n)));
end
"""
draw_line!(arr, start, stop; thickness=0.5, intensity=one(eltype(arr)))
Draw a line in a 3D array by adding a Gaussian profile to the array.
#Arguments:
- `arr::Array`: A 3D array to which the line will be added.
- `start`: The starting point of the line as a tuple.
- `stop`: The stopping point of the line as a tuple.
- `thickness`: The thickness of the line. Default is 0.5.
- `intensity::Float64`: The intensity of the line. Default is 1.0.
"""
function draw_line!(arr, start, stop; thickness=0.5, intensity=one(eltype(arr)))
direction = stop .- start
line_length = sqrt(sum(abs2.(direction)))
n = direction ./ line_length
# Implementations for draw_line
# println("Drawing line from $start to $stop")
sigma2 = 2*thickness^2
for p in CartesianIndices(arr)
d2, t =sqr_dist_to_line(p, start, n)
if (d2 < 4*thickness^2)
if (t > 0 && t < line_length)
arr[p] += intensity*exp(-d2/sigma2); # Gaussian profile
elseif (t < 0 && t > -2*thickness)
arr[p] += intensity*exp(-d2/sigma2)*exp(-(t*t)/sigma2); # Gaussian profile
elseif (t > line_length && t < line_length + 2*thickness)
arr[p] += intensity*exp(-d2/sigma2)*exp(-(t-line_length)^2/sigma2); # Gaussian profile
end
end
end
end
"""
draw_sphere!(arr, radius, center; thickness=0.8, intensity=one(eltype(arr)))
Draw a sphere in a 3D array by adding a Gaussian profile to the array.
#Arguments:
- `arr::Array`: A 3D array to which the sphere will be added.
- `radius`: The radius of the sphere as a number or tuple.
- `center`: The center of the sphere as a tuple. DEFAULT: size(arr).÷2 .+1 which is the (bottom-right) pixel from the center of the array
- `thickness`: The thickness of the sphere. Default is 0.8.
- `intensity::Float64`: The intensity of the sphere. Default is 1.0.
# Example
```jldoctest
julia> arr = zeros(Float32, (128, 128, 128));
julia> draw_sphere!(arr, 10);
julia> draw_sphere!(arr, (20,30,40), (50,30,80));
```
"""
function draw_sphere!(arr, radius, center=size(arr).÷2 .+1; thickness=0.8, intensity=one(eltype(arr)), filled=false)
# Implementations for draw_line
# println("Drawing line from $start to $stop")
# creats on IndexFunArray to be used to measure distance. Note that this does not allocate or access an array
d = rr(eltype(arr), size(arr), offset=center, scale=1 ./radius)
rel_thickness = thickness ./ maximum(radius)
sigma2 = 2*rel_thickness^2
for p in CartesianIndices(arr)
myd = d[p] .- one(real(eltype(arr)))
if (filled)
if (myd < 0)
arr[p] += intensity;
elseif (myd < 2*rel_thickness)
arr[p] += intensity*exp(-myd*myd/sigma2); # Gaussian profile
end
else
if (abs.(myd) < 2*rel_thickness)
arr[p] += intensity*exp(-myd*myd/sigma2); # Gaussian profile
end
end
end
end
function gauss_value(myd, sigma2, intensity, thickness, filled)::typeof(myd)
if (filled)
if (myd < 0)
return intensity
elseif (myd < 2*thickness)
return intensity*exp(-myd*myd/sigma2) # Gaussian profile
else
return zero(myd) # outside the shell
end
else
if (abs(myd) < 2*thickness)
return intensity*exp(-myd*myd/sigma2) # Gaussian profile
else
return zero(myd) # outside the shell
end
end
end
using SyntheticObjects
using Test
@testset "Pollen" begin
sz = (128, 128, 128)
pollen = pollen3D(sz)
@test size(pollen) == sz
@test typeof(pollen) == Array{Float32, 3}
@test maximum(pollen) ≈ 1.0
@test minimum(pollen) == 0.0
end
@testset "Filaments" begin
sz = (128, 128, 128)
filaments = filaments3D(sz)
@test size(filaments) == sz
@test typeof(filaments) == Array{Float32, 3}
end
@testset "Annotations" begin
sz = (128, 128, 5)
annotation = annotation_3D(sz)
@test size(annotation) == sz
@test typeof(annotation) == Array{Float32, 3}
@test maximum(annotation) ≈ 1.0
end | SyntheticObjects | https://github.com/hzarei4/SyntheticObjects.jl.git |
|
[
"MIT"
] | 1.0.0 | 056d787e614907b230f380d0c9600642ecf8bd9d | code | 2561 | using IndexFunArrays:gaussian
using FourierTools:conv_psf, conv
export test_spheres
"""
test_spheres(asize = [200, 200, 100], ScaleX = 25, d = 100, Zfoc = asize[3] ÷ 2)
Create a 3D representation of test spheres.
# Arguments
- `asize::Vector{Int}`: A vector of integers representing the size of the volume in which the spheres will be created. Default is [200, 200, 100].
- `ScaleX::Int`: An integer representing the scale factor. Default is 25.
- `d::Int`: An integer representing the diameter of the spheres. Default is 100.
- `Zfoc::Int`: An integer representing the z-coordinate of the focal slice. Default is asize[3] ÷ 2.
# Returns
- `myPos::Array{Float64}`: A 3D array representing the test spheres.
# Example
```julia
test_spheres([300, 300, 150], 50, 200, 75)
```
"""
function test_spheres(asize = [200, 200, 100], ScaleX = 25, d = 100, Zfoc = asize[3] ÷ 2)
# Bead Creation
strength = 255
sphere_radius = 90 / ScaleX
mySigma = floor(Int, sphere_radius / 2.5)
sbox = ceil(Int, sphere_radius * (1 + sqrt(2)))
b = zeros(sbox, sbox, sbox)
b[sbox ÷ 2, sbox ÷ 2, sbox ÷ 2] = strength # Central pixel
myblob = gaussian(b, sigma=mySigma) # Gaussian blob
# Position Calculation
mymiddleX = asize[1] ÷ 2
mymiddleY = asize[2] ÷ 2
dist = d / ScaleX
# Ensure sufficient image dimensions (with warnings)
if asize[3] < 2 * Zfoc + 2 * dist
println("Warning: z-size insufficient. Extending")
asize[3] = floor(2 * Zfoc + 2 * dist)
end
if asize[1] < 3 * dist
println("Warning: x-size insufficient. Extending")
asize[1] = floor(3 * dist)
end
myPos = zeros(Tuple(asize))
# In-focal Positions
myPos[mymiddleX - floor(Int, dist / 2), mymiddleY, Zfoc] = 1
myPos[mymiddleX + floor(Int, dist / 2), mymiddleY, Zfoc] = 1
# Out-of-Focus
myPos[mymiddleX - 2 * floor(Int, dist / 2), mymiddleY, Zfoc + floor(Int, dist)] = 1
myPos[mymiddleX + 2 * floor(Int, dist / 2), mymiddleY, Zfoc + floor(Int, dist)] = 1
# Random Positions
nb = 300
maxX, maxY, maxZ = asize[1] - 1, asize[2] - 1, asize[3] - 1
posX = rand(1:maxX, nb)
posY = rand(1:maxY, nb)
posZ = rand(1:maxZ, nb)
for i in 1:nb
myPos[posX[i], posY[i], posZ[i]] = 1
end
obj = zeros(Tuple(asize))
# Create object
if false # disabled: the convolved array is not at the same size (TODO)
a = conv_psf(myPos, myblob); # put the blobs at the right positions
obj = strength .* (a .> 0.5); # threshold the blobs (the original referenced an undefined `brightness`)
else
obj=myPos; # just white pixels
end
return obj
end
# SyntheticObjects
[](https://codecov.io/gh/hzarei4/SyntheticObjects.jl) | SyntheticObjects | https://github.com/hzarei4/SyntheticObjects.jl.git |
|
[
"MIT"
] | 1.0.0 | 056d787e614907b230f380d0c9600642ecf8bd9d | docs | 369 | # SyntheticObjects.jl Documentation
```@docs
pollen3D
```
```@docs
object_3D
```
```@docs
filaments3D
```
```@docs
draw_sphere!
```
```@docs
annotation_3D!
```
```@docs
draw_line!
```
```@docs
filaments3D!
```
```@docs
matrix_read
```
```@docs
spokes_object
```
```@docs
hollow_sphere!
```
```@docs
resolution_offset
```
```@docs
hollow_sphere
```
#= License
Copyright 2019, 2020 (c) Yossi Bokor Katharine Turner
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
=#
__precompile__()
module DiscretePersistentHomologyTransform
#### Requirements ####
using CSV
using Hungarian
using DataFrames
using LinearAlgebra
using SparseArrays
using Eirene
#### Exports ####
export PHT,
Recenter,
Direction_Filtration,
Evaluate_Rank,
Total_Rank_Exact,
Total_Rank_Grid,
Average_Rank_Grid,
Create_Heat_Map,
Set_Mean_Zero,
Weighted_Inner_Product,
Weighted_Inner_Product_Matrix,
Principal_Component_Scores,
Average_Discretised_Rank,
unittest
#### First some functions to recenter the curves ####
function Find_Center(points)
n_p = size(points,1)
c_x = Float64(0)
c_y = Float64(0)
for i in 1:n_p
c_x += points[i,1]
c_y += points[i,2]
end
return Float64(c_x/n_p), Float64(c_y/n_p)
end
function Recenter(points)
points = convert(Array{Float64}, points)
center = Find_Center(points)
for i in 1:size(points)[1]
points[i,1] = points[i,1] - center[1]
points[i,2] = points[i,2] - center[2]
end
return points
end
function Evaluate_Rank(barcode, point)
n = size(barcode)[1]
count = 0
if point[2] < point[1]
return 0 # points below the diagonal have rank zero
else
for i in 1:n
if barcode[i,1] <= point[1]
if barcode[i,2] >= point[2]
count +=1
end
end
end
return count
end
end
function Total_Rank_Exact(barcode)
@assert size(barcode,2) == 2
rks = []
n = size(barcode,1)
b = copy(barcode)
for i in 1:n
for j in 1:n
if barcode[i,1] < barcode[j,1]
if barcode[i,2] < barcode[j,2]
b = vcat(b, [barcode[j,1] barcode[i,2]]) # add the crossing point of the two bars
end
end
end
end
for i in 1:size(b,1)
append!(rks, Evaluate_Rank(barcode, b[i,:]))
end
return b, rks
end
function Total_Rank_Grid(barcode, x_g, y_g) #the grid should be an array, with 0s in all entries below the second diagonal.
@assert size(x_g) == size(y_g) #I should maybe change this as you don't REALLY need to use the same size grid.....
n_g = size(x_g,1)
rks = zeros(n_g,n_g)
n_p = size(barcode,1)
for i in 1:n_p
point = barcode[i,:]
x_i = findfirst(>=(point[1]), x_g)
y_i = findfirst(<=(point[2]), y_g)
for j in x_i:n_g-y_i+1
for k in j:n_g-y_i+1
rks[n_g-k+1,j] += 1
end
end
end
return rks
end
function Average_Rank_Grid(list_of_barcodes, x_g, y_g)
rks = zeros(length(x_g),length(y_g))
n_b = length(list_of_barcodes)
for i in 1:n_b
rk_i = Total_Rank_Grid(list_of_barcodes[i], x_g,y_g)
rks = rks .+ rk_i
end
rks = rks/n_b
return rks
end
function Average_Rank_Point(list_of_barcodes, x,y)
rk = 0
n_b = length(list_of_barcodes)
if y >= x
for i in 1:n_b
rk += Evaluate_Rank(list_of_barcodes[i], [x,y])
end
return rk/n_b
else
return 0
end
end
function Create_Heat_Map(barcode, x_g, y_g)
f(x,y) = begin
if x > y
return 0
else
return Evaluate_Rank(barcode,[x,y])
end
end
#Z = map(f, X, Y)
p1 = contour(x_g, y_g, f, fill=true) # `contour` requires a plotting package such as Plots.jl to be loaded
return p1
end
# Let us do PCA for the rank functions using Kate and Vanessa's paper.
# So, I first need to calculate the pointwise norm
function Set_Mean_Zero(discretised_ranks)
n_r = length(discretised_ranks)
grid_size = size(discretised_ranks[1])
for i in 1:n_r
@assert size(discretised_ranks[i]) == grid_size
end
mu = zeros(grid_size)
for i in 1:n_r
mu = mu .+ discretised_ranks[i]
end
mu = mu./n_r
normalised = []
for i in 1:n_r
append!(normalised, [discretised_ranks[i] .- mu])
end
return normalised
end
function Weighted_Inner_Product(disc_rank_1, disc_rank_2, weights)
wip = sum((disc_rank_1.*disc_rank_2).*weights)
return wip
end
function Weighted_Inner_Product_Matrix(discretised_ranks, weights)
n_r = length(discretised_ranks)
D = Array{Float64}(undef, n_r, n_r)
for i in 1:n_r
for j in i:n_r
wip = Weighted_Inner_Product(discretised_ranks[i], discretised_ranks[j], weights)
D[i,j] = wip
D[j,i] = wip
end
end
return D
end
function Principal_Component_Scores(inner_prod_matrix, dimension)
F = LinearAlgebra.eigen(inner_prod_matrix, permute = false, scale=false) # this sorts the eigenvectors in ascending order
n_r = size(inner_prod_matrix,1)
lambda = Array{Float64}(undef, 1,dimension)
w = Array{Float64}(undef, size(F.vectors)[1],dimension)
n_v = length(F.values)
for i in 1:dimension
lambda[i] = F.values[n_v-i+1]
w[:,i] = F.vectors[:,n_v-i+1]
end
s = Array{Float64}(undef, n_r,dimension)
for i in 1:size(inner_prod_matrix,1)
for j in 1:dimension
den = sqrt(sum([w[k,j]*sum(w[l,j]*inner_prod_matrix[k,l] for l in 1:n_r) for k in 1:n_r]))
numerator = sum(w[:,j].*inner_prod_matrix[:,i])
s[i,j] = numerator/den
end
end
return s
end
function Average_Discretised_Rank(list_of_disc_ranks)
average = Array{Float64}(undef, size(list_of_disc_ranks[1]))
n_r = length(list_of_disc_ranks)
for i in n_r
average = average .+ list_of_disc_ranks[i]
end
return average/n_r
end
function Direction_Filtration(ordered_points, direction; out = "barcode")
number_of_points = length(ordered_points[:,1]) #number of points
heights = zeros(number_of_points) #empty array to be changed to heights for filtration
fv = zeros(2*number_of_points) #blank fv Eirene
for i in 1:number_of_points
heights[i]= ordered_points[i,1]*direction[1] + ordered_points[i,2]*direction[2] #calculate heights in specified direction
end
for i in 1:number_of_points
fv[i]= heights[i] # for a point the filtration step is the height
end
for i in 1:(number_of_points-1)
fv[(i+number_of_points)]=maximum([heights[i], heights[i+1]]) # for an edge between two adjacent points it enters when the 2nd of the two points does
end
fv[2*number_of_points] = maximum([heights[1] , heights[number_of_points]]) #last one is a special snowflake
dv = [] # template dv for Eirene
for i in 1:number_of_points
append!(dv,0) # every point is 0 dimensional
end
for i in (1+number_of_points):(2*number_of_points)
append!(dv,1) # edges are 1 dimensional
end
D = zeros((2*number_of_points, 2*number_of_points))
for i in 1:number_of_points
D[i,(i+number_of_points)]=1 # create boundary matrix and put in entries
end
for i in 2:(number_of_points)
D[i, (i+number_of_points-1)]=1 # put in entries for boundary matrix
end
D[1, (2*number_of_points)]=1
ev = [number_of_points, number_of_points] # template ev for Eirene
S = sparse(D) # converting as required for Eirene
rv = S.rowval # converting as required for Eirene
cp = S.colptr # converting as required for Eirene
C = Eirene.eirene(rv=rv,cp=cp,ev=ev,fv=fv) # put it all into Eirene
if out == "barcode"
return barcode(C, dim=0)
elseif out == "one_cycle"
return barcode(C, dim=0), maximum(heights)
else
return C
end
end
#### Wrapper for the PHT function ####
function PHT(curve_points, directions; one_cycle = "n") ##accepts an ARRAY of points
if typeof(directions) == Int64
println("auto generating directions")
dirs = Array{Float64}(undef, directions,2)
for n in 1:directions
dirs[n,1] = cos(n*pi/(directions/2))
dirs[n,2] = sin(n*pi/(directions/2))
end
println("Directions are:")
println(dirs)
else
println("using directions provided")
dirs = copy(directions)
end
pht = []
if one_cycle == "y"
cycle_1 = []
end
for i in 1:size(dirs,1)
if one_cycle == "y"
pd,c_1 = Direction_Filtration(curve_points, dirs[i,:], out ="one_cycle")
append!(cycle_1, c_1)
pht = vcat(pht, [pd])
else
pd = Direction_Filtration(curve_points, dirs[i,:])
pht = vcat(pht, [pd])
end
end
if one_cycle == "y"
return pht, cycle_1
else
return pht
end
end
#### Wrapper for PCA ####
function PCA(ranks, dimension, weights)
normalised = Set_Mean_Zero(ranks)
D = Weighted_Inner_Product_Matrix(normalised, weights)
return Principal_Component_Scores(D, dimension)
end
#### Unittests ####
function test_1()
return PHT([0,0,0],0)
end
function test_2()
pht = PHT([1 1; 5 5], 1)
if pht == [0.9999999999999998 4.999999999999999]
return []
else
println("Error: test_2, pht = ")
return pht
end
end
function unittest()
x = Array{Any}(undef, 2)
x[1] = test_1()
x[2] = test_2()
for p = 1:length(x)
if !isempty(x[p])
println(p)
return x
end
end
return []
end
end # module
# ============================================================
# DiscretePersistentHomologyTransform.jl — test script
# Repo: https://github.com/yossibokor/DiscretePersistentHomologyTransform.jl.git (MIT, v1.0.0)
# ============================================================
using DiscretePersistentHomologyTransform
using Eirene
using Test
@test unittest() == []
# DiscretePersistentHomologyTransform.jl
Persistent Homology Transform is produced and maintained by \
Yossi Bokor and Katharine Turner \
<[email protected]> and <[email protected]>
This package provides an implementation of the Persistent Homology Transform, as defined in [Persistent Homology Transform for Modeling Shapes and Surfaces](https://arxiv.org/abs/1310.1030). It also computes Rank Functions of persistence diagrams, and implements [Principal Component Analysis of rank functions](https://www.sciencedirect.com/science/article/pii/S0167278916000476).
## Installation
Currently, the best way to install DiscretePersistentHomologyTransform is to run the following in `Julia`:
```julia
using Pkg
Pkg.add("DiscretePersistentHomologyTransform")
```
## Functionality
- DiscretePersistentHomologyTransform computes the Persistent Homology Transform of simple, closed curves in $\mathbb{R}^2$.
- Rank functions of persistence diagrams.
- Principal Component Analysis of Rank Functions.
### Persistent Homology Transform
Given an $m \times 2$ matrix of ordered points sampled from a simple, closed curve $C \subset \mathbb{R}^2$ (in either a clockwise or anti-clockwise direction), calculate the Persistent Homology Transform for a set of directions. You can either specify the directions explicitly as an $n \times 2$ array (`directions::Array{Float64}(n,2)`), or specify an integer (`directions::Int64`), in which case the directions used will be generated by
```julia
angles = [n*pi/(directions/2) for n in 1:directions]
dirs = [[cos(x), sin(x)] for x in angles]
```
To perform the Persistent Homology Transform for the directions, run
```julia
PHT(points, directions)
```
This outputs an array of [Eirene](https://github.com/Eetion/Eirene.jl) Persistence Diagrams, one for each direction.
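For instance (a sketch — `points` stands for your $m \times 2$ boundary matrix, and the four axis-aligned directions are an illustrative choice), explicit directions are passed as the rows of an $n \times 2$ array:

```julia
# Four axis-aligned unit directions, one per row (illustrative choice)
dirs = [1.0 0.0; 0.0 1.0; -1.0 0.0; 0.0 -1.0]

diagrams = PHT(points, dirs) # one Eirene persistence diagram per direction
```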
### Rank Functions
Given an [Eirene](https://github.com/Eetion/Eirene.jl) Persistence Diagram $D$, DiscretePersistentHomologyTransform can calculate the Rank Function $r_D$ either exactly or, given a grid of points, as a discretised version. Recall that $D$ is an $n \times 2$ array of points, and hence the function `Total_Rank_Exact` accepts an $n \times 2$ array of points and returns a list of the critical points of the Rank function together with the value at each of these points. Running
```julia
rk = Total_Rank_Exact(barcode)
```
we obtain the critical points via
```julia
rk[1]
```
which returns an array of points in $\mathbb{R}^2$, and the values through
```julia
rk[2]
```
which returns an array of integers.
To obtain a discrete approximation of a Rank Function over a persistence diagram $D$, use `Total_Rank_Grid`, which accepts as input an [Eirene](https://github.com/Eetion/Eirene.jl) Persistence Diagram $D$, an increasing `StepRange` for $x$-coordinates `x_g`, and a decreasing `StepRange` for $y$-coordinates `y_g`. The `StepRange`s are obtained by running
```julia
x_g = lb:delta:ub
y_g = ub:-delta:lb
```
with `lb` being the lower bound, so that $(lb, lb)$ is the lower left corner of the grid, `ub` the upper bound, so that $(ub, ub)$ is the top right corner, and `delta` the step size.
Finally, the rank is obtained by
```julia
rk = Total_Rank_Grid(D, x_g, y_g)
```
which returns an array of values.
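As a concrete sketch (the bounds and step size below are illustrative), a grid covering $[0, 10]^2$ with step $0.5$ is set up and evaluated as:

```julia
lb, ub, delta = 0.0, 10.0, 0.5 # illustrative lower bound, upper bound, step
x_g = lb:delta:ub              # increasing StepRange for the x-coordinates
y_g = ub:-delta:lb             # decreasing StepRange for the y-coordinates
rk = Total_Rank_Grid(D, x_g, y_g) # D is an Eirene persistence diagram
```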
### PCA of Rank Functions
Given a set of rank functions, we can perform principal component analysis on them. The easiest way to do this is to use the wrapper function `PCA`, whose inputs are an array of rank functions evaluated at the same points (best to use `Total_Rank_Grid` to obtain them), a dimension $d$, and an array of weights `weights`, where the weights correspond to the grid points used in `Total_Rank_Grid`.
To perform Principal Component Analysis and obtain the scores run
```julia
scores = PCA(ranks, d, weights)
```
which returns the scores in $d$-dimensions.
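For example (a sketch — the uniform weights are an assumption, not a prescription; `ranks` is an array of rank functions evaluated on a common grid `x_g` by `y_g`):

```julia
# Hypothetical: equal weight for every grid point of the discretised ranks
weights = fill(1.0, length(x_g) * length(y_g))
scores = PCA(ranks, 2, weights) # principal component scores in 2 dimensions
```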
## Examples
### Discrete Persistent Homology Transform
We will go through an example using a random [shape](https://github.com/yossibokor/DiscretePersistentHomologyTransform.jl/Example/Example1.png) and 20 directions. You can download the CSV file from [here](https://github.com/yossibokor/DiscretePersistentHomologyTransform.jl/Example/Example1.csv)
To begin, load the CSV file into an array in Julia, then compute the PHT with 20 directions:
```julia
using CSV, DiscretePersistentHomologyTransform
Boundary = CSV.read("<path/to/file>")
Persistence_Diagrams = PHT(Boundary, 20)
```
You can then access the persistence diagram corresponding to the $i^{th}$ direction as
```julia
Persistence_Diagrams[i]
```
<!---### Rank Functions -->
# ============================================================
# PredictMD.jl — docs build script
# Repo: https://github.com/bcbi/PredictMD.jl.git (MIT, v0.34.21)
# ============================================================
import Documenter
import DocumenterMarkdown
import DocumenterLaTeX
import Literate
import PredictMD
import Random
Random.seed!(999)
ENV["PREDICTMD_IS_MAKE_DOCS"] = "true"
generate_examples_output_directory = PredictMD.package_directory(
"docs",
"src",
"examples",
)
rm(
generate_examples_output_directory;
force = true,
recursive = true,
)
PredictMD.generate_examples(
generate_examples_output_directory;
execute_notebooks = false,
markdown = true,
notebooks = true,
scripts = true,
include_test_statements = false,
)
Documenter.makedocs(
modules = [
PredictMD,
PredictMD.Cleaning,
PredictMD.Compilation,
PredictMD.GPU,
PredictMD.Server,
],
pages = [
"Home" => "index.md",
"Requirements for plotting (optional)" => "requirements_for_plotting.md",
"Docker image" => "docker_image.md",
"Examples" => [
"Generating these example files on your computer" => "generate_examples/generate_examples.md",
"Boston housing (single label regression)" => [
"1\\. Preprocess data" => "examples/cpu_examples/boston_housing/src/01_preprocess_data.md",
"2\\. Linear regressions" => "examples/cpu_examples/boston_housing/src/02_linear_regression.md",
"3\\. Random forest regression" => "examples/cpu_examples/boston_housing/src/03_random_forest_regression.md",
"4\\. Knet neural network regression" => "examples/cpu_examples/boston_housing/src/04_knet_mlp_regression.md",
"5\\. Compare models" => "examples/cpu_examples/boston_housing/src/05_compare_models.md",
"6\\. Directly access model output" => "examples/cpu_examples/boston_housing/src/06_get_model_output.md",
],
"Breast cancer biopsy (single label binary classification)" => [
"1\\. Preprocess data" => "examples/cpu_examples/breast_cancer_biopsy/src/01_preprocess_data.md",
"2\\. Apply SMOTE algorithm" => "examples/cpu_examples/breast_cancer_biopsy/src/02_smote.md",
"3\\. Logistic classifier" => "examples/cpu_examples/breast_cancer_biopsy/src/03_logistic_classifier.md",
"4\\. Random forest classifier" => "examples/cpu_examples/breast_cancer_biopsy/src/04_random_forest_classifier.md",
"5\\. C-SVC support vector machine classifier" => "examples/cpu_examples/breast_cancer_biopsy/src/05_c_svc_svm_classifier.md",
"6\\. nu-SVC support vector machine classifier" => "examples/cpu_examples/breast_cancer_biopsy/src/06_nu_svc_svm_classifier.md",
"7\\. Knet neural network classifier" => "examples/cpu_examples/breast_cancer_biopsy/src/07_knet_mlp_classifier.md",
"8\\. Compare models" => "examples/cpu_examples/breast_cancer_biopsy/src/08_compare_models.md",
"9\\. Directly access model output" => "examples/cpu_examples/breast_cancer_biopsy/src/09_get_model_output.md",
],
],
"Library" => [
"Internals" => "library/internals.md",
],
],
root = PredictMD.package_directory(
"docs",
),
sitename = "PredictMD documentation",
)
ENV["PREDICTMD_IS_MAKE_DOCS"] = "false"
ENV["PREDICTMD_IS_DEPLOY_DOCS"] = "true"
COMPILED_MODULES_CURRENT_VALUE = strip(
lowercase(strip(get(ENV, "COMPILED_MODULES", "")))
)
COMPILED_MODULES_VALUE_FOR_DOCS = strip(
lowercase(strip(get(ENV, "COMPILED_MODULES_VALUE_FOR_DOCS", "")))
)
JULIA_VERSION_FOR_DOCS = strip(
lowercase(strip(get(ENV, "JULIA_VERSION_FOR_DOCS", "")))
)
@debug("COMPILED_MODULES_CURRENT_VALUE: ", COMPILED_MODULES_CURRENT_VALUE,)
@debug("COMPILED_MODULES_VALUE_FOR_DOCS: ", COMPILED_MODULES_VALUE_FOR_DOCS,)
if COMPILED_MODULES_CURRENT_VALUE == COMPILED_MODULES_VALUE_FOR_DOCS
Documenter.deploydocs(
target = "build",
repo = "github.com/bcbi/PredictMD.jl.git",
branch = "gh-pages",
devbranch = "master",
devurl = "development",
)
end
ENV["PREDICTMD_IS_DEPLOY_DOCS"] = "false"
# ── src/PredictMD.jl ──
__precompile__(true)
"""
"""
module PredictMD # begin module PredictMD
import Distributed
import Random
include("base/api.jl")
include("base/backends.jl")
include("base/concrete-types.jl")
include("base/fallback.jl")
include("registry_url_list.jl")
include("package_directory.jl")
include("version.jl")
include("package_list.jl")
include("import_all.jl")
include("welcome.jl")
include("init.jl")
include("toplevel/always-loaded/code_loading/requires.jl")
include("toplevel/always-loaded/code_loading/require_versions.jl")
include("toplevel/always-loaded/classimbalance/smote.jl")
include("toplevel/always-loaded/datasets/csv.jl")
include("toplevel/always-loaded/datasets/datadeps.jl")
include("toplevel/always-loaded/datasets/gzip.jl")
include("toplevel/always-loaded/datasets/juliadb.jl")
include("toplevel/always-loaded/datasets/mldatasets.jl")
include("toplevel/always-loaded/datasets/mnist.jl")
include("toplevel/always-loaded/datasets/queryverse.jl")
include("toplevel/always-loaded/datasets/rdatasets.jl")
include("toplevel/always-loaded/docs_and_examples/cache.jl")
include("toplevel/always-loaded/docs_and_examples/generate_examples.jl")
include("toplevel/always-loaded/ide/atom.jl")
include("toplevel/always-loaded/io/saveload.jl")
include("toplevel/always-loaded/linearmodel/glm.jl")
include("toplevel/always-loaded/linearmodel/ordinary_least_squares_regression.jl")
include("toplevel/always-loaded/metrics/auprc.jl")
include("toplevel/always-loaded/metrics/aurocc.jl")
include("toplevel/always-loaded/metrics/averageprecisionscore.jl")
include("toplevel/always-loaded/metrics/brier_score.jl")
include("toplevel/always-loaded/metrics/coefficientofdetermination.jl")
include("toplevel/always-loaded/metrics/cohenkappa.jl")
include("toplevel/always-loaded/metrics/getbinarythresholds.jl")
include("toplevel/always-loaded/metrics/mean_square_error.jl")
include("toplevel/always-loaded/metrics/prcurve.jl")
include("toplevel/always-loaded/metrics/risk_score_cutoff_values.jl")
include("toplevel/always-loaded/metrics/roccurve.jl")
include("toplevel/always-loaded/metrics/rocnumsmetrics.jl")
include("toplevel/always-loaded/metrics/singlelabelbinaryclassificationmetrics.jl")
include("toplevel/always-loaded/metrics/singlelabelregressionmetrics.jl")
include("toplevel/always-loaded/modelselection/crossvalidation.jl")
include("toplevel/always-loaded/modelselection/split_data.jl")
include("toplevel/always-loaded/neuralnetwork/flux.jl")
include("toplevel/always-loaded/neuralnetwork/knet.jl")
include("toplevel/always-loaded/neuralnetwork/merlin.jl")
include("toplevel/always-loaded/online/onlinestats.jl")
include("toplevel/always-loaded/ontologies/ccs.jl")
include("toplevel/always-loaded/pipeline/simplelinearpipeline.jl")
include("toplevel/always-loaded/plotting/catch_plotting_errors.jl")
include("toplevel/always-loaded/plotting/defaultapplication.jl")
include("toplevel/always-loaded/plotting/pgfplots.jl")
include("toplevel/always-loaded/plotting/pgfplotsx.jl")
include("toplevel/always-loaded/plotting/plotlearningcurve.jl")
include("toplevel/always-loaded/plotting/plotprcurve.jl")
include("toplevel/always-loaded/plotting/plotroccurve.jl")
include("toplevel/always-loaded/plotting/plotsinglelabelregressiontruevspredicted.jl")
include("toplevel/always-loaded/plotting/plotsinglelabelbinaryclassifierhistograms.jl")
include("toplevel/always-loaded/plotting/probability_calibration_plots.jl")
include("toplevel/always-loaded/plotting/unicodeplots.jl")
include("toplevel/always-loaded/postprocessing/packagemultilabelpred.jl")
include("toplevel/always-loaded/postprocessing/packagesinglelabelpred.jl")
include("toplevel/always-loaded/postprocessing/packagesinglelabelproba.jl")
include("toplevel/always-loaded/postprocessing/predictoutput.jl")
include("toplevel/always-loaded/postprocessing/predictprobaoutput.jl")
include("toplevel/always-loaded/preprocessing/dataframecontrasts.jl")
include("toplevel/always-loaded/preprocessing/dataframetodecisiontree.jl")
include("toplevel/always-loaded/preprocessing/dataframetoglm.jl")
include("toplevel/always-loaded/preprocessing/dataframetoknet.jl")
include("toplevel/always-loaded/preprocessing/dataframetosvm.jl")
include("toplevel/always-loaded/svm/libsvm.jl")
include("toplevel/always-loaded/time_series/timeseries.jl")
include("toplevel/always-loaded/tree/decisiontree.jl")
include("toplevel/always-loaded/utils/constant_columns.jl")
include("toplevel/always-loaded/utils/dataframe_column_types.jl")
include("toplevel/always-loaded/utils/filename_extension.jl")
include("toplevel/always-loaded/utils/find.jl")
include("toplevel/always-loaded/utils/fix_type.jl")
include("toplevel/always-loaded/utils/formulas.jl")
include("toplevel/always-loaded/utils/inverse-dictionary.jl")
include("toplevel/always-loaded/utils/is_debug.jl")
include("toplevel/always-loaded/utils/labelstringintmaps.jl")
include("toplevel/always-loaded/utils/linearly_dependent_columns.jl")
include("toplevel/always-loaded/utils/make_directory.jl")
include("toplevel/always-loaded/utils/maketemp.jl")
include("toplevel/always-loaded/utils/missings.jl")
include("toplevel/always-loaded/utils/nothings.jl")
include("toplevel/always-loaded/utils/openbrowserwindow.jl")
include("toplevel/always-loaded/utils/openplotsduringtestsenv.jl")
include("toplevel/always-loaded/utils/predictionsassoctodataframe.jl")
include("toplevel/always-loaded/utils/probabilitiestopredictions.jl")
include("toplevel/always-loaded/utils/runtestsenv.jl")
include("toplevel/always-loaded/utils/shufflerows.jl")
include("toplevel/always-loaded/utils/simplemovingaverage.jl")
include("toplevel/always-loaded/utils/tikzpictures.jl")
include("toplevel/always-loaded/utils/transform_columns.jl")
include("toplevel/always-loaded/utils/trapz.jl")
include("toplevel/always-loaded/utils/traviscienv.jl")
include("toplevel/always-loaded/utils/tuplify.jl")
include("submodules/Cleaning/Cleaning.jl")
include("submodules/Compilation/Compilation.jl")
include("submodules/GPU/GPU.jl")
include("submodules/Server/Server.jl")
end # end module PredictMD
# ── src/import_all.jl ──
macro import_all()
return _import_all_macro()
end
function _import_all_macro()
statements_1 = _import_all_statements()
pkg_list::Vector{String} = convert(Vector{String}, sort(unique(strip.(package_list()))))::Vector{String}
n = length(pkg_list)
statements_2 = Vector{String}(undef, n)
for i = 1:n
pkgname = pkg_list[i]
statements_2[i] = "import $(pkgname)"
end
statements = vcat(statements_1, statements_2)
ex = Base.Meta.parse(join(statements, "; "))
return ex
end
function _import_all_statements()::Vector{String}
statements = Vector{String}(undef, 0)
_import_all_statements!(statements)
return statements
end
function _import_all_statements!(statements::Vector{String})::Vector{String}
pkg_list::Vector{String} = convert(Vector{String}, sort(unique(strip.(package_list()))))::Vector{String}
for p in pkg_list
a = string("try ",
" import $(string(p)); ",
"catch e1 ",
" @debug(\"ignoring exception: \", e1,); ",
" try ",
" import Pkg; ",
" Pkg.add(\"$(string(p))\"); ",
" catch e2 ",
" @debug(\"ignoring exception: \", e2,); ",
" end ",
"end ")
b = string("try ",
" import $(string(p)); ",
" @debug(\"imported $(string(p))\"); ",
"catch e3 ",
" @error(\"ignoring exception: \", exception=e3,); ",
"end ")
push!(statements, a)
push!(statements, b)
end
return statements
end
import_all() = import_all(Main)
function import_all(m::Module)::Nothing
statements = _import_all_statements()
for stmt in statements
ex = Base.Meta.parse(stmt)
Base.eval(m, ex)
end
return nothing
end
# ── src/init.jl ──
import Requires
function __init__()::Nothing
Requires.@require MLJ="add582a8-e3ab-11e8-2d5e-e98b27df1bc7" begin
Requires.@require MLJBase="a7f614a8-145f-11e9-1d2a-a57a1082229d" begin
Requires.@require MLJModels="d491faf4-2d78-11e9-2867-c94bc002c0b7" begin
import .MLJ
import .MLJBase
import .MLJModels
include("toplevel/conditionally-loaded/PredictMD_MLJ/PredictMD_MLJ.jl")
import .PredictMD_MLJ
end
end
end
return nothing
end
# ── src/package_directory.jl ──
import PredictMDAPI
is_filesystem_root(path::AbstractString)::Bool =
abspath(strip(path)) == dirname(abspath(strip(path)))
function is_package_directory(path::AbstractString)::Bool
path::String = abspath(strip(path))
if isfile(joinpath(path, "Project.toml"))
return true
else
return false
end
end
function find_package_directory(path::AbstractString)::String
path::String = abspath(strip(path))
if is_package_directory(path)
return path
elseif is_filesystem_root(path)
error(string("Could not find the Project.toml file"))
else
result = find_package_directory(dirname(path))
return result
end
end
"""
package_directory()::String
Return the PredictMD package directory.
"""
function package_directory()::String
result::String = find_package_directory(abspath(strip(@__FILE__)))
return result
end
function api_package_directory()::String
result::String = find_package_directory(abspath(pathof(PredictMDAPI)))
return result
end
# function functionlocation(m::Method)::String
# result::String = abspath(first(functionloc(m)))
# return result
# end
# function functionlocation(f::Function)::String
# result::String = abspath(first(functionloc(f)))
# return result
# end
# function functionlocation(f::Function, types::Tuple)::String
# result::String = abspath(first(functionloc(f, types)))
# return result
# end
# function functionlocation(m::Module)::String
# result::String = abspath(functionlocation(getfield(m, :eval)))
# return result
# end
"""
package_directory(parts...)::String
Equivalent to `abspath(joinpath(abspath(package_directory()), parts...))`.
"""
function package_directory(parts...)::String
result::String = abspath(joinpath(abspath(package_directory()), parts...))
return result
end
function api_package_directory(p...)::String
result::String = abspath(joinpath(abspath(api_package_directory()), p...))
return result
end
"""
package_directory(m::Method)::String
If method `m`
is part of a Julia package, returns the package root directory.
If method `m`
is not part of a Julia package, throws an error.
"""
# function package_directory(m::Method)::String
# m_module_directory::String = abspath(functionlocation(m))
# m_package_directory::String = abspath(
# find_package_directory(m_module_directory)
# )
# return m_package_directory
# end
# """
# package_directory(m::Method, parts...)::String
#
# Equivalent to
# `result = abspath(joinpath(abspath(package_directory(m)), parts...))`.
# """
# function package_directory(m::Method, parts...)::String
# result::String = abspath(joinpath(abspath(package_directory(m)), parts...))
# return result
# end
"""
package_directory(f::Function)::String
If function `f`
is part of a Julia package, returns the package root directory.
If function `f`
is not part of a Julia package, throws an error.
"""
# function package_directory(f::Function)::String
# m_module_directory::String = abspath(functionlocation(f))
# m_package_directory::String = abspath(
# find_package_directory(m_module_directory)
# )
# return m_package_directory
# end
# """
# package_directory(f::Function, parts...)::String
#
# Equivalent to
# `result = abspath(joinpath(abspath(package_directory(f)), parts...))`.
# """
# function package_directory(f::Function, parts...)::String
# result::String = abspath(joinpath(abspath(package_directory(f)), parts...))
# return result
# end
"""
package_directory(f::Function, types::Tuple)::String
If function `f` with type signature `types`
is part of a Julia package, returns the package root directory.
If function `f` with type signature `types`
is not part of a Julia package, throws an error.
"""
# function package_directory(f::Function, types::Tuple)::String
# m_module_directory::String = abspath(functionlocation(f, types))
# m_package_directory::String = abspath(
# find_package_directory(m_module_directory)
# )
# return m_package_directory
# end
# """
# package_directory(f::Function, types::Tuple, parts...)::String
#
# Equivalent to
# `result = abspath(joinpath(abspath(package_directory(f, types)), parts...))`.
# """
# function package_directory(f::Function, types::Tuple, parts...)::String
# result::String = abspath(joinpath(abspath(package_directory(f, types)), parts...))
# return result
# end
"""
package_directory(m::Module)::String
If module `m`
is part of a Julia package, returns the package root directory.
If module `m`
is not part of a Julia package, throws an error.
"""
# function package_directory(m::Module)::String
# m_module_directory::String = abspath(functionlocation(m))
# m_package_directory::String = abspath(
# find_package_directory(m_module_directory)
# )
# return m_package_directory
# end
"""
package_directory(m::Module, parts...)::String
Equivalent to
`result = abspath(joinpath(abspath(package_directory(m)), parts...))`.
"""
# function package_directory(m::Module, parts...)::String
# result::String = abspath(joinpath(abspath(package_directory(m)), parts...))
# return result
# end
# ── src/package_list.jl ──
import Pkg
function package_list()::Vector{String}
project_toml::String = package_directory("Project.toml",)
pkg_list::Vector{String} = sort(
unique(
strip.(
collect(
keys(
Pkg.TOML.parsefile(project_toml)["deps"]
)
)
)
)
)
return pkg_list
end
function print_list_of_package_imports(io::IO = stdout)::Nothing
pkg_list::Vector{String} = sort(unique(package_list()))
println(io, "##### Beginning of file")
println(io)
println(io, "# Automatically generated by: ")
println(io, "# PredictMD.print_list_of_package_imports()")
println(io)
for i = 1:length(pkg_list)
println(io, "import $(pkg_list[i])",)
end
println(io)
println(io, "##### End of file")
println(io)
return nothing
end
# ── src/registry_url_list.jl ──
function registry_url_list()::Vector{String}
registry_url_list_raw::Vector{String} = String[
"https://github.com/JuliaRegistries/General.git",
"https://github.com/bcbi/BCBIRegistry.git",
]
registry_url_list::Vector{String} = strip.(registry_url_list_raw)
return registry_url_list
end
# ── src/version.jl ──
import Pkg # stdlib
struct TomlFile
filename::String
function TomlFile(path::String)::TomlFile
path::String = abspath(strip(path))
if isfile(path)
result::TomlFile = new(path)
return result
else
error("File does not exist")
end
end
end
function parse_toml_file(x::TomlFile)::Dict{String, Any}
toml_file_filename::String = x.filename
toml_file_text::String = read(toml_file_filename, String)
toml_file_parsed::Dict{String, Any} = Pkg.TOML.parse(toml_file_text)
return toml_file_parsed
end
function version_string(x::TomlFile)::String
toml_file_parsed::Dict{String, Any} = parse_toml_file(x)
version_string::String = toml_file_parsed["version"]
return version_string
end
function version_string()::String
predictmd_toml_file::TomlFile = TomlFile(
package_directory("Project.toml")
)
resultversion_string::String = version_string(predictmd_toml_file)
return resultversion_string
end
function api_version_string()::String
predictmdapi_toml_file::TomlFile = TomlFile(
api_package_directory("Project.toml")
)
resultversion_string::String = version_string(predictmdapi_toml_file)
return resultversion_string
end
# function version_string(m::Method)::String
# m_package_directory::String = package_directory(m)
# m_toml_file::TomlFile = TomlFile(
# joinpath(m_package_directory, "Project.toml")
# )
# resultversion_string::String = version_string(m_toml_file)
# return resultversion_string
# end
# function version_string(f::Function)::String
# m_package_directory::String = package_directory(f)
# m_toml_file::TomlFile = TomlFile(
# joinpath(m_package_directory, "Project.toml")
# )
# resultversion_string::String = version_string(m_toml_file)
# return resultversion_string
# end
# function version_string(f::Function, types::Tuple)::String
# m_package_directory::String = package_directory(f, types)
# m_toml_file::TomlFile = TomlFile(
# joinpath(m_package_directory, "Project.toml")
# )
# resultversion_string::String = version_string(m_toml_file)
# return resultversion_string
# end
# function version_string(m::Module)::String
# m_package_directory::String = package_directory(m)
# m_toml_file::TomlFile = TomlFile(
# joinpath(m_package_directory, "Project.toml")
# )
# resultversion_string::String = version_string(m_toml_file)
# return resultversion_string
# end
"""
version()::VersionNumber
Return the version number of PredictMD.
"""
function version()::VersionNumber
resultversion_string::String = version_string()
result_versionnumber::VersionNumber = VersionNumber(resultversion_string)
return result_versionnumber
end
function api_version()::VersionNumber
resultversion_string::String = api_version_string()
result_versionnumber::VersionNumber = VersionNumber(resultversion_string)
return result_versionnumber
end
"""
version(m::Method)::VersionNumber
If method `m`
is part of a Julia package, returns the version number of that package.
If method `m`
is not part of a Julia package, throws an error.
"""
# function version(m::Method)::VersionNumber
# resultversion_string::String = version_string(m)
# result_versionnumber::VersionNumber = VersionNumber(resultversion_string)
# return result_versionnumber
# end
"""
version(f::Function)::VersionNumber
If function `f`
is part of a Julia package, returns the version number of
that package.
If function `f`
is not part of a Julia package, throws an error.
"""
# function version(f::Function)::VersionNumber
# resultversion_string::String = version_string(f)
# result_versionnumber::VersionNumber = VersionNumber(resultversion_string)
# return result_versionnumber
# end
"""
version(f::Function, types::Tuple)::VersionNumber
If function `f` with type signature `types`
is part of a Julia package, returns the version number of
that package.
If function `f` with type signature `types`
is not part of a Julia package, throws an error.
"""
# function version(f::Function, types::Tuple)::VersionNumber
# resultversion_string::String = version_string(f, types)
# result_versionnumber::VersionNumber = VersionNumber(resultversion_string)
# return result_versionnumber
# end
"""
version(m::Module)::VersionNumber
If module `m` is part of a Julia package, returns the version number of
that package.
If module `m` is not part of a Julia package, throws an error.
"""
# function version(m::Module)::VersionNumber
# resultversion_string::String = version_string(m)
# result_versionnumber::VersionNumber = VersionNumber(resultversion_string)
# return result_versionnumber
# end
# ── src/welcome.jl ──
import Pkg
function print_welcome_message(a::AbstractDict = ENV)::Nothing
predictmd_version::VersionNumber = version()
predictmd_pkgdir::String = package_directory()
predictmdapi_version::VersionNumber = api_version()
predictmdapi_pkgdir::String = api_package_directory()
if !_suppress_welcome_message(a)
@info("This is PredictMD, version $(predictmd_version)")
@info("For help, please visit https://predictmd.net")
@debug("PredictMD package directory: $(predictmd_pkgdir)")
@debug("PredictMDAPI version: $(predictmdapi_version)")
@debug("PredictMDAPI package directory: $(predictmdapi_pkgdir)")
end
return nothing
end
function _suppress_welcome_message(a::AbstractDict = ENV)::Bool
return get(a, "SUPPRESS_PREDICTMD_WELCOME_MESSAGE", "false") == "true"
end
# ── src/base/api.jl ──
import PredictMDAPI
# types
const AbstractFittable = PredictMDAPI.AbstractFittable
const AbstractEstimator = PredictMDAPI.AbstractEstimator
const AbstractPipeline = PredictMDAPI.AbstractPipeline
const AbstractTransformer = PredictMDAPI.AbstractTransformer
const AbstractFeatureContrasts = PredictMDAPI.AbstractFeatureContrasts
const AbstractNonExistentFeatureContrasts = PredictMDAPI.AbstractNonExistentFeatureContrasts
const AbstractNonExistentUnderlyingObject = PredictMDAPI.AbstractNonExistentUnderlyingObject
const AbstractBackend = PredictMDAPI.AbstractBackend
const AbstractPlot = PredictMDAPI.AbstractPlot
# traits
# const TargetStyle = PredictMDAPI.TargetStyle
# const UnknownTargetStyle = PredictMDAPI.UnknownTargetStyle
# const MixedTargetStyle = PredictMDAPI.MixedTargetStyle
# const Regression = PredictMDAPI.Regression
# const Classification{N} = PredictMDAPI.Classification{N}
# const BinaryClassification = PredictMDAPI.BinaryClassification
# functions
const fit! = PredictMDAPI.fit!
const get_history = PredictMDAPI.get_history
const get_underlying = PredictMDAPI.get_underlying
const parse_functions! = PredictMDAPI.parse_functions!
const predict = PredictMDAPI.predict
const predict_proba = PredictMDAPI.predict_proba
const set_feature_contrasts! = PredictMDAPI.set_feature_contrasts!
const set_max_epochs! = PredictMDAPI.set_max_epochs!
const transform = PredictMDAPI.transform
const accuracy = PredictMDAPI.accuracy
const auprc = PredictMDAPI.auprc
const aurocc = PredictMDAPI.aurocc
const binary_brier_score = PredictMDAPI.binary_brier_score
const cohen_kappa = PredictMDAPI.cohen_kappa
const f1score = PredictMDAPI.f1score
const false_negative_rate = PredictMDAPI.false_negative_rate
const false_positive_rate = PredictMDAPI.false_positive_rate
const fbetascore = PredictMDAPI.fbetascore
const mean_squared_error = PredictMDAPI.mean_squared_error
const negative_predictive_value = PredictMDAPI.negative_predictive_value
const positive_predictive_value = PredictMDAPI.positive_predictive_value
const prcurve = PredictMDAPI.prcurve
const precision = PredictMDAPI.precision
const r2score = PredictMDAPI.r2score
const recall = PredictMDAPI.recall
const roccurve = PredictMDAPI.roccurve
const root_mean_squared_error = PredictMDAPI.root_mean_squared_error
const sensitivity = PredictMDAPI.sensitivity
const specificity = PredictMDAPI.specificity
const true_negative_rate = PredictMDAPI.true_negative_rate
const true_positive_rate = PredictMDAPI.true_positive_rate
const plotlearningcurve = PredictMDAPI.plotlearningcurve
const plotprcurve = PredictMDAPI.plotprcurve
const plotroccurve = PredictMDAPI.plotroccurve
const load_model = PredictMDAPI.load_model
const save_model = PredictMDAPI.save_model
const save_plot = PredictMDAPI.save_plot
| PredictMD | https://github.com/bcbi/PredictMD.jl.git |