id | repo | title | body | labels | priority | severity |
---|---|---|---|---|---|---|
293,669,663 | neovim | :write warning if modified-time changed | <!-- Before reporting: search existing issues and check the FAQ. -->
- `nvim --version`: NVIM v0.2.2
- Vim (version: ) behaves differently? No
- Operating system/version: Arch Linux
- Terminal name/version: XTerm(331)
- `$TERM`: xterm-256color
### Steps to reproduce using `nvim -u NORC`
- `nvim -u NORC test`
- Write an empty file: `:w`
- In another terminal, update the modified time of the file: `touch test` or `touch -d '2 hours ago' test` or `echo -n > test`
- Try to write again: `:w`
### Actual behaviour
`WARNING: The file has been changed since reading it!!!`
### Expected behaviour
There should be no warning since the file hasn't changed, and as per `help timestamp`:
> When Vim notices the timestamp of a file has changed, and the file is being
> edited in a buffer but has not changed, Vim checks if the contents of the file
> is equal. This is done by reading the file again (into a hidden buffer, which
> is immediately deleted again) and comparing the text. If the text is equal,
> you will get no warning.
Note that there is no warning if `:checktime` is called before `:w`. | enhancement,ux | low | Minor |
293,671,343 | nvm | Install from zsh doesn't install to bash. Install from bash doesn't install to bash | - Operating system and version:
Linux mcdesktop 4.13.0-32-generic #35~16.04.1-Ubuntu SMP Thu Jan 25 10:13:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
- `nvm debug` output:
works in zsh, not in bash
- How did you install `nvm`? (e.g. install script in readme, homebrew):
in zsh, using `curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.8/install.sh | bash`
- What steps did you perform?
- What happened?
1. From zsh, ran curl installer
2. Installed to zsh
3. closed/opened zsh
4. nvm works in zsh
5. Started bash (from zsh)
6. nvm not found <- error
7. From bash, ran curl installer
8. exit/start bash
9. nvm not found <- error
- What did you expect to happen?
nvm to be installed.
nvm to install to bash when run from bash
nvm to install in zsh AND bash.
(I use zsh, but write scripts with `#!/bin/bash` for compatibility with people not using zsh)
- Is there anything in any of your profile files (`.bashrc`, `.bash_profile`, `.zshrc`, etc) that modifies the `PATH`?
```
$ grep PATH .*
<trimmed>
.profile:# set PATH so it includes user's private bin directories
.profile:PATH="$HOME/bin:$HOME/.local/bin:$PATH"
```
| installing nvm: profile detection,pull request wanted | low | Critical |
293,681,387 | rust | TcpSocket try_clone() sets close-on-exec but this isn't documented | Hello, the TcpSocket try_clone() method calls into .duplicate(), which sets the close-on-exec flag on the socket:
https://doc.rust-lang.org/src/std/net/tcp.rs.html#255-257
https://github.com/rust-lang/rust/blob/master/src/libstd/sys/unix/fd.rs#L197
I believe the documentation for try_clone() should be extended to describe:
- That this is an operation with the underlying operating system, dup() or WSADuplicateSocket() or whatever is appropriate ~~(this would help address very similar issue 45536)~~
- That the duplication operation also sets the close-on-exec flag.
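For reference, a quick way to observe the flag on Unix (an illustrative sketch only, assuming the `libc` crate as a dependency):
```rust
extern crate libc;

use std::net::TcpListener;
use std::os::unix::io::AsRawFd;

fn main() {
    // Any socket type that goes through the same duplicate() path shows this behaviour.
    let listener = TcpListener::bind("127.0.0.1:0").unwrap();
    let clone = listener.try_clone().unwrap();

    // Query the descriptor flags of the clone; FD_CLOEXEC is expected to be set.
    let flags = unsafe { libc::fcntl(clone.as_raw_fd(), libc::F_GETFD) };
    println!("FD_CLOEXEC set on clone: {}", (flags & libc::FD_CLOEXEC) != 0);
}
```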
Thanks
<!-- TRIAGEBOT_START -->
<!-- TRIAGEBOT_ASSIGN_START -->
<!-- TRIAGEBOT_ASSIGN_DATA_START$${"user":"HypheX"}$$TRIAGEBOT_ASSIGN_DATA_END -->
<!-- TRIAGEBOT_ASSIGN_END -->
<!-- TRIAGEBOT_END --> | C-enhancement,P-medium,T-libs-api,E-medium,A-docs | low | Major |
293,710,135 | rust | Panics in destructors can cause the return value to be leaked | ## STR
```Rust
struct NoisyDrop;
impl Drop for NoisyDrop {
fn drop(&mut self) {
println!("dropping a NoisyDrop");
}
}
impl NoisyDrop {
fn new() -> Self {
println!("creating a NoisyDrop");
NoisyDrop
}
}
struct PanickyDrop;
impl Drop for PanickyDrop {
fn drop(&mut self) {
panic!()
}
}
fn foo() -> NoisyDrop {
let p = PanickyDrop;
NoisyDrop::new()
}
fn main() {
foo();
}
```
## Expected Result
"creating a NoisyDrop" and "dropping a NoisyDrop" should appear the same number of times
## Actual Result
```
creating a NoisyDrop
thread 'main' panicked at 'explicit panic', src/main.rs:18:8
note: Run with `RUST_BACKTRACE=1` for a backtrace.
```
The destructor is ignored. | C-enhancement,A-destructors,P-high,T-lang,T-compiler,I-unsound | medium | Critical |
293,729,381 | rust | Using ToSocketAddrs seems to remember EMFILE on the same thread | This was noticed in https://github.com/hyperium/hyper/issues/1422, where a user tried to trigger more connections than their allowed max file descriptors, and saw the EMFILE error. It was then noticed that afterwards, every call to `to_socket_addrs` that requires a DNS lookup would fail from then on. However, trying the same DNS lookup on a new thread would work fine.
I was able to reproduce this using just the standard library here:
```rust
use std::net::TcpStream;
fn main() {
let cnt = 30_000; // adjust for your system
let host = "localhost:3000"; // using "127.0.0.1:3000" doesn't have the same problem
let mut sockets = Vec::with_capacity(cnt);
for i in 0..cnt {
match TcpStream::connect(host) {
Ok(tcp) => sockets.push(tcp),
Err(e) => {
println!("error {} after {} connects", e, i);
break;
}
}
}
drop(sockets);
println!("closing all sockets");
// sleep because why not
::std::thread::sleep(::std::time::Duration::from_secs(5));
TcpStream::connect(host).unwrap();
println!("end");
}
```
Just start up a local server, and try to run this program against it. Also, notice that if you change from `"localhost"` to `"127.0.0.1"`, the issue doesn't show up. | C-bug,T-libs | low | Critical |
293,753,347 | go | cmd/compile: bad line number in error message calling variadic function | Compile:
```go
package p
import git "gopkg.in/libgit2/git2go.v26"
func f(r *git.Repository, x int) {
r.CreateCommit(
"",
nil,
nil,
"",
nil,
x,
)
}
```
Result:
```
$ go build x.go
# command-line-arguments
./x.go:10:3: cannot use x (type int) as type *git.Commit in argument to r.CreateCommit
```
The correct line number is 12, not 10. Line 10 is the second `""` line.
Reproduces with (at a minimum): 1.7, 1.8, 1.9, and 1.10rc1.
@mdempsky @griesemer | NeedsFix,compiler/runtime | low | Critical |
293,761,666 | vscode | Backspace at end of empty line doesn't delete whole line and go to end of above line | <!-- Do you have a question? Please ask it on https://stackoverflow.com/questions/tagged/vscode. -->
<!-- Use Help > Report Issue to prefill these. -->
- VSCode Version: 1.19.2
- OS Version: W10LTSB
Steps to Reproduce:
1. Create a new line of a block of multi-line code like an opening div tag or JavaScript or PHP function, then hit Enter.
2. If you have an indent, continue to step 3. If not, create an indent with Tab or 4 spaces (whatever your religion is).
3. Hit the Backspace key; notice it merely removes an indent each time instead of deleting the whole line and putting your cursor at the end of the previous line like awesome JetBrains software does. Realize how many unnecessary keystrokes you'll now have to deal with (and Ctrl+Shift+Up-End each line is still a lot).
4. Continue to use VS Code because nobody can afford to buy another JetBrains license for your new job.
This should be standard logic in all editors; there's zero functional or stylistic reason to want to just delete indents on an empty line with Backspace (unless there are too many indents there, which is rare, or your editor auto-indented wrong), and even if for some Pan-like reason you need to use that regressive space, you can just use Ctrl+Home to delete the line and then indent where you want (or Shift+arrow, or just arrow if you're a weirdo).
<!-- Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
| feature-request,editor-commands | medium | Critical |
293,762,670 | vscode | [json] Automatically add required fields to object | Since the editor has access to the schema for the file it should be easy to make the object that is suggested by intellisense to add all the "required" fields.
So if my Schema was
a.json
```json
{
"type": "object",
"required": [
"id"
],
"properties": {
"id": {
"type": "string"
}
}
}
```
I would expect intellisense to suggest
```json
{ "id": "$1" }
```
instead of just `{}`. So I would want anything that is required to become an autogenerated default snippet. Given my interactions with @aeschli this might be out of scope for the project. | feature-request,json | low | Minor |
293,782,031 | go | cmd/go: go get -v is too verbose for repos with meta tags | ```
Fetching https://golang.org/x/tools/cmd/godoc?go-get=1
Parsing meta tags from https://golang.org/x/tools/cmd/godoc?go-get=1 (status code 200)
get "golang.org/x/tools/cmd/godoc": found meta tag get.metaImport{Prefix:"golang.org/x/tools", VCS:"git", RepoRoot:"https://go.googlesource.com/tools"} at https://golang.org/x/tools/cmd/godoc?go-get=1
get "golang.org/x/tools/cmd/godoc": verifying non-authoritative meta tag
```
This is interjected for every repo that uses meta tags. It would be more useful if this output appeared only on failure. Instead it disrupts the usual pattern of listing the packages that have been consulted to fulfil the command. There are other ways to get this information if a user is curious or needs to debug the situation.
Here's a full sample of the noise in a typical circumstance:
```
go get -u -v \
golang.org/x/tools/cmd/godoc \
golang.org/x/tools/cmd/guru \
golang.org/x/tools/cmd/gorename \
honnef.co/go/tools/cmd/unused \
github.com/rogpeppe/sortimports
Fetching https://golang.org/x/tools/cmd/godoc?go-get=1
Parsing meta tags from https://golang.org/x/tools/cmd/godoc?go-get=1 (status code 200)
get "golang.org/x/tools/cmd/godoc": found meta tag get.metaImport{Prefix:"golang.org/x/tools", VCS:"git", RepoRoot:"https://go.googlesource.com/tools"} at https://golang.org/x/tools/cmd/godoc?go-get=1
get "golang.org/x/tools/cmd/godoc": verifying non-authoritative meta tag
Fetching https://golang.org/x/tools?go-get=1
Parsing meta tags from https://golang.org/x/tools?go-get=1 (status code 200)
golang.org/x/tools (download)
Fetching https://golang.org/x/tools/blog?go-get=1
Parsing meta tags from https://golang.org/x/tools/blog?go-get=1 (status code 200)
get "golang.org/x/tools/blog": found meta tag get.metaImport{Prefix:"golang.org/x/tools", VCS:"git", RepoRoot:"https://go.googlesource.com/tools"} at https://golang.org/x/tools/blog?go-get=1
get "golang.org/x/tools/blog": verifying non-authoritative meta tag
Fetching https://golang.org/x/tools/blog/atom?go-get=1
Parsing meta tags from https://golang.org/x/tools/blog/atom?go-get=1 (status code 200)
get "golang.org/x/tools/blog/atom": found meta tag get.metaImport{Prefix:"golang.org/x/tools", VCS:"git", RepoRoot:"https://go.googlesource.com/tools"} at https://golang.org/x/tools/blog/atom?go-get=1
get "golang.org/x/tools/blog/atom": verifying non-authoritative meta tag
Fetching https://golang.org/x/tools/present?go-get=1
Parsing meta tags from https://golang.org/x/tools/present?go-get=1 (status code 200)
get "golang.org/x/tools/present": found meta tag get.metaImport{Prefix:"golang.org/x/tools", VCS:"git", RepoRoot:"https://go.googlesource.com/tools"} at https://golang.org/x/tools/present?go-get=1
get "golang.org/x/tools/present": verifying non-authoritative meta tag
Fetching https://golang.org/x/tools/godoc?go-get=1
Parsing meta tags from https://golang.org/x/tools/godoc?go-get=1 (status code 200)
get "golang.org/x/tools/godoc": found meta tag get.metaImport{Prefix:"golang.org/x/tools", VCS:"git", RepoRoot:"https://go.googlesource.com/tools"} at https://golang.org/x/tools/godoc?go-get=1
get "golang.org/x/tools/godoc": verifying non-authoritative meta tag
Fetching https://golang.org/x/tools/godoc/analysis?go-get=1
Parsing meta tags from https://golang.org/x/tools/godoc/analysis?go-get=1 (status code 200)
get "golang.org/x/tools/godoc/analysis": found meta tag get.metaImport{Prefix:"golang.org/x/tools", VCS:"git", RepoRoot:"https://go.googlesource.com/tools"} at https://golang.org/x/tools/godoc/analysis?go-get=1
get "golang.org/x/tools/godoc/analysis": verifying non-authoritative meta tag
Fetching https://golang.org/x/tools/go/callgraph?go-get=1
Parsing meta tags from https://golang.org/x/tools/go/callgraph?go-get=1 (status code 200)
get "golang.org/x/tools/go/callgraph": found meta tag get.metaImport{Prefix:"golang.org/x/tools", VCS:"git", RepoRoot:"https://go.googlesource.com/tools"} at https://golang.org/x/tools/go/callgraph?go-get=1
get "golang.org/x/tools/go/callgraph": verifying non-authoritative meta tag
Fetching https://golang.org/x/tools/go/ssa?go-get=1
Parsing meta tags from https://golang.org/x/tools/go/ssa?go-get=1 (status code 200)
get "golang.org/x/tools/go/ssa": found meta tag get.metaImport{Prefix:"golang.org/x/tools", VCS:"git", RepoRoot:"https://go.googlesource.com/tools"} at https://golang.org/x/tools/go/ssa?go-get=1
get "golang.org/x/tools/go/ssa": verifying non-authoritative meta tag
Fetching https://golang.org/x/tools/go/ast/astutil?go-get=1
Parsing meta tags from https://golang.org/x/tools/go/ast/astutil?go-get=1 (status code 200)
get "golang.org/x/tools/go/ast/astutil": found meta tag get.metaImport{Prefix:"golang.org/x/tools", VCS:"git", RepoRoot:"https://go.googlesource.com/tools"} at https://golang.org/x/tools/go/ast/astutil?go-get=1
get "golang.org/x/tools/go/ast/astutil": verifying non-authoritative meta tag
Fetching https://golang.org/x/tools/go/types/typeutil?go-get=1
Parsing meta tags from https://golang.org/x/tools/go/types/typeutil?go-get=1 (status code 200)
get "golang.org/x/tools/go/types/typeutil": found meta tag get.metaImport{Prefix:"golang.org/x/tools", VCS:"git", RepoRoot:"https://go.googlesource.com/tools"} at https://golang.org/x/tools/go/types/typeutil?go-get=1
get "golang.org/x/tools/go/types/typeutil": verifying non-authoritative meta tag
Fetching https://golang.org/x/tools/go/loader?go-get=1
Parsing meta tags from https://golang.org/x/tools/go/loader?go-get=1 (status code 200)
get "golang.org/x/tools/go/loader": found meta tag get.metaImport{Prefix:"golang.org/x/tools", VCS:"git", RepoRoot:"https://go.googlesource.com/tools"} at https://golang.org/x/tools/go/loader?go-get=1
get "golang.org/x/tools/go/loader": verifying non-authoritative meta tag
Fetching https://golang.org/x/tools/go/buildutil?go-get=1
Parsing meta tags from https://golang.org/x/tools/go/buildutil?go-get=1 (status code 200)
get "golang.org/x/tools/go/buildutil": found meta tag get.metaImport{Prefix:"golang.org/x/tools", VCS:"git", RepoRoot:"https://go.googlesource.com/tools"} at https://golang.org/x/tools/go/buildutil?go-get=1
get "golang.org/x/tools/go/buildutil": verifying non-authoritative meta tag
Fetching https://golang.org/x/tools/go/pointer?go-get=1
Parsing meta tags from https://golang.org/x/tools/go/pointer?go-get=1 (status code 200)
get "golang.org/x/tools/go/pointer": found meta tag get.metaImport{Prefix:"golang.org/x/tools", VCS:"git", RepoRoot:"https://go.googlesource.com/tools"} at https://golang.org/x/tools/go/pointer?go-get=1
get "golang.org/x/tools/go/pointer": verifying non-authoritative meta tag
Fetching https://golang.org/x/tools/container/intsets?go-get=1
Parsing meta tags from https://golang.org/x/tools/container/intsets?go-get=1 (status code 200)
get "golang.org/x/tools/container/intsets": found meta tag get.metaImport{Prefix:"golang.org/x/tools", VCS:"git", RepoRoot:"https://go.googlesource.com/tools"} at https://golang.org/x/tools/container/intsets?go-get=1
get "golang.org/x/tools/container/intsets": verifying non-authoritative meta tag
Fetching https://golang.org/x/tools/go/ssa/ssautil?go-get=1
Parsing meta tags from https://golang.org/x/tools/go/ssa/ssautil?go-get=1 (status code 200)
get "golang.org/x/tools/go/ssa/ssautil": found meta tag get.metaImport{Prefix:"golang.org/x/tools", VCS:"git", RepoRoot:"https://go.googlesource.com/tools"} at https://golang.org/x/tools/go/ssa/ssautil?go-get=1
get "golang.org/x/tools/go/ssa/ssautil": verifying non-authoritative meta tag
Fetching https://golang.org/x/tools/godoc/util?go-get=1
Parsing meta tags from https://golang.org/x/tools/godoc/util?go-get=1 (status code 200)
get "golang.org/x/tools/godoc/util": found meta tag get.metaImport{Prefix:"golang.org/x/tools", VCS:"git", RepoRoot:"https://go.googlesource.com/tools"} at https://golang.org/x/tools/godoc/util?go-get=1
get "golang.org/x/tools/godoc/util": verifying non-authoritative meta tag
Fetching https://golang.org/x/tools/godoc/vfs?go-get=1
Parsing meta tags from https://golang.org/x/tools/godoc/vfs?go-get=1 (status code 200)
get "golang.org/x/tools/godoc/vfs": found meta tag get.metaImport{Prefix:"golang.org/x/tools", VCS:"git", RepoRoot:"https://go.googlesource.com/tools"} at https://golang.org/x/tools/godoc/vfs?go-get=1
get "golang.org/x/tools/godoc/vfs": verifying non-authoritative meta tag
Fetching https://golang.org/x/tools/godoc/vfs/httpfs?go-get=1
Parsing meta tags from https://golang.org/x/tools/godoc/vfs/httpfs?go-get=1 (status code 200)
get "golang.org/x/tools/godoc/vfs/httpfs": found meta tag get.metaImport{Prefix:"golang.org/x/tools", VCS:"git", RepoRoot:"https://go.googlesource.com/tools"} at https://golang.org/x/tools/godoc/vfs/httpfs?go-get=1
get "golang.org/x/tools/godoc/vfs/httpfs": verifying non-authoritative meta tag
Fetching https://golang.org/x/tools/godoc/redirect?go-get=1
Parsing meta tags from https://golang.org/x/tools/godoc/redirect?go-get=1 (status code 200)
get "golang.org/x/tools/godoc/redirect": found meta tag get.metaImport{Prefix:"golang.org/x/tools", VCS:"git", RepoRoot:"https://go.googlesource.com/tools"} at https://golang.org/x/tools/godoc/redirect?go-get=1
get "golang.org/x/tools/godoc/redirect": verifying non-authoritative meta tag
Fetching https://golang.org/x/tools/godoc/static?go-get=1
Parsing meta tags from https://golang.org/x/tools/godoc/static?go-get=1 (status code 200)
get "golang.org/x/tools/godoc/static": found meta tag get.metaImport{Prefix:"golang.org/x/tools", VCS:"git", RepoRoot:"https://go.googlesource.com/tools"} at https://golang.org/x/tools/godoc/static?go-get=1
get "golang.org/x/tools/godoc/static": verifying non-authoritative meta tag
Fetching https://golang.org/x/tools/godoc/vfs/gatefs?go-get=1
Parsing meta tags from https://golang.org/x/tools/godoc/vfs/gatefs?go-get=1 (status code 200)
get "golang.org/x/tools/godoc/vfs/gatefs": found meta tag get.metaImport{Prefix:"golang.org/x/tools", VCS:"git", RepoRoot:"https://go.googlesource.com/tools"} at https://golang.org/x/tools/godoc/vfs/gatefs?go-get=1
get "golang.org/x/tools/godoc/vfs/gatefs": verifying non-authoritative meta tag
Fetching https://golang.org/x/tools/godoc/vfs/mapfs?go-get=1
Parsing meta tags from https://golang.org/x/tools/godoc/vfs/mapfs?go-get=1 (status code 200)
get "golang.org/x/tools/godoc/vfs/mapfs": found meta tag get.metaImport{Prefix:"golang.org/x/tools", VCS:"git", RepoRoot:"https://go.googlesource.com/tools"} at https://golang.org/x/tools/godoc/vfs/mapfs?go-get=1
get "golang.org/x/tools/godoc/vfs/mapfs": verifying non-authoritative meta tag
Fetching https://golang.org/x/tools/godoc/vfs/zipfs?go-get=1
Parsing meta tags from https://golang.org/x/tools/godoc/vfs/zipfs?go-get=1 (status code 200)
get "golang.org/x/tools/godoc/vfs/zipfs": found meta tag get.metaImport{Prefix:"golang.org/x/tools", VCS:"git", RepoRoot:"https://go.googlesource.com/tools"} at https://golang.org/x/tools/godoc/vfs/zipfs?go-get=1
get "golang.org/x/tools/godoc/vfs/zipfs": verifying non-authoritative meta tag
Fetching https://golang.org/x/tools/playground?go-get=1
Parsing meta tags from https://golang.org/x/tools/playground?go-get=1 (status code 200)
get "golang.org/x/tools/playground": found meta tag get.metaImport{Prefix:"golang.org/x/tools", VCS:"git", RepoRoot:"https://go.googlesource.com/tools"} at https://golang.org/x/tools/playground?go-get=1
get "golang.org/x/tools/playground": verifying non-authoritative meta tag
Fetching https://golang.org/x/tools/cmd/guru?go-get=1
Parsing meta tags from https://golang.org/x/tools/cmd/guru?go-get=1 (status code 200)
get "golang.org/x/tools/cmd/guru": found meta tag get.metaImport{Prefix:"golang.org/x/tools", VCS:"git", RepoRoot:"https://go.googlesource.com/tools"} at https://golang.org/x/tools/cmd/guru?go-get=1
get "golang.org/x/tools/cmd/guru": verifying non-authoritative meta tag
Fetching https://golang.org/x/tools/cmd/guru/serial?go-get=1
Parsing meta tags from https://golang.org/x/tools/cmd/guru/serial?go-get=1 (status code 200)
get "golang.org/x/tools/cmd/guru/serial": found meta tag get.metaImport{Prefix:"golang.org/x/tools", VCS:"git", RepoRoot:"https://go.googlesource.com/tools"} at https://golang.org/x/tools/cmd/guru/serial?go-get=1
get "golang.org/x/tools/cmd/guru/serial": verifying non-authoritative meta tag
Fetching https://golang.org/x/tools/go/callgraph/static?go-get=1
Parsing meta tags from https://golang.org/x/tools/go/callgraph/static?go-get=1 (status code 200)
get "golang.org/x/tools/go/callgraph/static": found meta tag get.metaImport{Prefix:"golang.org/x/tools", VCS:"git", RepoRoot:"https://go.googlesource.com/tools"} at https://golang.org/x/tools/go/callgraph/static?go-get=1
get "golang.org/x/tools/go/callgraph/static": verifying non-authoritative meta tag
Fetching https://golang.org/x/tools/refactor/importgraph?go-get=1
Parsing meta tags from https://golang.org/x/tools/refactor/importgraph?go-get=1 (status code 200)
get "golang.org/x/tools/refactor/importgraph": found meta tag get.metaImport{Prefix:"golang.org/x/tools", VCS:"git", RepoRoot:"https://go.googlesource.com/tools"} at https://golang.org/x/tools/refactor/importgraph?go-get=1
get "golang.org/x/tools/refactor/importgraph": verifying non-authoritative meta tag
Fetching https://golang.org/x/tools/cmd/gorename?go-get=1
Parsing meta tags from https://golang.org/x/tools/cmd/gorename?go-get=1 (status code 200)
get "golang.org/x/tools/cmd/gorename": found meta tag get.metaImport{Prefix:"golang.org/x/tools", VCS:"git", RepoRoot:"https://go.googlesource.com/tools"} at https://golang.org/x/tools/cmd/gorename?go-get=1
get "golang.org/x/tools/cmd/gorename": verifying non-authoritative meta tag
Fetching https://golang.org/x/tools/refactor/rename?go-get=1
Parsing meta tags from https://golang.org/x/tools/refactor/rename?go-get=1 (status code 200)
get "golang.org/x/tools/refactor/rename": found meta tag get.metaImport{Prefix:"golang.org/x/tools", VCS:"git", RepoRoot:"https://go.googlesource.com/tools"} at https://golang.org/x/tools/refactor/rename?go-get=1
get "golang.org/x/tools/refactor/rename": verifying non-authoritative meta tag
Fetching https://golang.org/x/tools/refactor/satisfy?go-get=1
Parsing meta tags from https://golang.org/x/tools/refactor/satisfy?go-get=1 (status code 200)
get "golang.org/x/tools/refactor/satisfy": found meta tag get.metaImport{Prefix:"golang.org/x/tools", VCS:"git", RepoRoot:"https://go.googlesource.com/tools"} at https://golang.org/x/tools/refactor/satisfy?go-get=1
get "golang.org/x/tools/refactor/satisfy": verifying non-authoritative meta tag
Fetching https://honnef.co/go/tools/cmd/unused?go-get=1
Parsing meta tags from https://honnef.co/go/tools/cmd/unused?go-get=1 (status code 200)
get "honnef.co/go/tools/cmd/unused": found meta tag get.metaImport{Prefix:"honnef.co/go/tools", VCS:"git", RepoRoot:"https://github.com/dominikh/go-tools"} at https://honnef.co/go/tools/cmd/unused?go-get=1
get "honnef.co/go/tools/cmd/unused": verifying non-authoritative meta tag
Fetching https://honnef.co/go/tools?go-get=1
Parsing meta tags from https://honnef.co/go/tools?go-get=1 (status code 200)
honnef.co/go/tools (download)
Fetching https://honnef.co/go/tools/lint/lintutil?go-get=1
Parsing meta tags from https://honnef.co/go/tools/lint/lintutil?go-get=1 (status code 200)
get "honnef.co/go/tools/lint/lintutil": found meta tag get.metaImport{Prefix:"honnef.co/go/tools", VCS:"git", RepoRoot:"https://github.com/dominikh/go-tools"} at https://honnef.co/go/tools/lint/lintutil?go-get=1
get "honnef.co/go/tools/lint/lintutil": verifying non-authoritative meta tag
github.com/kisielk/gotool (download)
Fetching https://honnef.co/go/tools/lint?go-get=1
Parsing meta tags from https://honnef.co/go/tools/lint?go-get=1 (status code 200)
get "honnef.co/go/tools/lint": found meta tag get.metaImport{Prefix:"honnef.co/go/tools", VCS:"git", RepoRoot:"https://github.com/dominikh/go-tools"} at https://honnef.co/go/tools/lint?go-get=1
get "honnef.co/go/tools/lint": verifying non-authoritative meta tag
Fetching https://honnef.co/go/tools/ssa?go-get=1
Parsing meta tags from https://honnef.co/go/tools/ssa?go-get=1 (status code 200)
get "honnef.co/go/tools/ssa": found meta tag get.metaImport{Prefix:"honnef.co/go/tools", VCS:"git", RepoRoot:"https://github.com/dominikh/go-tools"} at https://honnef.co/go/tools/ssa?go-get=1
get "honnef.co/go/tools/ssa": verifying non-authoritative meta tag
Fetching https://honnef.co/go/tools/ssa/ssautil?go-get=1
Parsing meta tags from https://honnef.co/go/tools/ssa/ssautil?go-get=1 (status code 200)
get "honnef.co/go/tools/ssa/ssautil": found meta tag get.metaImport{Prefix:"honnef.co/go/tools", VCS:"git", RepoRoot:"https://github.com/dominikh/go-tools"} at https://honnef.co/go/tools/ssa/ssautil?go-get=1
get "honnef.co/go/tools/ssa/ssautil": verifying non-authoritative meta tag
Fetching https://honnef.co/go/tools/version?go-get=1
Parsing meta tags from https://honnef.co/go/tools/version?go-get=1 (status code 200)
get "honnef.co/go/tools/version": found meta tag get.metaImport{Prefix:"honnef.co/go/tools", VCS:"git", RepoRoot:"https://github.com/dominikh/go-tools"} at https://honnef.co/go/tools/version?go-get=1
get "honnef.co/go/tools/version": verifying non-authoritative meta tag
Fetching https://honnef.co/go/tools/unused?go-get=1
Parsing meta tags from https://honnef.co/go/tools/unused?go-get=1 (status code 200)
get "honnef.co/go/tools/unused": found meta tag get.metaImport{Prefix:"honnef.co/go/tools", VCS:"git", RepoRoot:"https://github.com/dominikh/go-tools"} at https://honnef.co/go/tools/unused?go-get=1
get "honnef.co/go/tools/unused": verifying non-authoritative meta tag
github.com/rogpeppe/sortimports (download)
golang.org/x/tools/godoc/static
golang.org/x/tools/cmd/guru/serial
golang.org/x/tools/go/callgraph/static
golang.org/x/tools/refactor/importgraph
golang.org/x/tools/cmd/godoc
golang.org/x/tools/refactor/satisfy
github.com/kisielk/gotool/internal/load
golang.org/x/tools/cmd/guru
github.com/kisielk/gotool
honnef.co/go/tools/ssa
golang.org/x/tools/refactor/rename
golang.org/x/tools/cmd/gorename
honnef.co/go/tools/version
github.com/rogpeppe/sortimports
honnef.co/go/tools/ssa/ssautil
honnef.co/go/tools/lint
honnef.co/go/tools/lint/lintutil
honnef.co/go/tools/unused
honnef.co/go/tools/cmd/unused
``` | NeedsFix,GoCommand | low | Critical |
293,826,148 | godot | "Collapse all properties" has no effect if inspector folding is disabled in the Editor Settings | <!-- Please search existing issues for potential duplicates before filing yours:
https://github.com/godotengine/godot/issues?q=is%3Aissue
-->
**Godot version:**
<!-- Specify commit hash if non-official. -->
3.0
**OS/device including version:**
<!-- Specify GPU model and drivers if graphics-related. -->
**Issue description:**
<!-- What happened, and what was expected. -->
When 'Disable Inspector Folding' in Project settings is set to On, the option to 'Collapse all properties' in the Inspector panel no longer works.
**Steps to reproduce:**
Follow the description above.
**Minimal reproduction project:**
<!-- Recommended as it greatly speeds up debugging. Drag and drop a zip archive to upload it. -->
| enhancement,discussion,topic:editor,usability | low | Critical |
293,911,101 | godot | Improve usability of animating sub-properties like modulate.a | <!-- Please search existing issues for potential duplicates before filing yours:
https://github.com/godotengine/godot/issues?q=is%3Aissue
-->
**Godot version:**
<!-- Specify commit hash if non-official. -->
3.0
**OS/device including version:**
<!-- Specify GPU model and drivers if graphics-related. -->
**Issue description:**
<!-- What happened, and what was expected. -->
The removal of the Canvas Item Opacity and Self Opacity settings results in a cumbersome and annoying workflow. They are now two color fields instead. As you can see in the images of my animation timelines in Godot 2 and Godot 3, the opacity setting has changed from an easy-to-understand number to a confusing color patch. I now have to click twice to change the value (three times if I want the value in RAW). In Godot 2 I only click once and adjust a number. I fail to see the improvement here. Please bring back the Canvas Item Opacity and Self Opacity settings as numbers.


**Steps to reproduce:**
**Minimal reproduction project:**
<!-- Recommended as it greatly speeds up debugging. Drag and drop a zip archive to upload it. -->
| enhancement,topic:editor,usability,topic:animation | low | Critical |
293,918,976 | flutter | Reload timeouts mishandled in --machine mode | While testing a fix for #14184 in Dart Code (don't send a reload request while it's already reloading) I noticed that if a hot reload times out, the error message returned (via `flutter run --machine`) seems confused:
```text
[15:03:50]: ==> [{"id":"1","method":"app.restart","params":{"appId":"9d04942a-7c3c-42d6-9b5c-93cecef66c95","fullRestart":false,"pause":true}}]
[15:03:50]: ==> [{"id":"2","method":"app.restart","params":{"appId":"9d04942a-7c3c-42d6-9b5c-93cecef66c95","fullRestart":false,"pause":true}}]
[15:03:50]: <== [{"event":"app.progress","params":{"appId":"9d04942a-7c3c-42d6-9b5c-93cecef66c95","id":"1","progressId":"hot.reload","message":"Initializing hot reload..."}}]
[15:04:21]: <== [{"id":"2","error":"NoSuchMethodError: Class 'TimeoutException' has no instance method '[]'.\nReceiver: Instance of 'TimeoutException'\nTried calling: [](\"code\")"}]
[15:04:21]: <== [{"event":"app.progress","params":{"appId":"9d04942a-7c3c-42d6-9b5c-93cecef66c95","id":"1","progressId":"hot.reload","finished":true}}]
[15:04:21]: <== [{"id":"1","error":"NoSuchMethodError: Class 'TimeoutException' has no instance method '[]'.\nReceiver: Instance of 'TimeoutException'\nTried calling: [](\"code\")"}]
```
It's obvious that it's a timeout from the mention of `TimeoutException` but the error about trying to call `["code"]` on it probably isn't what should be returned here.
It's possible these timeouts will go away once the other issue is fixed, however it still seems like there's a bug in the timeout exception/code. | c: crash,tool,t: hot reload,P2,team-tool,triaged-tool | low | Critical |
293,998,157 | vue | SSR Component Cache doesn't cache Strings | ### Version
2.5.13
### Reproduction link
[https://runkit.com/martinlg/issue-vue-renderer-cache](https://runkit.com/martinlg/issue-vue-renderer-cache)
### Steps to reproduce
Run the Runkit code. If you prefer a git repository I can provide you one.
### What is expected?
The value passed to the ```set``` function should be a string.
### What is actually happening?
The value passed to the ```set``` function is an object, with 2 properties:
* html: a ```string``` containing the rendered component
* components: a ```Set``` containing nothing or function, depending on context (sub-components I think)
---
This issue breaks any external cache possibility (Redis in my case).
The only possible cache is in the process memory, like the LRUCache, but it seems impossible to scale processes and share a common cache.
Moreover, the documentation clearly explains that the cached value should be a string, and even provides a small Redis example implementation that simply cannot work.
I think the documentation describes the expected behavior, so I don't want to "fix" the documentation; I think we should fix the behavior.
I will try to help, but I may need some explanations on some parts of the RenderContext. Can I ask my questions on this thread?
<!-- generated by vue-issues. DO NOT REMOVE --> | improvement | medium | Major |
294,026,214 | rust | dlclose() does not behave properly on Mac | This report will reference this repository which reproduces the issue: https://github.com/dradtke/rust-dylib-issues
### The Issue
The repository contains an application library, built as a `dylib`, and two example main programs, one in Rust and one in C. Each main program runs in a loop, loading the library with `dlopen()`, calling a method, and then closing it with `dlclose()`. The expectation is that any changes to the library will be picked up immediately by the main programs whenever the library is recompiled.
However, the behavior between the two programs differs. If I run the two main programs side-by-side, then make a change to the returned message and recompile the library, only the C program immediately reflects the change. The Rust main program won't reflect any changes until it is fully restarted.
It appears that this is Mac-specific behavior. When the same test is run on Debian, the two main programs behave identically.
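For context, the main programs' loop is essentially the following shape (a simplified Rust sketch using the `libc` crate; the library path and symbol name below are placeholders, the real code is in the linked repository):
```rust
extern crate libc;

use std::ffi::{CStr, CString};

fn main() {
    let path = CString::new("target/debug/libapp.dylib").unwrap(); // placeholder path
    let symbol = CString::new("message").unwrap(); // placeholder symbol name
    loop {
        unsafe {
            // Load the library fresh on every iteration...
            let handle = libc::dlopen(path.as_ptr(), libc::RTLD_NOW);
            assert!(!handle.is_null(), "dlopen failed");
            let sym = libc::dlsym(handle, symbol.as_ptr());
            assert!(!sym.is_null(), "dlsym failed");
            let func: extern "C" fn() -> *const libc::c_char = std::mem::transmute(sym);
            println!("{}", CStr::from_ptr(func()).to_string_lossy());
            // ...then close it, expecting the next dlopen to pick up a rebuilt library.
            libc::dlclose(handle);
        }
        std::thread::sleep(std::time::Duration::from_secs(1));
    }
}
```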
### The Environment
**Operating System**: macOS Sierra 10.12.6
**Rust Version**:
```
rustc 1.23.0 (766bd11c8 2018-01-01)
binary: rustc
commit-hash: 766bd11c8a3c019ca53febdcd77b2215379dd67d
commit-date: 2018-01-01
host: x86_64-apple-darwin
release: 1.23.0
LLVM version: 4.0
```
**C Compiler**:
```
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
Apple LLVM version 8.0.0 (clang-800.0.38)
Target: x86_64-apple-darwin16.7.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
``` | A-runtime,O-macos,A-thread-locals,C-bug | medium | Major |
294,060,134 | godot | Visible seams between GridMap tiles (sometimes only with MSAA enabled) | 
It is hard to see; you need to open the image at full size. There are no visible seams with MSAA disabled, but with 2x or higher I start seeing seams and edges of 3D meshes.
My guess is the texture or mesh is shrunk by a small factor in order to produce the edge aliasing.
I use a GridMap with a cell size of 1, 1, 1, and the tiles are all designed to be exactly 1 x 1 x 1; the UV maps are made exactly to the dot for 64 x 64 textures.
This is on latest Godot 3.0 release | topic:rendering,confirmed,documentation,topic:3d | high | Critical |
294,143,852 | flutter | Dragging a list that is currently handling an animateTo animation throws exception | ## Steps to Reproduce
The end goal is to create a scrolling list that automatically "snaps" to a given location based on the actual scroll position at which the user stops scrolling. This may not be the best solution (I'm new to Flutter), but here's the process by which I get the crash:
Wrap a ListView within a NotificationListener. Upon receiving a `UserScrollNotification` with `ScrollDirection.idle`, call the ScrollController's `animateTo()` method to scroll to another location (in the example here, I arbitrarily scroll to 10000.0 pixels).
### Sample code
```dart
import 'package:flutter/material.dart';
void main() => runApp(new MyApp());
class MyApp extends StatelessWidget {
// This widget is the root of your application.
@override
Widget build(BuildContext context) {
return new MaterialApp(
title: 'Scroller issue',
home: new MyHomePage(),
);
}
}
class MyHomePage extends StatelessWidget {
final _controller = new ScrollController();
bool _didGetNotification(ScrollNotification notification) {
if (notification is UserScrollNotification) {
if (notification.direction.toString() == "ScrollDirection.idle") {
// We've stopped scrolling... now animate automatically to the 10000-pixel spot
_controller.animateTo(10000.0, duration: const Duration(seconds: 2), curve: Curves.elasticOut);
/* Above works great... unless the user tries to interact with the list WHILE it's
// animating. In this case, you end up with:
**********************************************************************************
Another exception was thrown: 'package:flutter/src/widgets/scrollable.dart':
Failed assertion: line 472 pos 12: '_hold == null || _drag == null': is not true.
********************************************************************************** */
}
}
return true;
}
@override
Widget build(BuildContext context) {
return new Scaffold(
body: new NotificationListener(
onNotification: _didGetNotification,
child: new ListView.builder(
padding: new EdgeInsets.all(8.0),
controller: _controller,
itemExtent: 60.0,
itemBuilder: (BuildContext context, int index) {
return new Text('Item No. $index');
},
),
),
);
}
}
```
It works great as long as the user only interacts with the ListView when it is idle, but if the user attempts to scroll/drag/touch the list _during_ the animateTo() process, the following exception is raised within the Flutter Scrollable package:
```
'package:flutter/src/widgets/scrollable.dart': Failed assertion: line 472 pos 12: '_hold == null || _drag == null': is not true.
```
After this, the ListView becomes unresponsive.
I'm sure there's a better way to achieve what I'm trying to do, but the platform itself doesn't seem to recover from this. Figured I'd make a note of it.
Thanks!
## Logs
Run your application with `flutter run` and attach all the log output.
```
CADisplay.name = LCD;
CADisplay.deviceName = PurpleMain;
CADisplay.seed = 1;
tags = 0;
currentMode = <FBSDisplayMode: 0x604000097430; 375x667@2x (750x1334/2) 60Hz sRGB SDR>;
safeOverscanRatio = {0.89999997615814209, 0.89999997615814209};
nativeCenter = {375, 667};
pixelSize = {750, 1334};
bounds = {{0, 0}, {375, 667}};
CADisplay = <CADisplay:LCD PurpleMain>;
}
Syncing files to device iPhone 6s... 1.9s
🔥 To hot reload your app on the fly, press "r". To restart the app entirely, press "R".
An Observatory debugger and profiler on iPhone 6s is available at: http://127.0.0.1:8100/
For a more detailed help message, press "h". To quit, press "q".
══╡ EXCEPTION CAUGHT BY GESTURE ╞═══════════════════════════════════════════════════════════════════
The following assertion was thrown while handling a gesture:
'package:flutter/src/widgets/scrollable.dart': Failed assertion: line 467 pos 12: '_hold == null':
is not true.
Either the assertion indicates an error in the framework itself, or we should provide substantially
more information in this error message to help you determine and fix the underlying cause.
In either case, please report this assertion by filing a bug on GitHub:
https://github.com/flutter/flutter/issues/new
When the exception was thrown, this was the stack:
#2 ScrollableState._handleDragStart (package:flutter/src/widgets/scrollable.dart:467:12)
#3 DragGestureRecognizer.acceptGesture.<anonymous closure> (package:flutter/src/gestures/monodrag.dart:169:54)
#4 GestureRecognizer.invokeCallback (package:flutter/src/gestures/recognizer.dart:102:24)
#5 DragGestureRecognizer.acceptGesture (package:flutter/src/gestures/monodrag.dart:169:9)
#6 GestureArenaManager._resolveByDefault (package:flutter/src/gestures/arena.dart:250:25)
#7 GestureArenaManager._tryToResolveArena.<anonymous closure> (package:flutter/src/gestures/arena.dart:231:31)
(elided 4 frames from class _AssertionError and package dart:async)
Handler: onStart
Recognizer:
VerticalDragGestureRecognizer#6eafd
════════════════════════════════════════════════════════════════════════════════════════════════════
Another exception was thrown: 'package:flutter/src/widgets/scrollable.dart': Failed assertion: line 472 pos 12: '_hold == null || _drag == null': is not true.
Another exception was thrown: 'package:flutter/src/widgets/scrollable.dart': Failed assertion: line 472 pos 12: '_hold == null || _drag == null': is not true.
Another exception was thrown: 'package:flutter/src/widgets/scrollable.dart': Failed assertion: line 472 pos 12: '_hold == null || _drag == null': is not true.
Another exception was thrown: 'package:flutter/src/widgets/scrollable.dart': Failed assertion: line 472 pos 12: '_hold == null || _drag == null': is not true.
Another exception was thrown: 'package:flutter/src/widgets/scrollable.dart': Failed assertion: line 472 pos 12: '_hold == null || _drag == null': is not true.
Another exception was thrown: 'package:flutter/src/widgets/scrollable.dart': Failed assertion: line 472 pos 12: '_hold == null || _drag == null': is not true.
Another exception was thrown: 'package:flutter/src/widgets/scrollable.dart': Failed assertion: line 478 pos 12: '_hold == null || _drag == null': is not true.
Another exception was thrown: 'package:flutter/src/widgets/scrollable.dart': Failed assertion: line 455 pos 12: '_drag == null': is not true.
Another exception was thrown: 'package:flutter/src/widgets/scrollable.dart': Failed assertion: line 464 pos 12: '_drag == null': is not true.
Another exception was thrown: 'package:flutter/src/widgets/scrollable.dart': Failed assertion: line 478 pos 12: '_hold == null || _drag == null': is not true.
```
Run `flutter analyze` and attach any output of that command also.
```
Analyzing /Users/mfahy/Apps/list_view_attempts...
No issues found!
Ran in 6.5s
```
## Flutter Doctor
```
[ +21 ms] [/Users/mfahy/flutter/] git rev-parse --abbrev-ref --symbolic @{u}
[ +36 ms] Exit code 0 from: git rev-parse --abbrev-ref --symbolic @{u}
[ ] origin/alpha
[ ] [/Users/mfahy/flutter/] git rev-parse --abbrev-ref HEAD
[ +7 ms] Exit code 0 from: git rev-parse --abbrev-ref HEAD
[ ] alpha
[ ] [/Users/mfahy/flutter/] git ls-remote --get-url origin
[ +9 ms] Exit code 0 from: git ls-remote --get-url origin
[ ] https://github.com/flutter/flutter.git
[ ] [/Users/mfahy/flutter/] git log -n 1 --pretty=format:%H
[ +28 ms] Exit code 0 from: git log -n 1 --pretty=format:%H
[ ] 2e449f06f0a3be076e336ad6b30b0e9ec99dbdfe
[ ] [/Users/mfahy/flutter/] git log -n 1 --pretty=format:%ar
[ +10 ms] Exit code 0 from: git log -n 1 --pretty=format:%ar
[ ] 5 days ago
[ ] [/Users/mfahy/flutter/] git describe --match v*.*.* --first-parent --long --tags
[ +44 ms] Exit code 0 from: git describe --match v*.*.* --first-parent --long --tags
[ ] v0.0.21-0-g2e449f06f
[ +473 ms] /usr/bin/defaults read /Applications/Android Studio.app/Contents/Info CFBundleShortVersionString
[+1263 ms] Exit code 0 from: /usr/bin/defaults read /Applications/Android Studio.app/Contents/Info CFBundleShortVersionString
[ ] 3.0
[ +452 ms] [✓] Flutter (on Mac OS X 10.13.3 17D47, locale en-US, channel alpha)
[ +1 ms] • Flutter version 0.0.21 at /Users/mfahy/flutter
[ ] • Framework revision 2e449f06f0 (5 days ago), 2018-01-29 14:26:51 -0800
[ ] • Engine revision 6921873c71
[ ] • Tools Dart version 2.0.0-dev.16.0
[ ] • Engine Dart version 2.0.0-edge.da1f52592ef73fe3afa485385cb995b9aec0181a
[ +130 ms] /usr/bin/defaults read /Applications/Android Studio.app/Contents/Info CFBundleShortVersionString
[ +218 ms] Exit code 0 from: /usr/bin/defaults read /Applications/Android Studio.app/Contents/Info CFBundleShortVersionString
[ ] 3.0
[ +98 ms] java -version
[ +87 ms] [✓] Android toolchain - develop for Android devices (Android SDK 27.0.3)
[ ] • Android SDK at /Users/mfahy/Library/Android/sdk
[ ] • Android NDK location not configured (optional; useful for native profiling support)
[ ] • Platform android-27, build-tools 27.0.3
[ ] • Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
[ ] • Java version OpenJDK Runtime Environment (build 1.8.0_152-release-915-b08)
[+1189 ms] DevToolsSecurity -status
[ +49 ms] Developer mode is currently enabled.
[ ] python -c import six
[ +126 ms] idevice_id -h
[ +10 ms] idevice_id -h
[ +9 ms] idevice_id -l
[ +28 ms] 5a62f0e8de345b9794b9a61c9bfcd02026c68a86
[ +1 ms] idevicename
[ +44 ms] ios-deploy --version
[ +54 ms] ios-deploy --version
[ +16 ms] 1.9.2
[ +1 ms] ios-deploy --version
[ +33 ms] ios-deploy --version
[ +21 ms] 1.9.2
[ +2 ms] pod --version
[ +905 ms] pod --version
[ +605 ms] 1.3.1
[ +2 ms] pod --version
[ +533 ms] 1.3.1
[ +1 ms] [-] iOS toolchain - develop for iOS devices (Xcode 9.2)
[ ] • Xcode at /Applications/Xcode.app/Contents/Developer
[ ] • Xcode 9.2, Build version 9C40b
[ ] ✗ Verify that all connected devices have been paired with this computer in Xcode.
If all devices have been paired, libimobiledevice and ideviceinstaller may require updating.
To update, run:
brew uninstall --ignore-dependencies libimobiledevice
brew install --HEAD libimobiledevice
brew install ideviceinstaller
[ ] • ios-deploy 1.9.2
[ ] • CocoaPods version 1.3.1
[ +2 ms] [✓] Android Studio (version 3.0)
[ ] • Android Studio at /Applications/Android Studio.app/Contents
[ ] • Java version OpenJDK Runtime Environment (build 1.8.0_152-release-915-b08)
[ +6 ms] /usr/bin/defaults read /Applications/IntelliJ IDEA.app/Contents/Info CFBundleShortVersionString
[ +249 ms] Exit code 0 from: /usr/bin/defaults read /Applications/IntelliJ IDEA.app/Contents/Info CFBundleShortVersionString
[ ] 2017.3.4
[ +79 ms] [✓] IntelliJ IDEA Ultimate Edition (version 2017.3.4)
[ ] • Flutter plugin version 21.2.3
[ ] • Dart plugin version 173.4548.30
[ +4 ms] /Users/mfahy/Library/Android/sdk/platform-tools/adb devices -l
[ +17 ms] Exit code 0 from: /Users/mfahy/Library/Android/sdk/platform-tools/adb devices -l
[ ] List of devices attached
[ +8 ms] idevice_id -h
[ +82 ms] which ideviceinstaller
[ +5 ms] Exit code 0 from: which ideviceinstaller
[ ] /usr/local/bin/ideviceinstaller
[ ] which iproxy
[ +4 ms] Exit code 0 from: which iproxy
[ ] /usr/local/bin/iproxy
[ +4 ms] /usr/bin/xcrun simctl list --json devices
[ +218 ms] [✓] Connected devices
[ ] • ViPhone 6S • 5a62f0e8de345b9794b9a61c9bfcd02026c68a86 • ios • iOS 11.3
[ ] • iPhone 6s • 6F36E8FD-E570-453E-B943-8243E4FAEDD6 • ios • iOS 11.2 (simulator)
[ +21 ms] "flutter doctor" took 6,972ms.
[ +42 ms] ensureAnalyticsSent: 38ms
[ +2 ms] exiting with code 0
```
Hope this helps! Thanks! | framework,a: animation,f: scrolling,d: api docs,d: examples,customer: crowd,P2,workaround available,team-framework,triaged-framework | low | Critical |
294,144,745 | rust | Extremely weird hygiene behavior when invoking a macro from the calling crate in a Derive | I'm not entirely sure whether this is a bug or not, but the current behavior seems super finicky and unintuitive at worst, and I think it's likely a bug. I think this warrants some context on the use case, so I'd like to preface with that. There's a repro script at the bottom if you don't care about the context.
Custom derives have to work around the fact that they don't have access to `$crate` for the crate they're associated with. Typically the way this is worked around is by doing `const UNIQUE_NAME: () = { extern crate your_crate; /*code*/ };`. Diesel provides several derives which we want to allow third party crates to use, but *also* use within Diesel itself. This means that the `extern crate` workaround won't work for us. Instead we have this macro in Diesel:
```rust
macro_rules! __diesel_use_everything {
() => { pub use $crate::*; }
}
```
and then the generated code looks like this:
```rust
mod unique_name {
mod diesel {
__diesel_use_everything!();
}
/*code*/
}
```
However, this gets super finicky with hygiene. If we try to do that with nightly, using the derives within Diesel itself will complain that `__diesel_use_everything!` can't be found. The fix for this is to give `__diesel_use_everything!()` a `call_site` span. Interestingly, the semicolon after it *must* have a `def_site` span, or nothing it imported will be visible. The semicolon being significant is particularly weird to me, because it's basically enforcing that `()` or `[]` be used as the delimiters. If I wanted to invoke the macro with `{}`, ~~it would be impossible for me to make it work~~ I have to ensure the braces have a `def_site` span.
Anyway it's possible to work around this in the most basic cases by giving `__diesel_use_everything!()` a `call_site` span. However, we run into additional trouble when the use of the `derive` originates inside a macro from Diesel (the actual macro is [`sql_function!`](http://docs.diesel.rs/diesel/macro.sql_function.html) if you want a real use case). It'll still find `__diesel_use_everything!()` but we get the same problem that we had if the `;` has a `call_site` span. Nothing in this `diesel` module is visible. `use self::diesel::anything` will fail.
With all that said, here's a minimum repro script:
### foo/lib.rs
```rust
#[macro_use]
extern crate bar;
macro_rules! __foo_use_everything {
() => {
pub use $crate::*;
};
}
pub struct Foo;
macro_rules! make_a_struct {
() => {
#[derive(Thingy)]
pub struct Bar;
};
}
make_a_struct!();
```
### bar/lib.rs
```rust
#![feature(proc_macro)]
#[macro_use]
extern crate quote;
extern crate proc_macro2;
extern crate proc_macro;
use proc_macro::TokenStream;
use proc_macro2::Span;
#[proc_macro_derive(Thingy)]
pub fn derive(_: TokenStream) -> TokenStream {
let call_site = Span::call_site();
let use_everything = quote_spanned!(call_site=> __foo_use_everything!());
quote!(
mod a_unique_name {
mod foo {
#use_everything;
}
use self::foo::Foo;
}
).into()
}
```
The workaround here is to call `source` on the `call_site` span (in `proc_macro2` that looks like `Span::call_site().unstable().source().into()`), but this requirement seems really weird to me. For that matter, the need to give the macro invocation any particular span at all is really surprising to me. This is a `macro_rules!` macro, which by its very nature is non-hygienic and global. I think this invocation should work regardless of the span the macro name has. Even putting that aside though, it seems to me that a derive used inside a `macro_rules!` macro should behave basically the same as one without (e.g. `__diesel_use_everything!` should certainly resolve with `call_site` regardless of where it's used) | T-compiler,A-macros-2.0,C-bug,A-hygiene | low | Critical |
294,191,298 | opencv | cv::viz::vtkCloudMatSink::WriteData handles colors with 4 channels incorrectly. | <!--
If you have a question rather than reporting a bug please go to http://answers.opencv.org where you get much faster responses.
If you need further assistance please read [How To Contribute](https://github.com/opencv/opencv/wiki/How_to_contribute).
This is a template helping you to create an issue which can be processed as quickly as possible. This is the bug reporting section for the OpenCV library.
-->
##### System information (version)
<!-- Example
- OpenCV => 3.1
- Operating System / Platform => Windows 64 Bit
- Compiler => Visual Studio 2015
-->
- OpenCV => master
##### Detailed description
Refer to the [code][1]
```.cpp
Mat buffer(cloud.size(), CV_64FC(channels));
Vec3d *cptr = buffer.ptr<Vec3d>();
for(size_t i = 0; i < buffer.total(); ++i)
*cptr++ = Vec3d(normals_data->GetTuple((vtkIdType)i));
```
`channels` can be either 3 or 4, but the line
```.cpp
Vec3d *cptr = buffer.ptr<Vec3d>();
```
assumes it is 3.
The same applies to normals, see the [code][2]
[2]: https://github.com/opencv/opencv/blob/master/modules/viz/src/vtk/vtkCloudMatSink.cpp#L121
[1]: https://github.com/opencv/opencv/blob/master/modules/viz/src/vtk/vtkCloudMatSink.cpp#L101
<!-- your description -->
<!-- to add code example fence it with triple backticks and optional file extension
```.cpp
// C++ code example
```
or attach as .txt or .zip file
--> | bug,category: viz | low | Critical |
294,197,635 | opencv | Inconsistent color ordering in cv::viz::Mesh and cv::viz::WCloud | <!--
If you have a question rather than reporting a bug please go to http://answers.opencv.org where you get much faster responses.
If you need further assistance please read [How To Contribute](https://github.com/opencv/opencv/wiki/How_to_contribute).
This is a template helping you to create an issue which can be processed as quickly as possible. This is the bug reporting section for the OpenCV library.
-->
##### System information (version)
<!-- Example
- OpenCV => 3.1
- Operating System / Platform => Windows 64 Bit
- Compiler => Visual Studio 2015
-->
- OpenCV => master
When it loads the colors of a mesh from a PLY file, it stores them in RGB order; see the [code][1] here:
```.cpp
for(size_t i = 0; i < buffer.total(); ++i)
*cptr++ = Vec3d(scalars_data->GetTuple((vtkIdType)i));
```
`cv::viz::WMesh::WMesh` can be [initialized][2] from a `Mesh`, which
calls [cv::viz::vtkCloudMatSource::filterNanColorsCopy][3] and switches
the red and the blue channel
```.cpp
if (!isNan(mrow))
*pos++ = Vec3b(srow[2], srow[1], srow[0]);
```
The problem is that `cv::viz::vtkCloudMatSource::filterNanColorsCopy` is also
invoked inside [cv::viz::WCloud::WCloud][4]
```.cpp
cv::viz::WCloud::WCloud(InputArray cloud, const Color &color)
{
WCloud cloud_widget(cloud, Mat(cloud.size(), CV_8UC3, color));
*this = cloud_widget;
}
```
where `Color` is in BGR order.
[4]: https://github.com/opencv/opencv/blob/master/modules/viz/src/clouds.cpp#L57
[3]: https://github.com/opencv/opencv/blob/master/modules/viz/src/vtk/vtkCloudMatSource.cpp#L230
[2]: https://github.com/opencv/opencv/blob/master/modules/viz/src/clouds.cpp#L371
[1]: https://github.com/opencv/opencv/blob/master/modules/viz/src/vtk/vtkCloudMatSink.cpp#L103
<!-- your description -->
<!-- to add code example fence it with triple backticks and optional file extension
```.cpp
// C++ code example
```
or attach as .txt or .zip file
--> | bug,category: viz | low | Critical |
294,209,534 | go | x/mobile: gomobile Apps crash on Android 8 at "runtime/internal/atomic.Cas" | Please answer these questions before submitting your issue. Thanks!
### What version of Go are you using (`go version`)?
go1.9.3
### Does this issue reproduce with the latest release?
I just tested version 1.9.2 and 1.9.3, both of them have this issue.
### What operating system and processor architecture are you using (`go env`)?
The gomobile app crashes on Android 8.0.0~8.1.0. The processor architecture is armeabi-v7a on real devices. Actually, I got this issue from the crash reports of my Android app.
### What did you do?
If possible, provide a recipe for reproducing the error.
A complete runnable program is good.
A link on play.golang.org is best.
Maybe this issue is similar to #20409. I just got some limited information from the Android crash report.
### What did you expect to see?
What caused the crash and how to fix this?
### What did you see instead?
From crash report:
```
signal 31 (SIGSYS), code 1 (SYS_SECCOMP)
*** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
pid: 0, tid: 0 >>> com.github.dawndiy.bifrostv <<<
backtrace:
#00 pc 00000000003a4090 /data/app/com.github.dawndiy.bifrostv-cgCD2dB5iSPpbiG9ujWdPA==/lib/arm/libgojni.so (syscall.Syscall+40)
#01 pc 000000000033b000 /data/app/com.github.dawndiy.bifrostv-cgCD2dB5iSPpbiG9ujWdPA==/lib/arm/libgojni.so (runtime/internal/atomic.Cas+16)
```
The crash happens only on Android 8 devices, including Google Pixel, Pixel 2, and other Android 8 devices.
| mobile | low | Critical |
294,250,545 | rust | -Z time-llvm-passes prints info on incremental builds while doing non incremental builds | This is kind of weird, when building with ````CARGO_INCREMENTAL=0 RUSTFLAGS="-Z time-passes -Z time-llvm-passes"```` in release mode, rustc will still print
````
warning: The output of `-Z time-llvm-passes` will only reflect timings of re-translated modules when used with incremental compilation
````
while building a crate, although incremental build is explicitly disabled.
Why is it doing that?
````
rustc 1.25.0-nightly (3d292b793 2018-02-03)
cargo 0.26.0-nightly (1d6dfea44 2018-01-26)
```` | T-compiler,A-incr-comp,C-bug | low | Minor |
294,263,169 | rust | Compile time regression with _large_ number of slices | cc https://github.com/behnam/rust-unic/issues/199
The simplest way to show this is to use [`unic_ucd_name` 0.6.0](https://crates.io/crates/unic-ucd-name/0.6.0), which has two huge files for the Unicode code point Name property, [`name_map.rsv`](https://github.com/behnam/rust-unic/blob/v0.6.0/unic/ucd/name/tables/name_map.rsv) (1.56 MB of `('character', &[PARTS, OF, ITS, NAME]),`) and [`name_values.rsd`](https://github.com/behnam/rust-unic/blob/v0.6.0/unic/ucd/name/tables/name_values.rsd) (444 KB of `const NAME_BIT: &str = "NAME_BIT";`). The purpose is to generate a large binary-search slice from characters to their name while deduplicating _very_ common name fragments like `LETTER`.
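To give a sense of the shape of the generated code, here is a tiny hand-written excerpt in the same style (not the actual generated tables, which have tens of thousands of entries):
```rust
// Values file: one `const` per deduplicated name fragment.
const CAPITAL: &str = "CAPITAL";
const LATIN: &str = "LATIN";
const LETTER: &str = "LETTER";

// Map file: a huge sorted slice from characters to slices of name fragments,
// intended for binary search at runtime.
const NAME_MAP: &[(char, &[&str])] = &[
    ('A', &[LATIN, CAPITAL, LETTER, "A"]),
    ('B', &[LATIN, CAPITAL, LETTER, "B"]),
];
```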
With this small `main.rs`
```rust
extern crate unic_ucd_name;
fn main() {
let name = unic_ucd_name::Name::of('A').unwrap();
println!("{}", name);
}
```
and a non-existent `target` folder, I get the following compile times:
```
D:\Christopher\Documents\Code\Rust\playground>rustup update
info: syncing channel updates for 'stable-x86_64-pc-windows-msvc'
info: syncing channel updates for 'beta-x86_64-pc-windows-msvc'
info: syncing channel updates for 'nightly-x86_64-pc-windows-msvc'
info: checking for self-updates
stable-x86_64-pc-windows-msvc unchanged - rustc 1.23.0 (766bd11c8 2018-01-01)
beta-x86_64-pc-windows-msvc unchanged - rustc 1.24.0-beta.11 (03f456d3c 2018-02-03)
nightly-x86_64-pc-windows-msvc unchanged - rustc 1.25.0-nightly (0c6091fbd 2018-02-04)
D:\Christopher\Documents\Code\Rust\playground>cargo +beta build
Compiling unic-ucd-core v0.6.0
Compiling unic-char-range v0.6.0
Compiling unic-utils v0.6.0
Compiling unic-ucd-name v0.6.0
Compiling playground v0.0.0 (file:///D:/Christopher/Documents/Code/Rust/playground)
Finished dev [unoptimized + debuginfo] target(s) in 18.23 secs
D:\Christopher\Documents\Code\Rust\playground>cargo +nightly build
Compiling unic-char-range v0.6.0
Compiling unic-ucd-core v0.6.0
Compiling unic-utils v0.6.0
Compiling unic-ucd-name v0.6.0
Compiling playground v0.0.0 (file:///D:/Christopher/Documents/Code/Rust/playground)
Finished dev [unoptimized + debuginfo] target(s) in 139.35 secs
```
<details><summary><code>time-passes</code> for <code>unic_ucd_name</code></summary>
```
D:\Christopher\Documents\Code\Rust\rust-unic\unic\ucd\name>cargo +nightly rustc -- -Z time-passes
Compiling unic-char-range v0.6.0 (file:///D:/Christopher/Documents/Code/Rust/rust-unic/unic/char/range)
Compiling unic-ucd-core v0.6.0 (file:///D:/Christopher/Documents/Code/Rust/rust-unic/unic/ucd/core)
Compiling unic-utils v0.6.0 (file:///D:/Christopher/Documents/Code/Rust/rust-unic/unic/utils)
Compiling unic-ucd-name v0.6.0 (file:///D:/Christopher/Documents/Code/Rust/rust-unic/unic/ucd/name)
time: 0.009; rss: 17MB parsing
time: 0.000; rss: 18MB recursion limit
time: 0.000; rss: 18MB crate injection
time: 0.000; rss: 18MB plugin loading
time: 0.000; rss: 18MB background load prev dep-graph
time: 0.000; rss: 18MB plugin registration
time: 0.654; rss: 105MB expansion
time: 0.000; rss: 105MB maybe building test harness
time: 0.010; rss: 105MB maybe creating a macro crate
time: 0.032; rss: 105MB creating allocators
time: 0.010; rss: 105MB AST validation
time: 0.122; rss: 122MB name resolution
time: 0.027; rss: 122MB complete gated feature checking
time: 0.000; rss: 122MB blocked while dep-graph loading finishes
time: 0.145; rss: 170MB lowering ast -> hir
time: 0.056; rss: 170MB early lint checks
time: 0.178; rss: 179MB indexing hir
time: 0.000; rss: 122MB load query result cache
time: 0.000; rss: 122MB looking for entry point
time: 0.000; rss: 122MB looking for plugin registrar
time: 0.011; rss: 122MB loop checking
time: 0.040; rss: 122MB static item recursion checking
time: 0.041; rss: 135MB attribute checking
time: 0.063; rss: 144MB stability checking
time: 0.293; rss: 202MB type collecting
time: 0.002; rss: 202MB outlives testing
time: 0.000; rss: 202MB impl wf inference
time: 0.047; rss: 220MB coherence checking
time: 0.002; rss: 220MB variance testing
time: 0.248; rss: 261MB wf checking
time: 4.396; rss: 337MB item-types checking
time: 0.032; rss: 341MB item-bodies checking
time: 126.260; rss: 550MB const checking # !!!!!!!!!!
time: 0.183; rss: 551MB privacy checking
time: 0.025; rss: 551MB intrinsic checking
time: 0.046; rss: 552MB match checking
time: 0.011; rss: 552MB liveness checking
time: 0.554; rss: 561MB borrow checking
time: 0.026; rss: 564MB MIR borrow checking
time: 0.008; rss: 564MB MIR effect checking
time: 0.049; rss: 565MB death checking
time: 0.000; rss: 565MB unused lib feature checking
time: 0.166; rss: 565MB lint checking
time: 0.000; rss: 565MB resolving dependency formats
time: 1.136; rss: 615MB write metadata
time: 0.108; rss: 618MB translation item collection
time: 0.002; rss: 618MB codegen unit partitioning
time: 0.001; rss: 635MB llvm function passes [49a7n47po4ttqjl7]
time: 0.001; rss: 635MB llvm module passes [49a7n47po4ttqjl7]
time: 0.000; rss: 636MB llvm function passes [3ayaeypdcro9d6yk]
time: 0.000; rss: 636MB llvm module passes [3ayaeypdcro9d6yk]
time: 0.000; rss: 636MB llvm function passes [3kfx4ynvkmi2y9i5]
time: 0.000; rss: 636MB llvm module passes [3kfx4ynvkmi2y9i5]
time: 0.000; rss: 636MB llvm function passes [45nf4z58qqykpcpi]
time: 0.000; rss: 636MB llvm module passes [45nf4z58qqykpcpi]
time: 0.000; rss: 637MB llvm function passes [kt25z0521ngsjub]
time: 0.000; rss: 637MB llvm module passes [kt25z0521ngsjub]
time: 0.000; rss: 639MB llvm function passes [2ny9ynlpevlhfa8x]
time: 0.000; rss: 640MB llvm module passes [2ny9ynlpevlhfa8x]
time: 0.001; rss: 642MB llvm function passes [1im38lueib99jsk0]
time: 0.000; rss: 644MB llvm module passes [1im38lueib99jsk0]
time: 0.019; rss: 654MB codegen passes [3kfx4ynvkmi2y9i5]
time: 0.024; rss: 656MB codegen passes [3ayaeypdcro9d6yk]
time: 0.014; rss: 657MB codegen passes [45nf4z58qqykpcpi]
time: 0.016; rss: 658MB codegen passes [kt25z0521ngsjub]
time: 0.001; rss: 658MB llvm function passes [2lyh15q6cjwzy18c]
time: 0.000; rss: 659MB llvm module passes [2lyh15q6cjwzy18c]
time: 0.015; rss: 659MB codegen passes [2ny9ynlpevlhfa8x]
time: 0.036; rss: 659MB codegen passes [49a7n47po4ttqjl7]
time: 0.012; rss: 659MB codegen passes [1im38lueib99jsk0]
time: 0.010; rss: 659MB codegen passes [2lyh15q6cjwzy18c]
time: 0.000; rss: 677MB llvm function passes [4ypvbwho0bu5tnww]
time: 0.000; rss: 677MB llvm module passes [4ypvbwho0bu5tnww]
time: 0.000; rss: 677MB llvm function passes [43v6g0y2xsxoggnt]
time: 0.000; rss: 678MB llvm function passes [9elsx31vb4it187]
time: 0.000; rss: 678MB llvm function passes [16u6js6g0l3k1ic6]
time: 0.000; rss: 678MB llvm module passes [43v6g0y2xsxoggnt]
time: 0.000; rss: 678MB llvm function passes [3e8c0xfx7ikmlnfk]
time: 0.000; rss: 678MB llvm module passes [9elsx31vb4it187]
time: 0.000; rss: 680MB llvm function passes [9fcb3syd3ne5k0n]
time: 0.000; rss: 679MB llvm module passes [16u6js6g0l3k1ic6]
time: 0.000; rss: 679MB llvm function passes [8xzrsc1ux72v29j]
time: 0.000; rss: 679MB llvm module passes [3e8c0xfx7ikmlnfk]
time: 0.000; rss: 679MB llvm module passes [9fcb3syd3ne5k0n]
time: 0.000; rss: 679MB llvm module passes [8xzrsc1ux72v29j]
time: 0.017; rss: 679MB codegen passes [4ypvbwho0bu5tnww]
time: 0.013; rss: 679MB codegen passes [9elsx31vb4it187]
time: 0.018; rss: 679MB codegen passes [43v6g0y2xsxoggnt]
time: 0.000; rss: 679MB llvm function passes [c6lbtaiefvx3wya]
time: 0.016; rss: 679MB codegen passes [16u6js6g0l3k1ic6]
time: 0.000; rss: 679MB llvm module passes [c6lbtaiefvx3wya]
time: 0.000; rss: 679MB llvm function passes [2kjrmm4fe2aha78f]
time: 0.000; rss: 679MB llvm function passes [4ezmh1vbs95c5ack]
time: 0.013; rss: 679MB codegen passes [9fcb3syd3ne5k0n]
time: 0.000; rss: 679MB llvm module passes [2kjrmm4fe2aha78f]
time: 0.000; rss: 679MB llvm function passes [2jqywn86b2gsqohu]
time: 0.000; rss: 679MB llvm module passes [4ezmh1vbs95c5ack]
time: 0.015; rss: 679MB codegen passes [3e8c0xfx7ikmlnfk]
time: 0.000; rss: 679MB llvm function passes [4yh8x2b62dcih00t]
time: 0.000; rss: 681MB llvm module passes [4yh8x2b62dcih00t]
time: 0.017; rss: 681MB codegen passes [8xzrsc1ux72v29j]
time: 0.000; rss: 679MB llvm module passes [2jqywn86b2gsqohu]
time: 0.000; rss: 681MB llvm function passes [1mvmz58owquyropc]
time: 0.074; rss: 681MB llvm function passes [2iv7jmandrgcbb7e]
time: 0.000; rss: 681MB llvm module passes [1mvmz58owquyropc]
time: 0.000; rss: 681MB llvm function passes [4xq48u46a1pwiqn7]
time: 0.013; rss: 681MB codegen passes [c6lbtaiefvx3wya]
time: 0.000; rss: 681MB llvm module passes [2iv7jmandrgcbb7e]
time: 0.000; rss: 681MB llvm module passes [4xq48u46a1pwiqn7]
time: 0.000; rss: 677MB llvm function passes [48721dc4k5qxei0u]
time: 0.015; rss: 677MB codegen passes [2kjrmm4fe2aha78f]
time: 0.000; rss: 677MB llvm module passes [48721dc4k5qxei0u]
time: 0.000; rss: 677MB llvm function passes [98g0d9x8aw3akpe]
time: 0.000; rss: 677MB llvm module passes [98g0d9x8aw3akpe]
time: 0.022; rss: 677MB codegen passes [4ezmh1vbs95c5ack]
time: 0.023; rss: 677MB codegen passes [4yh8x2b62dcih00t]
time: 0.000; rss: 677MB llvm function passes [2f0hry2t7c05ttdi]
time: 0.000; rss: 677MB llvm module passes [2f0hry2t7c05ttdi]
time: 0.000; rss: 678MB llvm function passes [1dqvxks6k2bzkxe]
time: 0.018; rss: 678MB codegen passes [1mvmz58owquyropc]
time: 0.000; rss: 678MB llvm module passes [1dqvxks6k2bzkxe]
time: 0.019; rss: 678MB codegen passes [2jqywn86b2gsqohu]
time: 0.000; rss: 678MB llvm function passes [23tqyymcb18u96mb]
time: 0.000; rss: 678MB llvm module passes [23tqyymcb18u96mb]
time: 0.000; rss: 678MB llvm function passes [4jdnq7xfjeka1bt]
time: 0.000; rss: 678MB llvm module passes [4jdnq7xfjeka1bt]
time: 0.029; rss: 678MB codegen passes [4xq48u46a1pwiqn7]
time: 0.000; rss: 679MB llvm function passes [1y16o1qfye96o7m0]
time: 0.000; rss: 679MB llvm module passes [1y16o1qfye96o7m0]
time: 0.014; rss: 679MB codegen passes [48721dc4k5qxei0u]
time: 0.705; rss: 679MB translate to LLVM IR
time: 0.000; rss: 679MB assert dep graph
time: 0.000; rss: 679MB llvm function passes [v6ozwtpojmqfurc]
time: 0.000; rss: 679MB llvm module passes [v6ozwtpojmqfurc]
time: 0.012; rss: 679MB codegen passes [98g0d9x8aw3akpe]
time: 0.000; rss: 679MB llvm function passes [524bze3gcv99ucga]
time: 0.000; rss: 679MB llvm module passes [524bze3gcv99ucga]
time: 0.012; rss: 680MB codegen passes [2f0hry2t7c05ttdi]
time: 0.000; rss: 680MB llvm function passes [2r82puffnvvb8iic]
time: 0.000; rss: 680MB llvm module passes [2r82puffnvvb8iic]
time: 0.014; rss: 681MB codegen passes [1dqvxks6k2bzkxe]
time: 0.013; rss: 681MB codegen passes [4jdnq7xfjeka1bt]
time: 0.014; rss: 681MB codegen passes [23tqyymcb18u96mb]
time: 0.000; rss: 681MB llvm function passes [2xnvmuhjbhd7vxcm]
time: 0.000; rss: 681MB llvm module passes [2xnvmuhjbhd7vxcm]
time: 0.012; rss: 683MB codegen passes [1y16o1qfye96o7m0]
time: 0.010; rss: 683MB codegen passes [2r82puffnvvb8iic]
time: 0.011; rss: 683MB codegen passes [524bze3gcv99ucga]
time: 0.011; rss: 683MB codegen passes [v6ozwtpojmqfurc]
time: 0.006; rss: 686MB codegen passes [2xnvmuhjbhd7vxcm]
time: 0.543; rss: 690MB persist query result cache
time: 0.146; rss: 725MB persist dep-graph
time: 0.690; rss: 725MB serialize dep graph
time: 2.837; rss: 725MB translation
time: 1.168; rss: 275MB codegen passes [2iv7jmandrgcbb7e]
time: 2.562; rss: 237MB LLVM passes
time: 0.005; rss: 237MB serialize work products
time: 0.491; rss: 237MB linking
Finished dev [unoptimized + debuginfo] target(s) in 141.52 secs
```
</details>
(`x86_64-pc-windows-msvc`) | C-enhancement,E-needs-test,I-compiletime,T-compiler | low | Critical |
294,267,244 | flutter | How to make the screen moving with the Drawer? | Default:

This is what I want:

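One possible approach, as a rough sketch (my own, not from the original post — all widget names, colours and sizes are invented): skip `Scaffold`'s built-in drawer and drive both the menu and the page from a single `AnimationController`, so the page content slides to the right while the menu slides in.

```dart
import 'package:flutter/material.dart';

class SlidingDrawerPage extends StatefulWidget {
  @override
  _SlidingDrawerPageState createState() => new _SlidingDrawerPageState();
}

class _SlidingDrawerPageState extends State<SlidingDrawerPage>
    with SingleTickerProviderStateMixin {
  static const double _menuWidth = 250.0;
  AnimationController _controller;

  @override
  void initState() {
    super.initState();
    _controller = new AnimationController(
        vsync: this, duration: const Duration(milliseconds: 250));
  }

  @override
  void dispose() {
    _controller.dispose();
    super.dispose();
  }

  void _toggleMenu() {
    _controller.isDismissed ? _controller.forward() : _controller.reverse();
  }

  @override
  Widget build(BuildContext context) {
    return new AnimatedBuilder(
      animation: _controller,
      builder: (BuildContext context, Widget child) {
        // 0.0 = closed, 1.0 = fully open.
        final double offset = _menuWidth * _controller.value;
        return new Stack(
          children: <Widget>[
            // The menu slides in from the left...
            new Transform.translate(
              offset: new Offset(offset - _menuWidth, 0.0),
              child: new SizedBox(
                width: _menuWidth,
                child: new Container(color: Colors.blueGrey),
              ),
            ),
            // ...and the page content moves together with it instead of
            // being covered.
            new Transform.translate(
              offset: new Offset(offset, 0.0),
              child: new Scaffold(
                appBar: new AppBar(
                  leading: new IconButton(
                      icon: const Icon(Icons.menu), onPressed: _toggleMenu),
                  title: const Text('Demo'),
                ),
                body: const Center(child: const Text('Body moves with the menu')),
              ),
            ),
          ],
        );
      },
    );
  }
}
```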
| c: new feature,framework,a: animation,f: material design,a: tablet,P3,team-design,triaged-design | low | Major |
294,295,477 | rust | Make the way to get intel-formatted assembly from rustc more discoverable | As far as I can tell, nowhere in `rustc --help` does it say how to control the ASM syntax. I only found it by finding this random gist: https://gist.github.com/bluss/5a088d3f420d12406689439e2940d731
Brainstorming ways to make it more discoverable:
- add an `--emit=asm-intel` option
- document `-C "llvm-args=-x86-asm-syntax=intel"` in the help somewhere
- ??? | A-frontend,C-enhancement,T-compiler | low | Minor |
294,341,603 | opencv | cv::viz::readMesh should support OBJ format as well |
##### System information (version)
- OpenCV => master
##### Detailed description
See the [code][1]
```.cpp
cv::viz::Mesh cv::viz::readMesh(const String& file) { return Mesh::load(file); }
```
Since `Mesh::load()` supports both PLY and OBJ formats, see [here][2]:
```.cpp
/**
**File type** can be one of the following:
- **LOAD_PLY**
- **LOAD_OBJ**
*/
static Mesh load(const String& file, int type = LOAD_PLY);
```
Therefore, `cv::viz::readMesh` should support OBJ format as well.
[2]: https://github.com/opencv/opencv/blob/master/modules/viz/include/opencv2/viz/types.hpp#L145
[1]: https://github.com/opencv/opencv/blob/master/modules/viz/src/vizcore.cpp#L243
| feature,category: viz | low | Critical |
294,342,888 | kubernetes | StatefulSet support for clean decommission on scale down | /kind feature
StatefulSets are nice resources. But most stateful applications need a custom "decommission" procedure that must be executed (**correctly**) when an instance is permanently removed from the cluster.
**Examples:**
- When scaling down from say 5 pods to 4 pods, many datastores require that data contained in the 5th pod is rebalanced across the cluster
- Applications forming distributed clusters distinguish the case where an instance is temporarily unreachable from the case where the instance is administratively removed with a "scale down": in the second case, a special "decommission" procedure has to take place
- In other cases (e.g. transactions), data stored in the stateful set is "temporary" (although it needs to be persisted across restarts), so the application developer wants to put the application into a "terminating" state and wait for all data to be flushed to external resources before allowing "decommission" of the instance
The **problem** in Kubernetes is twofold:
**1) Pods that are requested to terminate (SIGTERM) are not informed by Kubernetes whether it's a permanent termination or a temporary** one (i.e. whether the stateful pod will be removed from the set or recreated on another node)
**2) Kubernetes does not require the decommissioned pod to terminate gracefully for the scale down operation to succeed:** if the **termination grace period** is not enough for the cleanup operation to complete, or also if the docker container is killed during the termination phase, the pod is considered terminated and the set is scaled down, but the "decommission" operation may be incomplete
Currently, applications need mechanisms to auto-detect such scenarios (when a pod is not reachable for some time, it is considered gone). I've also seen people create custom "controller" pods that monitor such events in the cluster and start a custom "recovery" pod to clean up data left behind.
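For reference, the closest thing available today is a `preStop` hook plus a termination grace period on the pod template — which illustrates point 2) above: nothing forces the scale-down to wait for the hook to finish successfully. (Abridged sketch; the image and script names are invented.)

```yaml
apiVersion: apps/v1
kind: StatefulSet
spec:
  template:
    spec:
      terminationGracePeriodSeconds: 600
      containers:
        - name: datastore
          image: example/datastore:1.0          # invented
          lifecycle:
            preStop:
              exec:
                # If this takes longer than the grace period, the pod is
                # killed anyway and the scale-down still "succeeds".
                command: ["/bin/sh", "-c", "/scripts/drain-and-rebalance.sh"]
```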
It would be nice if StatefulSets had a hook to detect *decommission* vs. *termination* and allowed plugging in a different behavior on graceful termination in case of decommission. | kind/feature,sig/apps,lifecycle/frozen | medium | Critical |
294,546,953 | go | x/text: ISO8859_1 charmap does not map invalid bytes to replacement character | Please answer these questions before submitting your issue. Thanks!
### What version of Go are you using (`go version`)?
1.9.1 linux/amd64
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
linux, amd64
### What did you do?
https://play.golang.org/p/Z5uLLszLkXm
### What did you expect to see?
Byte values which are not valid ISO-8859-1 characters should map to the Unicode replacement character (0xfffd) when decoding.
### What did you see instead?
Byte values which are not valid characters map to the Unicode code point with the same value as the original byte.
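A self-contained version of the check (the playground link above is not reproduced here; 0x81 — a C1 control position — is used as an example of a byte the report treats as invalid):

```go
package main

import (
	"fmt"

	"golang.org/x/text/encoding/charmap"
)

func main() {
	dec := charmap.ISO8859_1.NewDecoder()
	out, err := dec.Bytes([]byte{0x81})
	// Prints "\u0081" <nil>: the byte is passed through as U+0081
	// instead of being replaced with U+FFFD.
	fmt.Printf("%q %v\n", out, err)
}
```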
| NeedsInvestigation | low | Minor |
294,549,985 | go | net/http: new HTTP client package | Tracking bug for higher-level more usable HTTP client functionality.
* take contexts
* more timeout control per-request, rather than per-transport/client
* ...Options pattern?
* decode into JSON/etc easily?
* treat non-2xx as error (by default?).
* slurp entire response from server (by default?)
* control over max response size (some sane default limit when slurping is enabled)
I thought we had a bug for this, but maybe not.
| Thinking,NeedsInvestigation | high | Critical |
294,565,163 | pytorch | Delete obsolete `THCDeviceTensor::downcastOuter` / `THCDeviceTensor::downcastInner` functions | This is a reminder to self to do this.
Also check to make sure nothing calls them anymore. I believe there are a lot of helper functions that call them but all those helper functions are unused.
cc @ngimel | module: bootcamp,module: cuda,triaged,small,better-engineering | low | Minor |
294,574,729 | TypeScript | --esModuleInterop: Strange incompatibility between correct and incorrect import of same type | **TypeScript Version:** master
**Code**
**b.d.ts**
```ts
declare class C {
f: this;
}
declare namespace C {}
export = C;
```
**a.ts**
```ts
import * as C1 from "./b";
import C2 = require("./b");
function f(t: (c: C1) => void): void {
t(new C2());
}
```
**Expected behavior:**
Error at `import * as C1`.
**Actual behavior:**
No error at the import declaration. Error at `new C2()`:
```ts
src/a.ts(5,7): error TS2345: Argument of type 'C' is not assignable to parameter of type 'C'.
Types of property 'f' are incompatible.
Type 'C' is not assignable to type 'this'.
```
(possibly related: #20704) | Suggestion,In Discussion,Domain: Error Messages,Experience Enhancement | low | Critical |
294,589,058 | TypeScript | esmoduleinterop: Improve error message when calling static method on class | **TypeScript Version:** master
**Code**
**b.d.ts**:
```ts
declare class C { static m(): void }
declare namespace C {}
export = C;
```
```ts
import * as C from "./b";
C.m();
```
**Expected behavior:**
```
src/a.ts(1,1): error TS7038: A namespace-style import will import a module namespace object, not a class.
src/a.ts(2,3): error TS2339: Property 'm' does not exist on type 'typeof C'.
```
**Actual behavior:**
`src/a.ts(2,3): error TS2339: Property 'm' does not exist on type 'typeof C'.` | Suggestion,Help Wanted,Domain: Error Messages | low | Critical |
294,593,811 | create-react-app | different default HOST in development mode from webpack-dev-server (0.0.0.0 vs localhost) | The [default host](https://github.com/facebook/create-react-app/blob/next/packages/react-scripts/scripts/start.js#L59) in the start script is `0.0.0.0`. This differs from the default of webpack-dev-server which is to bind to localhost.
I found this out by reading the output of `npm run dev`:
```
You can now view app in the browser.
Local: http://localhost:3001/
On Your Network: http://192.168.43.210:3001/
Note that the development build is not optimized.
To create a production build, use yarn build.
```
I hadn't noticed it much before, but I happened to be in a coffee shop working on an API client, so I found it a bit concerning that it was being exposed to the outside. The use case given is for Tools like cloud9. I suggest having them alter their `package.json` to `"start": "HOST=0.0.0.0 react-scripts start"` or something instead of having the unusual use case made the default.
If nothing else, if the default can be changed, the message shown when running `npm run dev` can be simplified. If you have the typical use case of using it for dev only on your machine it's unnecessary, and if you're using ngrok it's a bit redundant. | issue: proposal | low | Major |
294,600,545 | TypeScript | `this` types in intrinsic class attributes not inferred correctly | **Code**
```ts
// @strict: true
// @jsx: preserve
namespace JSX {
export interface Element {}
export interface IntrinsicClassAttributes<TClass> {
ref?: (ref: TClass) => void;
acceptProps?: (props: this) => boolean;
key: string;
}
export interface ElementClass extends Element {}
export interface ElementAttributesProperty { props: {}; }
export interface ElementChildrenAttribute { children: {}; }
export interface IntrinsicAttributes {}
export interface IntrinsicElements { [key: string]: Element }
}
class ElemClass<T extends {x: number}> implements JSX.ElementClass {
constructor(public props: T) {}
}
const elem = <ElemClass x={12} y={24} key="elem" ref={me => void me.props.y} acceptProps={p => p.ref(null as any) || void p.key || void p.y} />
```
**Expected behavior:**
no errors
**Actual behavior:**
```
const elem = <ElemClass x={12} y={24} key="elem" ref={me => void me.props.y} acceptProps={p => p.ref(null as any) || void p.key || void p.y} />
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
!!! error TS2322: Type '{ x: number; y: number; key: string; ref: (me: ElemClass<any>) => undefined; acceptProps: (p: IntrinsicAttributes & IntrinsicClassAttributes<ElemClass<any>> & any) => undefined; }' is not assignable to type 'IntrinsicAttributes & IntrinsicClassAttributes<ElemClass<{ x: number; y: number; key: string; ref: {}; acceptProps: {}; }>> & { x: number; y: number; key: string; ref: {}; acceptProps: {}; }'.
!!! error TS2322: Type '{ x: number; y: number; key: string; ref: (me: ElemClass<any>) => undefined; acceptProps: (p: IntrinsicAttributes & IntrinsicClassAttributes<ElemClass<any>> & any) => undefined; }' is not assignable to type 'IntrinsicClassAttributes<ElemClass<{ x: number; y: number; key: string; ref: {}; acceptProps: {}; }>>'.
!!! error TS2322: Types of property 'acceptProps' are incompatible.
!!! error TS2322: Type '(p: IntrinsicAttributes & IntrinsicClassAttributes<ElemClass<{ x: number; y: number; key: string; ref: {}; acceptProps: {}; }>> & { x: number; y: number; key: string; ref: {}; acceptProps: {}; }) => undefined' is not assignable to type '((props: IntrinsicClassAttributes<ElemClass<{ x: number; y: number; key: string; ref: {}; acceptProps: {}; }>>) => boolean) | undefined'.
!!! error TS2322: Type '(p: IntrinsicAttributes & IntrinsicClassAttributes<ElemClass<{ x: number; y: number; key: string; ref: {}; acceptProps: {}; }>> & { x: number; y: number; key: string; ref: {}; acceptProps: {}; }) => undefined' is not assignable to type '(props: IntrinsicClassAttributes<ElemClass<{ x: number; y: number; key: string; ref: {}; acceptProps: {}; }>>) => boolean'.
!!! error TS2322: Types of parameters 'p' and 'props' are incompatible.
!!! error TS2322: Type 'IntrinsicClassAttributes<ElemClass<{ x: number; y: number; key: string; ref: {}; acceptProps: {}; }>>' is not assignable to type 'IntrinsicAttributes & IntrinsicClassAttributes<ElemClass<{ x: number; y: number; key: string; ref: {}; acceptProps: {}; }>> & { x: number; y: number; key: string; ref: {}; acceptProps: {}; }'.
!!! error TS2322: Type 'IntrinsicClassAttributes<ElemClass<{ x: number; y: number; key: string; ref: {}; acceptProps: {}; }>>' is not assignable to type '{ x: number; y: number; key: string; ref: {}; acceptProps: {}; }'.
!!! error TS2322: Property 'x' is missing in type 'IntrinsicClassAttributes<ElemClass<{ x: number; y: number; key: string; ref: {}; acceptProps: {}; }>>'.
```
Unlike I initially expected, there's only an issue with `this` types - `this` types are formulated from contextual types, so fixing this _probably_ just requires tightening the new jsx inference code to instantiate the type with the correct `this`. | Bug,Domain: JSX/TSX | low | Critical |
294,621,974 | rust | -Z time-llvm-passes prints no info for llvm passes during LTO | There is information printed when building the individual crates, but when linking everything together and applying llvm passes to the entire program, no information is given:
````
time: 0.210; rss: 607MB ll link "allocator"
time: 52.029; rss: 693MB LTO passes
time: 87.689; rss: 693MB codegen passes [alacritty3]
time: 193.546; rss: 262MB LLVM passes
time: 0.000; rss: 254MB serialize work products
time: 0.000; rss: 253MB altering alacritty-664aed616dcaaec4.rlib
````
It might be neat to have that. | A-LLVM,C-enhancement,T-compiler | low | Minor |
294,623,367 | pytorch | NVIDIA_DRIVER_CAPABILITIES env variable is missing in pytorch docker images | I use pytorch:v0.2 from Docker Hub, and I found that the NVIDIA_DRIVER_CAPABILITIES env variable is missing:
```
root@pytorch:/workspace# echo $NVIDIA_DRIVER_CAPABILITIES
```
but when I use the base image of pytorch `nvidia/cuda:8.0-cudnn6-devel-ubuntu16.04` according to the Dockerfile https://github.com/pytorch/pytorch/blob/master/Dockerfile, the result is correct.
```
root@cuda8:/# echo $NVIDIA_DRIVER_CAPABILITIES
compute,utility
```
Could anyone help check this?
Some information:
```
$ docker images pytorch/pytorch:v0.2
REPOSITORY TAG IMAGE ID CREATED SIZE
pytorch/pytorch v0.2 4f6a5501c844 5 months ago 3.24GB
$ docker images nvidia/cuda:8.0-cudnn6-devel-ubuntu16.04
REPOSITORY TAG IMAGE ID CREATED SIZE
nvidia/cuda 8.0-cudnn6-devel-ubuntu16.04 ad0e48bad1f2 7 days ago 1.97GB
```
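If the variable is simply being lost when the pytorch image is built, a one-line re-declaration in its Dockerfile would presumably restore it (an untested guess on my part):

```dockerfile
# Re-declare what the nvidia/cuda base image already sets, in case it is
# dropped somewhere in the pytorch image build.
ENV NVIDIA_DRIVER_CAPABILITIES compute,utility
```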
| triaged,module: docker | low | Minor |
294,634,141 | flutter | Get dimensions of widget after layout without render | ## Steps to Reproduce
At the moment we read out the size of the widget in the build method with a delay, in order to get the correct size once the build update is done. That looks rather like a hack and has to be called
twice in order to work (see code). I would like to see a safe callback (override) in the derived State class where I can safely read the new size after a build update. Maybe I just did not find it. The use case here is that two widgets in a stack depend on their position (height), but they are not laid out together.
```dart
@override
Widget build(BuildContext context) {
...
new Future.delayed(
const Duration(milliseconds: 60),
() => setState(() {
textComposerWidgetHeight = context.size.height + 70;
})).then(
(_) => new Future.delayed(
const Duration(milliseconds: 60),
() => setState(() {
textComposerWidgetHeight = context.size.height + 70;
}),
),
        );
    // ... rest of the build method ...
  }
```
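For comparison, a sketch (my own; it assumes the snippet above lives in a `State` subclass with a `textComposerWidgetHeight` field and a hypothetical `buildContent` helper) that uses a post-frame callback instead of arbitrary delays — it fires once the current frame has been laid out, so `context.size` is already up to date:

```dart
@override
Widget build(BuildContext context) {
  WidgetsBinding.instance.addPostFrameCallback((_) {
    // Runs after this frame has been laid out, so context.size is valid here.
    final double newHeight = context.size.height + 70.0;
    if (newHeight != textComposerWidgetHeight) {
      // Guard so we don't schedule an endless rebuild loop.
      setState(() {
        textComposerWidgetHeight = newHeight;
      });
    }
  });
  return buildContent(context); // hypothetical: whatever the original build returned
}
```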
## Flutter Doctor
[โ] Flutter (on Microsoft Windows [Version 10.0.16299.214], locale en-US, channel dev)
โข Flutter version 0.0.21 at c:\sdks\flutter
โข Framework revision 2e449f06f0 (7 days ago), 2018-01-29 14:26:51 -0800
โข Engine revision 6921873c71
โข Tools Dart version 2.0.0-dev.16.0
โข Engine Dart version 2.0.0-edge.da1f52592ef73fe3afa485385cb995b9aec0181a
[โ] Android toolchain - develop for Android devices (Android SDK 27.0.2)
โข Android SDK at C:\Users\ride4\AppData\Local\Android\sdk
โข Android NDK location not configured (optional; useful for native profiling support)
โข Platform android-27, build-tools 27.0.2
โข Java binary at: C:\Program Files\Android\Android Studio\jre\bin\java
โข Java version OpenJDK Runtime Environment (build 1.8.0_152-release-915-b01)
[โ] Android Studio (version 3.0)
โข Android Studio at C:\Program Files\Android\Android Studio
โข Java version OpenJDK Runtime Environment (build 1.8.0_152-release-915-b01)
[โ] IntelliJ IDEA Community Edition (version 2017.2)
โข Flutter plugin version 19.1
โข Dart plugin version 172.4343.25
[โ] Connected devices
โข Android SDK built for x86 โข emulator-5556 โข android-x86 โข Android 7.1.1 (API 25) (emulator)
| c: new feature,framework,c: proposal,P3,team-framework,triaged-framework | high | Major |
294,698,602 | vscode | Provide ability to ignore all whitespace in diff editor (feature request) | Steps to Reproduce:
1. In settings it's possible to set `"diffEditor.ignoreTrimWhitespace": true`
2. There's no option to ignore all whitespace.
Would it be possible to add `"diffEditor.ignoreAllSpaces": true`? I believe the command-line to achieve this would be something like `git diff --ignore-space-at-eol -b -w [commit]...`
h/t [Daniel Gomez @ coderwall](https://coderwall.com/p/crj69a/from-a-useless-git-diff-to-a-useful-one)
[edit]
Might only need `-w` ?
[/edit] | feature-request,diff-editor | high | Critical |
294,729,425 | electron | Clipboard: allow to write to buffer from clipboard.write() method | * Electron version: 1.7.x
* Operating system: macOS
### Expected behavior
The method [`clipboard.write()`](https://github.com/electron/electron/blob/master/docs/api/clipboard.md#clipboardwritedata-type) accepts a data object but does not allow writing a custom format with a `Buffer` (like `clipboard.writeBuffer()` does).
It would be great if this method could be changed to something like:
```javascript
clipboard.write({
text: "some text",
myFormat: new Buffer("some format")
});
```
My understanding is that the advantage of `clipboard.write()` is that I can set multiple clipboard contents at once so that the one format does not overwrite any other. Today, when I want to use `clipboard.writeBuffer()`, I cannot use it together with the other formats. | enhancement :sparkles: | medium | Critical |
294,808,173 | node | listening to sigint don't exit nicely | * **Version**: master (83c93158fb0e979dbffb4a776d237da0db8f7b08)
* **Platform**: MacOS
* **Subsystem**: `process`, `trace_events`
Run the following program, hit Ctrl+C. I would then expect `got SIGINT` and `exit` to be printed. It prints neither.
The problem that I'm really having is that if I add `--trace-events-enabled --trace-event-categories node.async_hooks` to `spawn`, then it doesn't flush the `trace_events`. I don't think this is a duplicate of https://github.com/nodejs/node/issues/14802, as the `SIGINT` handler is supposed to shut down the process nicely. But the signal handler simply doesn't execute.
```js
const fs = require('fs');
const { spawn } = require('child_process');
const CODE = `
const http = require('http');
// Keep the process alive
const server = http.createServer();
server.listen();
process.once('SIGINT', function () {
console.log('got SIGINT');
server.close();
});
process.once('exit', function () {
console.log('exit');
});
`;
const proc = spawn(process.execPath, ['-e', CODE], {
stdio: 'inherit'
});
// relay SIGINT to process
process.once('SIGINT', function () {
proc.kill('SIGINT');
});
``` | child_process | medium | Critical |
294,823,322 | vscode | Add command to select all next occurrences for all cursors | Consider the following:
<img width="160" alt="screen shot 2018-02-06 at 8 23 26 am" src="https://user-images.githubusercontent.com/2193314/35870688-207a2aa2-0b17-11e8-92f7-1cde31a3eeef.png">
Pressing cmd+d adds the quote to the right of `"Meta`:
<img width="181" alt="screen shot 2018-02-06 at 8 24 49 am" src="https://user-images.githubusercontent.com/2193314/35870729-3af10752-0b17-11e8-81c0-2ff658d3e3d9.png">
Pressing it again does nothing, presumably because it saw that the next quote was already selected.
This was the first time I did this, but I expected cmd+d to select the next item of each of the selections. | feature-request,editor-multicursor | low | Major |
294,828,456 | rust | Non-items dropped in custom derive | Hi.
It looks like when you generate non-items in a custom derive, they are silently dropped.
For instance, with the following custom derive:
```rust
#[proc_macro_derive(HelloWorld)]
pub fn hello_world(input: TokenStream) -> TokenStream {
let ast = syn::parse(input).unwrap();
let gen = impl_hello_world(&ast);
gen.into()
}
fn impl_hello_world(ast: &syn::DeriveInput) -> quote::Tokens {
let name = &ast.ident;
quote! {{
fn hello_world() {
println!("Hello, World! My name is {}", stringify!(#name));
}
hello_world();
}}
}
```
and the following code:
```rust
fn main() {
#[derive(HelloWorld)]
struct _Toto {
}
}
```
nothing is printed on the screen.
So, is it normal that non-items are dropped?
Could they be generated as well if it makes sense (like in a struct that is defined within a function)?
If not, it would be nice to have at least a warning when this happens.
Thanks.
cc @jseyfried @alexcrichton @nikomatsakis | C-enhancement,A-diagnostics,T-compiler,A-macros-2.0 | low | Minor |
294,872,557 | go | x/build/maintner: wrong type for DismissedReview.State | The [docs](https://developer.github.com/v3/issues/events/) say that it's a string with values `commented`, `approved`, `changes_requested`
https://github.com/golang/build/blob/438dce5f3d2fd35b54f4d5431e53e2029c40fffe/maintner/github.go#L466-L472
https://github.com/golang/build/blob/438dce5f3d2fd35b54f4d5431e53e2029c40fffe/maintner/maintpb/maintner.proto#L144-L151
I can send a change to fix it. I think it'd have no effect on existing code, since if people were using maintner with repos that had GitHub reviews, then they'd already have experienced panics. (I'd add it as a new field and remove the existing one + reserve the proto id) | Builders,NeedsInvestigation | low | Minor |
294,920,456 | TypeScript | Decouple jsx element type from jsx factory return type and sfc return type | We need to look up the type of a jsx expression by actually resolving the jsx factory call, so that we don't create a reference to the global `JSX.Element` type, which can change shape between react versions (as it needs to in the react 16 upgrade). We also need to resolve the sfc return type and class element type from the parameters of the factory function overloads for the same reasons, doubly so because the types allowable as `render` method and `SFC` return values are no longer the same as `JSX.Element` (namely, they can be strings, arrays, portals, etc).
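A small illustration of the kind of code this is about (sketch only — whether these forms type-check is exactly what is at stake here):

```tsx
// With React 16-style typings, a function component may return values that
// are not elements at all:
const Title = () => "just a string";                      // string
const Pair = () => [<span key="a" />, <span key="b" />];  // array of elements

// The types of these JSX expressions should come from resolving the factory
// call (React.createElement) rather than from the fixed global JSX.Element.
const a = <Title />;
const b = <Pair />;
```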
This _might_ be considered a breaking change, because some consumers may expect `JSX.Element` to always be a supertype of both jsx element expression return types and SFC return types (even though this isn't true in react 16) - we certainly made that assumption internally, hence the need for the change. ๐ฑ | Suggestion,Breaking Change,Effort: Moderate,Domain: JSX/TSX,Fix Available | high | Critical |
294,937,022 | TypeScript | Suggestion: go-to-definition should go to a base definition if possible. | **TypeScript Version:** master
**Code**
```ts
class A {
m() {}
}
class B extends A {
m() {}
}
```
**Expected behavior:**
Go-to-definition on `m` in `class B` should go to `m` in `class A`. (Same if `A` were an interface.)
**Actual behavior:**
I get taken to the beginning of the current identifier. | Suggestion,In Discussion,Domain: Symbol Navigation | low | Minor |
294,937,498 | youtube-dl | FOX.COM LOGIN ERROR |
### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.02.04*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [ ] I've **verified** and **I assure** that I'm running youtube-dl **2018.02.04**
### Before submitting an *issue* make sure you have:
- [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
- [x] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser
### What is the purpose of your *issue*?
- [ ] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [x] Question
- [ ] Other
---
### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your *issue*
---
C:\youtube>youtube-dl -v https://www.fox.com/watch/2a7ca0c7488270db9af5f09f72308
d8a --ap-mso ATTOTT --ap-username [email protected] --ap-password xxxxx -
-skip-download --all-subs
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['-v', 'https://www.fox.com/watch/2a7ca0c7488270db9af
5f09f72308d8a', '--ap-mso', 'ATTOTT', '--ap-username', 'PRIVATE', '--ap-password
', 'PRIVATE', '--skip-download', '--all-subs']
[debug] Encodings: locale cp1252, fs mbcs, out cp437, pref cp1252
[debug] youtube-dl version 2018.02.04
[debug] Python version 3.4.4 (CPython) - Windows-7-6.1.7601-SP1
[debug] exe versions: none
[debug] Proxy map: {}
[FOX] 2a7ca0c7488270db9af5f09f72308d8a: Downloading JSON metadata
[FOX] 2a7ca0c7488270db9af5f09f72308d8a: Downloading Provider Redirect Page
[FOX] 2a7ca0c7488270db9af5f09f72308d8a: Downloading Provider Redirect Page (meta
refresh)
[FOX] 2a7ca0c7488270db9af5f09f72308d8a: Downloading Provider Login Page
[FOX] 2a7ca0c7488270db9af5f09f72308d8a: Logging in
[FOX] 2a7ca0c7488270db9af5f09f72308d8a: Confirming Login
[FOX] 2a7ca0c7488270db9af5f09f72308d8a: Retrieving Session
ERROR: Unable to download webpage: HTTP Error 401: Unauthorized (caused by HTTPE
rror()); please report this issue on https://yt-dl.org/bug . Make sure you are u
sing the latest version; type youtube-dl -U to update. Be sure to call youtube
-dl with the --verbose flag and include its complete output.
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpww9373dy\bu
ild\youtube_dl\extractor\common.py", line 519, in _request_webpage
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpww9373dy\bu
ild\youtube_dl\YoutubeDL.py", line 2199, in urlopen
File "C:\Python\Python34\lib\urllib\request.py", line 470, in open
File "C:\Python\Python34\lib\urllib\request.py", line 580, in http_response
File "C:\Python\Python34\lib\urllib\request.py", line 508, in error
File "C:\Python\Python34\lib\urllib\request.py", line 442, in _call_chain
File "C:\Python\Python34\lib\urllib\request.py", line 588, in http_error_defau
lt
| geo-restricted,tv-provider-account-needed | low | Critical |
294,984,026 | pytorch | [feature request] Add an env variable to cover different paths when testing code with openmp | I have discussed the problem with @fmassa before. Refer to #4824 and #4188.
Because of the OpenMP overhead threshold, the code actually has 2 paths that need to be tested. However, the tensors used in test cases are usually small in order to keep tests fast. So the current code is potentially dangerous because the multi-threaded path may never be tested. It's necessary to add an env variable to control the code paths. I think we can start with the TH and THNN modules, which use OpenMP intensively. I hope that the folks in charge of the test cases could also follow it once the env variable is ready.
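To make the request concrete, here is a purely hypothetical sketch (the variable name and the threshold are invented) of how such an override could look in an OpenMP'd TH-style kernel:

```c
#include <stdlib.h>

/* Hypothetical: let tests force the OpenMP path even for small tensors,
 * so both the serial and the parallel branch get exercised. */
static int th_force_openmp(void) {
  const char *v = getenv("TH_FORCE_OPENMP");  /* invented name */
  return v != NULL && v[0] == '1';
}

void add_scaled(float *y, const float *x, float a, long n) {
  long i;
  /* Normally the parallel path is only taken above an overhead threshold. */
#pragma omp parallel for if (n > 100000 || th_force_openmp())
  for (i = 0; i < n; i++) {
    y[i] += a * x[i];
  }
}
```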
cc @mruberry @VitalyFedyunin | module: tests,triaged,better-engineering | low | Minor |
294,997,666 | rust | Tracking issue for `try_reserve`: RFC 2116 fallible collection allocation | This is a tracking issue for the `try_reserve` part of the RFC "fallible collection allocation" (rust-lang/rfcs#2116).
**Steps:**
- [x] Implement the RFC #48648
- [x] Add `HashSet::try_reserve`: https://github.com/rust-lang/rust/pull/58623
- [x] Finalize the error type
- [ ] Adjust documentation ([see instructions on forge][doc-guide])
- [ ] Stabilization PR ([see instructions on forge][stabilization-guide])
[stabilization-guide]: https://forge.rust-lang.org/stabilization-guide.html
[doc-guide]: https://forge.rust-lang.org/stabilization-guide.html#updating-documentation
**API:**
```rust
impl /* each of String, Vec<T>, VecDeque<T> */ {
pub fn try_reserve(&mut self, additional: usize) -> Result<(), TryReserveError> {โฆ}
pub fn try_reserve_exact(&mut self, additional: usize) -> Result<(), TryReserveError> {โฆ}
}
impl /* each of HashMap<K, V> and HashSet<T> */ {
pub fn try_reserve(&mut self, additional: usize) -> Result<(), TryReserveError> {โฆ}
}
/// The error type for `try_reserve` methods.
#[derive(Clone, PartialEq, Eq, Debug)]
pub enum TryReserveError { // in std::collections
/// Error due to the computed capacity exceeding the collection's maximum
/// (usually `isize::MAX` bytes).
CapacityOverflow,
/// The memory allocator returned an error
AllocError {
/// The layout of allocation request that failed
layout: Layout,
#[doc(hidden)]
#[unstable(feature = "container_error_extra", issue = "0", reason = "\
Enable exposing the allocatorโs custom error value \
if an associated type is added in the future: \
https://github.com/rust-lang/wg-allocators/issues/23")]
non_exhaustive: (),
},
}
impl From<LayoutErr> for TryReserveError {
fn from(_: LayoutErr) -> Self {
TryReserveError::CapacityOverflow
}
}
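
// ---------------------------------------------------------------------------
// Usage sketch (illustrative only, not part of the proposed API above): with
// `#![feature(try_reserve)]` on nightly, allocation failure becomes a value
// that can be handled instead of an abort.
fn read_payload(payload: &[u8]) -> Result<Vec<u8>, TryReserveError> {
    let mut buf = Vec::new();
    // Err(CapacityOverflow) or Err(AllocError { .. }) instead of aborting.
    buf.try_reserve(payload.len())?;
    buf.extend_from_slice(payload);
    Ok(buf)
}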
``` | A-allocators,A-collections,T-libs-api,B-unstable,B-RFC-implemented,C-tracking-issue,Libs-Tracked | high | Critical |
295,106,125 | pytorch | Saving model with runtime code changes | - OS: Ubuntu 16.04
- PyTorch version: 0.3.0.post4
- How you installed PyTorch (conda, pip, source): pip
- Python version: python 3.5.2
There is a bug in saving and loading the model with torch.save(...) and torch.load(...).
Workflow of bug:
1. Create neural network code and start learning loop. Model gets saved every 100 epoch with torch.save(...).
2. Change the neural network code and save code, while the learning loop from step 1 is still running.
3. Let training loop run until the next save of neural network.
4. Loading the last save with the neural network code from step 1 causes a warning (SourceChangeWarning). Also, if you open the save file, it contains the changed code from step 2 instead of the code from step 1.
Short:
If you change the code of the neural network at runtime, the wrong code copy is saved into the torch.save(...) file, which causes a warning when loading it with the original network code again.
Loading code:
`self.optimizeNet = torch.load(pacPretrained, map_location=lambda storage, loc: storage)`
Saving code:
`torch.save(self.optimizeNet, "saves2/"+str(ep)+'th-episode.pt')`
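A sketch of a possible workaround (not from the original report): save and load only the `state_dict`, so no class source is pickled and no `SourceChangeWarning` can occur. It assumes the `DQN` class below can be constructed fresh at load time.

```python
# Saving: parameters only, no class source involved.
torch.save(self.optimizeNet.state_dict(), "saves2/" + str(ep) + 'th-episode.pt')

# Loading:
model = DQN()
model.load_state_dict(
    torch.load(pacPretrained, map_location=lambda storage, loc: storage))
```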
Neural network:
```
class DQN(nn.Module):
def __init__(self):
super(DQN, self).__init__()
#smallGrid = kernel_size=3, stride=1
self.conv1 = nn.Conv2d(3, 16, kernel_size=3, stride=1)
self.bn1 = nn.BatchNorm2d(16)
self.conv2 = nn.Conv2d(16, 32, kernel_size=3, stride=1)
self.bn2 = nn.BatchNorm2d(32)
self.conv3 = nn.Conv2d(32, 32, kernel_size=3, stride=1)
self.bn3 = nn.BatchNorm2d(32)
self.head = nn.Linear(1088, 512)
self.head2 = nn.Linear(513, 256)
self.head3 = nn.Linear(256, 5)
self.lastLayer = 3
def forward(self, x, numCarry):
x = F.relu(self.bn1(self.conv1(x)))
x = F.relu(self.bn2(self.conv2(x)))
x = F.relu(self.bn3(self.conv3(x)))
#print(x.shape)
#print((x.view(x.size(0), -1)).shape)
try:
x = F.relu(self.head(x.view(x.size(0), -1)))
except:
raise ValueError("Dimension should be "+str((x.view(x.size(0), -1)).shape))
y = torch.cat((x,numCarry.unsqueeze(0).t()), dim=1)
y = F.relu(self.head2(y))
y = self.head3(y)
return y
``` | module: serialization,triaged | low | Critical |
295,211,864 | vscode | [css] Lab colors and other CSS Color Module Level 4 features | See the [CSS Color Module Level 4](https://drafts.csswg.org/css-color/#specifying-lab-lch) specification.
While the specification is still a draft and browsers don't support these features yet, there already exists a [PostCSS plugin](https://github.com/jonathantneal/postcss-color-lab) that (partially) implements them, so it's actually already possible to use at least a part of them.
```css
.example {
background-color: lab(33 43 -47);
box-shadow: 0 0 20px lch(54 107 41 / 10%);
}
```
Would be nice to get previews for those colors in VS Code like it's shown for rgb/rgba/hsl colors. | help wanted,feature-request,css-less-scss | low | Major |
295,224,863 | flutter | flutter shell wrapper can't resume transfer of SDK from completed download | At some point in the recent past, curl HTTP range commands to the URL for the Dart SDK likely worked even if the download was complete, or otherwise we wouldn't unconditionally do this:
https://github.com/flutter/flutter/blob/e1018fab34e49ad3941361aa06fbbe5173e110ac/bin/internal/update_dart_sdk.sh#L64
However, that doesn't seem to work right now:
```
~/dart/flutter/bin/cache
$ rm dart-sdk.zip
~/dart/flutter/bin/cache
$ curl --continue-at - --location --output /usr/local/google/home/jcollins/dart/flutter/bin/cache/dart-sdk.zip https://storage.googleapis.com/dart-archive/channels/dev/raw/2.0.0-dev.19.0/sdk/dartsdk-linux-x64-release.zip
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 78.1M 100 78.1M 0 0 19080 0 1:11:36 1:11:36 --:--:-- 43.2M
~/dart/flutter/bin/cache
$ curl --continue-at - --location --output /usr/local/google/home/jcollins/dart/flutter/bin/cache/dart-sdk.zip https://storage.googleapis.com/dart-archive/channels/dev/raw/2.0.0-dev.19.0/sdk/dartsdk-linux-x64-release.zip
** Resuming transfer from byte position 81971659
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 171 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (33) HTTP server doesn't seem to support byte ranges. Cannot resume.
~/dart/flutter/bin/cache
$
```
This can be worked around by deleting the cached dart-sdk.zip.
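Alternatively, the wrapper could presumably fall back to a fresh download when the resume is rejected — something along these lines (the variable names are invented; the real script uses its own):

```sh
# If the resumed request is rejected (e.g. curl exit code 33, "cannot resume"),
# retry once from scratch instead of failing the whole update.
if ! curl --continue-at - --location --output "$DART_SDK_ZIP" "$DART_SDK_URL"; then
  rm -f -- "$DART_SDK_ZIP"
  curl --location --output "$DART_SDK_ZIP" "$DART_SDK_URL"
fi
```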
## Steps to Reproduce
flutter upgrade-packages
## Logs
```
~/dart/flutter/bin
$ ./flutter upgrade-packages
Downloading Dart SDK 2.0.0-dev.19.0...
** Resuming transfer from byte position 81971659
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 171 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (33) HTTP server doesn't seem to support byte ranges. Cannot resume.
~/dart/flutter/bin
$
``` | tool,a: first hour,P2,team-tool,triaged-tool | low | Minor |
295,232,972 | rust | Tracking issue for RFC #1909: Unsized Rvalues (unsized_locals, unsized_fn_params) | This is a tracking issue for the RFC "Unsized Rvalues " (rust-lang/rfcs#1909).
**Steps:**
- [ ] Implement the RFC (cc @rust-lang/compiler -- can anyone write up mentoring instructions?)
- [ ] Adjust documentation ([see instructions on forge][doc-guide])
- [ ] Stabilization PR ([see instructions on forge][stabilization-guide])
[stabilization-guide]: https://forge.rust-lang.org/stabilization-guide.html
[doc-guide]: https://forge.rust-lang.org/stabilization-guide.html#updating-documentation
**Blocking bugs for `unsized_fn_params`:**
- https://github.com/rust-lang/rust/issues/111175
- https://github.com/rust-lang/rust/issues/115709 (bad interaction with extern_type: we either need to be okay with post-mono checks or need a trait for "dynamically sized" types)
- Reject unsized arguments for functions with non-Rust ABI
**Related bugs:**
- [x] https://github.com/rust-lang/rust/issues/61335 -- ICE when combined with async-await
- [x] https://github.com/rust-lang/rust/issues/68304 --
`Box<dyn FnOnce>` doesn't respect self alignment
**Unresolved questions:**
- [ ] What are the MIR semantics for unsized locals? We currently do not have operational semantics for them, and the way they currently work, there are no good operational semantics. This needs a complete from-scratch re-design.
- [ ] Can we carve out a path of "guaranteed no alloca" optimization? (See #68304 for some related discussion)
- [ ] Given that LLVM doesn't seem to support alloca with alignment, how do we expect to respect alignment limitations? (See #68304 for one specific instance)
- [ ] How can we mitigate the risk of unintended unsized or large allocas? Note that the problem already exists today with large structs/arrays. A MIR lint against large/variable stack sizes would probably help users avoid these stack overflows. Do we want it in Clippy? rustc?
- [ ] How do we handle truely-unsized DSTs when we get them? They can theoretically be passed to functions, but they can never be put in temporaries.
- [ ] Decide on a concrete syntax for VLAs.
- [ ] What about the [interactions between async-await/generators and unsized locals](https://github.com/rust-lang/rust/issues/48055#issuecomment-583743014)?
- [ ] We currently allow `extern type` arguments with `unsized_fn_params`, but that does not make much sense and leads to ICEs: https://github.com/rust-lang/rust/issues/115709 | B-RFC-approved,T-lang,C-tracking-issue,F-unsized_locals,F-unsized_fn_params,S-tracking-design-concerns,S-tracking-needs-summary | high | Critical |
295,267,057 | TypeScript | Differing results for inference produced when identical overload count changes | I happened to notice this while looking at something else inference-related a bit ago. We already have a test that checks for this, `fixingTypeParametersRepeatedly3.ts`, added in #2356, but it was accepted in #16368 (strict generic checks) with a (tiny, unnoticeable among all the other changes) baseline change showing the fault.
**Code**
```ts
interface Base {
baseProp;
}
interface Derived extends Base {
toBase?(): Base;
}
var derived: Derived;
declare function foo<T>(x: T, func: (p: T) => T): T;
var result = foo(derived, d => d.toBase());
// bar should type check just like foo.
// result2 should have the same type as result
declare function bar<T>(x: T, func: (p: T) => T): T;
declare function bar<T>(x: T, func: (p: T) => T): T;
var result2 = bar(derived, d => d.toBase());
```
**Expected behavior:**
`result` and `result2` are assigned the same type (there's a comment in the test asserting as much) - they are implemented identically except `bar` has an extra identical overload. (note: this can happen easily in the real world nowadays via intersection types! Intersecting multiple interfaces with similar base interfaces can cause exactly this situation.)
**Actual behavior:**
`result` typechecks as `Derived`
`result2` typechecks as `Base` (seems wrong, since `Base` definitely doesn't have the `toBase` method used in the callback!)
`Derived` and `Base` are assignable to one another here, since `toBase` is optional, which is why the inference can succeed at all; however, `Base`, as reported in the second case can't actually have been the type used to typecheck the lambda body, as the `toBase` call doesn't cause an error in the test - I think the likely reason for the change in behavior is [this removal](https://github.com/Microsoft/TypeScript/pull/16368/files#diff-c3ed224e4daa84352f7f1abcd23e8ccaL16428), based on the content of the removed comment.
| Bug | low | Critical |
295,271,077 | godot | Exported (non-static) Arrays are not duplicated when instancing. | **Godot version:** Godot 3.0
**OS/device including version:** Solus Linux / X11
**Issue description:** When exporting an array variable in GDScript, the array is not duplicated upon instancing of the node. This behavior conflicts with that of all other properties and is contrary to the OOP model Godot uses. The array is not static (or class-wide), but belongs to the instance, and so every instance's copy should be unique. Currently, unless manually duplicated, modifications to the array affect all instances of the class.
This issue may be present with Dictionaries as well; I have not tested them.
**Steps to reproduce:** Write this script:
```gdscript
extends Node
export var test = []
# This way too, it's not bound to explicit array constructor
# export(Array) var test
func _ready():
test.append("test %s" % self)
print(test)
```
Save a node with the script as a scene, instance two copies at runtime, and add them to the tree.
Expected result:
```
["test Node:XXXX"]
["test Node:YYYY"]
```
Result:
```
["test Node:XXXX"]
["test Node:XXXX", "test Node:YYYY"]
```
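A workaround sketch until/unless the duplication behaviour changes — detach each instance's array from the shared default before modifying it:

```gdscript
extends Node

export var test = []

func _ready():
    # Force a per-instance copy of the exported array before appending.
    test = [] + test    # or: test = test.duplicate()
    test.append("test %s" % self)
    print(test)
```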
**Minimal reproduction project:** See above. | bug,discussion,topic:gdscript,confirmed | medium | Critical |
295,286,435 | TypeScript | Suggestion: case-sensitive imports |
**TypeScript Version:** 2.7.0-dev.201xxxxx
My Mac does not have case-sensitive imports; that is, you can import the file `x.js` as `./X.js` and everything will work. However, our servers run on Linux (like most), and I got a runtime exception that took down the whole server because imports on Linux are case-sensitive.
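A minimal illustration (file names are made up) — the file on disk is `utils.ts`, but the import spells it differently:

```ts
// utils.ts
export function formatDate(d: Date): string {
  return d.toISOString();
}

// app.ts — compiles and runs fine on macOS (case-insensitive file system),
// but throws "Cannot find module './Utils'" at runtime on Linux.
import { formatDate } from "./Utils";

console.log(formatDate(new Date()));
```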
I think this would be an awesome addition to TypeScript to prevent fatal mistakes like this one that are hard to catch.
**Related Issues:**
https://github.com/Microsoft/TypeScript/issues/14460 | Suggestion,Help Wanted,Effort: Moderate | high | Critical |
295,310,416 | go | spec: order of evaluation vs panicking | It's possible to write expressions that, depending on order of evaluation, may panic for different reasons. I think this is fine, but it might be worth mentioning or providing examples to make clearer / less surprising to users.
For example, consider:
var p *[0]int
var i int
(*p)[i] = 0
Under cmd/compile, this snippet produces an "invalid memory address or nil pointer dereference" panic; whereas under gccgo, it produces an "index out of range" panic.
--
Another example is:
var p *int
var x int
_ = *p / x
Both cmd/compile and gccgo produce a nil pointer dereference panic, but the Go spec appears to also permit division by zero.
--
The latter example is relevant to #23661, because of:
var m = map[interface{}]int{0: 0} // non-empty map to workaround #23734
var k []int
var x int
m[k] /= x
Today, the last statement is compiled into roughly:
tmp := *mapaccess1(m, k)
if x == 0 {
panicdivide()
}
*mapassign(m, k) = tmp / x
It would be nice to optimize that to
if x == 0 {
panicdivide()
}
*mapassign(m, k) /= x
But that would change the panic from "hash of unhashable type []int" to "division by zero". This change appears valid under the Go spec, but it seems worth clarifying whether that's the case. | Documentation,NeedsFix | low | Major |
295,357,469 | TypeScript | Request for translation into Filipino | I want to contribute to the translation. | Suggestion,Awaiting More Feedback,i18n | low | Minor |
295,357,607 | flutter | iOS a11y text entry is incomplete | This is a follow-up for issue https://github.com/flutter/flutter/issues/12786.
As of https://github.com/flutter/engine/pull/4575 we have partial support for a11y text entry on iOS.
What works:
- When "tabbed" or "dragged" into
- Text fields are announced as "Text field"
- The hint "double tap to edit"
- Content of text field is announced
  - It is announced whether a text field is in edit mode
- The insert mode (e.g. character mode) is not announced.
- Entered characters are echoed back
- After inserting space the last word is echoed back
  - Swiping up or down moves the cursor
- Selecting text with reverse-pinch
What remains to be done:
- [ ] **No announcement** when a **currently edited** text field gains focus via the "tap" gesture
- [ ] The position of the cursor is **not** announced
- [ ] Copy&paste does **not** work when rotor is in edit mode
- [ ] TextFields for passwords (`secureTextEntry = YES`) are **not** announced as "secure text fields"
- [ ] TextFields for passwords (`secureTextEntry = YES`) announce their content (e.g. "5 bullets") instead of the number of characters
/cc @goderbauer
| a: text input,platform-ios,framework,a: accessibility,a: fidelity,f: cupertino,has reproducible steps,P2,customer: flex,team-ios,triaged-ios,found in release: 3.19,fyi-text-input,found in release: 3.22 | low | Major |
295,363,956 | vscode | Titlebar-less view for Linux | Just as Mozilla did in Firefox 59 (you can check it out in Firefox Nightly atm), and someone was doing for the Mac version in #12377, it would be really good to have an option to **integrate the title bar in the same row where the tabs reside**, in order to save some vertical space (which is even more important if like me you usually work on a non-FHD laptop such as an old Thinkpad or a Dell Latitude).
I said Gnome because it's what I use, but maybe it could be made into a more portable solution that has options for all DEs with top bars (be it Mac, Gnome, Xfce...). Like Mozilla did, it could even start to roll out when only some DEs are supported, with a warning that it may not work on all systems.
| feature-request,linux,titlebar | high | Critical |
295,437,163 | rust | Macros: limitation in the expression parser for <$:path>::<ident> | Background: https://users.rust-lang.org/t/macros-using-path-tokens-with-format-args/15480
When passing in a `path` token to a macro, then trying to suffix the metavariable with `::<ident>` (or more), the parser cannot recognize the whole thing as an `:expr`, which causes failures on calls to macros like `format_args!`.
Repro:
```rust
macro_rules! something {
( $path:path ) => (
//println!("Say: {}", $path::say);
format_args!("Say: {}", $path::say);
);
}
mod foo {
const say: &str = "Hello";
}
mod bar {
const say: &str = "World";
mod baz {
const say: &str = "Universe";
}
}
fn main() {
something!(foo);
something!(bar);
something!(bar::baz);
}
```
It fails with three instances of this error (with `RUSTFLAGS='-Z external-macro-backtrace'`):
```rust
error: expected token: `,`
--> src/main.rs:4:9
|
1 | / macro_rules! talk {
2 | | ( $mod:path ) => (
3 | | // print!("Say: {}", $mod::say);
4 | | format_args!("Say: {}", $mod::say);
| | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
5 | | );
6 | | }
| |_- in this expansion of `talk!`
...
21 | talk!(foo);
| ----------- in this macro invocation
```
A workaround is to use:
```rust
{ use $path as base; base::say }
```
but would be great if we could just use:
```
$path::say
```
I couldn't find an existing report. I'm guessing it falls under RFE. | C-enhancement,A-parser,A-macros,T-compiler | medium | Critical |
295,453,638 | go | cmd/link: panic: runtime error: slice bounds out of range | ### What version of Go are you using (`go version`)?
1.9.4
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
Ubuntu 17.10 linux_amd64
### What did you do?
I'm compiling my project with bazel by just running `bazel build node/agent/unsullied`.
The code is too complicated. I'm working on a minimal example.
### What did you expect to see?
bazel build succeed.
### What did you see instead?
At the last step, GoLink crashed.
```
ERROR: /src/node/agent/unsullied/BUILD.bazel:42:1: GoLink node/agent/unsullied/linux_amd64_stripped/unsullied failed (Exit 1)
github.com/vishvananda/netlink.NewHandleAt: call to external function
github.com/vishvananda/netlink.(*Handle).LinkList: call to external function
panic: runtime error: slice bounds out of range
goroutine 1 [running]:
cmd/link/internal/ld.decodetypePtrdata(0x6f8060, 0xc42219a3d8, 0xc42082bbd0)
/usr/local/go/src/cmd/link/internal/ld/decodesym.go:83 +0x9c
cmd/link/internal/ld.(*GCProg).AddSym(0xc426f628f0, 0xc422195c40)
/usr/local/go/src/cmd/link/internal/ld/data.go:1260 +0x76
cmd/link/internal/ld.(*Link).dodata(0xc4204cc000)
/usr/local/go/src/cmd/link/internal/ld/data.go:1573 +0x1932
cmd/link/internal/ld.Main()
/usr/local/go/src/cmd/link/internal/ld/main.go:219 +0x9fd
main.main()
/usr/local/go/src/cmd/link/main.go:58 +0xac
2018/02/08 18:05:07 error running linker: exit status 1
Target //node/agent/unsullied:unsullied failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 116.070s, Critical Path: 1.31s
FAILED: Build did NOT complete successfully
``` | help wanted,NeedsInvestigation | low | Critical |
295,453,930 | vscode | Allow to open multiple workspaces in the same window | The workspace feature is great because we can group projects under a top-level context (e.g. microservice projects in a big API workspace). But only one workspace can be open at a time. This feature would be even more powerful if multiple workspaces could be open at the same time (in the same window).

And another request: it would be great if:
- Right-clicking on the workspace name opened a menu with a "settings" item (which opens xxxx.code-workspace).
- Workspace settings contained a `name` attribute (to clearly identify the workspace in the left pane in VS Code) | feature-request,workbench-multiroot | high | Critical |
295,470,526 | vscode | [json] launch.json completion for "type" does not include expected values | This completion only seems to include `node`; however, if I type `dart` it works fine, and if I type nonsense, it warns me that it's invalid. So it seems to know `dart` is a valid option, but I'm not sure why it's not in the completion list.

| bug,json | low | Minor |
295,477,377 | TypeScript | SUGGESTION: add support for writeonly properties on interfaces | I'd like to resurrect an old discussion around the desire to have getters/setters support for interfaces: #11878
Obviously we can use `readonly` to express a property on an interface that has just a getter, but there's no way of expressing that a property should have just a setter. In issue #11878 an idea was proposed to add the concept of a `writeonly` property designed to cover this need, but the consensus was that there weren't enough real-world scenarios to justify such a feature. So let me try and add one.
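To make the asymmetry concrete, here is a tiny sketch (a hypothetical `Config` interface, not from any real codebase):
```TypeScript
interface Config {
    readonly version: string; // getter-only: expressible today

    // There is no equivalent modifier for "setter-only":
    // writeonly onDataReceived: (data: any) => Promise<void>; // not valid TypeScript
}
```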
We have a situation where a child object needs to publish data to a parent, but while we want the parent to know about its child, we don't want the child to know about its parent. We've ruled out the use of events because we only want a single subscriber and we need to return a Promise to the child to let them know when we're done. Instead we've opted to establish what we would have called a "weak reference" between the child and parent back in the days of COM. The interface in TypeScript looks something like this:
```TypeScript
interface Adapter {
onDataReceived: (data: any) => Promise<void>;
publishData(data: any): Promise<void>;
}
```
As you can see data flows bidirectionally between the parent and child and while we've received a couple of questions about why the interface is the way it is, it's generally easy enough to grok from a TypeScript perspective.
The issue we just ran into, however, is that a developer on our team just created a class in ES6 that implements this interface and the result ended up being.... yuck :(
If we literally implement this interface in a declarative way in ES6 it looks something like:
```JavaScript
export class WebAdapter {
get onDataReceived() {
return this.callback;
}
set onDataReceived(cb) {
this.callback = cb;
}
postData(data) {
}
}
```
Not only is it crappy that you have to define a getter and a setter, the fact of the matter is we're never going to ask for the callback back so the getter is pointless here. So what did our dev do? He did this:
```JavaScript
export class WebAdapter {
onDataReceived(data) {
// will be replaced by parent
}
postData(data) {
}
}
```
That technically works and what's nice is you have some sense of the signature of the handler but it makes my skin crawl to look at it. If I was to mirror that in my TypeScript interface you'd have zero clue that `onDataReceived()` was something I expect you to override. What I really want the developer to have to write implementation wise is this:
```JavaScript
export class WebAdapter {
set onDataReceived(cb) {
this.callback = cb;
}
postData(data) {
}
}
```
That's the proper contract for a weak reference but I have no way of expressing it in TypeScript. While it's very rare that you need to do this it doesn't make it any less valid a scenario. The addition of "writeonly" properties would give me a way to express this. | Suggestion,Awaiting More Feedback | high | Critical |
295,496,249 | TypeScript | Error: Type cannot be used to index type after two indexes | **TypeScript Version:** 2.7.1, 2.8.0-dev.20180208
**Search Terms:**
Type cannot be used to index type
**Code**
```ts
interface IExample {
foo: {
bar: {
baz: number;
}
}
}
type F = <
name extends keyof IExample,
val extends keyof IExample[name]
>() => IExample[name][val]['baz']; // โ Type '"baz"' cannot be used to index type 'IExample[name][val]'.
```
**Expected behavior:**
In version 2.7.0-dev.20171115 this code type-checked without errors.
**Actual behavior:**
Now it throws an error: Type '"baz"' cannot be used to index type 'IExample[name][val]'.
**Playground Link:**
https://www.typescriptlang.org/play/index.html#src=interface%20IExample%20%7B%0D%0A%20%20%20%20foo%3A%20%7B%0D%0A%20%20%20%20%20%20%20%20bar%3A%20%7B%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20baz%3A%20number%3B%0D%0A%20%20%20%20%20%20%20%20%7D%0D%0A%20%20%20%20%7D%0D%0A%7D%0D%0A%0D%0Atype%20F%20%3D%20%3C%0D%0A%20%20%20%20name%20extends%20keyof%20IExample%2C%0D%0A%20%20%20%20val%20extends%20keyof%20IExample%5Bname%5D%0D%0A%3E()%20%3D%3E%20IExample%5Bname%5D%5Bval%5D%5B'baz'%5D%3B
| Bug | medium | Critical |
295,563,458 | rust | Rustdoc: distinguish provided methods on trait impls | Currently, if a trait impl uses a provided method's default implementation, there is no easy way to tell directly from the rustdoc. For instance, see the `Write for Vec<u8>` [impl block](https://doc.rust-lang.org/std/vec/struct.Vec.html#impl-Write). The first three methods are implemented by `Vec<u8>`, while the other two use the default implementation. Of the ones implemented by `Vec<u8>`, `write_all` overrides a default implementation while the other two are required.
Ideally, we would have some small way to distinguish between provided, overridden, and required methods in the trait impl blocks. While it isn't useful for normal documentation practices, it is useful for educational purposes (e.g. if I want to see how a type implements a trait, I can know to ignore provided methods) as well as debugging, if I have different levels of trust for the trait author and implementor. | T-rustdoc,C-enhancement,A-trait-system | low | Critical |
295,619,907 | go | x/crypto/ssh: handshake failed: ssh: unsupported DSA key size 2048 | We are seeing some odd behavior connecting to a customer SFTP site with a username and password. This code is working for all other tested endpoints.
I see a section of code in the crypto/keys.go file in the function checkDSAParams that fails if the key length is not 1024, but since I am able to connect to that SFTP with ssh and other SFTP clients, I'm not sure why that restriction is being enforced in Go.
### What version of Go are you using (`go version`)?
go1.9.2 linux/amd64
### Does this issue reproduce with the latest release?
yes
### What operating system and processor architecture are you using (`go env`)?
GOARCH="amd64"
GOBIN=""
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/go"
GORACE=""
GOROOT="/usr/local/go"
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0"
CXX="g++"
CGO_ENABLED="1"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
### What did you do?
Attempting to establish a connection to a remote SFTP server.
`ssh.Dial("tcp", config.SftpServer+":"+string(config.SftpPort), sshConfig)`
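For context, a minimal sketch of this kind of client code (host, credentials, and the host-key callback are placeholders, not the reporter's actual configuration):
```go
package main

import (
	"log"

	"golang.org/x/crypto/ssh"
)

func main() {
	sshConfig := &ssh.ClientConfig{
		User:            "user",
		Auth:            []ssh.AuthMethod{ssh.Password("password")},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only; verify host keys in real code
	}

	client, err := ssh.Dial("tcp", "sftp.example.com:22", sshConfig)
	if err != nil {
		log.Fatalf("handshake failed: %v", err) // e.g. "ssh: unsupported DSA key size 2048"
	}
	defer client.Close()
}
```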
### What did you expect to see?
A successful SSH handshake and authentication.
### What did you see instead?
An error: ssh: handshake failed: ssh: unsupported DSA key size 2048 | NeedsInvestigation | medium | Critical |
295,641,340 | vscode | Incorrect indentation for single line if/for/while/etc, multiline chaining statements etc | I am experiencing some weird indentation issues. Not using any plugins to prettify or similar.
```
if (condition) {
return; // <-- correct indentation
}
```
```
if(condition)
return; // <-- incorrect indentation
```
Some people claim this should be avoided. I don't agree. Using a pair of { } for simple early exits clutters the code. Putting the return at the end of the same line can be hard to read if the condition is long.
This is not exclusive to return; it applies to all single lines.
The same problems apply to for, while, etc.
```
var transformedValues =
originalValues <-- incorrect should be indented one level.
.where(condition) <-- incorrect, should be indented two levels.
.select(transform); <-- incorrect, should be indented two levels.
function someFunction() {
callToSomeOtherFunction(
variableWithLongNameWhichRequiresASeparateLine,
anotherVariableWithLongNameWhichRequiresASeparateLine);
} <-- incorrect should not be indented.
function ... <-- incorrect the rest of the file is indented.
``` | bug,typescript,javascript,editor-autoindent,on-unit-test | high | Critical |
295,670,966 | kubernetes | Allow limiting adding/removing finalizers | /kind feature
@kubernetes/sig-auth-feature-requests
@kubernetes/sig-api-machinery-feature-requests
**What happened**:
As a namespace-constrained user, I am able to manually add/remove finalizers added by system components:
* garbage collection finalizers
* pv/pvc protection finalizers
* service catalog deprovisioning finalizers
* etc...
**What you expected to happen**:
As a cluster admin, I expected to be able to control what finalizers can be added/removed by end users, so they can be relied on by system components and controllers for gating deletion | area/security,area/apiserver,sig/api-machinery,kind/feature,sig/auth,priority/important-longterm,lifecycle/frozen | medium | Major |
295,677,840 | godot | Inaccurate physics with simple bodies (RigidBodies can temporarily penetrate other PhysicsBodies) | **Godot version:**
v3.0.stable.official
**OS/device including version:**
Windows 10 Pro 10.0.16299 Build 16299
**Issue description:**
Simple 3D physics is not resolved correctly. Slow-moving RigidBodies can penetrate StaticBodies even at slow speeds. The collision is then slowly corrected over several frames by pushing the RigidBody out. Setting the physics FPS to a high value like 240 minimizes the problem, but it is still noticeable.
[Sample Video](https://gfycat.com/FailingChillyChrysomelid)
**Steps to reproduce:**
Add a StaticBody as the ground. Add a RigidBody with a sphere collider and drop it from a medium height (10-20 units).
**Minimal reproduction project:**
Left mouse click accelerates the RigidBody.
[Physics_Issue.zip](https://github.com/godotengine/godot/files/1708905/Physics_Issue.zip)
| bug,confirmed,topic:physics | medium | Major |
295,821,156 | TypeScript | Empty type inferred by Object.entries | **TypeScript Version:** 2.7.1
**Search Terms:** Object entries empty TS2365
**Code**
```ts
let o: any = {x: 5};
let p = Object.entries(o).map(([k, v]) => v + 1);
```
Compile command: `tsc --lib es2017,es2017.object a.ts`
**Expected behavior:**
This should compile without any error, assuming `v: any`.
This was the case for Typescript 2.6.2 at least.
**Actual behavior:**
(3,43): error TS2365: Operator '+' cannot be applied to types '{}' and '1'.
The following code snippets compile without errors:
```ts
let o: object = {x: 5};
let p = Object.entries(o).map(([k, v]) => v + 1);
```
```ts
let o: any = {x: 5};
let p = Object.entries(o).map(([k, v]: [any, any]) => v + 1);
```
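As a side note, a hedged sketch of how typed values can be recovered when the object is not `any` (the `typedEntries` helper is illustrative, not part of lib.d.ts):
```ts
// Typing the object (instead of `any`) gives typed values back:
const o2: Record<string, number> = { x: 5 };
const p2 = Object.entries(o2).map(([k, v]) => v + 1); // v: number

// A generic wrapper narrows the keys as well:
function typedEntries<T extends object>(obj: T) {
    return Object.entries(obj) as [keyof T & string, T[keyof T]][];
}
const p3 = typedEntries({ x: 5 }).map(([k, v]) => v + 1); // k: "x", v: number
```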
| Bug,Domain: lib.d.ts | medium | Critical |
295,822,682 | angular | Angular Elements should preserve/forward component methods | ## I'm submitting a...
- [ ] Regression (a behavior that used to work and stopped working in a new release)
- [x] Bug report
- [x] Feature request
- [ ] Documentation issue or request
- [ ] Support request
Partially considered a bug due to developer expectations, partially a feature request because it might not be "fixable" in that sense.
## Current behavior
When creating Custom Elements from Angular Components using Angular Elements APIs, the resulting elements do not preserve the Angular component methods on the Custom elements.
For example giving a component like this:
```
@Component(...)
export class CarComponent {
drive() {...}
}
```
does not make `drive()` available on the resulting custom element:
```
const car = document.querySelector('car-thing');
car.drive(); // Errors out because `drive()` doesn't exist on <car-thing>
```
This is because an Angular component doesn't actually "become" the Custom Element but is rather preserved "inside" the Custom Element and readable via `componentRef`. To make the code above work we'd need to call
```
car.componentRef.instance.drive();
```
## Expected behavior
As Angular Elements are being promoted as Angular Components packaged as Custom Elements, I'd expect Angular Elements to have the same characteristics (Custom Elements come with their attributes, properties, methods and events).
In other words, I'd expect this to work:
```
car.drive(); // Angular takes care of preserving or forwarding API methods
```
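For illustration, a consumer-side shim that is possible today (a sketch only; it assumes the element exposes `componentRef` as described above, which is an internal detail rather than an official API):
```
const car = document.querySelector('car-thing') as any;

// Today: reach into Angular internals explicitly.
car.componentRef.instance.drive();

// Sketch: hide the internals behind the expected element API.
car.drive = (...args: any[]) => car.componentRef.instance.drive(...args);
car.drive();
```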
## Some useful notes and ideas on this
I've talked to @gkalpak about this already and one of his ideas was to maybe come up with a new decorator that tells Angular to preserve the decorated method on the resulting host element.
Another idea would be to declare in the registration process which methods should be preserved/forwarded. This could look like this:
```
registerAsCustomElements([{component: CarComponent, methods: ['drive']}], ...)
```
The reason an additional API might be needed for this is that we can't just preserve/forward all of the component's methods. Some methods may not be intended to be available on the resulting Angular Element (e.g. lifecycle hooks etc.)
## What is the motivation / use case for changing the behavior?
I think it'd be great if consumers of Angular Elements don't have to worry about Angular internals and can in fact use any Angular Element exactly the same way they'd use any other DOM element. Right now, to call component methods, consumers of Angular Elements have to know that those are populated on `componentRef.instance`, which doesn't seem to be very convenient nor what one would expect. | type: bug/fix,feature,area: elements,feature: under consideration | medium | Critical |
295,920,753 | pytorch | BCELoss - weight parameter shape incorrect | The `weight` parameter of `BCELoss` seems to be incorrectly defined when using a multi-dimensional input and target. Related [forum thread](https://discuss.pytorch.org/t/binary-cross-entropy-weights/13299).
The documentation defines `weight` as:
> If given, has to be a Tensor of size 'nbatch'.
However, this example throws an error:
```
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable

x = Variable(torch.randn(4, 2, 2))
y = Variable(torch.Tensor(4, 2, 2).random_(2))
output = F.sigmoid(x)
# Create weight according to doc in BCELoss
weight = torch.randn(4)
criterion_weighted = nn.BCELoss(weight=weight)
loss_weighted = criterion_weighted(output, y) # Error!
```
A workaround is to `unsqueeze` the `weight` tensor to match the number of dimensions:
```
# Unsqueeze weight tensor
weight = torch.randn(4, 1, 1)
criterion_weighted = nn.BCELoss(weight=weight)
loss_weighted = criterion_weighted(output, y)
```
Internally, `_infer_size` is called and fails in the first code snippet.
The second code snippet applied a weighting for each batch element, which is fine.
Should we automatically unsqueeze the weight tensor, if input and target are multi-dimensional?
Also, how should we handle class weighting?
If we just pass 2 weights as the `weight` tensor, the code successfully runs, but does not apply class weighting, which might mislead some users:
```
# Create class weights
weight = torch.FloatTensor([0.1, 0.9])
# Internally, weight is expanded as
size = _infer_size(weight.size(), y.size())
weight_expanded = weight.expand(size)
print(weight_expanded) # This is not, what we wanted as class weights!
criterion_weighted = nn.BCELoss(weight=weight)
loss_weighted = criterion_weighted(output, y)
criterion_nonreduced = nn.BCELoss(reduce=False)
loss_unreduced = criterion_nonreduced(output, y)
loss_weighted_manual = (Variable(weight_expanded) * loss_unreduced).mean()
if loss_weighted == loss_weighted_manual:
print('Class weighting failed')
```
A workaround would be:
```
weight_ = weight[y.data.view(-1).long()].view_as(y)
criterion = nn.BCELoss(reduce=False)
loss = criterion(output, y)
loss_class_weighted = loss * Variable(weight_)
```
What is wanted behavior of the `weight` parameter in `BCELoss`: class or batch weighting?
Both cases have some issues at the moment in my opinion.
I would like to fix this issue, but I would like to hear some opinions on the right behavior.
Class weighting would be consistent with other loss functions like `NLLLoss`, but maybe batch weighting is a more common use case for `BCELoss`.
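For reference, newer releases expose per-class positive weighting on the logits variant via `pos_weight`; a sketch for comparison (note this is `BCEWithLogitsLoss`, not the `BCELoss` `weight` argument discussed above, and it may not exist in the 0.4.0 build used here):
```python
import torch
import torch.nn as nn

logits = torch.randn(4, 2, 2)
target = torch.empty(4, 2, 2).random_(2)

# pos_weight is broadcast against the target, so a per-class vector on the
# last dimension acts as a class weight for the positive label.
criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([1.0, 3.0]))
print(criterion(logits, target).item())
```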
PyTorch version: `0.4.0a0+492e26f` (installed from source)
| module: nn,module: loss,triaged | low | Critical |
295,941,230 | pytorch | TakeBackward taking a significant portion of backward time | I'm trying to write some code that requires reindexing the hidden states of an LSTM at each step (because of particle filtering). Is there a more efficient way to do this? It's pretty slow right now, and I wonder if another approach (`scatter_`?) might be better.
If there's an internal issue with PyTorch, I'm happy to tackle it and make a PR; I just don't know where to look.
- OS: Ubuntu 16.04
- PyTorch version: `0.4.0a0+e3e3874`
- How you installed PyTorch (conda, pip, source): source
- Python version: 2.7.12
- CUDA/cuDNN version: 8.0
- GPU models and configuration: Tesla K80
- GCC version (if compiling from source): 5.4.0
```
import torch
import torch.nn as nn
from torch.autograd import Variable
INPUT_SIZE = 1024
HIDDEN_SIZE = 512
class IndexingIssues(nn.Module):
def __init__(self):
super(IndexingIssues, self).__init__()
self.enc = nn.LSTMCell(INPUT_SIZE, HIDDEN_SIZE)
def forward(self, input):
seq_len, batch_sz, _ = input.size()
h = Variable(input.data.new(batch_sz, HIDDEN_SIZE).zero_())
c = Variable(input.data.new(batch_sz, HIDDEN_SIZE).zero_())
for i in range(seq_len):
h, c = self.enc(input[i], (h, c))
idx = torch.arange(batch_sz).long().cuda().view(-1) # just identity for now
h = h[idx]
c = c[idx]
return h.sum()
def main():
data = Variable(torch.randn(15, 80, INPUT_SIZE).cuda())
model = IndexingIssues().cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.0001)
with torch.autograd.profiler.profile() as prof:
for i in range(5):
optimizer.zero_grad()
loss = model(data)
loss.backward()
optimizer.step()
prof.export_chrome_trace("repro.prof")
if __name__ == '__main__':
main()
```
Here's the chrome_trace it outputs [repro.txt](https://github.com/pytorch/pytorch/files/1711786/repro.txt)
(this isn't a text file, just needed to rename so Github would accept it)
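For comparison, a small sketch (current tensor API, not the 0.4 `Variable` API above) showing that the per-step reorder can also be written with an explicit `index_select`; whether either form avoids the `TakeBackward` cost is exactly the open question here:
```python
import torch

h = torch.randn(80, 512, requires_grad=True)
idx = torch.randperm(80)

# Two equivalent ways to reorder the hidden-state rows each step:
h_adv = h[idx]                          # advanced indexing, as in the snippet above
h_sel = torch.index_select(h, 0, idx)   # explicit index_select

print(torch.equal(h_adv, h_sel))  # True
```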
cc @ezyang @SsnL @albanD @zou3519 @gqchen @VitalyFedyunin @ngimel | module: performance,module: autograd,triaged | low | Major |
295,955,472 | pytorch | ASAN detected leaks on python -c 'import torch' | They seem relatively benign, but might still be worth looking into. Some of them might just be Python bugs but I see a bunch of leaks related to pybind11.
```
=================================================================
==15599==ERROR: LeakSanitizer: detected memory leaks
Direct leak of 1492688 byte(s) in 518 object(s) allocated from:
#0 0x7fc0a1304602 in malloc (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x98602)
#1 0x556778751803 in PyObject_Malloc (/private/home/ezyang/pytorch-env/bin/python3.6+0xe8803)
Direct leak of 6177 byte(s) in 8 object(s) allocated from:
#0 0x7fc0a1304602 in malloc (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x98602)
#1 0x556778750050 in _PyObject_Alloc.isra.0 (/private/home/ezyang/pytorch-env/bin/python3.6+0xe7050)
Direct leak of 1016 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a1304961 in realloc (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x98961)
#1 0x55677877cd2c in _PyObject_GC_Resize (/private/home/ezyang/pytorch-env/bin/python3.6+0x113d2c)
Direct leak of 128 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a1305532 in operator new(unsigned long) (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x99532)
#1 0x7fc0935ff352 in pybind11::cpp_function::make_function_record() /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:90
#2 0x7fc093919358 in void pybind11::cpp_function::initialize<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#28}, int, torch::autograd::profiler::ProfilerState const&, pybind11::name, pybind11::is_method, pybind11::sibling>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#28}&&, int (*)(torch::autograd::profiler::ProfilerState const&), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:100
#3 0x7fc093916ed2 in pybind11::cpp_function::cpp_function<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#28}, pybind11::name, pybind11::is_method, pybind11::sibling, void>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#28}&&, pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc093914fc5 in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#28}>(char const*, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#28}&&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1086
#5 0x7fc09391336e in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1441
#6 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#7 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)
Direct leak of 128 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a1305532 in operator new(unsigned long) (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x99532)
#1 0x7fc0935ff352 in pybind11::cpp_function::make_function_record() /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:90
#2 0x7fc093919126 in void pybind11::cpp_function::initialize<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#7}, bool, torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*, pybind11::name, pybind11::is_method, pybind11::sibling>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#7}&&, bool (*)(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:100
#3 0x7fc093916e34 in pybind11::cpp_function::cpp_function<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#7}, pybind11::name, pybind11::is_method, pybind11::sibling, void>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#7}&&, pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc093914de9 in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#7}>(char const*, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#7}&&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1086
#5 0x7fc093913354 in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1411
#6 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#7 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)
Direct leak of 128 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a1305532 in operator new(unsigned long) (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x99532)
#1 0x7fc0935ff352 in pybind11::cpp_function::make_function_record() /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:90
#2 0x7fc0939186b0 in void pybind11::cpp_function::initialize<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#3} const&, pybind11::dict, pybind11::handle>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#3} const&, pybind11::dict (*)(pybind11::handle)) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:100
#3 0x7fc093916b09 in pybind11::cpp_function::cpp_function<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#3} const&, , void>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#3} const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc09391487d in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def_property_readonly_static<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#3}, pybind11::return_value_policy>(char const*, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#3} const&, pybind11::return_value_policy const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1207
#5 0x7fc0939132e4 in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1399
#6 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#7 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)
Direct leak of 128 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a1305532 in operator new(unsigned long) (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x99532)
#1 0x7fc0935ff352 in pybind11::cpp_function::make_function_record() /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:90
#2 0x7fc093918c4a in void pybind11::cpp_function::initialize<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#5}, int, torch::autograd::profiler::ProfilerState, pybind11::name, pybind11::is_method, pybind11::sibling>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#5}&&, int (*)(torch::autograd::profiler::ProfilerState), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:100
#3 0x7fc093916cf8 in pybind11::cpp_function::cpp_function<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#5}, pybind11::name, pybind11::is_method, pybind11::sibling, void>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#5}&&, pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc093914a31 in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#5}>(char const*, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#5}&&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1086
#5 0x7fc093913320 in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1406
#6 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#7 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)
Direct leak of 128 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a1305532 in operator new(unsigned long) (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x99532)
#1 0x7fc0935ff352 in pybind11::cpp_function::make_function_record() /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:90
#2 0x7fc09391c834 in _ZN8pybind1112cpp_function10initializeIZNOS_6detail8initimpl14pickle_factoryIZNS_5enum_IN5torch8autograd8profiler13ProfilerStateEEC4IJEEERKNS_6handleEPKcDpRKT_EUlRKS9_E27_ZNSB_IJEEESE_SG_SK_EUlNS_5tupleEE28_FSO_SM_EFS9_SO_EE7executeINS_6class_IS9_JEEEJEEEvRT_DpRKT0_EUlRNS2_16value_and_holderESO_E_vJS13_SO_EJNS_4nameENS_9is_methodENS_7siblingENS2_24is_new_style_constructorEEEEvOSW_PFT0_DpT1_EDpRKT2_ /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:100
#3 0x7fc09391ada1 in pybind11::cpp_function::cpp_function<void pybind11::detail::initimpl::pickle_factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::tuple)#30}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29} (torch::autograd::profiler::ProfilerState const&), torch::autograd::profiler::ProfilerState (pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})>::execute<pybind11::class_<torch::autograd::profiler::ProfilerState>>(pybind11::class_<torch::autograd::profiler::ProfilerState>&) &&::{lambda(pybind11::detail::value_and_holder&, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})#1}, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::detail::is_new_style_constructor, void>(pybind11::class_&&, pybind11::name const) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc093919752 in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<void pybind11::detail::initimpl::pickle_factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::tuple)#30}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29} (torch::autograd::profiler::ProfilerState const&), torch::autograd::profiler::ProfilerState (pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})>::execute<pybind11::class_<torch::autograd::profiler::ProfilerState>>(pybind11::class_<torch::autograd::profiler::ProfilerState>&) &&::{lambda(pybind11::detail::value_and_holder&, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})#1}, pybind11::detail::is_new_style_constructor>(char const*, pybind11::detail::initimpl::pickle_factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::tuple)#30}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29} (torch::autograd::profiler::ProfilerState const&), torch::autograd::profiler::ProfilerState (pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})>&&, pybind11::detail::is_new_style_constructor const) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1086
#5 0x7fc09391708e in void pybind11::detail::initimpl::pickle_factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::tuple)#30}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29} (torch::autograd::profiler::ProfilerState const&), torch::autograd::profiler::ProfilerState (pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})>::execute<pybind11::class_<torch::autograd::profiler::ProfilerState>>(pybind11::class_<torch::autograd::profiler::ProfilerState>&) && /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/detail/init.h:324
#6 0x7fc093915397 in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::tuple)#30}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29} (torch::autograd::profiler::ProfilerState const&), torch::autograd::profiler::ProfilerState (pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})>(pybind11::detail::initimpl::pickle_factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::tuple)#30}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29} (torch::autograd::profiler::ProfilerState const&), torch::autograd::profiler::ProfilerState (pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})>&&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1133
#7 0x7fc093913397 in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1443
#8 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#9 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)
Direct leak of 128 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a1305532 in operator new(unsigned long) (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x99532)
#1 0x7fc0935ff352 in pybind11::cpp_function::make_function_record() /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:90
#2 0x7fc093918358 in void pybind11::cpp_function::initialize<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#2} const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, pybind11::handle>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#2} const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > (*)(pybind11::handle)) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:100
#3 0x7fc0939169ff in pybind11::cpp_function::cpp_function<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#2} const&, , void>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#2} const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc09391476b in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def_property_readonly_static<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#2}>(char const*, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#2} const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1207
#5 0x7fc0939132ba in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1384
#6 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#7 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)
Direct leak of 128 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a1305532 in operator new(unsigned long) (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x99532)
#1 0x7fc0935ff352 in pybind11::cpp_function::make_function_record() /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:90
#2 0x7fc09391c440 in void pybind11::cpp_function::initialize<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::tuple, torch::autograd::profiler::ProfilerState const&, pybind11::name, pybind11::is_method, pybind11::sibling>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}&&, pybind11::tuple (*)(torch::autograd::profiler::ProfilerState const&), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:100
#3 0x7fc09391acf8 in pybind11::cpp_function::cpp_function<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::name, pybind11::is_method, pybind11::sibling, void>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}&&, pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc093919515 in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}>(char const*, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}&&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1086
#5 0x7fc093917067 in void pybind11::detail::initimpl::pickle_factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::tuple)#30}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29} (torch::autograd::profiler::ProfilerState const&), torch::autograd::profiler::ProfilerState (pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})>::execute<pybind11::class_<torch::autograd::profiler::ProfilerState>>(pybind11::class_<torch::autograd::profiler::ProfilerState>&) && /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/detail/init.h:318
#6 0x7fc093915397 in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::tuple)#30}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29} (torch::autograd::profiler::ProfilerState const&), torch::autograd::profiler::ProfilerState (pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})>(pybind11::detail::initimpl::pickle_factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::tuple)#30}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29} (torch::autograd::profiler::ProfilerState const&), torch::autograd::profiler::ProfilerState (pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})>&&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1133
#7 0x7fc093913397 in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1443
#8 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#9 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)
Direct leak of 128 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a1305532 in operator new(unsigned long) (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x99532)
#1 0x7fc0935ff352 in pybind11::cpp_function::make_function_record() /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:90
#2 0x7fc093918f1e in void pybind11::cpp_function::initialize<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#6}, bool, torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*, pybind11::name, pybind11::is_method, pybind11::sibling>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#6}&&, bool (*)(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:100
#3 0x7fc093916d96 in pybind11::cpp_function::cpp_function<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#6}, pybind11::name, pybind11::is_method, pybind11::sibling, void>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#6}&&, pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc093914c0d in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#6}>(char const*, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#6}&&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1086
#5 0x7fc09391333a in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1410
#6 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#7 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)
Direct leak of 128 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a1305532 in operator new(unsigned long) (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x99532)
#1 0x7fc0935ff352 in pybind11::cpp_function::make_function_record() /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:90
#2 0x7fc09391bd00 in void pybind11::cpp_function::initialize<void pybind11::detail::initimpl::factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(int)#4}, pybind11::detail::void_type (*)(), torch::autograd::profiler::ProfilerState (int), pybind11::detail::void_type>::execute<pybind11::class_<torch::autograd::profiler::ProfilerState>>(pybind11::class_<torch::autograd::profiler::ProfilerState>&) &&::{lambda(pybind11::detail::value_and_holder&, int)#1}, void, pybind11::detail::value_and_holder, int, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::detail::is_new_style_constructor>(pybind11::class_<torch::autograd::profiler::ProfilerState>&&, void (*)(pybind11::detail::value_and_holder, int), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&, pybind11::detail::is_new_style_constructor const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:100
#3 0x7fc09391aa47 in pybind11::cpp_function::cpp_function<void pybind11::detail::initimpl::factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(int)#4}, pybind11::detail::void_type (*)(), torch::autograd::profiler::ProfilerState (int), pybind11::detail::void_type>::execute<pybind11::class_<torch::autograd::profiler::ProfilerState>>(pybind11::class_<torch::autograd::profiler::ProfilerState>&) &&::{lambda(pybind11::detail::value_and_holder&, int)#1}, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::detail::is_new_style_constructor, void>(pybind11::class_<torch::autograd::profiler::ProfilerState>&&, pybind11::name const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc093918a06 in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<void pybind11::detail::initimpl::factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(int)#4}, pybind11::detail::void_type (*)(), torch::autograd::profiler::ProfilerState (int), pybind11::detail::void_type>::execute<pybind11::class_<torch::autograd::profiler::ProfilerState>>(pybind11::class_<torch::autograd::profiler::ProfilerState>&) &&::{lambda(pybind11::detail::value_and_holder&, int)#1}, pybind11::detail::is_new_style_constructor>(char const*, pybind11::detail::initimpl::factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(int)#4}, pybind11::detail::void_type (*)(), torch::autograd::profiler::ProfilerState (int), pybind11::detail::void_type>::execute&&, pybind11::detail::is_new_style_constructor const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1086
#5 0x7fc093916c7a in void pybind11::detail::initimpl::factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(int)#4}, pybind11::detail::void_type (*)(), torch::autograd::profiler::ProfilerState (int), pybind11::detail::void_type>::execute<pybind11::class_<torch::autograd::profiler::ProfilerState>>(pybind11::class_<torch::autograd::profiler::ProfilerState>&) && /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/detail/init.h:234
#6 0x7fc09391494f in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(int)#4}, pybind11::detail::void_type (*)(), torch::autograd::profiler::ProfilerState (int), pybind11::detail::void_type>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(int)#4}::initimpl::factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(int)#4}, pybind11::detail::void_type (*)(), torch::autograd::profiler::ProfilerState (int), pybind11::detail::void_type>&&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1127
#7 0x7fc093913306 in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1405
#8 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#9 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)
Direct leak of 128 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a1305532 in operator new(unsigned long) (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x99532)
#1 0x7fc0935ff352 in pybind11::cpp_function::make_function_record() /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:90
#2 0x7fc093918100 in void pybind11::cpp_function::initialize<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#1}, pybind11::str, torch::autograd::profiler::ProfilerState, pybind11::name, pybind11::is_method, pybind11::sibling>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#1}&&, pybind11::str (*)(torch::autograd::profiler::ProfilerState), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:100
#3 0x7fc09391695a in pybind11::cpp_function::cpp_function<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#1}, pybind11::name, pybind11::is_method, pybind11::sibling, void>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#1}&&, pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc0939145a7 in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#1}>(char const*, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#1}&&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1086
#5 0x7fc093913298 in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1377
#6 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#7 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)

Direct leak of 32 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a1304602 in malloc (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x98602)
#1 0x5567787523b5 in PyThread_allocate_lock (/private/home/ezyang/pytorch-env/bin/python3.6+0xe93b5)

Indirect leak of 111795 byte(s) in 110 object(s) allocated from:
#0 0x7fc0a1304602 in malloc (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x98602)
#1 0x556778751803 in PyObject_Malloc (/private/home/ezyang/pytorch-env/bin/python3.6+0xe8803)

Indirect leak of 1792 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a1304961 in realloc (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x98961)
#1 0x55677879e005 in _PyUnicodeWriter_Finish (/private/home/ezyang/pytorch-env/bin/python3.6+0x135005)

Indirect leak of 1504 byte(s) in 2 object(s) allocated from:
#0 0x7fc0a1304961 in realloc (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x98961)
#1 0x55677877cd2c in _PyObject_GC_Resize (/private/home/ezyang/pytorch-env/bin/python3.6+0x113d2c)

Indirect leak of 608 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a1304602 in malloc (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x98602)
#1 0x556778751803 in PyObject_Malloc (/private/home/ezyang/pytorch-env/bin/python3.6+0xe8803)
#2 0x7fc09a536df7 (<unknown module>)

Indirect leak of 88 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a12ce30f in strdup (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x6230f)
#1 0x7fc09360050c in pybind11::cpp_function::initialize_generic(pybind11::detail::function_record*, char const*, std::type_info const* const*, unsigned long) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:385
#2 0x7fc0939191d8 in void pybind11::cpp_function::initialize<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#7}, bool, torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*, pybind11::name, pybind11::is_method, pybind11::sibling>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#7}&&, bool (*)(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:171
#3 0x7fc093916e34 in pybind11::cpp_function::cpp_function<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#7}, pybind11::name, pybind11::is_method, pybind11::sibling, void>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#7}&&, pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc093914de9 in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#7}>(char const*, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#7}&&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1086
#5 0x7fc093913354 in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1411
#6 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#7 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)

Indirect leak of 88 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a12ce30f in strdup (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x6230f)
#1 0x7fc09360050c in pybind11::cpp_function::initialize_generic(pybind11::detail::function_record*, char const*, std::type_info const* const*, unsigned long) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:385
#2 0x7fc093918fd0 in void pybind11::cpp_function::initialize<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#6}, bool, torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*, pybind11::name, pybind11::is_method, pybind11::sibling>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#6}&&, bool (*)(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:171
#3 0x7fc093916d96 in pybind11::cpp_function::cpp_function<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#6}, pybind11::name, pybind11::is_method, pybind11::sibling, void>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#6}&&, pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc093914c0d in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#6}>(char const*, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#6}&&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1086
#5 0x7fc09391333a in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1410
#6 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#7 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)

Indirect leak of 81 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a12ce30f in strdup (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x6230f)
#1 0x7fc0935ffca1 in pybind11::cpp_function::initialize_generic(pybind11::detail::function_record*, char const*, std::type_info const* const*, unsigned long) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:281
#2 0x7fc093918fd0 in void pybind11::cpp_function::initialize<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#6}, bool, torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*, pybind11::name, pybind11::is_method, pybind11::sibling>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#6}&&, bool (*)(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:171
#3 0x7fc093916d96 in pybind11::cpp_function::cpp_function<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#6}, pybind11::name, pybind11::is_method, pybind11::sibling, void>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#6}&&, pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc093914c0d in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#6}>(char const*, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#6}&&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1086
#5 0x7fc09391333a in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1410
#6 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#7 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)

Indirect leak of 81 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a12ce30f in strdup (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x6230f)
#1 0x7fc0935ffca1 in pybind11::cpp_function::initialize_generic(pybind11::detail::function_record*, char const*, std::type_info const* const*, unsigned long) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:281
#2 0x7fc0939191d8 in void pybind11::cpp_function::initialize<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#7}, bool, torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*, pybind11::name, pybind11::is_method, pybind11::sibling>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#7}&&, bool (*)(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:171
#3 0x7fc093916e34 in pybind11::cpp_function::cpp_function<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#7}, pybind11::name, pybind11::is_method, pybind11::sibling, void>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#7}&&, pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc093914de9 in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#7}>(char const*, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#7}&&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1086
#5 0x7fc093913354 in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1411
#6 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#7 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)

Indirect leak of 71 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a12ce30f in strdup (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x6230f)
#1 0x7fc09360050c in pybind11::cpp_function::initialize_generic(pybind11::detail::function_record*, char const*, std::type_info const* const*, unsigned long) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:385
#2 0x7fc09391c8ed in _ZN8pybind1112cpp_function10initializeIZNOS_6detail8initimpl14pickle_factoryIZNS_5enum_IN5torch8autograd8profiler13ProfilerStateEEC4IJEEERKNS_6handleEPKcDpRKT_EUlRKS9_E27_ZNSB_IJEEESE_SG_SK_EUlNS_5tupleEE28_FSO_SM_EFS9_SO_EE7executeINS_6class_IS9_JEEEJEEEvRT_DpRKT0_EUlRNS2_16value_and_holderESO_E_vJS13_SO_EJNS_4nameENS_9is_methodENS_7siblingENS2_24is_new_style_constructorEEEEvOSW_PFT0_DpT1_EDpRKT2_ /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:171
#3 0x7fc09391ada1 in pybind11::cpp_function::cpp_function<void pybind11::detail::initimpl::pickle_factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::tuple)#30}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29} (torch::autograd::profiler::ProfilerState const&), torch::autograd::profiler::ProfilerState (pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})>::execute<pybind11::class_<torch::autograd::profiler::ProfilerState>>(pybind11::class_<torch::autograd::profiler::ProfilerState>&) &&::{lambda(pybind11::detail::value_and_holder&, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})#1}, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::detail::is_new_style_constructor, void>(pybind11::class_&&, pybind11::name const) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc093919752 in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<void pybind11::detail::initimpl::pickle_factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::tuple)#30}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29} (torch::autograd::profiler::ProfilerState const&), torch::autograd::profiler::ProfilerState (pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})>::execute<pybind11::class_<torch::autograd::profiler::ProfilerState>>(pybind11::class_<torch::autograd::profiler::ProfilerState>&) &&::{lambda(pybind11::detail::value_and_holder&, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})#1}, pybind11::detail::is_new_style_constructor>(char const*, pybind11::detail::initimpl::pickle_factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::tuple)#30}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29} (torch::autograd::profiler::ProfilerState const&), torch::autograd::profiler::ProfilerState (pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})>&&, pybind11::detail::is_new_style_constructor const) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1086
#5 0x7fc09391708e in void pybind11::detail::initimpl::pickle_factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::tuple)#30}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29} (torch::autograd::profiler::ProfilerState const&), torch::autograd::profiler::ProfilerState (pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})>::execute<pybind11::class_<torch::autograd::profiler::ProfilerState>>(pybind11::class_<torch::autograd::profiler::ProfilerState>&) && /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/detail/init.h:324
#6 0x7fc093915397 in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::tuple)#30}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29} (torch::autograd::profiler::ProfilerState const&), torch::autograd::profiler::ProfilerState (pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})>(pybind11::detail::initimpl::pickle_factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::tuple)#30}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29} (torch::autograd::profiler::ProfilerState const&), torch::autograd::profiler::ProfilerState (pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})>&&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1133
#7 0x7fc093913397 in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1443
#8 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#9 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)

Indirect leak of 65 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a12ce30f in strdup (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x6230f)
#1 0x7fc09360050c in pybind11::cpp_function::initialize_generic(pybind11::detail::function_record*, char const*, std::type_info const* const*, unsigned long) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:385
#2 0x7fc09391bdb9 in void pybind11::cpp_function::initialize<void pybind11::detail::initimpl::factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(int)#4}, pybind11::detail::void_type (*)(), torch::autograd::profiler::ProfilerState (int), pybind11::detail::void_type>::execute<pybind11::class_<torch::autograd::profiler::ProfilerState>>(pybind11::class_<torch::autograd::profiler::ProfilerState>&) &&::{lambda(pybind11::detail::value_and_holder&, int)#1}, void, pybind11::detail::value_and_holder, int, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::detail::is_new_style_constructor>(pybind11::class_<torch::autograd::profiler::ProfilerState>&&, void (*)(pybind11::detail::value_and_holder, int), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&, pybind11::detail::is_new_style_constructor const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:171
#3 0x7fc09391aa47 in pybind11::cpp_function::cpp_function<void pybind11::detail::initimpl::factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(int)#4}, pybind11::detail::void_type (*)(), torch::autograd::profiler::ProfilerState (int), pybind11::detail::void_type>::execute<pybind11::class_<torch::autograd::profiler::ProfilerState>>(pybind11::class_<torch::autograd::profiler::ProfilerState>&) &&::{lambda(pybind11::detail::value_and_holder&, int)#1}, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::detail::is_new_style_constructor, void>(pybind11::class_<torch::autograd::profiler::ProfilerState>&&, pybind11::name const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc093918a06 in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<void pybind11::detail::initimpl::factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(int)#4}, pybind11::detail::void_type (*)(), torch::autograd::profiler::ProfilerState (int), pybind11::detail::void_type>::execute<pybind11::class_<torch::autograd::profiler::ProfilerState>>(pybind11::class_<torch::autograd::profiler::ProfilerState>&) &&::{lambda(pybind11::detail::value_and_holder&, int)#1}, pybind11::detail::is_new_style_constructor>(char const*, pybind11::detail::initimpl::factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(int)#4}, pybind11::detail::void_type (*)(), torch::autograd::profiler::ProfilerState (int), pybind11::detail::void_type>::execute&&, pybind11::detail::is_new_style_constructor const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1086
#5 0x7fc093916c7a in void pybind11::detail::initimpl::factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(int)#4}, pybind11::detail::void_type (*)(), torch::autograd::profiler::ProfilerState (int), pybind11::detail::void_type>::execute<pybind11::class_<torch::autograd::profiler::ProfilerState>>(pybind11::class_<torch::autograd::profiler::ProfilerState>&) && /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/detail/init.h:234
#6 0x7fc09391494f in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(int)#4}, pybind11::detail::void_type (*)(), torch::autograd::profiler::ProfilerState (int), pybind11::detail::void_type>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(int)#4}::initimpl::factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(int)#4}, pybind11::detail::void_type (*)(), torch::autograd::profiler::ProfilerState (int), pybind11::detail::void_type>&&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1127
#7 0x7fc093913306 in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1405
#8 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#9 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)

Indirect leak of 59 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a12ce30f in strdup (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x6230f)
#1 0x7fc09360050c in pybind11::cpp_function::initialize_generic(pybind11::detail::function_record*, char const*, std::type_info const* const*, unsigned long) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:385
#2 0x7fc09391c4f2 in void pybind11::cpp_function::initialize<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::tuple, torch::autograd::profiler::ProfilerState const&, pybind11::name, pybind11::is_method, pybind11::sibling>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}&&, pybind11::tuple (*)(torch::autograd::profiler::ProfilerState const&), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:171
#3 0x7fc09391acf8 in pybind11::cpp_function::cpp_function<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::name, pybind11::is_method, pybind11::sibling, void>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}&&, pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc093919515 in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}>(char const*, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}&&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1086
#5 0x7fc093917067 in void pybind11::detail::initimpl::pickle_factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::tuple)#30}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29} (torch::autograd::profiler::ProfilerState const&), torch::autograd::profiler::ProfilerState (pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})>::execute<pybind11::class_<torch::autograd::profiler::ProfilerState>>(pybind11::class_<torch::autograd::profiler::ProfilerState>&) && /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/detail/init.h:318
#6 0x7fc093915397 in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::tuple)#30}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29} (torch::autograd::profiler::ProfilerState const&), torch::autograd::profiler::ProfilerState (pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})>(pybind11::detail::initimpl::pickle_factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::tuple)#30}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29} (torch::autograd::profiler::ProfilerState const&), torch::autograd::profiler::ProfilerState (pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})>&&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1133
#7 0x7fc093913397 in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1443
#8 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#9 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)

Indirect leak of 58 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a12ce30f in strdup (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x6230f)
#1 0x7fc0935ffca1 in pybind11::cpp_function::initialize_generic(pybind11::detail::function_record*, char const*, std::type_info const* const*, unsigned long) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:281
#2 0x7fc09391c8ed in _ZN8pybind1112cpp_function10initializeIZNOS_6detail8initimpl14pickle_factoryIZNS_5enum_IN5torch8autograd8profiler13ProfilerStateEEC4IJEEERKNS_6handleEPKcDpRKT_EUlRKS9_E27_ZNSB_IJEEESE_SG_SK_EUlNS_5tupleEE28_FSO_SM_EFS9_SO_EE7executeINS_6class_IS9_JEEEJEEEvRT_DpRKT0_EUlRNS2_16value_and_holderESO_E_vJS13_SO_EJNS_4nameENS_9is_methodENS_7siblingENS2_24is_new_style_constructorEEEEvOSW_PFT0_DpT1_EDpRKT2_ /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:171
#3 0x7fc09391ada1 in pybind11::cpp_function::cpp_function<void pybind11::detail::initimpl::pickle_factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::tuple)#30}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29} (torch::autograd::profiler::ProfilerState const&), torch::autograd::profiler::ProfilerState (pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})>::execute<pybind11::class_<torch::autograd::profiler::ProfilerState>>(pybind11::class_<torch::autograd::profiler::ProfilerState>&) &&::{lambda(pybind11::detail::value_and_holder&, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})#1}, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::detail::is_new_style_constructor, void>(pybind11::class_&&, pybind11::name const) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc093919752 in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<void pybind11::detail::initimpl::pickle_factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::tuple)#30}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29} (torch::autograd::profiler::ProfilerState const&), torch::autograd::profiler::ProfilerState (pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})>::execute<pybind11::class_<torch::autograd::profiler::ProfilerState>>(pybind11::class_<torch::autograd::profiler::ProfilerState>&) &&::{lambda(pybind11::detail::value_and_holder&, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})#1}, pybind11::detail::is_new_style_constructor>(char const*, pybind11::detail::initimpl::pickle_factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::tuple)#30}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29} (torch::autograd::profiler::ProfilerState const&), torch::autograd::profiler::ProfilerState (pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})>&&, pybind11::detail::is_new_style_constructor const) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1086
#5 0x7fc09391708e in void pybind11::detail::initimpl::pickle_factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::tuple)#30}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29} (torch::autograd::profiler::ProfilerState const&), torch::autograd::profiler::ProfilerState (pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})>::execute<pybind11::class_<torch::autograd::profiler::ProfilerState>>(pybind11::class_<torch::autograd::profiler::ProfilerState>&) && /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/detail/init.h:324
#6 0x7fc093915397 in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::tuple)#30}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29} (torch::autograd::profiler::ProfilerState const&), torch::autograd::profiler::ProfilerState (pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})>(pybind11::detail::initimpl::pickle_factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::tuple)#30}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29} (torch::autograd::profiler::ProfilerState const&), torch::autograd::profiler::ProfilerState (pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})>&&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1133
#7 0x7fc093913397 in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1443
#8 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#9 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)

Indirect leak of 56 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a12ce30f in strdup (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x6230f)
#1 0x7fc0935ffca1 in pybind11::cpp_function::initialize_generic(pybind11::detail::function_record*, char const*, std::type_info const* const*, unsigned long) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:281
#2 0x7fc09391bdb9 in void pybind11::cpp_function::initialize<void pybind11::detail::initimpl::factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(int)#4}, pybind11::detail::void_type (*)(), torch::autograd::profiler::ProfilerState (int), pybind11::detail::void_type>::execute<pybind11::class_<torch::autograd::profiler::ProfilerState>>(pybind11::class_<torch::autograd::profiler::ProfilerState>&) &&::{lambda(pybind11::detail::value_and_holder&, int)#1}, void, pybind11::detail::value_and_holder, int, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::detail::is_new_style_constructor>(pybind11::class_<torch::autograd::profiler::ProfilerState>&&, void (*)(pybind11::detail::value_and_holder, int), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&, pybind11::detail::is_new_style_constructor const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:171
#3 0x7fc09391aa47 in pybind11::cpp_function::cpp_function<void pybind11::detail::initimpl::factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(int)#4}, pybind11::detail::void_type (*)(), torch::autograd::profiler::ProfilerState (int), pybind11::detail::void_type>::execute<pybind11::class_<torch::autograd::profiler::ProfilerState>>(pybind11::class_<torch::autograd::profiler::ProfilerState>&) &&::{lambda(pybind11::detail::value_and_holder&, int)#1}, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::detail::is_new_style_constructor, void>(pybind11::class_<torch::autograd::profiler::ProfilerState>&&, pybind11::name const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc093918a06 in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<void pybind11::detail::initimpl::factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(int)#4}, pybind11::detail::void_type (*)(), torch::autograd::profiler::ProfilerState (int), pybind11::detail::void_type>::execute<pybind11::class_<torch::autograd::profiler::ProfilerState>>(pybind11::class_<torch::autograd::profiler::ProfilerState>&) &&::{lambda(pybind11::detail::value_and_holder&, int)#1}, pybind11::detail::is_new_style_constructor>(char const*, pybind11::detail::initimpl::factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(int)#4}, pybind11::detail::void_type (*)(), torch::autograd::profiler::ProfilerState (int), pybind11::detail::void_type>::execute&&, pybind11::detail::is_new_style_constructor const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1086
#5 0x7fc093916c7a in void pybind11::detail::initimpl::factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(int)#4}, pybind11::detail::void_type (*)(), torch::autograd::profiler::ProfilerState (int), pybind11::detail::void_type>::execute<pybind11::class_<torch::autograd::profiler::ProfilerState>>(pybind11::class_<torch::autograd::profiler::ProfilerState>&) && /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/detail/init.h:234
#6 0x7fc09391494f in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(int)#4}, pybind11::detail::void_type (*)(), torch::autograd::profiler::ProfilerState (int), pybind11::detail::void_type>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(int)#4}::initimpl::factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(int)#4}, pybind11::detail::void_type (*)(), torch::autograd::profiler::ProfilerState (int), pybind11::detail::void_type>&&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1127
#7 0x7fc093913306 in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1405
#8 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#9 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)

Indirect leak of 53 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a12ce30f in strdup (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x6230f)
#1 0x7fc09360050c in pybind11::cpp_function::initialize_generic(pybind11::detail::function_record*, char const*, std::type_info const* const*, unsigned long) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:385
#2 0x7fc0939181c0 in void pybind11::cpp_function::initialize<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#1}, pybind11::str, torch::autograd::profiler::ProfilerState, pybind11::name, pybind11::is_method, pybind11::sibling>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#1}&&, pybind11::str (*)(torch::autograd::profiler::ProfilerState), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:171
#3 0x7fc09391695a in pybind11::cpp_function::cpp_function<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#1}, pybind11::name, pybind11::is_method, pybind11::sibling, void>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#1}&&, pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc0939145a7 in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#1}>(char const*, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#1}&&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1086
#5 0x7fc093913298 in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1377
#6 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#7 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)

Indirect leak of 53 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a12ce30f in strdup (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x6230f)
#1 0x7fc09360050c in pybind11::cpp_function::initialize_generic(pybind11::detail::function_record*, char const*, std::type_info const* const*, unsigned long) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:385
#2 0x7fc09391940a in void pybind11::cpp_function::initialize<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#28}, int, torch::autograd::profiler::ProfilerState const&, pybind11::name, pybind11::is_method, pybind11::sibling>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#28}&&, int (*)(torch::autograd::profiler::ProfilerState const&), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:171
#3 0x7fc093916ed2 in pybind11::cpp_function::cpp_function<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#28}, pybind11::name, pybind11::is_method, pybind11::sibling, void>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#28}&&, pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc093914fc5 in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#28}>(char const*, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#28}&&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1086
#5 0x7fc09391336e in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1441
#6 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#7 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)

Indirect leak of 52 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a12ce30f in strdup (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x6230f)
#1 0x7fc09360050c in pybind11::cpp_function::initialize_generic(pybind11::detail::function_record*, char const*, std::type_info const* const*, unsigned long) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:385
#2 0x7fc093918cfc in void pybind11::cpp_function::initialize<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#5}, int, torch::autograd::profiler::ProfilerState, pybind11::name, pybind11::is_method, pybind11::sibling>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#5}&&, int (*)(torch::autograd::profiler::ProfilerState), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:171
#3 0x7fc093916cf8 in pybind11::cpp_function::cpp_function<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#5}, pybind11::name, pybind11::is_method, pybind11::sibling, void>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#5}&&, pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc093914a31 in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#5}>(char const*, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#5}&&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1086
#5 0x7fc093913320 in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1406
#6 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#7 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)

Indirect leak of 46 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a12ce30f in strdup (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x6230f)
#1 0x7fc0935ffca1 in pybind11::cpp_function::initialize_generic(pybind11::detail::function_record*, char const*, std::type_info const* const*, unsigned long) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:281
#2 0x7fc09391c4f2 in void pybind11::cpp_function::initialize<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::tuple, torch::autograd::profiler::ProfilerState const&, pybind11::name, pybind11::is_method, pybind11::sibling>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}&&, pybind11::tuple (*)(torch::autograd::profiler::ProfilerState const&), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:171
#3 0x7fc09391acf8 in pybind11::cpp_function::cpp_function<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::name, pybind11::is_method, pybind11::sibling, void>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}&&, pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc093919515 in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}>(char const*, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}&&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1086
#5 0x7fc093917067 in void pybind11::detail::initimpl::pickle_factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::tuple)#30}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29} (torch::autograd::profiler::ProfilerState const&), torch::autograd::profiler::ProfilerState (pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})>::execute<pybind11::class_<torch::autograd::profiler::ProfilerState>>(pybind11::class_<torch::autograd::profiler::ProfilerState>&) && /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/detail/init.h:318
#6 0x7fc093915397 in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::tuple)#30}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29} (torch::autograd::profiler::ProfilerState const&), torch::autograd::profiler::ProfilerState (pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})>(pybind11::detail::initimpl::pickle_factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::tuple)#30}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29} (torch::autograd::profiler::ProfilerState const&), torch::autograd::profiler::ProfilerState (pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})>&&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1133
#7 0x7fc093913397 in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1443
#8 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#9 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)

Indirect leak of 44 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a12ce30f in strdup (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x6230f)
#1 0x7fc0935ffca1 in pybind11::cpp_function::initialize_generic(pybind11::detail::function_record*, char const*, std::type_info const* const*, unsigned long) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:281
#2 0x7fc0939181c0 in void pybind11::cpp_function::initialize<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#1}, pybind11::str, torch::autograd::profiler::ProfilerState, pybind11::name, pybind11::is_method, pybind11::sibling>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#1}&&, pybind11::str (*)(torch::autograd::profiler::ProfilerState), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:171
#3 0x7fc09391695a in pybind11::cpp_function::cpp_function<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#1}, pybind11::name, pybind11::is_method, pybind11::sibling, void>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#1}&&, pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc0939145a7 in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#1}>(char const*, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#1}&&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1086
#5 0x7fc093913298 in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1377
#6 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#7 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)

Indirect leak of 44 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a12ce30f in strdup (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x6230f)
#1 0x7fc0935ffca1 in pybind11::cpp_function::initialize_generic(pybind11::detail::function_record*, char const*, std::type_info const* const*, unsigned long) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:281
#2 0x7fc093918cfc in void pybind11::cpp_function::initialize<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#5}, int, torch::autograd::profiler::ProfilerState, pybind11::name, pybind11::is_method, pybind11::sibling>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#5}&&, int (*)(torch::autograd::profiler::ProfilerState), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:171
#3 0x7fc093916cf8 in pybind11::cpp_function::cpp_function<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#5}, pybind11::name, pybind11::is_method, pybind11::sibling, void>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#5}&&, pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc093914a31 in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#5}>(char const*, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#5}&&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1086
#5 0x7fc093913320 in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1406
#6 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#7 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)

Indirect leak of 44 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a12ce30f in strdup (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x6230f)
#1 0x7fc0935ffca1 in pybind11::cpp_function::initialize_generic(pybind11::detail::function_record*, char const*, std::type_info const* const*, unsigned long) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:281
#2 0x7fc09391940a in void pybind11::cpp_function::initialize<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#28}, int, torch::autograd::profiler::ProfilerState const&, pybind11::name, pybind11::is_method, pybind11::sibling>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#28}&&, int (*)(torch::autograd::profiler::ProfilerState const&), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:171
#3 0x7fc093916ed2 in pybind11::cpp_function::cpp_function<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#28}, pybind11::name, pybind11::is_method, pybind11::sibling, void>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#28}&&, pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc093914fc5 in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#28}>(char const*, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#28}&&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1086
#5 0x7fc09391336e in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1441
#6 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#7 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)

Indirect leak of 32 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a1305532 in operator new(unsigned long) (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x99532)
#1 0x7fc0935fff50 in pybind11::cpp_function::initialize_generic(pybind11::detail::function_record*, char const*, std::type_info const* const*, unsigned long) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:306
#2 0x7fc09391940a in void pybind11::cpp_function::initialize<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#28}, int, torch::autograd::profiler::ProfilerState const&, pybind11::name, pybind11::is_method, pybind11::sibling>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#28}&&, int (*)(torch::autograd::profiler::ProfilerState const&), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:171
#3 0x7fc093916ed2 in pybind11::cpp_function::cpp_function<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#28}, pybind11::name, pybind11::is_method, pybind11::sibling, void>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#28}&&, pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc093914fc5 in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#28}>(char const*, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#28}&&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1086
#5 0x7fc09391336e in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1441
#6 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#7 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)

Indirect leak of 32 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a1305532 in operator new(unsigned long) (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x99532)
#1 0x7fc0935fff50 in pybind11::cpp_function::initialize_generic(pybind11::detail::function_record*, char const*, std::type_info const* const*, unsigned long) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:306
#2 0x7fc09391c4f2 in void pybind11::cpp_function::initialize<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::tuple, torch::autograd::profiler::ProfilerState const&, pybind11::name, pybind11::is_method, pybind11::sibling>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}&&, pybind11::tuple (*)(torch::autograd::profiler::ProfilerState const&), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:171
#3 0x7fc09391acf8 in pybind11::cpp_function::cpp_function<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::name, pybind11::is_method, pybind11::sibling, void>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}&&, pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc093919515 in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}>(char const*, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}&&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1086
#5 0x7fc093917067 in void pybind11::detail::initimpl::pickle_factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::tuple)#30}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29} (torch::autograd::profiler::ProfilerState const&), torch::autograd::profiler::ProfilerState (pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})>::execute<pybind11::class_<torch::autograd::profiler::ProfilerState>>(pybind11::class_<torch::autograd::profiler::ProfilerState>&) && /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/detail/init.h:318
#6 0x7fc093915397 in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::tuple)#30}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29} (torch::autograd::profiler::ProfilerState const&), torch::autograd::profiler::ProfilerState (pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})>(pybind11::detail::initimpl::pickle_factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::tuple)#30}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29} (torch::autograd::profiler::ProfilerState const&), torch::autograd::profiler::ProfilerState (pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})>&&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1133
#7 0x7fc093913397 in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1443
#8 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#9 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)

Indirect leak of 32 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a1305532 in operator new(unsigned long) (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x99532)
#1 0x7fc0935fff50 in pybind11::cpp_function::initialize_generic(pybind11::detail::function_record*, char const*, std::type_info const* const*, unsigned long) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:306
#2 0x7fc0939181c0 in void pybind11::cpp_function::initialize<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#1}, pybind11::str, torch::autograd::profiler::ProfilerState, pybind11::name, pybind11::is_method, pybind11::sibling>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#1}&&, pybind11::str (*)(torch::autograd::profiler::ProfilerState), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:171
#3 0x7fc09391695a in pybind11::cpp_function::cpp_function<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#1}, pybind11::name, pybind11::is_method, pybind11::sibling, void>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#1}&&, pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc0939145a7 in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#1}>(char const*, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#1}&&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1086
#5 0x7fc093913298 in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1377
#6 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#7 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)

Indirect leak of 32 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a1305532 in operator new(unsigned long) (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x99532)
#1 0x7fc0935fff50 in pybind11::cpp_function::initialize_generic(pybind11::detail::function_record*, char const*, std::type_info const* const*, unsigned long) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:306
#2 0x7fc09391c8ed in _ZN8pybind1112cpp_function10initializeIZNOS_6detail8initimpl14pickle_factoryIZNS_5enum_IN5torch8autograd8profiler13ProfilerStateEEC4IJEEERKNS_6handleEPKcDpRKT_EUlRKS9_E27_ZNSB_IJEEESE_SG_SK_EUlNS_5tupleEE28_FSO_SM_EFS9_SO_EE7executeINS_6class_IS9_JEEEJEEEvRT_DpRKT0_EUlRNS2_16value_and_holderESO_E_vJS13_SO_EJNS_4nameENS_9is_methodENS_7siblingENS2_24is_new_style_constructorEEEEvOSW_PFT0_DpT1_EDpRKT2_ /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:171
#3 0x7fc09391ada1 in pybind11::cpp_function::cpp_function<void pybind11::detail::initimpl::pickle_factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::tuple)#30}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29} (torch::autograd::profiler::ProfilerState const&), torch::autograd::profiler::ProfilerState (pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})>::execute<pybind11::class_<torch::autograd::profiler::ProfilerState>>(pybind11::class_<torch::autograd::profiler::ProfilerState>&) &&::{lambda(pybind11::detail::value_and_holder&, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})#1}, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::detail::is_new_style_constructor, void>(pybind11::class_&&, pybind11::name const) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc093919752 in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<void pybind11::detail::initimpl::pickle_factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::tuple)#30}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29} (torch::autograd::profiler::ProfilerState const&), torch::autograd::profiler::ProfilerState (pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})>::execute<pybind11::class_<torch::autograd::profiler::ProfilerState>>(pybind11::class_<torch::autograd::profiler::ProfilerState>&) &&::{lambda(pybind11::detail::value_and_holder&, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})#1}, pybind11::detail::is_new_style_constructor>(char const*, pybind11::detail::initimpl::pickle_factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::tuple)#30}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29} (torch::autograd::profiler::ProfilerState const&), torch::autograd::profiler::ProfilerState (pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})>&&, pybind11::detail::is_new_style_constructor const) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1086
#5 0x7fc09391708e in void pybind11::detail::initimpl::pickle_factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::tuple)#30}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29} (torch::autograd::profiler::ProfilerState const&), torch::autograd::profiler::ProfilerState (pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})>::execute<pybind11::class_<torch::autograd::profiler::ProfilerState>>(pybind11::class_<torch::autograd::profiler::ProfilerState>&) && /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/detail/init.h:324
#6 0x7fc093915397 in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::tuple)#30}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29} (torch::autograd::profiler::ProfilerState const&), torch::autograd::profiler::ProfilerState (pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})>(pybind11::detail::initimpl::pickle_factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::tuple)#30}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29} (torch::autograd::profiler::ProfilerState const&), torch::autograd::profiler::ProfilerState (pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})>&&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1133
#7 0x7fc093913397 in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1443
#8 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#9 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)

Indirect leak of 32 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a1305532 in operator new(unsigned long) (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x99532)
#1 0x7fc0935fff50 in pybind11::cpp_function::initialize_generic(pybind11::detail::function_record*, char const*, std::type_info const* const*, unsigned long) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:306
#2 0x7fc093918404 in void pybind11::cpp_function::initialize<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#2} const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, pybind11::handle>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#2} const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > (*)(pybind11::handle)) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:171
#3 0x7fc0939169ff in pybind11::cpp_function::cpp_function<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#2} const&, , void>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#2} const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc09391476b in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def_property_readonly_static<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#2}>(char const*, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#2} const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1207
#5 0x7fc0939132ba in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1384
#6 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#7 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)

Indirect leak of 32 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a1305532 in operator new(unsigned long) (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x99532)
#1 0x7fc0935fff50 in pybind11::cpp_function::initialize_generic(pybind11::detail::function_record*, char const*, std::type_info const* const*, unsigned long) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:306
#2 0x7fc09391875c in void pybind11::cpp_function::initialize<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#3} const&, pybind11::dict, pybind11::handle>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#3} const&, pybind11::dict (*)(pybind11::handle)) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:171
#3 0x7fc093916b09 in pybind11::cpp_function::cpp_function<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#3} const&, , void>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#3} const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc09391487d in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def_property_readonly_static<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#3}, pybind11::return_value_policy>(char const*, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#3} const&, pybind11::return_value_policy const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1207
#5 0x7fc0939132e4 in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1399
#6 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#7 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)

Indirect leak of 32 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a1305532 in operator new(unsigned long) (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x99532)
#1 0x7fc0935fff50 in pybind11::cpp_function::initialize_generic(pybind11::detail::function_record*, char const*, std::type_info const* const*, unsigned long) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:306
#2 0x7fc09391bdb9 in void pybind11::cpp_function::initialize<void pybind11::detail::initimpl::factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(int)#4}, pybind11::detail::void_type (*)(), torch::autograd::profiler::ProfilerState (int), pybind11::detail::void_type>::execute<pybind11::class_<torch::autograd::profiler::ProfilerState>>(pybind11::class_<torch::autograd::profiler::ProfilerState>&) &&::{lambda(pybind11::detail::value_and_holder&, int)#1}, void, pybind11::detail::value_and_holder, int, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::detail::is_new_style_constructor>(pybind11::class_<torch::autograd::profiler::ProfilerState>&&, void (*)(pybind11::detail::value_and_holder, int), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&, pybind11::detail::is_new_style_constructor const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:171
#3 0x7fc09391aa47 in pybind11::cpp_function::cpp_function<void pybind11::detail::initimpl::factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(int)#4}, pybind11::detail::void_type (*)(), torch::autograd::profiler::ProfilerState (int), pybind11::detail::void_type>::execute<pybind11::class_<torch::autograd::profiler::ProfilerState>>(pybind11::class_<torch::autograd::profiler::ProfilerState>&) &&::{lambda(pybind11::detail::value_and_holder&, int)#1}, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::detail::is_new_style_constructor, void>(pybind11::class_<torch::autograd::profiler::ProfilerState>&&, pybind11::name const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc093918a06 in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<void pybind11::detail::initimpl::factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(int)#4}, pybind11::detail::void_type (*)(), torch::autograd::profiler::ProfilerState (int), pybind11::detail::void_type>::execute<pybind11::class_<torch::autograd::profiler::ProfilerState>>(pybind11::class_<torch::autograd::profiler::ProfilerState>&) &&::{lambda(pybind11::detail::value_and_holder&, int)#1}, pybind11::detail::is_new_style_constructor>(char const*, pybind11::detail::initimpl::factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(int)#4}, pybind11::detail::void_type (*)(), torch::autograd::profiler::ProfilerState (int), pybind11::detail::void_type>::execute&&, pybind11::detail::is_new_style_constructor const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1086
#5 0x7fc093916c7a in void pybind11::detail::initimpl::factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(int)#4}, pybind11::detail::void_type (*)(), torch::autograd::profiler::ProfilerState (int), pybind11::detail::void_type>::execute<pybind11::class_<torch::autograd::profiler::ProfilerState>>(pybind11::class_<torch::autograd::profiler::ProfilerState>&) && /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/detail/init.h:234
#6 0x7fc09391494f in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(int)#4}, pybind11::detail::void_type (*)(), torch::autograd::profiler::ProfilerState (int), pybind11::detail::void_type>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(int)#4}::initimpl::factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(int)#4}, pybind11::detail::void_type (*)(), torch::autograd::profiler::ProfilerState (int), pybind11::detail::void_type>&&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1127
#7 0x7fc093913306 in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1405
#8 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#9 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)

Indirect leak of 32 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a1305532 in operator new(unsigned long) (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x99532)
#1 0x7fc0935fff50 in pybind11::cpp_function::initialize_generic(pybind11::detail::function_record*, char const*, std::type_info const* const*, unsigned long) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:306
#2 0x7fc093918cfc in void pybind11::cpp_function::initialize<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#5}, int, torch::autograd::profiler::ProfilerState, pybind11::name, pybind11::is_method, pybind11::sibling>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#5}&&, int (*)(torch::autograd::profiler::ProfilerState), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:171
#3 0x7fc093916cf8 in pybind11::cpp_function::cpp_function<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#5}, pybind11::name, pybind11::is_method, pybind11::sibling, void>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#5}&&, pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc093914a31 in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#5}>(char const*, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#5}&&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1086
#5 0x7fc093913320 in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1406
#6 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#7 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)

Indirect leak of 32 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a1305532 in operator new(unsigned long) (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x99532)
#1 0x7fc0935fff50 in pybind11::cpp_function::initialize_generic(pybind11::detail::function_record*, char const*, std::type_info const* const*, unsigned long) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:306
#2 0x7fc093918fd0 in void pybind11::cpp_function::initialize<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#6}, bool, torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*, pybind11::name, pybind11::is_method, pybind11::sibling>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#6}&&, bool (*)(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:171
#3 0x7fc093916d96 in pybind11::cpp_function::cpp_function<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#6}, pybind11::name, pybind11::is_method, pybind11::sibling, void>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#6}&&, pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc093914c0d in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#6}>(char const*, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#6}&&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1086
#5 0x7fc09391333a in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1410
#6 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#7 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)

Indirect leak of 32 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a1305532 in operator new(unsigned long) (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x99532)
#1 0x7fc0935fff50 in pybind11::cpp_function::initialize_generic(pybind11::detail::function_record*, char const*, std::type_info const* const*, unsigned long) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:306
#2 0x7fc0939191d8 in void pybind11::cpp_function::initialize<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#7}, bool, torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*, pybind11::name, pybind11::is_method, pybind11::sibling>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#7}&&, bool (*)(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:171
#3 0x7fc093916e34 in pybind11::cpp_function::cpp_function<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#7}, pybind11::name, pybind11::is_method, pybind11::sibling, void>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#7}&&, pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc093914de9 in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#7}>(char const*, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#7}&&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1086
#5 0x7fc093913354 in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1411
#6 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#7 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)

Indirect leak of 24 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a12ce30f in strdup (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x6230f)
#1 0x7fc09360050c in pybind11::cpp_function::initialize_generic(pybind11::detail::function_record*, char const*, std::type_info const* const*, unsigned long) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:385
#2 0x7fc09391875c in void pybind11::cpp_function::initialize<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#3} const&, pybind11::dict, pybind11::handle>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#3} const&, pybind11::dict (*)(pybind11::handle)) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:171
#3 0x7fc093916b09 in pybind11::cpp_function::cpp_function<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#3} const&, , void>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#3} const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc09391487d in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def_property_readonly_static<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#3}, pybind11::return_value_policy>(char const*, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#3} const&, pybind11::return_value_policy const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1207
#5 0x7fc0939132e4 in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1399
#6 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#7 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)

Indirect leak of 23 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a12ce30f in strdup (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x6230f)
#1 0x7fc09360050c in pybind11::cpp_function::initialize_generic(pybind11::detail::function_record*, char const*, std::type_info const* const*, unsigned long) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:385
#2 0x7fc093918404 in void pybind11::cpp_function::initialize<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#2} const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, pybind11::handle>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#2} const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > (*)(pybind11::handle)) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:171
#3 0x7fc0939169ff in pybind11::cpp_function::cpp_function<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#2} const&, , void>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#2} const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc09391476b in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def_property_readonly_static<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#2}>(char const*, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#2} const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1207
#5 0x7fc0939132ba in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1384
#6 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#7 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)

Indirect leak of 23 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a12ce30f in strdup (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x6230f)
#1 0x7fc0935ffca1 in pybind11::cpp_function::initialize_generic(pybind11::detail::function_record*, char const*, std::type_info const* const*, unsigned long) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:281
#2 0x7fc09391875c in void pybind11::cpp_function::initialize<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#3} const&, pybind11::dict, pybind11::handle>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#3} const&, pybind11::dict (*)(pybind11::handle)) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:171
#3 0x7fc093916b09 in pybind11::cpp_function::cpp_function<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#3} const&, , void>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#3} const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc09391487d in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def_property_readonly_static<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#3}, pybind11::return_value_policy>(char const*, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#3} const&, pybind11::return_value_policy const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1207
#5 0x7fc0939132e4 in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1399
#6 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#7 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)

Indirect leak of 22 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a12ce30f in strdup (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x6230f)
#1 0x7fc0935ffca1 in pybind11::cpp_function::initialize_generic(pybind11::detail::function_record*, char const*, std::type_info const* const*, unsigned long) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:281
#2 0x7fc093918404 in void pybind11::cpp_function::initialize<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#2} const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, pybind11::handle>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#2} const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > (*)(pybind11::handle)) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:171
#3 0x7fc0939169ff in pybind11::cpp_function::cpp_function<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#2} const&, , void>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#2} const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc09391476b in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def_property_readonly_static<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#2}>(char const*, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#2} const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1207
#5 0x7fc0939132ba in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1384
#6 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#7 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)

Indirect leak of 13 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a12ce30f in strdup (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x6230f)
#1 0x7fc0935ff412 in pybind11::cpp_function::initialize_generic(pybind11::detail::function_record*, char const*, std::type_info const* const*, unsigned long) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:192
#2 0x7fc09391c8ed in _ZN8pybind1112cpp_function10initializeIZNOS_6detail8initimpl14pickle_factoryIZNS_5enum_IN5torch8autograd8profiler13ProfilerStateEEC4IJEEERKNS_6handleEPKcDpRKT_EUlRKS9_E27_ZNSB_IJEEESE_SG_SK_EUlNS_5tupleEE28_FSO_SM_EFS9_SO_EE7executeINS_6class_IS9_JEEEJEEEvRT_DpRKT0_EUlRNS2_16value_and_holderESO_E_vJS13_SO_EJNS_4nameENS_9is_methodENS_7siblingENS2_24is_new_style_constructorEEEEvOSW_PFT0_DpT1_EDpRKT2_ /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:171
#3 0x7fc09391ada1 in pybind11::cpp_function::cpp_function<void pybind11::detail::initimpl::pickle_factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::tuple)#30}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29} (torch::autograd::profiler::ProfilerState const&), torch::autograd::profiler::ProfilerState (pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})>::execute<pybind11::class_<torch::autograd::profiler::ProfilerState>>(pybind11::class_<torch::autograd::profiler::ProfilerState>&) &&::{lambda(pybind11::detail::value_and_holder&, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})#1}, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::detail::is_new_style_constructor, void>(pybind11::class_&&, pybind11::name const) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc093919752 in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<void pybind11::detail::initimpl::pickle_factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::tuple)#30}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29} (torch::autograd::profiler::ProfilerState const&), torch::autograd::profiler::ProfilerState (pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})>::execute<pybind11::class_<torch::autograd::profiler::ProfilerState>>(pybind11::class_<torch::autograd::profiler::ProfilerState>&) &&::{lambda(pybind11::detail::value_and_holder&, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})#1}, pybind11::detail::is_new_style_constructor>(char const*, pybind11::detail::initimpl::pickle_factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::tuple)#30}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29} (torch::autograd::profiler::ProfilerState const&), torch::autograd::profiler::ProfilerState (pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})>&&, pybind11::detail::is_new_style_constructor const) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1086
#5 0x7fc09391708e in void pybind11::detail::initimpl::pickle_factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::tuple)#30}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29} (torch::autograd::profiler::ProfilerState const&), torch::autograd::profiler::ProfilerState (pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})>::execute<pybind11::class_<torch::autograd::profiler::ProfilerState>>(pybind11::class_<torch::autograd::profiler::ProfilerState>&) && /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/detail/init.h:324
#6 0x7fc093915397 in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::tuple)#30}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29} (torch::autograd::profiler::ProfilerState const&), torch::autograd::profiler::ProfilerState (pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})>(pybind11::detail::initimpl::pickle_factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::tuple)#30}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29} (torch::autograd::profiler::ProfilerState const&), torch::autograd::profiler::ProfilerState (pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})>&&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1133
#7 0x7fc093913397 in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1443
#8 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#9 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)
Indirect leak of 13 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a12ce30f in strdup (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x6230f)
#1 0x7fc0935ff412 in pybind11::cpp_function::initialize_generic(pybind11::detail::function_record*, char const*, std::type_info const* const*, unsigned long) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:192
#2 0x7fc09391c4f2 in void pybind11::cpp_function::initialize<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::tuple, torch::autograd::profiler::ProfilerState const&, pybind11::name, pybind11::is_method, pybind11::sibling>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}&&, pybind11::tuple (*)(torch::autograd::profiler::ProfilerState const&), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:171
#3 0x7fc09391acf8 in pybind11::cpp_function::cpp_function<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::name, pybind11::is_method, pybind11::sibling, void>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}&&, pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc093919515 in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}>(char const*, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}&&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1086
#5 0x7fc093917067 in void pybind11::detail::initimpl::pickle_factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::tuple)#30}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29} (torch::autograd::profiler::ProfilerState const&), torch::autograd::profiler::ProfilerState (pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})>::execute<pybind11::class_<torch::autograd::profiler::ProfilerState>>(pybind11::class_<torch::autograd::profiler::ProfilerState>&) && /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/detail/init.h:318
#6 0x7fc093915397 in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::tuple)#30}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29} (torch::autograd::profiler::ProfilerState const&), torch::autograd::profiler::ProfilerState (pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})>(pybind11::detail::initimpl::pickle_factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::tuple)#30}, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29} (torch::autograd::profiler::ProfilerState const&), torch::autograd::profiler::ProfilerState (pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#29})>&&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1133
#7 0x7fc093913397 in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1443
#8 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#9 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)
Indirect leak of 9 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a12ce30f in strdup (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x6230f)
#1 0x7fc0935ff412 in pybind11::cpp_function::initialize_generic(pybind11::detail::function_record*, char const*, std::type_info const* const*, unsigned long) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:192
#2 0x7fc09391bdb9 in void pybind11::cpp_function::initialize<void pybind11::detail::initimpl::factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(int)#4}, pybind11::detail::void_type (*)(), torch::autograd::profiler::ProfilerState (int), pybind11::detail::void_type>::execute<pybind11::class_<torch::autograd::profiler::ProfilerState>>(pybind11::class_<torch::autograd::profiler::ProfilerState>&) &&::{lambda(pybind11::detail::value_and_holder&, int)#1}, void, pybind11::detail::value_and_holder, int, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::detail::is_new_style_constructor>(pybind11::class_<torch::autograd::profiler::ProfilerState>&&, void (*)(pybind11::detail::value_and_holder, int), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&, pybind11::detail::is_new_style_constructor const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:171
#3 0x7fc09391aa47 in pybind11::cpp_function::cpp_function<void pybind11::detail::initimpl::factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(int)#4}, pybind11::detail::void_type (*)(), torch::autograd::profiler::ProfilerState (int), pybind11::detail::void_type>::execute<pybind11::class_<torch::autograd::profiler::ProfilerState>>(pybind11::class_<torch::autograd::profiler::ProfilerState>&) &&::{lambda(pybind11::detail::value_and_holder&, int)#1}, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::detail::is_new_style_constructor, void>(pybind11::class_<torch::autograd::profiler::ProfilerState>&&, pybind11::name const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc093918a06 in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<void pybind11::detail::initimpl::factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(int)#4}, pybind11::detail::void_type (*)(), torch::autograd::profiler::ProfilerState (int), pybind11::detail::void_type>::execute<pybind11::class_<torch::autograd::profiler::ProfilerState>>(pybind11::class_<torch::autograd::profiler::ProfilerState>&) &&::{lambda(pybind11::detail::value_and_holder&, int)#1}, pybind11::detail::is_new_style_constructor>(char const*, pybind11::detail::initimpl::factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(int)#4}, pybind11::detail::void_type (*)(), torch::autograd::profiler::ProfilerState (int), pybind11::detail::void_type>::execute&&, pybind11::detail::is_new_style_constructor const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1086
#5 0x7fc093916c7a in void pybind11::detail::initimpl::factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(int)#4}, pybind11::detail::void_type (*)(), torch::autograd::profiler::ProfilerState (int), pybind11::detail::void_type>::execute<pybind11::class_<torch::autograd::profiler::ProfilerState>>(pybind11::class_<torch::autograd::profiler::ProfilerState>&) && /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/detail/init.h:234
#6 0x7fc09391494f in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(int)#4}, pybind11::detail::void_type (*)(), torch::autograd::profiler::ProfilerState (int), pybind11::detail::void_type>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(int)#4}::initimpl::factory<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(int)#4}, pybind11::detail::void_type (*)(), torch::autograd::profiler::ProfilerState (int), pybind11::detail::void_type>&&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1127
#7 0x7fc093913306 in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1405
#8 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#9 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)
Indirect leak of 9 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a12ce30f in strdup (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x6230f)
#1 0x7fc0935ff412 in pybind11::cpp_function::initialize_generic(pybind11::detail::function_record*, char const*, std::type_info const* const*, unsigned long) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:192
#2 0x7fc0939181c0 in void pybind11::cpp_function::initialize<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#1}, pybind11::str, torch::autograd::profiler::ProfilerState, pybind11::name, pybind11::is_method, pybind11::sibling>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#1}&&, pybind11::str (*)(torch::autograd::profiler::ProfilerState), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:171
#3 0x7fc09391695a in pybind11::cpp_function::cpp_function<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#1}, pybind11::name, pybind11::is_method, pybind11::sibling, void>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#1}&&, pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc0939145a7 in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#1}>(char const*, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#1}&&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1086
#5 0x7fc093913298 in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1377
#6 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#7 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)
Indirect leak of 9 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a12ce30f in strdup (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x6230f)
#1 0x7fc0935ff412 in pybind11::cpp_function::initialize_generic(pybind11::detail::function_record*, char const*, std::type_info const* const*, unsigned long) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:192
#2 0x7fc09391940a in void pybind11::cpp_function::initialize<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#28}, int, torch::autograd::profiler::ProfilerState const&, pybind11::name, pybind11::is_method, pybind11::sibling>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#28}&&, int (*)(torch::autograd::profiler::ProfilerState const&), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:171
#3 0x7fc093916ed2 in pybind11::cpp_function::cpp_function<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#28}, pybind11::name, pybind11::is_method, pybind11::sibling, void>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#28}&&, pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc093914fc5 in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#28}>(char const*, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&)#28}&&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1086
#5 0x7fc09391336e in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1441
#6 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#7 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)
Indirect leak of 8 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a12ce30f in strdup (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x6230f)
#1 0x7fc0935ff412 in pybind11::cpp_function::initialize_generic(pybind11::detail::function_record*, char const*, std::type_info const* const*, unsigned long) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:192
#2 0x7fc093918cfc in void pybind11::cpp_function::initialize<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#5}, int, torch::autograd::profiler::ProfilerState, pybind11::name, pybind11::is_method, pybind11::sibling>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#5}&&, int (*)(torch::autograd::profiler::ProfilerState), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:171
#3 0x7fc093916cf8 in pybind11::cpp_function::cpp_function<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#5}, pybind11::name, pybind11::is_method, pybind11::sibling, void>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#5}&&, pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc093914a31 in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#5}>(char const*, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState)#5}&&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1086
#5 0x7fc093913320 in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1406
#6 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#7 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)
Indirect leak of 7 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a12ce30f in strdup (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x6230f)
#1 0x7fc0935ff412 in pybind11::cpp_function::initialize_generic(pybind11::detail::function_record*, char const*, std::type_info const* const*, unsigned long) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:192
#2 0x7fc093918fd0 in void pybind11::cpp_function::initialize<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#6}, bool, torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*, pybind11::name, pybind11::is_method, pybind11::sibling>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#6}&&, bool (*)(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:171
#3 0x7fc093916d96 in pybind11::cpp_function::cpp_function<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#6}, pybind11::name, pybind11::is_method, pybind11::sibling, void>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#6}&&, pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc093914c0d in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#6}>(char const*, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#6}&&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1086
#5 0x7fc09391333a in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1410
#6 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#7 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)
Indirect leak of 7 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a12ce30f in strdup (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x6230f)
#1 0x7fc0935ff412 in pybind11::cpp_function::initialize_generic(pybind11::detail::function_record*, char const*, std::type_info const* const*, unsigned long) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:192
#2 0x7fc0939191d8 in void pybind11::cpp_function::initialize<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#7}, bool, torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*, pybind11::name, pybind11::is_method, pybind11::sibling>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#7}&&, bool (*)(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:171
#3 0x7fc093916e34 in pybind11::cpp_function::cpp_function<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#7}, pybind11::name, pybind11::is_method, pybind11::sibling, void>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#7}&&, pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc093914de9 in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#7}>(char const*, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(torch::autograd::profiler::ProfilerState const&, torch::autograd::profiler::ProfilerState*)#7}&&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1086
#5 0x7fc093913354 in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1411
#6 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#7 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)
Indirect leak of 1 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a12ce30f in strdup (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x6230f)
#1 0x7fc0935ff412 in pybind11::cpp_function::initialize_generic(pybind11::detail::function_record*, char const*, std::type_info const* const*, unsigned long) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:192
#2 0x7fc093918404 in void pybind11::cpp_function::initialize<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#2} const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, pybind11::handle>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#2} const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > (*)(pybind11::handle)) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:171
#3 0x7fc0939169ff in pybind11::cpp_function::cpp_function<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#2} const&, , void>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#2} const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc09391476b in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def_property_readonly_static<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#2}>(char const*, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#2} const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1207
#5 0x7fc0939132ba in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1384
#6 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#7 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)
Indirect leak of 1 byte(s) in 1 object(s) allocated from:
#0 0x7fc0a12ce30f in strdup (/usr/lib/gcc/x86_64-linux-gnu/5/libasan.so+0x6230f)
#1 0x7fc0935ff412 in pybind11::cpp_function::initialize_generic(pybind11::detail::function_record*, char const*, std::type_info const* const*, unsigned long) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:192
#2 0x7fc09391875c in void pybind11::cpp_function::initialize<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#3} const&, pybind11::dict, pybind11::handle>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#3} const&, pybind11::dict (*)(pybind11::handle)) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:171
#3 0x7fc093916b09 in pybind11::cpp_function::cpp_function<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#3} const&, , void>(pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#3} const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:66
#4 0x7fc09391487d in pybind11::class_<torch::autograd::profiler::ProfilerState>& pybind11::class_<torch::autograd::profiler::ProfilerState>::def_property_readonly_static<pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#3}, pybind11::return_value_policy>(char const*, pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*)::{lambda(pybind11::handle)#3} const&, pybind11::return_value_policy const&) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1207
#5 0x7fc0939132e4 in pybind11::enum_<torch::autograd::profiler::ProfilerState>::enum_<>(pybind11::handle const&, char const*) /private/home/ezyang/pytorch/torch/lib/pybind11/include/pybind11/pybind11.h:1399
#6 0x7fc093910536 in THPAutograd_initExtension(_object*) torch/csrc/autograd/init.cpp:38
#7 0x556778778019 in _PyCFunction_FastCallDict (/private/home/ezyang/pytorch-env/bin/python3.6+0x10f019)
SUMMARY: AddressSanitizer: 1618364 byte(s) leaked in 692 allocation(s).
``` | module: memory usage,triaged | low | Critical |
295,959,563 | TypeScript | Missing fix suggestions when multiple come from the same JSX element | ```typescript
const element = <span />;
// [ts] 'React' refers to a UMD global, but the current file is a module. Consider adding an import instead.
```
If the compiler settings have `jsx: "react"`, it seems like adding the import automatically could be a reasonable auto-fix?
```typescript
import * as React from "react";
const element = <span />;
``` | Bug,Domain: Quick Fixes | low | Major |
296,024,372 | rust | Feature: `Rc::clone_raw` (and for Arc) | When using `from_raw`/`into_raw` functions with `Rc`, you often want to obtain a new reference to a raw pointer, without taking ownership. At the moment you have to do this dance:
```rust
fn clone_raw<T>(ptr: *const T) -> Rc<T> {
    let result = unsafe { Rc::from_raw(ptr) };
    ::std::mem::forget(result.clone());
    result
}
```
This is quite error prone and makes little sense to anyone trying to read the code. It would be better if the standard library had `clone_raw` built in for `Rc` and `Arc`, and possibly for their weak variants. | T-libs-api,C-feature-request | low | Critical |
296,058,773 | vscode | [css] propose ids used in other selectors | ### Issue Type
Bug
### Description
I have issues when I try to use suggestions for CSS files. For classes it works well, but when I use them for IDs I get no suggestions. Try it with a file with around 500 lines of code.
### VS Code Info
VS Code version: Code 1.20.0 (c63189deaa8e620f650cc28792b8f5f3363f2c5b, 2018-02-07T17:09:39.780Z)
OS version: Windows_NT x64 10.0.16299
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i5-5200U CPU @ 2.20GHz (4 x 2195)|
|Memory (System)|3.91GB (0.52GB free)|
|Process Argv|C:\Program Files\Microsoft VS Code\Code.exe|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (5)</summary>
Extension|Author (truncated)|Version
---|---|---
rainbow-brackets|2gu|0.0.6
vscode-eslint|dba|1.4.5
vsc-material-theme|Equ|1.3.0
prettier-vscode|esb|1.1.3
LiveServer|rit|3.2.0
</details>
Reproduces without extensions | feature-request,css-less-scss | low | Critical |
296,066,686 | angular | [Feature Request] [Service Worker] Background Sync | ## I'm submitting a...
<pre><code>
[ ] Regression (a behavior that used to work and stopped working in a new release)
[ ] Bug report
[x ] Feature request
[ ] Documentation issue or request
[ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question
</code></pre>
## Current behavior
Right now, the Service Worker package does not have a sync feature. Since Angular follows a plugin mechanism to add service worker APIs, I feel this is a major feature which is required.
[Background Sync](https://developers.google.com/web/updates/2015/12/background-sync)
## Expected behavior
Users should be able to register a background sync easily using the @angular/service-worker package.
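For reference, registering a sync tag with the raw browser API today looks roughly like the sketch below (`sync` is not yet in the standard TypeScript `ServiceWorkerRegistration` typings, hence the cast; the eventual @angular/service-worker API is of course not defined here):
```typescript
// Sketch: one-off Background Sync registration with the raw browser API.
if ('serviceWorker' in navigator && 'SyncManager' in window) {
  navigator.serviceWorker.ready.then((registration) => {
    // Cast because `sync` is not part of the standard typings yet.
    return (registration as any).sync.register('outbox-sync');
  });
}
```
An Angular wrapper would ideally hide this behind an injectable service, similar to SwUpdate/SwPush.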
## Minimal reproduction of the problem with instructions
N/A
## What is the motivation / use case for changing the behavior?
This is required to make sure that users are able to perform actions even on unstable internet connections.
| feature,area: service-worker,feature: under consideration | high | Critical |
296,071,764 | go | x/build/maintner/maintnerd: allow serving logs locally | Currently, maintnerd only serves on /logs if it's logging to GCS.
It doesn't look like this is for any particular reason except that the logs index hasn't been implemented for local filesystem?
/cc @bradfitz | Builders | low | Minor |
296,082,489 | rust | Step::steps_between does not distinguish overflow and unimplemented (unstable) | It returns `Option<usize>`: https://doc.rust-lang.org/std/iter/trait.Step.html#tymethod.steps_between
And the comment says it uses `None` for overflow: https://doc.rust-lang.org/src/core/iter/range.rs.html#29
But it also uses `None` for unimplemented https://doc.rust-lang.org/src/core/iter/range.rs.html#155
And as a result, `Range` needs to return `(0, None)` when it gets `None`: https://doc.rust-lang.org/src/core/iter/range.rs.html#235-240
It would be nice to either
- Require that the method is implemented accurately (easy for fundamental integers; harder if `Step` is expected to support graph walks, but maybe not harder than `PartialOrd`)
- Change the type to something else, maybe `Option<Option<usize>>` or be more hint-like with `(usize, Option<usize>)` (see the sketch after this list)
- Something else
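As a rough sketch of the second option (names and doc wording are illustrative only, not a proposed final API):
```rust
// Illustrative only: a return type that separates "can't compute"
// from "the count overflows usize".
trait StepSketch: Sized {
    /// `None`          -> the count is unknown / not implemented
    /// `Some(None)`    -> the number of steps overflows `usize`
    /// `Some(Some(n))` -> exactly `n` steps from `start` to `end`
    fn steps_between(start: &Self, end: &Self) -> Option<Option<usize>>;
}
```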
cc https://github.com/rust-lang/rust/issues/42168 | C-enhancement,T-libs-api | low | Minor |
296,108,949 | TypeScript | auto import always uses relative (to the current file) module paths in presence of baseUrl | back in the day it used to be different
- at first it was always an absolute path (that is, without "./" or "../" in it) relative to the `baseUrl`
- then starting at some point it began to always show a popup with 2 options: absolute or relative
- now (since like a week ago or so) it just always uses a relative path with "./" and "../" to the current file no matter what, without asking, which is very frustrating and annoying (see the example below)
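To illustrate the difference (the module path and symbol here are made up):
```typescript
// With "baseUrl": "./src" in tsconfig.json, the non-relative style I want:
import { formatDate } from "app/shared/date-utils";

// The style auto import always produces now (relative to the importing file):
// import { formatDate } from "../../shared/date-utils";
```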
How can I get auto import to use absolute paths like it did back in the day? | Suggestion,Awaiting More Feedback | low | Major |
296,121,392 | godot | [Bullet] Performance monitor for 3D physics not working for Bullet | **Godot version:**
v3.0-stable_x11.64
**Issue description:**
Performance monitor for 3D physics not working for Bullet, all values are 0. This issue was mentioned earlier here, among other issues: https://github.com/godotengine/godot/issues/15975
**Steps to reproduce:**
Run attached project. Monitor values are shown in window title. Cube will fall on surface and sleeping is disabled, so there should be 1 active object/collision pair/island. Run with GodotPhysics to see correct values.
**Minimal reproduction project:**
[BUgTest.zip](https://github.com/godotengine/godot/files/1713657/BUgTest.zip)
| bug,topic:editor,confirmed,topic:physics,topic:3d | low | Critical |
296,135,274 | go | x/tools/go/ast/astutil: Apply: extraneous comma after replaced parameter | Not sure if this is a bug in `astutil.Apply` or `format.Node`.
This function:
```
func applyBug() {
    src := `
package p
import foo "bar"
func f(x foo.t) {}
`
    fset := token.NewFileSet()
    file, err := parser.ParseFile(fset, "", src, 0)
    if err != nil {
        panic(err)
    }
    pre := func(c *astutil.Cursor) bool {
        if sel, ok := c.Node().(*ast.SelectorExpr); ok {
            if id, ok := sel.X.(*ast.Ident); ok && id.Obj == nil && id.Name == "foo" {
                c.Replace(&ast.Ident{Name: "foo_t"})
            }
        }
        return true
    }
    astutil.Apply(file, pre, nil)
    var buf bytes.Buffer
    if err := format.Node(&buf, fset, file); err != nil {
        panic(err)
    }
    fmt.Println(buf.String())
}
```
yields this output:
```
package p
import foo "bar"
func f(x foo_t,) {}
```
Note the comma at the end of the function param list.
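The workaround mentioned below (re-parsing with a fresh FileSet before formatting) amounts to something like this sketch:
```go
// Workaround sketch: re-parse the already-formatted source with a fresh
// *token.FileSet, then format that; the stray trailing comma disappears.
func reformat(src string) (string, error) {
    fset := token.NewFileSet()
    f, err := parser.ParseFile(fset, "", src, parser.ParseComments)
    if err != nil {
        return "", err
    }
    var buf bytes.Buffer
    if err := format.Node(&buf, fset, f); err != nil {
        return "", err
    }
    return buf.String(), nil
}
```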
Reparsing the result with a new FileSet and then re-formatting fixes the problem. | Tools | low | Critical |
296,152,313 | rust | closure return type inference doesn't consider coercions | The title is a guess as to why this code doesn't compile:
```rust
fn foo(x: Option<&mut i32>) -> Option<&i32> {
    x.map(|x| x)
}
```
```
error[E0308]: mismatched types
 --> src/main.rs:3:5
  |
3 |     x.map(|x| x)
  |     ^^^^^^^^^^^^ types differ in mutability
  |
  = note: expected type `std::option::Option<&i32>`
             found type `std::option::Option<&mut i32>`
```
while given the tiniest of type hints, it does:
```rust
fn foo(x: Option<&mut i32>) -> Option<&i32> {
    x.map(|x| -> &_ { x })
}
```
It seems that the type inference sees the closure returning `&mut i32` but doesn't consider that that coerces to the desired type. Is this unintended and/or fixable? | C-enhancement,T-compiler,A-inference,T-types | low | Critical |
296,162,858 | opencv | Design flaw of Widget in the viz module | ##### System information (version)
- OpenCV => master
##### Detailed description
By design, `cv::viz:: Widget` is implicitly shared.
https://github.com/opencv/opencv/blob/e2a99d24ec8f48ca56ead3a6beb7fe38fc067a12/modules/viz/include/opencv2/viz/widgets.hpp#L90-L92
That is, the actor (i.e., Prop) is shared between widgets.
https://github.com/opencv/opencv/blob/e2a99d24ec8f48ca56ead3a6beb7fe38fc067a12/modules/viz/src/widget.cpp#L60-L64
This is very efficient while copying widgets, since it only needs to copy a pointer.
Widgets are saved in a dictionary
https://github.com/opencv/opencv/blob/e2a99d24ec8f48ca56ead3a6beb7fe38fc067a12/modules/viz/src/precomp.hpp#L161
where the key is the widget name and the value is the prop associated with the widget.
To display a widget, its actor is added to the renderer
https://github.com/opencv/opencv/blob/601e3aaf988b8450634b931c10900633622bad04/modules/viz/src/vizimpl.cpp#L250-L251
**The problem** is that when two widgets share the same actor, `renderer_->AddActor()`
filters duplicates and only one actor is added. If one widget is removed,
https://github.com/opencv/opencv/blob/601e3aaf988b8450634b931c10900633622bad04/modules/viz/src/vizimpl.cpp#L396
the actor of the other widget is also removed from the renderer, but it remains in the widgets
dictionary.
The consequence is that we cannot see the widget, since its actor is not added to the renderer,
but we can use `getWidget`, `setWidgetPose` or other operations to modify the properties of
the widget.
##### Steps to reproduce
A minimal example is as follows:
```.cpp
#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/viz.hpp>
int main()
{
    cv::viz::Viz3d win("test window");
    cv::viz::WLine line(cv::Point3d(0,0,0), cv::Point3d(1,1,0));
    cv::viz::WLine line2(line);
    win.showWidget("my line", line);
    win.showWidget("my line2", line2);
    win.spinOnce(3000, true);
    win.removeWidget("my line");
    // At this point, my line2 is not visible
    // since its actor is removed from the renderer
    // but we can still access it.
    // Expected result: my line2 should still be visible.
    std::cout << win.getWidgetPose("my line2").matrix << "\n";
    win.spinOnce(3000, true);
    return 0;
}
```
| category: viz | low | Critical |
296,184,742 | opencv | Name mismatch for widgets.hpp and widget.cpp in the viz module. | ##### System information (version)
- OpenCV => master
##### Detailed description
The header file is called [widgets.hpp][1], whereas
the source file is named [widget.cpp][2].
[2]: https://github.com/opencv/opencv/blob/master/modules/viz/src/widget.cpp
[1]: https://github.com/opencv/opencv/blob/master/modules/viz/include/opencv2/viz/widgets.hpp
| priority: low,category: viz | low | Critical |
296,216,321 | vscode | Option to disable font ligatures in strings | Sometimes you have to list a bunch of characters in a string containing a regular expression and you don't want the characters to merge in order to more easily read the expression. | feature-request,font-rendering | medium | Critical |
296,218,625 | rust | Multiple output files created with -o foo --emit=bar -C codegen-units=1 when incremental compilation is enabled | rustc will not warn that the requested value for codegen-units is ignored when incremental compilation is enabled. If also using ```-o foo --emit=bar```, this results in multiple foo.bar files being created.
It would be useful to have a warning message that codegen-units is being overridden.
There is a similar warning present today that can be triggered with ```-C codegen-units=2 -o ir --emit=llvm-ir ``` at [config.rs#L1740-L1768](https://github.com/rust-lang/rust/blob/afa8acce251cda7ab1548640fdb769139a45f839/src/librustc/session/config.rs#L1740-L1768).
```rust
match codegen_units {
    Some(n) if n > 1 => {
        if matches.opt_present("o") {
            for ot in &incompatible {
                early_warn(error_format, &format!("--emit={} with -o incompatible with \
                                                   -C codegen-units=N for N > 1",
                                                  ot));
            }
            early_warn(error_format, "resetting to default -C codegen-units=1");
```
#### Example - With Incremental Compilation
I expect this to create a single .ll file, but instead four are received
```bash
$ cargo new --bin cgu-ignored && cd cgu-ignored
$ cargo rustc -- -o ir --emit=llvm-ir -C codegen-units=1
warning: ignoring emit path because multiple .ll files were produced
Finished dev [unoptimized + debuginfo] target(s) in 0.44 secs
$ ls *.ll
ir-a806d54fb39b2862.1y16o1qfye96o7m0.rcgu.ll ir-a806d54fb39b2862.3rngp6bm2u2q5z0y.rcgu.ll
ir-a806d54fb39b2862.3yvk9x00s7w2trum.rcgu.ll ir-a806d54fb39b2862.4xq48u46a1pwiqn7.rcgu.ll
```
#### Example - Without Incremental Compilation
With incremental compilation turned off, a single file is created as expected
```bash
$ CARGO_INCREMENTAL=0 cargo rustc -- -o ir --emit=llvm-ir -C codegen-units=1
Finished dev [unoptimized + debuginfo] target(s) in 0.31 secs
$ ls *.ll
ir-a806d54fb39b2862.ll
```
Issue recreated on rustc 1.25.0-nightly (45fba43b3 2018-02-10) | C-enhancement,T-compiler,A-incr-comp | low | Critical |
296,353,156 | rust | #[repr(align(4))] struct has 8 byte alignment | [See it live](https://play.rust-lang.org/?gist=b53af41209b9a45e5e4c180d54018d7f&version=undefined):
```rust
#![allow(non_camel_case_types)]
pub enum c_void {}
type uintptr_t = usize;
type int16_t = u16;
type uint16_t = int16_t;
type uint32_t = u32;
type intptr_t = uintptr_t;
#[repr(C)]
#[repr(align(4))]
pub struct kevent {
    pub ident: uintptr_t,
    pub filter: int16_t,
    pub flags: uint16_t,
    pub fflags: uint32_t,
    pub data: intptr_t,
    pub udata: *mut c_void,
}

fn main() {
    assert_eq!(::std::mem::align_of::<kevent>(), 4); // ERROR: 8 != 4
}
```
No warning, no error, no nothing.
This is https://github.com/rust-lang/libc/issues/914 | A-diagnostics,T-compiler,C-feature-request,A-repr,A-align | low | Critical |
296,429,479 | opencv | Integration with oss-fuzz | According to [meeting notes](https://github.com/opencv/opencv/wiki/2017#minutes-17) from 2017-05-16, there was a plan to integrate OpenCV into [oss-fuzz](https://github.com/google/oss-fuzz).
However it seems to have never happened.
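For context, a typical oss-fuzz target for OpenCV would just be a small libFuzzer entry point; an illustrative sketch (not an actual target from oss-fuzz) for the image decoders:
```cpp
// Sketch of a libFuzzer target that feeds arbitrary bytes to cv::imdecode.
#include <opencv2/imgcodecs.hpp>
#include <cstdint>
#include <cstddef>
#include <vector>

extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {
    std::vector<uint8_t> buf(data, data + size);
    try {
        cv::Mat img = cv::imdecode(buf, cv::IMREAD_UNCHANGED);
        (void)img;
    } catch (const cv::Exception&) {
        // Decoding errors on malformed input are expected, not crashes.
    }
    return 0;
}
```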
What would help you get started (beyond the cash reward from Google)? | RFC | low | Major |
296,505,698 | rust | Ship a custom LLDB with Rust support | I hear that @tromey has been doing some awesome work with Rust-specific support in LLDB, and we should enable easily shipping that work to users! This is intended to be a checklist/tracking issue of sorts of the work needed to be done to enable this.
* [x] One of the first things we'll need to do is to add it to the Rust source tree (off by default). This will probably involve adding a submodule and adding rustbuild logic to build LLDB. This will currently, as of this writing, require syncing the LLDB work to get based off the same LLVM version we have in tree, presently the `release_60` branch.
* [x] Next up we'll want to confirm that LLDB works with `sccache` and is being cached properly when using sccache. The change here would basically be ensuring that `--enable-ccache` works for LLDB locally.
* [x] After that we'll want to build LLDB on the dist bots, but not upload it yet. This'll help us evaluate impacts on cycle time, if any. The change here should mostly just be tweaking flags in travis/appveyor/dockerfiles
* [x] Next we'd add rustbuild support for creating a new LLDB component that we'd publish on the dist bots. This would probably look similar to the support for RLS (ish).
* [ ] And finally we'd teach rustup.rs about the LLDB component, allowing `rustup component add lldb` (or whatever the name is), but this is much farther down the line! | A-LLVM,T-infra,C-tracking-issue | medium | Critical |
296,571,166 | rust | Support returning elements from an array by-value | This works:
```rust
fn test_vec() -> String {
    let mut z = vec![String::from("foo"), String::from("bar")];
    z.remove(0)
}
```
But this fails:
```rust
fn test_array() -> String {
    let mut z = [String::from("foo"), String::from("bar")];
    z[0]
}
```
with a "cannot move out of" error.
(#rust-beginners advised me that this "should" intuitively work and that I should file here) | C-enhancement,A-borrow-checker,T-compiler,A-array | low | Critical |
296,573,629 | go | proposal: cmd/go: allow test binaries to identify as uncacheable | The new test caching stuff is neat, except when a test has an external dependency (e.g. it is testing code that hits a web service), and we don't want the test's result to be cached (so that we're always exercising the code against the real world).
There are ways to disable the test caching from the user's perspective (e.g. passing `-count=1`), but not from the test itself from what I can tell. It'd be nice if tests in this position could do something to indicate to the `go` tool that its result and output should not be cached.
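To make this concrete, a test with a live external dependency might look like the sketch below (the endpoint is made up, and the commented-out method is purely hypothetical — no such API exists today):
```go
package live_test

import (
    "net/http"
    "testing"
)

func TestLiveService(t *testing.T) {
    // Hypothetical call corresponding to the first idea below:
    // t.Uncacheable()

    resp, err := http.Get("https://example.com/health") // external dependency
    if err != nil {
        t.Fatal(err)
    }
    resp.Body.Close()
}
```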
Some ideas:
* Have a method on `*testing.T` that can be invoked to signal this.
* Have a specified file to touch (e.g. `$GOCACHE/something`).
* Have a naming convention for such tests (e.g. must have an `External` substring). | Proposal,GoCommand | medium | Critical |
296,599,334 | vscode | Feature Request - Make possible to undo (redo) changes in code after VS Code's been restarted | - VSCode Version: 1.21.0-ins
- OS Version: win7x64
It would be great to be able to undo (Ctrl+Z) and redo (Ctrl+Y) changes we've made in the code, **after** VS Code has been closed and restarted.
Is it hard to implement? | feature-request,editor-core,undo-redo | high | Critical |
296,611,241 | go | proposal: math/rand/v2: add function for random bool | I want to be able to call `rand.Bool()` and receive a pseudo-random bool when I import `math/rand` | Proposal,Proposal-Hold | medium | Critical |
296,614,935 | go | proposal: cmd/go: add transitive Deps for TestImports and XTestImports | `go list` provides a `.Deps` which has the recursive list of dependencies for `.Imports`. This is important because it provides the complete list of dependencies that must be fulfilled to run `go build`
Unfortunately there is no recursive dependency provided for `TestImports` or `XTestImports`. Those have a dependency tree that needs to be fulfilled in order to `go test` but it's non-trivial to retrieve that dependency list. (especially when trying to filter out stdlib entries)
### Currently
```go
// Dependency information
Imports []string // import paths used by this package
Deps []string // all (recursively) imported dependencies
TestImports []string // imports from TestGoFiles
XTestImports []string // imports from XTestGoFiles
```
### What's expected?
```go
// Dependency information
Imports []string // import paths used by this package
Deps []string // all (recursively) imported dependencies
TestImports []string // imports from TestGoFiles
TestDeps []string // all (recursively) imported dependencies from TestGoFiles
XTestImports []string // imports from XTestGoFiles
XTestDeps []string // all (recursively) imported dependencies from XTestGoFiles
```
This would allow a command like the following to output the complete set of dependencies needed to run `go build` or `go test`.
```bash
go list -f '{{join .Deps "\n"}}{{if len .TestDeps}}{{"\n"}}{{end}}{{join .TestDeps "\n"}}{{if len .XTestDeps}}{{"\n"}}{{end}}{{join .XTestDeps "\n"}}' | sort | uniq
```
Currently you have to run `go list -f '{{join .Deps "\n"}}' $import_path` on each import path from `.TestImports` and `.XTestImports`. | Proposal,Proposal-Hold | low | Major |
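A rough sketch of that multi-step workaround as a script (assuming a single package in the current directory and ignoring the stdlib filtering, which is the painful part):
```sh
# Direct deps of the package itself, plus the transitive deps of every test import.
{
  go list -f '{{join .Deps "\n"}}' .
  for pkg in $(go list -f '{{join .TestImports "\n"}}{{"\n"}}{{join .XTestImports "\n"}}' .); do
    echo "$pkg"
    go list -f '{{join .Deps "\n"}}' "$pkg"
  done
} | sort -u
```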
296,733,122 | pytorch | Weird error message in torch.split_size_or_sections | Hi everyone,
I wanted to use the new torch.split function with a list of integers.
I have an input vector of dim X (let's say 100) and a list of integers whose sum Y is less than or equal to X (Y <= X). When Y < X, an error is raised saying "Sum of split sizes exceeds tensor dim", even though the sum does not exceed the dimension.
```python
def split(tensor, split_size_or_sections, dim=0):
    if dim < 0:
        dim += tensor.dim()
    dim_size = tensor.size(dim)

    if isinstance(split_size_or_sections, int):
        split_size = split_size_or_sections
        num_splits = (dim_size + split_size - 1) // split_size
        last_split_size = split_size - (split_size * num_splits - dim_size)

        def get_split_size(i):
            return split_size if i < num_splits - 1 else last_split_size
        return tuple(tensor.narrow(int(dim), int(i * split_size), int(get_split_size(i))) for i
                     in range(0, num_splits))
    else:
        if dim_size != sum(split_size_or_sections):
            raise ValueError("Sum of split sizes exceeds tensor dim")
        split_indices = [0] + split_size_or_sections
        split_indices = torch.cumsum(torch.Tensor(split_indices), dim=0)
        return tuple(
            tensor.narrow(int(dim), int(start), int(length))
            for start, length in zip(split_indices, split_size_or_sections))
```
In this case, either the error should only be raised when the sum actually exceeds dim_size, or the function should split the tensor into chunks even when the sum is lower than dim_size.
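A minimal reproduction of the mismatch (the exact error text may differ across PyTorch versions, but the list-of-sections path rejects any sum that is not exactly equal to the dimension size):
```python
import torch

x = torch.arange(10)

print(torch.split(x, [2, 3, 5]))  # sums to 10: works
print(torch.split(x, [2, 3]))     # sums to 5 (< 10): raises, despite the "exceeds" wording
```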
OS: Mac OS X
PyTorch version: From source (https://github.com/pytorch/pytorch/commit/2b2d56d8460d335daf5aa79774442a111d424f90)
How you installed PyTorch (conda, pip, source): pip
Python version: 3.6
cc @mruberry @rgommers | triaged,module: numpy | low | Critical |
296,799,841 | opencv | TS module does not map SIGFPE to TS::FAIL_ARITHM_EXCEPTION in ts.cpp |
##### System information (version)
- OpenCV => **2.4.13.5**
- Operating System / Platform => **Linux / Ubuntu 16.04**
- Compiler => **linux gcc 5.4**
##### Detailed description
The **TS** module has a signal handler that handles signals from the OS and maps them to an appropriate test exception so the cause of the test failure is known. However, from the code in **ts.cpp at line no: 216**, it throws `_code;` instead of a `cv::Exception` object, due to which `SIGFPE` is not mapped to an appropriate **TS** error code. Is there any reason it is done like this? I am curious to know why it was left out.
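A hypothetical sketch of the kind of mapping being asked about (this is not the actual ts.cpp code; `FAIL_ARITHM_EXCEPTION` comes from the title, and the generic fallback constant is an assumption):
```cpp
#include <csignal>
#include "opencv2/ts/ts.hpp"  // for the cvtest::TS failure codes

// Hypothetical helper, not taken from ts.cpp: translate a raw signal number into a
// TS failure code before reporting, so SIGFPE is recorded as an arithmetic exception
// rather than an unmapped integer being thrown.
static int tsFailureCodeFromSignal(int sig_code)
{
    if (sig_code == SIGFPE)
        return cvtest::TS::FAIL_ARITHM_EXCEPTION;
    return cvtest::TS::FAIL_EXCEPTION; // assumed generic fallback code
}
```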
Thank you,
Mani
##### Steps to reproduce
Execute the following program as a simple C++ program by linking to **opencv_ts** and its dependencies **opencv_core, tbb, pthread** and **libz**.
```c++
//============================================================================
// Name : learn_opencv_ts.cpp
// Author : Mani Kumar
// Version :
// Copyright : Free2Copy
// Description : Hello World in C++, Ansi-style
//============================================================================
/*
* This module is to learn OpenCV by mani
*/
//============================================================================
// 1. test_precomp.hpp
#ifndef __OPENCV_TEST_PRECOMP_HPP__
#define __OPENCV_TEST_PRECOMP_HPP__
// Module dependencies
#include "opencv2/ts/ts.hpp"
// Constants
#endif // end of __OPENCV_TEST_PRECOMP_HPP__
//============================================================================
//============================================================================
// 2. module under test
#ifndef __OPENCV_MANI_EXPS_HPP__
#define __OPENCV_MANI_EXPS_HPP__
namespace cv
{
class LearnTSModule
{
public:
LearnTSModule(int a=0, int b=0): m_a(a), m_b(b){};
int addNums() const { return m_a + m_b; };
double divideAbyB() const { return m_a / m_b; };
private:
int m_a;
int m_b;
};
}
#endif // __OPENCV_MANI_EXPS_HPP__
//============================================================================
//============================================================================
// 3. test code for module
//#include "test_precomp.hpp"
using namespace cv;
using namespace std;
class CV_LearnTSModuleTest : public cvtest::BaseTest
{
public:
void run(int)
{
ts->printf(cvtest::TS::LOG, "start testing LearnTSModule\n");
// Write tests for the test suite
EXPECT_EQ(3, LearnTSModule(1, 2).addNums());
EXPECT_EQ(6, LearnTSModule(4, 2).addNums());
EXPECT_EQ(2.0f, LearnTSModule(4, 2).divideAbyB());
EXPECT_EQ(0.0f, LearnTSModule(4, 0).divideAbyB()) << "Expected to get FPE!\n";
ts->printf(cvtest::TS::LOG, "start testing LearnTSModule\n");
}
};
// Write test case i.e. test suite
TEST(ManiExps_LearnTSModule, simple_math_ops) { CV_LearnTSModuleTest test; test.safe_run(); }
//============================================================================
//============================================================================
// 4. test main func
//#include "test_precomp.hpp"
CV_TEST_MAIN("test_opencv_ts")
//============================================================================
``` | test,priority: low | low | Critical |
296,820,194 | javascript | (question) eslint rule / plugin for guideline 7.15 | is there an eslint rule or plugin that enforces [7.15](https://github.com/airbnb/javascript#functions--signature-invocation-indentation)? | pull request wanted,editorial,needs eslint rule change/addition | low | Major |
296,842,023 | react-native | Android native UI components are not re-layout on dynamically added views | ### Is this a bug report?
Yes.
### Have you read the [Contributing Guidelines](https://facebook.github.io/react-native/docs/contributing.html)?
Yes.
### Environment
Environment:
OS: macOS High Sierra 10.13.2
Node: 8.9.4
Yarn: 1.3.2
npm: 5.6.0
Watchman: 4.9.0
Xcode: Xcode 9.2 Build version 9C40b
Android Studio: 3.0 AI-171.4443003
Packages: (wanted => installed)
react: 16.3.1 => 16.3.1
react-native: 0.55.4 => 0.55.4
### Description
The issue can be noticed if you bridge the following views to React Native:
A view whose elements have their visibility set to `gone` on the **initial render** won't display them after you've set their visibility to `visible`. `view.isShown()` will return `true`, but the element will either not be there, or be there but not actually re-laid out.
A view with elements that are dynamically added, simply by `view.addView()`, or, say, an image loaded with Fresco, will only work if the element was added or loaded on the initial render.
I've noticed that native components are re-laid out on hot reloading, but `this.forceUpdate()` or changing props won't trigger a re-layout. As an ugly workaround, interacting with the native component's height or width will trigger a re-layout, so every time you want to toggle visibility from gone to visible or dynamically add views, you can alter its size.
I've also implemented `needsCustomLayoutForChildren` without notable change.
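A workaround often suggested for this class of problem (not part of the original repro, and untested against this exact setup): manually re-measure and re-layout the native view after mutating it, since React Native owns the layout pass for native children.
```java
import android.view.View;

public final class LayoutHack {
    // Force a synchronous measure/layout pass on a view whose children were changed
    // outside of React Native's layout system (visibility toggles, addView, etc.).
    public static void relayout(View view) {
        view.measure(
                View.MeasureSpec.makeMeasureSpec(view.getWidth(), View.MeasureSpec.EXACTLY),
                View.MeasureSpec.makeMeasureSpec(view.getHeight(), View.MeasureSpec.EXACTLY));
        view.layout(view.getLeft(), view.getTop(), view.getRight(), view.getBottom());
    }
}
```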
### Expected Behavior (Native)
Here's the native implementation directly inflated inside an Activity.
<img src="https://user-images.githubusercontent.com/7189823/36165758-95c1b2ce-10be-11e8-85dc-8b801a0a3705.gif" height="400" />
### Actual Behavior (React Native)
Here's the exact same layout as above, but bridged to react native and inflated inside `SimpleViewManager`.
<img src="https://user-images.githubusercontent.com/7189823/36165505-bf90ddf6-10bd-11e8-8983-f351c7be2c00.gif" height="400" />
### Reproducible Demo
https://github.com/charpeni/react-native-android-visibility-issue
Related to #5531 (Already flagged in RN 0.18). | Ran Commands,Issue: Author Provided Repro,Platform: Android,Resolution: Backlog,Bug,Never gets stale | high | Critical |