id (int64, 393k to 2.82B) | repo (stringclasses, 68 values) | title (stringlengths, 1 to 936) | body (stringlengths, 0 to 256k) | labels (stringlengths, 2 to 508) | priority (stringclasses, 3 values) | severity (stringclasses, 3 values) |
---|---|---|---|---|---|---|
589,193,164 |
PowerToys
|
Add ability to reopen previously closed apps
|
# Summary of the new feature/enhancement
Currently, browsers implement Ctrl+Shift+T to re-open previously closed tabs. A wonderful feature (if possible) would be to have the ability to re-open previously closed applications from the current user + current session with a designated shortcut key.
# Proposed technical implementation details (optional)
<!--
A clear and concise description of what you want to happen.
-->
|
Idea-New PowerToy
|
low
|
Minor
|
589,278,777 |
go
|
x/crypto/ssh: connection.session.Close() returns an EOF error instead of nil
|
<!--
Please answer these questions before submitting your issue. Thanks!
For questions please use one of our forums: https://github.com/golang/go/wiki/Questions
-->
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.14.1 darwin/amd64
$
</pre>
### Does this issue reproduce with the latest release?
Yes, I believe I am using the latest stable release.
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/my_username/Library/Caches/go-build"
GOENV="/Users/my_username/Library/Application Support/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOINSECURE=""
GONOPROXY=""
GONOSUMDB=""
GOOS="darwin"
GOPATH="/Users/my_username/Learn/go_learn"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/local/go"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/darwin_amd64"
GCCGO="gccgo"
AR="ar"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD="/Users/my_username/Learn/go_learn/src/my_stuff/ssh_test/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/17/vmjmghsj28d4lzrx7mqshr9r0000gp/T/go-build681325874=/tmp/go-build -gno-record-gcc-switches -fno-common"
$
</pre></details>
### What did you do?
My program connects to an SSH server (just my laptop in this case), issues an SSH command, then closes the SSH session and connection:
```
package main
import (
"bytes"
"fmt"
"strings"
"golang.org/x/crypto/ssh"
)
func main() {
sshConfig := &ssh.ClientConfig{
User: "my_username",
Auth: []ssh.AuthMethod{
ssh.Password("my password"),
},
HostKeyCallback: ssh.InsecureIgnoreHostKey(),
}
connection, err := ssh.Dial("tcp", "localhost:22", sshConfig)
if err != nil {
fmt.Printf("Failed to dial: %s\n", err)
return
}
defer func() {
if err := connection.Close(); err != nil {
fmt.Println("Received an error closing the ssh connection: ", err)
} else {
fmt.Println("No error found closing ssh connection")
}
}()
session, err := connection.NewSession()
if err != nil {
fmt.Printf("Failed to create session: %s\n", err)
return
}
defer func() {
if err := session.Close(); err != nil {
fmt.Println("Received an error closing the ssh session: ", err)
} else {
fmt.Println("No error found closing ssh session")
}
}()
fmt.Println("created session")
var stdOut bytes.Buffer
var stdErr bytes.Buffer
session.Stdout = &stdOut
session.Stderr = &stdErr
err = session.Run("pwd")
fmt.Println("Executed command")
fmt.Println("Command stdOut is:", strings.TrimRight(stdOut.String(), "\n"), " --- stdError is:", strings.TrimRight(stdErr.String(), "\n"))
}
```
### What did you expect to see?
At session.Close(), I was expecting to see nil returned instead of any error.
### What did you see instead?
I'm getting an EOF error from session.Close(). Looking at other issues raised in this tracker (https://github.com/golang/go/issues/16194), it sounds reasonable to get the EOF at the end of Run(). However, it seems odd for Close() to return it as an error instead of returning nil.
It could be that I'm not handling something properly in terms of managing the buffers, executing the command, or closing the session, but the code I pasted above is based on examples online and appears to be the accepted pattern.
EOF at close is something I can handle, but to be very clear and precise: I believe Close() should return nil, and EOF doesn't seem like an error either.
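For what it's worth, a minimal sketch of the obvious workaround (treating the EOF from Close() as benign); `closeQuietly` is just a hypothetical helper name, not part of x/crypto/ssh:
```go
package main

import (
    "errors"
    "fmt"
    "io"

    "golang.org/x/crypto/ssh"
)

// closeQuietly treats io.EOF from Session.Close() as a clean close, since the
// session appears to already be torn down after Run() in the example above.
func closeQuietly(s *ssh.Session) {
    if err := s.Close(); err != nil && !errors.Is(err, io.EOF) {
        fmt.Println("Received an error closing the ssh session:", err)
    }
}

func main() {} // empty main so the sketch compiles on its own
```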
|
NeedsInvestigation
|
low
|
Critical
|
589,280,193 |
go
|
crypto/x509: support policyQualifiers in certificatePolicies extension
|
Currently `x509.ParseCertificate`/`x509.CreateCertificate` supports automatically parsing out the `policyIdentifier`s from a `certificatePolicies` extension (into `Certificate.PolicyIdentifiers`) but ignores the optional `policyQualifiers` sequence. This field is often used to transmit a CPS pointer URL (for root and intermediate certificates) and a user notice (for end entity certificates).
It'd be great if we could get automatic parsing for this full structure, instead of just the OIDs, so that we don't have to do extra post-parse processing of the extensions to get the full value and/or manually construct the extension and stick it in `ExtraExtensions` for creation.
RFC 5280 only defines two possible qualifier types, `id-qt-cps` and `id-qt-unotice`, the values of both of which can be safely mapped to and from a string, so I don't think we need anything fancier than that. I think the simplest implementation would be to add a new field to `Certificate` with the following structure:
```
CertificatePolicies []struct{
Id asn1.ObjectIdentifier
Qualifiers []struct{
Id asn1.ObjectIdentifier
Qualifier string
}
}
```
This would then be populated during parsing, and marshaled into an extension during creation. There is a question of what to do with the existing `PolicyIdentifiers` field, i.e. if both are populated, how should a call to `CreateCertificate` behave? I think for now it'd make sense to document that only one of them is allowed, and that populating both would result in an error.
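For reference, a rough sketch of the kind of extra post-parse handling currently required; the struct names follow RFC 5280 and are illustrative, not the proposed API:
```go
package main

import (
    "crypto/x509"
    "encoding/asn1"
)

// id-ce-certificatePolicies (2.5.29.32)
var oidCertificatePolicies = asn1.ObjectIdentifier{2, 5, 29, 32}

type policyQualifierInfo struct {
    ID        asn1.ObjectIdentifier
    Qualifier asn1.RawValue // IA5String for id-qt-cps, UserNotice for id-qt-unotice
}

type policyInformation struct {
    Policy     asn1.ObjectIdentifier
    Qualifiers []policyQualifierInfo `asn1:"optional"`
}

// certificatePolicies re-parses the raw certificatePolicies extension so the
// qualifiers are available alongside the OIDs.
func certificatePolicies(cert *x509.Certificate) ([]policyInformation, error) {
    var policies []policyInformation
    for _, ext := range cert.Extensions {
        if ext.Id.Equal(oidCertificatePolicies) {
            if _, err := asn1.Unmarshal(ext.Value, &policies); err != nil {
                return nil, err
            }
            break
        }
    }
    return policies, nil
}

func main() {} // empty main so the sketch compiles on its own
```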
cc @FiloSottile
|
NeedsInvestigation
|
low
|
Critical
|
589,286,497 |
terminal
|
Feature Request: Background color transparency
|
# Description of the new feature/enhancement
This feature would add a setting to change the opacity of the text background when it is set. This is _not_ the screen background which already has this feature. It is also not per-color transparency. The rationale for this is that I have an image background, but I also use vim a lot which tends to want to set a solid background.
Basically, I can do this:

Or this:

And I'd like to be able to do (roughly) this:

It'd look a little different because some solid background is still there, but that should give you the general idea.
# Proposed technical implementation details (optional)
Add a profile setting `backgroundOpacity`. If this is lower than 1, when a cell is printed with a custom background color, that color is blended with the default terminal background instead of being fully opaque.
Bonus: `backgroundBlend` - Different [color blend modes](https://en.wikipedia.org/wiki/Blend_modes).
I'll also accept pointers on how to add these settings and pipe that info through to the renderer :).
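To make the proposed blend concrete, here is a rough standalone sketch; apart from the proposed `backgroundOpacity` name, everything here is made up for illustration and is not renderer code:
```cpp
#include <cstdio>

struct Rgb { float r, g, b; };

// Blend a cell's custom background color over the terminal's default
// background, using the proposed backgroundOpacity as the alpha.
Rgb BlendCellBackground(Rgb cell, Rgb terminalBg, float backgroundOpacity)
{
    auto mix = [&](float c, float t) { return backgroundOpacity * c + (1.0f - backgroundOpacity) * t; };
    return { mix(cell.r, terminalBg.r), mix(cell.g, terminalBg.g), mix(cell.b, terminalBg.b) };
}

int main()
{
    // e.g. vim's solid dark-grey background at 50% over a near-black terminal background
    Rgb blended = BlendCellBackground({0.16f, 0.16f, 0.16f}, {0.05f, 0.05f, 0.05f}, 0.5f);
    std::printf("%.2f %.2f %.2f\n", blended.r, blended.g, blended.b);
}
```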
|
Help Wanted,Area-Rendering,Area-Settings,Product-Terminal,Issue-Task
|
low
|
Major
|
589,294,385 |
flutter
|
Image gaplessPlayback keeps evicted images in memory
|
This is most likely intended behavior due to how gaplessPlayback works, but it may be a bug.
When TickerMode.of returns false, Image widgets where gaplessPlayback is false remove their reference to the loaded Image. However, if gaplessPlayback is true, the Image widget continues to reference the loaded Image.
Once the image is evicted from the image cache, it doesn't get garbage collected if there's an Image widget with gaplessPlayback set to true. It also isn't tracked in the live image cache, so when the image comes back on screen (TickerMode.of returning true), the image will load a second time because it isn't present in the image cache.
This causes an increase in memory pressure if gaplessPlayback is set to true for an Image widget containing a large image.
We've mitigated this by removing gaplessPlayback, since we don't need it anymore.
Currently working on a minimal testcase.
cc @dnfield
|
framework,c: performance,a: images,perf: memory,P2,team-framework,triaged-framework
|
low
|
Critical
|
589,301,744 |
pytorch
|
caffe2 `DEPTHWISE3x3.Conv` test is broken
|
## 🐛 Bug
The `DEPTHWISE3x3.Conv` test fails with the following lengthy error message:
```
$ ./bin/depthwise3x3_conv_op_test | tail -n 50
/home/nshulga/git/pytorch-worktree/caffe2/share/contrib/depthwise/depthwise3x3_conv_op_test.cc:144: Failure
Value of: relErr <= maxRelErr || absErr <= absErrForRelErrFailure
Actual: false
Expected: true
75 70 (rel err 0.068965516984462738) (1 1 91 78) running N 2 inputC 2 H 90 W 84 outputC 2 kernelH 3 kernelW 3 strideH 1 strideW 1 padT 2 padL 2 padB 2 padR 2 group 2
/home/nshulga/git/pytorch-worktree/caffe2/share/contrib/depthwise/depthwise3x3_conv_op_test.cc:144: Failure
Value of: relErr <= maxRelErr || absErr <= absErrForRelErrFailure
Actual: false
Expected: true
75 70 (rel err 0.068965516984462738) (1 1 91 79) running N 2 inputC 2 H 90 W 84 outputC 2 kernelH 3 kernelW 3 strideH 1 strideW 1 padT 2 padL 2 padB 2 padR 2 group 2
/home/nshulga/git/pytorch-worktree/caffe2/share/contrib/depthwise/depthwise3x3_conv_op_test.cc:144: Failure
Value of: relErr <= maxRelErr || absErr <= absErrForRelErrFailure
Actual: false
Expected: true
75 70 (rel err 0.068965516984462738) (1 1 91 80) running N 2 inputC 2 H 90 W 84 outputC 2 kernelH 3 kernelW 3 strideH 1 strideW 1 padT 2 padL 2 padB 2 padR 2 group 2
/home/nshulga/git/pytorch-worktree/caffe2/share/contrib/depthwise/depthwise3x3_conv_op_test.cc:144: Failure
Value of: relErr <= maxRelErr || absErr <= absErrForRelErrFailure
Actual: false
Expected: true
75 70 (rel err 0.068965516984462738) (1 1 91 81) running N 2 inputC 2 H 90 W 84 outputC 2 kernelH 3 kernelW 3 strideH 1 strideW 1 padT 2 padL 2 padB 2 padR 2 group 2
/home/nshulga/git/pytorch-worktree/caffe2/share/contrib/depthwise/depthwise3x3_conv_op_test.cc:144: Failure
Value of: relErr <= maxRelErr || absErr <= absErrForRelErrFailure
Actual: false
Expected: true
75 70 (rel err 0.068965516984462738) (1 1 91 82) running N 2 inputC 2 H 90 W 84 outputC 2 kernelH 3 kernelW 3 strideH 1 strideW 1 padT 2 padL 2 padB 2 padR 2 group 2
/home/nshulga/git/pytorch-worktree/caffe2/share/contrib/depthwise/depthwise3x3_conv_op_test.cc:144: Failure
Value of: relErr <= maxRelErr || absErr <= absErrForRelErrFailure
Actual: false
Expected: true
75 70 (rel err 0.068965516984462738) (1 1 91 83) running N 2 inputC 2 H 90 W 84 outputC 2 kernelH 3 kernelW 3 strideH 1 strideW 1 padT 2 padL 2 padB 2 padR 2 group 2
/home/nshulga/git/pytorch-worktree/caffe2/share/contrib/depthwise/depthwise3x3_conv_op_test.cc:144: Failure
Value of: relErr <= maxRelErr || absErr <= absErrForRelErrFailure
Actual: false
Expected: true
50 45 (rel err 0.10526315867900848) (1 1 91 84) running N 2 inputC 2 H 90 W 84 outputC 2 kernelH 3 kernelW 3 strideH 1 strideW 1 padT 2 padL 2 padB 2 padR 2 group 2
/home/nshulga/git/pytorch-worktree/caffe2/share/contrib/depthwise/depthwise3x3_conv_op_test.cc:144: Failure
Value of: relErr <= maxRelErr || absErr <= absErrForRelErrFailure
Actual: false
Expected: true
25 20 (rel err 0.2222222238779068) (1 1 91 85) running N 2 inputC 2 H 90 W 84 outputC 2 kernelH 3 kernelW 3 strideH 1 strideW 1 padT 2 padL 2 padB 2 padR 2 group 2
[ FAILED ] DEPTHWISE3x3.Conv (143 ms)
[----------] 1 test from DEPTHWISE3x3 (143 ms total)
[----------] Global test environment tear-down
[==========] 1 test from 1 test case ran. (143 ms total)
[ PASSED ] 0 tests.
[ FAILED ] 1 test, listed below:
[ FAILED ] DEPTHWISE3x3.Conv
1 FAILED TEST
```
## To Reproduce
1. Check out pytorch at `b33e38ec475017868534eb114741ad32c9d3b248` on Linux
2. Build it without GPU or MKL_DNN acceleration (i.e. invoke with `-DUSE_CUDA=NO -DUSE_MKLDNN=OFF`)
3. Compile and run `depthwise3x3_conv_op_test`
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
## Environment
Collecting environment information...
PyTorch version: 1.5.0
Is debug build: No
CUDA used to build PyTorch: 10.1
OS: Fedora release 30 (Thirty)
GCC version: (GCC) 9.2.1 20190827 (Red Hat 9.2.1-1)
CMake version: version 3.16.4
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.1.168
GPU models and configuration: GPU 0: GeForce RTX 2080
Nvidia driver version: 440.31
cuDNN version: /usr/lib64/libcudnn.so.7.5.0
Versions of relevant libraries:
[pip3] numpy==1.16.4
[pip3] torch==1.5.0
[pip3] torchvision==0.6.0.dev20200326+cu101
[conda] Could not collect
## Additional context
cc @mruberry @VitalyFedyunin
|
caffe2,module: tests,triaged
|
low
|
Critical
|
589,319,969 |
youtube-dl
|
ESPN.CO.UK
|
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2020.03.24. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that site you are requesting is not dedicated to copyright infringement, see https://yt-dl.org/copyright-infringement. youtube-dl does not support such sites. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
- Search the bugtracker for similar site support requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Finally, put x into all relevant boxes (like this [x])
-->
- [x] I'm reporting a new site support request
- [x] I've verified that I'm running youtube-dl version **2020.03.24**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that none of provided URLs violate any copyrights
- [x] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
https://www.espn.co.uk/video/clip/_/id/25103188
## Description
Sorry in advance if I should have opened a separate issue but...
The same clip is also viewable at:
https://www.espn.com/video/clip/_/id/25103188
and the ESPN extractor is invoked; however, the ESPN extractor failed.
So
1) The ESPN extractor should also recognize espn.co.uk (not fall back to generic)
...and
2) The ESPN extractor might need some attention.
Thanks as always
Ringo
|
site-support-request
|
low
|
Critical
|
589,329,054 |
vscode
|
Can't drag files from a remote vscode window to a local window
|
- Have remote and local windows open
- Drag a file from the remote explorer to the local explorer
- Nothing happens
I can do this local -> remote to copy files, and I can do this from a remote vscode window to Finder, but not remote -> local vscode windows.
|
feature-request,file-explorer
|
medium
|
Critical
|
589,404,563 |
PowerToys
|
Memorized clipboard entries
|
Summary: Support assigning key-bindings to clipboard history items.
I occasionally need to run a command over and over again on a number of files.
I typically get my list of files ready as a \n-delimited list of paths. To open them, I cut their paths to the clipboard, switch to a console and paste. Once they are open, I need to type a command into the integrated shell. I would love to be able to paste the command, but because I copied the paths onto the clipboard buffer, the command is no longer at my fingertips. Using the Clipboard history feature of Windows helps, but as I open more files, the command gets pushed further and further down the history list.
As an enhancement to the history list, I would love to be able to assign a key-binding to a particular entry. No matter how far the entry was pushed down the list, the key-binding would let me select it quickly.
When the entry was removed from the list, the key binding should also be freed up.
I suggest the key-binding only work after opening the clipboard history with Win+V, but perhaps a global key binding would be useful to some people too.
|
Idea-New PowerToy
|
medium
|
Major
|
589,410,409 |
TypeScript
|
Considering limiting hover length
|
<!--
Please try to reproduce the issue with the latest published version. It may have already been fixed.
For npm: `typescript@next`
This is also the 'Nightly' version in the playground: http://www.typescriptlang.org/play/?ts=Nightly
-->
**TypeScript Version:** 3.7.x-dev.201xxxxx
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:** hover, length
**Code**
```ts
// A *self-contained* demonstration of the problem follows...
// Test this by running `tsc` on the command-line, rather than through another build tool such as Gulp, Webpack, etc.
```
**Expected behavior:** JSDocs can always be shown
**Actual behavior:** JSDocs is hidden because hover is too long


**Playground Link:** https://www.typescriptlang.org/play/#code/FAegVGwARlAqALApgTygYwPYFsAOAbAQwEsA7JAEygBcFDqoBHAV2PQGsoyAzTKXgE79mpdNWKZSAZygB3ZAKRRCNFLiW5CAwtiTUkQ4jJ4HFVan0KkomAEYArJGIxWoAcz1RFhfPjQA3A1tMKSUyKFJMBkCBNGZQ7mZ8OUIUKQAaZVIqYjxMAWoraj8aBCMbWyl0ZkUZWiUAKQBlABFMdAwcXVJqKQA6KABJBlCkbBl8YnYlesVATAIZAUwcGj5c3CXA0qUWNk4ePiQAD3UBYiRRJQUlAAoyfW0xYi3jzWyQLF9CXFCAfgBKTK2ZgMeooOZUWxXb64NDEbhQABqTSgAGFMBQlPZ4iMEJhZJRtlBmm0OpJVrgBgBRQhSc5CCxQUZyOgMQZQXSucKEJ6STJSPjsxTobz6KgqKoXLQSJm5YhEBl8eT0bZoKR4glUXQALmgsAAApptNgmMwDGhEDszbF+PkoLYiAh7Y7nYQEHqoPrjjoCEgPQADQP2KQe7hkCgAeXINwA3q6nQBff4AbgDgY9HoAgtlVlAKHwBbpaGQ3Hn4dwDBdqLqYFA0-7g6Hw1GkLHbZggVpMsr9DEoEn63qQMBMegFUpEqJxOSw9kWwAeOAAPhuLHN2vg-w3AAUltgjEh5wAlJCEfOkPyLpdL1OgECmvZcUi8KDXYCzyPRmPQGzkDcARnSH9qFkTANwAJiAqBSkUJANwAZig20ag3AAWJCw0CDcAFYkLpI4NwANjwpBAlIDcAHYkKQYg3AQasoAADmAAcgA
**Related Issues:** https://github.com/microsoft/vscode/issues/92787
It seems certain type hovers are formatted with one property per line. I'm wondering if you could adopt this behavior only when the total number of lines does not exceed a threshold (like 10 lines), so it wouldn't affect JSDocs.
|
Suggestion,Awaiting More Feedback
|
low
|
Minor
|
589,433,605 |
node
|
child_process.exec/execFile docs have some inconsistencies and inaccuracies
|
I don't have time to fix this now, but while looking at sec issues related to these APIs, I found some oddities.
https://nodejs.org/api/child_process.html#child_process_child_process_execfile_file_args_options_callback says
> shell <boolean> | <string> If true, runs command inside of a shell.
But execFile() doesn't have a `command` argument... this was pasted from exec(), it seems. Probably what happens is that if there is a shell, then `file` and `args` are all concatenated together, `' '` separated, and passed to the shell.
Since the shell option AFAICT ends up following the same path as from exec(), it suggests that the exec docs:
> shell <string> Shell to execute the command with. See Shell Requirements and Default Windows Shell. Default: '/bin/sh' on Unix, process.env.ComSpec on Windows.
are incomplete; probably `false` would work just fine as an argument there, making exec() behave exactly like execFile().
This seems to be a bit legacy as well:
> The child_process.execFile() function is similar to child_process.exec() except that it does not spawn a shell by default. Rather, the specified executable file is spawned directly as a new process making it slightly more efficient than child_process.exec().
Now that both exec() and execFile() have a shell option, differing only by the default value, it's probably more accurate to say the difference is that one takes an array of strings as an argument, `execFile(file, argv, ...)`, and the other takes a single string, `exec(command, ...)`.
The text following is now wrong:
> The same options as child_process.exec() are supported. Since a shell is not spawned, behaviors such as I/O redirection and file globbing are not supported.
It can't both *support* the same options, and *not support* some of the options.
It should probably say "If a shell is ..." (only one word different, but it's important).
exec should probably have docs saying the same thing: shell behaviours are not supported when shell is `false`.
And execFile() should probably include the warnings from exec about how shell special chars vary by platform.
Some of these issues are shared with the "sync" versions of the APIs.
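For readers skimming, a small sketch of the surface-level call-shape difference discussed above; it only demonstrates the two call shapes, not the shell-option internals in question:
```js
const { exec, execFile } = require('child_process');

// exec(): a single command string, run through a shell by default.
exec('node -v', (err, stdout) => {
  if (err) throw err;
  console.log('exec:', stdout.trim());
});

// execFile(): a file plus an argv array, no shell by default.
execFile('node', ['-v'], (err, stdout) => {
  if (err) throw err;
  console.log('execFile:', stdout.trim());
});
```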
|
child_process,doc
|
low
|
Minor
|
589,477,012 |
godot
|
[3.x] GDScript autocompletion errors with "Node not found" when using `get_node` with an absolute `/root` path
|
**OS: Microsoft Windows [Version 10.0.18363.720] 64bit**
**Godot: v3.2.1.stable.mono.official 64bit**
It seems get_node (same with $) is not working as expected.
**Specify steps to reproduce:**
Suppose you have the following tree:
/root/Character
/root/Character/Sword
And a script attached to Sword.
Inside that script you have:
func _ready() -> void:
    var bug = get_node("/root/Character")
If you now type `bug.` below `var bug`, you get "Node not found: /root/Character." in the output window of the editor.
[Get_node_bug.zip](https://github.com/godotengine/godot/files/4396428/Get_node_bug.zip)
And you don't get code completion for properties like position if the Character is a Node for example.
|
bug,topic:gdscript,topic:editor,confirmed
|
medium
|
Critical
|
589,481,045 |
flutter
|
Widget focus highlighting for iOS/iPadOS
|
Sister bug to https://github.com/flutter/flutter/issues/43365 for iPad keyboard support since it's different from macOS.
This also raises the question of whether we should attempt to detect platform in Cupertino and have different highlighting behavior depending on the platform.
However, figuring out what the focus highlight spec should be is non-trivial. After testing a bit, iPadOS (13.4) doesn't really have a (consistent) focus highlight handling strategy. It's also definitely different from macOS.

The task switcher looks the most polished. Springboard quick search looks similar, so does Siri search results. Based on our experience with other glass-frosty things like the action sheet, it's likely a combination of overlay and color dodge.

UITableView has a slightly different style. It's just a simple overlay. Note it's not super useful: you can't focus on anything other than the master list, and the text field is the only other thing focusable. Pressing tab just inserts tabs.

WebKit itself has yet another style. The simplest. A blue box.
|
a: text input,platform-ios,framework,f: material design,a: accessibility,a: fidelity,f: cupertino,f: focus,team-design,triaged-design
|
low
|
Critical
|
589,489,427 |
terminal
|
Allow the user to set the height of the tab row
|
# Please reduce the height of the title bar; there is no need to have that big a chunk of area on top of the terminal.
<!--
A clear and concise description of what the problem is that the new feature would solve.
Describe why and how a user would use this new functionality (if applicable).
-->
# Proposed: reduce it by 1/3 or 2/5.
<!--
A clear and concise description of what you want to happen.
-->
|
Area-Settings,Product-Terminal,Issue-Task,Area-Theming
|
medium
|
Critical
|
589,519,834 |
pytorch
|
libtorch_global_deps.so not found.
|
## Bug
The following CDLL call expects `libtorch_global_deps.so` to be present, which is not always the case.
https://github.com/pytorch/pytorch/blob/f1d69cb2f848d07292ad69d7801b8b0b73a42b5d/torch/__init__.py#L85-L89
In our current setup, which is a little bit special, we want to use pytorch in both C++ and Python, so we do a cmake build first, install to /usr/local/lib, and then call setup.py on the same build dir to build the Python wheel.
Currently there's no `lib` dir under `/usr/local/lib/python3.6/dist-packages/torch`.
A similar thing also happened with torch_shm_manager, which is required under `bin`.
https://github.com/pytorch/pytorch/blob/f1d69cb2f848d07292ad69d7801b8b0b73a42b5d/torch/__init__.py#L308-L309
I also checked a version I built on Feb 15; that one has `/usr/local/lib/python3.6/dist-packages/torch/{lib,bin}`.
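A quick diagnostic sketch, assuming only that the `lib` and `bin` directories are expected next to the installed package:
```python
import os

import torch

# Print whether the installed torch package ships the lib/ and bin/ directories
# discussed above.
pkg_dir = os.path.dirname(torch.__file__)
for sub in ("lib", "bin"):
    path = os.path.join(pkg_dir, sub)
    print(path, "exists" if os.path.isdir(path) else "MISSING")
```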
## Environment
- PyTorch Version (e.g., 1.0): master
- OS (e.g., Linux): Ubuntu 18.04
- How you installed PyTorch (`conda`, `pip`, source): source
- Build command you used (if compiling from source): cmake+ninja+gcc-8
- Python version: 3.6.9
- CUDA/cuDNN version: 10.2
|
module: build,triaged,module: regression
|
low
|
Critical
|
589,532,110 |
terminal
|
Add support for touchscreen selection
|
# Description of the new feature/enhancement
<!--
A clear and concise description of what the problem is that the new feature would solve.
Describe why and how a user would use this new functionality (if applicable).
-->
I'm a Surface Pro user and sometimes need to use CLI tools without bothering to pick up the keyboard (although it's a mess to type with the on-screen keyboard), but it turns out that Windows Terminal is not optimized for touchscreens at all.
# Proposed technical implementation details (optional)
<!--
A clear and concise description of what you want to happen.
-->
As far as I can tell, the following need to be implemented:
1. Automatically pop up the on-screen keyboard on touch and resize the window to fit it.
2. Long press to select/copy/paste.
3. Zoom with finger gestures (https://github.com/microsoft/terminal/issues/3149)
|
Area-Input,Area-TerminalControl,Product-Terminal,Issue-Task
|
medium
|
Critical
|
589,551,409 |
flutter
|
[multicast_dns] SocketException: Failed to create datagram socket "The requested address is not valid in its context"
|
Hi
I decided to try multicast_dns 0.2.2 to search for mqtt clients.
I've copied the provided example into a dart project.
```
import 'package:multicast_dns/multicast_dns.dart';
Future<void> main() async {
// Parse the command line arguments.
const name = '_mqtt._tcp.local';
final MDnsClient client = MDnsClient();
// Start the client with default options.
await client.start();
// Get the PTR record for the service.
await for (PtrResourceRecord ptr in client
.lookup<PtrResourceRecord>(ResourceRecordQuery.serverPointer(name))) {
// Use the domainName from the PTR record to get the SRV record,
// which will have the port and local hostname.
// Note that duplicate messages may come through, especially if any
// other mDNS queries are running elsewhere on the machine.
await for (SrvResourceRecord srv in client.lookup<SrvResourceRecord>(
ResourceRecordQuery.service(ptr.domainName))) {
// Domain name will be something like "[email protected]._dartobservatory._tcp.local"
final String bundleId =
ptr.domainName; //.substring(0, ptr.domainName.indexOf('@'));
print('Dart observatory instance found at '
'${srv.target}:${srv.port} for "$bundleId".');
}
}
client.stop();
print('Done.');
}
```
Running this from VSCode on Windows 10 gives the following error.
```
Dart Socket ERROR: c:\b\s\w\ir\cache\builder\sdk\runtime\bin\socket_win.cc:181: `reusePort` not supported for Windows.Dart Socket ERROR: c:\b\s\w\ir\cache\builder\sdk\runtime\bin\socket_win.cc:181: `reusePort` not supported for Windows.Dart Socket ERROR: c:\b\s\w\ir\cache\builder\sdk\runtime\bin\socket_win.cc:181: `reusePort` not supported for Windows.Dart Socket ERROR: c:\b\s\w\ir\cache\builder\sdk\runtime\bin\socket_win.cc:181: `reusePort` not supported for Windows.Unhandled exception:
SocketException: Failed to create datagram socket (OS Error: Den begärda adressen är inte giltig i sin kontext.
, errno = 10049), address = , port = 5353
#0 _NativeSocket.bindDatagram (dart:io-patch/socket_patch.dart:668:7)
<asynchronous suspension>
#1 _RawDatagramSocket.bind (dart:io-patch/socket_patch.dart:1964:26)
#2 RawDatagramSocket.bind (dart:io-patch/socket_patch.dart:1922:31)
#3 MDnsClient.start (package:multicast_dns/multicast_dns.dart:127:46)
<asynchronous suspension>
#4 main (file:///C:/experimentalProjects/flutter_mdns/mdns/bin/main.dart:9:16)
#5 _startIsolate.<anonymous closure> (dart:isolate-patch/isolate_patch.dart:307:19)
#6 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:174:12)
```
Please advise.
|
c: crash,package,p: multicast_dns,team-ecosystem,P3,triaged-ecosystem
|
low
|
Critical
|
589,552,850 |
pytorch
|
Increased memory usage in repetitive torch.jit.trace calls
|
## 🐛 Bug
Tracing a model multiple times seems to increase the memory usage.
Reported by [alepack](https://discuss.pytorch.org/u/alepack) in [this post](https://discuss.pytorch.org/t/possible-memory-leak-in-torch-jit-trace/74455).
## To Reproduce
```python
#Utilities
import os
import psutil
#JIT trace test
import torch
import torchvision.models as model_zoo
with torch.no_grad():
    #Create a simple resnet
    model = model_zoo.resnet18()
    model.eval
    #Create sample input
    sample_input = torch.randn(size=[2, 3, 480, 640], requires_grad=False)
    #Get process id
    process = psutil.Process(os.getpid())
    #Repeat tracing
    for i in range(0, 1000):
        tr_model = torch.jit.trace(model, sample_input, check_trace=False)
        print("Iter: {} = {}".format(i, process.memory_full_info()))
# output:
Iter: 0 = pfullmem(rss=415825920, vms=8240693248, shared=123785216, text=2215936, lib=0, data=6408441856, dirty=0, uss=419287040, pss=420155392, swap=0)
Iter: 1 = pfullmem(rss=456716288, vms=8292564992, shared=125976576, text=2215936, lib=0, data=6460313600, dirty=0, uss=461049856, pss=461918208, swap=0)
Iter: 2 = pfullmem(rss=493195264, vms=8331886592, shared=125976576, text=2215936, lib=0, data=6499635200, dirty=0, uss=495702016, pss=496570368, swap=0)
Iter: 3 = pfullmem(rss=547762176, vms=8391135232, shared=125976576, text=2215936, lib=0, data=6558883840, dirty=0, uss=550371328, pss=551239680, swap=0)
Iter: 4 = pfullmem(rss=577015808, vms=8420626432, shared=125976576, text=2215936, lib=0, data=6588375040, dirty=0, uss=579932160, pss=580800512, swap=0)
Iter: 5 = pfullmem(rss=630874112, vms=8469778432, shared=125976576, text=2215936, lib=0, data=6637527040, dirty=0, uss=629145600, pss=630013952, swap=0)
Iter: 6 = pfullmem(rss=659718144, vms=8499269632, shared=125976576, text=2215936, lib=0, data=6667018240, dirty=0, uss=662802432, pss=663670784, swap=0)
Iter: 7 = pfullmem(rss=698855424, vms=8538853376, shared=125976576, text=2215936, lib=0, data=6706601984, dirty=0, uss=702468096, pss=703336448, swap=0)
Iter: 8 = pfullmem(rss=728879104, vms=8568344576, shared=125976576, text=2215936, lib=0, data=6736093184, dirty=0, uss=732028928, pss=732897280, swap=0)
Iter: 9 = pfullmem(rss=773472256, vms=8617496576, shared=125976576, text=2215936, lib=0, data=6785245184, dirty=0, uss=776867840, pss=777736192, swap=0)
Iter: 10 = pfullmem(rss=802549760, vms=8646987776, shared=125976576, text=2215936, lib=0, data=6814736384, dirty=0, uss=806092800, pss=806961152, swap=0)
```
## Environment
alepack's setup based on the forum post:
* PyTorch `1.4.0` installed via conda
I verified it with `1.5.0a0+5b3492df18`.
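As an extra data point, a small diagnostic sketch that forces Python garbage collection between traces, to separate reclaimable Python garbage from native allocations:
```python
import gc
import os

import psutil
import torch
import torchvision.models as model_zoo

# Same loop as the repro above, but with an explicit del + gc.collect() between
# traces; if RSS still grows, the growth is not ordinary Python garbage.
model = model_zoo.resnet18().eval()
sample_input = torch.randn(2, 3, 480, 640)
process = psutil.Process(os.getpid())

with torch.no_grad():
    for i in range(10):
        tr_model = torch.jit.trace(model, sample_input, check_trace=False)
        del tr_model
        gc.collect()
        print(f"Iter {i}: rss={process.memory_info().rss}")
```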
cc @ezyang @gchanan @zou3519 @suo
|
high priority,oncall: jit,triaged
|
medium
|
Critical
|
589,553,850 |
rust
|
Should `TryFrom` get mentioned in AsRef/AsMut?
|
Both `AsMut` as well as `AsRef`'s documentation contain the following note:
> **Note: This trait must not fail.** If the conversion can fail, use a dedicated method which returns an `Option<T>` or a `Result<T, E>`.
This wording was added in 58d2c7909f9 and 6cda8e4eaac back in 2016/01. It was also very similar in `From` and `Into` up to 71bdeb022a9, where the explicit mention of dedicated methods was replaced by a link to the `Try*` variants.
Since `TryFrom` is now stabilized, one could go ahead and write `AsRef` variants via `try_from`:
```rust
struct Example {
dont_panic: bool,
}
struct Panic;
impl TryFrom<&Example> for &u8 {
type Error = Panic;
fn try_from(e: &Example) -> Result<Self, Self::Error> {
if e.dont_panic {
Ok(&42)
} else {
Err(Panic)
}
}
}
// full example: https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=8446972919a4ab31222800395accb434
```
However, I'm not sure whether that breaks the original spirit of `AsRef` and `AsMut`, being *cheap* conversions and all. I'm also not sure whether it is intended to use `TryFrom` for references.
If possible failing (and costly?) *reference* conversions are indeed a use case for `TryFrom`, should this alternative to an `Option<T>` or `Result<T,E>` get added to `AsRef`'s documentation? Or is this a misuse case and reference conversion should get mentioned on `TryFrom`'s documentation as a non-goal?
(Note that I don't have a use case for a `TryAsRef` or similar; I just read a lot of Rust's documentation lately and came across the symmetry between `From` and `TryFrom` and the missing counterpart in `AsRef`)
|
C-enhancement,T-libs-api
|
low
|
Critical
|
589,559,997 |
react
|
Devtools: Allow editing context
|
React version: 16.13 and `0.0.0-experimental-aae83a4b9`
## Steps To Reproduce
1. Go to https://codesandbox.io/s/xenodochial-field-rfdjz
2. Try editing value of `MessageListContext.Provider`
Link to code example: https://codesandbox.io/s/xenodochial-field-rfdjz
## The current behavior
Context from `createContext` can't be edited in the current devtools (provider, consumer, hooks)
## The expected behavior
Context value should be editable. I already proposed an implementation for [Provider](https://github.com/facebook/react/pull/18255) and [Consumer](https://github.com/facebook/react/pull/18257).
|
Type: Feature Request,Component: Developer Tools
|
low
|
Minor
|
589,561,455 |
godot
|
Dialog window title bars are visible on Windows (OS)
|
**Godot version:** 307b1b3a5
**OS/device including version:** windows10
**Issue description:**
The title bar is visible in dialog windows on Windows (OS). Also, dragging them around makes the window lag.




|
enhancement,discussion,topic:editor,topic:porting,usability
|
low
|
Major
|
589,568,591 |
godot
|
GDScript: Strange const scoping with classes
|
**Godot version:** 3.2.1
**OS/device including version:** `Ubuntu 18.04.3 LTS (bionic) - Linux 4.15.0-91-generic #92-Ubuntu SMP Fri Feb 28 11:09:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux`
**Issue description:** `const` scoping seems to be very restrictive, with child classes being unable to use constants from their parent or from the top level of the script in constant expressions. This results in having to use `var` instead (as in #20354), or copy consts/enums to each child (maintenance nightmare).
**Steps to reproduce:** See gdscript samples below.
**Minimal reproduction project:**
```
# Singleton named Note
extends Node
enum {NOTE_TAP, NOTE_HOLD}
enum Named {NOTE_TAP, NOTE_HOLD}
const NOTE_TAP1 := 0
class NoteBase:
    ...
class NoteTap extends NoteBase:
    var type = NOTE_TAP # legal, seems like the fix for #20354 was targeted at this case
    const type1 = NOTE_TAP # illegal: "Expected a constant expression."
    const type2 = Named.NOTE_TAP # illegal
    const type3 = Note.NOTE_TAP # illegal: "invalid index 'NOTE_TAP' in constant expression"
    const type4 = Note.Named.NOTE_TAP # illegal
    const type5 = NOTE_TAP1 # illegal
    const type6 = Note.NOTE_TAP1 # illegal
```
```
class NoteBase:
    enum {NOTE_TAP, NOTE_HOLD}
    const NOTE_TAP1 := 0
    ...
class NoteTap extends NoteBase:
    const type = NOTE_TAP # still illegal
    const type1 = NoteBase.NOTE_TAP # still illegal
    const type2 = NOTE_TAP1 # also illegal
```
```
class NoteBase:
    ...
class NoteTap extends NoteBase:
    enum {NOTE_TAP, NOTE_HOLD}
    const type = NOTE_TAP # legal!
```
|
bug,topic:gdscript
|
low
|
Minor
|
589,577,681 |
TypeScript
|
T | (() => T)
|
**TypeScript Version:** 3.9.0 (Nightly)
**Search Terms:** `T | (() => T)`, `T & Function`
**Code**
```ts
type Initializer<T> = T | (() => T)
// type Initializer<T> = T extends any ? (T | (() => T)) : never
function correct<T>(arg: Initializer<T>) {
return typeof arg === 'function' ? arg() : arg // error
}
```
Line 2 provides a workaround for this.
More info on [stackoverflow](https://stackoverflow.com/questions/60898079/typescript-type-t-or-function-t-usage).
**Expected behavior:** no errors
**Actual behavior:** `This expression is not callable.
Not all constituents of type '(() => T) | (T & Function)' are callable.
Type 'T & Function' has no call signatures.`
**Playground Link:** [here](https://www.typescriptlang.org/play/?ts=3.9.0-dev.20200327&ssl=1&ssc=1&pln=6&pc=2#code/C4TwDgpgBAkgdgS2AghgGwQLwgJwDwAqAfFALxQFQA+UAFLQJRkkEMBQA9B1KJLIsnRZchEuUoQAHsAhwAJgGcoKOCCgB+OpRr0mpFgyYAuKHAgA3XGzYAzAK5wAxsgD2cKI5c4cEZ6NooOADmJvBIqBjY+MRMAN5sUIlQPsB2OO68EC42ysFkpOQA5PZOrnCFGrlBjFAmgUFQXFC4OF5sAL5AA).
**Related Issues:** none
|
Bug
|
high
|
Critical
|
589,584,169 |
godot
|
Unable to inherit from ProjectSettings
|
**Godot version:**
3.2.1
**OS/device including version:**
Arch Linux
**Issue description:**
If I try to extend `ProjectSettings`, I get a parsing error:
```
modules/gdscript/gdscript_compiler.cpp:1864 - Condition "native.is_null()" is true. Returned: ERR_BUG
```
Extending `ProjectSettings` could be very useful for adding helper methods such as `add_gravity()` instead of using `get_setting("physics/3d/default_gravity")` every time.
**Steps to reproduce:**
1. Create a script.
2. Extend it from `ProjectSettings`.
3. Add any method or variable.
4. Try to use it in code.
**Minimal reproduction project:**
[PrjectSettings.zip](https://github.com/godotengine/godot/files/4397236/PrjectSettings.zip)
In this project I created a `MySettingsNode` class that inherits from Node and works as expected. I also created a `MySettingsProjectSettings` class that has the same code but will not work (just try to instantiate it from code to see the error).
**Temporary workaround:**
For now it is possible to create a class with static functions and just access all settings via `ProjectSettings`, because it is a singleton. This works, but the error looks like a bug.
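A minimal sketch of that workaround (names are illustrative):
```gdscript
# A plain class with static helpers that read from the ProjectSettings
# singleton, instead of trying to extend it.
class_name MySettings

static func default_gravity():
    return ProjectSettings.get_setting("physics/3d/default_gravity")
```
Call sites then use `MySettings.default_gravity()`.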
|
discussion,topic:core
|
low
|
Critical
|
589,599,444 |
scrcpy
|
scrcpy on Asus TF101 running Android 6.0.1
|
Hi there!
I am trying to get my old Asus EeePC (TF101) to work with scrcpy on Ubuntu 18.04. The tablet is currently running [KatKiss Marshmallow 6.0.1 ROM](https://forum.xda-developers.com/eee-pad-transformer/development/rom-t3318496) with USB Debugging on:
$ adb devices
List of devices attached
037001494120f517 device
When I run scrcpy I get:
$ scrcpy
INFO: scrcpy 1.12 <https://github.com/Genymobile/scrcpy>
/usr/local/share/scrcpy/scrcpy-server: 1 file pushed. 1.1 MB/s (26196 bytes in 0.022s)
INFO: Initial texture: 1280x800
but no window opens. My scrcpy installation works perfectly with my Samsung S9, so I guess the problem is elsewhere. The output of "adb logcat -d" (attached [logcat.txt](https://github.com/Genymobile/scrcpy/files/4397849/logcat.txt)) has some weird output related to the Nvidia.h264.encoder:
03-28 15:17:40.792 3848 3865 I OMXClient: Using client-side OMX mux.
03-28 15:17:40.797 111 1270 E OMXNodeInstance: setParameter(2b:Nvidia.h264.encoder, OMX.google.android.index.storeMetaDataInBuffers(0x7fc00007): Output:1 en=0 type=1) ERROR: BadParameter(0x80001005)
03-28 15:17:40.797 3848 3865 E ACodec : [OMX.Nvidia.h264.encoder] storeMetaDataInBuffers (output) failed w/ err -2147483648
03-28 15:17:40.798 3848 3865 W ACodec : do not know color format 0x7f000789 = 2130708361
03-28 15:17:40.799 3848 3865 I ACodec : setupVideoEncoder succeeded
03-28 15:17:40.799 111 1270 E OMXNodeInstance: setParameter(2b:Nvidia.h264.encoder, OMX.google.android.index.enableAndroidNativeBuffers(0x7fc00004): Output:1 en=0) ERROR: BadParameter(0x80001005)
03-28 15:17:40.800 3848 3865 W ACodec : do not know color format 0x7f000789 = 2130708361
03-28 15:17:40.801 111 2380 E OMXNodeInstance: getParameter(2b:Nvidia.h264.encoder, ParamConsumerUsageBits(0x6f800004)) ERROR: NotImplemented(0x80001006)
Any ideas on how to circumvent this?
Thanks in advance,
Marcelo
|
device
|
low
|
Critical
|
589,599,872 |
pytorch
|
Performance bug with convolutions with weights and inputs of similar spatial size
|
This is a performance bug in `conv2d`, when spatial dimensions of the weight and input are the same (or similar). Such convolutions get 5 to 10 times slower when transitioning from image sizes of 30 to 32. There may be an issue with the selection of the CUDNN convolution algorithm.
One possible route to a solution is to allow manual selection of the convolution algorithm.
## To Reproduce
Full minimal working example:
```
import torch as t
t.backends.cudnn.benchmark=True
import torch.nn.functional as F
from timeit import default_timer as timer
device = "cuda"
Cin = 128
Cout = 256
B = 1000
for W in range(20, 40):
    input = t.randn(B, Cin, W, W, device=device)
    weight = t.randn(Cout, Cin, W, W, device=device)
    output = F.conv2d(input, weight, padding=1)
    t.cuda.synchronize(device=device)
    start_time = timer()
    output = F.conv2d(input, weight, padding=1)
    t.cuda.synchronize(device=device)
    print(f"W: {W}, W**2: {W**2:4d}, time: {1000*(timer()-start_time)}")
```
## Expected behavior
The time required should scale linearly with `W**2`. Instead, there is around a factor-of-5 jump between W=30 and W=32:
```
W: 20, W**2: 400, time: 14.050104655325413
W: 21, W**2: 441, time: 13.91313411295414
W: 22, W**2: 484, time: 13.899956829845905
W: 23, W**2: 529, time: 13.991509564220905
W: 24, W**2: 576, time: 14.103865250945091
W: 25, W**2: 625, time: 14.099820517003536
W: 26, W**2: 676, time: 14.254673384130001
W: 27, W**2: 729, time: 14.209367334842682
W: 28, W**2: 784, time: 14.332166872918606
W: 29, W**2: 841, time: 14.398249797523022
W: 30, W**2: 900, time: 14.406020753085613
W: 31, W**2: 961, time: 53.21667902171612
W: 32, W**2: 1024, time: 86.8472745642066
W: 33, W**2: 1089, time: 84.1836640611291
W: 34, W**2: 1156, time: 113.77124954015017
W: 35, W**2: 1225, time: 84.53960344195366
W: 36, W**2: 1296, time: 125.90546812862158
W: 37, W**2: 1369, time: 84.77605786174536
W: 38, W**2: 1444, time: 141.37933682650328
W: 39, W**2: 1521, time: 149.50636308640242
```
The performance with `benchmark=True` commented out is even more exciting:
```
W: 20, W**2: 400, time: 17.404177226126194
W: 21, W**2: 441, time: 17.446410842239857
W: 22, W**2: 484, time: 17.630738206207752
W: 23, W**2: 529, time: 17.792406491935253
W: 24, W**2: 576, time: 16.371975652873516
W: 25, W**2: 625, time: 13.557913713157177
W: 26, W**2: 676, time: 13.596143573522568
W: 27, W**2: 729, time: 13.673634268343449
W: 28, W**2: 784, time: 13.687632977962494
W: 29, W**2: 841, time: 13.748884201049805
W: 30, W**2: 900, time: 13.820515014231205
W: 31, W**2: 961, time: 53.05525008589029
W: 32, W**2: 1024, time: 118.49690321832895
W: 33, W**2: 1089, time: 84.05695110559464
W: 34, W**2: 1156, time: 84.2417012900114
W: 35, W**2: 1225, time: 84.5922939479351
W: 36, W**2: 1296, time: 84.85969807952642
W: 37, W**2: 1369, time: 85.06182674318552
W: 38, W**2: 1444, time: 84.93016473948956
W: 39, W**2: 1521, time: 85.2658236399293
```
This is pretty painful, because it means my algorithm is very fast on MNIST, but disproportionately slower on any other dataset, such as CIFAR10.
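In case the wall-clock numbers above are questioned, an alternative timing sketch using CUDA events; shapes match the repro above:
```python
import torch
import torch.nn.functional as F

torch.backends.cudnn.benchmark = True
device = "cuda"

for W in (30, 32):
    x = torch.randn(1000, 128, W, W, device=device)
    weight = torch.randn(256, 128, W, W, device=device)
    F.conv2d(x, weight, padding=1)  # warm-up, lets cuDNN pick its algorithm
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    F.conv2d(x, weight, padding=1)
    end.record()
    torch.cuda.synchronize(device=device)
    print(f"W: {W}, time: {start.elapsed_time(end):.2f} ms")
```
If the jump between W=30 and W=32 shows up here too, wall-clock measurement noise can be ruled out.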
## Environment
```
PyTorch version: 1.4.0
Is debug build: No
CUDA used to build PyTorch: 10.1
OS: Ubuntu 18.04.3 LTS
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
CMake version: version 3.10.2
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.1.243
GPU models and configuration:
GPU 0: NVS 510
GPU 1: TITAN V
GPU 2: TITAN V
Nvidia driver version: 418.74
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
Versions of relevant libraries:
[pip] numpy==1.17.2
[pip] numpydoc==0.9.1
[pip] torch==1.4.0
[pip] torchvision==0.5.0
[conda] blas 1.0 mkl
[conda] mkl 2019.4 243
[conda] mkl-service 2.3.0 py37he904b0f_0
[conda] mkl_fft 1.0.14 py37ha843d7b_0
[conda] mkl_random 1.1.0 py37hd6b4f25_0
[conda] pytorch 1.3.0 py3.7_cuda10.1.243_cudnn7.6.3_0 pytorch
[conda] torchvision 0.4.1 py37_cu101 pytorch
```
cc @csarofeen @ptrblck @VitalyFedyunin @ngimel
|
module: performance,module: cudnn,module: convolution,triaged
|
low
|
Critical
|
589,599,940 |
rust
|
assert_eq(x, y) is not the same as assert_eq(y, x) because of type inference
|
<!--
Thank you for filing a bug report! ๐ Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
I tried this code ([playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=8c06ef8af87645f50695bdba65f8a723)):
```rust
fn main() {
assert_eq!(0_i64, S::zero());
assert_eq!(S::zero(), 0_i64);
}
struct S;
trait Zeroed<T> {
fn zero() -> T;
}
impl Zeroed<i16> for S {
fn zero() -> i16 {
0
}
}
impl Zeroed<i64> for S {
fn zero() -> i64 {
0
}
}
```
I expected to see this happen: The program compiles and runs successfully.
Instead, this happened:
```
error[E0282]: type annotations needed
--> src/main.rs:3:16
|
3 | assert_eq!(S::zero(), 0_i64);
| ^^^^^^^ cannot infer type for type parameter `T` declared on the trait `Zeroed`
```
Note that the `assert_eq!(0_i64, S::zero());` works as expected.
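For completeness, a sketch of one workaround: fully qualified syntax picks the impl explicitly, so the argument order no longer matters (this is just the example above with one extra assertion):
```rust
struct S;

trait Zeroed<T> {
    fn zero() -> T;
}

impl Zeroed<i16> for S {
    fn zero() -> i16 {
        0
    }
}

impl Zeroed<i64> for S {
    fn zero() -> i64 {
        0
    }
}

fn main() {
    assert_eq!(0_i64, S::zero()); // compiles: T is inferred from the left side
    assert_eq!(<S as Zeroed<i64>>::zero(), 0_i64); // compiles with the impl spelled out
}
```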
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.42.0 (b8cedc004 2020-03-09)
binary: rustc
commit-hash: b8cedc00407a4c56a3bda1ed605c6fc166655447
commit-date: 2020-03-09
host: x86_64-unknown-linux-gnu
release: 1.42.0
LLVM version: 9.0
```
The same error is present on nightly:
```
rustc 1.44.0-nightly (2fbb07525 2020-03-26)
binary: rustc
commit-hash: 2fbb07525e2f07a815e780a4268b11916248b5a9
commit-date: 2020-03-26
host: x86_64-unknown-linux-gnu
release: 1.44.0-nightly
LLVM version: 9.0
```
|
T-lang,A-inference,C-bug
|
low
|
Critical
|
589,621,146 |
pytorch
|
Could not find any similar ops to "foo..." in libtorch
|
## 🐛 Bug
I want to make a `jit.trace` model in Python and forward tensors through it in C++.
In C++, I got this error:
```
terminate called after throwing an instance of 'torch::jit::script::ErrorReport'
what():
Unknown builtin op: tomo::morph_pool.
Could not find any similar ops to tomo::morph_pool. This op may not exist or may not be currently supported in TorchScript.
....
Serialized File "code/__torch__/torch/nn/modules/module/___torch_mangle_162.py", line 9
argument_1: Tensor) -> Tensor:
_0 = self.morph
x_min_morph = ops.tomo.morph_pool(argument_1, _0)
~~~~~~~~~~~~~~~~~~~ <--- HERE
x, _1 = torch.min(x_min_morph, 2, False)
x_max_morph = ops.tomo.morph_pool(x, _0)
[1] 57316 abort (core dumped) ./forward
```
But in Python, it works as I expected:

## To Reproduce
I'm using a custom library, similar to `torchvision`.
I registered the functions as in the code below.
```
static auto registry =
torch::RegisterOperators()
.op("tomo::morph_pool", &morph_pool)
.op("tomo::roi_pool_3d", &roi_pool_3d)
.op("tomo::roi_align_3d", &roi_align_3d)
.op("tomo::nms_3d", &nms_3d);
```
And in `setup.py`:
```
def get_extensions():
    this_dir = os.path.dirname(os.path.abspath(__file__))
    extensions_dir = os.path.join(this_dir, "csrc")
    main_file = [os.path.join(extensions_dir, "vision.cpp")]
    source_cpu = glob.glob(os.path.join(extensions_dir, "**", "*.cpp"))
    source_cuda = glob.glob(os.path.join(extensions_dir, "**", "*.cu"))
    sources = main_file + source_cpu
    extension = CppExtension
    extra_compile_args = {"cxx": []}
    define_macros = []
    if (torch.cuda.is_available() and CUDA_HOME is not None) or os.getenv("FORCE_CUDA", "0") == "1":
        extension = CUDAExtension
        sources += source_cuda
        define_macros += [("WITH_CUDA", None)]
        extra_compile_args["nvcc"] = [
            "-DCUDA_HAS_FP16=1",
            "-D__CUDA_NO_HALF_OPERATORS__",
            "-D__CUDA_NO_HALF_CONVERSIONS__",
            "-D__CUDA_NO_HALF2_OPERATORS__",
        ]
        # It's better if pytorch can do this by default ..
        CC = os.environ.get("CC", None)
        if CC is not None:
            extra_compile_args["nvcc"].append("-ccbin={}".format(CC))
    include_dirs = [extensions_dir]
    ext_modules = [
        extension(
            "tomo._C",
            sources,
            include_dirs=include_dirs,
            define_macros=define_macros,
            extra_compile_args=extra_compile_args,
        )
    ]
    return ext_modules
cmdclass={"build_ext": BuildExtension.with_options(no_python_abi_suffix=True)}
```
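For context, a hedged sketch of the usual fix on the C++ side (library name and paths are made up): the shared library whose `RegisterOperators` block is shown above has to be loaded into the C++ process before the serialized module is loaded, otherwise `ops.tomo.*` cannot be resolved:
```cpp
#include <dlfcn.h>
#include <iostream>
#include <torch/script.h>

int main() {
  // Load the shared library whose static RegisterOperators block (shown above)
  // registers tomo::morph_pool etc. Name and path are illustrative.
  void* handle = dlopen("libtomo.so", RTLD_NOW | RTLD_GLOBAL);
  if (handle == nullptr) {
    std::cerr << "dlopen failed: " << dlerror() << std::endl;
    return 1;
  }

  // Only now load the traced module; its ops.tomo.* calls should resolve.
  auto module = torch::jit::load("traced_model.pt");
  std::cout << "module loaded" << std::endl;
  return 0;
}
```
Linking the C++ binary directly against the ops library (taking care that the linker does not drop it as unused) serves the same purpose.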
## Environment
I tried two versions of libtorch: `1.4.0` and the latest.
```
$ cat libtorch/build-version
1.6.0.dev20200328+cu101
```
```
Collecting environment information...
PyTorch version: 1.4.0
Is debug build: No
CUDA used to build PyTorch: 10.1
OS: Ubuntu 16.04.6 LTS
GCC version: (Ubuntu 8.3.0-16ubuntu3~16.04) 8.3.0
CMake version: version 3.11.0
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.1.243
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB
Nvidia driver version: 418.87.01
cuDNN version: /usr/local/cuda-10.1/targets/x86_64-linux/lib/libcudnn.so.7.6.4
Versions of relevant libraries:
[pip3] numpy==1.17.4
[pip3] pytorch-sphinx-theme==0.0.24
[pip3] torch==1.4.0
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.5.0
[conda] Could not collect
```
## Additional context
<!-- Add any other context about the problem here. -->
cc @suo
|
triage review,oncall: jit,triaged
|
low
|
Critical
|
589,634,897 |
pytorch
|
Random error reports
|
## ❓ Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
The same code sometimes runs okay and sometimes reports weird errors.
*** Error in `python': free(): corrupted unsorted chunks: 0x0000559e5b12ec10 ***
======= Backtrace: =========
/lib/x86_64-linux-gnu/libc.so.6(+0x777e5)[0x7fc66db577e5]
/lib/x86_64-linux-gnu/libc.so.6(+0x8037a)[0x7fc66db6037a]
/lib/x86_64-linux-gnu/libc.so.6(+0x83409)[0x7fc66db63409]
/lib/x86_64-linux-gnu/libc.so.6(realloc+0x179)[0x7fc66db64839]
/usr/lib/x86_64-linux-gnu/libcuda.so.1(+0x2266ab)[0x7fc5db03e6ab]
/usr/lib/x86_64-linux-gnu/libcuda.so.1(+0x30fab1)[0x7fc5db127ab1]
/usr/lib/x86_64-linux-gnu/libcuda.so.1(+0x30fd0c)[0x7fc5db127d0c]
/usr/lib/x86_64-linux-gnu/libcuda.so.1(+0x31b221)[0x7fc5db133221]
/usr/lib/x86_64-linux-gnu/libcuda.so.1(+0x31d308)[0x7fc5db135308]
/usr/lib/x86_64-linux-gnu/libcuda.so.1(+0x31d8b8)[0x7fc5db1358b8]
/usr/lib/x86_64-linux-gnu/libcuda.so.1(+0x10007d)[0x7fc5daf1807d]
/usr/lib/x86_64-linux-gnu/libcuda.so.1(+0x100ef2)[0x7fc5daf18ef2]
/usr/lib/x86_64-linux-gnu/libcuda.so.1(cuMemAlloc_v2+0x60)[0x7fc5db086b40]
/opt/miniconda/lib/python3.6/site-packages/torch/lib/libcudart-1b201d85.so.10.1(+0x36cb3)[0x7fc65ff20cb3]
/opt/miniconda/lib/python3.6/site-packages/torch/lib/libcudart-1b201d85.so.10.1(+0x1531b)[0x7fc65feff31b]
/opt/miniconda/lib/python3.6/site-packages/torch/lib/libcudart-1b201d85.so.10.1(cudaMalloc+0x6c)[0x7fc65ff3182c]
/opt/miniconda/lib/python3.6/site-packages/torch/lib/libc10_cuda.so(+0x1c69e)[0x7fc66275869e]
/opt/miniconda/lib/python3.6/site-packages/torch/lib/libc10_cuda.so(+0x1de6e)[0x7fc662759e6e]
/opt/miniconda/lib/python3.6/site-packages/torch/lib/libtorch.so(THCStorage_resize+0xa3)[0x7fc5e43e2713]
/opt/miniconda/lib/python3.6/site-packages/torch/lib/libtorch.so(_ZN2at6native18empty_strided_cudaEN3c108ArrayRefIlEES3_RKNS1_13TensorOptionsE+0x616)[0x7fc5e5893b96]
/opt/miniconda/lib/python3.6/site-packages/torch/lib/libtorch.so(+0x4224a8a)[0x7fc5e42daa8a]
/opt/miniconda/lib/python3.6/site-packages/torch/lib/libtorch.so(_ZN2at14TensorIterator16allocate_outputsEv+0x407)[0x7fc5e1d0a057]
/opt/miniconda/lib/python3.6/site-packages/torch/lib/libtorch.so(_ZN2at14TensorIterator5buildEv+0x54)[0x7fc5e1d0f084]
/opt/miniconda/lib/python3.6/site-packages/torch/lib/libtorch.so(_ZN2at14TensorIterator9binary_opERNS_6TensorERKS1_S4_b+0x2a7)[0x7fc5e1d0f867]
/opt/miniconda/lib/python3.6/site-packages/torch/lib/libtorch.so(_ZN2at6native3addERKNS_6TensorES3_N3c106ScalarE+0x4e)[0x7fc5e1a8f6ee]
/opt/miniconda/lib/python3.6/site-packages/torch/lib/libtorch.so(+0x4235d35)[0x7fc5e42ebd35]
/opt/miniconda/lib/python3.6/site-packages/torch/lib/libtorch.so(+0x1e67d3b)[0x7fc5e1f1dd3b]
/opt/miniconda/lib/python3.6/site-packages/torch/lib/libtorch.so(+0x3bb84b8)[0x7fc5e3c6e4b8]
/opt/miniconda/lib/python3.6/site-packages/torch/lib/libtorch.so(+0x1e67d3b)[0x7fc5e1f1dd3b]
/opt/miniconda/lib/python3.6/site-packages/torch/lib/libtorch_python.so(+0x1fcca5)[0x7fc6671a1ca5]
/opt/miniconda/lib/python3.6/site-packages/torch/lib/libtorch_python.so(_ZNK2at6Tensor3addERKS0_N3c106ScalarE+0xee)[0x7fc6671a28be]
/opt/miniconda/lib/python3.6/site-packages/torch/lib/libtorch_python.so(+0x1c49ff)[0x7fc6671699ff]
python(_PyCFunction_FastCallDict+0x144)[0x559ac46231e4]
python(+0x19ee7c)[0x559ac46b6e7c]
python(_PyEval_EvalFrameDefault+0x30a)[0x559ac46d816a]
python(+0x197c86)[0x559ac46afc86]
python(+0x198cd1)[0x559ac46b0cd1]
python(+0x19ef55)[0x559ac46b6f55]
python(_PyEval_EvalFrameDefault+0x30a)[0x559ac46d816a]
python(_PyFunction_FastCallDict+0x11b)[0x559ac46b110b]
python(_PyObject_FastCallDict+0x26f)[0x559ac46235af]
python(_PyObject_Call_Prepend+0x63)[0x559ac4627fe3]
python(PyObject_Call+0x3e)[0x559ac4622ffe]
python(_PyEval_EvalFrameDefault+0x1a34)[0x559ac46d9894]
python(+0x197c86)[0x559ac46afc86]
python(_PyFunction_FastCallDict+0x1bf)[0x559ac46b11af]
python(_PyObject_FastCallDict+0x26f)[0x559ac46235af]
python(_PyObject_Call_Prepend+0x63)[0x559ac4627fe3]
python(PyObject_Call+0x3e)[0x559ac4622ffe]
python(+0x1694f7)[0x559ac46814f7]
python(_PyObject_FastCallDict+0x8b)[0x559ac46233cb]
python(+0x19efce)[0x559ac46b6fce]
python(_PyEval_EvalFrameDefault+0x30a)[0x559ac46d816a]
python(_PyFunction_FastCallDict+0x11b)[0x559ac46b110b]
python(_PyObject_FastCallDict+0x26f)[0x559ac46235af]
python(_PyObject_Call_Prepend+0x63)[0x559ac4627fe3]
python(PyObject_Call+0x3e)[0x559ac4622ffe]
python(_PyEval_EvalFrameDefault+0x1a34)[0x559ac46d9894]
python(+0x197c86)[0x559ac46afc86]
python(_PyFunction_FastCallDict+0x1bf)[0x559ac46b11af]
python(_PyObject_FastCallDict+0x26f)[0x559ac46235af]
python(_PyObject_Call_Prepend+0x63)[0x559ac4627fe3]
python(PyObject_Call+0x3e)[0x559ac4622ffe]
======= Memory map: ========
200000000-200200000 ---p 00000000 00:00 0
200200000-200400000 rw-s 00000000 00:06 455 /dev/nvidiactl
200400000-202400000 rw-s 00000000 00:06 455 /dev/nvidiactl
202400000-205400000 rw-s 00000000 00:06 455 /dev/nvidiactl
205400000-206000000 ---p 00000000 00:00 0
206000000-206200000 rw-s 00000000 00:06 455 /dev/nvidiactl
206200000-206400000 rw-s 00000000 00:06 455 /dev/nvidiactl
206400000-206600000 rw-s 206400000 00:06 461 /dev/nvidia-uvm
206600000-206800000 rw-s 00000000 00:06 455 /dev/nvidiactl
206800000-206a00000 ---p 00000000 00:00 0
206a00000-206c00000 rw-s 00000000 00:06 455 /dev/nvidiactl
206c00000-207000000 ---p 00000000 00:00 0
207000000-207200000 rw-s 00000000 00:06 455 /dev/nvidiactl
207200000-209200000 rw-s 00000000 00:06 455 /dev/nvidiactl
209200000-20c200000 rw-s 00000000 00:06 455 /dev/nvidiactl
20c200000-20ce00000 ---p 00000000 00:00 0
20ce00000-20d000000 rw-s 00000000 00:06 455 /dev/nvidiactl
20d000000-20d200000 rw-s 00000000 00:06 455 /dev/nvidiactl
20d200000-20d400000 rw-s 20d200000 00:06 461 /dev/nvidia-uvm
20d400000-20d600000 rw-s 00000000 00:06 455 /dev/nvidiactl
20d600000-20d800000 ---p 00000000 00:00 0
20d800000-20da00000 rw-s 00000000 00:06 455 /dev/nvidiactl
20da00000-20de00000 ---p 00000000 00:00 0
20de00000-20e000000 rw-s 00000000 00:06 455 /dev/nvidiactl
20e000000-210000000 rw-s 00000000 00:06 455 /dev/nvidiactl
210000000-213000000 rw-s 00000000 00:06 455 /dev/nvidiactl
213000000-213c00000 ---p 00000000 00:00 0
213c00000-213e00000 rw-s 00000000 00:06 455 /dev/nvidiactl
213e00000-214000000 rw-s 00000000 00:06 455 /dev/nvidiactl
214000000-214200000 rw-s 214000000 00:06 461 /dev/nvidia-uvm
214200000-214400000 rw-s 00000000 00:06 455 /dev/nvidiactl
214400000-214600000 ---p 00000000 00:00 0
214600000-214800000 rw-s 00000000 00:06 455 /dev/nvidiactl
214800000-214c00000 ---p 00000000 00:00 0
214c00000-214e00000 rw-s 00000000 00:06 455 /dev/nvidiactl
214e00000-216e00000 rw-s 00000000 00:06 455 /dev/nvidiactl
216e00000-219e00000 rw-s 00000000 00:06 455 /dev/nvidiactl
219e00000-21aa00000 ---p 00000000 00:00 0
21aa00000-21ac00000 rw-s 00000000 00:06 455 /dev/nvidiactl
21ac00000-21ae00000 rw-s 00000000 00:06 455 /dev/nvidiactl
21ae00000-21b000000 rw-s 21ae00000 00:06 461 /dev/nvidia-uvm
21b000000-21b200000 rw-s 00000000 00:06 455 /dev/nvidiactl
21b200000-21b400000 ---p 00000000 00:00 0
21b400000-21b600000 rw-s 00000000 00:06 455 /dev/nvidiactl
21b600000-600200000 ---p 00000000 00:00 0
10000000000-10410000000 ---p 00000000 00:00 0
559ac4518000-559ac47d6000 r-xp 00000000 08:11 35389532 /opt/miniconda/bin/python3.6
559ac49d5000-559ac49d8000 r--p 002bd000 08:11 35389532 /opt/miniconda/bin/python3.6
559ac49d8000-559ac4a3b000 rw-p 002c0000 08:11 35389532 /opt/miniconda/bin/python3.6
559ac4a3b000-559ac4a6c000 rw-p 00000000 00:00 0
559ac6596000-559e78b36000 rw-p 00000000 00:00 0 [heap]
7fc152000000-7fc21ea00000 ---p 00000000 00:00 0
7fc21ea00000-7fc21ec00000 rw-s 00000000 00:05 123097083 /dev/zero (deleted)
7fc21ec00000-7fc230a00000 ---p 00000000 00:00 0
7fc230a00000-7fc230c00000 rw-s 00000000 00:05 123085684 /dev/zero (deleted)
7fc230c00000-7fc242800000 ---p 00000000 00:00 0
7fc242800000-7fc242a00000 rw-s 00000000 00:05 123151569 /dev/zero (deleted)
7fc242a00000-7fc244000000 ---p 00000000 00:00 0
7fc244000000-7fc244046000 rw-p 00000000 00:00 0
7fc244046000-7fc248000000 ---p 00000000 00:00 0
7fc24a000000-7fc25a000000 ---p 00000000 00:00 0
7fc25a000000-7fc25fb8e000 rw-s 00000000 00:05 123091958 /dev/zero (deleted)
7fc25fb8e000-7fc260000000 ---p 00000000 00:00 0
7fc260000000-7fc265b8e000 rw-s 00000000 00:05 123091957 /dev/zero (deleted)
7fc265b8e000-7fc266000000 ---p 00000000 00:00 0
7fc266000000-7fc26bb8e000 rw-s 00000000 00:05 123091954 /dev/zero (deleted)
7fc26bb8e000-7fc26c000000 ---p 00000000 00:00 0
7fc26c000000-7fc271b8e000 rw-s 00000000 00:05 123091953 /dev/zero (deleted)
7fc271b8e000-7fc272000000 ---p 00000000 00:00 0
7fc272000000-7fc277b8e000 rw-s 00000000 00:05 123091950 /dev/zero (deleted)
7fc277b8e000-7fc278000000 ---p 00000000 00:00 0
7fc278000000-7fc27db8e000 rw-s 00000000 00:05 123091949 /dev/zero (deleted)
7fc27db8e000-7fc27e000000 ---p 00000000 00:00 0
7fc27e000000-7fc283b8e000 rw-s 00000000 00:05 123091946 /dev/zero (deleted)
7fc283b8e000-7fc284000000 ---p 00000000 00:00 0
7fc284000000-7fc289b8e000 rw-s 00000000 00:05 123091945 /dev/zero (deleted)
7fc289b8e000-7fc28a000000 ---p 00000000 00:00 0
7fc28a000000-7fc28fb8e000 rw-s 00000000 00:05 123091942 /dev/zero (deleted)
7fc28fb8e000-7fc290000000 ---p 00000000 00:00 0
7fc290000000-7fc295b8e000 rw-s 00000000 00:05 123091941 /dev/zero (deleted)
7fc295b8e000-7fc296000000 ---p 00000000 00:00 0
7fc296000000-7fc29bb8e000 rw-s 00000000 00:05 123091938 /dev/zero (deleted)
7fc29bb8e000-7fc29c000000 ---p 00000000 00:00 0
7fc29c000000-7fc2a1b8e000 rw-s 00000000 00:05 123091937 /dev/zero (deleted)
7fc2a1b8e000-7fc2a2000000 ---p 00000000 00:00 0
7fc2a2000000-7fc2a7b8e000 rw-s 00000000 00:05 123097080 /dev/zero (deleted)
7fc2a7b8e000-7fc2a8000000 ---p 00000000 00:00 0
7fc2a8000000-7fc2a8049000 rw-p 00000000 00:00 0
7fc2a8049000-7fc2ac000000 ---p 00000000 00:00 0
7fc2ac000000-7fc2ac047000 rw-p 00000000 00:00 0
7fc2ac047000-7fc2b0000000 ---p 00000000 00:00 0
7fc2b0000000-7fc2b004c000 rw-p 00000000 00:00 0
7fc2b004c000-7fc2b4000000 ---p 00000000 00:00 0
7fc2b6000000-7fc2bbb8e000 rw-s 00000000 00:05 123097075 /dev/zero (deleted)
7fc2bbb8e000-7fc2bbe00000 ---p 00000000 00:00 0
7fc2bbe00000-7fc2bc000000 rw-s 00000000 00:05 123153229 /dev/zero (deleted)
7fc2be000000-7fc2c3b8e000 rw-s 00000000 00:05 123097079 /dev/zero (deleted)
7fc2c3b8e000-7fc2c4000000 ---p 00000000 00:00 0
7fc2c6000000-7fc2ca000000 ---p 00000000 00:00 0
7fc2ca000000-7fc2cfb8e000 rw-s 00000000 00:05 123097076 /dev/zero (deleted)
7fc2cfb8e000-7fc2d0000000 ---p 00000000 00:00 0
7fc2d0000000-7fc2d5b8e000 rw-s 00000000 00:05 123097072 /dev/zero (deleted)
7fc2d5b8e000-7fc2d5e00000 ---p 00000000 00:00 0
7fc2d5e00000-7fc2d6000000 rw-s 00000000 00:05 123153225 /dev/zero (deleted)
7fc2d6000000-7fc2dbb8e000 rw-s 00000000 00:05 123097071 /dev/zero (deleted)
7fc2dbb8e000-7fc2dbc00000 ---p 00000000 00:00 0
7fc2dbc00000-7fc2dbe00000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc2dbe00000-7fc2dc000000 rw-s 00000000 00:05 123153226 /dev/zero (deleted)
7fc2dc000000-7fc2dc021000 rw-p 00000000 00:00 0
7fc2dc021000-7fc2e0000000 ---p 00000000 00:00 0
7fc2e2000000-7fc2e7b8e000 rw-s 00000000 00:05 123097068 /dev/zero (deleted)
7fc2e7b8e000-7fc2e8000000 ---p 00000000 00:00 0
7fc2e8000000-7fc2edb8e000 rw-s 00000000 00:05 123097067 /dev/zero (deleted)
7fc2edb8e000-7fc2edc00000 ---p 00000000 00:00 0
7fc2edc00000-7fc2ede00000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc2ede00000-7fc2ee000000 ---p 00000000 00:00 0
7fc2ee000000-7fc2f3b8e000 rw-s 00000000 00:05 123097064 /dev/zero (deleted)
7fc2f3b8e000-7fc2f3e00000 ---p 00000000 00:00 0
7fc2f3e00000-7fc2f4000000 rw-s 00000000 00:05 123153228 /dev/zero (deleted)
7fc2f4000000-7fc2f9b8e000 rw-s 00000000 00:05 123097063 /dev/zero (deleted)
7fc2f9b8e000-7fc2f9c00000 ---p 00000000 00:00 0
7fc2f9c00000-7fc2f9ed6000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc2f9ed6000-7fc30a000000 ---p 00000000 00:00 0
7fc30a000000-7fc30fb8e000 rw-s 00000000 00:05 123097056 /dev/zero (deleted)
7fc30fb8e000-7fc310000000 ---p 00000000 00:00 0
7fc310000000-7fc315b8e000 rw-s 00000000 00:05 123097055 /dev/zero (deleted)
7fc315b8e000-7fc316000000 ---p 00000000 00:00 0
7fc316000000-7fc31bb8e000 rw-s 00000000 00:05 123097052 /dev/zero (deleted)
7fc31bb8e000-7fc31c000000 ---p 00000000 00:00 0
7fc31c000000-7fc321b8e000 rw-s 00000000 00:05 123097051 /dev/zero (deleted)
7fc321b8e000-7fc322000000 ---p 00000000 00:00 0
7fc322000000-7fc327b8e000 rw-s 00000000 00:05 123097048 /dev/zero (deleted)
7fc327b8e000-7fc328000000 ---p 00000000 00:00 0
7fc328000000-7fc32db8e000 rw-s 00000000 00:05 123097047 /dev/zero (deleted)
7fc32db8e000-7fc32e000000 ---p 00000000 00:00 0
7fc32e000000-7fc333b8e000 rw-s 00000000 00:05 123097044 /dev/zero (deleted)
7fc333b8e000-7fc334000000 ---p 00000000 00:00 0
7fc334000000-7fc339b8e000 rw-s 00000000 00:05 123097043 /dev/zero (deleted)
7fc339b8e000-7fc33a000000 ---p 00000000 00:00 0
7fc33a000000-7fc33fb8e000 rw-s 00000000 00:05 123097040 /dev/zero (deleted)
7fc33fb8e000-7fc340000000 ---p 00000000 00:00 0
7fc340000000-7fc345b8e000 rw-s 00000000 00:05 123097039 /dev/zero (deleted)
7fc345b8e000-7fc346000000 ---p 00000000 00:00 0
7fc346000000-7fc34bb8e000 rw-s 00000000 00:05 123097036 /dev/zero (deleted)
7fc34bb8e000-7fc34c000000 ---p 00000000 00:00 0
7fc34c000000-7fc351b8e000 rw-s 00000000 00:05 123097035 /dev/zero (deleted)
7fc351b8e000-7fc352000000 ---p 00000000 00:00 0
7fc352000000-7fc357b8e000 rw-s 00000000 00:05 123097032 /dev/zero (deleted)
7fc357b8e000-7fc358000000 ---p 00000000 00:00 0
7fc358000000-7fc35db8e000 rw-s 00000000 00:05 123097031 /dev/zero (deleted)
7fc35db8e000-7fc35e000000 ---p 00000000 00:00 0
7fc35e000000-7fc363b8e000 rw-s 00000000 00:05 123097028 /dev/zero (deleted)
7fc363b8e000-7fc364000000 ---p 00000000 00:00 0
7fc364000000-7fc369b8e000 rw-s 00000000 00:05 123097027 /dev/zero (deleted)
7fc369b8e000-7fc36a000000 ---p 00000000 00:00 0
7fc36a000000-7fc36fb8e000 rw-s 00000000 00:05 123097024 /dev/zero (deleted)
7fc36fb8e000-7fc370000000 ---p 00000000 00:00 0
7fc370000000-7fc375b8e000 rw-s 00000000 00:05 123097023 /dev/zero (deleted)
7fc375b8e000-7fc376000000 ---p 00000000 00:00 0
7fc378000000-7fc37db8e000 rw-s 00000000 00:05 123097060 /dev/zero (deleted)
7fc37db8e000-7fc37e000000 ---p 00000000 00:00 0
7fc37e000000-7fc383b8e000 rw-s 00000000 00:05 123097059 /dev/zero (deleted)
7fc383b8e000-7fc384000000 ---p 00000000 00:00 0
7fc384000000-7fc389b8e000 rw-s 00000000 00:05 123097016 /dev/zero (deleted)
7fc389b8e000-7fc389e00000 ---p 00000000 00:00 0
7fc389e00000-7fc38a000000 rw-s 00000000 00:05 123153222 /dev/zero (deleted)
7fc38c000000-7fc391b8e000 rw-s 00000000 00:05 123097020 /dev/zero (deleted)
7fc391b8e000-7fc392000000 ---p 00000000 00:00 0
7fc392000000-7fc397b8e000 rw-s 00000000 00:05 123097019 /dev/zero (deleted)
7fc397b8e000-7fc398000000 ---p 00000000 00:00 0
7fc398000000-7fc39db8e000 rw-s 00000000 00:05 123097015 /dev/zero (deleted)
7fc39db8e000-7fc3a2000000 ---p 00000000 00:00 0
7fc3a2000000-7fc3a7b8e000 rw-s 00000000 00:05 123097012 /dev/zero (deleted)
7fc3a7b8e000-7fc3a7c00000 ---p 00000000 00:00 0
7fc3a7c00000-7fc3a7e00000 rw-s 00000000 00:05 123153219 /dev/zero (deleted)
7fc3a7e00000-7fc3a8000000 ---p 00000000 00:00 0
7fc3a8000000-7fc3adb8e000 rw-s 00000000 00:05 123097011 /dev/zero (deleted)
7fc3adb8e000-7fc3ade00000 ---p 00000000 00:00 0
7fc3ade00000-7fc3ae000000 rw-s 00000000 00:05 123153218 /dev/zero (deleted)
7fc3ae000000-7fc3b3b8e000 rw-s 00000000 00:05 123097008 /dev/zero (deleted)
7fc3b3b8e000-7fc3b3e00000 ---p 00000000 00:00 0
7fc3b3e00000-7fc3b4000000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc3b4000000-7fc3b9b8e000 rw-s 00000000 00:05 123097007 /dev/zero (deleted)
7fc3b9b8e000-7fc3b9c00000 ---p 00000000 00:00 0
7fc3b9c00000-7fc3b9e00000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc3b9e00000-7fc3ba000000 ---p 00000000 00:00 0
7fc3ba000000-7fc3bfb8e000 rw-s 00000000 00:05 123097004 /dev/zero (deleted)
7fc3bfb8e000-7fc3bfc00000 ---p 00000000 00:00 0
7fc3bfc00000-7fc3bfe00000 rw-s 00000000 00:05 123153221 /dev/zero (deleted)
7fc3bfe00000-7fc3c0000000 ---p 00000000 00:00 0
7fc3c0000000-7fc3c5b8e000 rw-s 00000000 00:05 123097003 /dev/zero (deleted)
7fc3c5b8e000-7fc3c6000000 ---p 00000000 00:00 0
7fc3c6000000-7fc3cbb8e000 rw-s 00000000 00:05 123097000 /dev/zero (deleted)
7fc3cbb8e000-7fc3cbc00000 ---p 00000000 00:00 0
7fc3cbc00000-7fc3cbed6000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc3cbed6000-7fc3cc000000 ---p 00000000 00:00 0
7fc3cc000000-7fc3d1b8e000 rw-s 00000000 00:05 123096999 /dev/zero (deleted)
7fc3d1b8e000-7fc3e2000000 ---p 00000000 00:00 0
7fc3e2000000-7fc3e7b8e000 rw-s 00000000 00:05 123096992 /dev/zero (deleted)
7fc3e7b8e000-7fc3e8000000 ---p 00000000 00:00 0
7fc3e8000000-7fc3edb8e000 rw-s 00000000 00:05 123096991 /dev/zero (deleted)
7fc3edb8e000-7fc3ee000000 ---p 00000000 00:00 0
7fc3ee000000-7fc3f3b8e000 rw-s 00000000 00:05 123096988 /dev/zero (deleted)
7fc3f3b8e000-7fc3f4000000 ---p 00000000 00:00 0
7fc3f4000000-7fc3f9b8e000 rw-s 00000000 00:05 123096987 /dev/zero (deleted)
7fc3f9b8e000-7fc3fa000000 ---p 00000000 00:00 0
7fc3fa000000-7fc3ffb8e000 rw-s 00000000 00:05 123096984 /dev/zero (deleted)
7fc3ffb8e000-7fc400000000 ---p 00000000 00:00 0
7fc400000000-7fc405b8e000 rw-s 00000000 00:05 123096983 /dev/zero (deleted)
7fc405b8e000-7fc406000000 ---p 00000000 00:00 0
7fc406000000-7fc40bb8e000 rw-s 00000000 00:05 123096980 /dev/zero (deleted)
7fc40bb8e000-7fc40c000000 ---p 00000000 00:00 0
7fc40c000000-7fc411b8e000 rw-s 00000000 00:05 123096979 /dev/zero (deleted)
7fc411b8e000-7fc412000000 ---p 00000000 00:00 0
7fc412000000-7fc417b8e000 rw-s 00000000 00:05 123096976 /dev/zero (deleted)
7fc417b8e000-7fc418000000 ---p 00000000 00:00 0
7fc418000000-7fc41db8e000 rw-s 00000000 00:05 123096975 /dev/zero (deleted)
7fc41db8e000-7fc41e000000 ---p 00000000 00:00 0
7fc41e000000-7fc423b8e000 rw-s 00000000 00:05 123096972 /dev/zero (deleted)
7fc423b8e000-7fc424000000 ---p 00000000 00:00 0
7fc424000000-7fc429b8e000 rw-s 00000000 00:05 123096971 /dev/zero (deleted)
7fc429b8e000-7fc42a000000 ---p 00000000 00:00 0
7fc42a000000-7fc42fb8e000 rw-s 00000000 00:05 123096968 /dev/zero (deleted)
7fc42fb8e000-7fc430000000 ---p 00000000 00:00 0
7fc430000000-7fc435b8e000 rw-s 00000000 00:05 123096967 /dev/zero (deleted)
7fc435b8e000-7fc436000000 ---p 00000000 00:00 0
7fc436000000-7fc43bb8e000 rw-s 00000000 00:05 123096964 /dev/zero (deleted)
7fc43bb8e000-7fc43c000000 ---p 00000000 00:00 0
7fc43c000000-7fc441b8e000 rw-s 00000000 00:05 123096963 /dev/zero (deleted)
7fc441b8e000-7fc442000000 ---p 00000000 00:00 0
7fc442000000-7fc447b8e000 rw-s 00000000 00:05 123096960 /dev/zero (deleted)
7fc447b8e000-7fc448000000 ---p 00000000 00:00 0
7fc448000000-7fc44db8e000 rw-s 00000000 00:05 123096959 /dev/zero (deleted)
7fc44db8e000-7fc44e000000 ---p 00000000 00:00 0
7fc44e000000-7fc453b8e000 rw-s 00000000 00:05 123085673 /dev/zero (deleted)
7fc453b8e000-7fc454000000 ---p 00000000 00:00 0
7fc454000000-7fc459b8e000 rw-s 00000000 00:05 123085672 /dev/zero (deleted)
7fc459b8e000-7fc45a000000 ---p 00000000 00:00 0
7fc45a000000-7fc45fb8e000 rw-s 00000000 00:05 123085669 /dev/zero (deleted)
7fc45fb8e000-7fc460000000 ---p 00000000 00:00 0
7fc462000000-7fc467b8e000 rw-s 00000000 00:05 123096996 /dev/zero (deleted)
7fc467b8e000-7fc468000000 ---p 00000000 00:00 0
7fc468000000-7fc46db8e000 rw-s 00000000 00:05 123096995 /dev/zero (deleted)
7fc46db8e000-7fc46e000000 ---p 00000000 00:00 0
7fc46e000000-7fc473b8e000 rw-s 00000000 00:05 123085664 /dev/zero (deleted)
7fc473b8e000-7fc473c00000 ---p 00000000 00:00 0
7fc473c00000-7fc473e00000 rw-s 00000000 00:05 123091928 /dev/zero (deleted)
7fc473e00000-7fc474000000 ---p 00000000 00:00 0
7fc474000000-7fc479b8e000 rw-s 00000000 00:05 123085665 /dev/zero (deleted)
7fc479b8e000-7fc479c00000 ---p 00000000 00:00 0
7fc479c00000-7fc47f78e000 rw-s 00000000 00:05 123085668 /dev/zero (deleted)
7fc47f78e000-7fc482000000 ---p 00000000 00:00 0
7fc482000000-7fc487b8e000 rw-s 00000000 00:05 123085661 /dev/zero (deleted)
7fc487b8e000-7fc488000000 ---p 00000000 00:00 0
7fc488000000-7fc48db8e000 rw-s 00000000 00:05 123085660 /dev/zero (deleted)
7fc48db8e000-7fc48de00000 ---p 00000000 00:00 0
7fc48de00000-7fc48e000000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc48e000000-7fc493b8e000 rw-s 00000000 00:05 123085657 /dev/zero (deleted)
7fc493b8e000-7fc493e00000 ---p 00000000 00:00 0
7fc493e00000-7fc494000000 rw-s 00000000 00:05 123091924 /dev/zero (deleted)
7fc494000000-7fc494021000 rw-p 00000000 00:00 0
7fc494021000-7fc498000000 ---p 00000000 00:00 0
7fc498000000-7fc498021000 rw-p 00000000 00:00 0
7fc498021000-7fc49c000000 ---p 00000000 00:00 0
7fc49c000000-7fc4a1b8e000 rw-s 00000000 00:05 123085656 /dev/zero (deleted)
7fc4a1b8e000-7fc4a1e00000 ---p 00000000 00:00 0
7fc4a1e00000-7fc4a2000000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc4a4000000-7fc4a8000000 ---p 00000000 00:00 0
7fc4aa000000-7fc4ab800000 ---p 00000000 00:00 0
7fc4ab800000-7fc4aba00000 rw-s 00000000 00:05 123134134 /dev/zero (deleted)
7fc4aba00000-7fc4b3c00000 ---p 00000000 00:00 0
7fc4b3c00000-7fc4b3e00000 rw-s 00000000 00:05 123091925 /dev/zero (deleted)
7fc4b3e00000-7fc4b4000000 ---p 00000000 00:00 0
7fc4b4000000-7fc4b9b8e000 rw-s 00000000 00:05 123085653 /dev/zero (deleted)
7fc4b9b8e000-7fc4b9c00000 ---p 00000000 00:00 0
7fc4b9c00000-7fc4b9ed6000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc4b9ed6000-7fc4ba000000 ---p 00000000 00:00 0
7fc4ba000000-7fc4bfb8e000 rw-s 00000000 00:05 123085652 /dev/zero (deleted)
7fc4bfb8e000-7fc4c0000000 ---p 00000000 00:00 0
7fc4c0000000-7fc4c5b8e000 rw-s 00000000 00:05 123085649 /dev/zero (deleted)
7fc4c5b8e000-7fc4c6000000 ---p 00000000 00:00 0
7fc4c6000000-7fc4cbb8e000 rw-s 00000000 00:05 123085648 /dev/zero (deleted)
7fc4cbb8e000-7fc4cc000000 ---p 00000000 00:00 0
7fc4cc000000-7fc4d1b8e000 rw-s 00000000 00:05 123085643 /dev/zero (deleted)
7fc4d1b8e000-7fc4d2000000 ---p 00000000 00:00 0
7fc4d2000000-7fc4d7b8e000 rw-s 00000000 00:05 123085642 /dev/zero (deleted)
7fc4d7b8e000-7fc4d8000000 ---p 00000000 00:00 0
7fc4d8000000-7fc4ddb8e000 rw-s 00000000 00:05 123085639 /dev/zero (deleted)
7fc4ddb8e000-7fc4de000000 ---p 00000000 00:00 0
7fc4de000000-7fc4e3b8e000 rw-s 00000000 00:05 123085638 /dev/zero (deleted)
7fc4e3b8e000-7fc4e4000000 ---p 00000000 00:00 0
7fc4e4000000-7fc4e9b8e000 rw-s 00000000 00:05 123085633 /dev/zero (deleted)
7fc4e9b8e000-7fc4ea000000 ---p 00000000 00:00 0
7fc4ea000000-7fc4efb8e000 rw-s 00000000 00:05 123085632 /dev/zero (deleted)
7fc4efb8e000-7fc4f0000000 ---p 00000000 00:00 0
7fc4f0000000-7fc4f5b8e000 rw-s 00000000 00:05 123085627 /dev/zero (deleted)
7fc4f5b8e000-7fc4f6000000 ---p 00000000 00:00 0
7fc4f6000000-7fc4fbb8e000 rw-s 00000000 00:05 123085626 /dev/zero (deleted)
7fc4fbb8e000-7fc4fc000000 ---p 00000000 00:00 0
7fc4fc000000-7fc501b8e000 rw-s 00000000 00:05 123085622 /dev/zero (deleted)
7fc501b8e000-7fc502000000 ---p 00000000 00:00 0
7fc503172000-7fc504371000 r-xp 00000000 08:11 35784996 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libnvrtc-5e8a26c9.so.10.1
7fc504371000-7fc504570000 ---p 011ff000 08:11 35784996 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libnvrtc-5e8a26c9.so.10.1
7fc504570000-7fc5047f3000 r--p 011fe000 08:11 35784996 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libnvrtc-5e8a26c9.so.10.1
7fc5047f3000-7fc50483a000 rw-p 01481000 08:11 35784996 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libnvrtc-5e8a26c9.so.10.1
7fc50483a000-7fc5048e2000 rw-p 00000000 00:00 0
7fc5048e2000-7fc5048e4000 rw-p 014c8000 08:11 35784996 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libnvrtc-5e8a26c9.so.10.1
7fc5067f9000-7fc5067fa000 ---p 00000000 00:00 0
7fc5067fa000-7fc506ffa000 rw-p 00000000 00:00 0
7fc506ffa000-7fc506ffb000 ---p 00000000 00:00 0
7fc506ffb000-7fc5077fb000 rw-p 00000000 00:00 0
7fc5077fb000-7fc5077fc000 ---p 00000000 00:00 0
7fc5077fc000-7fc507ffc000 rw-p 00000000 00:00 0
7fc507ffc000-7fc507ffd000 ---p 00000000 00:00 0
7fc507ffd000-7fc5087fd000 rw-p 00000000 00:00 0
7fc5087fd000-7fc5087fe000 ---p 00000000 00:00 0
7fc5087fe000-7fc508ffe000 rw-p 00000000 00:00 0
7fc508ffe000-7fc508fff000 ---p 00000000 00:00 0
7fc508fff000-7fc5097ff000 rw-p 00000000 00:00 0
7fc5097ff000-7fc509800000 ---p 00000000 00:00 0
7fc509800000-7fc50a000000 rw-p 00000000 00:00 0
7fc50a000000-7fc50fc00000 ---p 00000000 00:00 0
7fc50fc00000-7fc50fe00000 rw-s 00000000 00:05 123091927 /dev/zero (deleted)
7fc50fe00000-7fc510000000 ---p 00000000 00:00 0
7fc510000000-7fc510021000 rw-p 00000000 00:00 0
7fc510021000-7fc514000000 ---p 00000000 00:00 0
7fc514000000-7fc514021000 rw-p 00000000 00:00 0
7fc514021000-7fc518000000 ---p 00000000 00:00 0
7fc518000000-7fc518021000 rw-p 00000000 00:00 0
7fc518021000-7fc51c000000 ---p 00000000 00:00 0
7fc51c000000-7fc51c021000 rw-p 00000000 00:00 0
7fc51c021000-7fc520000000 ---p 00000000 00:00 0
7fc520000000-7fc520021000 rw-p 00000000 00:00 0
7fc520021000-7fc524000000 ---p 00000000 00:00 0
7fc524000000-7fc524021000 rw-p 00000000 00:00 0
7fc524021000-7fc528000000 ---p 00000000 00:00 0
7fc528000000-7fc528021000 rw-p 00000000 00:00 0
7fc528021000-7fc52c000000 ---p 00000000 00:00 0
7fc52c7f9000-7fc52c7fa000 ---p 00000000 00:00 0
7fc52c7fa000-7fc52cffa000 rw-p 00000000 00:00 0
7fc52cffa000-7fc52cffb000 ---p 00000000 00:00 0
7fc52cffb000-7fc52d7fb000 rw-p 00000000 00:00 0
7fc52d7fb000-7fc52d7fc000 ---p 00000000 00:00 0
7fc52d7fc000-7fc52dffc000 rw-p 00000000 00:00 0
7fc52dffc000-7fc52dffd000 ---p 00000000 00:00 0
7fc52dffd000-7fc52e7fd000 rw-p 00000000 00:00 0
7fc52e7fd000-7fc52e7fe000 ---p 00000000 00:00 0
7fc52e7fe000-7fc52effe000 rw-p 00000000 00:00 0
7fc52effe000-7fc52efff000 ---p 00000000 00:00 0
7fc52efff000-7fc52f7ff000 rw-p 00000000 00:00 0
7fc52f7ff000-7fc52f800000 ---p 00000000 00:00 0
7fc52f800000-7fc530000000 rw-p 00000000 00:00 0
7fc530000000-7fc530021000 rw-p 00000000 00:00 0
7fc530021000-7fc534000000 ---p 00000000 00:00 0
7fc534000000-7fc534021000 rw-p 00000000 00:00 0
7fc534021000-7fc538000000 ---p 00000000 00:00 0
7fc538000000-7fc538021000 rw-p 00000000 00:00 0
7fc538021000-7fc53c000000 ---p 00000000 00:00 0
7fc53c000000-7fc53c021000 rw-p 00000000 00:00 0
7fc53c021000-7fc540000000 ---p 00000000 00:00 0
7fc540000000-7fc540021000 rw-p 00000000 00:00 0
7fc540021000-7fc544000000 ---p 00000000 00:00 0
7fc544000000-7fc544021000 rw-p 00000000 00:00 0
7fc544021000-7fc548000000 ---p 00000000 00:00 0
7fc548000000-7fc548021000 rw-p 00000000 00:00 0
7fc548021000-7fc54c000000 ---p 00000000 00:00 0
7fc54c5f9000-7fc54c7f9000 rw-s 00000000 00:05 123153227 /dev/zero (deleted)
7fc54c7f9000-7fc54c7fa000 ---p 00000000 00:00 0
7fc54c7fa000-7fc54cffa000 rw-p 00000000 00:00 0
7fc54cffa000-7fc54cffb000 ---p 00000000 00:00 0
7fc54cffb000-7fc54d7fb000 rw-p 00000000 00:00 0
7fc54d7fb000-7fc54d7fc000 ---p 00000000 00:00 0
7fc54d7fc000-7fc54dffc000 rw-p 00000000 00:00 0
7fc54dffc000-7fc54dffd000 ---p 00000000 00:00 0
7fc54dffd000-7fc54e7fd000 rw-p 00000000 00:00 0
7fc54e7fd000-7fc54e7fe000 ---p 00000000 00:00 0
7fc54e7fe000-7fc54effe000 rw-p 00000000 00:00 0
7fc54effe000-7fc54efff000 ---p 00000000 00:00 0
7fc54efff000-7fc54f7ff000 rw-p 00000000 00:00 0
7fc54f7ff000-7fc54f800000 ---p 00000000 00:00 0
7fc54f800000-7fc550000000 rw-p 00000000 00:00 0
7fc550000000-7fc550021000 rw-p 00000000 00:00 0
7fc550021000-7fc554000000 ---p 00000000 00:00 0
7fc554000000-7fc554021000 rw-p 00000000 00:00 0
7fc554021000-7fc558000000 ---p 00000000 00:00 0
7fc558000000-7fc558021000 rw-p 00000000 00:00 0
7fc558021000-7fc55c000000 ---p 00000000 00:00 0
7fc55c000000-7fc55c021000 rw-p 00000000 00:00 0
7fc55c021000-7fc560000000 ---p 00000000 00:00 0
7fc560000000-7fc560021000 rw-p 00000000 00:00 0
7fc560021000-7fc564000000 ---p 00000000 00:00 0
7fc564000000-7fc564021000 rw-p 00000000 00:00 0
7fc564021000-7fc568000000 ---p 00000000 00:00 0
7fc568000000-7fc568021000 rw-p 00000000 00:00 0
7fc568021000-7fc56c000000 ---p 00000000 00:00 0
7fc56c1f4000-7fc56c3f4000 rw-s 00000000 00:05 123153220 /dev/zero (deleted)
7fc56c3f4000-7fc56c5f4000 rw-s 00000000 00:05 123091926 /dev/zero (deleted)
7fc56c5f4000-7fc56c5f5000 r-xp 00000000 08:11 35784991 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libcaffe2_nvrtc.so
7fc56c5f5000-7fc56c7f5000 ---p 00001000 08:11 35784991 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libcaffe2_nvrtc.so
7fc56c7f5000-7fc56c7f6000 r--p 00001000 08:11 35784991 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libcaffe2_nvrtc.so
7fc56c7f6000-7fc56c7f7000 rw-p 00002000 08:11 35784991 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libcaffe2_nvrtc.so
7fc56c7f7000-7fc56c7f9000 rw-p 00004000 08:11 35784991 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libcaffe2_nvrtc.so
7fc56c7f9000-7fc56c7fa000 ---p 00000000 00:00 0
7fc56c7fa000-7fc56cffa000 rw-p 00000000 00:00 0
7fc56cffa000-7fc56cffb000 ---p 00000000 00:00 0
7fc56cffb000-7fc56d7fb000 rw-p 00000000 00:00 0
7fc56d7fb000-7fc56d7fc000 ---p 00000000 00:00 0
7fc56d7fc000-7fc56dffc000 rw-p 00000000 00:00 0
7fc56dffc000-7fc56dffd000 ---p 00000000 00:00 0
7fc56dffd000-7fc56e7fd000 rw-p 00000000 00:00 0
7fc56e7fd000-7fc56e7fe000 ---p 00000000 00:00 0
7fc56e7fe000-7fc56effe000 rw-p 00000000 00:00 0
7fc56effe000-7fc56efff000 ---p 00000000 00:00 0
7fc56efff000-7fc56f7ff000 rw-p 00000000 00:00 0
7fc56f7ff000-7fc56f800000 ---p 00000000 00:00 0
7fc56f800000-7fc570000000 rw-p 00000000 00:00 0
7fc570000000-7fc570021000 rw-p 00000000 00:00 0
7fc570021000-7fc574000000 ---p 00000000 00:00 0
7fc574000000-7fc574021000 rw-p 00000000 00:00 0
7fc574021000-7fc578000000 ---p 00000000 00:00 0
7fc578000000-7fc578021000 rw-p 00000000 00:00 0
7fc578021000-7fc57c000000 ---p 00000000 00:00 0
7fc57c134000-7fc57c174000 rw-p 00000000 00:00 0
7fc57c174000-7fc57c175000 ---p 00000000 00:00 0
7fc57c175000-7fc57c975000 rw-p 00000000 00:00 0
7fc57c975000-7fc57c976000 ---p 00000000 00:00 0
7fc57c976000-7fc57d176000 rw-p 00000000 00:00 0
7fc57d176000-7fc57d177000 ---p 00000000 00:00 0
7fc57d177000-7fc57f7f7000 rw-p 00000000 00:00 0
7fc57f7f7000-7fc57f7f8000 ---p 00000000 00:00 0
7fc57f7f8000-7fc57fff8000 rw-p 00000000 00:00 0
7fc57fff8000-7fc57fff9000 ---p 00000000 00:00 0
7fc57fff9000-7fc5807f9000 rw-p 00000000 00:00 0
7fc5807f9000-7fc5807fa000 ---p 00000000 00:00 0
7fc5807fa000-7fc580ffa000 rw-p 00000000 00:00 0
7fc580ffa000-7fc580ffb000 ---p 00000000 00:00 0
7fc580ffb000-7fc5817fb000 rw-p 00000000 00:00 0
7fc5817fb000-7fc5817fc000 ---p 00000000 00:00 0
7fc5817fc000-7fc581ffc000 rw-p 00000000 00:00 0
7fc581ffc000-7fc581ffd000 ---p 00000000 00:00 0
7fc581ffd000-7fc5827fd000 rw-p 00000000 00:00 0
7fc5827fd000-7fc5827fe000 ---p 00000000 00:00 0
7fc5827fe000-7fc582ffe000 rw-p 00000000 00:00 0
7fc582ffe000-7fc582fff000 ---p 00000000 00:00 0
7fc582fff000-7fc5837ff000 rw-p 00000000 00:00 0
7fc5837ff000-7fc583800000 ---p 00000000 00:00 0
7fc583800000-7fc584000000 rw-p 00000000 00:00 0
7fc584000000-7fc5840eb000 rw-p 00000000 00:00 0
7fc5840eb000-7fc588000000 ---p 00000000 00:00 0
7fc58803d000-7fc5887fd000 rw-p 00000000 00:00 0
7fc5887fd000-7fc5887fe000 ---p 00000000 00:00 0
7fc5887fe000-7fc588ffe000 rw-p 00000000 00:00 0
7fc588ffe000-7fc588fff000 ---p 00000000 00:00 0
7fc588fff000-7fc5897ff000 rw-p 00000000 00:00 0
7fc5897ff000-7fc589800000 ---p 00000000 00:00 0
7fc589800000-7fc58a000000 rw-p 00000000 00:00 0
7fc58a000000-7fc59a000000 ---p 00000000 00:00 0
7fc59a00e000-7fc59d2e7000 rw-p 00000000 00:00 0
7fc59d308000-7fc59d4c8000 rw-p 00000000 00:00 0
7fc59d4c8000-7fc59d4c9000 ---p 00000000 00:00 0
7fc59d4c9000-7fc59dcc9000 rw-p 00000000 00:00 0
7fc59dcc9000-7fc59dcca000 ---p 00000000 00:00 0
7fc59dcca000-7fc59e4ca000 rw-p 00000000 00:00 0
7fc59e4ca000-7fc59e4cb000 ---p 00000000 00:00 0
7fc59e4cb000-7fc59eccb000 rw-p 00000000 00:00 0
7fc59eccb000-7fc59eccc000 ---p 00000000 00:00 0
7fc59eccc000-7fc59f4cc000 rw-p 00000000 00:00 0
7fc59f4cc000-7fc59f4cd000 ---p 00000000 00:00 0
7fc59f4cd000-7fc5a0b8d000 rw-p 00000000 00:00 0
7fc5a0bc1000-7fc5a0d41000 rw-p 00000000 00:00 0
7fc5a0d41000-7fc5a0d42000 ---p 00000000 00:00 0
7fc5a0d42000-7fc5a1542000 rw-p 00000000 00:00 0
7fc5a1542000-7fc5a1543000 ---p 00000000 00:00 0
7fc5a1543000-7fc5a1d43000 rw-p 00000000 00:00 0
7fc5a1d43000-7fc5a1d44000 ---p 00000000 00:00 0
7fc5a1d44000-7fc5a2544000 rw-p 00000000 00:00 0
7fc5a2544000-7fc5a2545000 ---p 00000000 00:00 0
7fc5a2545000-7fc5a2d45000 rw-p 00000000 00:00 0
7fc5a2d45000-7fc5a2d46000 ---p 00000000 00:00 0
7fc5a2d46000-7fc5a44c6000 rw-p 00000000 00:00 0
7fc5a44f8000-7fc5ac4bc000 rw-p 00000000 00:00 0
7fc5ac4c0000-7fc5ae000000 rw-p 00000000 00:00 0
7fc5ae000000-7fc5afe00000 ---p 00000000 00:00 0
7fc5afe00000-7fc5b598e000 rw-s 00000000 00:05 123085620 /dev/zero (deleted)
7fc5b598e000-7fc5baa00000 ---p 00000000 00:00 0
7fc5baa00000-7fc5bac00000 rw-s 00000000 00:05 123153149 /dev/zero (deleted)
7fc5bac00000-7fc5c0000000 ---p 00000000 00:00 0
7fc5c0000000-7fc5c0001000 rw-s 00000000 00:06 460 /dev/nvidia3
7fc5c0001000-7fc5c0002000 rw-s 00000000 00:06 460 /dev/nvidia3
7fc5c0002000-7fc5c0003000 rw-s 00000000 00:06 460 /dev/nvidia3
7fc5c0003000-7fc5c0004000 rw-s 00000000 00:06 460 /dev/nvidia3
7fc5c0004000-7fc5c0005000 rw-s 00000000 00:06 460 /dev/nvidia3
7fc5c0005000-7fc5c0006000 rw-s 00000000 00:06 460 /dev/nvidia3
7fc5c0006000-7fc5c0007000 rw-s 00000000 00:06 460 /dev/nvidia3
7fc5c0007000-7fc5c0008000 rw-s 00000000 00:06 460 /dev/nvidia3
7fc5c0008000-7fc5c0009000 rw-s 00000000 00:06 460 /dev/nvidia3
7fc5c0009000-7fc5c000a000 rw-s 00000000 00:06 460 /dev/nvidia3
7fc5c000a000-7fc5c000b000 rw-s 00000000 00:06 460 /dev/nvidia3
7fc5c000b000-7fc5c000c000 rw-s 00000000 00:06 460 /dev/nvidia3
7fc5c000c000-7fc5c000d000 rw-s 00000000 00:06 460 /dev/nvidia3
7fc5c000d000-7fc5c000e000 rw-s 00000000 00:06 460 /dev/nvidia3
7fc5c000e000-7fc5c000f000 rw-s 00000000 00:06 460 /dev/nvidia3
7fc5c000f000-7fc5c0010000 rw-s 00000000 00:06 460 /dev/nvidia3
7fc5c0010000-7fc5c0011000 rw-s 00000000 00:06 456 /dev/nvidia0
7fc5c0011000-7fc5c0012000 rw-s 00000000 00:06 456 /dev/nvidia0
7fc5c0012000-7fc5c0013000 rw-s 00000000 00:06 456 /dev/nvidia0
7fc5c0013000-7fc5c0014000 rw-s 00000000 00:06 456 /dev/nvidia0
7fc5c0014000-7fc5c0015000 rw-s 00000000 00:06 456 /dev/nvidia0
7fc5c0015000-7fc5c0016000 rw-s 00000000 00:06 456 /dev/nvidia0
7fc5c0016000-7fc5c0017000 rw-s 00000000 00:06 456 /dev/nvidia0
7fc5c0017000-7fc5c0018000 rw-s 00000000 00:06 456 /dev/nvidia0
7fc5c0018000-7fc5c0019000 rw-s 00000000 00:06 456 /dev/nvidia0
7fc5c0019000-7fc5c001a000 rw-s 00000000 00:06 456 /dev/nvidia0
7fc5c001a000-7fc5c001b000 rw-s 00000000 00:06 456 /dev/nvidia0
7fc5c001b000-7fc5c001c000 rw-s 00000000 00:06 456 /dev/nvidia0
7fc5c001c000-7fc5c001d000 rw-s 00000000 00:06 456 /dev/nvidia0
7fc5c001d000-7fc5c001e000 rw-s 00000000 00:06 456 /dev/nvidia0
7fc5c001e000-7fc5c001f000 rw-s 00000000 00:06 456 /dev/nvidia0
7fc5c001f000-7fc5c0020000 rw-s 00000000 00:06 456 /dev/nvidia0
7fc5c0020000-7fc5c0021000 rw-s 00000000 00:06 458 /dev/nvidia1
7fc5c0021000-7fc5c0022000 rw-s 00000000 00:06 458 /dev/nvidia1
7fc5c0022000-7fc5c0023000 rw-s 00000000 00:06 458 /dev/nvidia1
7fc5c0023000-7fc5c0024000 rw-s 00000000 00:06 458 /dev/nvidia1
7fc5c0024000-7fc5c0025000 rw-s 00000000 00:06 458 /dev/nvidia1
7fc5c0025000-7fc5c0026000 rw-s 00000000 00:06 458 /dev/nvidia1
7fc5c0026000-7fc5c0027000 rw-s 00000000 00:06 458 /dev/nvidia1
7fc5c0027000-7fc5c0028000 rw-s 00000000 00:06 458 /dev/nvidia1
7fc5c0028000-7fc5c0029000 rw-s 00000000 00:06 458 /dev/nvidia1
7fc5c0029000-7fc5c002a000 rw-s 00000000 00:06 458 /dev/nvidia1
7fc5c002a000-7fc5c002b000 rw-s 00000000 00:06 458 /dev/nvidia1
7fc5c002b000-7fc5c002c000 rw-s 00000000 00:06 458 /dev/nvidia1
7fc5c002c000-7fc5c002d000 rw-s 00000000 00:06 458 /dev/nvidia1
7fc5c002d000-7fc5c002e000 rw-s 00000000 00:06 458 /dev/nvidia1
7fc5c002e000-7fc5c002f000 rw-s 00000000 00:06 458 /dev/nvidia1
7fc5c002f000-7fc5c0030000 rw-s 00000000 00:06 458 /dev/nvidia1
7fc5c0030000-7fc5c0031000 rw-s 00000000 00:06 459 /dev/nvidia2
7fc5c0031000-7fc5c0032000 rw-s 00000000 00:06 459 /dev/nvidia2
7fc5c0032000-7fc5c0033000 rw-s 00000000 00:06 459 /dev/nvidia2
7fc5c0033000-7fc5c0034000 rw-s 00000000 00:06 459 /dev/nvidia2
7fc5c0034000-7fc5c0035000 rw-s 00000000 00:06 459 /dev/nvidia2
7fc5c0035000-7fc5c0036000 rw-s 00000000 00:06 459 /dev/nvidia2
7fc5c0036000-7fc5c0037000 rw-s 00000000 00:06 459 /dev/nvidia2
7fc5c0037000-7fc5c0038000 rw-s 00000000 00:06 459 /dev/nvidia2
7fc5c0038000-7fc5c0039000 rw-s 00000000 00:06 459 /dev/nvidia2
7fc5c0039000-7fc5c003a000 rw-s 00000000 00:06 459 /dev/nvidia2
7fc5c003a000-7fc5c003b000 rw-s 00000000 00:06 459 /dev/nvidia2
7fc5c003b000-7fc5c003c000 rw-s 00000000 00:06 459 /dev/nvidia2
7fc5c003c000-7fc5c003d000 rw-s 00000000 00:06 459 /dev/nvidia2
7fc5c003d000-7fc5c003e000 rw-s 00000000 00:06 459 /dev/nvidia2
7fc5c003e000-7fc5c003f000 rw-s 00000000 00:06 459 /dev/nvidia2
7fc5c003f000-7fc5c0040000 rw-s 00000000 00:06 459 /dev/nvidia2
7fc5c0040000-7fc5d0000000 ---p 00000000 00:00 0
7fc5d0024000-7fc5d2000000 rw-p 00000000 00:00 0
7fc5d2000000-7fc5d2400000 ---p 00000000 00:00 0
7fc5d2400000-7fc5d2600000 rw-s 00000000 00:05 123153142 /dev/zero (deleted)
7fc5d2600000-7fc5d2800000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc5d2800000-7fc5d2a00000 rw-s 00000000 00:05 123153143 /dev/zero (deleted)
7fc5d2a00000-7fc5d2c00000 ---p 00000000 00:00 0
7fc5d2c00000-7fc5d2e00000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc5d2e00000-7fc5d3000000 ---p 00000000 00:00 0
7fc5d3000000-7fc5d3200000 rw-s 00000000 00:05 123153147 /dev/zero (deleted)
7fc5d3200000-7fc5d34d6000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc5d34d6000-7fc5d4000000 ---p 00000000 00:00 0
7fc5d4018000-7fc5d4e18000 rw-p 00000000 00:00 0
7fc5d4e18000-7fc5dae18000 ---p 00000000 00:00 0
7fc5dae18000-7fc5dbcd1000 r-xp 00000000 08:01 2580 /usr/lib/x86_64-linux-gnu/libcuda.so.440.33.01
7fc5dbcd1000-7fc5dbed0000 ---p 00eb9000 08:01 2580 /usr/lib/x86_64-linux-gnu/libcuda.so.440.33.01
7fc5dbed0000-7fc5dbfeb000 rw-p 00eb8000 08:01 2580 /usr/lib/x86_64-linux-gnu/libcuda.so.440.33.01
7fc5dbfeb000-7fc5dc000000 rw-p 00000000 00:00 0
7fc5dc000000-7fc5dc021000 rw-p 00000000 00:00 0
7fc5dc021000-7fc5e0000000 ---p 00000000 00:00 0
7fc5e0036000-7fc5e00b6000 rw-p 00000000 00:00 0
7fc5e00b6000-7fc626c80000 r-xp 00000000 08:11 35784999 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libtorch.so
7fc626c80000-7fc626e80000 ---p 46bca000 08:11 35784999 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libtorch.so
7fc626e80000-7fc62706b000 r--p 46bca000 08:11 35784999 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libtorch.so
7fc62706b000-7fc628563000 rw-p 46db5000 08:11 35784999 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libtorch.so
7fc628563000-7fc628c9e000 rw-p 00000000 00:00 0
7fc628c9e000-7fc62ae5d000 rw-p 4c796000 08:11 35784999 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libtorch.so
7fc62ae5d000-7fc62af9d000 rw-p 00000000 00:00 0
7fc62af9d000-7fc62b036000 r-xp 00000000 08:11 35652758 /opt/miniconda/lib/python3.6/site-packages/numpy/random/generator.cpython-36m-x86_64-linux-gnu.so
7fc62b036000-7fc62b235000 ---p 00099000 08:11 35652758 /opt/miniconda/lib/python3.6/site-packages/numpy/random/generator.cpython-36m-x86_64-linux-gnu.so
7fc62b235000-7fc62b259000 rw-p 00098000 08:11 35652758 /opt/miniconda/lib/python3.6/site-packages/numpy/random/generator.cpython-36m-x86_64-linux-gnu.so
7fc62b259000-7fc62b25b000 rw-p 00000000 00:00 0
7fc62b25b000-7fc62b267000 r-xp 00000000 08:11 35652765 /opt/miniconda/lib/python3.6/site-packages/numpy/random/sfc64.cpython-36m-x86_64-linux-gnu.so
7fc62b267000-7fc62b466000 ---p 0000c000 08:11 35652765 /opt/miniconda/lib/python3.6/site-packages/numpy/random/sfc64.cpython-36m-x86_64-linux-gnu.so
7fc62b466000-7fc62b468000 rw-p 0000b000 08:11 35652765 /opt/miniconda/lib/python3.6/site-packages/numpy/random/sfc64.cpython-36m-x86_64-linux-gnu.so
7fc62b468000-7fc62b477000 r-xp 00000000 08:11 35652762 /opt/miniconda/lib/python3.6/site-packages/numpy/random/pcg64.cpython-36m-x86_64-linux-gnu.so
7fc62b477000-7fc62b677000 ---p 0000f000 08:11 35652762 /opt/miniconda/lib/python3.6/site-packages/numpy/random/pcg64.cpython-36m-x86_64-linux-gnu.so
7fc62b677000-7fc62b679000 rw-p 0000f000 08:11 35652762 /opt/miniconda/lib/python3.6/site-packages/numpy/random/pcg64.cpython-36m-x86_64-linux-gnu.so
7fc62b679000-7fc62b67a000 rw-p 00000000 00:00 0
7fc62b67a000-7fc62b68c000 r-xp 00000000 08:11 35652763 /opt/miniconda/lib/python3.6/site-packages/numpy/random/philox.cpython-36m-x86_64-linux-gnu.so
7fc62b68c000-7fc62b88b000 ---p 00012000 08:11 35652763 /opt/miniconda/lib/python3.6/site-packages/numpy/random/philox.cpython-36m-x86_64-linux-gnu.so
7fc62b88b000-7fc62b88e000 rw-p 00011000 08:11 35652763 /opt/miniconda/lib/python3.6/site-packages/numpy/random/philox.cpython-36m-x86_64-linux-gnu.so
7fc62b88e000-7fc62b892000 r-xp 00000000 08:11 35391268 /opt/miniconda/lib/python3.6/lib-dynload/_random.cpython-36m-x86_64-linux-gnu.so
7fc62b892000-7fc62ba91000 ---p 00004000 08:11 35391268 /opt/miniconda/lib/python3.6/lib-dynload/_random.cpython-36m-x86_64-linux-gnu.so
7fc62ba91000-7fc62ba92000 r--p 00003000 08:11 35391268 /opt/miniconda/lib/python3.6/lib-dynload/_random.cpython-36m-x86_64-linux-gnu.so
7fc62ba92000-7fc62ba93000 rw-p 00004000 08:11 35391268 /opt/miniconda/lib/python3.6/lib-dynload/_random.cpython-36m-x86_64-linux-gnu.so
7fc62ba93000-7fc62ba95000 r-xp 00000000 08:11 35391239 /opt/miniconda/lib/python3.6/lib-dynload/_bisect.cpython-36m-x86_64-linux-gnu.so
7fc62ba95000-7fc62bc94000 ---p 00002000 08:11 35391239 /opt/miniconda/lib/python3.6/lib-dynload/_bisect.cpython-36m-x86_64-linux-gnu.so
7fc62bc94000-7fc62bc95000 r--p 00001000 08:11 35391239 /opt/miniconda/lib/python3.6/lib-dynload/_bisect.cpython-36m-x86_64-linux-gnu.so
7fc62bc95000-7fc62bc96000 rw-p 00002000 08:11 35391239 /opt/miniconda/lib/python3.6/lib-dynload/_bisect.cpython-36m-x86_64-linux-gnu.so
7fc62bc96000-7fc62bcac000 r-xp 00000000 08:11 35391271 /opt/miniconda/lib/python3.6/lib-dynload/_sha3.cpython-36m-x86_64-linux-gnu.so
7fc62bcac000-7fc62beab000 ---p 00016000 08:11 35391271 /opt/miniconda/lib/python3.6/lib-dynload/_sha3.cpython-36m-x86_64-linux-gnu.so
7fc62beab000-7fc62beac000 r--p 00015000 08:11 35391271 /opt/miniconda/lib/python3.6/lib-dynload/_sha3.cpython-36m-x86_64-linux-gnu.so
7fc62beac000-7fc62beae000 rw-p 00016000 08:11 35391271 /opt/miniconda/lib/python3.6/lib-dynload/_sha3.cpython-36m-x86_64-linux-gnu.so
7fc62beae000-7fc62beba000 r-xp 00000000 08:11 35391240 /opt/miniconda/lib/python3.6/lib-dynload/_blake2.cpython-36m-x86_64-linux-gnu.so
7fc62beba000-7fc62c0ba000 ---p 0000c000 08:11 35391240 /opt/miniconda/lib/python3.6/lib-dynload/_blake2.cpython-36m-x86_64-linux-gnu.so
7fc62c0ba000-7fc62c0bb000 r--p 0000c000 08:11 35391240 /opt/miniconda/lib/python3.6/lib-dynload/_blake2.cpython-36m-x86_64-linux-gnu.so
7fc62c0bb000-7fc62c0bc000 rw-p 0000d000 08:11 35391240 /opt/miniconda/lib/python3.6/lib-dynload/_blake2.cpython-36m-x86_64-linux-gnu.so
7fc62c0bc000-7fc62c129000 r--p 00000000 08:11 35389902 /opt/miniconda/lib/libcrypto.so.1.0.0
7fc62c129000-7fc62c253000 r-xp 0006d000 08:11 35389902 /opt/miniconda/lib/libcrypto.so.1.0.0
7fc62c253000-7fc62c2da000 r--p 00197000 08:11 35389902 /opt/miniconda/lib/libcrypto.so.1.0.0
7fc62c2da000-7fc62c2f6000 r--p 0021d000 08:11 35389902 /opt/miniconda/lib/libcrypto.so.1.0.0
7fc62c2f6000-7fc62c301000 rw-p 00239000 08:11 35389902 /opt/miniconda/lib/libcrypto.so.1.0.0
7fc62c301000-7fc62c305000 rw-p 00000000 00:00 0
7fc62c305000-7fc62c30b000 r-xp 00000000 08:11 35391257 /opt/miniconda/lib/python3.6/lib-dynload/_hashlib.cpython-36m-x86_64-linux-gnu.so
7fc62c30b000-7fc62c50a000 ---p 00006000 08:11 35391257 /opt/miniconda/lib/python3.6/lib-dynload/_hashlib.cpython-36m-x86_64-linux-gnu.so
7fc62c50a000-7fc62c50b000 r--p 00005000 08:11 35391257 /opt/miniconda/lib/python3.6/lib-dynload/_hashlib.cpython-36m-x86_64-linux-gnu.so
7fc62c50b000-7fc62c50c000 rw-p 00006000 08:11 35391257 /opt/miniconda/lib/python3.6/lib-dynload/_hashlib.cpython-36m-x86_64-linux-gnu.so
7fc62c50c000-7fc62c512000 r-xp 00000000 08:11 35391284 /opt/miniconda/lib/python3.6/lib-dynload/binascii.cpython-36m-x86_64-linux-gnu.so
7fc62c512000-7fc62c711000 ---p 00006000 08:11 35391284 /opt/miniconda/lib/python3.6/lib-dynload/binascii.cpython-36m-x86_64-linux-gnu.so
7fc62c711000-7fc62c712000 r--p 00005000 08:11 35391284 /opt/miniconda/lib/python3.6/lib-dynload/binascii.cpython-36m-x86_64-linux-gnu.so
7fc62c712000-7fc62c713000 rw-p 00006000 08:11 35391284 /opt/miniconda/lib/python3.6/lib-dynload/binascii.cpython-36m-x86_64-linux-gnu.so
7fc62c713000-7fc62c73c000 r-xp 00000000 08:11 35652753 /opt/miniconda/lib/python3.6/site-packages/numpy/random/bit_generator.cpython-36m-x86_64-linux-gnu.so
7fc62c73c000-7fc62c93c000 ---p 00029000 08:11 35652753 /opt/miniconda/lib/python3.6/site-packages/numpy/random/bit_generator.cpython-36m-x86_64-linux-gnu.so
7fc62c93c000-7fc62c941000 rw-p 00029000 08:11 35652753 /opt/miniconda/lib/python3.6/site-packages/numpy/random/bit_generator.cpython-36m-x86_64-linux-gnu.so
7fc62c941000-7fc62c942000 rw-p 00000000 00:00 0
7fc62c942000-7fc62c95b000 r-xp 00000000 08:11 35652760 /opt/miniconda/lib/python3.6/site-packages/numpy/random/mt19937.cpython-36m-x86_64-linux-gnu.so
7fc62c95b000-7fc62cb5a000 ---p 00019000 08:11 35652760 /opt/miniconda/lib/python3.6/site-packages/numpy/random/mt19937.cpython-36m-x86_64-linux-gnu.so
7fc62cb5a000-7fc62cb5c000 rw-p 00018000 08:11 35652760 /opt/miniconda/lib/python3.6/site-packages/numpy/random/mt19937.cpython-36m-x86_64-linux-gnu.so
7fc62cb5c000-7fc62cb5d000 rw-p 00000000 00:00 0
7fc62cb5d000-7fc62cbba000 r-xp 00000000 08:11 35652755 /opt/miniconda/lib/python3.6/site-packages/numpy/random/bounded_integers.cpython-36m-x86_64-linux-gnu.so
7fc62cbba000-7fc62cdba000 ---p 0005d000 08:11 35652755 /opt/miniconda/lib/python3.6/site-packages/numpy/random/bounded_integers.cpython-36m-x86_64-linux-gnu.so
7fc62cdba000-7fc62cdbc000 rw-p 0005d000 08:11 35652755 /opt/miniconda/lib/python3.6/site-packages/numpy/random/bounded_integers.cpython-36m-x86_64-linux-gnu.so
7fc62cdbc000-7fc62cdfe000 rw-p 00000000 00:00 0
7fc62cdfe000-7fc62ce3a000 r-xp 00000000 08:11 35652756 /opt/miniconda/lib/python3.6/site-packages/numpy/random/common.cpython-36m-x86_64-linux-gnu.so
7fc62ce3a000-7fc62d03a000 ---p 0003c000 08:11 35652756 /opt/miniconda/lib/python3.6/site-packages/numpy/random/common.cpython-36m-x86_64-linux-gnu.so
7fc62d03a000-7fc62d03c000 rw-p 0003c000 08:11 35652756 /opt/miniconda/lib/python3.6/site-packages/numpy/random/common.cpython-36m-x86_64-linux-gnu.so
7fc62d03c000-7fc62d03d000 rw-p 00000000 00:00 0
7fc62d03d000-7fc62d0c0000 r-xp 00000000 08:11 35652761 /opt/miniconda/lib/python3.6/site-packages/numpy/random/mtrand.cpython-36m-x86_64-linux-gnu.so
7fc62d0c0000-7fc62d2c0000 ---p 00083000 08:11 35652761 /opt/miniconda/lib/python3.6/site-packages/numpy/random/mtrand.cpython-36m-x86_64-linux-gnu.so
7fc62d2c0000-7fc62d2e4000 rw-p 00083000 08:11 35652761 /opt/miniconda/lib/python3.6/site-packages/numpy/random/mtrand.cpython-36m-x86_64-linux-gnu.so
7fc62d2e4000-7fc62d326000 rw-p 00000000 00:00 0
7fc62d326000-7fc62d33c000 r-xp 00000000 08:11 35652486 /opt/miniconda/lib/python3.6/site-packages/numpy/fft/_pocketfft_internal.cpython-36m-x86_64-linux-gnu.so
7fc62d33c000-7fc62d53b000 ---p 00016000 08:11 35652486 /opt/miniconda/lib/python3.6/site-packages/numpy/fft/_pocketfft_internal.cpython-36m-x86_64-linux-gnu.so
7fc62d53b000-7fc62d53c000 rw-p 00015000 08:11 35652486 /opt/miniconda/lib/python3.6/site-packages/numpy/fft/_pocketfft_internal.cpython-36m-x86_64-linux-gnu.so
7fc62d53c000-7fc62d57c000 rw-p 00000000 00:00 0
7fc62d57c000-7fc62d5c0000 r-xp 00000000 08:11 35391255 /opt/miniconda/lib/python3.6/lib-dynload/_decimal.cpython-36m-x86_64-linux-gnu.so
7fc62d5c0000-7fc62d7c0000 ---p 00044000 08:11 35391255 /opt/miniconda/lib/python3.6/lib-dynload/_decimal.cpython-36m-x86_64-linux-gnu.so
7fc62d7c0000-7fc62d7c1000 r--p 00044000 08:11 35391255 /opt/miniconda/lib/python3.6/lib-dynload/_decimal.cpython-36m-x86_64-linux-gnu.so
7fc62d7c1000-7fc62d7c9000 rw-p 00045000 08:11 35391255 /opt/miniconda/lib/python3.6/lib-dynload/_decimal.cpython-36m-x86_64-linux-gnu.so
7fc62d7c9000-7fc62d889000 rw-p 00000000 00:00 0
7fc62d889000-7fc62d8b4000 r-xp 00000000 08:11 35652619 /opt/miniconda/lib/python3.6/site-packages/numpy/linalg/_umath_linalg.cpython-36m-x86_64-linux-gnu.so
7fc62d8b4000-7fc62dab3000 ---p 0002b000 08:11 35652619 /opt/miniconda/lib/python3.6/site-packages/numpy/linalg/_umath_linalg.cpython-36m-x86_64-linux-gnu.so
7fc62dab3000-7fc62dab5000 rw-p 0002a000 08:11 35652619 /opt/miniconda/lib/python3.6/site-packages/numpy/linalg/_umath_linalg.cpython-36m-x86_64-linux-gnu.so
7fc62dab5000-7fc62dab8000 rw-p 000d3000 08:11 35652619 /opt/miniconda/lib/python3.6/site-packages/numpy/linalg/_umath_linalg.cpython-36m-x86_64-linux-gnu.so
7fc62dab8000-7fc62dabc000 r-xp 00000000 08:11 35652621 /opt/miniconda/lib/python3.6/site-packages/numpy/linalg/lapack_lite.cpython-36m-x86_64-linux-gnu.so
7fc62dabc000-7fc62dcbc000 ---p 00004000 08:11 35652621 /opt/miniconda/lib/python3.6/site-packages/numpy/linalg/lapack_lite.cpython-36m-x86_64-linux-gnu.so
7fc62dcbc000-7fc62dcbd000 rw-p 00004000 08:11 35652621 /opt/miniconda/lib/python3.6/site-packages/numpy/linalg/lapack_lite.cpython-36m-x86_64-linux-gnu.so
7fc62dcbd000-7fc62dcbf000 rw-p 00019000 08:11 35652621 /opt/miniconda/lib/python3.6/site-packages/numpy/linalg/lapack_lite.cpython-36m-x86_64-linux-gnu.so
7fc62dcbf000-7fc62dcff000 rw-p 00000000 00:00 0
7fc62dcff000-7fc62dd06000 r-xp 00000000 08:11 35260206 /opt/miniconda/lib/libffi.so.6.0.4
7fc62dd06000-7fc62df06000 ---p 00007000 08:11 35260206 /opt/miniconda/lib/libffi.so.6.0.4
7fc62df06000-7fc62df07000 r--p 00007000 08:11 35260206 /opt/miniconda/lib/libffi.so.6.0.4
7fc62df07000-7fc62df08000 rw-p 00008000 08:11 35260206 /opt/miniconda/lib/libffi.so.6.0.4
7fc62df08000-7fc62df26000 r-xp 00000000 08:11 35391250 /opt/miniconda/lib/python3.6/lib-dynload/_ctypes.cpython-36m-x86_64-linux-gnu.so
7fc62df26000-7fc62e126000 ---p 0001e000 08:11 35391250 /opt/miniconda/lib/python3.6/lib-dynload/_ctypes.cpython-36m-x86_64-linux-gnu.so
7fc62e126000-7fc62e127000 r--p 0001e000 08:11 35391250 /opt/miniconda/lib/python3.6/lib-dynload/_ctypes.cpython-36m-x86_64-linux-gnu.so
7fc62e127000-7fc62e12b000 rw-p 0001f000 08:11 35391250 /opt/miniconda/lib/python3.6/lib-dynload/_ctypes.cpython-36m-x86_64-linux-gnu.so
7fc62e12b000-7fc62e14a000 r-xp 00000000 08:11 35652017 /opt/miniconda/lib/python3.6/site-packages/numpy/core/_multiarray_tests.cpython-36m-x86_64-linux-gnu.so
7fc62e14a000-7fc62e349000 ---p 0001f000 08:11 35652017 /opt/miniconda/lib/python3.6/site-packages/numpy/core/_multiarray_tests.cpython-36m-x86_64-linux-gnu.so
7fc62e349000-7fc62e34b000 rw-p 0001e000 08:11 35652017 /opt/miniconda/lib/python3.6/site-packages/numpy/core/_multiarray_tests.cpython-36m-x86_64-linux-gnu.so
7fc62e34b000-7fc62e3cb000 rw-p 00000000 00:00 0
7fc62e3dc000-7fc62e49c000 rw-p 00000000 00:00 0
7fc62e49c000-7fc62e4b8000 r-xp 00000000 08:11 35391266 /opt/miniconda/lib/python3.6/lib-dynload/_pickle.cpython-36m-x86_64-linux-gnu.so
7fc62e4b8000-7fc62e6b7000 ---p 0001c000 08:11 35391266 /opt/miniconda/lib/python3.6/lib-dynload/_pickle.cpython-36m-x86_64-linux-gnu.so
7fc62e6b7000-7fc62e6b8000 r--p 0001b000 08:11 35391266 /opt/miniconda/lib/python3.6/lib-dynload/_pickle.cpython-36m-x86_64-linux-gnu.so
7fc62e6b8000-7fc62e6bb000 rw-p 0001c000 08:11 35391266 /opt/miniconda/lib/python3.6/lib-dynload/_pickle.cpython-36m-x86_64-linux-gnu.so
7fc62e6bb000-7fc62e6fb000 rw-p 00000000 00:00 0
7fc62e6fb000-7fc62e714000 r-xp 00000000 08:11 35391254 /opt/miniconda/lib/python3.6/lib-dynload/_datetime.cpython-36m-x86_64-linux-gnu.so
7fc62e714000-7fc62e913000 ---p 00019000 08:11 35391254 /opt/miniconda/lib/python3.6/lib-dynload/_datetime.cpython-36m-x86_64-linux-gnu.so
7fc62e913000-7fc62e914000 r--p 00018000 08:11 35391254 /opt/miniconda/lib/python3.6/lib-dynload/_datetime.cpython-36m-x86_64-linux-gnu.so
7fc62e914000-7fc62e916000 rw-p 00019000 08:11 35391254 /opt/miniconda/lib/python3.6/lib-dynload/_datetime.cpython-36m-x86_64-linux-gnu.so
7fc62e916000-7fc636916000 rw-p 00000000 00:00 0
7fc636916000-7fc636956000 rw-p 00000000 00:00 0
7fc636956000-7fc63c956000 rw-p 00000000 00:00 0
7fc63c956000-7fc63c957000 ---p 00000000 00:00 0
7fc63c957000-7fc63d157000 rw-p 00000000 00:00 0
7fc63d157000-7fc63d158000 ---p 00000000 00:00 0
7fc63d158000-7fc63d958000 rw-p 00000000 00:00 0
7fc63d958000-7fc63d959000 ---p 00000000 00:00 0
7fc63d959000-7fc63e159000 rw-p 00000000 00:00 0
7fc63e159000-7fc63e15a000 ---p 00000000 00:00 0
7fc63e15a000-7fc63e95a000 rw-p 00000000 00:00 0
7fc63e95d000-7fc63eedd000 rw-p 00000000 00:00 0
7fc63eedd000-7fc63f650000 r-xp 00000000 08:01 2716 /usr/lib/x86_64-linux-gnu/libnvidia-ptxjitcompiler.so.440.33.01
7fc63f650000-7fc63f84f000 ---p 00773000 08:01 2716 /usr/lib/x86_64-linux-gnu/libnvidia-ptxjitcompiler.so.440.33.01
7fc63f84f000-7fc63f94e000 r--p 00772000 08:01 2716 /usr/lib/x86_64-linux-gnu/libnvidia-ptxjitcompiler.so.440.33.01
7fc63f94e000-7fc63f953000 rw-p 00871000 08:01 2716 /usr/lib/x86_64-linux-gnu/libnvidia-ptxjitcompiler.so.440.33.01
7fc63f953000-7fc63f95c000 rw-p 00000000 00:00 0
7fc63f95c000-7fc64195c000 rw-p 00000000 00:00 0
7fc641970000-7fc64215d000 rw-p 00000000 00:00 0
7fc64215d000-7fc64415d000 rw-p 00000000 00:00 0
7fc64415e000-7fc64495e000 rw-p 00000000 00:00 0
7fc64495e000-7fc64695e000 rw-p 00000000 00:00 0
7fc64696a000-7fc646daa000 rw-p 00000000 00:00 0
7fc646daa000-7fc646f59000 r-xp 00000000 08:11 35785460 /opt/miniconda/lib/python3.6/site-packages/torchvision/_C.cpython-36m-x86_64-linux-gnu.so
7fc646f59000-7fc647159000 ---p 001af000 08:11 35785460 /opt/miniconda/lib/python3.6/site-packages/torchvision/_C.cpython-36m-x86_64-linux-gnu.so
7fc647159000-7fc64715a000 r--p 001af000 08:11 35785460 /opt/miniconda/lib/python3.6/site-packages/torchvision/_C.cpython-36m-x86_64-linux-gnu.so
7fc64715a000-7fc64715e000 rw-p 001b0000 08:11 35785460 /opt/miniconda/lib/python3.6/site-packages/torchvision/_C.cpython-36m-x86_64-linux-gnu.so
7fc64715e000-7fc64715f000 rw-p 00000000 00:00 0
7fc64715f000-7fc64915f000 rw-p 00000000 00:00 0
7fc64917b000-7fc6492fb000 rw-p 00000000 00:00 0
7fc6492fb000-7fc6494fb000 rw-s 00000000 00:05 123153144 /dev/zero (deleted)
7fc6494fb000-7fc64954a000 r-xp 00000000 08:11 35391293 /opt/miniconda/lib/python3.6/lib-dynload/pyexpat.cpython-36m-x86_64-linux-gnu.so
7fc64954a000-7fc649749000 ---p 0004f000 08:11 35391293 /opt/miniconda/lib/python3.6/lib-dynload/pyexpat.cpython-36m-x86_64-linux-gnu.so
7fc649749000-7fc64974c000 r--p 0004e000 08:11 35391293 /opt/miniconda/lib/python3.6/lib-dynload/pyexpat.cpython-36m-x86_64-linux-gnu.so
7fc64974c000-7fc64974e000 rw-p 00051000 08:11 35391293 /opt/miniconda/lib/python3.6/lib-dynload/pyexpat.cpython-36m-x86_64-linux-gnu.so
7fc64974e000-7fc64975e000 r-xp 00000000 08:11 35391256 /opt/miniconda/lib/python3.6/lib-dynload/_elementtree.cpython-36m-x86_64-linux-gnu.so
7fc64975e000-7fc64995d000 ---p 00010000 08:11 35391256 /opt/miniconda/lib/python3.6/lib-dynload/_elementtree.cpython-36m-x86_64-linux-gnu.so
7fc64995d000-7fc64995e000 r--p 0000f000 08:11 35391256 /opt/miniconda/lib/python3.6/lib-dynload/_elementtree.cpython-36m-x86_64-linux-gnu.so
7fc64995e000-7fc649960000 rw-p 00010000 08:11 35391256 /opt/miniconda/lib/python3.6/lib-dynload/_elementtree.cpython-36m-x86_64-linux-gnu.so
7fc649960000-7fc64b960000 rw-p 00000000 00:00 0
7fc64b990000-7fc64bb50000 rw-p 00000000 00:00 0
7fc64bb50000-7fc64bb56000 r-xp 00000000 08:11 36308760 /root/.local/lib/python3.6/site-packages/matplotlib/backends/_tkagg.cpython-36m-x86_64-linux-gnu.so
7fc64bb56000-7fc64bd55000 ---p 00006000 08:11 36308760 /root/.local/lib/python3.6/site-packages/matplotlib/backends/_tkagg.cpython-36m-x86_64-linux-gnu.so
7fc64bd55000-7fc64bd57000 rw-p 00005000 08:11 36308760 /root/.local/lib/python3.6/site-packages/matplotlib/backends/_tkagg.cpython-36m-x86_64-linux-gnu.so
7fc64bd57000-7fc64bd5c000 r-xp 00000000 08:11 34998467 /usr/lib/x86_64-linux-gnu/libXdmcp.so.6.0.0
7fc64bd5c000-7fc64bf5b000 ---p 00005000 08:11 34998467 /usr/lib/x86_64-linux-gnu/libXdmcp.so.6.0.0
7fc64bf5b000-7fc64bf5c000 r--p 00004000 08:11 34998467 /usr/lib/x86_64-linux-gnu/libXdmcp.so.6.0.0
7fc64bf5c000-7fc64bf5d000 rw-p 00005000 08:11 34998467 /usr/lib/x86_64-linux-gnu/libXdmcp.so.6.0.0
7fc64bf5d000-7fc64bf5f000 r-xp 00000000 08:11 34998463 /usr/lib/x86_64-linux-gnu/libXau.so.6.0.0
7fc64bf5f000-7fc64c15f000 ---p 00002000 08:11 34998463 /usr/lib/x86_64-linux-gnu/libXau.so.6.0.0
7fc64c15f000-7fc64c160000 r--p 00002000 08:11 34998463 /usr/lib/x86_64-linux-gnu/libXau.so.6.0.0
7fc64c160000-7fc64c161000 rw-p 00003000 08:11 34998463 /usr/lib/x86_64-linux-gnu/libXau.so.6.0.0
7fc64c161000-7fc64e161000 rw-p 00000000 00:00 0
7fc64e161000-7fc64e2a1000 rw-p 00000000 00:00 0
7fc64e2a1000-7fc64e2c2000 r-xp 00000000 08:11 34998491 /usr/lib/x86_64-linux-gnu/libxcb.so.1.1.0
7fc64e2c2000-7fc64e4c1000 ---p 00021000 08:11 34998491 /usr/lib/x86_64-linux-gnu/libxcb.so.1.1.0
7fc64e4c1000-7fc64e4c2000 r--p 00020000 08:11 34998491 /usr/lib/x86_64-linux-gnu/libxcb.so.1.1.0
7fc64e4c2000-7fc64e4c3000 rw-p 00021000 08:11 34998491 /usr/lib/x86_64-linux-gnu/libxcb.so.1.1.0
7fc64e4c3000-7fc64e5f8000 r-xp 00000000 08:11 34998459 /usr/lib/x86_64-linux-gnu/libX11.so.6.3.0
7fc64e5f8000-7fc64e7f8000 ---p 00135000 08:11 34998459 /usr/lib/x86_64-linux-gnu/libX11.so.6.3.0
7fc64e7f8000-7fc64e7f9000 r--p 00135000 08:11 34998459 /usr/lib/x86_64-linux-gnu/libX11.so.6.3.0
7fc64e7f9000-7fc64e7fd000 rw-p 00136000 08:11 34998459 /usr/lib/x86_64-linux-gnu/libX11.so.6.3.0
7fc64e7fd000-7fc64e832000 r--p 00000000 08:11 35260283 /opt/miniconda/lib/libtk8.6.so
7fc64e832000-7fc64e90e000 r-xp 00035000 08:11 35260283 /opt/miniconda/lib/libtk8.6.so
7fc64e90e000-7fc64e942000 r--p 00111000 08:11 35260283 /opt/miniconda/lib/libtk8.6.so
7fc64e942000-7fc64e959000 r--p 00144000 08:11 35260283 /opt/miniconda/lib/libtk8.6.so
7fc64e959000-7fc64e962000 rw-p 0015b000 08:11 35260283 /opt/miniconda/lib/libtk8.6.so
7fc64e962000-7fc650962000 rw-p 00000000 00:00 0
7fc650977000-7fc6509a2000 r--p 00000000 08:11 35260275 /opt/miniconda/lib/libtcl8.6.so
7fc6509a2000-7fc650ae0000 r-xp 0002b000 08:11 35260275 /opt/miniconda/lib/libtcl8.6.so
7fc650ae0000-7fc650b24000 r--p 00169000 08:11 35260275 /opt/miniconda/lib/libtcl8.6.so
7fc650b24000-7fc650b34000 r--p 001ac000 08:11 35260275 /opt/miniconda/lib/libtcl8.6.so
7fc650b34000-7fc650b35000 rw-p 001bc000 08:11 35260275 /opt/miniconda/lib/libtcl8.6.so
7fc650b35000-7fc650b36000 rw-p 00000000 00:00 0
7fc650b36000-7fc650b48000 r-xp 00000000 08:11 35391281 /opt/miniconda/lib/python3.6/lib-dynload/_tkinter.cpython-36m-x86_64-linux-gnu.so
7fc650b48000-7fc650d48000 ---p 00012000 08:11 35391281 /opt/miniconda/lib/python3.6/lib-dynload/_tkinter.cpython-36m-x86_64-linux-gnu.so
7fc650d48000-7fc650d49000 r--p 00012000 08:11 35391281 /opt/miniconda/lib/python3.6/lib-dynload/_tkinter.cpython-36m-x86_64-linux-gnu.so
7fc650d49000-7fc650d4a000 rw-p 00013000 08:11 35391281 /opt/miniconda/lib/python3.6/lib-dynload/_tkinter.cpython-36m-x86_64-linux-gnu.so
7fc650d4a000-7fc650dca000 rw-p 00000000 00:00 0
7fc650dca000-7fc650e21000 r-xp 00000000 08:11 36308753 /root/.local/lib/python3.6/site-packages/matplotlib/backends/_backend_agg.cpython-36m-x86_64-linux-gnu.so
7fc650e21000-7fc651021000 ---p 00057000 08:11 36308753 /root/.local/lib/python3.6/site-packages/matplotlib/backends/_backend_agg.cpython-36m-x86_64-linux-gnu.so
7fc651021000-7fc651022000 rw-p 00057000 08:11 36308753 /root/.local/lib/python3.6/site-packages/matplotlib/backends/_backend_agg.cpython-36m-x86_64-linux-gnu.so
7fc651022000-7fc651163000 rw-p 00000000 00:00 0
7fc651163000-7fc653163000 rw-p 00000000 00:00 0
7fc653199000-7fc653219000 rw-p 00000000 00:00 0
7fc653219000-7fc653221000 r-xp 00000000 08:11 35391249 /opt/miniconda/lib/python3.6/lib-dynload/_csv.cpython-36m-x86_64-linux-gnu.so
7fc653221000-7fc653420000 ---p 00008000 08:11 35391249 /opt/miniconda/lib/python3.6/lib-dynload/_csv.cpython-36m-x86_64-linux-gnu.so
7fc653420000-7fc653421000 r--p 00007000 08:11 35391249 /opt/miniconda/lib/python3.6/lib-dynload/_csv.cpython-36m-x86_64-linux-gnu.so
7fc653421000-7fc653423000 rw-p 00008000 08:11 35391249 /opt/miniconda/lib/python3.6/lib-dynload/_csv.cpython-36m-x86_64-linux-gnu.so
7fc653423000-7fc653523000 rw-p 00000000 00:00 0
7fc653523000-7fc653527000 r-xp 00000000 08:11 34604374 /lib/x86_64-linux-gnu/libuuid.so.1.3.0
7fc653527000-7fc653726000 ---p 00004000 08:11 34604374 /lib/x86_64-linux-gnu/libuuid.so.1.3.0
7fc653726000-7fc653727000 r--p 00003000 08:11 34604374 /lib/x86_64-linux-gnu/libuuid.so.1.3.0
7fc653727000-7fc653728000 rw-p 00004000 08:11 34604374 /lib/x86_64-linux-gnu/libuuid.so.1.3.0
7fc653728000-7fc653762000 r-xp 00000000 08:11 36308660 /root/.local/lib/python3.6/site-packages/matplotlib/_image.cpython-36m-x86_64-linux-gnu.so
7fc653762000-7fc653962000 ---p 0003a000 08:11 36308660 /root/.local/lib/python3.6/site-packages/matplotlib/_image.cpython-36m-x86_64-linux-gnu.so
7fc653962000-7fc653963000 rw-p 0003a000 08:11 36308660 /root/.local/lib/python3.6/site-packages/matplotlib/_image.cpython-36m-x86_64-linux-gnu.so
7fc653963000-7fc653964000 rw-p 00000000 00:00 0
7fc653964000-7fc655964000 rw-p 00000000 00:00 0
7fc655978000-7fc655cf8000 rw-p 00000000 00:00 0
7fc655cf8000-7fc655d25000 r-xp 00000000 08:11 36308678 /root/.local/lib/python3.6/site-packages/matplotlib/_path.cpython-36m-x86_64-linux-gnu.so
7fc655d25000-7fc655f25000 ---p 0002d000 08:11 36308678 /root/.local/lib/python3.6/site-packages/matplotlib/_path.cpython-36m-x86_64-linux-gnu.so
7fc655f25000-7fc655f26000 rw-p 0002d000 08:11 36308678 /root/.local/lib/python3.6/site-packages/matplotlib/_path.cpython-36m-x86_64-linux-gnu.so
7fc655f26000-7fc655f27000 rw-p 00000000 00:00 0
7fc655f27000-7fc655f63000 r-xp 00000000 08:11 36308103 /root/.local/lib/python3.6/site-packages/kiwisolver.cpython-36m-x86_64-linux-gnu.so
7fc655f63000-7fc656162000 ---p 0003c000 08:11 36308103 /root/.local/lib/python3.6/site-packages/kiwisolver.cpython-36m-x86_64-linux-gnu.so
7fc656162000-7fc656165000 rw-p 0003b000 08:11 36308103 /root/.local/lib/python3.6/site-packages/kiwisolver.cpython-36m-x86_64-linux-gnu.so
7fc656165000-7fc658165000 rw-p 00000000 00:00 0
7fc658173000-7fc6582f3000 rw-p 00000000 00:00 0
7fc6582f3000-7fc6583cc000 r-xp 00000000 08:11 36308714 /root/.local/lib/python3.6/site-packages/matplotlib/ft2font.cpython-36m-x86_64-linux-gnu.so
7fc6583cc000-7fc6585cc000 ---p 000d9000 08:11 36308714 /root/.local/lib/python3.6/site-packages/matplotlib/ft2font.cpython-36m-x86_64-linux-gnu.so
7fc6585cc000-7fc6585d3000 rw-p 000d9000 08:11 36308714 /root/.local/lib/python3.6/site-packages/matplotlib/ft2font.cpython-36m-x86_64-linux-gnu.so
7fc6585d3000-7fc658754000 rw-p 00000000 00:00 0
7fc658754000-7fc658765000 r-xp 00000000 08:11 35391259 /opt/miniconda/lib/python3.6/lib-dynload/_json.cpython-36m-x86_64-linux-gnu.so
7fc658765000-7fc658964000 ---p 00011000 08:11 35391259 /opt/miniconda/lib/python3.6/lib-dynload/_json.cpython-36m-x86_64-linux-gnu.so
7fc658964000-7fc658965000 r--p 00010000 08:11 35391259 /opt/miniconda/lib/python3.6/lib-dynload/_json.cpython-36m-x86_64-linux-gnu.so
7fc658965000-7fc658966000 rw-p 00011000 08:11 35391259 /opt/miniconda/lib/python3.6/lib-dynload/_json.cpython-36m-x86_64-linux-gnu.so
7fc658966000-7fc65a966000 rw-p 00000000 00:00 0
7fc65a969000-7fc65ab29000 rw-p 00000000 00:00 0
7fc65ab29000-7fc65abea000 r-xp 00000000 08:11 35391300 /opt/miniconda/lib/python3.6/lib-dynload/unicodedata.cpython-36m-x86_64-linux-gnu.so
7fc65abea000-7fc65adea000 ---p 000c1000 08:11 35391300 /opt/miniconda/lib/python3.6/lib-dynload/unicodedata.cpython-36m-x86_64-linux-gnu.so
7fc65adea000-7fc65adeb000 r--p 000c1000 08:11 35391300 /opt/miniconda/lib/python3.6/lib-dynload/unicodedata.cpython-36m-x86_64-linux-gnu.so
7fc65adeb000-7fc65ae06000 rw-p 000c2000 08:11 35391300 /opt/miniconda/lib/python3.6/lib-dynload/unicodedata.cpython-36m-x86_64-linux-gnu.so
7fc65ae06000-7fc65af47000 rw-p 00000000 00:00 0
7fc65af47000-7fc65af61000 r-xp 00000000 08:11 35391275 /opt/miniconda/lib/python3.6/lib-dynload/_ssl.cpython-36m-x86_64-linux-gnu.so
7fc65af61000-7fc65b161000 ---p 0001a000 08:11 35391275 /opt/miniconda/lib/python3.6/lib-dynload/_ssl.cpython-36m-x86_64-linux-gnu.so
7fc65b161000-7fc65b162000 r--p 0001a000 08:11 35391275 /opt/miniconda/lib/python3.6/lib-dynload/_ssl.cpython-36m-x86_64-linux-gnu.so
7fc65b162000-7fc65b167000 rw-p 0001b000 08:11 35391275 /opt/miniconda/lib/python3.6/lib-dynload/_ssl.cpython-36m-x86_64-linux-gnu.so
7fc65b167000-7fc65d167000 rw-p 00000000 00:00 0
7fc65d199000-7fc65d259000 rw-p 00000000 00:00 0
7fc65d259000-7fc65d275000 r--p 00000000 08:11 35390014 /opt/miniconda/lib/libssl.so.1.0.0
7fc65d275000-7fc65d2b6000 r-xp 0001c000 08:11 35390014 /opt/miniconda/lib/libssl.so.1.0.0
7fc65d2b6000-7fc65d2c7000 r--p 0005d000 08:11 35390014 /opt/miniconda/lib/libssl.so.1.0.0
7fc65d2c7000-7fc65d2cc000 r--p 0006d000 08:11 35390014 /opt/miniconda/lib/libssl.so.1.0.0
7fc65d2cc000-7fc65d2d2000 rw-p 00072000 08:11 35390014 /opt/miniconda/lib/libssl.so.1.0.0
7fc65d2d2000-7fc65d412000 rw-p 00000000 00:00 0
7fc65d412000-7fc65d415000 r-xp 00000000 08:11 35391264 /opt/miniconda/lib/python3.6/lib-dynload/_multiprocessing.cpython-36m-x86_64-linux-gnu.so
7fc65d415000-7fc65d615000 ---p 00003000 08:11 35391264 /opt/miniconda/lib/python3.6/lib-dynload/_multiprocessing.cpython-36m-x86_64-linux-gnu.so
7fc65d615000-7fc65d616000 r--p 00003000 08:11 35391264 /opt/miniconda/lib/python3.6/lib-dynload/_multiprocessing.cpython-36m-x86_64-linux-gnu.so
7fc65d616000-7fc65d617000 rw-p 00004000 08:11 35391264 /opt/miniconda/lib/python3.6/lib-dynload/_multiprocessing.cpython-36m-x86_64-linux-gnu.so
7fc65d617000-7fc65d757000 rw-p 00000000 00:00 0
7fc65d757000-7fc65d765000 r-xp 00000000 08:11 35391282 /opt/miniconda/lib/python3.6/lib-dynload/array.cpython-36m-x86_64-linux-gnu.so
7fc65d765000-7fc65d964000 ---p 0000e000 08:11 35391282 /opt/miniconda/lib/python3.6/lib-dynload/array.cpython-36m-x86_64-linux-gnu.so
7fc65d964000-7fc65d965000 r--p 0000d000 08:11 35391282 /opt/miniconda/lib/python3.6/lib-dynload/array.cpython-36m-x86_64-linux-gnu.so
7fc65d965000-7fc65d968000 rw-p 0000e000 08:11 35391282 /opt/miniconda/lib/python3.6/lib-dynload/array.cpython-36m-x86_64-linux-gnu.so
7fc65d968000-7fc65f968000 rw-p 00000000 00:00 0
7fc65f986000-7fc65f987000 rw-p 00000000 00:00 0
7fc65f987000-7fc65f988000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc65f988000-7fc65f989000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc65f989000-7fc65f98a000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc65f98a000-7fc65f98b000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc65f98b000-7fc65f98c000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc65f98c000-7fc65f98d000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc65f98d000-7fc65f98e000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc65f98e000-7fc65f98f000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc65f98f000-7fc65f990000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc65f990000-7fc65f991000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc65f991000-7fc65f992000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc65f992000-7fc65f993000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc65f993000-7fc65f994000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc65f994000-7fc65f995000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc65f995000-7fc65f996000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc65f996000-7fc65f997000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc65f997000-7fc65f998000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc65f998000-7fc65f999000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc65f999000-7fc65f99a000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc65f99a000-7fc65f99b000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc65f99b000-7fc65f99c000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc65f99c000-7fc65f99d000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc65f99d000-7fc65f99e000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc65f99e000-7fc65f99f000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc65f99f000-7fc65f9a0000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc65f9a0000-7fc65f9a1000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc65f9a1000-7fc65f9a2000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc65f9a2000-7fc65f9a3000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc65f9a3000-7fc65f9a4000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc65f9a4000-7fc65faa4000 rw-p 00000000 00:00 0
7fc65faa4000-7fc65faba000 r-xp 00000000 08:11 35391273 /opt/miniconda/lib/python3.6/lib-dynload/_socket.cpython-36m-x86_64-linux-gnu.so
7fc65faba000-7fc65fcba000 ---p 00016000 08:11 35391273 /opt/miniconda/lib/python3.6/lib-dynload/_socket.cpython-36m-x86_64-linux-gnu.so
7fc65fcba000-7fc65fcbb000 r--p 00016000 08:11 35391273 /opt/miniconda/lib/python3.6/lib-dynload/_socket.cpython-36m-x86_64-linux-gnu.so
7fc65fcbb000-7fc65fcc0000 rw-p 00017000 08:11 35391273 /opt/miniconda/lib/python3.6/lib-dynload/_socket.cpython-36m-x86_64-linux-gnu.so
7fc65fcc0000-7fc65fce5000 r-xp 00000000 08:11 35784994 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libgomp-7c85b1e2.so.1
7fc65fce5000-7fc65fee4000 ---p 00025000 08:11 35784994 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libgomp-7c85b1e2.so.1
7fc65fee4000-7fc65fee5000 r--p 00024000 08:11 35784994 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libgomp-7c85b1e2.so.1
7fc65fee5000-7fc65feea000 rw-p 00025000 08:11 35784994 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libgomp-7c85b1e2.so.1
7fc65feea000-7fc65ff61000 r-xp 00000000 08:11 35784993 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libcudart-1b201d85.so.10.1
7fc65ff61000-7fc660161000 ---p 00077000 08:11 35784993 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libcudart-1b201d85.so.10.1
7fc660161000-7fc660165000 rw-p 00077000 08:11 35784993 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libcudart-1b201d85.so.10.1
7fc660165000-7fc660166000 rw-p 00000000 00:00 0
7fc660166000-7fc660169000 rw-p 0007c000 08:11 35784993 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libcudart-1b201d85.so.10.1
7fc660169000-7fc662169000 rw-p 00000000 00:00 0
7fc662169000-7fc66216a000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc66216a000-7fc66216b000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc66216b000-7fc66216c000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc66216c000-7fc66216d000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc66216d000-7fc66216e000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc66216e000-7fc66216f000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc66216f000-7fc662170000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc662170000-7fc662171000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc662171000-7fc662172000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc662172000-7fc662173000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc662173000-7fc662174000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc662174000-7fc662175000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc662175000-7fc662176000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc662176000-7fc662177000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc662177000-7fc662178000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc662178000-7fc662179000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc662179000-7fc66217a000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc66217a000-7fc66217b000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc66217b000-7fc66217c000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc66217c000-7fc66217d000 rw-s 00000000 00:47 82 /dev/shm/pDViuw (deleted)
7fc66217d000-7fc66217e000 rw-s 00000000 00:47 81 /dev/shm/0qfMtv (deleted)
7fc66217e000-7fc66217f000 rw-s 00000000 00:47 80 /dev/shm/Gllgtu (deleted)
7fc66217f000-7fc662180000 rw-s 00000000 00:47 79 /dev/shm/ypPMst (deleted)
7fc662180000-7fc662181000 rw-s 00000000 00:47 78 /dev/shm/8k7OSs (deleted)
7fc662181000-7fc662182000 rw-s 00000000 00:47 77 /dev/shm/KElSis (deleted)
7fc662182000-7fc662183000 rw-s 00000000 00:47 76 /dev/shm/HKVXIr (deleted)
7fc662183000-7fc662184000 rw-s 00000000 00:47 75 /dev/shm/wi3Tzr (deleted)
7fc662184000-7fc662185000 rw-s 00000000 00:47 74 /dev/shm/qqhRqr (deleted)
7fc662185000-7fc662186000 rw-s 00000000 00:47 73 /dev/shm/YbNQhr (deleted)
7fc662186000-7fc662187000 rw-s 00000000 00:47 72 /dev/shm/2DRGzr (deleted)
7fc662187000-7fc662188000 rw-s 00000000 00:47 71 /dev/shm/u6IxRr (deleted)
7fc662188000-7fc662189000 rw-s 00000000 00:47 70 /dev/shm/IKHq9r (deleted)
7fc662189000-7fc66218a000 rw-s 00000000 00:47 69 /dev/shm/9BycSs (deleted)
7fc66218a000-7fc66218b000 rw-s 00000000 00:47 68 /dev/shm/myuZAt (deleted)
7fc66218b000-7fc66218c000 rw-s 00000000 00:47 67 /dev/shm/DZJOju (deleted)
7fc66218c000-7fc66218d000 rw-s 00000000 00:47 66 /dev/shm/Szw9sv (deleted)
7fc66218d000-7fc66218e000 rw-s 00000000 00:47 65 /dev/shm/sXivCw (deleted)
7fc66218e000-7fc66218f000 rw-s 00000000 00:47 64 /dev/shm/JvhTLx (deleted)
7fc66218f000-7fc662190000 rw-s 00000000 00:47 63 /dev/shm/Lfu6mz (deleted)
7fc662190000-7fc662191000 rw-s 00000000 00:47 62 /dev/shm/SKukYA (deleted)
7fc662191000-7fc662192000 rw-s 00000000 00:47 61 /dev/shm/sUIAzC (deleted)
7fc662192000-7fc662193000 rw-s 00000000 00:47 60 /dev/shm/laVbDE (deleted)
7fc662193000-7fc662194000 rw-s 00000000 00:47 59 /dev/shm/JO3NGG (deleted)
7fc662194000-7fc662195000 rw-s 00000000 00:47 58 /dev/shm/YTJsKI (deleted)
7fc662195000-7fc662196000 rw-s 00000000 00:47 57 /dev/shm/H6iQeL (deleted)
7fc662196000-7fc662197000 rw-s 00000000 00:47 56 /dev/shm/fUIeJN (deleted)
7fc662197000-7fc662198000 rw-s 00000000 00:47 55 /dev/shm/RHlFdQ (deleted)
7fc662198000-7fc662199000 rw-s 00000000 00:47 54 /dev/shm/7UW68S (deleted)
7fc662199000-7fc66219a000 rw-s 00000000 00:47 53 /dev/shm/Bulz4V (deleted)
7fc66219a000-7fc66219b000 rw-s 00000000 00:47 52 /dev/shm/3z63ZY (deleted)
7fc66219b000-7fc66219c000 rw-s 00000000 00:47 51 /dev/shm/lfJ5l2 (deleted)
7fc66219c000-7fc66219d000 rw-s 00000000 00:47 50 /dev/shm/ing8H5 (deleted)
7fc66219d000-7fc66219e000 rw-s 00000000 00:47 49 /dev/shm/Ea8c48 (deleted)
7fc66219e000-7fc66219f000 rw-s 00000000 00:47 48 /dev/shm/cJZQQc (deleted)
7fc66219f000-7fc6621a0000 rw-s 00000000 00:47 47 /dev/shm/qcBvDg (deleted)
7fc6621a0000-7fc6622e0000 rw-p 00000000 00:00 0
7fc6622e0000-7fc6622e8000 r-xp 00000000 08:11 35784995 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libnvToolsExt-3965bdd0.so.1
7fc6622e8000-7fc6624e8000 ---p 00008000 08:11 35784995 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libnvToolsExt-3965bdd0.so.1
7fc6624e8000-7fc6624e9000 rw-p 00008000 08:11 35784995 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libnvToolsExt-3965bdd0.so.1
7fc6624e9000-7fc6624ea000 rw-p 0000a000 08:11 35784995 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libnvToolsExt-3965bdd0.so.1
7fc6624ea000-7fc66252f000 r-xp 00000000 08:11 35784987 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libc10.so
7fc66252f000-7fc66272f000 ---p 00045000 08:11 35784987 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libc10.so
7fc66272f000-7fc662730000 r--p 00045000 08:11 35784987 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libc10.so
7fc662730000-7fc662731000 rw-p 00046000 08:11 35784987 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libc10.so
7fc662731000-7fc662732000 rw-p 00000000 00:00 0
7fc662732000-7fc66273c000 rw-p 0005f000 08:11 35784987 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libc10.so
7fc66273c000-7fc662760000 r-xp 00000000 08:11 35784988 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libc10_cuda.so
7fc662760000-7fc66295f000 ---p 00024000 08:11 35784988 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libc10_cuda.so
7fc66295f000-7fc662960000 r--p 00023000 08:11 35784988 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libc10_cuda.so
7fc662960000-7fc662965000 rw-p 00024000 08:11 35784988 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libc10_cuda.so
7fc662965000-7fc66296a000 rw-p 00030000 08:11 35784988 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libc10_cuda.so
7fc66296a000-7fc66696a000 rw-p 00000000 00:00 0
7fc66696a000-7fc66696b000 rw-s 00000000 00:47 46 /dev/shm/3tmcqk (deleted)
7fc66696b000-7fc66696c000 rw-s 00000000 00:47 45 /dev/shm/uMBlDo (deleted)
7fc66696c000-7fc66696d000 rw-s 00000000 00:47 44 /dev/shm/t3EvQs (deleted)
7fc66696d000-7fc66696e000 rw-s 00000000 00:47 43 /dev/shm/kGVH3w (deleted)
7fc66696e000-7fc66696f000 rw-s 00000000 00:47 42 /dev/shm/S5rDHB (deleted)
7fc66696f000-7fc666970000 rw-s 00000000 00:47 41 /dev/shm/wdNzlG (deleted)
7fc666970000-7fc666971000 rw-s 00000000 00:47 40 /dev/shm/S5kyZK (deleted)
7fc666971000-7fc666972000 rw-s 00000000 00:47 39 /dev/shm/NN4U3P (deleted)
7fc666972000-7fc666973000 rw-s 00000000 00:47 38 /dev/shm/onDi8U (deleted)
7fc666973000-7fc666974000 rw-s 00000000 00:47 37 /dev/shm/LRqkg0 (deleted)
7fc666974000-7fc666975000 rw-s 00000000 00:47 36 /dev/shm/FOZLO5 (deleted)
7fc666975000-7fc666976000 rw-s 00000000 00:47 35 /dev/shm/e8venb (deleted)
7fc666976000-7fc666977000 rw-s 00000000 00:47 34 /dev/shm/GwGJVg (deleted)
7fc666977000-7fc666978000 rw-s 00000000 00:47 33 /dev/shm/jBC6Um (deleted)
7fc666978000-7fc666979000 rw-s 00000000 00:47 32 /dev/shm/e5luUs (deleted)
7fc666979000-7fc66697a000 rw-s 00000000 00:47 31 /dev/shm/9BaUTy (deleted)
7fc66697a000-7fc66697b000 rw-s 00000000 00:47 30 /dev/shm/kKffkF (deleted)
7fc66697b000-7fc66697c000 rw-s 00000000 00:47 29 /dev/shm/DyoBKL (deleted)
7fc66697c000-7fc66697d000 rw-s 00000000 00:47 28 /dev/shm/S3NZaS (deleted)
7fc66697d000-7fc66697e000 rw-s 00000000 00:47 27 /dev/shm/54r01Y (deleted)
7fc66697e000-7fc66697f000 rw-s 00000000 00:47 26 /dev/shm/Vml2S5 (deleted)
7fc66697f000-7fc666980000 rw-s 00000000 00:47 25 /dev/shm/uuh6Jc (deleted)
7fc666980000-7fc666981000 rw-s 00000000 00:47 24 /dev/shm/3UmQ2j (deleted)
7fc666981000-7fc666982000 rw-s 00000000 00:47 23 /dev/shm/cLuBlr (deleted)
7fc666982000-7fc666983000 rw-s 00000000 00:47 22 /dev/shm/hk1oEy (deleted)
7fc666983000-7fc666984000 rw-s 00000000 00:47 21 /dev/shm/cArdoG (deleted)
7fc666984000-7fc666985000 rw-s 00000000 00:47 20 /dev/shm/4iN27N (deleted)
7fc666985000-7fc666986000 rw-s 00000000 00:47 19 /dev/shm/P5fURV (deleted)
7fc666986000-7fc666987000 rw-s 00000000 00:47 18 /dev/shm/nmnh33 (deleted)
7fc666987000-7fc666988000 rw-s 00000000 00:47 17 /dev/shm/wIkFec (deleted)
7fc666988000-7fc666989000 rw-s 00000000 00:47 16 /dev/shm/aZi5pk (deleted)
7fc666989000-7fc66698a000 rw-s 00000000 00:47 15 /dev/shm/8Tl92s (deleted)
7fc66698a000-7fc66698b000 rw-s 00000000 00:47 14 /dev/shm/tRZ8pD (deleted)
7fc66698b000-7fc66698c000 rw-s 00000000 00:47 13 /dev/shm/fxRaNN (deleted)
7fc66698c000-7fc66698d000 rw-s 00000000 00:47 12 /dev/shm/mqypDY (deleted)
7fc66698d000-7fc666a0d000 rw-p 00000000 00:00 0
7fc666a0d000-7fc666a23000 r-xp 00000000 08:11 34604298 /lib/x86_64-linux-gnu/libgcc_s.so.1
7fc666a23000-7fc666c22000 ---p 00016000 08:11 34604298 /lib/x86_64-linux-gnu/libgcc_s.so.1
7fc666c22000-7fc666c23000 rw-p 00015000 08:11 34604298 /lib/x86_64-linux-gnu/libgcc_s.so.1
7fc666c23000-7fc666d95000 r-xp 00000000 08:11 34605180 /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.21
7fc666d95000-7fc666f95000 ---p 00172000 08:11 34605180 /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.21
7fc666f95000-7fc666f9f000 r--p 00172000 08:11 34605180 /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.21
7fc666f9f000-7fc666fa1000 rw-p 0017c000 08:11 34605180 /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.21
7fc666fa1000-7fc666fa5000 rw-p 00000000 00:00 0
7fc666fa5000-7fc667a51000 r-xp 00000000 08:11 35785000 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libtorch_python.so
7fc667a51000-7fc667c51000 ---p 00aac000 08:11 35785000 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libtorch_python.so
7fc667c51000-7fc667c65000 r--p 00aac000 08:11 35785000 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libtorch_python.so
7fc667c65000-7fc667c81000 rw-p 00ac0000 08:11 35785000 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libtorch_python.so
7fc667c81000-7fc667cd5000 rw-p 00000000 00:00 0
7fc667cd5000-7fc667d41000 rw-p 00d66000 08:11 35785000 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libtorch_python.so
7fc667d41000-7fc667d49000 r-xp 00000000 08:11 35784998 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libshm.so
7fc667d49000-7fc667f48000 ---p 00008000 08:11 35784998 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libshm.so
7fc667f48000-7fc667f49000 r--p 00007000 08:11 35784998 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libshm.so
7fc667f49000-7fc667f4a000 rw-p 00008000 08:11 35784998 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libshm.so
7fc667f4a000-7fc667f50000 rw-p 0000c000 08:11 35784998 /opt/miniconda/lib/python3.6/site-packages/torch/lib/libshm.so
7fc667f50000-7fc667f65000 r-xp 00000000 08:11 35783473 /opt/miniconda/lib/python3.6/site-packages/torch/_C.cpython-36m-x86_64-linux-gnu.so
7fc667f65000-7fc668164000 ---p 00015000 08:11 35783473 /opt/miniconda/lib/python3.6/site-packages/torch/_C.cpython-36m-x86_64-linux-gnu.so
7fc668164000-7fc668165000 r--p 00014000 08:11 35783473 /opt/miniconda/lib/python3.6/site-packages/torch/_C.cpython-36m-x86_64-linux-gnu.so
7fc668165000-7fc668166000 rw-p 00015000 08:11 35783473 /opt/miniconda/lib/python3.6/site-packages/torch/_C.cpython-36m-x86_64-linux-gnu.so
7fc668166000-7fc66816d000 rw-p 00026000 08:11 35783473 /opt/miniconda/lib/python3.6/site-packages/torch/_C.cpython-36m-x86_64-linux-gnu.so
7fc66816d000-7fc66825d000 r-xp 00000000 08:11 35651935 /opt/miniconda/lib/python3.6/site-packages/numpy/.libs/libgfortran-ed201abd.so.3.0.0
7fc66825d000-7fc66845c000 ---p 000f0000 08:11 35651935 /opt/miniconda/lib/python3.6/site-packages/numpy/.libs/libgfortran-ed201abd.so.3.0.0
7fc66845c000-7fc66845e000 rw-p 000ef000 08:11 35651935 /opt/miniconda/lib/python3.6/site-packages/numpy/.libs/libgfortran-ed201abd.so.3.0.0
7fc66845e000-7fc66845f000 rw-p 00000000 00:00 0
7fc66845f000-7fc668467000 rw-p 000f2000 08:11 35651935 /opt/miniconda/lib/python3.6/site-packages/numpy/.libs/libgfortran-ed201abd.so.3.0.0
7fc668467000-7fc669f5d000 r-xp 00000000 08:11 35651936 /opt/miniconda/lib/python3.6/site-packages/numpy/.libs/libopenblasp-r0-34a18dc3.3.7.so
7fc669f5d000-7fc66a15c000 ---p 01af6000 08:11 35651936 /opt/miniconda/lib/python3.6/site-packages/numpy/.libs/libopenblasp-r0-34a18dc3.3.7.so
7fc66a15c000-7fc66a175000 rw-p 01af5000 08:11 35651936 /opt/miniconda/lib/python3.6/site-packages/numpy/.libs/libopenblasp-r0-34a18dc3.3.7.so
7fc66a175000-7fc66a180000 rw-p 00000000 00:00 0
7fc66a180000-7fc66a1f8000 rw-p 01be1000 08:11 35651936 /opt/miniconda/lib/python3.6/site-packages/numpy/.libs/libopenblasp-r0-34a18dc3.3.7.so
7fc66a1f8000-7fc66a5a3000 r-xp 00000000 08:11 35652018 /opt/miniconda/lib/python3.6/site-packages/numpy/core/_multiarray_umath.cpython-36m-x86_64-linux-gnu.so
7fc66a5a3000-7fc66a7a2000 ---p 003ab000 08:11 35652018 /opt/miniconda/lib/python3.6/site-packages/numpy/core/_multiarray_umath.cpython-36m-x86_64-linux-gnu.so
7fc66a7a2000-7fc66a7c1000 rw-p 003aa000 08:11 35652018 /opt/miniconda/lib/python3.6/site-packages/numpy/core/_multiarray_umath.cpython-36m-x86_64-linux-gnu.so
7fc66a7c1000-7fc66a7e2000 rw-p 00000000 00:00 0
7fc66a7e2000-7fc66a7e9000 rw-p 0144c000 08:11 35652018 /opt/miniconda/lib/python3.6/site-packages/numpy/core/_multiarray_umath.cpython-36m-x86_64-linux-gnu.so
7fc66a7e9000-7fc66a7ea000 r-xp 00000000 08:11 35391265 /opt/miniconda/lib/python3.6/lib-dynload/_opcode.cpython-36m-x86_64-linux-gnu.so
7fc66a7ea000-7fc66a9ea000 ---p 00001000 08:11 35391265 /opt/miniconda/lib/python3.6/lib-dynload/_opcode.cpython-36m-x86_64-linux-gnu.so
7fc66a9ea000-7fc66a9eb000 r--p 00001000 08:11 35391265 /opt/miniconda/lib/python3.6/lib-dynload/_opcode.cpython-36m-x86_64-linux-gnu.so
7fc66a9eb000-7fc66a9ec000 rw-p 00002000 08:11 35391265 /opt/miniconda/lib/python3.6/lib-dynload/_opcode.cpython-36m-x86_64-linux-gnu.so
7fc66a9ec000-7fc66aa6c000 rw-p 00000000 00:00 0
7fc66aa6c000-7fc66aa73000 r-xp 00000000 08:11 35391296 /opt/miniconda/lib/python3.6/lib-dynload/select.cpython-36m-x86_64-linux-gnu.so
7fc66aa73000-7fc66ac72000 ---p 00007000 08:11 35391296 /opt/miniconda/lib/python3.6/lib-dynload/select.cpython-36m-x86_64-linux-gnu.so
7fc66ac72000-7fc66ac73000 r--p 00006000 08:11 35391296 /opt/miniconda/lib/python3.6/lib-dynload/select.cpython-36m-x86_64-linux-gnu.so
7fc66ac73000-7fc66ac75000 rw-p 00007000 08:11 35391296 /opt/miniconda/lib/python3.6/lib-dynload/select.cpython-36m-x86_64-linux-gnu.so
7fc66ac75000-7fc66ac78000 r-xp 00000000 08:11 35391267 /opt/miniconda/lib/python3.6/lib-dynload/_posixsubprocess.cpython-36m-x86_64-linux-gnu.so
7fc66ac78000-7fc66ae77000 ---p 00003000 08:11 35391267 /opt/miniconda/lib/python3.6/lib-dynload/_posixsubprocess.cpython-36m-x86_64-linux-gnu.so
7fc66ae77000-7fc66ae78000 r--p 00002000 08:11 35391267 /opt/miniconda/lib/python3.6/lib-dynload/_posixsubprocess.cpython-36m-x86_64-linux-gnu.so
7fc66ae78000-7fc66ae79000 rw-p 00003000 08:11 35391267 /opt/miniconda/lib/python3.6/lib-dynload/_posixsubprocess.cpython-36m-x86_64-linux-gnu.so
7fc66ae79000-7fc66aeb9000 rw-p 00000000 00:00 0
7fc66aeb9000-7fc66aeeb000 r-xp 00000000 08:11 35391691 /opt/miniconda/lib/python3.6/site-packages/PIL/.libs/liblzma-6cd627ed.so.5.2.4
7fc66aeeb000-7fc66b0eb000 ---p 00032000 08:11 35391691 /opt/miniconda/lib/python3.6/site-packages/PIL/.libs/liblzma-6cd627ed.so.5.2.4
7fc66b0eb000-7fc66b0ec000 rw-p 00032000 08:11 35391691 /opt/miniconda/lib/python3.6/site-packages/PIL/.libs/liblzma-6cd627ed.so.5.2.4
7fc66b0ec000-7fc66b0ed000 rw-p 00034000 08:11 35391691 /opt/miniconda/lib/python3.6/site-packages/PIL/.libs/liblzma-6cd627ed.so.5.2.4
7fc66b0ed000-7fc66b17c000 r-xp 00000000 08:11 35391694 /opt/miniconda/lib/python3.6/site-packages/PIL/.libs/libtiff-8267adfe.so.5.4.0
7fc66b17c000-7fc66b37c000 ---p 0008f000 08:11 35391694 /opt/miniconda/lib/python3.6/site-packages/PIL/.libs/libtiff-8267adfe.so.5.4.0
7fc66b37c000-7fc66b380000 rw-p 0008f000 08:11 35391694 /opt/miniconda/lib/python3.6/site-packages/PIL/.libs/libtiff-8267adfe.so.5.4.0
7fc66b380000-7fc66b38a000 rw-p 00094000 08:11 35391694 /opt/miniconda/lib/python3.6/site-packages/PIL/.libs/libtiff-8267adfe.so.5.4.0
7fc66b38a000-7fc66b39e000 r-xp 00000000 08:11 35391698 /opt/miniconda/lib/python3.6/site-packages/PIL/.libs/libz-a147dcb0.so.1.2.3
7fc66b39e000-7fc66b59d000 ---p 00014000 08:11 35391698 /opt/miniconda/lib/python3.6/site-packages/PIL/.libs/libz-a147dcb0.so.1.2.3
7fc66b59d000-7fc66b59e000 rw-p 00013000 08:11 35391698 /opt/miniconda/lib/python3.6/site-packages/PIL/.libs/libz-a147dcb0.so.1.2.3
7fc66b59e000-7fc66b59f000 rw-p 00015000 08:11 35391698 /opt/miniconda/lib/python3.6/site-packages/PIL/.libs/libz-a147dcb0.so.1.2.3
7fc66b59f000-7fc66b613000 r-xp 00000000 08:11 35391692 /opt/miniconda/lib/python3.6/site-packages/PIL/.libs/libopenjp2-b3d7668a.so.2.3.1
7fc66b613000-7fc66b812000 ---p 00074000 08:11 35391692 /opt/miniconda/lib/python3.6/site-packages/PIL/.libs/libopenjp2-b3d7668a.so.2.3.1
7fc66b812000-7fc66b814000 rw-p 00073000 08:11 35391692 /opt/miniconda/lib/python3.6/site-packages/PIL/.libs/libopenjp2-b3d7668a.so.2.3.1
7fc66b814000-7fc66b816000 rw-p 00076000 08:11 35391692 /opt/miniconda/lib/python3.6/site-packages/PIL/.libs/libopenjp2-b3d7668a.so.2.3.1
7fc66b816000-7fc66b851000 r-xp 00000000 08:11 35391689 /opt/miniconda/lib/python3.6/site-packages/PIL/.libs/libjpeg-3b10b538.so.9.3.0
7fc66b851000-7fc66ba50000 ---p 0003b000 08:11 35391689 /opt/miniconda/lib/python3.6/site-packages/PIL/.libs/libjpeg-3b10b538.so.9.3.0
7fc66ba50000-7fc66ba51000 rw-p 0003a000 08:11 35391689 /opt/miniconda/lib/python3.6/site-packages/PIL/.libs/libjpeg-3b10b538.so.9.3.0
7fc66ba51000-7fc66ba53000 rw-p 0003c000 08:11 35391689 /opt/miniconda/lib/python3.6/site-packages/PIL/.libs/libjpeg-3b10b538.so.9.3.0
7fc66ba53000-7fc66bad2000 r-xp 00000000 08:11 35391880 /opt/miniconda/lib/python3.6/site-packages/PIL/_imaging.cpython-36m-x86_64-linux-gnu.so
7fc66bad2000-7fc66bcd2000 ---p 0007f000 08:11 35391880 /opt/miniconda/lib/python3.6/site-packages/PIL/_imaging.cpython-36m-x86_64-linux-gnu.so
7fc66bcd2000-7fc66bcd9000 rw-p 0007f000 08:11 35391880 /opt/miniconda/lib/python3.6/site-packages/PIL/_imaging.cpython-36m-x86_64-linux-gnu.so
7fc66bcd9000-7fc66bce8000 rw-p 00087000 08:11 35391880 /opt/miniconda/lib/python3.6/site-packages/PIL/_imaging.cpython-36m-x86_64-linux-gnu.so
7fc66bce8000-7fc66bd68000 rw-p 00000000 00:00 0
7fc66bd68000-7fc66bd72000 r-xp 00000000 08:11 35391288 /opt/miniconda/lib/python3.6/lib-dynload/math.cpython-36m-x86_64-linux-gnu.so
7fc66bd72000-7fc66bf72000 ---p 0000a000 08:11 35391288 /opt/miniconda/lib/python3.6/lib-dynload/math.cpython-36m-x86_64-linux-gnu.so
7fc66bf72000-7fc66bf73000 r--p 0000a000 08:11 35391288 /opt/miniconda/lib/python3.6/lib-dynload/math.cpython-36m-x86_64-linux-gnu.so
7fc66bf73000-7fc66bf75000 rw-p 0000b000 08:11 35391288 /opt/miniconda/lib/python3.6/lib-dynload/math.cpython-36m-x86_64-linux-gnu.so
7fc66bf75000-7fc66bff5000 rw-p 00000000 00:00 0
7fc66bff5000-7fc66bff7000 r-xp 00000000 08:11 35391287 /opt/miniconda/lib/python3.6/lib-dynload/grp.cpython-36m-x86_64-linux-gnu.so
7fc66bff7000-7fc66c1f7000 ---p 00002000 08:11 35391287 /opt/miniconda/lib/python3.6/lib-dynload/grp.cpython-36m-x86_64-linux-gnu.so
7fc66c1f7000-7fc66c1f8000 r--p 00002000 08:11 35391287 /opt/miniconda/lib/python3.6/lib-dynload/grp.cpython-36m-x86_64-linux-gnu.so
7fc66c1f8000-7fc66c1f9000 rw-p 00003000 08:11 35391287 /opt/miniconda/lib/python3.6/lib-dynload/grp.cpython-36m-x86_64-linux-gnu.so
7fc66c1f9000-7fc66c21e000 r-xp 00000000 08:11 35260232 /opt/miniconda/lib/liblzma.so.5.2.4
7fc66c21e000-7fc66c41d000 ---p 00025000 08:11 35260232 /opt/miniconda/lib/liblzma.so.5.2.4
7fc66c41d000-7fc66c41e000 r--p 00024000 08:11 35260232 /opt/miniconda/lib/liblzma.so.5.2.4
7fc66c41e000-7fc66c41f000 rw-p 00025000 08:11 35260232 /opt/miniconda/lib/liblzma.so.5.2.4
7fc66c41f000-7fc66c426000 r-xp 00000000 08:11 35391261 /opt/miniconda/lib/python3.6/lib-dynload/_lzma.cpython-36m-x86_64-linux-gnu.so
7fc66c426000-7fc66c626000 ---p 00007000 08:11 35391261 /opt/miniconda/lib/python3.6/lib-dynload/_lzma.cpython-36m-x86_64-linux-gnu.so
7fc66c626000-7fc66c627000 r--p 00007000 08:11 35391261 /opt/miniconda/lib/python3.6/lib-dynload/_lzma.cpython-36m-x86_64-linux-gnu.so
7fc66c627000-7fc66c629000 rw-p 00008000 08:11 35391261 /opt/miniconda/lib/python3.6/lib-dynload/_lzma.cpython-36m-x86_64-linux-gnu.so
7fc66c629000-7fc66c63d000 r-xp 00000000 08:11 35391241 /opt/miniconda/lib/python3.6/lib-dynload/_bz2.cpython-36m-x86_64-linux-gnu.so
7fc66c63d000-7fc66c83c000 ---p 00014000 08:11 35391241 /opt/miniconda/lib/python3.6/lib-dynload/_bz2.cpython-36m-x86_64-linux-gnu.so
7fc66c83c000-7fc66c83d000 r--p 00013000 08:11 35391241 /opt/miniconda/lib/python3.6/lib-dynload/_bz2.cpython-36m-x86_64-linux-gnu.so
7fc66c83d000-7fc66c83f000 rw-p 00014000 08:11 35391241 /opt/miniconda/lib/python3.6/lib-dynload/_bz2.cpython-36m-x86_64-linux-gnu.so
7fc66c83f000-7fc66c87f000 rw-p 00000000 00:00 0
7fc66c87f000-7fc66c886000 r-xp 00000000 08:11 35391302 /opt/miniconda/lib/python3.6/lib-dynload/zlib.cpython-36m-x86_64-linux-gnu.so
7fc66c886000-7fc66ca85000 ---p 00007000 08:11 35391302 /opt/miniconda/lib/python3.6/lib-dynload/zlib.cpython-36m-x86_64-linux-gnu.so
7fc66ca85000-7fc66ca86000 r--p 00006000 08:11 35391302 /opt/miniconda/lib/python3.6/lib-dynload/zlib.cpython-36m-x86_64-linux-gnu.so
7fc66ca86000-7fc66ca88000 rw-p 00007000 08:11 35391302 /opt/miniconda/lib/python3.6/lib-dynload/zlib.cpython-36m-x86_64-linux-gnu.so
7fc66ca88000-7fc66ca92000 r-xp 00000000 08:11 35391276 /opt/miniconda/lib/python3.6/lib-dynload/_struct.cpython-36m-x86_64-linux-gnu.so
7fc66ca92000-7fc66cc92000 ---p 0000a000 08:11 35391276 /opt/miniconda/lib/python3.6/lib-dynload/_struct.cpython-36m-x86_64-linux-gnu.so
7fc66cc92000-7fc66cc93000 r--p 0000a000 08:11 35391276 /opt/miniconda/lib/python3.6/lib-dynload/_struct.cpython-36m-x86_64-linux-gnu.so
7fc66cc93000-7fc66cc95000 rw-p 0000b000 08:11 35391276 /opt/miniconda/lib/python3.6/lib-dynload/_struct.cpython-36m-x86_64-linux-gnu.so
7fc66cc95000-7fc66cc96000 rw-s 00000000 00:47 11 /dev/shm/jEHEt9 (deleted)
7fc66cc96000-7fc66cc97000 rw-s 00000000 00:47 10 /dev/shm/SmoUjk (deleted)
7fc66cc97000-7fc66cc98000 rw-s 00000000 00:47 9 /dev/shm/hKNaav (deleted)
7fc66cc98000-7fc66cc99000 rw-s 00000000 00:47 8 /dev/shm/M7Ir0F (deleted)
7fc66cc99000-7fc66cc9a000 rw-s 00000000 00:47 7 /dev/shm/pb8IQQ (deleted)
7fc66cc9a000-7fc66cc9b000 rw-s 00000000 00:47 6 /dev/shm/0420G1 (deleted)
7fc66cc9b000-7fc66cc9c000 rw-s 00000000 00:47 5 /dev/shm/P2xjxc (deleted)
7fc66cc9c000-7fc66cc9d000 rw-s 00000000 00:47 4 /dev/shm/GK0Cnn (deleted)
7fc66cc9d000-7fc66cc9e000 rw-s 00000000 00:47 3 /dev/shm/nJ6Wdy (deleted)
7fc66cc9e000-7fc66cdde000 rw-p 00000000 00:00 0
7fc66cdde000-7fc66cde1000 r-xp 00000000 08:11 35391258 /opt/miniconda/lib/python3.6/lib-dynload/_heapq.cpython-36m-x86_64-linux-gnu.so
7fc66cde1000-7fc66cfe0000 ---p 00003000 08:11 35391258 /opt/miniconda/lib/python3.6/lib-dynload/_heapq.cpython-36m-x86_64-linux-gnu.so
7fc66cfe0000-7fc66cfe1000 r--p 00002000 08:11 35391258 /opt/miniconda/lib/python3.6/lib-dynload/_heapq.cpython-36m-x86_64-linux-gnu.so
7fc66cfe1000-7fc66cfe3000 rw-p 00003000 08:11 35391258 /opt/miniconda/lib/python3.6/lib-dynload/_heapq.cpython-36m-x86_64-linux-gnu.so
7fc66cfe3000-7fc66d0a3000 rw-p 00000000 00:00 0
7fc66d0a3000-7fc66d0a4000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc66d0a4000-7fc66d0a5000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc66d0a5000-7fc66d0a6000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc66d0a6000-7fc66d0a7000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc66d0a7000-7fc66d0a8000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc66d0a8000-7fc66d0ab000 r--p 00000000 08:11 35390031 /opt/miniconda/lib/libz.so.1.2.11
7fc66d0ab000-7fc66d0bf000 r-xp 00003000 08:11 35390031 /opt/miniconda/lib/libz.so.1.2.11
7fc66d0bf000-7fc66d0c6000 r--p 00017000 08:11 35390031 /opt/miniconda/lib/libz.so.1.2.11
7fc66d0c6000-7fc66d0c7000 r--p 0001d000 08:11 35390031 /opt/miniconda/lib/libz.so.1.2.11
7fc66d0c7000-7fc66d0c8000 rw-p 0001e000 08:11 35390031 /opt/miniconda/lib/libz.so.1.2.11
7fc66d0c8000-7fc66d1c8000 rw-p 00000000 00:00 0
7fc66d1c8000-7fc66d2d0000 r-xp 00000000 08:11 34604309 /lib/x86_64-linux-gnu/libm-2.23.so
7fc66d2d0000-7fc66d4cf000 ---p 00108000 08:11 34604309 /lib/x86_64-linux-gnu/libm-2.23.so
7fc66d4cf000-7fc66d4d0000 r--p 00107000 08:11 34604309 /lib/x86_64-linux-gnu/libm-2.23.so
7fc66d4d0000-7fc66d4d1000 rw-p 00108000 08:11 34604309 /lib/x86_64-linux-gnu/libm-2.23.so
7fc66d4d1000-7fc66d4d8000 r-xp 00000000 08:11 34604351 /lib/x86_64-linux-gnu/librt-2.23.so
7fc66d4d8000-7fc66d6d7000 ---p 00007000 08:11 34604351 /lib/x86_64-linux-gnu/librt-2.23.so
7fc66d6d7000-7fc66d6d8000 r--p 00006000 08:11 34604351 /lib/x86_64-linux-gnu/librt-2.23.so
7fc66d6d8000-7fc66d6d9000 rw-p 00007000 08:11 34604351 /lib/x86_64-linux-gnu/librt-2.23.so
7fc66d6d9000-7fc66d6db000 r-xp 00000000 08:11 34604371 /lib/x86_64-linux-gnu/libutil-2.23.so
7fc66d6db000-7fc66d8da000 ---p 00002000 08:11 34604371 /lib/x86_64-linux-gnu/libutil-2.23.so
7fc66d8da000-7fc66d8db000 r--p 00001000 08:11 34604371 /lib/x86_64-linux-gnu/libutil-2.23.so
7fc66d8db000-7fc66d8dc000 rw-p 00002000 08:11 34604371 /lib/x86_64-linux-gnu/libutil-2.23.so
7fc66d8dc000-7fc66d8df000 r-xp 00000000 08:11 34604290 /lib/x86_64-linux-gnu/libdl-2.23.so
7fc66d8df000-7fc66dade000 ---p 00003000 08:11 34604290 /lib/x86_64-linux-gnu/libdl-2.23.so
7fc66dade000-7fc66dadf000 r--p 00002000 08:11 34604290 /lib/x86_64-linux-gnu/libdl-2.23.so
7fc66dadf000-7fc66dae0000 rw-p 00003000 08:11 34604290 /lib/x86_64-linux-gnu/libdl-2.23.so
7fc66dae0000-7fc66dca0000 r-xp 00000000 08:11 34604277 /lib/x86_64-linux-gnu/libc-2.23.so
7fc66dca0000-7fc66dea0000 ---p 001c0000 08:11 34604277 /lib/x86_64-linux-gnu/libc-2.23.so
7fc66dea0000-7fc66dea4000 r--p 001c0000 08:11 34604277 /lib/x86_64-linux-gnu/libc-2.23.so
7fc66dea4000-7fc66dea6000 rw-p 001c4000 08:11 34604277 /lib/x86_64-linux-gnu/libc-2.23.so
7fc66dea6000-7fc66deaa000 rw-p 00000000 00:00 0
7fc66deaa000-7fc66dec2000 r-xp 00000000 08:11 34604345 /lib/x86_64-linux-gnu/libpthread-2.23.so
7fc66dec2000-7fc66e0c1000 ---p 00018000 08:11 34604345 /lib/x86_64-linux-gnu/libpthread-2.23.so
7fc66e0c1000-7fc66e0c2000 r--p 00017000 08:11 34604345 /lib/x86_64-linux-gnu/libpthread-2.23.so
7fc66e0c2000-7fc66e0c3000 rw-p 00018000 08:11 34604345 /lib/x86_64-linux-gnu/libpthread-2.23.so
7fc66e0c3000-7fc66e0c7000 rw-p 00000000 00:00 0
7fc66e0c7000-7fc66e0ed000 r-xp 00000000 08:11 34604257 /lib/x86_64-linux-gnu/ld-2.23.so
7fc66e0ed000-7fc66e0ee000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc66e0ee000-7fc66e0ef000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc66e0ef000-7fc66e0f0000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc66e0f0000-7fc66e0f1000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc66e0f1000-7fc66e0f2000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc66e0f2000-7fc66e0f3000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc66e0f3000-7fc66e0f4000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc66e0f4000-7fc66e0f5000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc66e0f5000-7fc66e0f6000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc66e0f6000-7fc66e0f7000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc66e0f7000-7fc66e0f8000 rw-s 00000000 00:06 455 /dev/nvidiactl
7fc66e0f8000-7fc66e0f9000 r--s 00000000 00:06 460 /dev/nvidia3
7fc66e0f9000-7fc66e0fa000 r--s 00000000 00:06 459 /dev/nvidia2
7fc66e0fa000-7fc66e0fb000 r--s 00000000 00:06 458 /dev/nvidia1
7fc66e0fb000-7fc66e0fc000 r--s 00000000 00:06 456 /dev/nvidia0
7fc66e0fc000-7fc66e0fd000 rwxp 00000000 00:00 0
7fc66e0fd000-7fc66e13d000 rw-p 00000000 00:00 0
7fc66e13d000-7fc66e164000 r--p 00000000 08:11 34604822 /usr/lib/locale/C.UTF-8/LC_CTYPE
7fc66e164000-7fc66e165000 r--p 00000000 08:11 34604829 /usr/lib/locale/C.UTF-8/LC_NUMERIC
7fc66e165000-7fc66e166000 r--p 00000000 08:11 34604832 /usr/lib/locale/C.UTF-8/LC_TIME
7fc66e166000-7fc66e2d8000 r--p 00000000 08:11 34604821 /usr/lib/locale/C.UTF-8/LC_COLLATE
7fc66e2d8000-7fc66e2d9000 r--p 00000000 08:11 34604827 /usr/lib/locale/C.UTF-8/LC_MONETARY
7fc66e2d9000-7fc66e2da000 r--p 00000000 08:11 34604826 /usr/lib/locale/C.UTF-8/LC_MESSAGES/SYS_LC_MESSAGES
7fc66e2da000-7fc66e2db000 r--p 00000000 08:11 34604830 /usr/lib/locale/C.UTF-8/LC_PAPER
7fc66e2db000-7fc66e2dc000 r--p 00000000 08:11 34604828 /usr/lib/locale/C.UTF-8/LC_NAME
7fc66e2dc000-7fc66e2dd000 r--p 00000000 08:11 34604820 /usr/lib/locale/C.UTF-8/LC_ADDRESS
7fc66e2dd000-7fc66e2e2000 rw-p 00000000 00:00 0
7fc66e2e2000-7fc66e2e3000 r--p 00000000 08:11 34604831 /usr/lib/locale/C.UTF-8/LC_TELEPHONE
7fc66e2e3000-7fc66e2e4000 r--p 00000000 08:11 34604824 /usr/lib/locale/C.UTF-8/LC_MEASUREMENT
7fc66e2e4000-7fc66e2eb000 r--s 00000000 08:11 34605148 /usr/lib/x86_64-linux-gnu/gconv/gconv-modules.cache
7fc66e2eb000-7fc66e2ec000 r--p 00000000 08:11 34604823 /usr/lib/locale/C.UTF-8/LC_IDENTIFICATION
7fc66e2ec000-7fc66e2ed000 r--p 00025000 08:11 34604257 /lib/x86_64-linux-gnu/ld-2.23.so
7fc66e2ed000-7fc66e2ee000 rw-p 00026000 08:11 34604257 /lib/x86_64-linux-gnu/ld-2.23.so
7fc66e2ee000-7fc66e2ef000 rw-p 00000000 00:00 0
7fff02b9e000-7fff02bc5000 rw-p 00000000 00:00 0 [stack]
7fff02be3000-7fff02be6000 r--p 00000000 00:00 0 [vvar]
7fff02be6000-7fff02be8000 r-xp 00000000 00:00 0 [vdso]
ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall]
Aborted (core dumped)
|
awaiting response (this tag is deprecated),triaged
|
low
|
Critical
|
589,654,708 |
opencv
|
No checks if CV_CN_MAX is exceeded at Mat constructor
|
##### System information (version)
- OpenCV => 3.4.8 (+)
- Operating System / Platform => Windows 64 Bit
- Compiler => Visual Studio 2017
##### Detailed description
When creating a Mat, types with ch > CV_CN_MAX are accepted.
I guess it's related to the CV_MAKETYPE macro.
##### Steps to reproduce
For example, this runs without any errors, but the result is incorrect:
```cpp
cv::Mat test = cv::Mat(200, 200, CV_MAKETYPE(CV_64F, 1200));
std::cout << test.channels(); // 176
```
or
```cpp
cv::Mat test = cv::Mat(200, 200, CV_64FC(1200));
std::cout << test.channels(); // 176
```
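For context, the silent wrap-around can be reproduced with plain integer arithmetic. A minimal sketch in Python, assuming the cvdef.h constants CV_CN_SHIFT = 3 and CV_CN_MAX = 512 (so only 9 bits of the type are read back as `channels - 1`), shows where the 176 comes from:
```python
# Minimal sketch of the type packing, assuming OpenCV's cvdef.h constants.
CV_CN_SHIFT = 3
CV_CN_MAX = 512
CV_MAT_CN_MASK = (CV_CN_MAX - 1) << CV_CN_SHIFT

def cv_maketype(depth, cn):
    # Mirrors CV_MAKETYPE: the channel count is packed with no range check.
    return depth + ((cn - 1) << CV_CN_SHIFT)

def channels(mat_type):
    # Mirrors CV_MAT_CN / Mat::channels(): only the 9 masked bits are read back.
    return ((mat_type & CV_MAT_CN_MASK) >> CV_CN_SHIFT) + 1

CV_64F = 6
print(channels(cv_maketype(CV_64F, 1200)))  # 176, i.e. (1200 - 1) % 512 + 1
```
So any channel count above CV_CN_MAX silently aliases a smaller one instead of being rejected.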
##### Issue submission checklist
- [ ] I updated to latest OpenCV version and the issue is still there
|
feature,priority: low,category: core
|
low
|
Critical
|
589,669,892 |
rust
|
Incorrect/confusing error message with multiple bounds of a single associated-type parametrized trait
|
```rust
#![allow(unreachable_code)]
pub trait Super<P> {
type Assoc;
}
pub trait A<P>: Super<P> {}
pub trait B<P>: Super<P> {
fn join(&mut self, handle: Self::Assoc) -> ();
}
pub trait Trait<T> {
type Assoc;
}
// Removing B<<C as Trait<T>>::Assoc> bound allows this to infer and resolve
fn bug<T, U, C: Trait<T> + Trait<U> + B<<C as Trait<T>>::Assoc> + A<<C as Trait<T>>::Assoc> + B<<C as Trait<U>>::Assoc> + A<<C as Trait<U>>::Assoc>>() {
let ctx: &mut C = panic!();
let a: <C as Super<<C as Trait<U>>::Assoc>>::Assoc = panic!();
// Removing this line allows it to resolve
ctx.join(a);
}
```
I expect that this would compile given that the type of `a` is fully specified and only one implementation of `B` can satisfy the requirement imposed. Aside from that, the error message produced here is on `let ctx`, specifically "consider giving `ctx` the explicit type `&mut C`, where the type parameter `C` is specified". An error is in fact reported on `join`,
```
cannot infer type for type parameter `C`
note: cannot resolve <C as Super<_>>::Assoc == <C as Super<<C as Trait<U>>::Assoc>>::Assoc
```
but in the real-world codebase from which this originated, the error was reported on an earlier use of `ctx` in the function, not the one that caused the issue, and it would be reported on the first use of `C` via a method receiver in the function regardless of the nature of that use.
Fully qualifying the call i.e.
```rust
B::<<C as Trait<U>>::Assoc>::join(ctx, a);
```
also resolves, and the program compiles successfully; my concern here is the exceedingly confusing, misplaced error message, which sometimes has a span in unrelated source.
[playground reproduction](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=a0fb72d5443c71262f275bc54b199f9e)
|
A-diagnostics,T-compiler,C-bug,D-confusing,D-invalid-suggestion
|
low
|
Critical
|
589,672,142 |
flutter
|
FloatingActionButtonThemeData disabled color
|
FloatingActionButtonThemeData does not have a disabled color. If it did, dimming the floating action button when data fields are not filled could be done with one line of code.
```
gintas-mac:lifetrens gintas$ flutter doctor
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel master, v1.16.3-pre.64, on Mac OS X 10.15.3 19D76, locale
en-GB)
[✓] Android toolchain - develop for Android devices (Android SDK version 29.0.3)
[✓] Xcode - develop for iOS and macOS (Xcode 11.4)
[✓] Chrome - develop for the web
[✓] Android Studio (version 3.6)
[!] VS Code (version 1.43.2)
✗ Flutter extension not installed; install from
https://marketplace.visualstudio.com/items?itemName=Dart-Code.flutter
[✓] Connected device (3 available)
! Doctor found issues in 1 category.
```
|
framework,f: material design,c: proposal,P2,team-design,triaged-design
|
low
|
Major
|
589,695,840 |
excalidraw
|
Excalidraw is a bit broken with Oculus hand tracking
|
When using Excalidraw in the Oculus Browser (on Quest w/ hand tracking enabled), pinch-to-click results in a click event *and* the context menu popping up. The desired behavior would be to just register the click event and suppress the context menu (I think).
|
bug
|
low
|
Critical
|
589,700,919 |
excalidraw
|
Synchronize cursor type
|
Right now all the collaboration cursors are only a single cursor type, but when you edit the scene locally you have many cursor types. It would be nice to synchronize the cursor type.
|
collaboration
|
low
|
Minor
|
589,704,862 |
rust
|
rustc_mir::shim misuses `ParamEnv`s.
|
(This example is about `CloneShim`, but `DropGlue` has a similar issue)
In this snippet, `param_env` is obtained from `tcx.param_env(def_id)` and used twice with `self_ty`:
1. for `is_copy_modulo_regions`
* if `self_ty` doesn't involve type parameters, `param_env` doesn't matter, but if it does, they could be from anywhere (e.g. caller of `clone`, via MIR inliner), so the `param_env` being that of `Clone::clone` itself is not useful at all (all it contains is `Self: Clone`)
* to get the right `ParamEnv`, the `mir_shims` query would need to take `ParamEnvAnd<InstanceDef>` instead of just `InstanceDef` (but we might not need this)
* however it's a waste to generate one shim for every `Copy` type, we should instead do what `DropShim` does and have `Option<Ty>` in `CloneShim`, or split out of a `CloneByCopyShim`
2. for evaluating an array length
* the length (a `ty::Const`) is converted to `u64` only to be converted back to `ty::Const`, which is unnecessary and we could skip it entirely
https://github.com/rust-lang/rust/blob/77621317d643cc5d13da60b26ab68b057668e688/src/librustc_mir/shim.rs#L330-L343
|
T-compiler,A-MIR,C-bug
|
low
|
Minor
|
589,705,709 |
pytorch
|
Integration of Large Model Support in PyTorch
|
## ๐ Feature
PyTorch Large Model Support (LMS) is a feature for PyTorch provided by IBM [here (official IBM repo)](https://github.com/IBM/pytorch-large-model-support/) and [here (fork of the main maintainer of LMS)](https://github.com/mtbrandy/pytorch) that allows the successful training of deep learning models that would otherwise exhaust GPU memory and abort with "out-of-memory" errors. LMS manages this oversubscription of GPU memory by temporarily swapping tensors to host memory when they are not needed.
With LMS, deep learning models can scale significantly beyond what was previously possible and, ultimately, generate more accurate results.
## Motivation
* When training recurrent models with back-propagation through time (BPTT) it is often useful to 'truncate' the sequence length as little as possible, especially when dealing with audio inputs or EEG data that have high temporal resolution. This results in a larger memory footprint, and this is where LMS can save the day.
* Also, the amount of compute needed to train state-of-the-art models doubles on average every 3.5 months (see https://openai.com/blog/ai-and-compute/). This comes both from the use of larger batch sizes and the use of larger models (like the now famous GPT-2 with 1.5B parameters). For instance, the Transformer-XL can have a big memory footprint (https://openai.com/blog/sparse-transformer/). Using LMS is very useful when you want to test something out without using gradient checkpointing right away.
* LMS can be extremely beneficial to anyone who cannot afford access to high-end GPUs (within small startups or in academic research). Using cloud services or buying the Titan RTX ($2,499) to run models is often too expensive.
* GPU RAM is most of the time limited to about 8GB and is not extensible. Regular RAM on the other hand can easily be increased up to 128GB or more and is underused during trainings.
* Finally, LMS could be useful when smoke testing runs with small GPUs (either manually or within the context of a CI). This leaves the small (often older) GPUs still busy while the larger ones are used for real runs with or without LMS.
## Pitch (copy/paste from the doc of LMS)
One or more elements of a deep learning model can lead to GPU memory exhaustion.
These include:
* Model depth and complexity
* Base data size (for example, high-resolution images)
* Batch size
Traditionally, the solution to this problem has been to modify the model until it fits in GPU memory. This approach, however, can negatively impact accuracy, especially if concessions are made by reducing data fidelity or model complexity.
## Alternatives
Checkpointing can sometimes help, but that is not always the case...
## Additional context
This feature has been maintained for a while (since at least PyTorch 1.1) by @mtbrandy and has been proposed for contribution to PyTorch since at least August 2019 (I did not find any mention of it on this repo):
https://www.reddit.com/r/pytorch/comments/cgyppk/large_model_support_for_pytorch/
https://discuss.pytorch.org/t/very-large-model-on-single-gpu/28962
It is also mentioned here:
https://www.ibm.com/support/knowledgecenter/SS5SF7_1.5.4/navigation/pai_getstarted_pytorch.html
Official repos:
https://github.com/IBM/pytorch-large-model-support/
https://github.com/mtbrandy/pytorch
-----
I am basically creating this issue because I really like LMS. So far I have waited for LMS support for each version of PyTorch. Each time I had to manually compile PyTorch (and create wheels) to get support for it. (BTW, many thanks to @mtbrandy, who still maintains this fork).
What I am missing is why this feature has not been integrated into PyTorch even though the code is made by IBM (and maintained) :sweat_smile:.
I mean, it needs an "opt-in" from the user, so it is not enabled by default! If the reason is "it can reduce speed performance", I agree with you, but it also allows people to experiment more without needing a super-expensive GPU. I really think that the community, small start-ups, students, etc. would benefit from this even if they surely will not use it most of the time.
|
module: internals,feature,low priority,module: memory usage,triaged
|
medium
|
Critical
|
589,723,602 |
TypeScript
|
[P in keyof T]: T[P] not accepting inferred base type via extends
|
**TypeScript Version:** 3.8.3
**Search Terms:**
* `[P in keyof T]?: T[P]`
* `typescript generic extends in keyof`
* `typescript generic extends T[P]`
* `typescript mongo collection generic`
**Code**
```ts
interface IModel {
id: string;
}
type Query<T> = {
[P in keyof T]?: T[P]
}
export class Base<T extends IModel> {
public find(query: Query<T>): void {
// Forward the query to database
console.log(query);
}
public findOneById(id: string): void {
this.find({ id });
}
}
```
**Expected behavior:**
It's expected to allow executing `find` with the properties defined on `IModel`, as the base of any other model.
**Actual behavior:**
It shows an error:
```
Argument of type '{ id: string; }' is not assignable to parameter of type 'Query<T>'.ts(2345)
```
**Playground Link:**
https://typescript-play.js.org/#code/JYOwLgpgTgZghgYwgAgJIFkD2ATCAbZAbwCgBIYbALmQGcwpQBzAbmIF9jiwBPABxQCKAV2jcAPABUAfMgC8RMgG0ACslDIA1hG6YYyCQF0A-NQkqD7ThAAevTFDDIEeODRrIAQq4iTkNyCDY7hg4+DIkpLxCAEZ4wAjIMKDYABQAjiJQ3NTCopJSAJTUAG6YFAqkpAD0VcgAYvYA7nBQ2MhgABYoGaLtmMjYcGBw0d5kpAiYIDSYeBAAdHiYjOmZ3AWspBxkUbHxickA8iAQHtyoqRTUdAwgjEXIpeURpJ3ANPNJgSmEam1sGzIHA4QA
**Related Issues:**
Didn't find any related issue
|
Suggestion,Help Wanted,Effort: Difficult,Experience Enhancement
|
medium
|
Critical
|
589,726,709 |
pytorch
|
Caffe2 generate_proposals_op_gpu_test crashes on Windows
|
See https://app.circleci.com/pipelines/github/pytorch/pytorch/148057/workflows/df15651f-11ba-4459-8687-cd11a2a13b15/jobs/4987650
```
Running "C:\Users\circleci\project\build\win_tmp\build\torch\test\generate_proposals_op_gpu_test.exe"
Running main() from ..\third_party\googletest\googletest\src\gtest_main.cc
[==========] Running 1 test from 1 test case.
[----------] Global test environment set-up.
[----------] 1 test from GenerateProposalsTest
[ RUN ] GenerateProposalsTest.TestRealDownSampledGPU
+ cleanup
```
|
module: tests,triaged
|
low
|
Critical
|
589,747,100 |
pytorch
|
How do you change Adam learning rate since the latest commits ?
|
## ๐ Bug
Since this commit : https://github.com/pytorch/pytorch/commit/76035f050b215d0606fe786901dcd07b5c9544fe#diff-b8c53e7a2010d3dae3200c9911950551
It's no longer possible to directly access the `options` variable of Adam in libtorch, which makes it impossible to change the learning rate on the fly.
How can I change the learning rate?
## To Reproduce
Steps to reproduce the behavior:
1. Install the latest nightly of libtorch
2. Create an Adam optimizer
3. Try to access its `.options` variable to change the learning rate: `options` does not exist anymore
cc @yf225
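For reference, in the Python API the same "change the learning rate on the fly" operation goes through the optimizer's parameter groups; a minimal sketch of that (Python only, not the libtorch API this issue is about):
```python
import torch

# Minimal sketch (Python API, not libtorch): every optimizer exposes
# param_groups, a list of dicts whose "lr" entry can be changed on the fly.
model = torch.nn.Linear(4, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def set_lr(optimizer, new_lr):
    """Set the learning rate of every parameter group."""
    for group in optimizer.param_groups:
        group["lr"] = new_lr

set_lr(optimizer, 1e-4)
print(optimizer.param_groups[0]["lr"])  # 0.0001
```
The question is what the equivalent hook is in the refactored C++ optimizer API.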
|
module: cpp,triaged
|
low
|
Critical
|
589,747,431 |
pytorch
|
Dimension reducing variants of bitwise operations (bitwise_or, bitwise_and, bitwise_xor)
|
## ๐ Feature
A function for reducing a tensor along a specified dim with bitwise operations.
## Motivation
In my project I need to reduce my tensors along some dimensions with bitwise operations.
## Pitch
In PyTorch I can reduce my tensor along a dim in multiple ways (like `t.min(dim=0)`, `t.sum(dim=0)`, `t.any(dim=0)`, `t.all(dim=0)`).
Unfortunately it's not yet possible to reduce a dimension with a bitwise operation like bitwise_or, bitwise_xor, bitwise_and.
Possible method headers could look like this:
```
def bitwise_or(dim=None, keepdim=False)
def bitwise_and(dim=None, keepdim=False)
def bitwise_xor(dim=None, keepdim=False)
```
Currently in BoolTensor there are two special methods `any(dim)` and `all(dim)` that implement logical or/and reduction. `bitwise_or`/`bitwise_and` could be a generalization of those two to other tensor types. (Similarly to the `&` operator, which is a bitwise operation for non-Bool tensors and a logical one for BoolTensor.)
Possibly loosely connected to https://github.com/pytorch/pytorch/pull/26824 - however it seems like it's only a pytorch distributed reduction method, not a tensor API one.
## Alternatives
I implemented binary reducing operations in Python using builtin PyTorch functions with a loop along dimension dim, as sketched below. I imagine that implementing those directly in C++/CUDA could yield a performance boost.
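A rough illustration of that loop-based workaround (a sketch using a hypothetical `bitwise_reduce` helper built on the elementwise ops, not the proposed tensor-method API):
```python
import functools
import torch

def bitwise_reduce(t, op, dim=0):
    # Hypothetical helper: fold the slices along `dim` with an elementwise
    # bitwise op such as torch.bitwise_or / bitwise_and / bitwise_xor.
    return functools.reduce(op, t.unbind(dim))

x = torch.tensor([[0b001, 0b010],
                  [0b100, 0b010],
                  [0b010, 0b001]], dtype=torch.int64)
print(bitwise_reduce(x, torch.bitwise_or, dim=0))   # tensor([7, 3])
print(bitwise_reduce(x, torch.bitwise_and, dim=0))  # tensor([0, 0])
print(bitwise_reduce(x, torch.bitwise_xor, dim=0))  # tensor([7, 1])
```
A native `dim=`/`keepdim=` reduction as proposed above would avoid the Python loop entirely.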
|
triaged,function request,module: reductions
|
low
|
Major
|
589,750,625 |
flutter
|
Build error: Could not resolve all artifacts for configuration ':<package>:classpath', Could not find builder.jar (com.android.tools.build:builder:3.2.1)
|
## Steps to Reproduce
I'm just trying to use some very useful packages like path_provider and flutter_clipboard_manager, but every time I run it, it throws the same error, just with the package names replaced in the error message.
**Expected results:**
I expect it to just run normally and to use them like any other packages.
When I remove them from my project everything goes fine and my project comes up in the emulator.
**Actual results:**
This error is really annoying me and I don't know what to do.
This is the error message:
```
Launching lib\main.dart on Android SDK built for x86 in debug mode...
[!] Your app isn't using AndroidX.
To avoid potential build failures, you can quickly migrate your app by following the steps on https://goo.gl/CP92wY.
Running Gradle task 'assembleDebug'...
FAILURE: Build failed with an exception.
* What went wrong:
A problem occurred configuring project ':flutter_clipboard_manager'.
> Could not resolve all artifacts for configuration ':flutter_clipboard_manager:classpath'.
> Could not find builder.jar (com.android.tools.build:builder:3.2.1).
Searched in the following locations:
https://dl.google.com/dl/android/maven2/com/android/tools/build/builder/3.2.1/builder-3.2.1.jar
> Could not get unknown property 'android' for project ':flutter_clipboard_manager' of type org.gradle.api.Project.
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.
* Get more help at https://help.gradle.org
BUILD FAILED in 29s
Finished with error: Gradle task assembleDebug failed with exit code 1
```
flutter analyze output:
```
Analyzing quotesapp...
No issues found! (ran in 1.7s)
```
This is my flutter doctor output:
```
[✓] Flutter (Channel stable, v1.12.13+hotfix.8, on Microsoft Windows [Version 10.0.17763.1098], locale en-US)
• Flutter version 1.12.13+hotfix.8 at C:\src\flutter
• Framework revision 0b8abb4724 (7 weeks ago), 2020-02-11 11:44:36 -0800
• Engine revision e1e6ced81d
• Dart version 2.7.0
[✓] Android toolchain - develop for Android devices (Android SDK version 28.0.3)
• Android SDK at E:\AndroidSDK
• Android NDK location not configured (optional; useful for native profiling support)
• Platform android-28, build-tools 28.0.3
• ANDROID_HOME = E:\AndroidSDK
• Java binary at: C:\Program Files\Android\Android Studio\jre\bin\java
• Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b03)
• All Android licenses accepted.
[✓] Android Studio (version 3.5)
• Android Studio at C:\Program Files\Android\Android Studio
• Flutter plugin version 44.0.1
• Dart plugin version 191.8593
• Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b03)
[✓] VS Code (version 1.43.2)
• VS Code at C:\Users\Shayan\AppData\Local\Programs\Microsoft VS Code
• Flutter extension version 3.8.1
[✓] Connected device (1 available)
• Android SDK built for x86 • emulator-5554 • android-x86 • Android 8.1.0 (API 27) (emulator)
• No issues found!
```
|
c: crash,platform-android,tool,t: gradle,P2,team-android,triaged-android
|
low
|
Critical
|
589,760,858 |
godot
|
yield "node_added" executed multiple times
|
**Godot version:** 3.2.1 stable
**OS/device including version:** Ubuntu
**Issue description:**
Adding nodes to the tree after `yield`ing for `node_added` to be emitted will execute the yield again. This leads to a stack overflow.
As a workaround you can just use `call_deferred("add_child", ...)` instead of the second `add_child(...)`.
**Steps to reproduce:**
The following script illustrates the problem, just add it to a button's script:
```gdscript
extends Button
func _pressed() -> void:
call_deferred("add_child", Node.new())
yield(get_tree(), "node_added")
add_child(Node.new())
```
**Minimal reproduction project:**
[yield.zip](https://github.com/godotengine/godot/files/4398618/yield.zip)
|
bug,topic:core
|
low
|
Minor
|
589,768,081 |
rust
|
Tracking Issue for {BTreeMap,BTreeSet}::extract_if
|
This is a tracking issue for the implementation of an `extract_if` method on `BTreeMap` and `BTreeSet`, similar to the one in `LinkedList` and in `Vec` (#43244).
The feature gate for the issue is `#![feature(btree_extract_if)]` (previously `btree_drain_filter`)
### About tracking issues
Tracking issues are used to record the overall progress of implementation.
They are also used as hubs connecting to other relevant issues, e.g., bugs or open design questions.
A tracking issue is however *not* meant for large scale discussion, questions, or bug reports about a feature.
Instead, open a dedicated issue for the specific matter and add the relevant feature gate label.
### Steps
- [x] Implement suboptimally, relying on an `Ord` bound: efficient for simple and common cases, but falling back on a coarse restart after complicated removals.
- [x] ~~Possibly adjust the underlying tree representation (using `Cell`s).~~
- [x] Implement all cases without relying on an `Ord` bound, tracking every adjustment
- [ ] Adjust documentation ([see instructions on rustc-dev-guide][doc-guide])
- [ ] Stabilization PR ([see instructions on rustc-dev-guide][stabilization-guide])
[stabilization-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#stabilization-pr
[doc-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#documentation-prs
### Unresolved Questions
- Do we want the `Ord` bound on `drain_filter` (that is currently not required)?
- Do we need to implement some post-panic guarantees beyond memory leaks, like the undocumented guarantees that `Vec::drain_filter` and `LinkedList::drain_filter` provide, as I tried to discuss in #67290?
- The unstable book seems abandoned - [the other drain_filter is listed](https://doc.rust-lang.org/unstable-book/library-features/drain-filter.html) but generated from somewhere.
### Implementation history
- Remove the `Ord` bound on `DrainFilter` (not on `drain_filter`) (#70843)
- Keep track of position when deleting from a `BTreeMap` (#70795)
- Initial suboptimal implementation (#68770)
- Related attempt to implement `drain` and `retain` (#66747)
- #104455
|
A-collections,T-libs-api,B-unstable,C-tracking-issue,Libs-Tracked
|
medium
|
Critical
|
589,781,344 |
godot
|
When "use bezier curves" active animation not working properly
|
**Godot version:** 3.2
**OS/device including version:** Manjaro arc linux
**Issue description:** I am working on a skeleton animation. When I insert a key with "use bezier curves" activated, some bones' rotation changes to 0. When I deselect "use bezier curves", it works normally.
I made a screen recording to show the issue:
[](https://streamable.com/38jdy)
**Steps to reproduce:**
**Minimal reproduction project:**
|
bug,topic:core
|
low
|
Minor
|
589,793,498 |
PowerToys
|
[FancyZones] Feature Request: add hotkey to increment/decrement number of zones in template layouts.
|
# Summary of the new feature/enhancement
When using one of the template layouts the user needs to open the settings window each time they want to increase or decrease the number of zones. It would be more convenient if fancy zones provided a keybinding to do this without opening the settings window.
# Proposed technical implementation details
Add logic in FancyZones.cpp to respond to a keypress and increment/decrement the zone-count for the active layout.
I have created a local branch that implements this feature using win+ctrl+oem_plus/oem_minus. Further work would be needed to have this setting be optional or to make this binding customisable.
|
Idea-Enhancement,Help Wanted,Product-FancyZones
|
low
|
Minor
|
589,814,554 |
neovim
|
spell checker integration
|
As mentioned in https://github.com/neovim/neovim/issues/356, it would be great to have external spellchecking.
I noticed there was some work on hunspell https://github.com/vim/vim/pull/2500 , but nothing was finished.
Is there any chance to integrate enchant or nuspell https://nuspell.github.io/ (or at least the older hunspell) as an option in Neovim?
Please keep this open at least as a placeholder to track progress. @mcepl
|
enhancement,status:blocked-external,has:vim-patch,spell
|
high
|
Critical
|
589,815,908 |
pytorch
|
Caching Support for class Dataset
|
## ๐ Feature
<!-- A clear and concise description of the feature proposal -->
For datasets that fit into memory but whose samples are loaded individually, a cache could decrease the time needed to fetch the samples. In each call to the dataset, the cache should be checked for the existence of the object to be loaded and, if possible, the cached sample returned.
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->
Loading data from the file system or even worse from network storage can take a considerable amount of time. Since training speed is limited by the time it takes to load the requested data, speed-ups on the data fetching side can lead to improved training time and higher utilization of the computational resources.
## Pitch
<!-- A clear and concise description of what you want to happen. -->
A simple function like the following could handle the caching:
```
def get_item(self, index):
if index in self.cache.keys():
return self.cache[index]
sample = self.load_item(index)
self.cache[index] = sample
return sample
```
However, for this sample to work the function `load_item` needs to be implemented by the user just like using `Dataset` directly.
## Additional context
<!-- Add any other context or screenshots about the feature request here. -->
I started defining the required class; however, I found that my implementation requires too much work from the user when using it as a base class.
I would appreciate any suggestions to reduce the complexity of this proposal:
```
class CachingDataset(Dataset):
r"""A caching :class:`Dataset`.
All subclasses should overwrite :meth:`load_item`, supporting fetching a
data sample from file.
Subclasses could also optionally overwrite
:meth:`__len__`, which is expected to return the size of the dataset by many
:class:`~torch.utils.data.Sampler` implementations and the default options
of :class:`~torch.utils.data.DataLoader`.
For datasets that fit into the memory this allows faster data loading from the second batch on
by caching the loaded data into the `cache`.
.. note::
Performance gains will be visible from the second epoch on.
Example 1: Loading data from cache if available using :meth:`get_item`::
>>> class MyCachingDataset(torch.utils.data.CachingDataset):
... def __init__(self):
... super().__init__()
...
... def load_item(self, index):
... sample = read_from_file(self.data[index])
... return sample
"""
def __init__(self):
self.cache = dict()
def __getitem__(self, index):
return self.get_item(index)
def get_item(self, index):
if index in self.cache.keys():
return self.cache[index]
sample = self.load_item(index)
self.cache[index] = sample
return sample
def load_item(self, index):
raise NotImplementedError
```
cc @SsnL
|
module: dataloader,triaged,enhancement
|
low
|
Critical
|
589,821,879 |
flutter
|
Animation listener of Hero flightShuttleBuilder not triggered
|
I have a hero transition going on from one page to another. In the `flightShuttleBuilder`, I register a listener for the animation in order to trigger a `setState` after the animation completed. This works fine so far.
However, when first triggering the hero to move to the details page, then popping the route and while the hero animation is on-going, tapping the same source hero again to go back to the details page, then the animation listener does not receive the `completed` status.
Interestingly, when replacing the `InkWell` widget with a simple `GestureDetector`, the source widget can not be tapped while the transition is on-going, which would also be fine for me, since it at least prevents the bug from happening.
The whole way of abusing the `flightShuttleBuilder` just to know when the transition is finished feels a bit weird to me anyway; is there a cleaner way to achieve this?
This is how it looks like when not interrupting the transition, when the animation has been completed, the `setState` changes the color of the body to yellow:

And this how it looks like when the bug is happening, the animation finishes, but the color is not changed, because the listener is never triggered:

Code:
```dart
import 'package:flutter/material.dart';
void main() {
runApp(MyApp());
}
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
initialRoute: '/',
routes: {
'/': (context) => HomePage(),
'/details': (context) => DetailsPage(),
},
);
}
}
class HomePage extends StatefulWidget {
@override
_HomePageState createState() => _HomePageState();
}
class _HomePageState extends State<HomePage> {
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(title: Text('title')),
body: ListView.builder(
padding: const EdgeInsets.all(40),
itemCount: 2,
itemBuilder: (context, index) {
return InkWell(
onTap: () async {
await Navigator.of(context).pushNamed(
'/details',
arguments: index,
);
},
child: Row(
children: <Widget>[
Padding(
padding: const EdgeInsets.all(10),
child: Hero(
tag: index,
child: Container(color: Colors.red, width: 80, height: 80),
),
),
],
),
);
},
),
);
}
}
class DetailsPage extends StatefulWidget {
@override
_DetailsPageState createState() => _DetailsPageState();
}
class _DetailsPageState extends State<DetailsPage> {
Color bodyColor = Colors.red;
@override
Widget build(BuildContext context) {
final int index = ModalRoute.of(context).settings.arguments;
return Scaffold(
appBar: AppBar(title: Text('Details')),
body: Column(
children: [
Hero(
tag: index,
flightShuttleBuilder: (_, animation, __, ___, ____) {
animation.addStatusListener((status) {
if (status == AnimationStatus.completed) {
WidgetsBinding.instance.scheduleFrameCallback((_) {
setState(() {
bodyColor = Colors.yellow;
});
});
}
});
return Container(
color: Colors.purple,
width: 200,
height: 200,
);
},
child: Container(
color: Colors.purple,
width: 500,
height: 300,
),
),
Container(
color: bodyColor,
width: 200,
height: 200,
),
],
),
);
}
}
```
Flutter doctor:
```
[โ] Flutter (Channel stable, v1.12.13+hotfix.8, on Linux, locale en_US)
โข Flutter version 1.12.13+hotfix.8 at /opt/flutter
โข Framework revision 0b8abb4724 (7 weeks ago), 2020-02-11 11:44:36 -0800
โข Engine revision e1e6ced81d
โข Dart version 2.7.0
[โ] Android toolchain - develop for Android devices (Android SDK version 29.0.2)
โข Android SDK at /home/developer/android-sdk
โข Android NDK location not configured (optional; useful for native profiling support)
โข Platform android-29, build-tools 29.0.2
โข ANDROID_HOME = /home/developer/android-sdk
โข ANDROID_SDK_ROOT = /home/developer/android-sdk
โข Java binary at: /opt/android-studio/jre/bin/java
โข Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b49-5587405)
โข All Android licenses accepted.
[โ] Android Studio (version 3.5)
โข Android Studio at /opt/android-studio
โข Flutter plugin version 42.1.1
โข Dart plugin version 191.8593
โข Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b49-5587405)
[โ] Connected device (1 available)
โข Android SDK built for x86 โข emulator-5554 โข android-x86 โข Android 10 (API 29) (emulator)
โข No issues found!
```
|
framework,a: animation,f: routes,has reproducible steps,P2,found in release: 3.7,found in release: 3.10,team-framework,triaged-framework
|
low
|
Critical
|
589,870,682 |
flutter
|
Bad PlatformView performance on Android
|
I'm trying to integrate native OpenGL rendering by exposing GLSurfaceView as PlatformView.
On my Nexus 6P (Android 8.1) the performance is quite bad, rendering is not smooth.
Here is a complete example:
https://github.com/t-artikov/flutter_platform_view_test
<details>
<summary>flutter doctor -v</summary>
```
[โ] Flutter (Channel master, v1.16.4-pre.18, on Linux, locale en_US.UTF-8)
โข Flutter version 1.16.4-pre.18 at /home/timur/flutter
โข Framework revision c8efcb632b (2 days ago), 2020-03-27 22:31:01 -0700
โข Engine revision 3ee9e3d378
โข Dart version 2.8.0 (build 2.8.0-dev.18.0 1402e8e1a4)
[โ] Android toolchain - develop for Android devices (Android SDK version 29.0.2)
โข Android SDK at /home/timur/Android/Sdk
โข Android NDK location not configured (optional; useful for native profiling support)
โข Platform android-29, build-tools 29.0.2
โข ANDROID_HOME = /home/timur/Android/Sdk
โข Java binary at: /snap/android-studio/84/android-studio/jre/bin/java
โข Java version OpenJDK Runtime Environment (build 1.8.0_212-release-1586-b4-5784211)
โข All Android licenses accepted.
[โ] Linux toolchain - develop for Linux desktop
โข clang++ 9.0.0
โข GNU Make 4.1
[โ] Android Studio (version 3.6)
โข Android Studio at /snap/android-studio/84/android-studio
โข Flutter plugin version 44.0.2
โข Dart plugin version 192.7761
โข Java version OpenJDK Runtime Environment (build 1.8.0_212-release-1586-b4-5784211)
[โ] VS Code (version 1.43.0)
โข VS Code at /usr/share/code
โข Flutter extension version 3.8.1
[โ] Connected device (2 available)
โข Nexus 6P โข ENU7N15B06000345 โข android-arm64 โข Android 8.1.0 (API 27)
โข Linux โข Linux โข linux-x64 โข Linux
โข No issues found!
```
</details>
[Observatory timeline](https://github.com/flutter/flutter/files/4432099/platform_view-timeline.zip)
|
platform-android,engine,c: performance,a: platform-views,c: rendering,perf: speed,P2,team-android,triaged-android
|
low
|
Major
|
589,879,635 |
excalidraw
|
Discoverability/UX/user pain points
|
I've come across these feature/UI not being obvious enough for new (or even experienced) users:
- [ ] **sharing by link:** currently it's hidden under an `export` menu which isn't at all obvious it should contain that feature. (case 1: did a usability test with a colleague; case 2: https://github.com/excalidraw/excalidraw/pull/1094)
- [ ] **multi-line arrows/lines:** people don't tend to read the hint even if it's in their face. Or maybe they skim it and not understand it properly.
- [x] **multiline text** https://github.com/excalidraw/excalidraw/issues/939
- [ ] **double-click to enter text (and potentially center to element)** https://github.com/excalidraw/excalidraw/issues/938
|
discussion
|
low
|
Major
|
589,888,422 |
pytorch
|
Caffe2 ReshapeOpGPUTest crashes on Windows
|
See https://app.circleci.com/pipelines/github/pytorch/pytorch/148411/workflows/eee3cc8e-867b-412c-9c3c-9c46a1da56b6/jobs/4994040
```
Running "C:\Users\circleci\project\build\win_tmp\build\torch\test\reshape_op_gpu_test.exe"
Running main() from ..\third_party\googletest\googletest\src\gtest_main.cc
[==========] Running 1 test from 1 test case.
[----------] Global test environment set-up.
[----------] 1 test from ReshapeOpGPUTest
[ RUN ] ReshapeOpGPUTest.testReshapeWithScalar
+ cleanup
```
|
module: tests,triaged
|
low
|
Critical
|
589,894,657 |
rust
|
Misleading E0583 when attempting to compile a file that includes itself as a module
|
If we have any file `aaa.rs` and `example.rs` in the same directory, where `example.rs` contains:
```rust
mod aaa;
mod example;
```
Compiling `example.rs` with `rustc example.rs` gives an error about being unable to find `aaa`, rather than an error about `example.rs` importing itself as a module:
```
$ rustc example.rs
error[E0583]: file not found for module `aaa`
--> example.rs:1:5
|
1 | mod aaa;
| ^^^
|
= help: name the file either example/aaa.rs or example/aaa/mod.rs inside the directory ""
error: aborting due to previous error
For more information about this error, try `rustc --explain E0583`.
```
<details>
<summary>The error message with <code>rustc +nightly</code>.</summary>
```
error[E0583]: file not found for module `aaa`
--> example.rs:1:1
|
1 | mod aaa;
| ^^^^^^^^
|
= help: to create the module `aaa`, create file "example/aaa.rs"
error[E0583]: file not found for module `example`
--> example.rs:2:1
|
2 | mod example;
| ^^^^^^^^^^^^
|
= help: to create the module `example`, create file "example/example.rs"
error[E0601]: `main` function not found in crate `example`
--> example.rs:1:1
|
1 | / mod aaa;
2 | | mod example;
| |____________^ consider adding a `main` function to `example.rs`
error: aborting due to 3 previous errors
Some errors have detailed explanations: E0583, E0601.
For more information about an error, try `rustc --explain E0583`.
```
</details>
**I expected to see this happen:** `rustc` give an error about `mod example;` in `example.rs`.
**Instead, this happened:** `rustc` gave an error about not being able to find the *first* import in the file *relative* to `example.rs` *as a module.*
The error messages are a little bit better on nightly, but still misleading.
<details>
<summary>Shell script to reproduce the issue.</summary>
```bash
#!/usr/bin/env bash
pushd "$(mktemp --directory)" > /dev/null || exit
touch aaa.rs
echo "mod aaa;" > example.rs
echo "mod example;" >> example.rs
rustc example.rs
popd > /dev/null || exit
```
</details>
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.42.0 (b8cedc004 2020-03-09)
binary: rustc
commit-hash: b8cedc00407a4c56a3bda1ed605c6fc166655447
commit-date: 2020-03-09
host: x86_64-unknown-linux-gnu
release: 1.42.0
LLVM version: 9.0
```
`rustc +nightly --version --verbose`:
```
rustc 1.44.0-nightly (1057dc97a 2020-03-20)
binary: rustc
commit-hash: 1057dc97afce39ff6a224966ece3ed438af4c1f5
commit-date: 2020-03-20
host: x86_64-unknown-linux-gnu
release: 1.44.0-nightly
LLVM version: 9.0
```
The error messages are the same with `RUST_BACKTRACE=1` on the stable and nightly compilers.
|
C-enhancement,A-diagnostics,T-compiler,D-papercut,D-newcomer-roadblock
|
low
|
Critical
|
589,908,473 |
godot
|
Using print_line from the destructor of a tool class in a GDNative library crashes Godot on exit
|
Godot 3.2.1
Windows 10 64 bits
I made an editor plugin which instantiates a C++ NativeScript (GDNative) to speed up some things. This class inherits `Reference` and gets freed when Godot quits.
As I often do to debug some of my classes, I used `Godot::print` in the destructor of that class. It gets destroyed when my editor plugin gets destroyed, because it is stored in one of its member variables.
However, this causes Godot to crash on exit.
The call stack to this is unbelievably long. It starts from the scene tree destroying its root, which goes to *predelete* the `EditorNode` (the one in GDCLASS), gets to destroy my plugin (I guess?), which destroys the class, which prints, goes to `EditorNode::_print_handler`, and leads to the `RichTextLabel` of the editor... which was freed already.
It looks like the print handler is not being told that the console output has been freed and still references a dangling pointer?
Indeed, `print_handler` is only removed in `~EditorNode`, but `~EditorNode` has not been called yet.
```cpp
void EditorNode::_print_handler(void *p_this, const String &p_string, bool p_error) {
EditorNode *en = (EditorNode *)p_this;
en->log->add_message(p_string, p_error ? EditorLog::MSG_TYPE_ERROR : EditorLog::MSG_TYPE_STD);
}
```
This print handler should be setup and deinitialized by `EditorLog`, not `EditorNode`.

This is happening in a large plugin and making a simple one means going through a lot of steps (and only for Windows) so I don't have a simple test project at the moment.
However, reproduction steps with this project are simple:
http://zylannprods.fr/dl/godot/gdnative_hterrain_repro_crash_on_exit.zip
On Windows, just launch the editor on this project, with debugger attached, and close it.
This project contains a precompiled debug dll compiled with msvc.
|
bug,topic:gdextension,crash
|
low
|
Critical
|
589,912,519 |
godot
|
Item of PopupMenu not selectable on first click if no mouse motion.
|
**Godot version:**
3.2.1
**OS/device including version:**
Windows 10 Pro
**Issue description:**
The PopupMenu of a MenuButton with action_mode set to button release is not selectable on the first click if there is no mouse motion or if you consume the InputEventMouseMotion. On Windows you would probably not notice, since you always hover over the menu item before clicking on it. However, it is a problem on mobile, since I need to touch the menu twice to select it. If I make a "big" touch it works, because it detects a little drag motion, but it does not work on a quick touch. The second quick touch works every time.
**Steps to reproduce:**
1) Attach this script to a MenuButton and run it.
```
extends MenuButton
func _ready():
get_popup().add_item("item 1", 1)
get_popup().connect("gui_input", self, "_on_PopupMenu_gui_input" )
connect("pressed", self, "_on_Button_pressed")
action_mode = BaseButton.ACTION_MODE_BUTTON_RELEASE
flat = false
func _on_Button_pressed():
get_popup().popup()
func _on_PopupMenu_gui_input(event):
if event is InputEventMouseMotion:
get_popup().accept_event()
if event is InputEventMouseButton:
if event.pressed:
print("mouse button pressed on popupmenu")
else:
print("mouse button released on popupmenu")
```
2) Click on the button, click on the menu item, click again.
|
bug,topic:input,topic:gui
|
low
|
Minor
|
589,914,185 |
excalidraw
|
Support colors for ranges in text
|
I often want to change a color of a single word in text. That's not currently possible. Not looking for any advanced features or more UI but this would be really handy.
|
enhancement,help wanted,Candidate P1
|
high
|
Critical
|
589,932,299 |
pytorch
|
Caffe2 utility_ops_gpu_test fails on Windows
|
From https://app.circleci.com/pipelines/github/pytorch/pytorch/148427/workflows/3844733c-9ac5-4003-b085-d73172e4272b/jobs/4994613
```
Running "C:\Users\circleci\project\build\win_tmp\build\torch\test\utility_ops_gpu_test.exe"
Running main() from ..\third_party\googletest\googletest\src\gtest_main.cc
[==========] Running 1 test from 1 test case.
[----------] Global test environment set-up.
[----------] 1 test from UtilityOpGPUTest
[ RUN ] UtilityOpGPUTest.testReshapeWithScalar
"utility_ops_gpu_test" failed with 3
+ cleanup
```
|
module: tests,triaged
|
low
|
Critical
|
589,960,708 |
youtube-dl
|
[youtube] -f to merge video with two audio tracks
|
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2020.03.24. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Search the bugtracker for similar feature requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Finally, put x into all relevant boxes (like this [x])
-->
- [x] I'm reporting a feature request
- [x] I've verified that I'm running youtube-dl version **2020.03.24**
- [x] I've searched the bugtracker for similar feature requests including closed ones
## Description
<!--
Provide an explanation of your issue in an arbitrary form. Please make sure the description is worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. Provide any additional information, suggested solution and as much context and examples as possible.
-->
There are surround sound audio tracks popping up on YouTube now, and I'd like to mux both a stereo and a multi-channel track into the file using something like `--format 313+258+251`, so that the resulting file can be played on both stereo and surround setups without sounding bad.
But that gives the error:
```
[youtube] fbyIYXEu-nQ: Downloading webpage
ERROR: The first format must contain the video, try using "-f 251+258"
ERROR: 'NoneType' object is not subscriptable
```
Example video is the Vsauce Mindfield series: [https://www.youtube.com/watch?v=fbyIYXEu-nQ](https://www.youtube.com/watch?v=fbyIYXEu-nQ)
The example formats would be:
```
251 webm audio only tiny 147k , opus @160k (48000Hz), 21.18MiB
258 m4a audio only tiny 388k , m4a_dash container, mp4a.40.2 (48000Hz), 63.82MiB
313 webm 3840x2160 2160p 18419k , vp9, 24fps, video only, 2.39GiB
```
And I expect ffmpeg to mux them into a Matroska file. I know I could download the individual tracks and then mux them together manually, but I like how youtube-dl is able to embed thumbnails, metadata, subtitles and descriptions in one go.
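For reference, the manual route mentioned above would look roughly like this (a sketch; the output file names are placeholders):
```
youtube-dl -f 313 -o video.webm "https://www.youtube.com/watch?v=fbyIYXEu-nQ"
youtube-dl -f 258 -o audio_surround.m4a "https://www.youtube.com/watch?v=fbyIYXEu-nQ"
youtube-dl -f 251 -o audio_stereo.webm "https://www.youtube.com/watch?v=fbyIYXEu-nQ"
ffmpeg -i video.webm -i audio_surround.m4a -i audio_stereo.webm \
       -map 0:v -map 1:a -map 2:a -c copy merged.mkv
```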
|
request
|
low
|
Critical
|
589,966,380 |
flutter
|
Text widget should allow for other line-breaking algorithms
|
Currently the Text widget uses a **greedy** algorithm to apply line-breaks. Words are laid out until they don't fit, and then it continues in the next line.
I suggest there should be an option to choose other, better, line-breaking algorithms:
```dart
enum LineBreakAlgorithm {
  greedy,
  minimumRaggedness,
  totalFit,
  custom
}

Text({ ...
  List<int> Function(ParagraphInfo) customLineBreakAlgorithm,
  LineBreakAlgorithm lineBreakAlgorithm = LineBreakAlgorithm.greedy})
```
The `LineBreakAlgorithm.minimumRaggedness` could implement one of the algorithms shown here:
- https://xxyxyz.org/line-breaking.
The `LineBreakAlgorithm.totalFit` could implement the **total-fit** line breaking algorithm, by Donald Knuth and Michael Plass, which is considered the best there is, and is used in TeX and Adobe InDesign:
- http://www.eprg.org/G53DOC/pdfs/knuth-plass-breaking.pdf
- https://github.com/robertknight/tex-linebreak
- https://github.com/baskerville/paragraph-breaker
- https://cs.uwaterloo.ca/~dberry/ATEP/StudentLectures/Ananya.pdf
- https://news.ycombinator.com/item?id=1134342
- https://www.tug.org/TUGboat/tb21-3/tb68fine.pdf
- https://www.students.cs.ubc.ca/~cs-490/2015W2/lectures/Knuth.pdf
- https://fastapi.metacpan.org/source/MWARD/Text-Reflow-1.17/Reflow.pm
- https://github.com/jaroslov/knuth-plass-thoughts/blob/master/plass.cpp (C++)
- https://gist.github.com/allen-chin/fb660a1fc77b00d1bdce25adb567dc0e (Java)
- https://github.com/manifoldco/ansiwrap/blob/master/ansiwrap.go (Go)
The `LineBreakAlgorithm.custom` would accept a custom algorithm that would return a list of positions (indexes in the text string) where the line-breaks should be applied. This would allow us to do cool stuff like make text into circular or triangular shapes.
|
framework,engine,a: typography,customer: crowd,c: proposal,P3,team-engine,triaged-engine
|
medium
|
Critical
|
589,982,023 |
pytorch
|
Wrong conv2d output on GPU when kernel has many zeros
|
## ๐ Bug
When kernel has many zeros (e.g. in a masked conv), conv2d output is wrong on GPU.
## To Reproduce
Here's a test function, in which I convolve x
0 0 0 0 0
0 0 0 0 0
0 0 1 1 1
0 0 1 1 1
0 0 1 1 1
with k
1 1 1 1 1
1 1 1 1 1
1 1 0 0 0
1 1 0 0 0
1 1 0 0 0
On CPU, the output is 0 (expected). On GPU, the output varies depending on channel count of x.
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
```
import torch
def test(c):
x = torch.ones(1, c, 5, 5)
x[:,:,:2]=0
x[:,:,:,:2]=0
k = 1-x
print(torch.nn.functional.conv2d(x.cuda(), k.cuda()))
print(torch.nn.functional.conv2d(x.cpu(), k.cpu()))
test(32)
tensor([[[[-1.9055e-05]]]], device='cuda:0')
tensor([[[[0.]]]])
test(320)
tensor([[[[-0.1693]]]], device='cuda:0')
tensor([[[[0.]]]])
test(640)
tensor([[[[5.0290]]]], device='cuda:0')
tensor([[[[0.]]]])
```
## Environment
PyTorch version: 1.4.0
Is debug build: No
CUDA used to build PyTorch: 10.0
OS: Ubuntu 18.04.1 LTS
GCC version: (Ubuntu 7.3.0-27ubuntu1~18.04) 7.3.0
CMake version: Could not collect
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB
GPU 4: Tesla V100-SXM2-32GB
GPU 5: Tesla V100-SXM2-32GB
GPU 6: Tesla V100-SXM2-32GB
GPU 7: Tesla V100-SXM2-32GB
Nvidia driver version: 410.79
cuDNN version: Could not collect
Versions of relevant libraries:
[pip] numpy==1.18.1
[pip] torch==1.4.0
[pip] torchvision==0.5.0
[conda] blas 1.0 mkl
[conda] mkl 2020.0 166
[conda] mkl-service 2.3.0 py36he904b0f_0
[conda] mkl_fft 1.0.15 py36ha843d7b_0
[conda] mkl_random 1.1.0 py36hd6b4f25_0
[conda] pytorch 1.4.0 py3.6_cuda10.0.130_cudnn7.6.3_0 pytorch
[conda] torchvision 0.5.0 py36_cu100 pytorch
cc @ezyang @gchanan @zou3519 @ngimel
|
module: dependency bug,module: numerical-stability,module: cuda,module: convolution,triaged
|
low
|
Critical
|
590,031,850 |
flutter
|
Flutter Embedders should be able to use Direct3D on Windows.
|
Based on guidance from the Skia team in a recent sync, the Direct3D 12 backend may be ready to be experimented with. It was noted (though I did not find references) that the minimum version supported was 12 with an Angle based fallback recommended for versions less than 12. The backend may currently be enabled by setting the `skia_use_direct3d` GN flag.
|
c: new feature,engine,dependency: skia,platform-windows,e: embedder,P3,team-engine,triaged-engine
|
low
|
Minor
|
590,066,528 |
node
|
Consider adding some scripting, etc. to make sure that the key used to sign a release is listed in README.md
|
To avoid an issue like the one mentioned in #32559.
P.S. #32559 is about a specific key. This one is about scripting, etc. to make sure this is avoided going forward.
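A rough sketch of the kind of check this could be (assuming the release tooling knows the signing key ID; `RELEASE_KEY_ID` is a placeholder):
```bash
#!/usr/bin/env bash
set -euo pipefail
# Fail the release if the signing key's fingerprint is not listed in README.md.
fpr="$(gpg --with-colons --fingerprint "$RELEASE_KEY_ID" | awk -F: '/^fpr:/ { print $10; exit }')"
grep -q "$fpr" README.md || {
  echo "Signing key $fpr is not listed in README.md" >&2
  exit 1
}
```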
|
security,release
|
low
|
Minor
|
590,169,222 |
pytorch
|
nn.LSTM gives nondeterministic results with dropout and multiple layers, OR cuDNN version mismatch
|
## ๐ Bug
I get nondeterministic results when I run a model containing an nn.LSTM with dropout > 0 on the GPU, even when all seeds are set and torch.backends.cudnn.deterministic = True, torch.backends.cudnn.benchmark = False. Note that this issue is a near duplicate of #18110.
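For reference, the determinism settings referred to above amount to something like the following sketch (the actual model, seeds, and flags are in the linked gist):
```
import random
import numpy as np
import torch

def set_deterministic(seed=0):
    # Seed every RNG involved and disable the non-deterministic cuDNN paths.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```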
## To Reproduce
I have a working example at https://gist.github.com/nimz/7da5db4031c523e61659c4afd443844d that runs a single forward and backward pass of a simple model. It can be run with no arguments. If it is run multiple times, one can observe that the forward outputs are always the same, but some of the parameter gradients differ from run to run. This seems to only happen to the lstm.weight_ih_lX parameters.
## Expected behavior
I would expect the runs to be exactly the same when run back-to-back on the same machine, but they are not. (This is true whether or not I use CUDA_VISIBLE_DEVICES=0, if that is helpful.)
## Environment
PyTorch version: 1.4.0
Is debug build: No
CUDA used to build PyTorch: 10.1
OS: Ubuntu 18.04.3 LTS
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
CMake version: Could not collect
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: Tesla V100-PCIE-16GB
GPU 1: Tesla V100-PCIE-16GB
Nvidia driver version: 430.26
cuDNN version: Could not collect [torch.backends.cudnn.version() outputs 7603]
Versions of relevant libraries:
[pip] numpy==1.18.2
[pip] torch==1.4.0
[pip] torchvision==0.5.0
[conda] torch 1.4.0 pypi_0 pypi
[conda] torchvision 0.5.0 pypi_0 pypi
## Additional context
The issue https://github.com/pytorch/pytorch/issues/18110 suggests that nondeterminism should be fixed in cuDNN 7.6.1, but I have cuDNN 7.6.3 according to the output of torch.backends.cudnn.version(), and yet this issue still arises. I did run the user-posted example in #18110, and that script does seem to give me deterministic results. It is also possible that torch.backends.cudnn.version() is incorrect. I believe I may actually be using cuDNN version 7.6.2, since my cudnn.h file contains
```
#define CUDNN_MAJOR 7
#define CUDNN_MINOR 6
#define CUDNN_PATCHLEVEL 2
```
However, I do not know whether this is the version PyTorch is actually using. (Nevertheless, version 7.6.2 would still not explain the nondeterminism.)
cc @ngimel @csarofeen @ptrblck
|
module: cudnn,module: cuda,triaged,module: determinism
|
low
|
Critical
|
590,183,415 |
kubernetes
|
curl -sSL https://xxxxxxx/kubernetes.repo -o /etc/yum.repos.d/
|
<!-- Please only use this template for submitting enhancement requests -->
**What would you like to be added**:
support:
```
curl -sSL https://xxxxxxx/kubernetes.repo -o /etc/yum.repos.d/
```
**Why is this needed**:
the official repo file content is too long, and I don't want to copy and paste it every time; it is ugly to write it into my shell script and takes up too many lines:
```
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
```
|
kind/feature,priority/important-longterm,sig/release,lifecycle/frozen
|
low
|
Major
|
590,187,430 |
flutter
|
Browser title does not update when navigating back from page
|
## Steps to Reproduce
1. Create example app as seen here and run it for the web: http://dartpad.dartlang.org/7894c397c7671d7d6a358dce11167bbe
2. Notice that the title on the browser is set correctly for the home page to 'Home Page'
3. Navigate to the test page and see that the title changes to 'Test Page'
4. Go back to the home page and the title does not update to 'Home Page' again; it remains 'Test Page'
<details>
<summary>Logs</summary>
```
[โ] Flutter (Channel master, v1.16.3-pre.56, on Mac OS X 10.15.4 19E266, locale en-DE)
โข Flutter version 1.16.3-pre.56 at /Users/perc/development/flutter
โข Framework revision 8857c4cec8 (4 days ago), 2020-03-25 21:21:01 -0400
โข Engine revision b235233e9d
โข Dart version 2.8.0 (build 2.8.0-dev.17.0 2323087237)
[โ] Android toolchain - develop for Android devices (Android SDK version 29.0.2)
โข Android SDK at /Users/perc/Library/Android/sdk
โข Android NDK location not configured (optional; useful for native profiling support)
โข Platform android-stable, build-tools 29.0.2
โข Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
โข Java version OpenJDK Runtime Environment (build 1.8.0_212-release-1586-b4-5784211)
โข All Android licenses accepted.
[โ] Xcode - develop for iOS and macOS (Xcode 11.3.1)
โข Xcode at /Applications/Xcode.app/Contents/Developer
โข Xcode 11.3.1, Build version 11C505
โข CocoaPods version 1.8.4
[โ] Chrome - develop for the web
โข Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[!] Android Studio
โข Android Studio at /Applications/Android Studio with Blaze.app/Contents
โ Flutter plugin not installed; this adds Flutter specific functionality.
โ Dart plugin not installed; this adds Dart specific functionality.
โข Java version OpenJDK Runtime Environment (build 1.8.0_242-release-1644-b3-6222593)
[โ] Android Studio (version 3.6)
โข Android Studio at /Applications/Android Studio.app/Contents
โข Flutter plugin version 44.0.2
โข Dart plugin version 192.7761
โข Java version OpenJDK Runtime Environment (build 1.8.0_212-release-1586-b4-5784211)
[!] VS Code (version 1.43.0)
โข VS Code at /Applications/Visual Studio Code.app/Contents
โ Flutter extension not installed; install from
https://marketplace.visualstudio.com/items?itemName=Dart-Code.flutter
[โ] Connected device (4 available)
โข iPhone 11 Pro Max โข 81158E5C-C27D-49D1-9FD9-55EE1011FEBF โข ios โข com.apple.CoreSimulator.SimRuntime.iOS-13-3 (simulator)
โข macOS โข macOS โข darwin-x64 โข Mac OS X 10.15.4 19E266
โข Chrome โข chrome โข web-javascript โข Google Chrome 80.0.3987.149
โข Web Server โข web-server โข web-javascript โข Flutter Tools
```
</details>
|
framework,f: routes,platform-web,has reproducible steps,customer: web10,P2,found in release: 3.7,found in release: 3.9,team-web,triaged-web
|
low
|
Critical
|
590,188,180 |
pytorch
|
[discussion] Generic solutions for too-small-epsilon in FP16 training
|
Motivating PR: https://github.com/pytorch/pytorch/pull/33596
@albanD @ezyang Should we have a separate docs page (at least) about fp16 quirks, given that new built-in autocast support has been merged? Another related issue: https://github.com/pytorch/pytorch/pull/35594
It would also be nice to discuss some global hooks that can replace inf/nan in outputs/gradients with zero, especially when a third-party library is used under the hood that doesn't allow swapping epsilons easily (or some epsilon factory, but that's more complicated).
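As a rough illustration of the kind of hook meant here (a sketch, not an existing or proposed API), a per-tensor gradient hook could zero out non-finite values:
```
import torch

def zero_nonfinite(grad):
    # Replace inf/nan entries in the incoming gradient with zeros.
    return torch.where(torch.isfinite(grad), grad, torch.zeros_like(grad))

x = torch.randn(4, requires_grad=True)
x.register_hook(zero_nonfinite)
loss = (x * 1e38).pow(2).sum()  # overflows to inf in fp32
loss.backward()
print(x.grad)                   # non-finite gradients have been zeroed
```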
Two related issues: https://github.com/pytorch/pytorch/issues/31829 https://github.com/pytorch/pytorch/issues/31557
cc @ezyang @SsnL @albanD @zou3519 @gqchen
|
module: docs,module: autograd,triaged,enhancement,module: half
|
low
|
Major
|
590,193,182 |
react
|
Bug: SuspenseList revealOrder="backwards" is not consistent without tail props
|
<!--
Please provide a clear and concise description of what the bug is. Include
screenshots if needed. Please test using the latest version of the relevant
React packages to make sure your issue has not already been fixed.
-->
React version: 0.0.0-experimental-aae83a4b9
## Steps To Reproduce
1. `<SuspenseList revealOrder="backwards">` is expected to reveal the last component first if it loads early, but instead it waits for the components above it. If the `tail` prop is set, it works fine.
2. If `<SuspenseList revealOrder="backwards" tail="collapsed">` is given, everything works as expected.
<!--
Your bug will get fixed much faster if we can run your code and it doesn't
have dependencies other than React. Issues without reproduction steps or
code examples may be immediately closed as not actionable.
-->
```
<SuspenseList revealOrder="backwards">
<ProfileDetails resource={resource} />
<ErrorBoundary fallback={null}>
<Suspense fallback={<h2>Loading posts...</h2>}>
<ProfileTimeline resource={resource} />
</Suspense>
</ErrorBoundary>
<Suspense fallback={<h2>Loading fun facts...</h2>}>
<ProfileTrivia resource={resource} />
</Suspense>
</SuspenseList>
```
It works fine when the `tail` prop is added:
```
<SuspenseList revealOrder="backwards" tail="hidden">
<ProfileDetails resource={resource} />
<ErrorBoundary fallback={null}>
<Suspense fallback={<h2>Loading posts...</h2>}>
<ProfileTimeline resource={resource} />
</Suspense>
</ErrorBoundary>
<Suspense fallback={<h2>Loading fun facts...</h2>}>
<ProfileTrivia resource={resource} />
</Suspense>
</SuspenseList>
```
Link to code example: [https://codesandbox.io/s/bug-suspenselist-revealordertogether-and-error-boundaries-18429-1oky8](https://codesandbox.io/s/bug-suspenselist-revealordertogether-and-error-boundaries-18429-1oky8)
<!--
Please provide a CodeSandbox (https://codesandbox.io/s/new), a link to a
repository on GitHub, or provide a minimal code example that reproduces the
problem. You may provide a screenshot of the application if you think it is
relevant to your bug report. Here are some tips for providing a minimal
example: https://stackoverflow.com/help/mcve.
-->
## The current behavior
`<SuspenseList revealOrder="backwards">` waits for the top elements to load unless the `tail` prop is set.
## The expected behavior
The list is expected to reveal in backwards order, i.e. the last component is shown as soon as it loads, regardless of whether the `tail` prop is set.
|
Type: Needs Investigation,Component: Suspense
|
low
|
Critical
|
590,212,966 |
godot
|
KinematicBody2D pushes RigidBody2D through the ground when standing on them
|
**Godot version:**
Im using Godot 3.2
**OS/device including version:**
Windows 10 Home, using Ryzen 5 3600x 16 gb Ram GTX 1050.
**Issue description:**
The RigidBody2D works fine, like in this example -> https://imgur.com/a/vLhaJiy
when there are only a few of them interacting. If I throw in a bunch more boxes, like here
-> https://imgur.com/a/ReZ5Oxb
that happens. The boxes are not solid enough even for the player to stand on. Is there a fix? Because they feel like pieces of paper instead of solid boxes.
**Steps to reproduce:**
Add 20+ RigidBody2Ds colliding with each other, or make a KinematicBody2D try to stand on them.
Mass isn't important; it doesn't change anything.
**Minimal reproduction project:**
https://up2sha.re/file?f=IJ489K
https://mega.nz/#!RqxQWKLL!vNu4gZPu725KbHL5eBPPc09mLzMGLzOk1ZpoAN9ylXU
If you have time, I recommend looking at the project; otherwise it will be difficult to understand.
|
bug,confirmed,topic:physics
|
low
|
Major
|
590,250,994 |
rust
|
Functional record update: private fields should not throw errors if not explicitly used in literals
|
I tried this code:
```rust
mod module {
#[derive(Default)]
pub struct S {
pub public_field: u32,
_private_field: u32,
}
}
use module::S;
pub fn f() -> S {
S {
public_field: 42,
..S::default()
}
}
```
I expected to see this happen: code compiles successfully.
Instead, this happened: rustc throws an error
```
error[E0451]: field `_private_field` of struct `module::S` is private
--> <source>:14:11
|
14 | ..S::default()
| ^^^^^^^^^^^^ field `_private_field` is private
```
As a user, I find this confusing, because I'm not trying to access or set the private field explicitly anywhere in the literal.
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
Applies to all known rustc versions.
I understand why this happens in principle, if the literal is desugared literally (no pun intended) to:
```rust
let s_rest = S::default();
let s = S {
public_field: 42,
_private_field: s_rest._private_field,
};
```
However, it seems that this code could be equally expanded to:
```rust
let mut s = S::default();
s.public_field = 42;
let s = s;
```
This way an immutable literal could be used with many more kinds of structures without users manually resorting to the temporarily-mutable variable as in the 2nd example.
Am I missing some other difference between the two that would break existing use cases if the implementation were changed?
|
T-lang,C-feature-request
|
medium
|
Critical
|
590,288,045 |
TypeScript
|
Asynchronous type guards and assertion signatures
|
## Search Terms
async, asynchronous, promise, type guards, assertion signatures, asserts
## Suggestion
TypeScript currently supports user-defined type guards and assertion signatures.
```typescript
function isCustomType(value: unknown): value is CustomType {
// ...
}
function assertIsCustomType(value: unknown): asserts value is CustomType {
// ...
}
```
I think it would be great if we could also define custom **asynchronous** type guards and assertions.
```typescript
async function isCustomType(value: unknown): Promise<value is CustomType> {
// ...
}
async function assertIsCustomType(value: unknown): Promise<asserts value is CustomType> {
// ...
}
```
## Use Cases
This feature would allow checking types and validating data asynchronously. Please look at the examples below.
## Examples
Imagine the code like this:
```typescript
interface User {
name: string;
}
type Valid<T> = T & {
readonly $valid: unique symbol;
}
async function createUser(user: User): Promise<void> {
validateUser(user);
await saveValidUserToDatabase(user);
}
function validateUser(user: User): asserts user is Valid<User> {
if (user.name.length < 5) {
throw new Error('User name must be at least 5 characters long');
}
}
async function saveValidUserToDatabase(user: Valid<User>): Promise<void> {
// ...
}
```
But sometimes, validation is done asynchronously, e.g. on server-side. Currently, you can achieve it this way:
```typescript
async function createUser(user: User): Promise<void> {
await validateUser(user);
await saveValidUserToDatabase(user as Valid<User>);
}
async function validateUser(user: User): Promise<void> {
// ...
}
```
But if TypeScript supported asynchronous assertions, you could skip `as Valid<User>` type assertion and let the TS do the job:
```typescript
async function createUser(user: User): Promise<void> {
await validateUser(user);
await saveValidUserToDatabase(user);
}
async function validateUser(user: User): Promise<asserts user is Valid<User>> {
// ...
}
```
Exactly the same issue could be presented for user-defined type guards.
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
|
Suggestion,In Discussion
|
high
|
Critical
|
590,295,646 |
pytorch
|
allgather_coalesced for tensors of different types seems to be broken
|
## ๐ Bug
allgather_coalesced only ensures that the types of the i-th input and i-th output tensors match, but the implementation seems to assume that all tensors are of the same type. We should either make the checking stricter or the implementation more general.
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @xush6528 @osalpekar
|
oncall: distributed,triaged
|
low
|
Critical
|
590,329,157 |
youtube-dl
|
VTV.vn support request
|
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2020.03.24. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that site you are requesting is not dedicated to copyright infringement, see https://yt-dl.org/copyright-infringement. youtube-dl does not support such sites. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
- Search the bugtracker for similar site support requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Finally, put x into all relevant boxes (like this [x])
-->
- [x] I'm reporting a new site support request
- [x] I've verified that I'm running youtube-dl version **2020.03.24**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that none of provided URLs violate any copyrights
- [x] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
<!--
Provide all kinds of example URLs support for which should be included. Replace following example URLs by yours.
-->
- Single video: https://vtv.vn/video/thoi-su-17h-vtv1-30-3-2020-430204.htm
- Live-stream and playlist: https://vtv.vn/truyen-hinh-truc-tuyen/vtv1.htm
## Description
<!--
Provide any additional information.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->
VTV is the national TV broadcaster of Vietnam.
Videos and live-streams for vtv.vn are found here:
https://vtv.vn/truyen-hinh-truc-tuyen/vtv1.htm
https://vtv.vn/truyen-hinh-truc-tuyen/vtv2.htm
https://vtv.vn/truyen-hinh-truc-tuyen/vtv3.htm
https://vtv.vn/truyen-hinh-truc-tuyen/vtv4.htm
https://vtv.vn/truyen-hinh-truc-tuyen/vtv5.htm
https://vtv.vn/truyen-hinh-truc-tuyen/vtv6.htm
https://vtv.vn/truyen-hinh-truc-tuyen/vtv7.htm
https://vtv.vn/truyen-hinh-truc-tuyen/vtv8.htm
https://vtv.vn/truyen-hinh-truc-tuyen/vtv9.htm
Each page represents a TV channel (VTV1, VTV2...), has a live-stream player (default when the page is opened) and also a tool to navigate already broadcast videos.
|
site-support-request
|
low
|
Critical
|
590,350,244 |
godot
|
AnimationTree is too easy to break
|
**Godot version:**
3.2.1
**OS/device including version:**
OSX 10.14.6
**Issue description:**
Swapping out a .tscn containing an AnimationPlayer with one using the same animations renders the AnimationTree unable to work in-game, even though it still works in the editor.
**Steps to reproduce:**
Create a scene
Import or link a scene with an AnimationPlayer
Create and setup an AnimationTree with an AnimationNodeBlendTree
Delete or swap the scene with the linked AnimationPlayer (you can even reuse the same scene as before)
Reactivate the AnimationTree
You will notice that everything works just fine in the editor. If you launch the game, it will start with the default animation (so the tree is valid and working), but setting a different parameter will not work. The parameters are correctly set and return the correct values, but the animation just doesn't play.
**Minimal reproduction project:**
|
bug,topic:editor
|
low
|
Minor
|
590,361,532 |
flutter
|
iOS intermitent black screen when opening another UIViewController from platform channel
|
When redirecting from Flutter to a native view controller, a black screen appears for a few seconds. This happens only on iOS.
Flutter snippet:
```dart
try {
await platform.invokeMethod('openNativeChat');
} on PlatformException catch (e) {
print(e.message);
}
```
AppDelegate.swift:
```swift
import UIKit
import Flutter
@UIApplicationMain
@objc class AppDelegate: FlutterAppDelegate {
override func application(
_ application: UIApplication,
didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?
) -> Bool {
//GeneratedPluginRegistrant.register(with: self)
guard let controller = window?.rootViewController as? FlutterViewController else {
return super.application(application, didFinishLaunchingWithOptions: launchOptions)
}
let methodChannel = FlutterMethodChannel(name: "test",
binaryMessenger: controller.binaryMessenger)
methodChannel.setMethodCallHandler({
(call: FlutterMethodCall, result: @escaping FlutterResult) -> Void in
controller.view.backgroundColor = UIColor.yellow
switch call.method {
case "openNativeChat":
self.window?.rootViewController = nil
let storyboard = UIStoryboard(name: "Main", bundle: nil)
let initialViewController:InitialViewController = storyboard.instantiateViewController(withIdentifier: "InitialViewController") as! InitialViewController
let navigationController = UINavigationController(rootViewController: controller)
// navigationController.isNavigationBarHidden = false
self.window = UIWindow(frame: UIScreen.main.bounds)
self.window?.makeKeyAndVisible()
self.window?.rootViewController = navigationController
navigationController.pushViewController(initialViewController, animated: true)
break
default:
result(FlutterMethodNotImplemented)
break
}
})
GeneratedPluginRegistrant.register(with: self)
return super.application(application, didFinishLaunchingWithOptions: launchOptions)
}
}
```
|
platform-ios,engine,a: existing-apps,has reproducible steps,P2,found in release: 3.7,found in release: 3.10,team-ios,triaged-ios
|
low
|
Major
|
590,409,034 |
pytorch
|
USE_AVX/USE_AVX2 does not affect __AVX2__ macro definition
|
I tried to `USE_AVX=0 USE_AVX2=0 python setup.py develop` but `__AVX2__` was defined and AVX2 specific code was built.
```
CMake Warning:
Manually-specified variables were not used by the project:
USE_AVX2
```
|
triaged,module: build warnings,internals
|
low
|
Minor
|
590,428,822 |
TypeScript
|
Cannot find usage references for most kinds of merged declarations
|
<!-- ๐จ STOP ๐จ ๐ฆ๐ง๐ข๐ฃ ๐จ ๐บ๐ป๐ถ๐ท ๐จ
Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Even if you think you've found a *bug*, please read the FAQ first, especially the Common "Bugs" That Aren't Bugs section!
Please help us by doing the following steps before logging an issue:
* Search: https://github.com/Microsoft/TypeScript/search?type=Issues
* Read the FAQ: https://github.com/Microsoft/TypeScript/wiki/FAQ
Please fill in the *entire* template below.
-->
<!--
Please try to reproduce the issue with the latest published version. It may have already been fixed.
For npm: `typescript@next`
This is also the 'Nightly' version in the playground: http://www.typescriptlang.org/play/?ts=Nightly
-->
**TypeScript Version:** 3.9.0-dev.20200330
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:**
merged declaration find references
**Code**
```ts
namespace namespaceFunction {
export type Foo = string;
}
function namespaceFunction() {}
namespace NamespaceClass {
export type Foo = string;
}
class NamespaceClass {}
namespace NamespaceInterface {
export type Foo = string;
}
interface NamespaceInterface {}
function interfaceFunction() {}
interface interfaceFunction {}
NamespaceClass.name;
namespaceFunction.name;
interfaceFunction.name;
type X = NamespaceInterface | interfaceFunction;
type Y = NamespaceClass.Foo | NamespaceInterface.Foo | namespaceFunction.Foo;
```
**Expected behavior:**
Each of these declarations has exactly one usage reference (accessible with Ctrl+Click):
- `namespaceFunction` (as function)
- `namespaceFunction` (as namespace)
- `NamespaceClass` (as class)
- `NamespaceClass` (as namespace)
- `NamespaceInterface` (as interface)
- `NamespaceInterface` (as namespace)
- `interfaceFunction` (as interface)
- `interfaceFunction` (as function)
**Actual behavior:**
Only usage references for the following declarations can be found:
- `interfaceFunction` (as function, points to `interfaceFunction.name`)
- `namespaceFunction` (as function, points to `namespaceFunction.name`)
Unmerging these declarations by renaming allows all usage references to be found correctly.
[**Playground Link:**](https://www.typescriptlang.org/play/?ts=3.9.0-dev.20200328&ssl=1&ssc=1&pln=23&pc=78#code/HYQwtgpgzgDiDGEAEpKwRAYgV2PALgJYD2wSA3gLABQSSEAHjMQE75L4CeMymxxSALxIo+FoWABzANw0AvjQBmuAiTKpocRDjxFSACgCUFBdRob0iJADlwmjAGEANiChQKNOo2ZsO3XvxCImISMvI08C5uNnaWEM6u7uSm5rFayLZo6QCSwPgQLIoYHrT0TKzsXDxIfALCouJSstSmEvmFxZn2iLntRVbJNEoqemRtBf1YI2pGJjTjHVYLkzqqpHNm1F1xCW4AdBrNFumrowd2zcsYp2rnkM1VyAAaQds5eRPFAD5IV9rTpAe-iQAE1XmlHFEoHtakgfm8ML1PogYYEfsdrgDgKjiNIgA) <!-- A link to a TypeScript Playground "Share" link which demonstrates this behavior -->
**Related issues**: Looks like #22467 is related but not sure if this is necessarily a duplicate/a direct consequence. Also, I recently opened #36626 and the general context is similar but the issue is distinct.
|
Bug
|
low
|
Critical
|
590,437,241 |
pytorch
|
TensorBoard add_scalars throws error when dict has keys of type int
|
## ๐ Bug
In `torch/utils/tensorboard/writer.py`, method `add_scalars` takes as argument `tag_scalar_dict (dict): Key-value pair storing the tag and corresponding values`.
However, when the keys of `tag_scalar_dict` are not of type `string` (e.g. `int`), it will throw an error.
## To Reproduce
Steps to reproduce the behavior:
1. Run the example code with int keys instead of string keys as such:
```
import numpy as np
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter()
r = 5
for i in range(100):
    writer.add_scalars('run_14h', {0: i*np.sin(i/r),
                                   1: i*np.cos(i/r),
                                   2: np.tan(i/r)}, i)
writer.close()
```
The error thrown is the following:
```
line 365, in add_scalars
    fw_tag = fw_logdir + "/" + main_tag.replace("/", "_") + "_" + tag
TypeError: can only concatenate str (not "int") to str
Exception ignored in: <function SimpleImageViewer.__del__ at 0x000001728CED5048>
```
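A caller-side workaround (a sketch based on the reproduction above) is to stringify the keys before passing the dict:
```
import numpy as np
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter()
r = 5
for i in range(100):
    scalars = {0: i*np.sin(i/r), 1: i*np.cos(i/r), 2: np.tan(i/r)}
    # Convert the int keys to str so the tag concatenation inside add_scalars works.
    writer.add_scalars('run_14h', {str(k): v for k, v in scalars.items()}, i)
writer.close()
```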
## Expected behavior
(Expected behavior: the multi-scalar plot shown in the docs image `_static/img/tensorboard/add_scalars.png`.)
## Environment
PyTorch version: 1.3.0
Is debug build: No
CUDA used to build PyTorch: 10.1
OS: Microsoft Windows 10 Home
GCC version: Could not collect
CMake version: Could not collect
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.0.130
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
Versions of relevant libraries:
[pip3] msgpack-numpy==0.4.4.3
[pip3] numpy==1.17.3
[pip3] torch==1.3.0
[pip3] torchtext==0.5.0
[pip3] torchvision==0.4.1
[conda] Could not collect
|
triaged
|
low
|
Critical
|
590,437,593 |
TypeScript
|
property-will-be-overwritten-by-spread error is issued multiple times
|
```ts
interface XX {
paddingLeft: string;
}
declare var x: XX
declare function clonElement<Q>(props?: object): void;
clonElement({ style: {
paddingLeft: 'hi',
...x
}})
```
**Expected behavior:**
One error on `paddingLeft: 'hi'`: 'paddingLeft' is specified more than once, so this usage will be overwritten.
**Actual behavior:**
Two errors.
Our old commit of ant-design issues this error _8_ times. This is probably just a result of (1) not caching the type from checking object literals (2) checking object literals multiple times during resolveCall.
Not high priority -- I'm not sure it's observable from VS Code -- and probably expensive to fix, depending on whether the reason that we decided to not cache object literal types is still true.
|
Bug
|
low
|
Critical
|
590,438,656 |
vscode
|
Provide a "focus-follow-mouse" setting
|
<!-- โ ๏ธโ ๏ธ Do Not Delete This! feature_request_template โ ๏ธโ ๏ธ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
I have been used to focus following the mouse for years (since the X11 days), instead of having to click before typing (which is slow and wearing).
|
feature-request
|
high
|
Major
|
590,473,311 |
terminal
|
Make a second `LibraryIncludes.h` that includes WIL and TIL & have it at the bottom of every `precomp.h`
|
Can you file a follow-on task to perhaps make a second `LibraryIncludes.h` that includes WIL and TIL and have it at the bottom of every `precomp.h`?
I feel like you've done this to like 6 more projects as of this PR and I bet it will just keep coming.
We should probably put all super-powered headers at the bottom of every `precomp.h`.
_Originally posted by @miniksa in https://github.com/microsoft/terminal/pull/5131_
|
Help Wanted,Product-Meta,Issue-Task,Area-CodeHealth
|
low
|
Minor
|
590,483,384 |
rust
|
Measure binary size impact of implicit caller location
|
The largest unanswered questions about [implicit caller location](https://github.com/rust-lang/rust/issues/47809) are about the binary size impact of the feature and how to offer users control over it.
There are three ways I see that `#[track_caller]` inflates binaries:
1. code bloat from panicking branches that were previously fused
2. encoding the `Location` structs themselves
3. encoding the `&'static str` for every filename referenced by a `Location`
We need to determine the impact to the ecosystem for each of these before deciding on mitigation strategy.
(1) was the main concern in the RFC but it was predicated on the assumption that `#[track_caller]` would affect inlining (an artifact of the implementation proposed). Since it doesn't affect inlining now, the only lost optimization in panicking branches would be for cases where LLVM previously used knowledge of a single constant panic location to make the failure path of a function smaller.
We've not yet had any reports of this and some ad-hoc measurements don't show any difference. This matches my mental model of LLVM's ability to optimize calling conventions which is "very expensive magic". I'm open to ideas for how to better assess this, including "wait and see".
**TL;DR** regarding (2) and (3):
`Location`s are 24 bytes when `size_of::<usize>() == size_of::<u64>()` and we don't currently measure how many we encode. We should put these in their own section to make it easier to measure.
Which filenames do we encode? In practice: probably all source files. We should put these in their own section too.
|
T-compiler,I-heavy,F-track_caller
|
low
|
Critical
|
590,554,548 |
youtube-dl
|
Subtitles from MPD file
|
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2020.03.24. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Search the bugtracker for similar feature requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Finally, put x into all relevant boxes (like this [x])
-->
- [x] I'm reporting a feature request
- [x] I've verified that I'm running youtube-dl version **2020.03.24**
- [x] I've searched the bugtracker for similar feature requests including closed ones
## Description
<!--
Provide an explanation of your issue in an arbitrary form. Please make sure the description is worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. Provide any additional information, suggested solution and as much context and examples as possible.
-->
Please add support for subtitles when parsing an MPD manifest, so that in addition to video and audio we can also capture subtitles; with fragmented files this is very tedious to do manually.
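For context, subtitle tracks in a DASH manifest are usually separate `AdaptationSet` elements with `contentType="text"` (or a text/TTML MIME type), so an extractor mainly needs to surface them alongside the audio/video sets. A rough sketch of how one might list them manually in the meantime (the manifest URL is a placeholder, not a real stream):

```python
import urllib.request
import xml.etree.ElementTree as ET

MPD_URL = "https://example.com/manifest.mpd"  # placeholder
DASH_NS = "urn:mpeg:dash:schema:mpd:2011"

with urllib.request.urlopen(MPD_URL) as resp:
    root = ET.fromstring(resp.read())

# Report every AdaptationSet that looks like a subtitle track.
for aset in root.iter("{%s}AdaptationSet" % DASH_NS):
    content_type = aset.get("contentType", "")
    mime_type = aset.get("mimeType", "")
    if content_type == "text" or mime_type.startswith(("text/", "application/ttml")):
        lang = aset.get("lang", "und")
        for rep in aset.findall("{%s}Representation" % DASH_NS):
            print(lang, rep.get("id"), rep.get("mimeType") or mime_type)
```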
|
request
|
low
|
Critical
|
590,563,195 |
rust
|
wasm target not honoring linker set at bootstrap time (1.42.0 and maybe earlier)
|
This is the Rust package for Gentoo Linux (the one we ship in the official repositories).
Here's how we configure the wasm target:
```toml
[target.wasm32-unknown-unknown]
linker = "lld"
```
but it still looks for rust-lld for that target by default.
```shell-session
cargo build --release --target=wasm32-unknown-unknown --verbose
Compiling wasm-test v0.1.0 (/tmp/wasm-test)
Running `rustc --crate-name wasm_test src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type cdylib --emit=dep-info,link -C opt-level=3 -C metadata=7a89689720da237e --out-dir /tmp/wasm-test/target/wasm32-unknown-unknown/release/deps --target wasm32-unknown-unknown -L dependency=/tmp/wasm-test/target/wasm32-unknown-unknown/release/deps -L dependency=/tmp/wasm-test/target/release/deps`
error: linker `rust-lld` not found
|
= note: No such file or directory (os error 2)
error: aborting due to previous error
error: could not compile `wasm-test`.
Caused by:
process didn't exit successfully: `rustc --crate-name wasm_test src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type cdylib --emit=dep-info,link -C opt-level=3 -C metadata=7a89689720da237e --out-dir /tmp/wasm-test/target/wasm32-unknown-unknown/release/deps --target wasm32-unknown-unknown -L dependency=/tmp/wasm-test/target/wasm32-unknown-unknown/release/deps -L dependency=/tmp/wasm-test/target/release/deps` (exit code: 1)
```
not sure when it started to happen, but it was working as expected before.
`rustc -Z unstable-options --target=wasm32-unknown-unknown --print target-spec-json`
```json
{
"arch": "wasm32",
"data-layout": "e-m:e-p:32:32-i64:64-n32:64-S128",
"default-hidden-visibility": true,
"dll-prefix": "",
"dll-suffix": ".wasm",
"dynamic-linking": true,
"emit-debug-gdb-scripts": false,
"env": "",
"exe-suffix": ".wasm",
"executables": true,
"has-elf-tls": true,
"is-builtin": true,
"limit-rdylib-exports": false,
"linker": "rust-lld",
"linker-flavor": "wasm-ld",
"lld-flavor": "wasm",
"llvm-target": "wasm32-unknown-unknown",
"max-atomic-width": 64,
"only-cdylib": true,
"os": "unknown",
"panic-strategy": "abort",
"pre-link-args": {
"gcc": [
"-Wl,--no-threads",
"-Wl,-z",
"-Wl,stack-size=1048576",
"-Wl,--stack-first",
"-Wl,--allow-undefined",
"-Wl,--fatal-warnings",
"-Wl,--no-demangle",
"-Wl,--export-dynamic",
"--target=wasm32-unknown-unknown",
"-nostdlib",
"-Wl,--no-entry"
],
"wasm-ld": [
"--no-threads",
"-z",
"stack-size=1048576",
"--stack-first",
"--allow-undefined",
"--fatal-warnings",
"--no-demangle",
"--export-dynamic",
"--no-entry"
]
},
"relocation-model": "static",
"simd-types-indirect": false,
"singlethread": true,
"target-c-int-width": "32",
"target-endian": "little",
"target-pointer-width": "32",
"tls-model": "local-exec",
"vendor": "unknown"
}
```
For some reason the target spec still lists `rust-lld` as the preferred linker.
Yes, there are ways to mitigate that, like
`CARGO_TARGET_WASM32_UNKNOWN_UNKNOWN_LINKER=lld`
or
`RUSTFLAGS="-C linker=/usr/bin/lld"`
lld is system installed and provides the following files
```
/usr/bin/lld
/usr/bin/wasm-ld -> lld
/usr/bin/ld64.lld -> lld
/usr/bin/ld.lld -> lld
/usr/bin/lld-link -> lld
```
If we build using the bundled LLVM and LLD, everything works as expected, of course.
Full config.toml below; this happens on x86_64 as well, so powerpc being the primary target is irrelevant.
```toml
[llvm]
optimize = true
release-debuginfo = false
assertions = false
targets = "WebAssembly;PowerPC;AMDGPU"
experimental-targets = ""
link-shared = true
[build]
build = "powerpc64le-unknown-linux-gnu"
host = ["powerpc64le-unknown-linux-gnu"]
target = ["powerpc64le-unknown-linux-gnu","wasm32-unknown-unknown"]
cargo = "/usr/bin/cargo"
rustc = "/usr/bin/rustc"
docs = false
compiler-docs = false
submodules = false
python = "python3.6"
locked-deps = true
vendor = true
extended = true
tools = ["rustfmt","rls","analysis","src","miri","clippy","cargo",]
verbose = 2
[install]
prefix = "/usr"
libdir = "lib"
docdir = "share/doc/rust-1.42.0-r1"
mandir = "share/man"
[rust]
optimize = true
debug = false
debug-assertions = false
default-linker = "powerpc64le-unknown-linux-gnu-gcc"
parallel-compiler = true
channel = "nightly"
rpath = false
lld = false
backtrace-on-ice = true
[dist]
src-tarball = false
[target.powerpc64le-unknown-linux-gnu]
cc = "powerpc64le-unknown-linux-gnu-gcc"
cxx = "powerpc64le-unknown-linux-gnu-g++"
linker = "powerpc64le-unknown-linux-gnu-gcc"
ar = "powerpc64le-unknown-linux-gnu-ar"
llvm-config = "/usr/lib/llvm/10/bin/llvm-config"
[target.wasm32-unknown-unknown]
linker = "lld"
```
|
A-linkage,T-compiler,O-wasm,C-bug
|
low
|
Critical
|
590,608,825 |
terminal
|
Add clamped math methods to `til` types
|
Add a clamped sub method to til::point instead of doing it on the outside? I feel like this line should read
```
const auto offsetPoint = coord.ClampSub(controlOrigin);
```
_Originally posted by @miniksa in https://github.com/microsoft/terminal/pull/5131_
I moved this because I thought this comment was a generally good idea. We've got checked math operators defined on the `til` types already, but there are scenarios where one might want to use clamped math instead. Those callers should be able to use `pointA.ClampedAdd(pointB)`, etc. to be able to do clamped math.
This seems like an easier solution than having some sort of other magic to say "I want a clamped point" that _always_ does clamped math.
|
Help Wanted,Product-Meta,Issue-Task,Area-CodeHealth
|
low
|
Minor
|
590,620,738 |
pytorch
|
libtorch for Windows. MNIST example does not work.
|
I've tried to run the MNIST training example (https://github.com/pytorch/examples/tree/master/cpp/mnist).
The code crashes on `train_dataset` creation with the message "Unhandled exception at 0x00007FFE7B879179 in MNIST.exe: Microsoft C++ exception: c10::Error at memory location 0x00000068D2EFF4B0. occurred".
```cpp
auto train_dataset = torch::data::datasets::MNIST(kDataRoot)
    .map(torch::data::transforms::Normalize<>(0.1307, 0.3081))
    .map(torch::data::transforms::Stack<>());
```
Environment:
Win10, VS 2019, Release, CPU.
I'm not sure if the MNIST data was prepared correctly. I just put the files into the ./data directory:
```
26.03.2020 12:10     1 648 877  t10k-images-idx3-ubyte.gz
26.01.1998 18:07     7 840 016  t10k-images.idx3-ubyte
26.03.2020 12:10         4 542  t10k-labels-idx1-ubyte.gz
26.01.1998 18:07        10 008  t10k-labels.idx1-ubyte
26.03.2020 12:09     9 912 422  train-images-idx3-ubyte.gz
18.11.1996 18:36    47 040 016  train-images.idx3-ubyte
26.03.2020 12:09        28 881  train-labels-idx1-ubyte.gz
18.11.1996 18:36        60 008  train-labels.idx1-ubyte
```
What is wrong?
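One thing that stands out in the listing is that the uncompressed files use a dot (e.g. `train-images.idx3-ubyte`) where the C++ `torch::data::datasets::MNIST` reader conventionally looks for a dash (`train-images-idx3-ubyte`). A minimal sketch for re-extracting the archives under the expected names, assuming the `.gz` files in `./data` are the standard MNIST archives:

```python
import gzip
import shutil
from pathlib import Path

data_dir = Path("data")
names = [
    "train-images-idx3-ubyte",
    "train-labels-idx1-ubyte",
    "t10k-images-idx3-ubyte",
    "t10k-labels-idx1-ubyte",
]

for name in names:
    src = data_dir / (name + ".gz")
    dst = data_dir / name  # dash-separated name the MNIST dataset loader expects
    with gzip.open(src, "rb") as fin, open(dst, "wb") as fout:
        shutil.copyfileobj(fin, fout)
    print("wrote", dst)
```

If the loader still throws a `c10::Error`, wrapping the dataset construction in a `try`/`catch` and printing the exception message should say exactly which file it failed to open.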
cc @yf225
|
module: cpp,triaged
|
low
|
Critical
|
590,642,144 |
pytorch
|
LibTorch API on Mobile
|
## ๐ Feature
Hello, this may already be available and undocumented, but is there any way to access the full C++ frontend API on the mobile build?
Currently the mobile build only exposes the TorchScript portion of libtorch, so, e.g. I can't use `torch::nn` operations on mobile.
cc @yf225
|
module: cpp,triaged,oncall: mobile
|
low
|
Minor
|
590,658,429 |
create-react-app
|
Would it be possible to support applying two templates?
|
### Is your proposal related to a problem?
This comes up when the developer wants to apply two templates at once; for example, a project requiring Redux and TypeScript would use something like:
`create-react-app myapp --template redux,typescript`
|
issue: proposal,needs triage
|
low
|
Minor
|
590,717,666 |
pytorch
|
Simple C++ custom autograd function code throws error "CUDA error: driver shutting down"
|
## ๐ Bug
Running the following code with a CUDA-enabled libtorch throws the error "CUDA error: driver shutting down", even though the code doesn't use CUDA. Running the same code with a CPU-only libtorch doesn't throw any error.
```cpp
#include <iostream>
#include <torch/torch.h>
using namespace torch::autograd;
class MulConstant : public Function<MulConstant> {
public:
static Variable forward(AutogradContext *ctx, Variable variable, double constant) {
ctx->saved_data["constant"] = constant;
return variable * constant;
}
static variable_list backward(AutogradContext *ctx, variable_list grad_outputs) {
return {grad_outputs[0] * ctx->saved_data["constant"].toDouble(), Variable()};
}
};
int main(int argc, char* argv[])
{
auto x = torch::randn({2}).requires_grad_();
auto y = MulConstant::apply(x, 5.5);
y.sum().backward();
std::cout << x.grad() << std::endl;
}
```
Error:
```
terminate called after throwing an instance of 'c10::Error'
terminate called recursively
what(): CUDA error: driver shutting down (setDevice at /pytorch/c10/cuda/impl/CUDAGuardImpl.h:42)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x46 (0x7fedc6342656 in /data/libtorch/libtorch_nightly_cu92/libtorch/lib/libc10.so)
frame #1: <unknown function> + 0xc6c2 (0x7fed7bad26c2 in /data/libtorch/libtorch_nightly_cu92/libtorch/lib/libc10_cuda.so)
frame #2: torch::autograd::Engine::set_device(int) + 0x159 (0x7fedb9c36b39 in /data/libtorch/libtorch_nightly_cu92/libtorch/lib/libtorch_cpu.so)
frame #3: torch::autograd::Engine::thread_init(int, std::shared_ptr<torch::autograd::ReadyQueue> const&) + 0x34 (0x7fedb9c39064 in /data/libtorch/libtorch_nightly_cu92/libtorch/lib/libtorch_cpu.so)
frame #4: <unknown function> + 0xc70f (0x7fedc657b70f in /data/libtorch/libtorch_nightly_cu92/libtorch/lib/libtorch.so)
frame #5: <unknown function> + 0x76ba (0x7fed7c3756ba in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #6: clone + 0x6d (0x7fed7c8bc41d in /lib/x86_64-linux-gnu/libc.so.6)
Aborted (core dumped)
```
Better backtrace:
```
Thread 4 "example-app" hit Catchpoint 1 (exception thrown), 0x00007fffccab38bd in __cxa_throw () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
(gdb) bt
#0 0x00007fffccab38bd in __cxa_throw () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#1 0x00007fffc4d14ab9 in c10::cuda::impl::CUDAGuardImpl::getDevice (this=0x997920) at ../c10/cuda/impl/CUDAGuardImpl.h:37
#2 0x00007fffc4d14ed6 in c10::cuda::impl::CUDAGuardImpl::setDevice (this=0x997920, d=...) at ../c10/cuda/impl/CUDAGuardImpl.h:51
#3 0x00007ffff0f101db in torch::autograd::Engine::set_device (this=0x7ffff7b16bc0 <torch::autograd::Engine::get_base_engine()::engine>, device=1) at ../torch/csrc/autograd/engine.cpp:264
#4 0x00007ffff0f1034d in torch::autograd::Engine::thread_init (this=0x7ffff7b16bc0 <torch::autograd::Engine::get_base_engine()::engine>, device=1, ready_queue=std::shared_ptr (count 2, weak 0) 0x1b33aa0)
at ../torch/csrc/autograd/engine.cpp:293
#5 0x00007ffff0f3613e in std::_Mem_fn_base<void (torch::autograd::Engine::*)(int, std::shared_ptr<torch::autograd::ReadyQueue> const&), true>::operator()<int, std::shared_ptr<torch::autograd::ReadyQueue>, void>(torch::autograd::Engine*, int&&, std::shared_ptr<torch::autograd::ReadyQueue>&&) const (this=0x1b340d8, __object=0x7ffff7b16bc0 <torch::autograd::Engine::get_base_engine()::engine>) at /usr/include/c++/5/functional:600
#6 0x00007ffff0f360a1 in std::_Bind_simple<std::_Mem_fn<void (torch::autograd::Engine::*)(int, std::shared_ptr<torch::autograd::ReadyQueue> const&)> (torch::autograd::Engine*, int, std::shared_ptr<torch::autograd::ReadyQueue>)>::_M_invoke<0ul, 1ul, 2ul>(std::_Index_tuple<0ul, 1ul, 2ul>) (this=0x1b340b8) at /usr/include/c++/5/functional:1531
#7 0x00007ffff0f35cb8 in std::_Bind_simple<std::_Mem_fn<void (torch::autograd::Engine::*)(int, std::shared_ptr<torch::autograd::ReadyQueue> const&)> (torch::autograd::Engine*, int, std::shared_ptr<torch::autograd::ReadyQueue>)>::operator()() (this=0x1b340b8) at /usr/include/c++/5/functional:1520
#8 0x00007ffff0f35ac8 in std::thread::_Impl<std::_Bind_simple<std::_Mem_fn<void (torch::autograd::Engine::*)(int, std::shared_ptr<torch::autograd::ReadyQueue> const&)> (torch::autograd::Engine*, int, std::shared_ptr<torch::autograd::ReadyQueue>)> >::_M_run() (this=0x1b340a0) at /usr/include/c++/5/thread:115
#9 0x00007fffccadec80 in ?? () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#10 0x00007fffcbafa6ba in start_thread (arg=0x7fff9cd8d700) at pthread_create.c:333
#11 0x00007fffcc24441d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109
```
Update: I noticed that if we initialize a CUDA tensor (e.g. `auto cuda_tensor = torch::randn({3, 4}, torch::kCUDA); std::cout << cuda_tensor << std::endl;`) before running the C++ custom autograd function, the whole thing passes and there is no error.
## Expected behavior
It should just work without throwing any error.
## Environment
Latest libtorch nightly
cc @ezyang @SsnL @albanD @zou3519 @gqchen @yf225
|
module: cpp,module: autograd,triaged
|
low
|
Critical
|
590,733,139 |
TypeScript
|
js function hover adds arguments `...args: any[]` when calling an object property named `arguments`
|
*TS Template added by @mjbvz*
**TypeScript Version**: 3.9.0-dev.20200330
**Search Terms**
- hover / quick info
- JavaScript
---

**VS Code version**: 1.43.2
|
Bug
|
low
|
Minor
|
590,741,319 |
angular
|
Click event not firing only once after HammerManager destroyed on IOS
|
<!--๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
Oh hi there! ๐
To expedite issue processing please search open and closed issues before submitting a new one.
Existing issues often contain information about workarounds, resolution, or progress updates.
๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
๐
-->
# ๐ bug report
### Affected Package
The issue is caused by package @angular/platform-browser
### Is this a regression?
8.0, 8.2
### Description
Click events stop firing after the HammerManager is destroyed on iOS.
After the HammerManager is destroyed, a click anywhere in the document is ignored exactly one time.
If I click the button twice, it works.
The click event skips the `globalZoneAwareCallback` and `globalZoneAwareCaptureCallback` functions in `zone.js`.
## ๐ฌ Minimal Reproduction
https://angular-hammerjs-touch-weird.stackblitz.io
## ๐ฅ Exception or Error
No error; the click event is just ignored one time.
## ๐ Your Environment
**Angular Version:**
<pre><code>
_ _ ____ _ ___
/ \ _ __ __ _ _ _| | __ _ _ __ / ___| | |_ _|
/ โณ \ | '_ \ / _` | | | | |/ _` | '__| | | | | | |
/ ___ \| | | | (_| | |_| | | (_| | | | |___| |___ | |
/_/ \_\_| |_|\__, |\__,_|_|\__,_|_| \____|_____|___|
|___/
Angular CLI: 8.3.26
Node: 10.16.0
OS: darwin x64
Angular: 8.2.14
... animations, common, compiler, compiler-cli, core, forms
... language-service, platform-browser, platform-browser-dynamic
... router
Package Version
------------------------------------------------------------
@angular-devkit/architect 0.803.26
@angular-devkit/build-angular 0.803.26
@angular-devkit/build-ng-packagr 0.803.26
@angular-devkit/build-optimizer 0.803.26
@angular-devkit/build-webpack 0.803.26
@angular-devkit/core 8.3.26
@angular-devkit/schematics 8.3.26
@angular/cli 8.3.26
@ngtools/webpack 8.3.26
@schematics/angular 8.3.26
@schematics/update 0.803.26
ng-packagr 5.7.1
rxjs 6.4.0
typescript 3.5.3
webpack 4.39.2
</code></pre>
**Anything else relevant?**
IOS 13.3.1
- Safari
- Chrome
|
area: core,area: zones,core: event listeners,iOS,P4
|
low
|
Critical
|
590,814,627 |
flutter
|
Creating Resizable View that resizes when pinch or drag from corners and sides
|
**[Asked at Stack Overflow](https://stackoverflow.com/questions/60924384/creating-resizable-view-that-resizes-when-pinch-or-drag-from-corners-and-sides)** I am currently working on a ScreenView with features like draggable and resizable views (with or without using corners). The problem I have now is that I want to resize the view by touch gestures in the corners. Therefore, I thought of a Point that I add to a view on selection, which can be dragged to resize the selected view.
Even after a lot of research, I could not come up with a solution for how to overlay another view that has its own DragListener. I could imagine putting the selected view and the point into one ViewGroup and having the Point overlay the View. Does someone have experience with this problem? Thank you in advance.
How can I create a resizable view like this?
ReactNative Demo --> [React Native PLUGIN example](https://github.com/CaptainOmega/react-native-drag-resize)
JS DEMO(Goto Resizing Section) --> [JS Example, GOTO RESIZING section](https://interactjs.io/)
JQUERY Demo --> [JQuery Example](https://jqueryui.com/resizable/)
```dart
import 'package:flutter/material.dart';
import 'package:matrix_gesture_detector/matrix_gesture_detector.dart';
class TransformImage extends StatefulWidget {
TransformImage({Key key}) : super(key: key); // changed
@override
_TransformImageState createState() => _TransformImageState();
}
class _TransformImageState extends State<TransformImage> {
double scale = 0.0;
@override
Widget build(BuildContext context) {
final ValueNotifier<Matrix4> notifier = ValueNotifier(Matrix4.identity());
return Scaffold(
appBar: AppBar(
title: Text('Transform Image'), // changed
),
body: Center(
child: MatrixGestureDetector(
onMatrixUpdate: (m, tm, sm, rm) {
notifier.value = m;
},
// shouldRotate: false,
// shouldScale: true,
// shouldTranslate: false,
child: AnimatedBuilder(
animation: notifier,
builder: (ctx, child) {
return Transform(
transform: notifier.value,
child: Center(
child: Stack(
children: <Widget>[
Container(
color: Colors.red,
padding: EdgeInsets.all(10),
margin: EdgeInsets.only(top: 50),
child: Transform.scale(
scale:
1, // make this dynamic to change the scaling as in the basic demo
origin: Offset(0.0, 0.0),
child: Container(
color: Colors.green,
child: Image.network(
'https://picsum.photos/250?image=9',
),
),
),
),
],
),
),
);
},
),
),
),
);
}
}
```
|
framework,d: examples,would be a good package,c: proposal,P3,team-framework,triaged-framework
|
low
|
Major
|
590,817,426 |
pytorch
|
Cannot JIT functions with custom backwards (e.g. swish)
|
## ๐ Feature
<!-- Please support swish opt, which can be exported by torch.jit and save memory-->
The MemoryEfficientSwish module shown below cannot be exported by torch.jit:
```
class SwishImplementation(torch.autograd.Function):
@staticmethod
def forward(ctx, i):
result = i * torch.sigmoid(i)
ctx.save_for_backward(i)
return result
@staticmethod
def backward(ctx, grad_output):
i = ctx.saved_variables[0]
sigmoid_i = torch.sigmoid(i)
return grad_output * (sigmoid_i * (1 + i * (1 - sigmoid_i)))
```
```
class MemoryEfficientSwish(nn.Module):
def forward(self, x):
return SwishImplementation.apply(x)
```
The vanilla Swish version is not memory-efficient enough:
```
class Swish(nn.Module):
def forward(self, x):
return x * torch.sigmoid(x)
```
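For reference, custom `torch.autograd.Function` subclasses are currently not scriptable, so a common workaround is to train with the memory-efficient version and swap in the scriptable vanilla `Swish` just before export. A rough sketch of that swap, assuming the module definitions above (the `Model` wrapper here is a hypothetical example, not code from the report):

```python
import torch
import torch.nn as nn

class Swish(nn.Module):
    def forward(self, x):
        return x * torch.sigmoid(x)

class Model(nn.Module):  # hypothetical model using the activation
    def __init__(self, act):
        super().__init__()
        self.act = act
        self.fc = nn.Linear(8, 8)

    def forward(self, x):
        return self.act(self.fc(x))

model = Model(act=Swish())          # use MemoryEfficientSwish() during training instead
scripted = torch.jit.script(model)  # scripting succeeds with the vanilla Swish
scripted.save("model_scripted.pt")
```

The memory savings only matter for the backward pass, so exporting the vanilla version for inference usually costs nothing.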
cc @suo @ezyang @SsnL @albanD @zou3519 @gqchen
|
oncall: jit,module: autograd,triaged
|
low
|
Minor
|
590,830,517 |
vscode
|
Allow context key expression that compares object properties
|
@jrieken I was attempting to change the Timeline's `timeline.excludeSources` setting from an array to an object (as requested [here](https://github.com/microsoft/vscode/commit/59c57e1e899db397deb331f13e1c8c431c7dcb93#commitcomment-37761862)).
I've pushed a branch with the changes here: [eamodio/timeline-excludesources](https://github.com/microsoft/vscode/commits/eamodio/timeline-excludesources)
Here I am trying to use the `timeline.excludeSources.<source-id>` property as the `toggled` condition.
https://github.com/microsoft/vscode/blob/2d551abb57bfde80a48f2f5517b4c5fcab2380a4/src/vs/workbench/contrib/timeline/browser/timelinePane.ts#L1177
This seems to work for the initial state, but it never gets updated when the property is removed or re-added. It looks like the cause is somewhere in here:
https://github.com/microsoft/vscode/blob/db36e74f62b1b2603906d4d2c892ab12cf632a12/src/vs/platform/contextkey/browser/contextKeyService.ts#L102-L120
Where `this._values` will contain `timeline.excludeSources.<source-id>`, but the `event.affectedKeys` will only have `timeline.excludeSources`.
/cc @rebornix
|
feature-request,context-keys
|
low
|
Minor
|
590,894,120 |
youtube-dl
|
maoritelevision.com (appears to use brightcove)
|
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2020.03.24. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that site you are requesting is not dedicated to copyright infringement, see https://yt-dl.org/copyright-infringement. youtube-dl does not support such sites. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
- Search the bugtracker for similar site support requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Finally, put x into all relevant boxes (like this [x])
-->
- [x] I'm reporting a new site support request
- [x] I've verified that I'm running youtube-dl version **2020.03.24**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that none of provided URLs violate any copyrights
- [x] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
<!--
Provide all kinds of example URLs support for which should be included. Replace following example URLs by yours.
-->
- Single video: https://www.maoritelevision.com/shows/frenemies/S01E015/frenemies-episode-15
## Description
<!--
Provide any additional information.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->
A username+password is needed; the following will work:
[email protected],password=testtest
|
site-support-request,broken-IE,patch-available
|
low
|
Critical
|
590,900,125 |
go
|
x/tools/go/analysis/internal: Add an option to emit diagnostics for external packages
|
<!--
Please answer these questions before submitting your issue. Thanks!
For questions please use one of our forums: https://github.com/golang/go/wiki/Questions
-->
### What version of golang/tools are you using?
<pre>
~/tools $ git rev-parse HEAD
a30bf2db82d4f645ded9030efbbf6d4fbe5e70eb
</pre>
### Proposal
Currently the analysis driver in `golang/tools` prints diagnostics only for root packages. We can't change this behavior without changing the `golang/tools` repository or developing a driver from scratch because this behavior is written in the `internal` package.
Some analyses could be more beneficial if they could emit diagnostics for dependencies because problems in dependencies can also be problems in the root packages.
So I would like to implement an option to emit diagnostics for external packages by adding an argument to `golang.org/x/tools/go/analysis/internal/checker.Run`, and adding a command-line argument to `golang.org/x/tools/go/analysis/singlechecker.Run` and `golang.org/x/tools/go/analysis/multichecker.Run`.
Is it OK, or are there any reasons why the option is not in the packages?
Thank you.
|
NeedsInvestigation,Tools,Analysis
|
low
|
Minor
|
590,911,774 |
pytorch
|
pytorch 1.4.0 hangs when using with CUDA >= 10.1
|
Why does PyTorch hang for so long when I do `torch.cuda()`?
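The delay is usually CUDA context initialization rather than Python itself. A small diagnostic sketch to see where the time goes (assuming the intended call is the first CUDA operation, since `torch.cuda` is a module rather than a callable):

```python
import time
import torch

t0 = time.time()
print("cuda available:", torch.cuda.is_available())  # first driver query
print("after is_available: %.1f s" % (time.time() - t0))

t0 = time.time()
x = torch.zeros(1, device="cuda")  # forces context creation and kernel loading
torch.cuda.synchronize()
print("after first CUDA tensor: %.1f s" % (time.time() - t0))
```

If most of the time is spent on the first CUDA tensor, it typically points at a driver/toolkit mismatch or at JIT-compiling kernels for a GPU architecture missing from the binary, which would match the CUDA >= 10.1 observation in the title.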
cc @ngimel
|
needs reproduction,module: cuda,triaged
|
low
|
Major
|
590,926,541 |
flutter
|
[flutter-tools] [web] Service worker doesn't add the canvaskit.js and canvaskit.wasm files into the cache resource
|
@jonahwilliams has merged a PR here #48344 that generates the service worker js file.
When we build a Flutter app with the CanvasKit WebAssembly flag:
`flutter build web --dart-define=FLUTTER_WEB_USE_SKIA=true`
the generated service worker file doesn't contain the canvaskit.js or canvaskit.wasm files in RESOURCES.
Please make sure that these files are also cached along with the other resources of the service worker.
canvaskit.js and canvaskit.wasm come from external URLs:
https://unpkg.com/[email protected]/bin/canvaskit.wasm
https://unpkg.com/[email protected]/bin/canvaskit.js
Please also add these to the cached resources in the generated service worker file.
|
c: new feature,tool,platform-web,P2,team-web,triaged-web
|
low
|
Minor
|
590,939,748 |
pytorch
|
Expected object of scalar type Double but got scalar type Float for argument #2 'mat2' in call to _th_mm
|
## ๐ Bug
<!-- A clear and concise description of what the bug is. -->
## To Reproduce
Steps to reproduce the behavior:
1.
1.
1.
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
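The report leaves the template empty, but the error in the title is the standard dtype-mismatch message from a matrix multiply. A minimal sketch that reproduces it and shows the usual fix (this is an assumed scenario, not the reporter's actual code):

```python
import torch

a = torch.randn(2, 3, dtype=torch.float64)  # Double tensor
b = torch.randn(3, 4, dtype=torch.float32)  # Float tensor

# torch.mm(a, b)  # raises: Expected object of scalar type Double but got scalar type Float for argument #2 'mat2'

c = torch.mm(a, b.double())  # cast one operand so both dtypes match
# or: c = torch.mm(a.float(), b)
print(c.dtype)
```

The fix is almost always to make both operands (or the model and its inputs, via `model.float()` / `tensor.float()`) share the same dtype.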
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
## Environment
Please copy and paste the output from our
[environment collection script](https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py)
(or fill out the checklist below manually).
You can get the script and run it with:
```
wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
```
- PyTorch Version (e.g., 1.0):
- OS (e.g., Linux):
- How you installed PyTorch (`conda`, `pip`, source):
- Build command you used (if compiling from source):
- Python version:
- CUDA/cuDNN version:
- GPU models and configuration:
- Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
|
triaged
|
low
|
Critical
|
590,954,579 |
vscode
|
[rename on type] feedback incorrect after Rename
|
Testing #93808
- place the cursor inside the `bo|dy` tag of an html document
- F2 and rename 'body' to 'foobar'
-> The feedback is no longer selecting the entire tag:

|
bug,editor-synced-region
|
low
|
Minor
|
590,968,666 |
godot
|
Viewports cause big memory leaks when instanced or duplicated
|
**Godot version:**
3.2.1
**OS/device including version:**
Ubuntu 19.04
GeForce GTX 980/PCIe/SSE2
Intelยฎ Coreโข i7-4820K CPU @ 3.70GHz ร 8
11,6ย GiB RAM
**Issue description:**
Instancing and duplicating viewports with relatively high resolutions in the editor causes unexpected, inconsistent behavior with regard to RAM usage. Running the project seems to exacerbate the RAM usage of the editor even more.
The issue is also present in executed projects which instance or duplicate viewports from GDScript.
These supposed memory leaks can be multiple gigabytes large, potentially __freezing the entire system__.
**Steps to reproduce:**
1. Open a scene with a viewport with a large resolution (e.g. 5000x5000). RAM usage is normal.
2. Duplicate the viewport. RAM usage goes up by more than I would expect, usually around 300 MB.
3. Delete the duplicated viewport. **RAM usage does not go back down.**
4. Duplicate the viewport again. Notice a big stutter and a RAM usage increase in **1-2 GB**.
3 and 4 can be repeated until the RAM is completely full, potentially causing a full system freeze.
Running the project often causes a huge increase in RAM usage in the editor as well. The game itself, however, has an expected RAM usage in the range of 30 MB (as long as the game itself does not instantiate or duplicate viewports!).
These numbers vary each time. Sometimes a step causes no increase in RAM usage at all, sometimes it shoots up by over 2 GB. The only thing that is entirely consistent for me is that deleting a viewport does not decrease RAM usage.
Small viewports (e.g. 100x100) do also leak memory when deleting them, but they never have these huge leaks when duplicating - they cause a consistent RAM increase of a few MB, as expected.
__Note: Reproducing this can freeze your entire system!__ Carefully monitor your RAM usage and make sure you stay a few gigabytes below the limit while testing this.
**Minimal reproduction project:**
Just create an empty scene, add a viewport, set the size to something like 5000x5000 and follow the steps described above.
|
bug,topic:core,topic:rendering,confirmed
|
low
|
Major
|
591,015,249 |
flutter
|
Feature Request: Add trace-startup to Flutter Web Testing
|
<!-- Thank you for using Flutter!
If you are looking for support, please check out our documentation
or consider asking a question on Stack Overflow:
* https://flutter.dev/
* https://api.flutter.dev/
* https://stackoverflow.com/questions/tagged/flutter?sort=frequent
If you have found a bug or if our documentation doesn't have an answer
to what you're looking for, then fill our the template below. Please read
our guide to filing a bug first: https://flutter.dev/docs/resources/bug-reports
-->
## Use case
Flutter has a wide range of `devicelab` tests in the [`dev/devicelab` directory](https://github.com/flutter/flutter/tree/master/dev/devicelab).
Some of them use the option `--trace-startup` to record startup information, such as the time needed for the first frame to build/rasterize. An example can be found here: [flutter_gallery__start_up.dart](https://github.com/flutter/flutter/blob/master/dev/devicelab/bin/tasks/flutter_gallery__start_up.dart)
This option does not seem to work on Web for now.
<!--
Please tell us the problem you are running into that led to you wanting
a new feature.
Is your feature request related to a problem? Please give a clear and
concise description of what the problem is.
Describe alternative solutions you've considered. Is there a package
on pub.dev/flutter that already solves this?
-->
## Proposal
We propose that Flutter Web should accept the `--trace-startup` option and record startup information, such as the time needed for the first frame to build/rasterize, similar to the case of Android/iOS.
<!--
Briefly but precisely describe what you would like Flutter to be able to do.
Consider attaching images showing what you are imagining.
Does this have to be provided by Flutter directly, or can it be provided
by a package on pub.dev/flutter? If so, maybe consider implementing and
publishing such a package rather than filing a bug.
-->
|
a: tests,c: new feature,framework,platform-web,c: proposal,customer: web10,P2,team-web,triaged-web
|
low
|
Critical
|