id | repo | title | body | labels | priority | severity
---|---|---|---|---|---|---|
451,461,639 | opencv | Inf/Nan in cvProjectpoints2 | - Can we have `inf/nan` issue in [`cvProjectPoints2Internal`](https://github.com/opencv/opencv/blob/db900af8c7a849539aff3941c2a788c190bb317b/modules/calib3d/src/calibration.cpp#L786-L787)?
- How are we informing the user about clipping of points that are not visible (in the camera plane)?
From https://github.com/opencv/opencv/pull/14411#issuecomment-498224542
/cc @alalek
See also https://answers.opencv.org/question/20138/projectpoints-fails-with-points-behind-the-camera/ | category: calib3d | low | Minor |
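To make the hazard in this issue concrete: under a plain pinhole model (a simplification for illustration, not OpenCV's actual `cvProjectPoints2` code), a point behind the camera still projects to finite pixel coordinates, while a point with Z near zero overflows to inf. A hedged Python sketch:

```python
import math

def project(point, fx=800.0, fy=800.0, cx=320.0, cy=240.0):
    """Naive pinhole projection u = fx*X/Z + cx, v = fy*Y/Z + cy,
    with no visibility/cheirality check -- the hazard discussed above."""
    x, y, z = point
    return fx * x / z + cx, fy * y / z + cy

# A point BEHIND the camera (z < 0) still yields finite pixel
# coordinates that can silently land inside the image.
u, v = project((0.1, 0.0, -1.0))
print(u, v)  # finite, plausible-looking pixel coordinates

# A point essentially on the camera plane (z ~ 0) overflows to inf.
u2, _ = project((0.1, 0.0, 1e-308))
print(math.isinf(u2))  # True
```

This is why silently projecting without a Z > 0 check can mislead callers, as the linked forum question reports.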
451,508,278 | go | cmd/vet: document interaction with test sources |
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.12.5 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/cosmin/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/cosmin/go:/home/cosmin/dev/git"
GOPROXY=""
GORACE=""
GOROOT="/usr/lib/go-1.12.5"
GOTMPDIR=""
GOTOOLDIR="/usr/lib/go-1.12.5/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD="<project dir in go path>/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build217071151=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
Run `go vet something_test.go`
### What did you expect to see?
Successful execution
### What did you see instead?
```
something_test.go: undefined: MyModel
```
even though `MyModel` is defined and `go test` executes successfully.
It may be a regression of https://github.com/golang/go/issues/26797 | Documentation,NeedsFix,Analysis | low | Critical |
451,564,106 | go | cmd/cgo: warning: built-in function ‘free’ declared as non-function | ### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.11.6 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes.
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/yath/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/yath/go"
GOPROXY="https://proxy.golang.org,direct"
GORACE=""
GOROOT="/tmp/go"
GOTMPDIR=""
GOTOOLDIR="/tmp/go/pkg/tool/linux_amd64"
GCCGO="/usr/bin/gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build688375222=/tmp/go-build -gno-record-gcc-switches"
GOROOT/bin/go version: go version devel +98100c56da Mon Jun 3 01:37:58 2019 +0000 linux/amd64
GOROOT/bin/go tool compile -V: compile version devel +98100c56da Mon Jun 3 01:37:58 2019 +0000
uname -sr: Linux 5.1.5
Distributor ID: Debian
Description: Debian GNU/Linux 10 (buster)
Release: 10
Codename: buster
/lib/x86_64-linux-gnu/libc.so.6: GNU C Library (Debian GLIBC 2.28-10) stable release version 2.28.
lldb --version: lldb version 6.0.1
gdb --version: GNU gdb (Debian 8.2.1-2) 8.2.1
</pre></details>
### What did you do?
`go run` the following program that is using `C.free` as a value. (In actual code, it’s passed to a `foo_set_free_fun(void (*)(void *))`.) The warning is not emitted when calling `C.free()`, nor when e.g. printing the address of `C.getenv`.
```go
package main
// #include <stdlib.h>
import "C"
import "fmt"
func main() {
fmt.Printf("%#v\n", C.free)
}
```
### What did you expect to see?
<pre>
(unsafe.Pointer)(0x402030)
</pre>
### What did you see instead?
<pre>
# command-line-arguments
cgo-generated-wrappers:1:13: warning: built-in function ‘free’ declared as non-function [-Wbuiltin-declaration-mismatch]
(unsafe.Pointer)(0x402030)
</pre> | help wanted,NeedsFix | medium | Critical |
451,589,777 | pytorch | Title of docs page includes "PyTorch master documentation" even for non-master branches. | E.g. go to https://pytorch.org/docs/stable/ and look at the HTML title of the page. It is `PyTorch documentation — PyTorch master documentation`
| module: docs,triaged | low | Minor |
451,616,705 | godot | Curve2D is not very clear | The documentation of Curve2D is not very clear (class_curve2d.rst)
The `add_point` method doesn't specify that `Vector2 in` and `Vector2 out` (the control points / tangents) are local coordinates (`(0,0)` is the point position given in the first parameter).
Furthermore, `get_closest_offset` speaks about a "local space", which seems to be the space implicitly defined by the coordinates of the points (i.e. it isn't set explicitly what unit is used). This precision isn't written anywhere; it would be great if it were.
Please confirm that I'm correct, and if someone can write it better than me, that would be great :) | enhancement,documentation | low | Minor |
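The local-coordinate convention being asked about can be illustrated outside Godot with a plain cubic Bézier sketch (a hypothetical Python helper, not engine code): the `in`/`out` tangents are offsets relative to their own point, so `(0,0)` means "no tangent", and the curve always passes exactly through the point positions.

```python
def sample_segment(p0, out0, p1, in1, t):
    """Cubic Bezier in the Curve2D style: out0/in1 are control offsets
    LOCAL to their points, so the global control points are p0+out0
    and p1+in1."""
    c0 = (p0[0] + out0[0], p0[1] + out0[1])
    c1 = (p1[0] + in1[0], p1[1] + in1[1])
    u = 1.0 - t
    return tuple(u**3 * a + 3*u**2*t * b + 3*u*t**2 * c + t**3 * d
                 for a, b, c, d in zip(p0, c0, c1, p1))

# Endpoints coincide with the point positions regardless of the tangents:
print(sample_segment((0, 0), (10, 0), (100, 0), (-10, 0), 0.0))  # (0.0, 0.0)
print(sample_segment((0, 0), (10, 0), (100, 0), (-10, 0), 1.0))  # (100.0, 0.0)
```

The "unit" of the implicit local space here is simply whatever unit the point coordinates themselves use, which matches the reading suggested in the issue.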
451,653,801 | terminal | Add a test to cover closing two sibling panes (near) simultaneously | * [ ] Dependent upon #1042.
I'm adding code in #825 to try and make sure that when a pane is closed nearly synchronously with a sibling, it'll close properly. However, manually testing this is _really_ hard, so I'm filing a follow-up task to make sure that's implemented correctly.
I'll need the test framework stuff I'm working on in 1042 to land first though, so this can be done in a unittest. | Area-UserInterface,Product-Terminal,Issue-Task,Area-CodeHealth | low | Minor |
451,679,912 | godot | Godot cannot find MSBuild.exe with the Visual Studio 2019 Preview |
**Godot version:**
v3.1.1 stable mono x64
**OS/device including version:**
Windows 10 Pro 64-bit (10.0.17763)
NVIDIA RTX 2080 Ti (Driver Version: 430.86)
**Issue description:**
Godot is unable to find MSBuild when only VS 2019 Preview is installed.
This issue is closely related to #27269
The problem occurs because the vswhere executable is invoked without _-prerelease_, but VS 2019 Preview requires _-prerelease_ to be detected.
**Steps to reproduce:**
1. Uninstall all Visual Studio versions
2. Install VS 2019 Preview
3. Try to run any Godot solution that has a C# script.
**Example output**
```
"%ProgramFiles(x86)%\Microsoft Visual Studio\Installer\vswhere.exe" -latest -products * -requires Microsoft.Component.MSBuild
```
->
```
Visual Studio Locator version 2.6.7+91f4c1d09e [query version 2.1.1046.44959]
Copyright (C) Microsoft Corporation. All rights reserved.
```
```
"%ProgramFiles(x86)%\Microsoft Visual Studio\Installer\vswhere.exe" -prerelease -products * -requires Microsoft.Component.MSBuild
```
->
```
Visual Studio Locator version 2.6.7+91f4c1d09e [query version 2.1.1046.44959]
Copyright (C) Microsoft Corporation. All rights reserved.
instanceId: 2425be05
installDate: 28/05/2019 19:30:23
installationName: VisualStudioPreview/16.2.0-pre.1.0+28917.182
installationPath: C:\Program Files (x86)\Microsoft Visual Studio\2019\Preview
installationVersion: 16.2.28917.182
productId: Microsoft.VisualStudio.Product.Enterprise
productPath: C:\Program Files (x86)\Microsoft Visual Studio\2019\Preview\Common7\IDE\devenv.exe
state: 4294967295
isComplete: 1
isLaunchable: 1
isPrerelease: 1
isRebootRequired: 0
displayName: Visual Studio Enterprise 2019
description: Microsoft DevOps solution for productivity and coordination across teams of any size
channelId: VisualStudio.16.Preview
channelUri: https://aka.ms/vs/16/pre/channel
enginePath: C:\Program Files (x86)\Microsoft Visual Studio\Installer\resources\app\ServiceHub\Services\Microsoft.VisualStudio.Setup.Service
releaseNotes: https://go.microsoft.com/fwlink/?LinkId=660894#16.2.0-pre.1.0
thirdPartyNotices: https://go.microsoft.com/fwlink/?LinkId=660909
updateDate: 2019-05-28T17:30:23.8501785Z
catalog_buildBranch: d16.2
catalog_buildVersion: 16.2.28917.182
catalog_id: VisualStudioPreview/16.2.0-pre.1.0+28917.182
catalog_localBuild: build-lab
catalog_manifestName: VisualStudioPreview
catalog_manifestType: installer
catalog_productDisplayVersion: 16.2.0 Preview 1.0
catalog_productLine: Dev16
catalog_productLineVersion: 2019
catalog_productMilestone: Preview
catalog_productMilestoneIsPreRelease: True
catalog_productName: Visual Studio
catalog_productPatchVersion: 0
catalog_productPreReleaseMilestoneSuffix: 1.0
catalog_productSemanticVersion: 16.2.0-pre.1.0+28917.182
catalog_requiredEngineVersion: 2.2.10.34532
properties_campaignId:
properties_channelManifestId: VisualStudio.16.Preview/16.2.0-pre.1.0+28917.182
properties_nickname:
properties_setupEngineFilePath: C:\Program Files (x86)\Microsoft Visual Studio\Installer\vs_installershell.exe
``` | bug,topic:dotnet | low | Minor |
451,696,954 | flutter | [Material] Create a MaterialTapTargetSize for slider and consume in Range Slider | https://api.flutter.dev/flutter/material/MaterialTapTargetSize.html
| c: new feature,framework,f: material design,c: proposal,P3,team-design,triaged-design | low | Minor |
451,699,519 | flutter | [iOS 13] new fullscreen stack type route transition | While the old slide style transition still exists:

There's an entirely new style now:

| c: new feature,framework,a: fidelity,f: cupertino,f: routes,P1,team-design,triaged-design | medium | Critical |
451,700,059 | go | proposal: x/crypto/blake2s: add New(size, key) |
### What version of Go are you using (`go version`)?
<pre>
$ go version 1.12.2
</pre>
### Does this issue reproduce with the latest release?
yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env GOHOSTARCH="amd64" GOHOSTOS="linux"
</pre></details>
### What did you do?
I want to be able to export newDigest for blake2s. blake2s.New128 and blake2s.New256 are great, but I want to interface with one function that can take the hash size as an input (like blake2b.New).
### What did you expect to see?
I want something like blake2b.New, but for blake2s. One way to make that easy is to just export blake2s.newDigest.
### What did you see instead?
The function isn't exported from the package.
| Proposal,Proposal-Crypto | low | Minor |
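For reference, the API shape requested here (one constructor parameterized by digest size plus an optional key, like `blake2b.New`) is what Python's `hashlib` BLAKE2 binding already exposes; a sketch of the equivalent call, not the proposed Go API itself:

```python
import hashlib

def blake2s_new(size, key=b""):
    """Single constructor taking the digest size (1..32 bytes for
    BLAKE2s) and an optional key -- the shape proposed for Go's
    x/crypto/blake2s."""
    return hashlib.blake2s(digest_size=size, key=key)

h16 = blake2s_new(16)             # what blake2s.New128 covers today
h32 = blake2s_new(32, b"secret")  # keyed 256-bit digest
h16.update(b"hello")
h32.update(b"hello")
print(len(h16.digest()), len(h32.digest()))  # 16 32
```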
451,712,100 | rust | Unconditional_recursion lint not triggered in procedural macro | Hi,
I am working with edition 2018 and I tried this piece of code with 1.37 nightly, 1.35 and 1.32 (random pick) with the same result.
Sorry if the point of my issue is at the end, but I believe that the code is much clearer than I am.
The code is:
lib.rs:
```rust
extern crate proc_macro;
use proc_macro::TokenStream;
#[proc_macro_derive(TestMacro)]
pub fn derive_answer_fn(input: TokenStream) -> TokenStream {
let name = input.into_iter().nth(1).unwrap();
format!(
r#"
pub trait A {{
fn f() -> Self;
}}
#[deny(unconditional_recursion)]
impl A for {0} {{
fn f() -> Self {{
let a = 0;
<{0} as A>::f()
}}
}}"#, name.to_string()
).parse().unwrap()
}
```
test.rs:
```rust
use m::TestMacro;
#[derive(TestMacro)]
struct Struct;
fn main() {
Struct::f();
}
```
I thought my test code would behave like:
```rust
struct Struct;
pub trait A {
fn f() -> Self;
}
#[deny(unconditional_recursion)]
impl A for Struct {
fn f() -> Self {
<Struct as A>::f()
}
}
fn main() {
Struct::f();
}
```
giving this error:
```
error: function cannot return without recursing
```
instead it compiles and (obviously)
```
thread 'main' has overflowed its stack
fatal runtime error: stack overflow
```
Am I doing something wrong?
When does the linter check my code?
Is this normal behavior?
| A-lints,A-macros,T-compiler,C-bug | low | Critical |
451,740,384 | pytorch | Undefined symbols for architecture x86_64: "testing::internal::UntypedFunctionMockerBase::UntypedInvokeWith(void const*)" on Mac OS X | I get the following linker failure attempting to build on Mac:
```
[590/2104] Linking CXX executable bin/c10_either_test
FAILED: bin/c10_either_test
: && /usr/local/opt/ccache/libexec/g++ -Wno-deprecated -fvisibility-inlines-hidden -Wno-deprecated-declarations -Xpreprocessor -fopenmp -I/usr/local/include -DUSE_FBGEMM -O2 -fPIC -Wno-narrowing -Wall -Wextra -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -Wno-invalid-partial-specialization -Wno-typedef-redefinition -Wno-unknown-warning-option -Wno-unused-private-field -Wno-inconsistent-missing-override -Wno-aligned-allocation-unavailable -Wno-c++14-extensions -Wno-constexpr-not-const -Wno-missing-braces -Qunused-arguments -fcolor-diagnostics -faligned-new -fno-math-errno -fno-trapping-math -Wno-unused-private-field -Wno-missing-braces -Wno-c++14-extensions -Wno-constexpr-not-const -O2 -g -DNDEBUG -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.14.sdk -mmacosx-version-min=10.9 -Wl,-search_paths_first -Wl,-headerpad_max_install_names -rdynamic c10/test/CMakeFiles/c10_either_test.dir/util/either_test.cpp.o -o bin/c10_either_test -Wl,-rpath,/Users/jamesreed/pytorch/build/lib lib/libc10.dylib lib/libgmock.a lib/libgtest.a lib/libgtest_main.a lib/libgtest.a && :
Undefined symbols for architecture x86_64:
"testing::internal::UntypedFunctionMockerBase::UntypedInvokeWith(void const*)", referenced from:
(anonymous namespace)::ClassWithDestructorCallback::~ClassWithDestructorCallback() in either_test.cpp.o
(anonymous namespace)::OnlyMoveableClassWithDestructorCallback::~OnlyMoveableClassWithDestructorCallback() in either_test.cpp.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
[591/2104] Building CXX object c10/test/CMakeFiles/c10_logging_test.dir/util/logging_test.cpp.o
```
I've been told BUILD_BINARY=0 BUILD_TEST=0 gets around it. Trying that now, but seems like there's a root cause here to be diagnosed | module: build,module: internals,low priority,triaged,module: macos | low | Critical |
451,814,728 | TypeScript | You actually can get Infinity type if you're clever and other fun floating point oddities | As explained in [#15135](https://github.com/microsoft/TypeScript/issues/15135) and [#15351](https://github.com/microsoft/TypeScript/issues/15351), the Typescript team doesn't intend to support NaN and Infinity literals, due to complexity. Explicitly typing something as Infinity or -Infinity gives the error:
```
'Infinity' refers to a value, but is being used as a type here.
```
As it turns out, you actually can get the Infinity type fairly easily. Just use a really large number that parses to infinity as your type, like 1e999. -1e999 gives you negative Infinity. Fortunately, I don't believe you can get NaN from just parsing a double precision literal other than NaN itself. 0/0 is the simplest way to get NaN, but that's 2 literals and a divide.
There are a number of other floating point literal peculiarities resulting from round-off as well. I fully expect the TypeScript team to "won't fix" them:
```
type DoesntExtend = 1.000000000000001 extends 1 ? 'yes' : 'no' // Resolves to 'no'
type Extends = 1.0000000000000001 extends 1 ? 'yes' : 'no' // Resolves to 'yes'
```
Here's another.
```
type NegativeZero = -0 extends 0 ? 'yes' : 'no'; // Resolves to 'yes'
```
In this case, -0 and 0 are actually distinct numbers with different bit patterns. Both Chrome and Firefox dev consoles correctly echo -0 as -0 rather than 0. However, the IEEE-754 standard defines that 0 === -0, so I'm not surprised Typescript thinks they're the same type. The intent for this distinction is to convey the idea of limits as a numeric representation (not to mention floating point is sign-magnitude anyways, so why not embrace it). For example, 1/-Infinity and -8*0 both result in -0 rather than 0. For a type system, I'm not sure that you strictly need or want to maintain this distinction.
**TypeScript Version:** Repro'd on playground
**Search Terms:**
Infinity, NaN
**Code**
```
type Illegal = Infinity; // Doesn't compile
type Inf = 1e999; // Resolves to Infinity
type NegativeInf = -1e999; // Resolves to -Infinity
```
**Expected behavior:**
**Actual behavior:**
**Playground Link:**
[Here](https://www.typescriptlang.org/play/#src=type%20Foo%20%3D%200%3B%20%2F%2F%20Illegal%0D%0Atype%20Bar%20%3D%201e999%3B%20%2F%2F%20Resolves%20to%20Infinity%0D%0Atype%20Baz%20%3D%20-1e999%20%2F%2F%20Resolves%20to%20-Infinity%0D%0A)
**Related Issues:**
| Discussion | medium | Critical |
451,849,782 | rust | Accessing a TLS variable in its own constructor results in a stack overflow | Consider the following program ([Playground](https://play.rust-lang.org/?version=nightly&mode=release&edition=2018&gist=b906422f1f99a3a48f4f8935573efb80)):
```rust
struct Foobar {}
thread_local! {
static TLS: Foobar = {
let _ = TLS.try_with( |_| {} );
Foobar {}
};
}
fn main() {
TLS.with( |_| {} );
}
```
Currently this results in a stack overflow:
```
thread 'main' has overflowed its stack
fatal runtime error: stack overflow
Aborted (core dumped)
```
Stack trace from GDB:
```
#71449 0x0000555555559813 in foobar::TLS::__init::ha2c35ff7b5b5b1c5 () at src/main.rs:11
#71450 0x0000555555558543 in _$LT$std..thread..local..LocalKey$LT$T$GT$$GT$::init::hdc42f4a92fb0acaa (self=0x5555555822f8, slot=0x7ffff7b71760) at /rustc/b139669f374eb5024a50eb13f116ff763b1c5935/src/libstd/thread/local.rs:249
#71451 0x00005555555589bb in _$LT$std..thread..local..LocalKey$LT$T$GT$$GT$::try_with::h53a54e92c6f85798 (self=0x5555555822f8, f=...) at /rustc/b139669f374eb5024a50eb13f116ff763b1c5935/src/libstd/thread/local.rs:298
#71452 0x0000555555559813 in foobar::TLS::__init::ha2c35ff7b5b5b1c5 () at src/main.rs:11
#71453 0x0000555555558543 in _$LT$std..thread..local..LocalKey$LT$T$GT$$GT$::init::hdc42f4a92fb0acaa (self=0x5555555822f8, slot=0x7ffff7b71760) at /rustc/b139669f374eb5024a50eb13f116ff763b1c5935/src/libstd/thread/local.rs:249
#71454 0x00005555555589bb in _$LT$std..thread..local..LocalKey$LT$T$GT$$GT$::try_with::h53a54e92c6f85798 (self=0x5555555822f8, f=...) at /rustc/b139669f374eb5024a50eb13f116ff763b1c5935/src/libstd/thread/local.rs:298
#71455 0x0000555555559813 in foobar::TLS::__init::ha2c35ff7b5b5b1c5 () at src/main.rs:11
#71456 0x0000555555558543 in _$LT$std..thread..local..LocalKey$LT$T$GT$$GT$::init::hdc42f4a92fb0acaa (self=0x5555555822f8, slot=0x7ffff7b71760) at /rustc/b139669f374eb5024a50eb13f116ff763b1c5935/src/libstd/thread/local.rs:249
#71457 0x00005555555589bb in _$LT$std..thread..local..LocalKey$LT$T$GT$$GT$::try_with::h53a54e92c6f85798 (self=0x5555555822f8, f=...) at /rustc/b139669f374eb5024a50eb13f116ff763b1c5935/src/libstd/thread/local.rs:298
#71458 0x0000555555559813 in foobar::TLS::__init::ha2c35ff7b5b5b1c5 () at src/main.rs:11
#71459 0x0000555555558543 in _$LT$std..thread..local..LocalKey$LT$T$GT$$GT$::init::hdc42f4a92fb0acaa (self=0x5555555822f8, slot=0x7ffff7b71760) at /rustc/b139669f374eb5024a50eb13f116ff763b1c5935/src/libstd/thread/local.rs:249
#71460 0x000055555555879b in _$LT$std..thread..local..LocalKey$LT$T$GT$$GT$::try_with::h1ec5fdb93a6312a2 (self=0x5555555822f8, f=...) at /rustc/b139669f374eb5024a50eb13f116ff763b1c5935/src/libstd/thread/local.rs:298
#71461 0x0000555555558623 in _$LT$std..thread..local..LocalKey$LT$T$GT$$GT$::with::ha3235fa6d0c897c9 (self=0x5555555822f8, f=...) at /rustc/b139669f374eb5024a50eb13f116ff763b1c5935/src/libstd/thread/local.rs:242
#71462 0x00005555555597f0 in foobar::main::hc8c8fd2120b717f4 () at src/main.rs:17
#71463 0x0000555555558500 in std::rt::lang_start::_$u7b$$u7b$closure$u7d$$u7d$::h1a821f2568568e6c () at /rustc/b139669f374eb5024a50eb13f116ff763b1c5935/src/libstd/rt.rs:64
#71464 0x00005555555615a3 in {{closure}} () at src/libstd/rt.rs:49
#71465 do_call<closure,i32> () at src/libstd/panicking.rs:297
#71466 0x00005555555635ba in __rust_maybe_catch_panic () at src/libpanic_unwind/lib.rs:92
#71467 0x0000555555562086 in try<i32,closure> () at src/libstd/panicking.rs:276
#71468 catch_unwind<closure,i32> () at src/libstd/panic.rs:388
#71469 lang_start_internal () at src/libstd/rt.rs:48
#71470 0x00005555555584d9 in std::rt::lang_start::h8641b0ab47425e46 (main=0x5555555597e0 <foobar::main::hc8c8fd2120b717f4>, argc=1, argv=0x7fffffffe188) at /rustc/b139669f374eb5024a50eb13f116ff763b1c5935/src/libstd/rt.rs:64
#71471 0x000055555555986a in main ()
#71472 0x00007ffff7b99ce3 in __libc_start_main () from /usr/lib/libc.so.6
#71473 0x000055555555817e in _start ()
```
I'd expect the `try_with` to simply return `Err` instead of going into an infinite loop.
Platform: x86_64-unknown-linux-gnu
rustc 1.37.0-nightly (2019-06-03 6ffb8f53ee1cb0903f9d) | C-enhancement,A-thread-locals,T-libs | low | Critical |
451,860,275 | rust | `rustc` runs out of memory while trying to compile a large static (lazy_static!) HashMap | ```
Blocking waiting for file lock on build directory
Compiling crash v0.1.0 (/home/fred/Workspace/rtuts/crash)
error: Could not compile `crash`.
Caused by:
process didn't exit successfully: `rustc --edition=2018 --crate-name crash src/main.rs --color always --crate-type bin --emit=dep-info,link -C debuginfo=2 -C metadata=71bdac3a949cee3b -C extra-filename=-71bdac3a949cee3b --out-dir /home/fred/Workspace/rtuts/crash/target/debug/deps -C incremental=/home/fred/Workspace/rtuts/crash/target/debug/incremental -L dependency=/home/fred/Workspace/rtuts/crash/target/debug/deps --extern lazy_static=/home/fred/Workspace/rtuts/crash/target/debug/deps/liblazy_static-7468e7fc38fb612c.rlib` (signal: 9, SIGKILL: kill)
shell returned 101
```
My machine has 48GB of memory, yet still can't compile the example project, which creates a static hash map at compile time.
I tried both nightly and stable, same thing.
The input source is only 19 MB, containing only u32s and strings. This shouldn't happen; GCC could do this with no problem.

# Project [crash.zip](https://github.com/rust-lang/rust/files/3251295/crash.zip)
| A-borrow-checker,T-compiler,I-compilemem,C-bug | low | Critical |
451,866,872 | go | os: could not iterate over named pipes on Windows | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.12.5 windows/amd64
</pre>
### Does this issue reproduce with the latest release?
yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
set GOARCH=amd64
set GOBIN=
set GOCACHE=C:\Users\mars9\AppData\Local\go-build
set GOEXE=.exe
set GOFLAGS=
set GOHOSTARCH=amd64
set GOHOSTOS=windows
set GOOS=windows
set GOPATH=C:\Users\mars9\go
set GOPROXY=
set GORACE=
set GOROOT=C:\Go
set GOTMPDIR=
set GOTOOLDIR=C:\Go\pkg\tool\windows_amd64
set GCCGO=gccgo
set CC=gcc
set CXX=g++
set CGO_ENABLED=1
set GOMOD=
set CGO_CFLAGS=-g -O2
set CGO_CPPFLAGS=
set CGO_CXXFLAGS=-g -O2
set CGO_FFLAGS=-g -O2
set CGO_LDFLAGS=-g -O2
set PKG_CONFIG=pkg-config
set GOGCCFLAGS=-m64 -mthreads -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=C:\Users\mars9\AppData\Local\Temp\go-build413446826=/tmp/go-build -gno-record-gcc-switches
</pre></details>
### What did you do?
I wanted to iterate over Windows' named pipes using `os` package:
```golang
f, err := os.Open(`\\.\pipe`)
if err != nil {
log.Fatalf("Could not open pipe list: %s", err)
}
defer f.Close()
stat, err := f.Stat()
if err != nil {
log.Fatalf("Could not get file stats: %s", err)
}
log.Printf("Is Directory: %t", stat.IsDir()) // output: false
pipelist, err := f.Readdir(-1)
if os.IsNotExist(err) {
log.Print("Pipe list does not exist")
} else if err != nil {
log.Printf("Could not read pipe list: %s", err)
}
for _, pipe := range pipelist {
log.Printf("Pipe found: %s", pipe.Name())
}
```
### What did you expect to see?
The same result as in Python:
```python
for pipeName in os.listdir(r'\\.\pipe'):
print("Pipe found: %s" % pipeName)
```
```
Pipe found: InitShutdown
Pipe found: lsass
Pipe found: ntsvcs
Pipe found: scerpc
...
```
### What did you see instead?
```
Is Directory: false
Pipe list does not exist
``` | OS-Windows,NeedsInvestigation | low | Critical |
451,877,554 | vue | Stringify Vue instance | ### Version
2.6.10
### Reproduction link
[https://codesandbox.io/s/vue-stringify-instance-2h78n](https://codesandbox.io/s/vue-stringify-instance-2h78n)
### Steps to reproduce
Open the link; the error is reproduced right away as the code tries to serialize the Vue instance.
### What is expected?
Being able to serialize the Vue instance.
### What is actually happening?
It is not possible, as the Vue prototype is missing a toJSON method.
---
The code is using `telejson` as the stringify library in order to remove circular references. It is the same utility used by Storybook 5, which will raise those errors in different scenarios, for example when using the Action addon and passing a Vue instance as an argument.
| discussion | low | Critical |
451,948,183 | godot | Clicking F5 is causing many save request calls to the plugin |
**Godot version:**
3.1.1
**OS/device including version:**
Ubuntu 18.04, 64bit, gtx1060
**Issue description:**
I noticed that when a scene has unsaved changes, pressing F5 will cause many `save_external_data` calls in the plugin.
It can be a problem in scenarios where saving the plugin's state takes a little longer (the save operation takes many times longer than it should).
From my short investigation, it is probably a problem related to how keyboard shortcuts are implemented in the editor. The place I was looking at was the `BaseButton._unhandled_input()` function (where the stack trace took me when I placed a breakpoint at the line with the `save_external_data` call on the engine side (`EditorData::save_editor_external_data()`)).
**Steps to reproduce:**
Simple way:
1. Download sample project
2. Run Godot from terminal window and open the sample project
3. Open 'SomeScene.tscn'
4. Move spatial somewhere and click F5
5. In terminal window observe multiple lines in a row starting with `SaveIssue: Plugin is saving external data`
Manual way:
1. Run Godot from console
2. Create new plugin
3. Put this script in the main script file of the plugin:
```
tool
extends EditorPlugin
func save_external_data() -> void:
print("SaveIssue: Plugin is saving external data at msec time: ", OS.get_ticks_msec())
```
4. Create a new scene, put a Spatial inside, save it
5. Each time when you want to reproduce the problem, make local changes to the scene by moving spatial somewhere and click F5 on the keyboard, and take a look at terminal window (output inside editor is cleared at the moment of starting the sample)
**Minimal reproduction project:**
[PluginSaveIssue.zip](https://github.com/godotengine/godot/files/3252131/PluginSaveIssue.zip)
| bug,topic:editor,confirmed,topic:plugin | low | Minor |
451,955,331 | flutter | Expose the Google Maps Geometry Library in the maps plugin | ## Use case
I need to calculate the area occupied by a polygon on a Google map.
I understand that the maps plugin is WIP, but this would still be nice to have.
While I currently need only one function, it probably makes sense to add support for the entire library:
https://developers.google.com/maps/documentation/javascript/reference/geometry#spherical.computeArea
## Proposal
Add support for the Google Maps Geometry Library as a part of Maps plugin or a separate one. | c: new feature,p: maps,package,c: proposal,team-ecosystem,P3,triaged-ecosystem | low | Major |
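For a rough picture of what `spherical.computeArea` computes, here is a hedged Python sketch using Girard's theorem (spherical excess) for a spherical triangle — this is not Google's implementation, and the 6378137 m default radius is an assumption based on the Maps geometry docs:

```python
import math

def _vec(lat, lng):
    """Lat/lng in degrees -> unit vector on the sphere."""
    phi, lam = math.radians(lat), math.radians(lng)
    return (math.cos(phi) * math.cos(lam),
            math.cos(phi) * math.sin(lam),
            math.sin(phi))

def _angle(prev, vertex, nxt):
    """Angle at `vertex` between the great-circle arcs to prev and nxt."""
    def cross(u, v):
        return (u[1]*v[2] - u[2]*v[1],
                u[2]*v[0] - u[0]*v[2],
                u[0]*v[1] - u[1]*v[0])
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    def unit(u):
        n = math.sqrt(dot(u, u))
        return tuple(a / n for a in u)
    n1, n2 = unit(cross(vertex, prev)), unit(cross(vertex, nxt))
    return math.acos(max(-1.0, min(1.0, dot(n1, n2))))

def spherical_triangle_area(p1, p2, p3, radius=6378137.0):
    """Girard: area = (sum of angles - pi) * R^2 (convex triangles only)."""
    a, b, c = _vec(*p1), _vec(*p2), _vec(*p3)
    excess = _angle(c, a, b) + _angle(a, b, c) + _angle(b, c, a) - math.pi
    return excess * radius * radius

# The octant triangle (0,0)-(0,90)-(90,0) covers 1/8 of the unit sphere:
print(spherical_triangle_area((0, 0), (0, 90), (90, 0), radius=1.0))
# ~= pi/2, i.e. (4*pi) / 8
```

A general polygon version (and a robust one) would need more care; the sketch just shows why a spherical, rather than planar, area routine is being requested.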
452,085,158 | TypeScript | [Feature Request] Abstract Interfaces with Undeclared Properties | Edit: changed `partial` to `abstract` because I think it makes more sense.
## Search Terms
Abstract Mapped Types
Partial Types
Meta Types
Super Types
## Suggestion
I'd like to propose an expansion to the TS syntax that allows you to create `abstract` interfaces with `undeclared` properties which force child types to declare those properties with any value, no matter what their value type may be in the child type. This can be helpful for identifying code which maps two objects to one another, where a developer wishes for TypeScript to warn of properties that are missing between two types. Optional properties should be allowed in the concrete type, but I don't think optionality should be defined by the abstract interface.
Both `abstract` and `undeclared` are just first-draft words, and I think `abstract` makes sense because it is the interface equivalent of `abstract class`.
## Use Cases
Where object mapping is used. For example, translating query string parameters to form values. The query string will always return parameters as a string or string array in the case where there are multiple parameters.
`example.com/?name=john%20doe&age=21`
```
abstract interface IFormQueryStringMap {
name: undeclared;
age: undeclared;
}
interface IFormQueryString extends IFormQueryStringMap {
name?: string;
age?: string;
}
interface IFormValues extends IFormQueryStringMap {
name: string;
age: number;
}
const selectFormValuesFromQueryString = (queryString: IFormQueryString): IFormValues => ({
name: queryString.name || '',
age: Number(queryString.age),
});
```
On loading the page, the app will unserialize `location.search -> IFormQueryString -> IFormValues` and populate the form. On submitting the form, it will translate from `IFormValues -> IFormQueryString` and serialize to `location.search`. I use patterns like this often, and they can be subject to significant manual review. I think TypeScript is in a unique position to solve a mapping problem like this. It will not catch the case where a developer forgets to map `IFormValues['age']` to `IFormQueryString['age']`, though.
## Examples
```
abstract interface AbstractInterface {
  prop1: undeclared;
  prop2: undeclared;
  prop3: undeclared;
  prop4: string; // acts normally
  prop5?: string; // acts normally
  propError?: undeclared; // error: cannot assign optionality to undeclared properties because it would declare undefined
}

abstract interface AnotherAbstractInterface extends AbstractInterface; // OK

interface ConcreteInterface extends AbstractInterface {
  // error: prop1 is undeclared
  prop2: string;
  prop3?: string;
}

type ConcreteType = AbstractInterface; // error: prop1 is undeclared
type ConcreteType = AbstractInterface & {
  prop1: string;
  prop2: number;
  prop3: object;
} // OK

type Prop1 = AbstractInterface['prop1']; // error: AbstractInterface['prop1'] is undeclared
type Prop2 = ConcreteInterface['prop2']; // string
type Prop3 = ConcreteInterface['prop3']; // string | undefined
type Prop4 = ConcreteInterface['prop4']; // string
type Prop5 = ConcreteInterface['prop5']; // string | undefined
```
## Can this be done today?
Yes, but only by emitting runtime JavaScript output.
```
interface IFormQueryString {
  prop1: string;
  prop2: string;
  prop3?: string;
  prop4: string;
  prop5?: string;
}

type FormQueryMap = Record<keyof IFormQueryString, any>;

const formQuery = {
  // Error on the formQueryMap assignment below: prop1 is missing
  prop2: '',
  prop3: '' as string | undefined,
  prop4: '',
  prop5: '' as string | undefined,
};

// this assignment makes sure that formQuery has all keys of IFormQueryString
const formQueryMap: FormQueryMap = formQuery;

export type FormQuery = typeof formQuery;
```
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [ ] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
For this last one, I'm not certain. I think it adds type safety, but it also might be too "meta". It might also be a case for decorators or annotations or something else. | Suggestion,Awaiting More Feedback | low | Critical |
452,100,757 | flutter | Investigate Swift Package Manager for Swift plugins | Since Swift has a 'first-party' package manager in the form of Swift Package Manager, it would be interesting to evaluate whether we could use that, instead of CocoaPods, for Swift plugins. That would allow for gradual migration away from CocoaPods over time assuming plugin development eventually shifts toward Swift, rather than having to do an expensive, ecosystem-breaking one-time migration.
According to the docs, SPM packages can contain C/ObjC/C++/ObjC++ code, which substantially mitigates the bootstrapping problem, since people could presumably make SPM packages that wrap existing non-Swift libraries for use in Swift plugins. | c: new feature,platform-ios,tool,platform-mac,a: existing-apps,P2,team-ios,triaged-ios | low | Critical |
452,126,903 | flutter | Unicode input should be indicated. | Ctrl+Shift+U (on certain operating systems) normally would bring up an underlined "u" in a text field to indicate it's awaiting a Unicode entry. This is not indicated on Flutter, despite it taking the input.

 | a: text input,c: new feature,framework,f: material design,platform-windows,platform-linux,a: desktop,P3,team-text-input,triaged-text-input | low | Minor |
452,135,929 | neovim | Completion item kinds standard | I thought this was appropriate since this is a discussion that is in interest of all gui implementations, let me know if that's not the case.
I noticed when [implementing](https://github.com/vhakulinen/gnvim/pull/49) coc.nvim support for completion icons for gnvim, that while there's a brief definition on `|complete-items|` with regard to the kind that is passed, most major implementations do not follow or extend it.
That would not be an issue if all used the same definitions, but that does not seem to be the case.
Currently each plugin must have its logic handled by the GUI.
These are currently the definitions for the major LSP implementations:
- [coc.nvim](https://github.com/neoclide/coc.nvim/blob/909710fddb04d383e5546b0f869c44f395a80d02/src/languages.ts#L143-L167) The only one that sends a single character by default, it highlights a possible issue with this restriction (max 52 kinds on ascii word chars)
- [vim-lsp](https://github.com/prabirshrestha/vim-lsp/blob/3441fa8c2d27b46a510cd9c17cfa1bde04ca4a6e/autoload/lsp/ui/vim/utils.vim#L39-L56)
- [LanguageClient-neovim](https://github.com/autozimu/LanguageClient-neovim/blob/0ac444affdff8db699684aa4cf04c2cb0daf0286/rplugin/python3/denite/lsp/protocol.py#L48-L55)
Not sure how feasible it is to enforce a more thorough standard (or if it's something Neovim should concern itself with in the first place), but if achieved, it would spare GUI developers from writing ad-hoc logic for each LSP plugin. | documentation | low | Minor |
452,153,190 | flutter | Support Android Development from 32-bit Windows | At this time we have no plans to add 32-bit support for Windows. I'm filing this bug as a cleaner place to track this request (e.g. upvote if you want this) as well as the blocking issues were someone to want to add support in the future.
We have several other existing 32-bit windows bugs, all of which are closed, e.g. https://github.com/flutter/flutter/issues/22638 https://github.com/flutter/flutter/issues/21169 https://github.com/flutter/flutter/issues/14925
As originally captured in https://github.com/flutter/flutter/issues/14925#issuecomment-415864259, blockers are at least:
1. flutter/engine has no 32-bit build rules currently, including gen_snapshot
2. gen_snapshot is architecture dependent, unclear if a 32-bit gen_snapshot could build for 64-bit Android (which is a requirement for the Play Store as of this August).
Steam stats show 32-bit Windows as less than 2% market share:
https://store.steampowered.com/hwsurvey
Given the low market share numbers and [other vendors dropping support for 32-bit](https://www.extremetech.com/computing/267180-nvidia-ends-support-for-32-bit-operating-systems), it seems highly unlikely we'd add 32-bit Windows development support for Flutter.
I welcome thoughts from others, or additional/new data as to why we should add 32-bit development support? | platform-android,engine,platform-windows,P3,team-android,triaged-android | medium | Critical |
452,162,726 | flutter | Driver tests should fail when method channel calls are failed. | The MethodChannel code swallows exceptions and logs them to an error here:
https://github.com/flutter/engine/blob/661c24e4f359c60915a21c285d1d34777f0ccb30/shell/platform/android/io/flutter/plugin/common/MethodChannel.java#L205
This resulted in a recent regression (fixed in flutter/engine#9185) going unnoticed for over a day, even though we had a driver test triggering the bad code (the exception was just logged).
We should make sure tests fail when this happens (same for `IncomingMethodCallHandler#onMessage`). | platform-android,engine,t: flutter driver,a: platform-views,P2,team-android,triaged-android | low | Critical |
452,212,001 | flutter | IOS back gesture dismisses top route instead of its own route | When I make a back gesture on IOS and it is not the top route, the top route gets popped instead of the route where the back gesture originated and the back gesture animation gets stuck.
- See [video](https://drive.google.com/file/d/1BkcxZejNGJHUzazxTb9205c66yunXxwQ/view?usp=sharing)
- After the back gesture animation gets stuck, I'm trying to click everywhere. Nothing responds.
- For more details, see [#28237](https://github.com/flutter/flutter/issues/28237#issue-412698362)
```
[✓] Flutter (Channel beta, v1.6.3, on Mac OS X 10.14.5 18F132, locale en-BR)
[✓] Android toolchain - develop for Android devices (Android SDK version 28.0.3)
[✓] iOS toolchain - develop for iOS devices (Xcode 10.2.1)
[✓] Android Studio (version 3.4)
[✓] Connected device (2 available)
• No issues found!
```
| platform-ios,framework,f: material design,f: cupertino,f: routes,f: gestures,has reproducible steps,P2,found in release: 3.3,found in release: 3.6,team-ios,triaged-ios | low | Critical |
452,224,873 | angular | ReactiveForms checkbox 'disabled' and 'checked' are not predictable when using FormBuilder.group() | # 🐞 bug report
### Affected Package
@angular/forms
### Is this a regression?
Not sure, haven't tried to do this before
### Description
When using FormBuilder to create a FormGroup, I have noticed that inputs of type `checkbox` cannot be set to both `disabled` and `checked`. `disabled` will work on its own, but when `checked` is added as well, I see the box gets checked but not disabled.
I understand that checkbox `checked` states are linked to their `value` property (somehow or other) but this is unintuitive and I can't find anything for the life of me about how it is supposed to work or be configured. I noticed that adding a falsey value for a checkbox control config also causes a disabled state.
Conversely, I also noted that when a checkbox is configured w/o a value and then set to `disabled`, it actually stays enabled _and checks the checkbox_. Not sure what's up but maybe these properties are in conflict somehow?
If there _is_ documentation around this, it's _really really_ hard to find. If this is a bug (which it looks to be, I've been messing with this for quite some time now), then hopefully this issue brings it to light.
Manual configuration after the form group creation works (i.e. manually calling `disable()`) but this is not desirable.
**A CLARIFICATION**
This is not a blocking bug. Checkbox inputs can still be configured using `value` and `disabled`. However I find this counter-intuitive. Configuring native inputs with `type="checkbox"` involves giving them a value _and_ the input itself having a checked/unchecked _state_, which are two different things. I think Angular conflates the two, for what I can imagine was considered "ease of use", but given that this is not how you configure native checkbox elements, it is frustrating, confusing, and worse, takes a long time to debug.
## 🔬 Minimal Reproduction
https://stackblitz.com/edit/angular-issue-repro2-3sy5kw
## 🔥 Exception or Error
N/A
## 🌍 Your Environment
**Angular Version:**
<pre><code>
Angular CLI: 7.3.9
Node: 10.15.0
OS: win32 x64
Angular: 7.2.15
... animations, common, compiler, compiler-cli, core, forms
... language-service, platform-browser, platform-browser-dynamic
... router, service-worker
Package Version
-----------------------------------------------------------
@angular-devkit/architect 0.13.9
@angular-devkit/build-angular 0.13.9
@angular-devkit/build-optimizer 0.13.9
@angular-devkit/build-webpack 0.13.9
@angular-devkit/core 7.3.9
@angular-devkit/schematics 7.3.9
@angular/cdk 7.3.7
@angular/cli 7.3.9
@angular/flex-layout 7.0.0-beta.19
@angular/material 7.3.7
@ngtools/webpack 7.3.9
@schematics/angular 7.3.9
@schematics/update 0.13.9
rxjs 6.5.2
typescript 3.2.4
webpack 4.29.0
</code></pre>
**Anything else relevant?**
FYI the version in stackblitz also reproduces the issue.
| type: bug/fix,freq2: medium,area: forms,state: confirmed,P3 | low | Critical |
452,272,356 | PowerToys | Prevent virtual keyboard from being dismissed when touching physical keyboard | Sometimes we need the virtual keyboard for typing in languages that use different scripts such as Korean or Russian, but we still keep using the physical keyboard for shortcuts such as Alt+Tab or for the numbers, etc.
Any time we hit any real key, the virtual keyboard is dismissed (Windows 10)
Maybe a pin function could help. There is plenty of space there to add one more icon. | Idea-Enhancement,Product-Tweak UI Design | low | Major |
452,273,987 | terminal | Feature Request: Add tab indicator to show number of panes in tab | # Summary of the new feature/enhancement
Migrated from ADO. Need some kind of indicator for number of panes in tab. Just throwing an idea out there but why not make it look kinda like this (obviously ignore the fact that this is a browser):

Related to #1000
Pinging @cinnamon-msft & @zadjii-msft | Area-UserInterface,Product-Terminal,Issue-Task | low | Major |
452,286,371 | rust | Intrinsic for `type_name_of_id` to power a better `impl Debug for TypeId`? | Currently `TypeId`s have uninformative derived `Debug` impls:
```rust
fn main() {
    println!("{:?}", std::any::TypeId::of::<usize>());
}
```
```
Compiling playground v0.0.1 (/playground)
Finished dev [unoptimized + debuginfo] target(s) in 0.91s
Running `target/debug/playground`
TypeId { t: 8766594652559642870 }
```
This results in fairly poor Debug output for dynamic types like `anymap`.
I think it could be quite nice for debugging/logging/etc to allow printing the type name from a `TypeId` in the Debug impl. It would provide an out of the box improvement to debugging existing dynamic typing tools, and IIUC the contents of Debug impls in the standard library are not considered stable so there's neither a breaking change here nor a de facto stabilization of the type_name representation.
I assume this would need to rely on some unstable intrinsic being exposed to get the type_name of an ID at run time, but I'm not really aware what would be needed.
Thoughts? cc @oli-obk as we had discussed this a bit on IRC. | T-lang,T-libs-api,needs-fcp | low | Critical |
452,314,368 | pytorch | collect_env ignores conda environment | Hi,
https://github.com/pytorch/pytorch/blob/95eb9339c10ffc4d74312f35ac15fffa61596de1/torch/utils/collect_env.py#L114
I find this function `get_running_cuda_version` ignores the existence of cudatoolkit in conda env, instead it uses `nvcc --version` , the system wide installed CUDA as a reference to get the cuda version.
This may be confusing if someone has cudatoolkit with a different cuda version installed.
Can it be improved? Thanks. | module: build,module: docs,low priority,module: collect_env.py,triaged | low | Minor |
452,341,744 | flutter | [go_router] Figure out how to use go_router to pop the target page | For example
1. request 1 runs in the background, and a dialog A is displayed after the end of the request
2. request 2 runs in the foreground and displays dialog B immediately; dialog B is closed when the request ends
The problem is that dialog A is displayed on top of dialog B, so calling `Navigator.pop` closes dialog A, while what I actually need to close is dialog B.
So how can I pop a specific target page?
```dart
import 'dart:async';

import 'package:flutter/material.dart';
import 'package:flutter/cupertino.dart';

void main() async {
  runApp(MyApp());
}

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: HomePage(),
    );
  }
}

class HomePage extends StatefulWidget {
  @override
  _HomePageState createState() => _HomePageState();
}

class _HomePageState extends State<HomePage> {
  StreamController<bool> sc = StreamController();

  Stream<bool> get stream => sc.stream;

  @override
  void initState() {
    stream.listen((show) {
      if (show) {
        showDialogB();
      } else {
        Navigator.of(context).pop();
      }
    });
    request1();
    request2();
    super.initState();
  }

  void request1() {
    Timer(Duration(seconds: 3), () {
      showDialogA();
    });
  }

  void showDialogA() {
    showDialog(
        context: context,
        builder: (context) {
          return Container(
            height: 100,
            width: 100,
            color: Colors.white,
            child: Center(
              child: Text(
                "New version",
                style: TextStyle(color: Colors.black),
              ),
            ),
          );
        });
  }

  void showDialogB() {
    showDialog(
        context: context,
        builder: (context) {
          return Container(
            height: 100,
            width: 100,
            color: Colors.white,
            child: Center(
              child: CupertinoActivityIndicator(),
            ),
          );
        });
  }

  void request2() {
    sc.add(true);
    Timer(Duration(seconds: 5), () {
      // dismiss dialog B
      sc.add(false);
    });
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: Text("111"),
      ),
    );
  }
}
``` | c: new feature,f: routes,package,P3,p: go_router,team-go_router,triaged-go_router | low | Major |
452,366,094 | TypeScript | Show documentation for literals in intellisense |
## Suggestion
When filling in a value where the expected type is a union of literals, intellisense is able to suggest the possible values.
However, it does NOT show the documentation associated with each literal:

This makes me tend to use enums instead, which have a more verbose syntax and require an extra import.
## Use Cases
Building and documenting APIs in TypeScript will be nicer if we could just document the literals in a union.
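For comparison, the enum workaround mentioned above does let each value carry its own documentation today, at the cost of extra syntax (this example is mine, not from the proposal):

```typescript
// Each string-enum member can carry a doc comment that intellisense surfaces;
// a plain union of string literals cannot do this today.
enum LogLevel {
  /** Only unrecoverable failures. */
  Error = 'error',
  /** Recoverable problems worth surfacing. */
  Warn = 'warn',
  /** Routine diagnostic output. */
  Info = 'info',
}

function log(level: LogLevel, message: string): string {
  return `[${level}] ${message}`;
}

console.log(log(LogLevel.Warn, 'disk nearly full')); // [warn] disk nearly full
```

Hovering `LogLevel.Warn` at a call site shows its doc comment, which is the behavior this suggestion asks for on union members.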
## Search Terms
- intellisense for string literals
- documentation for string literals | Suggestion,Awaiting More Feedback | low | Minor |
452,410,477 | pytorch | how libtorch can work with tensor data as same as pytorch | I have a question about data manipulation with the C++ in libtorch.
For example, in pytorch I could deal with tensor like this:
img[:, :10, :] = 0
but with libtorch I don't know how to do as same as pytorch,if I use for circle,then it will be very slow,I want to know if there is any function with libtorch can do this,thank you very much.
| module: docs,module: cpp,low priority,triaged | low | Major |
452,438,107 | neovim | Prepend command with range `<,`> when in Visual mode, and with '<,'> when in Visual Line mode. | In Vim/Neovim, when I visually select a word (e.g. in the middle of the line) and start typing command, the editor is kind enough to prepend the range of the visual selection to the command:
```
:'<,'>
```
However, `'<,'>` gives me the range of lines instead of range of the actual visual selection, and I can't operate on the word (e.g. use Unix command `rev` to reverse it), because the operation I do affects the whole line.
To me in this case it would be more sensible if the editor would prepend ``:`<,`>`` instead of `:'<,'>` by default, because I think this would open many new possibilities with fewer keystrokes. At the moment e.g. ``:`<,`>!rev`` doesn't work, for unknown reason to me, but one can still use `` `< `` to jump to the beginning of current or previous visual selection, and `` `> `` to jump to the end of selection.
I think prepending `:'<,'>` to the command would definitely make sense in the visual-line selection. | enhancement | low | Minor |
452,439,185 | ant-design | Menu组件在vertical模式时,希望可以配置子菜单的展开方向 | - [x] I have searched the [issues](https://github.com/ant-design/ant-design/issues) of this repository and believe that this is not a duplicate.
### What problem does this feature solve?
When using the `<DropDown>` component, multi-level submenus can expand to the left to cover different situations; currently the Menu's submenus can only expand to the right.
### What does the proposed API look like?
```jsx
<Menu mode={'vertical'} unfoldDirection={'left' | 'right'}>
  <Menu.Item>
    xxx
  </Menu.Item>
  ...
</Menu>
```
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | Inactive | low | Minor |
452,440,055 | terminal | Bug Report: ColorTool and changing colours causes strange behaviour since Windows 10 1903 | # Environment
```none
Microsoft Windows [Version 10.0.18362.30]
```
# Steps to reproduce
You should be able to use any color scheme, but I'll provide mine specifically below.
**Fotsies Colors.ini**
```ini
[table]
DARK_BLACK = 30,30,30
DARK_BLUE = 0,0,128
DARK_GREEN = 0,128,0
DARK_CYAN = 0,128,128
DARK_RED = 128,75,75
DARK_MAGENTA = 128,0,128
DARK_YELLOW = 238,237,240
DARK_WHITE = 192,192,192
BRIGHT_BLACK = 128,128,128
BRIGHT_BLUE = 0,0,255
BRIGHT_GREEN = 0,255,0
BRIGHT_CYAN = 0,255,255
BRIGHT_RED = 255,75,75
BRIGHT_MAGENTA = 255,0,255
BRIGHT_YELLOW = 255,255,0
BRIGHT_WHITE = 255,255,255
[screen]
FOREGROUND = DARK_WHITE
BACKGROUND = DARK_BLACK
[popup]
FOREGROUND = DARK_CYAN
BACKGROUND = DARK_BLACK
```
1. Open PowerShell as a regular user (I'm using 5.1 that comes with Windows 10)
2. Obtain the latest ColorTool release and extract it
3. Place the scheme above in the schemes sub-directory under ColorTool.exe
4. Load the scheme: `.\ColorTool.exe 'Fotsies Colors.ini'`
5. Notice the scheme is not applied correctly and that colors are not consistent as they should be
# Expected behavior
As per the previous stable Windows release, colors should be applied correctly and continue to function while using the terminal
# Actual behavior
Best way to demonstrate this is with some screenshots:

I tested this with a completely fresh install of windows 10 1903 just now to confirm it's nothing that I've done wrong. I can confirm that everything works perfectly in the previous October 2018 Windows 10 release.
Honestly, the terminal in this new release is unusable for me given all the problems I've been finding and I don't really have much choice but to reinstall the older Windows release as I depend on PowerShell. Unless of course any workarounds may be suggested?
Any help is greatly appreciated. | Product-Colortool,Help Wanted,Issue-Bug,Needs-Tag-Fix | low | Critical |
452,452,777 | vscode | Expect minimap functionality similar to Xcode 11 | ### Function definitions are all shown floating in the minimap, so they can be located quickly.

| feature-request,editor-minimap | medium | Major |
452,464,387 | go | proposal: x/crypto/blake2b,x/crypto/blake2s: Implement personalisation and salting | The current implementations of BLAKE2b and BLAKE2s support both variable output lengths and keyed BLAKE2 ([blake2b](https://github.com/golang/crypto/blob/20be4c3c3ed52bfccdb2d59a412ee1a936d175a7/blake2b/blake2b.go#L107-L108), [blake2s](https://github.com/golang/crypto/blob/20be4c3c3ed52bfccdb2d59a412ee1a936d175a7/blake2s/blake2s.go#L72-L73)), but do not yet enable setting the personalisation and salt sections of the parameter block. Personalisation in particular is becoming more commonplace in BLAKE2's usage within cryptographic protocols. It would be beneficial both inherently and for interoperability if the BLAKE2 implementations provided an interface for instantiating the digest state with a personalisation string and/or a salt. | Proposal,Proposal-Crypto | medium | Major |
452,493,821 | pytorch | Implementation of Group equivariant convolutions | ## 🚀 Feature
An implementation of the group-equivariant convolutions defined by Cohen et al. in the paper 'Group Equivariant Convolutional Networks' (ICML 2016).
Specifically, an implementation of the group-convolution over the group of translations and rotations of multiples of 90 degrees at any centre, and the same group extended by reflections.
## Motivation
Group-equivariant convolutions (G-CNNs) extend the equivariance of convolutions beyond translations. Cohen et al. argue in their paper that the parameter sharing in G-CNNs allows for more efficient use of parameters.
Further, limited experiments conducted by Adam Bielski in his github repository https://github.com/adambielski/pytorch-gconv-experiments, suggest that g-cnns improve upon the performance of regular cnn networks in at least some cases.
I'm not sure if the existing pytorch implementations are efficient as they take considerably more time to run than nn.Conv2d.
To sum up, it would be awesome if you guys could create nn.GConv2d() for group-equivariant convolutions on at least the two groups mentioned before.
## Additional context
TS cohen's original implementation of group-equivariant convolutional networks: https://github.com/tscohen/GrouPy | feature,module: nn,low priority,triaged | low | Major |
452,596,549 | flutter | Focus/RawKeyboardListener widgets don't receive events in some cases | I tried to create a simple flutter app which used a RawKeyboardListener. It worked well under the android emulator using the non-Fuchsia flutter workflow, but when I moved it to Fuchsia and ran it as a Fuchsia app, it did not receive keyboard events. | customer: fuchsia,framework,P2,team-framework,triaged-framework | low | Minor |
452,609,323 | rust | Lint control attributes (allow/deny/etc) have no effect on lifetime and const parameters | ```rust
fn f<#[allow(warnings)] foo>() {} // warning: type parameter `foo` should have an upper camel case name
```
Expected behavior - no warning, actual behavior - warning.
cc https://github.com/rust-lang/rust/issues/61238, which is a similar issue | A-attributes,A-lints,T-compiler,C-bug | low | Minor |
452,614,102 | go | x/crypto/ssh: semantics around running session.Wait() after calling session.Run(), when EOF messages are sent | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
1.12.5
</pre>
### Does this issue reproduce with the latest release?
I am using a vendored version of the library with commit `e84da0312774c21d64ee2317962ef669b27ffb41`
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/vagrant/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/vagrant/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/go"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build896755704=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
<!--
If possible, provide a recipe for reproducing the error.
A complete runnable program is good.
A link on play.golang.org is best.
-->
I will work on providing an exact reproducer of this via a small go program, but do not have this on hand immediately. I can do this if you like. My question is more about how to properly use the SSH library, as I feel like I am misunderstanding how it works.
I am wondering about the semantics behind running `session.Wait()` after calling `session.Run()`.
It appears that `session.Run()` calls `session.Start()` followed by `session.Wait()`.
I've found that if I try calling `session.Wait()` again after calling `session.Run()`, my code will hang infinitely. Is this because there is nothing to wait upon after `session.Run()` terminates, since the remote command has exited, so `session.Wait()` is waiting for something, while nothing is being sent?
I also am confused about how sessions are closed. I observe when running with `debugMux ` set to `true` locally, that my session gets sent a `channelEOFMsg` after I have called `session.Run()`, but before I call `session.Close()`. When I try to close the session, I get the error `failed to close session: EOF`. I would expect that after calling `Run`, that I'd be able to close the session gracefully. Who is sending these EOF's? Is that the expected behavior for running a command on a session, in that the session will get an EOF when the command is done running?
Output from running with `debugMux=true`:
```
--------------> session.Run() <----------------
2019/06/05 08:17:03 send global(2): ssh.channelOpenMsg{ChanType:"session", PeersID:0x2, PeersWindow:0x200000, MaxPacketSize:0x8000, TypeSpecificData:[]uint8(nil)}
2019/06/05 08:17:03 decoding(2): 91 &ssh.channelOpenConfirmMsg{PeersID:0x2, MyID:0x0, MyWindow:0x0, MaxPacketSize:0x8000, TypeSpecificData:[]uint8{}} - 17 bytes
2019/06/05 08:17:03 send(2): ssh.channelRequestMsg{PeersID:0x0, Request:"exec", WantReply:true, RequestSpecificData:[]uint8{0x0, 0x0, 0x0, 0x56, 0x6b, 0x75, 0x62, 0x65, 0x63, 0x74, 0x6c, 0x20, 0x67, 0x65, 0x74, 0x20, 0x70, 0x6f, 0x64, 0x73, 0x20, 0x2d, 0x2d, 0x61, 0x6c, 0x6c, 0x2d, 0x6e, 0x61, 0x6d, 0x65, 0x73, 0x70, 0x61, 0x63, 0x65, 0x73, 0x20, 0x2d, 0x6f, 0x20, 0x6a, 0x73, 0x6f, 0x6e, 0x70, 0x61, 0x74, 0x68, 0x3d, 0x27, 0x7b, 0x2e, 0x69, 0x74, 0x65, 0x6d, 0x73, 0x5b, 0x2a, 0x5d, 0x2e, 0x6d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x2e, 0x64, 0x65, 0x6c, 0x65, 0x74, 0x69, 0x6f, 0x6e, 0x54, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x7d, 0x27}}
2019/06/05 08:17:03 decoding(2): 93 &ssh.windowAdjustMsg{PeersID:0x2, AdditionalBytes:0x200000} - 9 bytes
2019/06/05 08:17:03 decoding(2): 99 &ssh.channelRequestSuccessMsg{PeersID:0x2} - 5 bytes
-------------> where is this EOF coming from? <---------------
2019/06/05 08:17:03 send(2): ssh.channelEOFMsg{PeersID:0x0}
-----------> close session here <-----------------
2019/06/05 08:17:03 decoding(2): 98 &ssh.channelRequestMsg{PeersID:0x2, Request:"exit-status", WantReply:false, RequestSpecificData:[]uint8{0x0, 0x0, 0x0, 0x0}} - 25 bytes
2019/06/05 08:17:03 decoding(2): 96 &ssh.channelEOFMsg{PeersID:0x2} - 5 bytes
2019/06/05 08:17:03 decoding(2): 97 &ssh.channelCloseMsg{PeersID:0x2} - 5 bytes
2019/06/05 08:17:03 send(2): ssh.channelCloseMsg{PeersID:0x0}
2019/06/05 08:17:03 send(2): ssh.channelCloseMsg{PeersID:0x0}
-------------> log of error from closing session <----------------
failed to close session: EOF
```
### What did you expect to see?
I would expect to not get an EOF after calling `Run()` on a session, and for `Wait()` to be idempotent / to return immediately if there is nothing to wait upon. I would also expect that EOF would not get sent by `Run()`, and that I would not get an error closing a session when nothing else has tried to close it.
### What did you see instead?
See above.
I will work on a small reproducer locally, but I hope that the above is enough context / information for me to get some answers, as I feel like this is moreso me not understanding how the library is supposed to be used.
| NeedsInvestigation | low | Critical |
452,667,754 | TypeScript | expando fields not added to symbol exports | **TypeScript Version:** 3.4.0, 3.5.0, master (7dc1f40dc15132ba87a70a3bec6d63317cb5b91e)
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:**
expando, iife, umd, exports
**Code**
Given the following code, similar to an UMD file:
```javascript
(function(){
  var A = (function () {
    function A() {}
    return A;
  }());
  A.expando = true;
}());
```
**Expected behavior:**
The `ts.Symbol` of the outer `A` should have `expando` in it.
**Actual behavior:**
The `ts.Symbol` of the outer `A` does not have `expando` in it. When the declaration of `A` is at the top-level, without the "UMD wrapper", it works properly:
```javascript
var A = (function () {
  function A() {}
  return A;
}());
A.expando = true;
```
I have researched what is preventing the `expando` static prop from being added to exports, and it comes down to where a `ts.Symbol` is considered for expando properties, here:
https://github.com/microsoft/TypeScript/blob/3d2af9ff332fca6c5db2390be0b1f08bba8402a1/src/compiler/binder.ts#L2649-L2678
Specifically, the symbol flags of `A` inside the UMD wrapper are not sufficient to take the early return in the first statement, whereas `A` at the top level has been assigned the appropriate flags.
Because the appropriate flags are not present, the code structure is analyzed to determine if the symbol should be classified as expando. Specifically, the relevant code is in `getExpandoInitializer`:
https://github.com/microsoft/TypeScript/blob/3d2af9ff332fca6c5db2390be0b1f08bba8402a1/src/compiler/utilities.ts#L1873-L1896
From this function it becomes clear why the symbol fails to be recognized as an expando symbol: the IIFE syntax used deviates from the syntax that is accounted for in `getExpandoInitializer`. When changing the sample to the following, it does indeed work as expected:
```javascript
(function(){
  var A = (function () {
    function A() {}
    return A;
  })(); // <-- The difference is here
  A.expando = true;
}());
```
Unfortunately however, downleveled ES5 code does use the syntax that is _not_ accounted for.
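At runtime the two spellings are equivalent (only the shape of the parse tree differs), which is what makes the binder's structural check easy to trip over. A quick sanity check, my own sketch:

```typescript
// Both IIFE invocation styles produce the same runtime value; the binder's
// getExpandoInitializer distinguishes them purely by syntax-tree shape.
const A1: any = (function () { function A() {} return A; }()); // `}())` style: not recognized
const A2: any = (function () { function A() {} return A; })(); // `})()` style: recognized
A1.expando = true;
A2.expando = true;
console.log(A1.expando === A2.expando); // true
```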
**Playground Link:** <!-- A link to a TypeScript Playground "Share" link which demonstrates this behavior -->
Can't be replicated in the playground, but copying the code into https://ts-ast-viewer.com, setting the script kind to JS and inspecting the Symbol of the outer `A` declaration shows that its `exports` are empty. The alternative IIFE syntax does have the proper `exports`.
**Related Issues:** <!-- Did you find other bugs that looked similar? -->
n/a
**Background info**
This is an issue for Angular's Compatibility Compiler, which uses the TypeScript compiler to parse JavaScript bundles in various formats and uses the symbol information to reason about the code. PR https://github.com/angular/angular/pull/30795 now contains a hack to patch TS's `getExpandoInitializer`, which does indeed resolve the issue.
| Needs Investigation | low | Critical |
452,738,479 | go | proposal: net/url: add FromFilePath and ToFilePath | In the course of investigating #27698, I learned that the conversion between URLs and file paths on Windows is rather non-trivial. (See also #6027.)
According to https://blogs.msdn.microsoft.com/ie/2006/12/06/file-uris-in-windows/, a file path beginning with a UNC prefix should use the UNC hostname as the “host” part of the URL, with only two leading slashes, whereas a file path beginning with a drive letter should omit the “host” part and prepend a slash to the remainder of the path.
Once you know those rules, the implementation in each direction is only about ten lines of code, but it's easy to get wrong (for example, by neglecting the possibility of a UNC prefix) if you haven't thought about it in depth.
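To make the two rules concrete, here is a language-agnostic sketch (in Python, illustrative only, and deliberately omitting percent-encoding and validation) of the UNC-versus-drive-letter split described above; the function name is hypothetical:

```python
def windows_path_to_file_url(path):
    # UNC paths keep their hostname as the URL host; drive-letter paths
    # get an empty host and a leading slash. Percent-encoding is omitted.
    p = path.replace("\\", "/")
    if p.startswith("//"):           # \\host\share\file
        return "file:" + p           # -> file://host/share/file
    return "file:///" + p            # C:\dir\file -> file:///C:/dir/file

print(windows_path_to_file_url(r"\\laptop\My Documents\file.doc"))
# file://laptop/My Documents/file.doc
print(windows_path_to_file_url(r"C:\dir\file.txt"))
# file:///C:/dir/file.txt
```

The easy-to-miss case is exactly the first branch: without it, a UNC path would be emitted with a spurious empty host and three leading slashes.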
I propose that we add two functions to the `net/url` package to make these cases more discoverable, with the following signatures:
```go
// ToFilePath converts a URL with scheme "file" to an absolute file path.
// It returns a non-nil error if the URL does not have the scheme "file" or
// the resulting path is not well-formed.
func ToFilePath(u *URL) (string, error) {
[…]
}
// FromFilePath converts an absolute file path to a URL.
func FromFilePath(path string) (*URL, error) {
[…]
}
```
CC @alexbrainman | Proposal,FeatureRequest,Proposal-Hold | medium | Critical |
452,775,229 | pytorch | [JIT] kwarg with default doesn't work for class instantiation | ```
import torch
from typing import List, Optional, Dict
@torch.jit.script
class Series(object):
def __init__(self, floats : Optional[List[float]] = None, ints : Optional[List[int]] = None):
self.floats = torch.jit.annotate(Optional[List[float]], floats)
self.ints = torch.jit.annotate(Optional[List[int]], ints)
@torch.jit.script
def foo(df : Dict[str, Series]) -> Dict[str, Series]:
df['bar'] = Series(ints=[5, 6])
return df
print(foo.code)
print(foo({'col1': Series(floats=[3.0, 4.0]), 'col2': Series(ints=[3, 4])}))
```
```
RuntimeError:
for operator __init__(ClassType<Series> self, float[]? floats, int[]? ints) -> None:
argument floats not provided.
at dataframe.py:13:17
@torch.jit.script
def foo(df : Dict[str, Series]) -> Dict[str, Series]:
df['bar'] = Series(ints=[5, 6])
~~~~~~ <--- HERE
return df
:
at dataframe.py:13:17
@torch.jit.script
def foo(df : Dict[str, Series]) -> Dict[str, Series]:
df['bar'] = Series(ints=[5, 6])
~~~~~~ <--- HERE
return df
```
cc @suo | oncall: jit,triaged,jit-backlog | low | Critical |
452,797,776 | TypeScript | Type Guard Issue with Array.prototype.fill, Array Constructor | ## Type Guard Issue with Array.prototype.fill, Array Constructor
**TypeScript Version:** 3.5.1
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:**
- Array.prototype.fill
- fill
- ArrayConstructor
- array implicit any
**Code**
```ts
const foo: number[] = new Array(3).fill("foo"); // accepted
```
**Actual behavior:**
The code above is accepted because `new Array(3)` returns `any[]`, and so `.fill("foo")` also returns `any[]`.
I know that explicitly typing the array, as in
```ts
const foo: number[] = new Array<number>(3).fill("foo"); // error
```
would work, but I believe the compiler should reject the first one. (This is TypeScript.)
**Expected behavior:**
**Option A. Array.prototype.fill returns narrow type**
Replace the declaration of `fill` in `lib.es2015.core.d.ts` with something like `fill<S extends T>(value: S, start?: number, end?: number): S[];`, then:
```ts
const foo: number[] = new Array(3).fill("foo");
// error: "Type 'string[]' is not assignable to type 'number[]'."
// because `fill` is resolved as `fill<string>(value: string, .....): string[]`
const bar = new Array(3).fill(0); // `bar` is resolved as `number[]`
bar.fill("bar"); // error: "Type '"bar"' is not assignable to type 'number'."
const baz: (number|string)[] = new Array(3).fill(0); // accepted.
baz.fill("baz"); // accepted.
```
**Option B. The Array constructor never returns implicit `any`**
The problem is that the Array constructor implicitly returns `any[]`. The first code sample is accepted even with `--strict` or any other options, which means we always have to keep in mind that the Array constructor returns `any[]`.
Something like #26188 could solve this issue.
**Playground Link:** <!-- A link to a TypeScript Playground "Share" link which demonstrates this behavior -->
https://www.typescriptlang.org/play/#src=const%20foo%3A%20number%5B%5D%20%3D%20new%20Array(3).fill(%22foo%22)%3B
**Related Issues:** <!-- Did you find other bugs that looked similar? -->
#29604 | Suggestion,Awaiting More Feedback | medium | Critical |
452,872,295 | youtube-dl | Site Support Request: abema.tv |
## Checklist
- [x] I'm reporting a new site support request
- [x] I've verified that I'm running youtube-dl version **2019.05.20**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that none of provided URLs violate any copyrights
- [x] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
- Live: https://abema.tv/now-on-air/abema-news
- video: https://abema.tv/video
## Description
It does not require a login.
It is a famous Japanese live-streaming site.
| site-support-request | low | Critical |
452,879,318 | vscode | [json] Override/disable json-schema for package.json | I would like to override the default JSON schema used by VSCode on some files with my own JSON schema (example: VSCode uses a JSON schema for Node's "package.json", but I want to be able to override it), like when configuring any other JSON schema in settings.json:
```
//settings.json - currently this doesn't work and VSCode keeps applying the Node package.json JSON schema
"json.schemas": [{
"fileMatch": ["*/package.json"],
"url": "./schemas/package.json"
}]
```
The reason is: I'm working on a project where there's a file called "package.json" which is not related to Node: it's something completely different and internal to the project, and it's not even in the root directory. I can't rename it, and VSCode wrongly uses the Node package.json schema, so everything becomes an error. | feature-request,json | medium | Critical |
452,886,059 | TypeScript | Add a code action to implement (overwritten) method from base class | ## Search Terms
implement, inherited, method
## Suggestion
When implementing a method that is defined in a class's base class or in external type declarations, it would be helpful if we didn't have to type out the method signature that is already known to TypeScript.
## Use Cases
Overriding methods inherited from a base class. Here's an example of implementing a custom `winston` transport.

TypeScript knows the method signature, but I have to type it out manually, potentially making a mistake along the way.
## Examples
```ts
import * as Transport from "winston-transport";
export class SpyTransport extends Transport {
log| // cursor here!
}
```
VSCode should offer a code action _implement method "log" from base class "Transport"_. After invoking it, the code should look like this:
```ts
import * as Transport from "winston-transport";
export class SpyTransport extends Transport {
public log(info: any, next: () => void): any {
throw new Error("not implemented!");
}
}
```
For reference, the type declarations look as follows:
```ts
declare class TransportStream extends stream.Writable {
public format?: logform.Format;
public level?: string;
public silent?: boolean;
public handleExceptions?: boolean;
constructor(opts?: TransportStream.TransportStreamOptions);
public log?(info: any, next: () => void): any;
public logv?(info: any, next: () => void): any;
public close?(): void;
}
```
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,In Discussion | low | Critical |
452,886,260 | scrcpy | screen not update randomly on osx | The screen just stops updating after some time, and then recovers randomly as well. Keyboard and mouse events still work. | display,sdl | medium | Major |
452,893,499 | godot | Avoid performance discrepancies in the running project depending on the currently opened scene and tab in the editor |
**Godot version:**
Godot Engine 3.1.1
**OS/device including version:**
Windows 7 x64bit
8 GB Ram
GTX 980
**Issue description:**
Godot Editor can't handle complex level
as you can see in this video :
https://youtu.be/gWLli5CBhCQ
Here is a small level for our game. Almost everything in this level is a scripted scene: trees are scenes, rocks are scenes, floors are scenes, etc. This gives us full control over everything, for example making the trees destructible, or having light poles turn their lights off when they get hit.
____
1. Running the game while the editor is in 3D mode, with the same instance-heavy level open: the FPS is about 160.
2. Running the game while the editor is in Script mode, with the same instance-heavy level open: the FPS is about 193.
3. Running the game after creating an empty scene and switching the editor to Script mode: the FPS is about 243.
Additionally, the Godot editor gets laggy after running the game several times, making the FPS drop more and more. In that case I need to export the game, close Godot, and run the exported build to get the full FPS (about 260 FPS).
| enhancement,discussion,topic:editor | low | Major |
452,924,106 | TypeScript | Incorrect union type inference for conjunction with strictNullChecks disabled |
**TypeScript Version:** 3.4.5
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:** union type inference conjunction strictNullChecks
**Code**
```ts
function fn(x: number) {
return x && 'a'
}
```
**Expected behavior:**
Return type of `fn` should be `0 | 'a'`
**Actual behavior:**
Return type of `fn` is `'' | 'a'` if `strictNullChecks` is disabled
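For context (an illustration, not part of the original report): at runtime JavaScript's `&&` yields its first operand when that operand is falsy, so `fn(0)` evaluates to `0`, a value the inferred `'' | 'a'` cannot represent. Python's `and` has the same short-circuit semantics, so the runtime behavior can be sketched with a stdlib-only parallel:

```python
def fn(x):
    # mirrors the TypeScript `return x && 'a'`: `and` yields the first
    # operand if it is falsy, otherwise the second operand
    return x and 'a'

print(fn(0))  # 0 -- so the inferred return type should include 0, not ''
print(fn(3))  # a
```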
This is causing me an issue with this specific bit of code, where I'm checking `window` so that my code runs both in the browser and on the server:
```ts
function fn(x: number): OrientationType | undefined {
return window && (window.outerWidth > window.outerHeight ? 'landscape-primary' : 'portrait-primary')
}
```
I can replace the conjunction with a conditional expression, and I know that if `strictNullChecks` is disabled I should have a fallback for the return value, but I'm posting here to try to understand whether this is working as intended. It also feels weird that my example compiles with `strictNullChecks` enabled but doesn't compile when it's disabled.
**Playground Link:** https://www.typescriptlang.org/play/#src=function%20fn(x%3A%20number)%3A%20OrientationType%20%7C%20undefined%20%7B%0D%0A%20%20return%20window%20%26%26%20(window.outerWidth%20%3E%20window.outerHeight%20%3F%20'landscape-primary'%20%3A%20'portrait-primary')%0D%0A%7D
**Related Issues:** <!-- Did you find other bugs that looked similar? -->
| Bug | low | Critical |
452,940,305 | pytorch | [JIT] Memory Leak during tracing? | Hi,
I am trying to run [pytorch-pretrained-BERT](https://github.com/huggingface/pytorch-pretrained-BERT) through the JIT using the tracing API. I ran the example [run_squad.py](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_squad.py) without any changes with the following command and it worked without any issues.
```
CUDA_VISIBLE_DEVICES="0" python run_squad.py \
--bert_model bert-large-uncased \
--fp16 \
--do_train \
--do_lower_case \
--train_file $SQUAD_DIR/train-v1.1.json \
--predict_file $SQUAD_DIR/dev-v1.1.json \
--train_batch_size 6 \
--learning_rate 3e-5 \
--num_train_epochs 2.0 \
--max_seq_length 512 \
--doc_stride 128 \
--output_dir /tmp/debug_squad/
```
To run the script with the JIT, I changed the following lines
```
model.train()
for _ in trange(int(args.num_train_epochs), desc="Epoch"):
for step, batch in enumerate(tqdm(train_dataloader, desc="Iteration", disable=args.local_rank not in [-1, 0])):
if n_gpu == 1:
batch = tuple(t.to(device) for t in batch) # multi-gpu does scattering it-self
input_ids, input_mask, segment_ids, start_positions, end_positions = batch
loss = model(input_ids, segment_ids, input_mask, start_positions, end_positions)
```
to be
```
model.train()
traced = False
for _ in trange(int(args.num_train_epochs), desc="Epoch"):
for step, batch in enumerate(tqdm(train_dataloader, desc="Iteration", disable=args.local_rank not in [-1, 0])):
if n_gpu == 1:
batch = tuple(t.to(device) for t in batch) # multi-gpu does scattering it-self
input_ids, input_mask, segment_ids, start_positions, end_positions = batch
if not traced:
model = torch.jit.trace(model, (input_ids, segment_ids, input_mask, start_positions, end_positions), check_trace=False)
traced = True
logger.info("Tracing complete")
loss = model(input_ids, segment_ids, input_mask, start_positions, end_positions)
```
I also disabled the FusedLayerNorm [here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py#L231) to make it run with the tracing.
I ran the modified script with the same command, but I got a CUDA OOM Error.
Error Log: [log](https://gist.github.com/chughtapan/590af6f078143436c10817b690b8b028)
Since the unmodified code was running perfectly, the traced module should also run within the available GPU memory. Am I doing something wrong?
## Environment
PyTorch version: 1.1.0
Is debug build: Yes
CUDA used to build PyTorch: 10.0.130
OS: Ubuntu 16.04.6 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609
CMake version: version 3.14.0
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 10.0.130
GPU models and configuration:
GPU 0: Tesla V100-PCIE-16GB
GPU 1: Tesla V100-PCIE-16GB
Nvidia driver version: 418.67
cuDNN version: /usr/local/cuda-10.0/targets/x86_64-linux/lib/libcudnn.so.7.4.2
Versions of relevant libraries:
[pip] numpy==1.16.3
[pip] pytorch-pretrained-bert==0.6.2
[pip] torch==1.1.0
[conda] blas 1.0 mkl
[conda] magma-cuda100 2.5.0 1 pytorch
[conda] mkl 2019.3 199
[conda] mkl-include 2019.3 199
[conda] mkl_fft 1.0.12 py36ha843d7b_0
[conda] mkl_random 1.0.2 py36hd81dba3_0
[conda] pytorch-pretrained-bert 0.6.2 <pip>
[conda] torch 1.1.0 <pip>
Thanks,
Tapan | oncall: jit,triaged | low | Critical |
452,959,175 | rust | Consider changing the way unevaluated constants are printed | cc https://github.com/rust-lang/rust/pull/60742#discussion_r288742080
Right now when we encounter a `ConstValue::Unevaluated` during pretty printing, we fall back to printing the constant's source code or `_` if that also fails. Alternative options are:
* try to eagerly evaluate (and fall back to one of the following)
* always print `_`
* print something like `{{unevaluated: {}}}` where `{}` is the source code
* print just `{unevaluated}` (seems useless for users, so maybe not a good idea) | C-enhancement,A-diagnostics,T-compiler,A-const-eval,A-const-generics | low | Minor |
452,985,037 | pytorch | downsampling with grid_sample doesn't match interpolate | I'm taking this discussion from https://github.com/pytorch/pytorch/issues/20785#issuecomment-495186085 as it is unrelated to `grid_sample` not being aligned.
---
There is also another point that I'd like to make about `grid_sample`, which is that even after we make this change to make it aligned, it won't match 1:1 with `interpolate` in some cases.
Indeed, if we use `grid_sample` to downsample an image using bilinear interpolation, it will always take the 4 closest pixels that correspond to the neighbors in the image space. This means that for large downsampling factors, this will make the bilinear interpolation look almost like a nearest neighbor interpolation.
Here is where this is defined
https://github.com/pytorch/pytorch/blob/90182a7332997fb0edf666abc4b554b83a1670d1/aten/src/ATen/native/cuda/GridSampler.cu#L181-L186
We might potentially want to replace the +1 with the corresponding image offset after a lookup in the coordinates of the next elements in the grid, to have smoother interpolation. But I haven't thought about it throughly, and we might be overfitting to a particular case. In particular, I don't know how what I proposed just above would behave with grids having holes.
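The aliasing effect described above can be illustrated without PyTorch. The following stdlib-only 1-D sketch (my own illustration, not code from the issue) compares linear sampling, which reads only the two nearest inputs per output (the 1-D analogue of bilinear's four), against area averaging, under a 100x downsampling of an alternating signal:

```python
import math

def linear_sample(signal, x):
    """1-D analogue of bilinear grid_sample: interpolate between the two
    nearest input samples, no matter how far apart the output samples are."""
    x0 = int(math.floor(x))
    x1 = min(x0 + 1, len(signal) - 1)
    w = x - x0
    return (1 - w) * signal[x0] + w * signal[x1]

# Length-1000 alternating signal (+1, -1, +1, ...), downsampled 100x.
signal = [(-1) ** i for i in range(1000)]

# grid_sample-style: each output reads only 2 inputs -> heavy aliasing.
sampled = [linear_sample(signal, i * 100.0) for i in range(10)]

# interpolate(mode='area')-style: average the 100 inputs each output covers.
pooled = [sum(signal[i * 100:(i + 1) * 100]) / 100.0 for i in range(10)]

print(sampled)  # all 1.0: effectively nearest-neighbor picking
print(pooled)   # all 0.0: the true local mean
```

The sampled version lands on even indices and reports a constant +1, while the mean of each 100-sample window is actually 0, which is the "bilinear looks like nearest neighbor" behavior at large downsampling factors.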
---
Comment from @bnehoran from https://github.com/pytorch/pytorch/issues/20785#issuecomment-499312073 :
I am not sure I understand how what you are suggesting would work for a more general grid.
First of all, if I understand what you are suggesting, you are proposing to allow grid_sample to also do the job of upsampling and/or downsampling tensors, right?
Just to avoid confusion, as you mention, this is different than the point made in this issue, which is more about using the two functionalities in conjunction with one another (that is, using grid_sample on a tensor that has been upsampled or downsampled using interpolate), but incidentally, it seems that these changes should also cover the case of using gird_sample to upsample/downsample, for the most part.
I agree that calling grid_sample on a tensor using an identity grid of larger size than the tensor should have the same effect as upsampling it using interpolate with bi/tri/linear modes, and I think these changes should do that. Similarly, attempting to downsample a tensor by using grid_sample with an identity grid that is smaller than the tensor, would bi/tri/linearly interpolate between the nearest whole pixels (note: rather than average pooling over the nearby area), which I believe should also be equivalent to the bi/tri/linear modes of interpolate. I think this equivalence is what you meant to point out, right?
The nearest modes should similarly become equivalent.
There would just be no equivalent to the area mode of interpolate. However, it's not immediately clear to me how such a mode should really behave for grid_sample. If the identity grid is evenly spaced, then it is all well, since the rectangular areas that each grid pixel covers are well defined. However, as soon as the grid starts deviating significantly from the identity (for example, in an extreme case, suppose that multiple grid points land on the same location of the sampled tensor), how do you then define the "area" covered by each grid point?
Maybe there does exist some way to define it that is both natural and consistent, but at the very least, it seems non-trivial.
cc @fmassa @vfdev-5 | triaged,module: vision,module: interpolation | low | Major |
452,998,285 | pytorch | RuntimeError: cublas runtime error | RuntimeError: cublas runtime error : the GPU program failed to execute at /pytorch/aten/src/THC/THCBlas.cu:411
My GPU is a 2080 Ti, with CUDA 10.0.
How can I solve this problem? Thank you. | triaged,module: cublas | low | Critical |
453,029,949 | terminal | [Spec Proposal] Changes to Colortool commandline switches | In my opinion, the current command line switches are a bit confusing since there are multiple ways to have it not do the default action of writing the color scheme to the current console (I think -x is a little different from that, at least based on my reading). I think it would be better for there to be no default behavior, each switch should do its thing and nothing more, and the user can combine switches to combine behaviors. (i.e. remove the -b switch, add a switch that applies to the current console, then change the code so -x and -t aren't mutually exclusive with -d and the new switch for current console).
_Originally posted by @DJackman123 in https://github.com/microsoft/terminal/pull/1052#issuecomment-499147982_ | Issue-Feature,Product-Colortool,Needs-Tag-Fix | low | Major |
453,045,934 | pytorch | Slow convolution with large kernels, should be using FFT | ## 🐛 Bug
When using `Conv1d` with a large kernel size (1024 for instance) on gpu, the cudnn implementation is very slow and gets slower as I increase the kernel size. I thought it was using FFT but apparently not. If it were using FFT, the computation time should be independent of the kernel size, because the kernel is anyway padded to the length of the input.
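As a back-of-the-envelope check (my own illustration, single input/channel pair, constants ignored), the operation counts make the expected scaling explicit: a direct stride-1 convolution costs roughly `(L - K + 1) * K` multiply-adds, while the FFT route costs on the order of a few length-`L` transforms plus a pointwise product, independent of `K`:

```python
import math

L = 64000                       # input length used in the benchmark below
for K in (256, 1024, 2048):     # kernel sizes tried in the benchmark
    direct = (L - K + 1) * K            # multiply-adds for direct conv, stride 1
    fft = 3 * L * math.log2(L) + L      # ~3 length-L FFTs plus a pointwise product
    print(f"K={K}: direct~{direct:.2e}, fft~{fft:.2e}")
```

The direct cost grows roughly linearly with `K`, while the FFT cost does not depend on it, matching the measurements below where the default implementation roughly doubles from kernel size 1024 to 2048 while the FFT version stays flat at ~160 ms.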
I have tried benchmarking with both `torch.backends.cudnn.benchmark` set to `False` and `True`. My implementation using the FFT is significantly faster especially when using a stride of 1. You will find hereafter the code both for the FFT based convolution implementation I use and the profiling. My implementation is within ~5e-5 of the reference implementation for random weights and input. For a kernel size of 1024, with 64 channels, a stride of 1 and an input of length 64000, the default implementation is about 20 times slower than the FFT based one. When using a kernel size of 2048, it is 40 times slower.
## To Reproduce
Steps to reproduce the behavior:
1. Copy the code below in `profile_conv.py`
2. Run `python3 profile_conv.py`
```python
import time
from functools import partial
from itertools import product
import torch
from torch import nn
from torch.nn import functional as F
def compl_mul(a, b):
"""
Given a and b two tensors of dimension 4
with the last dimension being the real and imaginary part,
returns a multiplied by the conjugate of b, the multiplication
being with respect to the second dimension.
"""
op = partial(torch.einsum, "bct,dct->bdt")
return torch.stack([
op(a[..., 0], b[..., 0]) + op(a[..., 1], b[..., 1]),
op(a[..., 1], b[..., 0]) - op(a[..., 0], b[..., 1])
],
dim=-1)
class FastConv(nn.Module):
"""
    Convolution based on FFT; faster for large kernels and small strides.
"""
def __init__(self,
in_channels,
out_channels,
kernel_size,
stride=1,
bias=True):
super().__init__()
if bias:
self.bias = nn.Parameter(torch.zeros(out_channels, 1))
else:
self.bias = None
self.weight = nn.Parameter(
torch.zeros(out_channels, in_channels, kernel_size))
self.kernel_size = kernel_size
self.stride = stride
def forward(self, signal):
padded = F.pad(self.weight,
(0, signal.size(-1) - self.weight.size(-1)))
signal_fr = torch.rfft(signal, 1)
weight_fr = torch.rfft(padded, 1)
output_fr = compl_mul(signal_fr, weight_fr)
output = torch.irfft(output_fr, 1, signal_sizes=(signal.size(-1), ))
output = output[..., ::self.stride]
target_length = (signal.size(-1) - self.kernel_size) // self.stride + 1
output = output[..., :target_length].contiguous()
if self.bias is not None:
output += self.bias
return output
def profile(module, *args, repetitions=10, warmup=1):
"""
Given a module and args, apply repeatedly the module to the args,
calling `torch.cuda.synchronize()` in between. Return the time per
repetition. Not perfect profiling but gives a rough idea.
"""
module(*args)
begin = time.time()
for _ in range(repetitions):
module(*args)
torch.cuda.synchronize()
return (time.time() - begin) / repetitions
def human_seconds(seconds, display='.2f'):
"""
Human readable string from a number of seconds.
"""
value = seconds * 1e6
ratios = [1e3, 1e3, 60, 60, 24]
names = ['us', 'ms', 's', 'min', 'hrs', 'days']
last = names.pop(0)
for name, ratio in zip(names, ratios):
if value / ratio < 0.3:
break
value /= ratio
last = name
return f"{format(value, display)} {last}"
def test_one(kernel_size=1024,
channels=8,
batch_size=16,
length=16000 * 5,
stride=1):
print(f"Benchmark for kernel_size={kernel_size} "
f"stride={stride} channels={channels}")
device = "cuda"
signal = torch.randn(batch_size, channels, length, device=device)
conv = nn.Conv1d(
channels, channels, kernel_size, stride=stride, bias=False).to(device)
fft_conv = FastConv(
channels, channels, kernel_size, stride=stride, bias=False).to(device)
fft_conv.weight = conv.weight
conv_output = conv(signal)
fft_output = fft_conv(signal)
error = torch.abs(conv_output - fft_output)
print("\tMean error={:.2g}, max error={:.2g}".format(
error.mean(), error.max()))
torch.backends.cudnn.benchmark = False
print("\tCudnn benchmark = False: {}".format(
human_seconds(profile(conv, signal))))
torch.backends.cudnn.benchmark = True
print("\tCudnn benchmark = True: {}".format(
human_seconds(profile(conv, signal))))
print("\tFFT Conv: {}".format(human_seconds(profile(fft_conv, signal))))
def test():
print("torch.backends.cudnn.is_available(): ",
torch.backends.cudnn.is_available(),
"\ntorch.backends.cudnn.version(): ", torch.backends.cudnn.version())
grid = product([256, 1024, 2048], [64], [1, 16])
for kernel_size, channels, stride in grid:
test_one(kernel_size=kernel_size, channels=channels, stride=stride)
if __name__ == "__main__":
test()
```
Output from the above code in my environment:
```
torch.backends.cudnn.is_available(): True
torch.backends.cudnn.version(): 7501
Benchmark for kernel_size=256 stride=1 channels=64
Mean error=9.5e-07, max error=2.5e-05
Cudnn benchmark = False: 0.33 s
Cudnn benchmark = True: 0.33 s
FFT Conv: 160.82 ms
Benchmark for kernel_size=256 stride=16 channels=64
Mean error=9.5e-07, max error=1.8e-05
Cudnn benchmark = False: 23.70 ms
Cudnn benchmark = True: 23.12 ms
FFT Conv: 160.16 ms
Benchmark for kernel_size=1024 stride=1 channels=64
Mean error=1.9e-06, max error=4.1e-05
Cudnn benchmark = False: 3.28 s
Cudnn benchmark = True: 3.31 s
FFT Conv: 160.77 ms
Benchmark for kernel_size=1024 stride=16 channels=64
Mean error=1.9e-06, max error=3.5e-05
Cudnn benchmark = False: 213.10 ms
Cudnn benchmark = True: 213.29 ms
FFT Conv: 158.74 ms
Benchmark for kernel_size=2048 stride=1 channels=64
Mean error=2.6e-06, max error=6e-05
Cudnn benchmark = False: 6.68 s
Cudnn benchmark = True: 6.68 s
FFT Conv: 160.56 ms
Benchmark for kernel_size=2048 stride=16 channels=64
Mean error=2.6e-06, max error=4.8e-05
Cudnn benchmark = False: 0.43 s
Cudnn benchmark = True: 0.43 s
FFT Conv: 160.41 ms
```
## Expected behavior
When using a stride of 1 and large kernel size, the FFT implementation is much faster than the default one. The FFT one takes 160ms whatever the size of the kernel, versus 3.3 seconds (resp 6.7) for the default one with a kernel size of 1024 (resp 2048). For large strides, the cudnn implementation is competitive or faster as expected (the FFT only has an interest if we want the convolution for all positions).
I would expect cudnn to provide a fast implementation for large kernels with low stride, which can be especially useful in audio (filters implementation). When talking about this around me, most people were surprised as it has been announced that an FFT based implementation was added to cudnn.
## Environment
Collecting environment information...
PyTorch version: 1.1.0
Is debug build: No
CUDA used to build PyTorch: 10.0.130
OS: Ubuntu 18.04.1 LTS
GCC version: (Ubuntu 7.3.0-27ubuntu1~18.04) 7.3.0
CMake version: version 3.13.4
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: Quadro GP100
GPU 1: Quadro GP100
Nvidia driver version: 410.79
cuDNN version: Could not collect
Versions of relevant libraries:
[pip3] numpy==1.15.4
[pip3] torch==1.1.0
[pip3] torchvision==0.2.2
[conda] blas 1.0 mkl
[conda] mkl 2018.0.3 1
[conda] mkl_fft 1.0.6 py37h7dd41cf_0
[conda] mkl_random 1.0.1 py37h4414c95_1
[conda] pytorch 1.1.0 py3.7_cuda10.0.130_cudnn7.5.1_0 pytorch
[conda] torchvision 0.2.2 py_3 pytorch
cc @csarofeen @ptrblck @mruberry @peterbell10 @VitalyFedyunin @ngimel | module: performance,module: cudnn,module: convolution,triaged,module: fft | low | Critical |
453,094,862 | flutter | google_sign_in throws an Unknown PlatformException after token expires | ## Steps to Reproduce
1. Create a simple Flutter project with `google_sign_in` but without `firebase_auth` (or any Firebase dependency).
2. Authenticate with a Google account.
3. Try accessing the `authentication` via `await <GoogleSignInAccount>.authentication` and read the token.
4. Keep the application open for over an hour (until the token expires).
5. Try to again `await <GoogleSignInAccount>.authentication`: this throws a `PlatformException(exception, Unknown, null)`.
## Small code snippet
_Note: `auth_service` is just a wrapper that exposes a `GoogleSignIn` instance._
```
Future<void> tryPlatform() async {
var account = auth_service.google.currentUser;
if (account == null) throw Exception('No account.');
final authentication = await account.authentication;
if (authentication == null) throw Exception('No authentication.');
final response = await http.get(
'https://www.googleapis.com/oauth2/v3/tokeninfo?id_token=${authentication.idToken}');
if (response.statusCode != 200) throw Exception('Failed check.');
final data = convert.jsonDecode(response.body) as Map<String, dynamic>;
if (data == null || !data.containsKey('exp'))
throw Exception('Failed decoding.');
print(DateTime.fromMillisecondsSinceEpoch(int.parse(data['exp']) * 1000));
}
```
## Screenshot
_Note: I tried to call the function three times before the token expired and once after it did. This is the result._
<img width="732" alt="image" src="https://user-images.githubusercontent.com/16031715/59045058-9f849080-887f-11e9-8fc7-413043ff021b.png">
## Flutter Doctor
```
[✓] Flutter (Channel stable, v1.5.4-hotfix.2, on Mac OS X 10.14.5 18F132, locale en-US)
• Flutter version 1.5.4-hotfix.2 at /Users/emilio/Library/Flutter
• Framework revision 7a4c33425d (5 weeks ago), 2019-04-29 11:05:24 -0700
• Engine revision 52c7a1e849
• Dart version 2.3.0 (build 2.3.0-dev.0.5 a1668566e5)
[✓] Android toolchain - develop for Android devices (Android SDK version 28.0.3)
• Android SDK at /Users/emilio/Library/Android/sdk
• Android NDK location not configured (optional; useful for native profiling support)
• Platform android-28, build-tools 28.0.3
• Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1343-b01)
• All Android licenses accepted.
[✓] iOS toolchain - develop for iOS devices (Xcode 10.2.1)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Xcode 10.2.1, Build version 10E1001
• ios-deploy 1.9.4
• CocoaPods version 1.6.1
[✓] Android Studio (version 3.4)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin version 36.0.1
• Dart plugin version 183.6270
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1343-b01)
[!] VS Code (version 1.34.0)
• VS Code at /Applications/Visual Studio Code.app/Contents
✗ Flutter extension not installed; install from
https://marketplace.visualstudio.com/items?itemName=Dart-Code.flutter
[✓] Connected device (2 available)
• Android SDK built for x86 • emulator-5554 • android-x86 • Android 9 (API 28) (emulator)
``` | c: crash,platform-android,customer: crowd,p: google_sign_in,package,P2,team-android,triaged-android | low | Critical |
453,114,993 | flutter | Automatically set Dart SDK constraints when using flutter create | Currently we hardcode the Dart SDK constraints in our `flutter create` templates, e.g.:
https://github.com/flutter/flutter/blob/master/packages/flutter_tools/templates/app/pubspec.yaml.tmpl#L22
We should automatically make the lower bound be the current Dart version. | tool,dependency: dart,P2,team-tool,triaged-tool | low | Minor |
453,141,411 | vscode | Quickpick loses active item if items are updated | I tried to make a quickpick that only showed the `detail` line for the selected item to save space:
```ts
const quickPick = vs.window.createQuickPick<PickableDevice>();
quickPick.busy = true;
quickPick.items = devices;
quickPick.placeholder = "Select a device to use";
// Show detail line only when selected.
quickPick.onDidChangeActive((active) => {
quickPick.items.forEach((d) => d.detail = undefined);
active.forEach((d) => d.detail = d.expandedDetail);
quickPick.items = [...quickPick.items];
});
```
However, when the `items` collection is replaced (which is required to redraw the list), the active item is reset to the top item in the list. Since the items are the exact same objects in the array, it seems like the active item could persist? | bug,quick-pick | low | Minor |
453,147,346 | terminal | Per project docs | I'd like to see documentation per VS project, similar to that in `doc/ORGANIZATION.md` or to the `abstract` comments on files. It could contain:
- responsibilities
- target platforms (standalone window app / embeded control / server / api)
- used frameworks (`pure c++`/ `.NET Framework` / `.NET Standard` / `c++/cli` / `winapi` / `winrt` / `wpf` / `winforms` / rendering lib)
- its place in the "stack"
- target (native exe / dll / source only)
This would make contributing much easier, but I'm afraid that's subject to frequent changes | Issue-Docs,Product-Meta,Area-CodeHealth | low | Minor |
453,151,230 | TypeScript | React: enforce correct usage of refs | Potentially a use case for #12936
> if you have a problem and you think exact types are the right solution, please describe the original problem here
https://github.com/microsoft/TypeScript/issues/12936#issuecomment-284590083
## Search Terms
react refs exact type
## Suggestion
In React, it's fairly common to use [refs](https://reactjs.org/docs/refs-and-the-dom.html). This involves creating an object of a specific type, and then passing that value as a prop to a component, which will mutate/reassign its `current` property.
Currently it seem there is no way to use refs in a type safe way. They can be accidentally passed to the wrong component, and there will be no type error.
https://stackoverflow.com/questions/56378639/typescript-react-enforce-correct-ref-props
``` tsx
import * as React from 'react';
class Modal extends React.Component<{}> {
close = () => {};
}
declare const modal: Modal;
modal.close();
const modalRef = React.createRef<Modal>();
// Let's try giving this ref to the correct component…
// No error as expected :-)
<Modal ref={modalRef} />;
class SomeOtherComponent extends React.Component<{}> {}
// Let's try giving this ref to the wrong component…
// Expected type error but got none! :-(
<SomeOtherComponent ref={modalRef} />;
```
IIUC, this is because `ref={modalRef}` is equivalent to:
``` ts
declare let ref: { current: {} };
declare let modalRef: { current: { close: () => void } };
ref = modalRef;
```
… which will correctly not error, because `modalRef` is a subtype of the target type `ref`.
However, when we're passing a ref to a component, we want to make sure the `ref` argument value (`modalRef`) is a _supertype_, or exact match, of the parameter type (`ref`).
This is because the component is responsible for mutating the ref's `current` property—if we pass in a ref of the wrong type, it will be mutated to something else. Later, when we try to use the ref's `current` value, the value will not match the type:
``` tsx
// Now when we try to use this ref, TypeScript tells us it's safe to do so.
// But it's not, because the ref has been incorrectly assigned to another component!
if (modalRef.current !== null) {
modalRef.current.close(); // runtime error!
}
```
I am looking for a way to catch these mistakes at compile time, with TypeScript.
Note if the target component is a subtype of `React.Component`, we will get errors as desired:
``` ts
class SomeOtherComponent extends React.Component<{}> {
foo = () => {}
}
// Let's try giving this ref to the wrong component…
// We got an error :-)
<SomeOtherComponent ref={modalRef} />;
```
## Checklist
My suggestion meets these guidelines:
* [ ] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Needs Proposal | low | Critical |
453,164,614 | flutter | [Request] DraggableScrollableSheet should show an AppBar when on full-screen | ## Use case
As stated in [Material Design guidelines](https://material.io/design/components/sheets-bottom.html#standard-bottom-sheet):
> When full-screen, bottom sheets can be internally scrolled to reveal additional content. A toolbar should be used to provide a collapse or close affordance to exit this view.
> Include a close affordance in a full-height modal bottom sheet to dismiss the sheet.
## Proposal
Add a new optional parameter where you can add a custom `AppBar`.
- If none is provided, it will appear as it is right now.
- If provided (and `maxChildSize` is set to `1.0`), when the user drags the `DraggableScrollableSheet` to the top, the provided `AppBar` will appear over the `DraggableScrollableSheet`. When the `AppBar` is shown, the user shouldn't be able to dismiss the `DraggableScrollableSheet` by dragging it to the bottom.
Video showing behavior [here](https://storage.googleapis.com/spec-host-backup/mio-design%2Fassets%2F1mYYvVz_dCIx0bRyY_VV4hOs9709f1ESb%2Fmodal-behavior-dismissal-close.mp4).
cc/ @dnfield | c: new feature,framework,f: material design,f: scrolling,P3,team-design,triaged-design | low | Major |
453,166,542 | pytorch | Not obvious how to install torchvision with PyTorch source build | Previously, it used to be possible to build PyTorch from source and then `pip install torchvision` to get torchvision available. Now that torchvision ships as binary distributions, this no longer works; to make matters worse, it explodes in non-obvious ways.
When I had an existing install of torchvision 0.3.0, I got this error:
```
ImportError: /scratch/ezyang/pytorch-tmp-env/lib/python3.7/site-packages/torchvision/_C.cpython-37m-x86_64-linux-gnu.so: u
ndefined symbol: _ZN3c106Device8validateEv
```
I reinstalled torchvision with `pip install torchvision`. Then I got this error:
```
File "/scratch/ezyang/pytorch-tmp-env/lib/python3.7/site-packages/torchvision/ops/boxes.py", line 2, in <module>
from torchvision import _C
ImportError: libcudart.so.9.0: cannot open shared object file: No such file or directory
```
(I'm on a CUDA 10 system).
In the end, I cloned torchvision and built/installed it from source. | triaged,module: vision | low | Critical |
453,177,712 | bitcoin | listreceivedbyaddress with include_empty not filtering out "send" side of address book | I noticed that if you call listreceivedbyaddress with include_empty set to true, it skips the cross-reference against mapTally. The skip causes the entire address book (including "send"-purpose entries) to be included in the results erroneously. There should be an "if purpose is not 'send', then include results" check.
Code would be inserted here:
https://github.com/bitcoin/bitcoin/blob/master/src/wallet/rpcwallet.cpp#L1122
something like:
```
if(item_it->second.purpose == "send")
continue;
``` | Bug,Wallet | low | Minor |
453,186,922 | vscode | Save Side Bar layouts per Workspace | When I open one of my workspaces, it consists of all short (few-letter) filenames, and then another I open has some very long filenames. Is there currently a way to save the positioning of that column? I switch back and forth all day.
If not, I thought that (simple-sounding) idea might be considered as an addition to the saved settings for the workspace.
Thanks for listening, and for your consideration and/or help.
Note: I tried to find a setting or reference but was unable to. Please forgive me if I missed it.
| feature-request,layout | medium | Critical |
453,202,352 | go | x/tools/gopls: signatureHelp should return position info for signatures | ### What version of Go are you using (`go version`)?
<pre>
$ go version
go version devel +385b2e0cac Fri May 24 21:34:53 2019 +0000 linux/amd64
$ go list -m golang.org/x/tools
golang.org/x/tools v0.0.0-20190521203540-521d6ed310dd
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GO111MODULE="on"
GOARCH="amd64"
GOBIN="/home/myitcv/gostuff/src/github.com/myitcv/govim/cmd/govim/.bin"
GOCACHE="/home/myitcv/.cache/go-build"
GOENV="/home/myitcv/.config/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GONOPROXY=""
GONOSUMDB=""
GOOS="linux"
GOPATH="/home/myitcv/gostuff"
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/home/myitcv/gos"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/home/myitcv/gos/pkg/tool/linux_amd64"
GCCGO="gccgo"
AR="ar"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD="/home/myitcv/gostuff/src/github.com/myitcv/govim/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build579955727=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
The signature help call from client to server (https://github.com/Microsoft/language-server-protocol/blob/gh-pages/specification.md#signature-help-request-leftwards_arrow_with_hook) returns a list of signatures:
```go
type SignatureHelp struct {
Signatures []SignatureInformation `json:"signatures"`
ActiveSignature float64 `json:"activeSignature"`
ActiveParameter float64 `json:"activeParameter"`
}
type SignatureInformation struct {
Label string `json:"label"`
Documentation string `json:"documentation,omitempty"`
Parameters []ParameterInformation `json:"parameters,omitempty"`
}
type ParameterInformation struct {
Label string `json:"label"`
Documentation string `json:"documentation,omitempty"`
}
```
But what's strange is that the signature information does not include position information, i.e. the start (and, where available, the end) of the span over which the call signature is valid: think from the opening `(` to the closing `)`.
This position information is valuable to my mind because it tells the client where to place, for example, a popup window.
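One possible shape for such an extension, reusing the LSP `Range`/`Position` types defined elsewhere in the protocol (the `CallRange` field name here is purely illustrative, not part of the spec):

```go
type SignatureHelp struct {
	Signatures      []SignatureInformation `json:"signatures"`
	ActiveSignature float64                `json:"activeSignature"`
	ActiveParameter float64                `json:"activeParameter"`
	// Proposed: the span of the active call, from the opening "(" to the
	// closing ")", so clients know where to anchor a popup.
	CallRange *Range `json:"callRange,omitempty"`
}
```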
---
cc @stamblerre @ianthehat
| help wanted,FeatureRequest,gopls,Tools | low | Critical |
453,236,586 | material-ui | [Chip] Rename onDelete and deleteIcon | Even the docs show `onDelete` and `deleteIcon` used for purposes other than deleting the Chip. I'd imagine it's common to use them for other purposes.
I propose renaming them to `onSecondaryAction` and `secondaryIcon`, or something similar. The current attributes could be aliases for those so as to not be a breaking change, at least for v4. | discussion,breaking change,component: chip | low | Major |
453,242,146 | flutter | Make sure the add-to-app include scripts can build for the correct CPU architecture | Make sure include_flutter.groovy+flutter.gradle and podhelper.rb+xcode_backend.sh embed can figure out what architecture the outer project is trying to build and build the appropriate architecture | platform-android,tool,a: existing-apps,P2,team-android,triaged-android | low | Minor |
453,285,273 | youtube-dl | Site support request: American Archive for Public Broadcasting | ## Checklist
- [x] I'm reporting a new site support request
- [x] I've verified that I'm running youtube-dl version **2019.05.20**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that none of provided URLs violate any copyrights
- [x] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
Single video: http://americanarchive.org/catalog/cpb-aacip_259-q23qzn4s
Single video: http://americanarchive.org/catalog/cpb-aacip_259-804xkk0v
Single video: http://americanarchive.org/catalog/cpb-aacip_81-48sbchx0
Single video: http://americanarchive.org/catalog/cpb-aacip_37-97kps40n
## Description
Incredibly valuable website run by the Library of Congress and WGBH Boston designed to house archival material by public media organizations across the United States.
| site-support-request | low | Critical |
453,294,223 | flutter | Handle duplicate platform transitive dependencies | Assume an inner Flutter project uses a Firebase plugin that depends on nanopb, the outer platform project also directly depends on nanopb, and another Flutter plugin also depends on nanopb; make sure we can correctly resolve this. | tool,customer: alibaba,a: existing-apps,P3,team-tool,triaged-tool | low | Minor |
453,305,412 | flutter | Interaction between modals, scrolls, and focuses | ## Description
The way Flutter handles the interaction between modals, scrolls and focuses is different from native Android.
Below are two videos recorded on a native Android app:

Highlight of this video:
- When a dialog pops up, the on-screen keyboard
- [ ] **a)** does not disappear
- [ ] **b)** is rendered hidden behind the overlay
- When the dialog disappears, the previously focused widget
- [ ] **c)** keeps its focus
- [ ] **d)** is not scrolled into view

Highlight of this video:
- [ ] **e)** In addition to c), if the keyboard was not on screen before the dialog, resuming focus does not bring it up.
## Related issue
https://github.com/flutter/flutter/pull/33152 tried to make `ModalScope` resume its previous focus when its next route is popped. This PR solved only c), violating everything else, and even broke some tests by violating d).
## Why is this important
**Context menu**. When the user is editing and right-clicks the text to bring up a context menu
- It _requires_ c) so that the text field is not unfocused
- It _prefers_ a), or at least e), so that the screen doesn't go through multiple sudden metric changes.
**Dialog**. In addition to the reasons above,
- It _prefers_ d) so that when the user is editing, scrolls down, and pops up a dialog that's at the bottom of the page, the screen doesn't scroll back to the top when the dialog disappears.
## Suggestion
If we only want to ensure basic support for context menu, it shouldn't be too hard if we add some special management of focus and input connection onto context menu popups, so that a), e), c), and even d) can be fixed for context menu.
In order to fix a) c), d) and e) for all modals including dialogs, we might need to allow focus scopes to manage their own focuses, which will be a much bigger change that might be breaking.
Fixing b) is probably impossible in a single-view Flutter app, and needs to be discussed after Flutter supports easy multi-view. | framework,f: material design,a: fidelity,f: scrolling,P2,team-design,triaged-design | low | Minor |
453,312,055 | flutter | [path_provider] Provide access to asset paths | ## Use case
I have a blob asset with a custom format, and I do not want to read it all at once.
So I tried to use `File.openRead(start, end)`, but there is no API to get the root bundle path.
## Proposal
Can you add an api just like
`Future<Directory> getRootAssetBundleDirectory() async { ... } ` ? | c: new feature,p: path_provider,package,c: proposal,team-ecosystem,P3,triaged-ecosystem | low | Critical |
453,324,949 | TypeScript | Improve TypeScript experience when dependencies are missing. | I was in a project, and forgot that I deleted `node_modules` (or forgot to `npm install`), and in this case I was seeing `any` all over the place, and was jumping myself to conclusions that something was wrong with TypeScript or VS Code.
It could be helpful if VS Code could suggest something like
> Did you forget to install dependencies? Perhaps you should run `npm install`?
or similar. | Suggestion,Awaiting More Feedback | low | Minor |
453,444,081 | vue | Unnecessary renders on parent update when $attrs is bound | ### Version
2.6.10
### Reproduction link
[https://codepen.io/anon/pen/zQVRgG?editors=1010](https://codepen.io/anon/pen/zQVRgG?editors=1010)
### Steps to reproduce
Type something into the first field
Uncomment line 8 or 14 then try again
### What is expected?
In console:
```
Render a
```
### What is actually happening?
In console:
```
Render a
Render b
```
<!-- generated by vue-issues. DO NOT REMOVE --> | improvement | low | Major |
453,478,161 | terminal | Feature Request: Support Font Families instead of individual fonts | # Summary of the new feature/enhancement
<!--
A clear and concise description of what the problem is that the new feature would solve.
Describe why and how a user would use this new functionality (if applicable).
-->
Using more than one font, like the "Font Family" setting in VS Code.
It is a useful feature for CJK users. | Issue-Feature,Area-Rendering,Area-Settings,Product-Terminal | medium | Major |
453,517,517 | pytorch | `attn_mask` in nn.MultiheadAttention is additive | ## 📚 Documentation
It likely should be mentioned that the [`attn_mask`](https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/activation.py#L767) argument of MHA is an additive mask (-inf masks values), rather than the standard multiplicative mask (0 masks values). Perhaps even enforce a value check (all values should be 0 or -inf?) and print a warning otherwise.
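To make the distinction concrete, here is a tiny pure-Python sketch (deliberately avoiding the PyTorch API) showing why `-inf` is the value that masks under an additive convention, while a 0/1-style multiplicative mask passed additively fails to mask:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a plain list of floats.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

scores = [1.0, 2.0, 3.0]

# Additive mask: -inf at masked positions, 0 elsewhere (what attn_mask expects).
additive = [0.0, float("-inf"), 0.0]
masked = softmax([s + m for s, m in zip(scores, additive)])
# masked[1] is exactly 0.0, because exp(-inf) == 0.

# Pitfall: a 0/1 multiplicative-style mask applied additively leaves the
# "masked" score essentially untouched instead of zeroing its weight.
wrong = softmax([s + m for s, m in zip(scores, [1.0, 0.0, 1.0])])
```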
cc @brianjo @mruberry @albanD @jbschlosser @zhangguanheng66 | module: docs,module: nn,triaged | low | Major |
453,642,920 | go | cmd/gofmt: extra indentations when adding comments | I encounter an issue with `gofmt`, where adding a comment to the first line causes extra indentation to the rest of the lines.
For example, given this block of code:
```go
func main() {
t1 := foo.Bar().
Baz().
Qux()
}
```
Adding a comment (`//`) after `Bar()` adds extra indentations to the lines below:
```go
func main() {
t1 := foo.Bar(). //
Baz().
Qux()
}
```
[Playground](https://play.golang.org/p/defYUt_Zgmi)
Same for this case:
```go
func main() {
t1 := foo.Bar(). // bar
Baz(). // baz
Qux() // qux
}
```
[Playground](https://play.golang.org/p/PokqI18xpB3)
Maybe I'm missing something, but I expect the formatting to be as follows:
```go
func main() {
t1 := foo.Bar(). //
Baz().
Qux()
}
func main() {
t1 := foo.Bar(). // bar
Baz(). // baz
Qux() // qux
}
``` | NeedsFix | low | Major |
453,672,506 | go | cmd/go: subcommand as git merge tool for go.mod merge conflicts | I expect merge conflicts in `go.mod` and `go.sum` to be a relatively common occurrence.
Quoting from @bcmills in the Go Slack:
> If you have a merge conflict in `go.mod` or `go.sum`, you can usually just remove the conflict markers (taking the union of the requirements) and then run `go mod tidy` to get back into a consistent state.
https://gophers.slack.com/archives/C9BMAAFFB/p1559923453042900?thread_ts=1559922004.041700&cid=C9BMAAFFB
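Concretely, a conflicted `go.mod` and its union-style manual resolution look something like this (module paths and versions are hypothetical):

```
<<<<<<< HEAD
require example.com/foo v1.2.0
=======
require example.com/foo v1.3.0
require example.com/bar v0.4.0
>>>>>>> feature

// Manual resolution: delete the markers and keep the union of requirements
// (here: foo, plus bar), then run `go mod tidy` so the go command
// re-derives a consistent go.mod and go.sum.
```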
This doesn't appear terribly burdensome but it does introduce friction in the development process. I'm opening this as an exploratory issue to investigate whether there's anything the tooling can/should do to mitigate the conflict.
My off-the-cuff question was whether `go mod tidy` could be updated to understand conflicts and automatically resolve them in `go.mod` and `go.sum`. My summary of the resulting conversation is that it's likely possible, but it's unclear whether it's desirable.
Some of the reasons against that were brought up:
* Both parsing of the conflicts and resolving them is VCS specific.
* This may be expanding the scope of what tidy is responsible for farther than it should be.
It's possible there's a less invasive approach to mitigating conflicts so I'm including the above suggestion only as one potential option to explore. | NeedsInvestigation,FeatureRequest,GoCommand,modules | medium | Critical |
453,677,679 | flutter | [google_maps_flutter] Location Accuracy setting for IOS | ## Use case
We'd like to switch the accuracy to a low level when using the Google Maps plugin so as to not consume battery power. Currently, for Android, you are able to change the location accuracy by changing android.permission.ACCESS_FINE_LOCATION to android.permission.ACCESS_COARSE_LOCATION. However, there is no such option for iOS.
It is my understanding that iOS has the option to change the location accuracy using CLLocationManager and its desiredAccuracy property; however, the current plugin is either not clear on whether it allows this or does not give an easy way to access the location manager and change the property.
## Proposal
Be able to set the accuracy level for both android and ios
| c: new feature,platform-ios,p: maps,package,c: proposal,customer: vroom,P3,team-ios,triaged-ios | low | Major |
453,687,010 | create-react-app | CSS custom properties are not polyfilled | ### Is this a bug report?
Yes
I have CSS variables (custom properties) in my CSS, which are not supported by IE. The CRA docs state "Support for new CSS features like the all property, break properties, custom properties, and media query ranges are automatically polyfilled to add support for older browsers." However, they are not polyfilled when they are imported from another file.
### Steps to reproduce
```css
// A.css
:root {
--test: #fff;
}
```
```css
// B.css
@import 'A.css';
body {
color: var(--test)
}
```
### Expected Behavior
The built CSS file should have the CSS variables processed out and replace the var(--) calls with the proper values
### Actual Behavior
The autoprefixer works as expected, adding vendor prefixes and such, and changing the browserslist settings does alter the outputted CSS in some way. However, no browserslist setting causes the custom properties to be post-processed, and as such the CSS will not work on IE. It seems like the variable replacement is happening before the concatenation from imports, so it doesn't find the property definition from other files.
Note that I also deleted node_modules and package-lock and the issue persists.
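For anyone hitting this outside of CRA's built-in pipeline, the transform in question is typically done by the `postcss-custom-properties` plugin, which can be told where the variable definitions live. A sketch of a standalone PostCSS config (treat the option names as illustrative; they have changed across plugin versions):

```js
// postcss.config.js (illustrative only; not CRA's internal config)
module.exports = {
  plugins: [
    require('postcss-custom-properties')({
      preserve: false,        // emit the resolved value instead of var(--test)
      importFrom: 'src/A.css' // file that declares :root { --test: #fff }
    })
  ]
};
```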
| tag: underlying tools,issue: needs investigation | medium | Critical |
453,705,165 | pytorch | [jit] In pickler, don't memoize if not necessary | Follow-up to #21542: now the pickler works the same as Python's in that it outputs memoization IDs for every object. For improved binary size and faster deserialization we should do a pass over all the `IValue`s to be serialized and only memoize the ones that are actually used more than once (similar to [`pickletools.optimize()`](https://docs.python.org/3/library/pickletools.html#pickletools.optimize)).
cc @suo | oncall: jit,triaged,jit-backlog | low | Minor |
453,726,210 | rust | wrong backtrace line for .unwrap() if it is on the next line | meta:
````
rustc 1.37.0-nightly (5eeb567a2 2019-06-06)
binary: rustc
commit-hash: 5eeb567a27eba18420a620ca7d0c007e29d8bc0c
commit-date: 2019-06-06
host: x86_64-unknown-linux-gnu
release: 1.37.0-nightly
LLVM version: 8.0
````
I had a `Command::new()....unwrap()` chain and some methods in between that were also unwrapping their arguments.
The panic that I got, however, was not pointing to any of the `unwrap()`s but to the first line of the statement (the `Command::new()`), which is quite confusing.
Here is a similar example:
````rust
fn a(_: i32, _: i16, _: i8, _: i32, _: i16, _: i8) -> i32 {
4
}
fn main() {
let X: i32 = a(
9,
Some(4)
.unwrap(),
Some(7)
.unwrap(),
Some(9)
.unwrap(),
Some( // backtrace points here
None
)
.unwrap()
.unwrap(), // should point here
7,
);
}
````
The backtrace points to line 15, which contains `Some(`, which is very confusing; IMO it should point to line 19, which is the panicking `unwrap()`.
Can reproduce on stable, beta and nightly.
Playground: https://play.rust-lang.org/?version=beta&mode=debug&edition=2018&gist=be591b1d4dbf5b729c5733a831c823e9 | A-debuginfo,T-compiler | low | Critical |
453,737,047 | pytorch | Feature Request: beta cdf | ## 🚀 Feature
Please implement the cdf for beta distribution.
## Motivation
The cdf of the beta distribution is used quite frequently, but it's not implemented.
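Until a native implementation lands, the value can be approximated numerically. This is a pure-Python sketch (not the PyTorch API; the function name and the `n` accuracy knob are illustrative) using a midpoint rule, which avoids the endpoint singularities when a or b < 1:

```python
import math

def beta_cdf(x, a, b, n=10000):
    """Approximate the Beta(a, b) CDF at x via midpoint-rule integration."""
    norm = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    h = x / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h  # midpoints avoid t = 0 and t = x exactly
        total += t ** (a - 1) * (1.0 - t) ** (b - 1)
    return total * h / norm

print(beta_cdf(0.0029, 0.5, 0.5))  # rough stand-in for m.cdf(0.0029), ≈ 0.034
```

A proper `Beta.cdf` should of course defer to the regularized incomplete beta function; this numeric version is only for illustration.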
## Code to reproduce the problem:
~~~~
from torch.distributions.beta import Beta
m = Beta(torch.tensor([0.5]), torch.tensor([0.5]))
m.cdf(0.0029)
~~~~
## Stack Trace
```
File "<stdin>", line 1, in <module>
File "/home/.virtualenvs/test/lib/python3.6/site-packages/torch/distributions/distribution.py", line 133, in cdf
raise NotImplementedError
NotImplementedError
``` | module: distributions,feature,triaged | low | Critical |
453,738,404 | pytorch | Pytorch hangs when dataloader multiprocessing workers are killed | ## 🐛 Bug
There are basically 2 issues here:
1) Pytorch does not handle it properly when dataloader workers hang. The whole process hangs when a worker hangs if no timeout is specified. When a timeout is specified, the dataloader simply throws an exception without cleaning up its worker processes.
2) Since Pytorch doesn't handle it, we had to handle it ourselves by forcefully killing the worker processes. This in turn can cause another hang when the pin memory thread is used: because the dataloader uses the same "work_result_queue" both to send data to the pin memory thread and to notify "done" to it, the "done" event may never be read properly from the queue after we kill the worker processes. And because the dataloader iterator joins the pin memory thread indefinitely when it self-destructs, the self-destruction can hang forever.
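The failure mode in (2) can be modeled with a few lines of stdlib Python; the queue, sentinel, and timeout values below are illustrative, not PyTorch internals:

```python
import queue
import threading

q = queue.Queue()
DONE = object()  # shutdown sentinel, sent on the SAME queue as data

def pin_memory_loop():
    while True:
        item = q.get()  # blocks forever if the sentinel is never enqueued
        if item is DONE:
            return

t = threading.Thread(target=pin_memory_loop, daemon=True)
t.start()

# Simulate forcefully-killed workers: nobody ever puts DONE on the queue,
# so an unconditional t.join() here would hang. A bounded join shows the
# consumer thread is still stuck.
t.join(timeout=0.2)
print(t.is_alive())  # True
```

In general the pattern is avoided with a bounded join, or a dedicated shutdown signal that killed producers cannot starve.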
## To Reproduce
Steps to reproduce the behavior:
1. Just create a dummy dataset that sleeps forever when loading.
2. Launch Dataloader with multi-workers and pin_memory=True, timeout=some number.
3. Add exception handler to handle the timeout, and in there kill the workers.
## Expected behavior
The workers are killed, but the main process hangs when it tries to join the pin memory thread indefinitely during the dataloader iterator's self-destruction, while the pin memory thread is still reading the queue indefinitely.
Many times it works without hanging, but the hang indeed occurs fairly often.
## Environment
- PyTorch Version (e.g., 1.0): 1.0.1 beta2 (1.1 has the same issue from code reading).
- OS (e.g., Linux): Ubuntu 18.04 LTS
- Python version: 3.6.7
| module: dataloader,triaged | low | Critical |
453,754,989 | go | cmd/compile: slice hint seems to perform better than loop bound check | [CL 164966](https://golang.org/cl/164966) was submitted as a BCE optimization to math/big.
It had a surprising comment -
> // However, checking i < len(x) && i < len(y) as well is faster than
// having the compiler do a bounds check in the body of the loop;
// remarkably it is even faster than hoisting the bounds check
// out of the loop, by doing something like
// _, _ = x[len(z)-1], y[len(z)-1]
To test it out, I ran the same benchmarks again by hoisting the bounds check out of the loop, and got an improvement. This issue is to investigate why I see an improvement but @josharian did not.
Here is what I did:
On a quiet machine, I ran - `gotip test -run=xxx -bench=BenchmarkAddV -tags=math_big_pure_go -count=10`
The changes are -
```diff
--- a/src/math/big/arith.go
+++ b/src/math/big/arith.go
@@ -69,7 +69,9 @@ func divWW_g(u1, u0, v Word) (q, r Word) {
// The resulting carry c is either 0 or 1.
func addVV_g(z, x, y []Word) (c Word) {
// The comment near the top of this file discusses this for loop condition.
- for i := 0; i < len(z) && i < len(x) && i < len(y); i++ {
+ _ = x[len(z)-1]
+ _ = y[len(z)-1]
+ for i := 0; i < len(z); i++ {
zi, cc := bits.Add(uint(x[i]), uint(y[i]), uint(c))
z[i] = Word(zi)
c = Word(cc)
@@ -92,7 +94,8 @@ func subVV_g(z, x, y []Word) (c Word) {
func addVW_g(z, x []Word, y Word) (c Word) {
c = y
// The comment near the top of this file discusses this for loop condition.
- for i := 0; i < len(z) && i < len(x); i++ {
+ _ = x[len(z)-1]
+ for i := 0; i < len(z); i++ {
zi, cc := bits.Add(uint(x[i]), uint(c), 0)
z[i] = Word(zi)
c = Word(cc)
```
And the benchmark results -
```
name old time/op new time/op delta
AddVV/1-4 6.43ns ± 2% 7.25ns ± 2% +12.70% (p=0.000 n=10+10)
AddVV/2-4 8.16ns ± 1% 8.59ns ± 3% +5.32% (p=0.000 n=10+10)
AddVV/3-4 9.79ns ± 2% 9.67ns ± 2% -1.16% (p=0.014 n=10+10)
AddVV/4-4 11.0ns ± 2% 10.7ns ± 1% -2.99% (p=0.000 n=9+10)
AddVV/5-4 11.7ns ± 3% 11.5ns ± 5% ~ (p=0.070 n=9+10)
AddVV/10-4 18.1ns ± 6% 16.9ns ± 4% -6.62% (p=0.000 n=10+10)
AddVV/100-4 163ns ± 3% 159ns ± 2% -2.57% (p=0.005 n=10+10)
AddVV/1000-4 1.54µs ± 2% 1.52µs ± 1% ~ (p=0.053 n=9+9)
AddVV/10000-4 15.7µs ± 4% 15.2µs ± 1% -2.78% (p=0.001 n=10+10)
AddVV/100000-4 167µs ± 6% 168µs ± 1% ~ (p=0.605 n=9+9)
AddVW/1-4 6.45ns ± 2% 6.25ns ± 2% -3.13% (p=0.000 n=10+10)
AddVW/2-4 8.66ns ± 1% 7.37ns ± 2% -14.89% (p=0.000 n=10+10)
AddVW/3-4 9.07ns ± 1% 8.31ns ± 3% -8.37% (p=0.000 n=9+10)
AddVW/4-4 10.3ns ± 2% 9.5ns ± 3% -8.50% (p=0.000 n=10+10)
AddVW/5-4 10.3ns ± 2% 10.2ns ± 2% -1.74% (p=0.006 n=10+10)
AddVW/10-4 14.9ns ± 4% 14.1ns ± 4% -5.31% (p=0.000 n=10+10)
AddVW/100-4 27.1ns ± 5% 27.0ns ± 4% ~ (p=0.956 n=10+10)
AddVW/1000-4 122ns ± 2% 121ns ± 4% ~ (p=0.210 n=10+10)
AddVW/10000-4 3.97µs ± 1% 3.97µs ± 1% ~ (p=0.807 n=8+10)
AddVW/100000-4 68.9µs ± 1% 69.9µs ± 1% +1.42% (p=0.001 n=10+10)
```
Josh says he did shorter runs; I set the count to 10. Perhaps longer runs somehow affect the branch predictor?
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version devel +98100c56da Mon Jun 3 01:37:58 2019 +0000 linux/amd64
</pre>
cc'ing folks who were in the original CL - @josharian @mundaym @griesemer
My CPU details are
<details>
```
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 2
Core(s) per socket: 2
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 61
Model name: Intel(R) Core(TM) i5-5200U CPU @ 2.20GHz
Stepping: 4
CPU MHz: 911.804
CPU max MHz: 2700.0000
CPU min MHz: 500.0000
BogoMIPS: 4389.77
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 3072K
NUMA node0 CPU(s): 0-3
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap intel_pt xsaveopt dtherm ida arat pln pts flush_l1d
```
</details>
| Performance,NeedsInvestigation,compiler/runtime | low | Minor |
453,798,664 | flutter | Flutter Lifecyle onPause and onStop | ## Use case
I'm developing an alarm app whose screen pops up even if the phone is locked. I need to tweak FlutterActivity to support this feature.
More detail is in this [issue](https://stackoverflow.com/questions/25369909/onpause-and-onstop-called-immediately-after-starting-activity):
Below is the lifecycle of the app popup when using Keyguard:
**onCreate - onStart - onResume - onPause - onStop**
This causes the Flutter engine's rendering to get stuck when `eventDelegate.onStop` and `eventDelegate.onPause` are called.
### solution
The workaround I found was **making a custom Flutter activity** and **commenting out these two lines**, but I don't know if there are any side effects.
## Proposal
I hope there is a way to control Flutter's rendering behavior; I think commenting out these two lines could cause side effects in the future.
| c: new feature,platform-android,engine,c: proposal,P3,team-android,triaged-android | low | Minor |
453,798,692 | TypeScript | Doesn't support rename refactoring with importing from a module (yarn workspaces) | Ts 3.5.1
I use yarn workspaces in the project, so my `./packages/module` folder is linked and available as `@app/module`.
Say I have a file that imports from the same module's `test`:
```ts
import { a, b } from '@app/module/test'
import { b } from './packages/module/test'
```
Then I rename the file `packages/module/test` to `test2`,
and the refactoring changes the file to:
```ts
import { a, b } from '@app/module/test' // this one is not renamed
import { b } from '@app/module/test2' // this one is renamed, with `./packages` replaced by `@app`
```
Expected result would probably be:
```ts
import { a, b } from '@app/module/test2'
import { b } from './packages/module/test2'
```
If the file imported only `from '@app/module/test'`, it would not even suggest the rename refactoring.
| Bug | low | Minor |
453,798,966 | youtube-dl | solidarium.org | <!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
- [x] I'm reporting a new site support request
- [x] I've verified that I'm running youtube-dl version **2019.06.08**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that none of provided URLs violate any copyrights
- [x] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
- Single video: https://www.solidarum.org/vivre-ensemble/adrien-labaeye-berlin-des-communautes-aux-communs
## Description
It looks like they use the sproutvideo-player | site-support-request | low | Critical |
453,807,311 | TypeScript | TouchList is missing Symbol.iterator, can't use with 'for..of' | <!-- 🚨 STOP 🚨 𝗦𝗧𝗢𝗣 🚨 𝑺𝑻𝑶𝑷 🚨
Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Even if you think you've found a *bug*, please read the FAQ first, especially the Common "Bugs" That Aren't Bugs section!
Please help us by doing the following steps before logging an issue:
* Search: https://github.com/Microsoft/TypeScript/search?type=Issues
* Read the FAQ: https://github.com/Microsoft/TypeScript/wiki/FAQ
Please fill in the *entire* template below.
-->
<!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. -->
**TypeScript Version:** 3.6.0-dev.20190608
**Search Terms:** touchlist iterator
**Code**
```ts
let touches = new TouchList();
for (const touch of touches) {
}
```
**Expected behavior:**
Code compiles successfully.
**Actual behavior:**
Error on line 2:
`Type 'TouchList' must have a '[Symbol.iterator]()' method that returns an iterator. ts(2488)`
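The underlying situation — an array-like type that supports indexed access but no iteration protocol — has a direct analogue outside TypeScript. Purely as an analogy (Python used for illustration, with made-up class names), a class exposing only `item(i)`-style access, as TouchList's IDL does, can't be looped over until an iterator is added:

```python
class TouchListLike:
    """Indexed access only, like TouchList's item(i)/length interface."""
    def __init__(self, items):
        self._items = list(items)

    @property
    def length(self):
        return len(self._items)

    def item(self, i):
        return self._items[i]

try:
    for t in TouchListLike([1, 2]):  # TypeError: not iterable
        pass
except TypeError:
    print("not iterable")

class IterableTouchList(TouchListLike):
    def __iter__(self):  # the analogue of the missing [Symbol.iterator]()
        return iter(self._items)

print(list(IterableTouchList([1, 2])))  # → [1, 2]
```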
**Playground Link:**
N/A because the Playground complains due to `for...of` requiring `--downlevelIteration`--but there's no way to enable it.
**Related Issues:**
#2695, labeled `help wanted` but closed | Bug | low | Critical |
453,815,301 | pytorch | torch.bernoulli's parameter generator not documented | ## 📚 Documentation
[torch.bernoulli](https://pytorch.org/docs/stable/torch.html#torch.bernoulli) shows two extra parameters beyond the two documented ones: `*` and `generator=None`.
What are they?
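For what it's worth, the `*` in the rendered signature is Python's keyword-only marker: every parameter after it must be passed by keyword. A plain-Python sketch of such a signature (the body is a stand-in, not torch's implementation):

```python
def bernoulli(input, *, generator=None, out=None):
    # '*' makes generator/out keyword-only, matching the rendered signature
    return ("called", generator, out)

print(bernoulli([0.5]))          # fine: positional 'input' only
try:
    bernoulli([0.5], None)       # TypeError: generator must be keyword
except TypeError:
    print("rejected positional generator")
```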
| module: docs,triaged,module: random | low | Minor |
453,823,495 | flutter | Exception in PlatformView is not captured by FlutterError.onError | I'm having to catch errors that occur in PlatformView on the Java side.
When using the FlutterError method I noticed that errors triggered in platform_view.dart are not reported to FlutterError.
To solve this problem, I made a change in message_codecs.dart, but I changed the [decodeEnvelope](https://github.com/flutter/flutter/blob/master/packages/flutter/lib/src/services/message_codecs.dart#L597) method to be as follows
```dart
import 'package:flutter/foundation.dart' show ReadBuffer, WriteBuffer, required, FlutterError, ErrorDescription, FlutterErrorDetails;
//...//
if (errorCode is String && (errorMessage == null || errorMessage is String) && !buffer.hasRemaining) {
try {
throw PlatformException(code: errorCode, message: errorMessage, details: errorDetails);
} catch(e, stack) {
FlutterError.reportError(FlutterErrorDetails(
exception: e,
stack: stack,
library: 'services library',
context: ErrorDescription('while decode evelope from platform stream'),
));
throw e;
}
}
else
throw const FormatException('Invalid envelope');
```
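The change above boils down to "report to a global error handler, then rethrow so callers still see the failure". That pattern, sketched generically (Python used for illustration; the handler name is made up):

```python
reported = []

def report_error(exc, context):
    # Stand-in for a global handler like FlutterError.reportError.
    reported.append((type(exc).__name__, context))

def decode_envelope(error_code):
    try:
        raise RuntimeError(error_code)
    except RuntimeError as e:
        report_error(e, "while decoding envelope from platform stream")
        raise  # rethrow so the caller's own handling still runs

try:
    decode_envelope("channel-error")
except RuntimeError as e:
    print("caller saw:", e)
print("reported:", reported)
```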
I'm sure that is not the best solution, but I wonder if there is a plan for an official one. | framework,a: platform-views,c: proposal,a: error message,P2,team-framework,triaged-framework | low | Critical |
453,824,405 | svelte | Idle until urgent, what about a queue for improving render sense of speed? | Dear @Rich-Harris your work with Svelte is amazing.
I consider you a genius and an innovator like _Leonardo Da Vinci_ or _Henry Ford_.
With **Svelte 3** you gave us a bit of that fresh air that we missed for so long.
So first of all I want to give you an immense and melodious thanks.
---
And now I would like your holy opinion about a doubt, a desire that has gripped me for a long time.
## **A DOUBT, A DESIRE**
Let's say I have a Svelte 3 SPA (single page application) and a router (svero, navaid, svelte-routing, universal or else).
When I move from one page to another Svelte is very fast, but when there are so many components I feel "the weight", "the slowness" of rendering the entire "page".
If I use the Chrome "Performance" tab > "CPU throttling" > "6x" or "4x" I can see a small, annoying "delay", "lag" before the new page rendering. All this without CSS or JS animations.
"Simply" the page is too full of components that are all needed.
And when those components mount, there's a lot going on in the background (mostly queries and computation).
Those components are needed, but they aren't needed **immediately**.
What I need immediately _instead_ is the sense of speed of my SPA: to navigate from one page to another (_even if fake, empty_) must be fast!
An example (although not 100%) could be YouTube, which has done a good job on this.
What I'd like to have is:
- click from a page to another
- immediately render **first of all** the new page, even without any component and maybe just a text "Loading..."
- **and then** "slowly" render everything else
I tried something like this:
```html
<script>
import { onMount } from "svelte";
import { doQuery } from "../queries";
let canRender = false;
onMount(() => {
setTimeout(() => {
setTimeout(() => {
canRender = true;
});
});
});
</script>
<div>
{#if canRender}
{#await $doQuery}
Loading...
{:then result}
{#each result as player}
<div>{player.name}</div>
{:else}
No player
{/each}
{/await}
{:else}
Loading...
{/if}
</div>
```
I tried also with `tick`:
```html
<script>
import { onMount, tick } from "svelte";
import { doQuery } from "../queries";
let canRender = false;
onMount(async () => {
await tick();
canRender = true;
});
</script>
<div>
{#if canRender}
{#await $doQuery}
Loading...
{:then result}
{#each result as player}
<div>{player.name}</div>
{:else}
No player
{/each}
{/await}
{:else}
Loading...
{/if}
</div>
```
But I don't know if I'm doing well.
And maybe now the problem is just "a tick" away from the first render and then all the renderings happen together at the same time again (am I wrong?).
Since Svelte is perfect I think there might be another different way to deal with this.
I read about:
- https://philipwalton.com/articles/idle-until-urgent/ and
- https://medium.com/@alexandereardon/dragging-react-performance-forward-688b30d40a33
Since your brain is among the most prosperous in the world, do you think it's possible to introduce something like an "idle-task-rendering-automagically-queue-for-components-and-navigation" in Svelte?
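For reference, the "idle until urgent" idea is framework-agnostic: a computation is scheduled for idle time, but if its result is demanded before the idle callback fires, it runs immediately ("urgently"). A minimal sketch of that contract (Python used for illustration; no Svelte involved, names made up):

```python
class IdleValue:
    """Compute lazily during idle time, or immediately on first access."""
    def __init__(self, fn):
        self._fn = fn
        self._done = False
        self._value = None

    def run_if_idle(self):
        # Would be called from an idle callback (requestIdleCallback in JS).
        if not self._done:
            self._value = self._fn()
            self._done = True

    def get(self):
        # Urgent path: compute now if idle time never came.
        self.run_if_idle()
        return self._value

calls = []
v = IdleValue(lambda: calls.append("computed") or 42)
print(v.get(), calls)  # → 42 ['computed']
print(v.get(), calls)  # still 42, computed only once
```

An "idle-task rendering queue" would be this same idea applied to component mounting: each deferred mount is an idle task, promoted to urgent when its output is actually needed on screen.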
I obviously propose myself as an undying and immediate beta-tester and wiki-docs-writer.
Thanks again for your commitment.
---
This message is also addressed to people of very high value like @timhall, @jacwright and @lukeed.
Something wonderful can come from your minds.
Thanks to you too for all your wonderful work. | perf,temp-stale | medium | Major |
453,824,589 | godot | Light2D mask doesn't respect: Layer Min, Layer Max, Item Cull Mask | <!-- Please search existing issues for potential duplicates before filing yours:
https://github.com/godotengine/godot/issues?q=is%3Aissue
-->
**Godot version:**
3.1
**Steps to reproduce:**
Add some GUI elements, then add a masking Light2D. The GUI will be affected.
What happens is that a Light2D in Mask mode doesn't respect Layer Min, Layer Max, or Item Cull Mask. It masks everything, even the default UI. | bug,topic:rendering,confirmed | medium | Major |
453,826,324 | godot | Godot badly projects 3D camera ray if stretch mode is 2d+"expand" (or "keep") | Godot 3.1.1
Windows 10 64 bits
I have a 3D game in which I use the following code to get a ray from the captured mouse (itself patched to half the size of viewport because of #29559):
```gdscript
var mouse_pos = get_viewport().size / 2.0
var ray_origin = _camera.project_ray_origin(mouse_pos)
var ray_direction = _camera.project_ray_normal(mouse_pos)
```
Which works fine with the default stretch mode, even when I resize the window.
However, I have a HUD I want to scale nicely, but if I set the stretch mode to `2d` and `expand`, the 3D ray becomes wrong, offset somehow. I believe camera project functions are broken.
Meanwhile, I can't make my UI stretch properly because of that.
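For context, projecting a ray from a screen pixel depends only on the pixel's position relative to the rendered viewport; if stretch remapping changes that mapping without the camera knowing, the ray comes out offset. A rough pinhole-camera sketch of the math (pure illustration, not Godot's implementation):

```python
import math

def project_ray_normal(px, py, width, height, fov_deg=70.0):
    # Map pixel -> normalized device coords, then to a view-space direction.
    aspect = width / height
    t = math.tan(math.radians(fov_deg) / 2.0)
    ndc_x = 2.0 * px / width - 1.0
    ndc_y = 1.0 - 2.0 * py / height
    d = (ndc_x * t * aspect, ndc_y * t, -1.0)
    n = math.sqrt(sum(c * c for c in d))
    return tuple(c / n for c in d)

# The center of the viewport must map to straight ahead (0, 0, -1).
print(project_ray_normal(640, 360, 1280, 720))  # → (0.0, 0.0, -1.0)
# If stretch halves the effective size but we still pass window coords,
# the same pixel no longer maps to the center: the ray is offset.
print(project_ray_normal(640, 360, 640, 360))
```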
(sorry no time for a repro project yet, I'm in the middle of a game jam) | bug,topic:core,confirmed,topic:3d | low | Critical |
453,827,020 | flutter | [web] implement Paragraph.getBoxesForPlaceholders for flutter web | This is used for text/inline widgets. In the meantime it could probably return an empty List to be safe. | framework,c: API break,a: typography,platform-web,c: proposal,P2,team-web,triaged-web | low | Minor |
453,829,965 | create-react-app | Incorrect order of CSS in build version | ### Is this a bug report?
Yes
### Environment
`npx create-react-app --info` prints an empty result for some reason.
### Steps to Reproduce
Using CRA 3.0.1.
1. `create-react-app style_test`
2. `npm i [email protected] --save`
3. In `index.js`, import two CSS files: first a reset CSS, then the bootstrap CSS.
4. Run dev server and production builds and compare results.
### Expected Behavior
Both dev server and production builds should display text using bootstrap CSS.
### Actual Behavior
Dev server works as expected, but the production build contains two CSS chunks, one with bootstrap code and another with the reset CSS, loaded in the wrong order (bootstrap first). As a result, in the production version the text is not styled.
Bootstrap (and only bootstrap) appears in `2.*.chunk.css`, while reset CSS and all my sass styles end up in `main.*.chunk.css`. What is the benefit of using two chunks, anyway? Just faster load times due to parallel loading of both scripts?
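The symptom follows from the cascade: for rules of equal specificity, the later-loaded stylesheet wins, so swapping chunk order silently flips the result. A toy "last rule wins" model (purely illustrative, with made-up values):

```python
def apply_stylesheets(sheets):
    # Each sheet maps property -> value; later sheets override earlier ones.
    computed = {}
    for sheet in sheets:
        computed.update(sheet)
    return computed

reset = {"margin": "0", "font-family": "inherit"}
bootstrap = {"font-family": "Helvetica", "margin": "8px"}

# Dev server: reset first, bootstrap second -> bootstrap styles apply.
print(apply_stylesheets([reset, bootstrap])["font-family"])  # → Helvetica

# Production build: bootstrap chunk first, reset chunk second.
print(apply_stylesheets([bootstrap, reset])["font-family"])  # → inherit
```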
### Reproducible Demo
https://github.com/d07RiV/style_test | issue: bug | medium | Critical |
453,834,458 | opencv | javadoc: improve content rendering of Java wrappers | Current Issues:
- some formula blocks are broken
- missing images (see .png in text)
- missing "see also" links
- paragraphs without line breaks
- some list formatting is broken
Possible variants are in comments below.
<details>
<summary>Example</summary>
[Link](https://docs.opencv.org/3.4/javadoc/org/opencv/core/Core.html#norm-org.opencv.core.Mat-) (page will be auto-updated, see screenshot from 2019-06-08).

relates #14667
</details>
-----
**Vote**: use emoji buttons for comments below (:+1: / :-1: ) | feature,category: documentation,category: java bindings,vote | low | Critical |