id | repo | title | body | labels | priority | severity |
---|---|---|---|---|---|---|
478,478,625 | deno | Redirect between stderr and stdout in `Deno.Command` | Ref https://github.com/denoland/deno/pull/1828#issuecomment-467719574
Rather than adding `Process::combinedOutput()`, I propose modelling `2>&1` in `RunOptions`.
Ref https://docs.python.org/3/library/subprocess.html#using-the-subprocess-module
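For illustration, a call site under the proposed option might look like the sketch below. Only the `"stdout"` value on `stderr` is the proposed addition; the rest (`cmd`, `"piped"`, `p.output()`) is assumed from the existing `Deno.run` API.
```ts
const p = Deno.run({
  cmd: ["deno", "eval", "console.log('out'); console.error('err')"],
  stdout: "piped",
  stderr: "stdout", // proposed: merge stderr into stdout, i.e. 2>&1
});
// Combined output of both streams, in the order the process produced it.
const combined = new TextDecoder().decode(await p.output());
```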
```ts
export interface RunOptions {
cmd: string[];
cwd?: string;
env?: {
[key: string]: string;
};
stdout?: ProcessStdio | number;
stderr?: ProcessStdio | number;
stdin?: ProcessStdio | number;
}
```
->
```ts
export interface RunOptions {
cmd: string[];
cwd?: string;
env?: {
[key: string]: string;
};
stdout?: ProcessStdio | number;
stderr?: ProcessStdio | "stdout" | number;
stdin?: ProcessStdio | number;
}
``` | feat,public API,runtime | low | Minor |
478,509,857 | go | go/types: Config.Check tries to create go.sum |
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.12.7 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/user/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/user/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/lib/go"
GOTMPDIR=""
GOTOOLDIR="/usr/lib/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build824852661=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
Using `types.Config.Check()` on a file importing a package with `go.mod` but without `go.sum` causes an attempt to write the missing `go.sum` file. I realize not having `go.sum` is a mistake on the repo maintainer's side, but there are projects like that out there and they're used. This in turn causes problems in applications that depend on the type checker to find out some information about source code (e.g. the ObjectBox code generator).
See https://github.com/vaind/go-type-checker-issues for a minimal example that shows the issue. Try running the following code as a non-root user on linux:
<pre>
$ git clone [email protected]:vaind/go-type-checker-issues.git
$ cd go-type-checker-issues/gosum
$ go test ./...
--- FAIL: TestGoSumPermissions (0.71s)
gosum_test.go:39: gosum.go:5:4: could not import github.com/anacrolix/missinggo (type-checking package "github.com/anacrolix/missinggo" failed (/home/user/go/pkg/mod/github.com/anacrolix/[email protected]/strcase.go:6:2: could not import github.com/huandu/xstrings (go/build: importGo github.com/huandu/xstrings: exit status 1
go: writing go.sum: open /home/user/go/pkg/mod/github.com/anacrolix/[email protected]/go.sum439740474.tmp: permission denied
)))
FAIL
FAIL github.com/vaind/go-type-checker-issues/gosum 0.711s
</pre>
Interestingly a combination of go get & go test works fine, but unfortunately that doesn't solve the problem if you have such imports in your own project...
### What did you expect to see?
I expect the type checker wouldn't try to create a missing `go.sum` file.
| NeedsInvestigation,modules,Tools | low | Critical |
478,537,732 | flutter | Improve color controls for flutter. | It would be great to have improved color functions similar to as outlined here:
https://thoughtbot.com/blog/controlling-color-with-sass-color-functions
| c: new feature,framework,would be a good package,P3,team-framework,triaged-framework | low | Minor |
478,554,939 | terminal | Use C++/WinRT FastAbi | This will reduce the QI burden (and have miscellaneous other poorly-documented effects) in our consumption of C++/WinRT. | Area-Build,Product-Meta,Issue-Task | low | Minor |
478,572,439 | godot | Doing =+ should raise a syntax error | **Godot version:**
3.1.1 stable official
**OS/device including version:**
OSX 10.14.4
**Issue description:**
I made a typo, instead of writing
`weight += o.get_weight()`
I wrote
`weight =+ o.get_weight()`
IMHO, if there is **no space** between the assignment and the + sign, then the syntax should not be valid, to keep such errors from becoming time-consuming to track down.
**Steps to reproduce:**
Just test something like
`weight =+ o.get_weight()`
| enhancement,discussion,topic:gdscript | low | Critical |
478,574,672 | pytorch | Multiplying a very large CUDA tensor with another tensor yields unexpected result | ## 🐛 Bug
Multiplying a very large CUDA tensor with another tensor yields unexpected result.
## To Reproduce
Steps to reproduce the behavior:
1. Generate the following random matrices
```
A = torch.randn((11111111, 20), device=torch.device("cuda"))
B = torch.randn((20, 2), device=torch.device("cuda"))
```
2. Then `(A @ B)[8807984:]` must be the same as `A[8807984:] @ B`. But it is not the case!
Minimal example:
```
A = torch.randn((11111111, 20), device=torch.device("cuda"))
B = torch.randn((20, 2), device=torch.device("cuda"))
print((A @ B)[8807984:].equal(A[8807984:] @ B))
```
returns `False`
## Expected behavior
```
A = torch.randn((11111111, 20), device=torch.device("cuda"))
B = torch.randn((20, 2), device=torch.device("cuda"))
print((A @ B)[8807984:].equal(A[8807984:] @ B))
```
Should return `True`
## Environment
PyTorch version: 1.1.0
Is debug build: No
CUDA used to build PyTorch: 9.0.176
OS: Ubuntu 18.04.2 LTS
GCC version: (Homebrew gcc 5.5.0_4) 5.5.0
CMake version: Could not collect
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 9.1.85
GPU models and configuration: GPU 0: GeForce GTX 1080 Ti
Nvidia driver version: 390.116
cuDNN version: Could not collect
Versions of relevant libraries:
[pip] numpy==1.17.0
[pip] pytorch-lightning==0.3.6.9
[pip] torch==1.1.0
[pip] torchkge==0.10.3
[pip] torchvision==0.3.0
[conda] blas 2.10 mkl conda-forge
[conda] libblas 3.8.0 10_mkl conda-forge
[conda] libcblas 3.8.0 10_mkl conda-forge
[conda] liblapack 3.8.0 10_mkl conda-forge
[conda] liblapacke 3.8.0 10_mkl conda-forge
[conda] mkl 2019.4 243
[conda] pytorch 1.1.0 py3.7_cuda9.0.176_cudnn7.5.1_0 pytorch
[conda] pytorch-lightning 0.3.6.9 pypi_0 pypi
[conda] torchkge 0.10.3 dev_0 <develop>
[conda] torchvision 0.3.0 py37_cu9.0.176_1 pytorch
cc @ezyang @gchanan @zou3519 | module: dependency bug,module: cuda,triaged | low | Critical |
478,609,358 | godot | RichTextLabel and Label shadow colors multiply the main font color instead of overriding it |
**Godot version:**
3.1.1 stable
**OS/device including version:**
Windows 10
**Issue description:**
Label and RichTextLabel shadow colors are copied from the font color and then modulated. This affects bitmap fonts, so only fully black shadows can be used. Not sure if other nodes use shadows; if so, they might suffer from the same problem. Buttons do not have shadows.
The shadow colors should be a white (255, 255, 255, 255) color instead, so they can be modulated separately from the original font color.

**Steps to reproduce:**
Add the supplied font to a label
**Minimal reproduction project:**
[fonts.zip](https://github.com/godotengine/godot/files/3483040/fonts.zip)
Added a colored font to test with. The 'm' and 'a' characters are colored, use those to test :)
| bug,topic:core,confirmed | low | Critical |
478,637,686 | flutter | Modify FDE menubar plugin API to support multiple shortcuts | Currently, the FDE menubar plugin API only allows a single LogicalKeySet shortcut. It would be great if this would allow for multiple. The use case at hand for me is a 'delete' action, which I would really like to be bound to both the backspace and delete keys as independent LogicalKeySets. Ideally, one of these shortcuts would be treated as the primary in that the symbols reflecting its keys would be shown in the menu item, and any other shortcuts would trigger the callback and menu highlight but wouldn't be shown in the menu item. | c: new feature,framework,platform-mac,a: desktop,customer: octopod,P3,team-macos,triaged-macos | low | Minor |
478,652,547 | youtube-dl | Request support for wetv.vip |
## Checklist
- [x] I'm reporting a new site support request
- [x] I've verified that I'm running youtube-dl version **2019.08.02**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that none of provided URLs violate any copyrights
- [x] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
- Single video: https://wetv.vip/play?vid=o00318x0wds
- Playlist: https://wetv.vip/play?cid=jenizogwk2t8400
## Description
This site is the official site for Chinese/Korean dramas. There is a lot of free content, and also a lot of content that requires a subscription. Can you please look into whether it's possible to add support for it?
Thanks
| site-support-request | low | Critical |
478,672,810 | pytorch | fractional_max_pool2d_with_indices silently ignores output_ratio if output_size is provided | Reading the code: https://github.com/pytorch/pytorch/blob/32efb431294d99a60899b0809c6363065608e556/torch/nn/functional.py#L339-L350 , it looks like if both output_ratio and output_size are provided, then the function ignores output_ratio. There should be a check for this.
cc @heitorschueroff | module: error checking,triaged,module: pooling | low | Minor |
478,679,903 | TypeScript | Give more information in --extendedDiagnostics | This is a list I may keep adding to as I help troubleshoot users with slow projects.
# Breakdown of file types
Much of the time, users just have a lot of .d.ts files in `node_modules`. It'd be easier to tell the cause if I could see the breakdown of:
* `.ts`/`.tsx` files
* `.d.ts` files
* `.js`/`.jsx` files
I'd want to know number of lines and number of files.
# Breakdown of program construction time
I've seen many examples of incorrect `exclude` globs that try to exclude `node_modules` but fail to do so.
What I'm looking for is number of folders explored during program construction time. | Suggestion,Experience Enhancement,Rescheduled | low | Major |
478,699,977 | pytorch | Tests do not pass with the latest protobuf | Updating protobuf submodule to the latest version causes test errors, as shown in #22595 (for example, https://ci.pytorch.org/jenkins/job/caffe2-builds/job/py2-devtoolset7-rocmrpm-centos7.5-test/29677/console ).
The latest protobuf has been reported twice to fix the build on ARM: #22564
Looks like low-hanging fruit; it might be worth fixing. | module: protobuf,caffe2,triaged | low | Critical |
478,705,613 | pytorch | tensor.var_mean variant for existing torch.var_mean (and same for std_mean) | According to docs, the method variant currently seems absent (yet the input tensor is clear, so a method variant makes sense):
https://pytorch.org/docs/stable/torch.html?highlight=var_mean#torch.var_mean | triaged,function request,module: reductions | low | Minor |
478,709,635 | pytorch | torch.{save,load} data corruption when serializing a Module with __{get,set}state__ | Repro:
```
import torch
import tempfile
class Foo(torch.nn.Module):
def __init__(self):
super().__init__()
self.x = torch.rand(3, 4)
def __getstate__(self):
print('storage', self.x.storage())
print('cdata', self.x.storage()._cdata)
print('data', self.x)
return (self.x,)
def __setstate__(self, state):
x = state[0]
print('storage', x.storage())
print('cdata', x.storage()._cdata)
print('data', x)
foo = Foo()
with tempfile.NamedTemporaryFile() as f:
torch.save(foo, f)
f.seek(0)
loaded = torch.load(f)
```
Output
```
storage 0.616813063621521
0.1769922971725464
0.8948532938957214
0.48996198177337646
0.05472159385681152
0.5072327852249146
0.2782403826713562
0.28143006563186646
0.34611016511917114
0.08622455596923828
0.3336881399154663
0.06343936920166016
[torch.FloatStorage of size 12]
cdata 94388766643008
data tensor([[0.6168, 0.1770, 0.8949, 0.4900],
[0.0547, 0.5072, 0.2782, 0.2814],
[0.3461, 0.0862, 0.3337, 0.0634]])
storage -6.13554555064281e-24
3.079493505200218e-41
-6.0064247976762345e-24
3.079493505200218e-41
1.401298464324817e-45
0.0
4.0678076049753325e-31
6.206981819115192e-36
6.206981819115192e-36
6.206981819115192e-36
6.206981819115192e-36
2.7953761007662624e-20
[torch.FloatStorage of size 12]
cdata 94388766662208
data tensor([[-6.1355e-24, 3.0795e-41, -6.0064e-24, 3.0795e-41],
[ 1.4013e-45, 0.0000e+00, 4.0678e-31, 6.2070e-36],
[ 6.2070e-36, 6.2070e-36, 6.2070e-36, 2.7954e-20]])
```
cc @ezyang @gchanan @zou3519 | high priority,module: serialization,triaged,quansight-nack | low | Major |
478,746,407 | TypeScript | Utility type: Object entries tuple | ## Search Terms
Utility type, object entries, tuple, mapped type
## Suggestion
An `Entries<T>` utility type that constructs a union of all object entry tuples of type T in `lib.es5.d.ts`.
```typescript
/**
* Construct a union of all object entry tuples of type T
*/
type Entries<T> = {
[P in keyof T]: [P, T[P]];
}[keyof T];
```
## Use Cases
I understand the history behind why `Object.entries` provides `string` as the key type rather than the actual keys (because there may exist more actual keys than those defined in the type). I refer to the discussion [here](https://github.com/microsoft/TypeScript/pull/12253#issuecomment-263132208).
Sometimes, however, we can be confident that only properties defined in the type will be present at runtime, and have found that a utility type like this can be helpful to more strictly define such types, particularly in tests.
## Examples
```typescript
const expected = {
foo: "foo",
bar: "Bar",
baz: "BAZ",
} as const;
(Object.entries(expected) as Entries<typeof expected>).forEach(([input, output]) => {
// where `myFunction`'s first parameter only accepts `"foo" | "bar" | "baz"` (or a superset)
expect(myFunction(input)).to.equal(output);
});
```
```typescript
function thatAcceptsJsxAttributeTuples(attributes: readonly Entries<JSX.IntrinsicElements["path"]>[]): void {
// ...
}
```
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback | low | Major |
478,755,398 | vue-element-admin | npm run preview opens a blank page |
## Bug report
Running npm run preview builds successfully and generates the dist folder. Opening the index file in the dist directory over the file protocol shows the login page, but visiting http://localhost:9526/ shows a blank page.
#### Steps to reproduce
npm run preview
Open http://localhost:9526/ in a browser
#### Screenshot or Gif


#### Other relevant information
- Your OS:
- Node.js version: 8.9.3
- vue-element-admin version: latest version on master
| bug,enhancement :star: | low | Critical |
478,760,728 | opencv | compile a cap_dshow.cpp in the mingw | Hi,
I get an error when compiling a branch of OpenCV 3.4.7.
##### Error
```
opencv-3.4.7\modules\videoio\src\cap_dshow.cpp:2313:41: error: 'sprintf_instead_use_StringCbPrintfA_or_StringCchPrintfA' was not declared in this scope
```
##### System information
OS: windows 10.
gcc version(mingw32):
`
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=C:/DEV/SDK/msys64/mingw32/bin/../lib/gcc/i686-w64-mingw32/9.1.0/lto-wrapper.exe
Target: i686-w64-mingw32
Configured with: ../gcc-9.1.0/configure --prefix=/mingw32 --with-local-prefix=/mingw32/local --build=i686-w64-mingw32 --host=i686-w64-mingw32 --target=i686-w64-mingw32 --with-native-system-header-dir=/mingw32/i686-w64-mingw32/include --libexecdir=/mingw32/lib --enable-bootstrap --with-arch=i686 --with-tune=generic --enable-languages=c,lto,c++,fortran,ada,objc,obj-c++ --enable-shared --enable-static --enable-libatomic --enable-threads=posix --enable-graphite --enable-fully-dynamic-string --enable-libstdcxx-filesystem-ts=yes --enable-libstdcxx-time=yes --disable-libstdcxx-pch --disable-libstdcxx-debug --disable-isl-version-check --enable-lto --enable-libgomp --disable-multilib --enable-checking=release --disable-rpath --disable-win32-registry --disable-nls --disable-werror --disable-symvers --enable-plugin --with-libiconv --with-system-zlib --with-gmp=/mingw32 --with-mpfr=/mingw32 --with-mpc=/mingw32 --with-isl=/mingw32 --with-pkgversion='Rev3, Built by MSYS2 project' --with-bugurl=https://sourceforge.net/projects/msys2 --with-gnu-as --with-gnu-ld --disable-sjlj-exceptions --with-dwarf2
Thread model: posix
gcc version 9.1.0 (Rev3, Built by MSYS2 project)
`
> So, I changed all `sprintf` calls to `sprintf_s` and the build completed without errors.
Is there anything I missed?
Here is my configure log:
```
======================================
Detected processor: AMD64
sizeof(void) = 4 on 64 bit processor. Assume 32-bit compilation mode
libjpeg-turbo: VERSION = 2.0.2, BUILD = opencv-3.4.7-libjpeg-turbo
Found TBB (env): C:/DEV/SDK/msys64/mingw32/lib/libtbb.dll.a
Found OpenBLAS libraries: C:/DEV/SDK/msys64/mingw32/lib/libopenblas.a
Found OpenBLAS include: C:/DEV/SDK/msys64/mingw32/include/OpenBLAS
LAPACK(OpenBLAS): LAPACK_LIBRARIES: C:/DEV/SDK/msys64/mingw32/lib/libopenblas.a
LAPACK(OpenBLAS): Can't build LAPACK check code. This LAPACK version is not supported.
A library with LAPACK API found.
LAPACK(LAPACK/Generic): LAPACK_LIBRARIES: C:/DEV/SDK/msys64/mingw32/lib/libopenblas.a
LAPACK(LAPACK/Generic): Can't build LAPACK check code. This LAPACK version is not supported.
Picked up JAVA_TOOL_OPTIONS: -Djava.net.preferIPv4Stack=true
Found apache ant: C:/DEV/Tools/apache-ant-1.9.14/bin/ant.bat (1.9.14)
VTK is not found. Please set -DVTK_DIR in CMake to VTK build directory, or to VTK install subdirectory with VTKConfig.cmake file
OpenCV Python: during development append to PYTHONPATH: C:/DEV/SDK/opencv-3.4.7/build/opencv_build/python_loader
Caffe: NO
Protobuf: NO
Glog: YES
freetype2: NO
harfbuzz: NO
No preference for use of exported gflags CMake configuration set, and no hints for include/library directories provided. Defaulting to preferring an installed/exported gflags CMake configuration if available.
Found installed version of gflags: C:/DEV/SDK/msys64/mingw32/lib/cmake/gflags
Detected gflags version: 2.2.2
Found installed version of Eigen: C:/DEV/SDK/msys64/mingw32/share/eigen3/cmake
Found required Ceres dependency: Eigen version 3.3.7 in C:/DEV/SDK/msys64/mingw32/include/eigen3
Found installed version of glog: C:/DEV/SDK/msys64/mingw32/lib/cmake/glog
Detected glog version: 0.4.0
Found required Ceres dependency: glog
Found installed version of gflags: C:/DEV/SDK/msys64/mingw32/lib/cmake/gflags
Detected gflags version: 2.2.2
Found required Ceres dependency: gflags
Found Ceres version: 1.14.0 installed in: C:/DEV/SDK/msys64/mingw32 with components: [EigenSparse, SparseLinearAlgebraLibrary, LAPACK, SuiteSparse, CXSparse, SchurSpecializations, OpenMP, Multithreading]
Checking SFM deps... FALSE
Module opencv_sfm disabled because the following dependencies are not found: Glog/Gflags
Excluding from source files list: modules/imgproc/src/imgwarp.avx2.cpp
Excluding from source files list: modules/imgproc/src/resize.avx2.cpp
Excluding from source files list: modules/imgproc/src/sumpixels.avx512_skx.cpp
Excluding from source files list: <BUILD>/modules/dnn/layers/layers_common.avx2.cpp
Excluding from source files list: <BUILD>/modules/dnn/layers/layers_common.avx512_skx.cpp
Excluding from source files list: modules/features2d/src/fast.avx2.cpp
Tesseract: YES
General configuration for OpenCV 3.4.7 =====================================
Version control: unknown
Extra modules:
Location (extra): C:/DEV/SDK/opencv-3.4.7/opencv_contrib-cleanup_stl_string_replacement/modules
Version control (extra): unknown
Platform:
Timestamp: 2019-08-07T15:02:29Z
Host: Windows 10.0.17134 AMD64
CMake: 3.15.1
CMake generator: MinGW Makefiles
CMake build tool: C:/DEV/SDK/msys64/mingw32/bin/mingw32-make.exe
Configuration: Release
CPU/HW features:
Baseline: SSE SSE2
requested: SSE2
Dispatched code generation: SSE4_1 SSE4_2 FP16 AVX
requested: SSE4_1 SSE4_2 AVX FP16
SSE4_1 (12 files): + SSE3 SSSE3 SSE4_1
SSE4_2 (1 files): + SSE3 SSSE3 SSE4_1 POPCNT SSE4_2
FP16 (0 files): + SSE3 SSSE3 SSE4_1 POPCNT SSE4_2 FP16 AVX
AVX (5 files): + SSE3 SSSE3 SSE4_1 POPCNT SSE4_2 AVX
C/C++:
Built as dynamic libs?: YES
C++11: YES
C++ Compiler: C:/DEV/SDK/msys64/mingw32/bin/i686-w64-mingw32-g++.exe (ver 9.1.0)
C++ flags (Release): -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Winit-self -Wno-delete-non-virtual-dtor -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -mfpmath=sse -fvisibility=hidden -fvisibility-inlines-hidden -O3 -DNDEBUG -DNDEBUG
C++ flags (Debug): -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Winit-self -Wno-delete-non-virtual-dtor -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -mfpmath=sse -fvisibility=hidden -fvisibility-inlines-hidden -g -O0 -DDEBUG -D_DEBUG
C Compiler: C:/DEV/SDK/msys64/mingw32/bin/i686-w64-mingw32-gcc.exe
C flags (Release): -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wuninitialized -Winit-self -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -mfpmath=sse -fvisibility=hidden -O3 -DNDEBUG -DNDEBUG
C flags (Debug): -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wuninitialized -Winit-self -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -mfpmath=sse -fvisibility=hidden -g -O0 -DDEBUG -D_DEBUG
Linker flags (Release): -Wl,--gc-sections
Linker flags (Debug): -Wl,--gc-sections
ccache: NO
Precompiled headers: YES
Extra dependencies:
3rdparty dependencies:
OpenCV modules:
To be built: aruco bgsegm bioinspired calib3d ccalib core datasets dnn dnn_objdetect dpm face features2d flann fuzzy hdf hfs highgui img_hash imgcodecs imgproc java line_descriptor ml objdetect optflow ovis phase_unwrapping photo plot reg rgbd saliency shape stereo stitching structured_light superres surface_matching text tracking ts video videoio videostab xfeatures2d ximgproc xobjdetect xphoto
Disabled: world
Disabled by dependency: -
Unavailable: cnn_3dobj cudaarithm cudabgsegm cudacodec cudafeatures2d cudafilters cudaimgproc cudalegacy cudaobjdetect cudaoptflow cudastereo cudawarping cudev cvv freetype js matlab python2 python3 sfm viz
Applications: apps
Documentation: NO
Non-free algorithms: NO
Windows RT support: NO
GUI:
Win32 UI: YES
VTK support: NO
Media I/O:
ZLib: build (ver 1.2.11)
JPEG: build-libjpeg-turbo (ver 2.0.2-62)
WEBP: build (ver encoder: 0x020e)
PNG: build (ver 1.6.37)
TIFF: build (ver 42 - 4.0.10)
JPEG 2000: build (ver 1.900.1)
OpenEXR: build (ver 2.3.0)
HDR: YES
SUNRASTER: YES
PXM: YES
Video I/O:
DC1394: NO
FFMPEG: YES (prebuilt binaries)
avcodec: YES (ver 57.107.100)
avformat: YES (ver 57.83.100)
avutil: YES (ver 55.78.100)
swscale: YES (ver 4.8.100)
avresample: YES (ver 3.7.0)
GStreamer: NO
DirectShow: YES
Parallel framework: TBB (ver 2019.0 interface 11001)
Trace: YES (built-in)
Other third-party libraries:
Lapack: NO
Eigen: YES (ver 3.3.7)
Custom HAL: NO
Protobuf: build (3.5.1)
OpenCL: YES (no extra features)
Include path: C:/DEV/SDK/opencv-3.4.7/opencv-3.4.7/3rdparty/include/opencl/1.2
Link libraries: Dynamic load
Python (for build): C:/DEV/SDK/msys64/mingw32/bin/python2.7.exe
Java:
ant: C:/DEV/Tools/apache-ant-1.9.14/bin/ant.bat (ver 1.9.14)
JNI: C:/DEV/SDK/jdk-8u999-windows-i586/jdk1.8.0_999/include C:/DEV/SDK/jdk-8u999-windows-i586/jdk1.8.0_999/include/win32 C:/DEV/SDK/jdk-8u999-windows-i586/jdk1.8.0_999/include
Java wrappers: YES
Java tests: NO
Install to: C:/DEV/SDK/opencv-3.4.7/build/opencv_build/install
-----------------------------------------------------------------
Configuring done
``` | priority: low,category: videoio,category: build/install,incomplete,needs investigation | low | Critical |
478,770,541 | flutter | Everything builds from a Flutter module | This is a brainstorm proposal to restructure the modularity of Flutter projects to increase modular reuse between add-to-app cases and full-Flutter-app cases and to make the project structure generally more compositional.
Consider that there are conceptually four things involved when building a Flutter project:
0. Flutter engine. Binaries that include the Flutter C++ engine + platform specific bits, Skia, Dart VM etc. These are pre-built and independent of the user's project.
1. A Dart package. The inputs are .dart files, a pubspec and Flutter dependent assets such as fonts etc. The outputs are JIT/AOT binaries/assets and project assets. There's a single Dart package across all platforms.
2. A platform library wrapping the dart package. The inputs are the output of 0 and 1 and some minimal platform specific bootstrapping scaffolding such as some .gradles and .xcodeproj. The output is a Flutter-tool agnostic platform specific library package such as .aars + POMs on Android, .frameworks on iOS, .dll + assets etc. This is a library and is not runnable. There is 1 platform library per platform per dart package.
3. A runnable platform application that runs the Flutter platform library. The inputs are the output of 2 plus all things relevant for the application and platform. With Android as an example, this is the .gradle package names, flavors, AndroidManifests, .java files, resources etc. The output is a runnable APK, exe, IPA etc etc. There can be many apps that depends on 1 platform library + dart package combo per platform. This is a leaf node and no reverse dependency exists. This could be your standard `flutter create` app which is then just a simple bootstrap for your platform library in 2. This could be your own existing app for add-to-app cases. This could be a test bootstrap for the Flutter library you're developing but don't want to test by embedding it inside your full add-to-app app because the Flutter screen is 10 screens deep in your outer app.
Historically, 2 didn't exist, and 1 and 3 were highly coupled together since we only had full-Flutter-app cases. But that's unmodular and makes add-to-app cases hard to deal with: they are theoretically a strict subset of full-app cases, but in practice they become parallel implementations because the full-app cases are too tightly coupled.
In this proposal, 1 doesn't know about 2 and 2 doesn't know about 3.
## What does `flutter run` do then
`flutter run` is currently run inside (1). If (1) doesn't know about (3), how does it launch the platform specific bootstrap?
This convenience mechanism should still be a modular mechanism built on top of the proposal above and itself using more modular mechanisms.
The Dart package's pubspec should still list only its own dependencies such as assets and plugins. Convenience features like flutter run can be mapped via a separate mapping mechanism such as a app-map.yaml file:
```yaml
android:
simple_test:
path: ../my_flutter_bootstrap
command: ./gradlew assembleDebug # optional, has reasonable default
test_against_staging:
path: ../my_flutter_bootstrap
command: ./gradlew assembleFirebaseStaging
real_app:
path: /Volumes/corp_src/app1
command: flutter build aar; ln -s ./build/host/outputs/repo
/Volumes/corp_src/app1/dependencies; ./gradlew assembleDebug
ios:
...
web:
...
```
Then we can `flutter run simple_test`. No need to pass `-d someAndroidDevice` since it knows it's Android. We also won't need to hardcode the Flutter<->platform app's relative directory structure since anything can live anywhere relative to each other.
We also remove the need for anything to be called Runner.app or for .app. to be anywhere in the package namespace on Android since the app is the leaf node and nothing depends on it.
Behind the scene, flutter run is broken down into running the `command` in app-map.yaml (which in-turn depends on the platform library (2) which in turn builds the Dart code in (1)) and running `flutter attach` (which is what it conceptually does anyway).
## What does `flutter create` do then
For convenience, it would do the exact same thing as it does today from the surface. But behind the scene, it's shimmed out by 2 things (which we can rename to make them more first class etc). A `flutter create -t module my_flutter_module` and a `flutter create-wrapper-project my_test_launcher --from-module ../my_flutter_module`. But you can independently do just one of the 2. e.g. you would call flutter create -t module to make a new Flutter module. You would call `flutter create-wrapper-project` if you're trying to create another project around an existing module etc.
## How do plugins work
This is another case where we amalgamated 2 things (1- the resolution of the pub dependencies and the retrieval of all platforms' native components of the plugin and 2- the creation of the platform's generated registration and the retrieval of the plugin's platform-side transitive native dependencies, such as Firebase, and the packaging of everything).
I propose that we split the plugin processing into 2. `flutter packages get` only operates in Flutter modules and only does Flutter things. At the end of this, you can import the plugin's Dart code, but nothing for the platform side has been done.
Only upon trying to build the platform library's artifacts (such as by `flutter build aar` in the library) do the native code generation and transitive dependency resolution happen. | tool,a: existing-apps,a: build,P3,team-tool,triaged-tool | low | Critical |
478,799,869 | vscode | [themes] Show extension info in Preferences: Color Theme panel |
In the current `Preferences: Color Theme` panel, there is no extension info that lets users know which extension provided each theme. I'd like to purge some unwanted theme extensions, but I don't know which ones I should uninstall.

It would be great if we could add the extension info to the right of the theme name :) Something like the picture below.

| feature-request,themes | low | Minor |
478,830,256 | pytorch | [Caffe2] build android in v1.1.0 with headfile error | pytorch version v1.1.0
I have built the android caffe2 lib with android ndk r20 and x86 ABI like this
`./scripts/build_android.sh -DANDROID_ABI=x86 -DANDROID_TOOLCHAIN=clang `
Then I copied all header files and libs from the build_android folder to my Android project.
And I got this error when building Android JNI native C++ code that includes the header <caffe2/core/operator.h>:
----------------------------------------------------------------------------------
In file included from ../../../../src/main/cpp\caffe2/core/operator.h:1375:
../../../../src/main/cpp\caffe2/core/c10_operator.h:5:10: fatal error: 'torch/csrc/jit/script/function_schema_parser.h' file not found
#include <torch/csrc/jit/script/function_schema_parser.h>
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1 error generated.
ninja: build stopped: subcommand failed.
----------------------------------------------------------------------------------
I guess this file references a torch function that shouldn't be in the caffe2 install files, or this function could be re-implemented.
Can anyone give me some advice? Thank you very much.
| caffe2,triaged | low | Critical |
478,858,857 | flutter | Expose convolution matrix filter. | I'd like to apply convolution filters to a canvas without having to record the content and work on the raw pixel data.
I believe there is no way to do that. Is there a reason why?
## Proposal
Expose [SkImageFilters::MatrixConvolution](https://github.com/google/skia/blob/master/include/effects/SkImageFilters.h#L198) | c: new feature,engine,P3,team-engine,triaged-engine | low | Major |
478,877,939 | terminal | Define size for background image |
# Description of the new feature/enhancement
As we are now able to define the background image position using `backgroundImageAlignment`, it is really easy to put an image in the lower right of the terminal.
It would be very much appreciated to also be able to define the size of this image. That would help when placing a large image in a corner of the terminal.
# Proposed technical implementation details (optional)
Add properties to the profiles.json file, e.g. `backgroundImageWidth` and `backgroundImageHeight`.
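A sketch of what a profile entry might look like. `backgroundImageWidth` and `backgroundImageHeight` are the proposed, not yet existing, properties; the other keys are assumed from existing profile settings, and the values/units (pixels here) are only an assumption.
```json
{
    "profiles": [
        {
            "name": "PowerShell",
            "backgroundImage": "C:/Users/me/Pictures/logo.png",
            "backgroundImageAlignment": "bottomRight",
            "backgroundImageStretchMode": "none",
            "backgroundImageWidth": 120,
            "backgroundImageHeight": 120
        }
    ]
}
```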
| Help Wanted,Area-Settings,Product-Terminal,Issue-Task | low | Critical |
478,881,986 | terminal | 400 Client Error in Azure Cloud Shell Connector | # Environment
Windows build number: Microsoft Windows [version 10.0.18362.10005]
Windows Terminal version (if applicable): 0.3.2171.0
# Steps to reproduce
1- Open a new CloudShell tab and log in
2- Execute some azure cli commands for about 10 min
# Expected behavior
Azure CLI commands continue to execute without any problem
# Actual behavior
After some time, the Azure CLI commands will fail with the following error:
```Error occurred in request., HTTPError: 400 Client Error: Bad Request for url: http://localhost:50342/oauth2/token```
Executing the same command several times (```az keyvault secret show --vault-name $kvName -n $secretName --query id -o tsv``` for instance) doesn't change anything. However, when the Azure Cloud Shell is opened in the browser (shell.azure.com), the same command works, and executing Azure CLI commands then starts working again in the Azure Cloud Shell tab of Windows Terminal.
| Help Wanted,Issue-Bug,Product-Terminal,Area-AzureShell | low | Critical |
478,924,508 | flutter | Allow configuring a default `--local-engine` |
## Use case
I'm using a locally built engine, and I need to pass `--local-engine` everywhere when using a flutter command, such as `flutter doctor`.
Otherwise I get an error: `You must specify --local-engine if you are using a locally built engine.`
The IDEs (VS Code and Android Studio / IntelliJ) currently don't support passing this to all flutter calls,
so it's hard to use them.
Maybe adding a fallback check for an environment variable like LOCAL_ENGINE is the easiest way to make this work for all tools.
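For illustration, a sketch of how such a fallback could be used; `LOCAL_ENGINE` is the proposed (not existing) variable and the engine build name is just an example:
```sh
export LOCAL_ENGINE=android_debug_unopt
flutter doctor   # would behave like: flutter doctor --local-engine=android_debug_unopt
flutter run      # the same fallback would apply to every flutter invocation
```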
## Proposal
I patched flutter_tools locally like this:
```
diff --git "a/packages/flutter_tools/lib/src/runner/flutter_command_runner.dart" "b/packages/flutter_tools/lib/src/runner/flutter_command_runner.dart"
index d332cc1b6..302ab0169 100644
--- "a/packages/flutter_tools/lib/src/runner/flutter_command_runner.dart"
+++ "b/packages/flutter_tools/lib/src/runner/flutter_command_runner.dart"
@@ -459,7 +459,11 @@ class FlutterCommandRunner extends CommandRunner<void> {
if (globalResults['local-engine'] != null) {
localEngine = globalResults['local-engine'];
} else {
- throwToolExit(userMessages.runnerLocalEngineRequired, exitCode: 2);
+ localEngine = platform.environment['LOCAL_ENGINE'];
+
+ if (localEngine == null) {
+ throwToolExit(userMessages.runnerLocalEngineRequired, exitCode: 2);
+ }
}
final String engineBuildPath = fs.path.normalize(fs.path.join(enginePath, 'out', localEngine));
``` | c: new feature,tool,P2,team-tool,triaged-tool | low | Critical |
478,939,702 | vue | Warn when v-for with a Range is not a valid integer | ### What problem does this feature solve?
When using the v-for directive with a Range and the number passed is not a valid integer (valid integer = an integer between 0 and 2^32-1), Vue.JS still tries to create an Array and then throws the error `[Vue warn]: Error in render: "RangeError: Invalid array length"`.
This happened to me while passing a computed property to the directive: `v-for="n in lists"` where `lists` is the computed property.
When the developer has multiple v-for directives, it is unclear where the error occurs, making the debugging process tough.
If the mistake is caught by Vue before creating the Array, rendering the component shouldn't have to stop. Instead we can throw a warning and render an empty v-for directive. This would make debugging easier, since other v-for directives in the same component would still render.
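A minimal sketch of how this typically happens (the component and property names are made up for illustration):
```javascript
import Vue from "vue";

new Vue({
  el: "#app",
  template: '<div><p v-for="n in lists">{{ n }}</p></div>',
  data: { total: undefined, perPage: 10 },
  computed: {
    // NaN until `total` is loaded, so render-list calls new Array(NaN)
    // and throws "RangeError: Invalid array length".
    lists() {
      return this.total / this.perPage;
    },
  },
});
```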
### What does the proposed API look like?
Add a check in /src/core/instance/render-helpers/render-list.js at line 22, checking that a valid number is:
* greater than or equal to 0
* less than or equal to 2^32 - 1
* an integer (its remainder modulo 1 is 0)
```javascript
else if (typeof val === 'number') {
if (val >= 0 && val <= 4294967295 && val % 1 === 0) {
ret = new Array(val)
for (i = 0; i < val; i++) {
ret[i] = render(i + 1, i)
}
} else {
warn (
`Number passed to v-for directive not valid (expected valid integer), got ${val}`,
this
)
ret = []
}
}
```
| discussion | low | Critical |
478,969,374 | go | crypto/ecdsa: make PublicKey implement encoding.TextMarshaler/TextUnmarshaler using PEM |
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.12.7 darwin/amd64
</pre>
and whatever play.golang.org uses.
### Does this issue reproduce with the latest release?
It is the latest release as of now. It can also be reproduced here: https://play.golang.org/p/cbsOhB8lHxe
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/path/to/my/gocode"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/Cellar/go/1.12.7/libexec"
GOTMPDIR=""
GOTOOLDIR="/usr/local/Cellar/go/1.12.7/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD="/dev/null"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/td/yb98xptn12d60pfrjw87kkm80000gn/T/go-build432091977=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>
### What did you do?
https://play.golang.org/p/cbsOhB8lHxe
```go
package main
import (
"crypto/ecdsa"
"crypto/elliptic"
"crypto/rand"
"encoding/json"
"log"
)
func main() {
privKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
jsonKey, err := json.Marshal(privKey)
if err != nil {
log.Fatalf("error marshalling private key: %v", err)
}
var retrieveKey ecdsa.PrivateKey
if err := json.Unmarshal(jsonKey, &retrieveKey); err != nil {
log.Fatalf("error parsing json: %s", err)
}
}
```
### What did you expect to see?
A key struct created from json that has just been created from the very same data structure.
The same process does succeed for RSA: https://play.golang.org/p/FFYREV4NMfv , although not on the playground, where I get "Program exited: process took too long." I did test it locally and I'm actually using it as a workaround.
### What did you see instead?
error parsing json: json: cannot unmarshal object into Go struct field PrivateKey.Curve of type elliptic.Curve
I have also written the jsonKey to file and verified that it is valid json. | Unfortunate,Proposal-Accepted,NeedsFix,Proposal-Crypto | medium | Critical |
478,986,944 | pytorch | Multi-gpu example freeze and is not killable | ## 🐛 Bug
Running PyTorch with multiple P40 GPUs freezes, and the process is not killable (even with kill -9 as root). Only a reboot removes the process.
Inside a docker container (with nvidia-docker2) it freezes docker. https://github.com/NVIDIA/nvidia-docker/issues/1010
## To Reproduce
Steps to reproduce the behavior:
1. Install pytorch 1.0.2
2. Run the following code on multiple P40 Gpus
```
import os
###tutorial from https://pytorch.org/tutorials/beginner/blitz/data_parallel_tutorial.html
###no error with only 1 gpu
# os.environ['CUDA_VISIBLE_DEVICES'] = '0'
#### to reproduce error allow multi gpu
os.environ['CUDA_VISIBLE_DEVICES'] = '0,1,2,3'
import torch
torch.cuda.device_count()
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader
# Parameters and DataLoaders
input_size = 5000 #increased input size (works with 500 on multi gpu)
output_size = 2000 #increased output size (works with 200 on multi gpu)
batch_size = 300
data_size = 100
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
class RandomDataset(Dataset):
def __init__(self, size, length):
self.len = length
self.data = torch.randn(length, size)
def __getitem__(self, index):
return self.data[index]
def __len__(self):
return self.len
rand_loader = DataLoader(dataset=RandomDataset(input_size, data_size),
batch_size=batch_size, shuffle=True)
class Model(nn.Module):
# Our model
def __init__(self, input_size, output_size):
super(Model, self).__init__()
self.fc = nn.Linear(input_size, output_size)
def forward(self, input):
output = self.fc(input)
return output
model = Model(input_size, output_size)
if torch.cuda.device_count() > 1:
print("Let's use", torch.cuda.device_count(), "GPUs!")
# dim = 0 [30, xxx] -> [10, ...], [10, ...], [10, ...] on 3 GPUs
model = nn.DataParallel(model)
model.to(device)
for i in range(10000):
for data in rand_loader:
input = data.to(device)
output = model(input)
```
## Expected behavior
The training runs without freezing.
## Environment
Collecting environment information...
PyTorch version: 1.0.1.post2
Is debug build: No
CUDA used to build PyTorch: 10.0.130
OS: Ubuntu 18.04.2 LTS
GCC version: (crosstool-NG fa8859cb) 7.2.0
CMake version: Could not collect
Python version: 3.5
Is CUDA available: Yes
CUDA runtime version: 10.0.130
GPU models and configuration:
GPU 0: Tesla P40
GPU 1: Tesla P40
GPU 2: Tesla P40
GPU 3: Tesla P40
GPU 4: Tesla P40
GPU 5: Tesla P40
GPU 6: Tesla P40
GPU 7: Tesla P40
Nvidia driver version: 410.79
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.4.2
Versions of relevant libraries:
[pip3] numpy==1.15.2
[conda] mkl 2018.0.3 1 defaults
[conda] mkl_fft 1.0.6 py35_0 conda-forge
[conda] mkl_random 1.0.1 py35_0 conda-forge
[conda] nomkl 2.0 0 defaults
[conda] numexpr 2.6.5 py35_nomklhaa809a4_0 [nomkl] defaults
[conda] pytorch 1.0.1 py3.5_cuda10.0.130_cudnn7.4.2_2 pytorch
[conda] torch 0.4.1 <pip>
[conda] torchvision 0.2.2 py_3 pytorch
cc @ezyang @gchanan @zou3519 @ngimel | module: dependency bug,module: multi-gpu,module: multiprocessing,module: cuda,triaged,module: deadlock,has workaround,module: data parallel,quansight-nack | high | Critical |
479,050,906 | go | cmd/compile: mention shadowing of predeclared identifiers in errors they cause | (Related: #31064, #14494)
# What version of Go are you using (`go version`)?
1.12, but N/A
### Does this issue reproduce with the latest release?
I believe so?
### What operating system and processor architecture are you using (`go env`)?
N/A
### What did you do?
Horrible monstrous things. Examples:
```
const iota = 0
type int struct{}
var nil = "haha"
```
Okay, fine, those are stupid. But you know what I've done *unintentionally*?
```
...
len := a -b
...
for i := 0; i < len(x); i++ {
...
```
and then get confused by the weird error message about len not being a function.
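A self-contained version of that mistake (the exact compiler message may differ between Go versions):
```
package main

func main() {
	xs := []int{1, 2, 3}
	a, b := 5, 3
	len := a - b // shadows the predeclared len
	for i := 0; i < len(xs); i++ { // error: cannot call non-function len (type int)
		_ = i
	}
}
```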
### What did you expect to see?
I think redeclaring predeclared identifiers should be an error. Or, since that's probably impractical, it should be a thing in `go vet`.
### What did you see instead?
Really weird and misleading error messages when I forget that something is a predeclared identifier. I've hit this most often with `len`, because that's a natural variable name, and when I'm naming something I'm usually not thinking about the function. | help wanted,NeedsFix | low | Critical |
479,058,949 | go | cmd/go: do not lookup "parent" modules automatically to resolve imports | One of the most annoying missteps the go command makes is when you mistype an import path in your own module and it goes looking for parent modules that might provide the import. In general that's fine, but I think automatic download and search of parents of the main module should be disabled - it's almost always a typo, and if not the user can run go get explicitly.
For example if I am in module rsc.io/tmp/x and import rsc.io/tmp/x/foo instead of rsc.io/tmp/x/boo, it should not try to download rsc.io/tmp and rsc.io to find rsc.io/tmp/x/foo.
/cc @bcmills @jayconrod | NeedsInvestigation,early-in-cycle,modules | low | Minor |
479,075,014 | pytorch | torch.nn.functional.grid_sample with 'circular' border conditions | In ```torch.nn.functional.grid_sample```, it would be nice to have the circular border conditions implemented as well.
| module: nn,triaged | low | Major |
479,082,779 | pytorch | `suggest_memory_format` has ambiguity & cannot represent intended layout format for corner cases | ## 🐛 Bug
`suggest_memory_format` has ambiguous meaning for two cases:
1. tensor with NCHW where C = 1.
we could use stride of C as a hint to tell the intended memory format.
2. tensor with NCHW where H == W == 1.
there's no way to identify the intended memory format from strides.
The problem with this is that there's no proper way to correctly represent the desired memory format for a tensor with shape NC11 (yes, we can always hack it by utilizing/special-casing the stride of the size-1 dimensions, hence the emphasis on *proper*).
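A small sketch of case 2; it assumes a PyTorch build where `torch.channels_last` is available, and the point is that both layout checks pass for the same strides:
```
import torch

x = torch.randn(2, 3, 1, 1)   # NCHW with H == W == 1
print(x.stride())             # (3, 1, 1, 1)

# The same strides satisfy both layouts, so the intended format cannot be recovered:
print(x.is_contiguous(memory_format=torch.contiguous_format))  # True
print(x.is_contiguous(memory_format=torch.channels_last))      # True
```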
This would impact our cuDNN setup. For 1x1 convolutions, we have filters of shape KC11. We would want to specify the memory format properly in this case, as it would impact the algo we get. | module: internals,triaged | low | Critical |
479,106,459 | TypeScript | Non-Nullable Objects in params don't have properties parsed | <!-- 🚨 STOP 🚨 𝗦𝗧𝗢𝗣 🚨 𝑺𝑻𝑶𝑷 🚨
Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Even if you think you've found a *bug*, please read the FAQ first, especially the Common "Bugs" That Aren't Bugs section!
Please help us by doing the following steps before logging an issue:
* Search: https://github.com/Microsoft/TypeScript/search?type=Issues
* Read the FAQ: https://github.com/Microsoft/TypeScript/wiki/FAQ
Please fill in the *entire* template below.
-->
<!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. -->
**TypeScript Version:** 3.5.2
**Search Terms:** object param
**Code**
```ts
/**
* Converts a type to a markdown string.
* @param {!Object} [opts]
* @param {boolean} [opts.narrow] Combine type and description.
*/
```
**Expected behavior:**

**Actual behavior:**

| Domain: JSDoc,Needs Investigation | low | Critical |
479,109,981 | pytorch | [JIT] script doesn't convert dtypes back to torch.dtype from long | ```
import torch
def foo(x):
return x.dtype
scripted = torch.jit.script(foo)
x = torch.rand(3, 4)
print('non-scripted', foo(x))
print('scripted', scripted(x))
```
Output
```
non-scripted torch.float32
scripted 6
```
We probably need to mirror this logic: https://github.com/pytorch/pytorch/blob/489cc46686e7e689e7130e78b73fe77a9edac5ed/torch/csrc/jit/pybind_utils.h#L338 into toPyObject https://github.com/pytorch/pytorch/blob/489cc46686e7e689e7130e78b73fe77a9edac5ed/torch/csrc/jit/pybind_utils.h#L536 | oncall: jit,triaged | low | Minor |
479,129,339 | go | cmd/vet: go vet reporting error in vendor with cgo disabled |
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.12.7 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
I have tried it with the golang:1.13-rc docker image, and it does reproduce. Same with the golang:1.12.5-stretch docker image.
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/paulbrousseau/Library/Caches/go-build"
GOEXE=""
GOFLAGS="-mod=vendor"
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/paulbrousseau/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/Cellar/go/1.12.7/libexec"
GOTMPDIR=""
GOTOOLDIR="/usr/local/Cellar/go/1.12.7/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="0"
GOMOD="/Users/paulbrousseau/work/devex/go/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/ys/3_9lwjwj3hj9nbry2j249x5h0000gq/T/go-build483197916=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>
In addition, I have `GO111MODULE=on`
### What did you do?
I have a monorepo with many packages. Most are libraries, but a few are applications. My intention is to vet the code, test the code, and build the code via a shell script, and as quickly as possible. In this respect, I am running `go vet ./...` rather than checking each individual package.
I have cgo disabled so that for our Linux builds, we can create static binaries for easy dockerization. The builds are happy.
There _is_ one application which directly imports the Neo4J go packages, and these require cgo. When I build that one application, I enable cgo, and it's also happy.
Circling back to `go vet`, if I run with cgo enabled, it's happy. If I run with cgo disabled, I get errors (see below). I understand why those errors occur; the structs are defined in files that import "C", and with cgo disabled, they are unavailable. Cool. But it is my understanding that as of some versions ago, go tools (`lint`, etc.) were supposed to ignore the `vendor` directory.
### What did you expect to see?
No problems from `go vet`, as the troublesome package is in vendor.
### What did you see instead?
```
# github.com/neo4j-drivers/gobolt
vendor/github.com/neo4j-drivers/gobolt/connector_worker.go:30:14: undefined: Config
vendor/github.com/neo4j-drivers/gobolt/connector_worker.go:31:15: undefined: seaboltConnector
```
### Etc.
I have a hypothesis. `go vet` isn't the tool which is unhappy with the Neo4J libraries. But maybe `go vet` has to compile my code, so that it can do its analysis? And if cgo is disabled, it can't do so.
If that's the case, is there some way that I can work around this and still use `./...` to `go vet`? If I build first and vet later, will vet discover the built packages (and if so, how does that work if I build to a non-standard location)?
| NeedsInvestigation,modules,Analysis | low | Critical |
479,129,821 | react | textarea does not show warning when switching from uncontrolled to controlled like inputs do |
**Do you want to request a *feature* or report a *bug*?**
Bug
**What is the current behavior?**
While things like `<input>` correctly get a warning when switching from uncontrolled to controlled, I'm noticing that `<textarea>` does not.
**If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem. Your bug will get fixed much faster if we can run your code and it doesn't have dependencies other than React. Paste the link to your JSFiddle (https://jsfiddle.net/Luktwrdm/) or CodeSandbox (https://codesandbox.io/s/new) example below:**
Here's a codesandbox. Type in the input field and we see the error (correct); change to the textarea and start over, type in the field, and we don't see the error (incorrect, I think): https://codesandbox.io/s/recursing-dawn-jls8i
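For reference, a minimal standalone sketch of the switch I'm describing (TSX; illustrative names, my reading of the sandbox rather than its exact code):
```tsx
import * as React from "react";

// Both fields start uncontrolled (value is undefined) and become controlled
// once typing sets the state to a string.
function Demo() {
  const [value, setValue] = React.useState<string | undefined>(undefined);
  return (
    <>
      {/* <input> warns once `value` goes from undefined to a string */}
      <input value={value} onChange={e => setValue(e.target.value)} />
      {/* <textarea> stays silent under the same uncontrolled-to-controlled switch */}
      <textarea value={value} onChange={e => setValue(e.target.value)} />
    </>
  );
}
```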
**Which versions of React, and which browser / OS are affected by this issue? Did this work in previous versions of React?**
16.8
| Type: Bug,Component: DOM | low | Critical |
479,138,307 | TypeScript | VS Code indicates available code action but doesn't actually offer a quick fix | In VS Code, I noticed that it sometimes indicates available code actions (the three grey dots) to convert a `Promise.then()` chain to `async/await`, but then, when actually trying to execute the Quick Fix, it shows "No code actions available".


From my investigation, this seems to happen because one of the Promises inside the function has an `any` type.
I find the current behavior really confusing: the three grey dots suggest that a code action is available, but then no Quick Fix is actually offered.
**TypeScript Version:** 3.6.0-dev.20190808
**Search Terms:** code action async await promise
**Code**
```ts
let somePromise;
function myFunction() { // VS Code shows three grey dots and "This may be converted to an async function.ts(80006)" on hover
return Promise.resolve()
.then(() => {
return somePromise
.then(() => {
return 'Yeah!';
});
});
}
```
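For contrast, the same function with the inner promise explicitly typed (my own variation, not from the original report); if the `any`-typed promise is what blocks the fix, this variant presumably avoids the mismatch:
```ts
declare let somePromise: Promise<string>;

function myFunction() {
    return Promise.resolve()
        .then(() => {
            return somePromise
                .then(() => {
                    return 'Yeah!';
                });
        });
}
```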
**Expected behavior:**
Either:
- VS Code doesn't indicate an available code action in the first place
- It provides some kind of hint why it can't execute a quick fix or how to enable it
- The code action still works and does the conversion anyway
**Actual behavior:**
VS Code shows a possible available code action but then there is none
| Bug,Domain: Quick Fixes | low | Minor |
479,177,067 | pytorch | [RPC] Add type annotations for RPC-related Python files | @ezyang suggested that, as `torch.distributed.rpc` will be PY3 only, we should enforce type annotations immediately.
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera | triaged,better-engineering,module: rpc | low | Minor |
479,184,085 | go | cmd/go: clean -x removes *.so but doesn't list such files in output | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.12.7 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/zephyrtronium/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/zephyrtronium/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/go"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build755048247=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
```
~/mygo/plugintest$ printf 'package main\n\nvar String = "Hello, world!"\n' >plugin.go
~/mygo/plugintest$ go build -buildmode=plugin
~/mygo/plugintest$ go clean -x
```
### What did you expect to see?
Either the `rm -f` command to include `plugintest.so`, or `plugintest.so` to still be present in the directory.
### What did you see instead?
```
cd /home/zephyrtronium/go/src/github.com/zephyrtronium/plugintest
rm -f plugintest plugintest.exe plugintest.test plugintest.test.exe plugin plugin.exe
```
`plugintest.so` was removed by `go clean`, but not listed in its output. It seems that any files removed via https://github.com/golang/go/blob/master/src/cmd/go/internal/clean/clean.go#L323 aren't printed, which includes `*.so` (which the documentation says come from SWIG). | help wanted,NeedsInvestigation | low | Critical |
479,192,455 | TypeScript | Contextually infer parameters for type aliases/interfaces | ## Search Terms
type alias parameter inference contextual
## Suggestion
The type parameters of a type alias (or interface) should be able to be inferred when a value is being assigned/cast to that type/interface.
## Use Cases/Examples
My program defines a few common types/interfaces to be used as contracts between components. E.g.
```ts
// type for an object holding a function + its inverse
type FunctionPair<T, U> = { apply(it: T): U, reverse(it: U): T };
```
Then, throughout the program, I need to make objects of this type. If I have a factory function (or use a class with its constructor), this isn't too bad:
```ts
function makeFunctionPair<T, U>(apply: (it: T) => U, reverse: (it: U) => T) {
return { apply, reverse } as FunctionPair<T, U>;
}
```
However, I'd like to be able to just write these (simple) domain objects with literals, rather than using a factory function, and then signal the type (with the implied relation between the `apply` and `reverse` properties) to the compiler with a type annotation/assertion inline:
```ts
const a: FunctionPair<string, () => string> = {
apply(it: string) { return () => it + "!!!"; },
reverse(it) { return it().slice(0, -3); }
}
```
However, the above is a bit verbose, in that I have to add `<string, () => string>` to the type annotation, whereas it seems like this should be inferrable. I'm proposing to be able to just do:
```ts
/* FunctionPair type param values inferred contextually from the assigned object */
const a: FunctionPair = {
apply(it: string) { return () => it + "!!!"; },
reverse(it) { return it().slice(0, -3); }
}
```
Here's another example: imagine a runtime that uses the idea of effects as data. The user sends into the runtime an object describing the effect to perform, and a callback to call with any result:
```ts
type EffectNamesWithReturnTypes = {
x: { status: 0 | 1 },
y: { changes: string[] },
};
type EffectDescriptor<T extends keyof EffectNamesWithReturnTypes> = {
name: T;
cb: (result: EffectNamesWithReturnTypes[T]) => void
}
```
It would be nice to be able to write:
```ts
const effect = <EffectDescriptor>{ name: "x", cb: (it) => it.status };
```
Rather than having to write:
```ts
const effect = <EffectDescriptor<"x">>{ name: "x", cb: (it) => it.status };
```
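For completeness, the closest pattern that works today still goes through a function call; here is a sketch of such a helper (my own workaround, not part of the proposal):
```ts
// Generic helper: T is inferred from the literal's `name` property,
// and `cb` is then contextually typed from EffectDescriptor<T>.
function makeEffect<T extends keyof EffectNamesWithReturnTypes>(
    descriptor: EffectDescriptor<T>
): EffectDescriptor<T> {
    return descriptor;
}

// T is inferred as "x"; `it` is typed as { status: 0 | 1 }.
const inferredEffect = makeEffect({ name: "x", cb: (it) => it.status });
```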
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,In Discussion | medium | Major |
479,201,296 | flutter | Enable hot restart on profile mode | Currently, performance debugging in profile mode is really time consuming because every small change requires a cold restart, which could take several minutes.
While hot reload is impossible in an AOT profile build, hot restart should be possible according to @cbracken.
Let's enable it in flutter tools to significantly speedup our customers' performance debugging process. | c: new feature,tool,c: performance,customer: dream (g3),P3,team-tool,triaged-tool | medium | Critical |
479,210,410 | pytorch | Allow incompatible shapes in load_state_dict(strict=False) | ## 🚀 Feature
Right now, `module.load_state_dict(strict=False)` allows the following:
* loading a dict with missing parameters
* loading a dict with more parameters than needed
And it returns an object containing the information about what are missing or what parameters are unexpected.
But it will throw an error if there are parameters with the same name but different shape. It would be good to also allow this behavior, and return information about the unmatched parameters.
## Motivation
This will help with work in transfer learning, for example.
## Pitch
User can write
```
ret = model.load_state_dict(torch.load("pretrained.pkl"), strict=False)
for x in ret.incompatible_keys:
    logger.warning(f"{x} is not loaded because it has shape xx in the checkpoint but shape yy in the model")
```
## Alternatives
Users now have to manually modify the state_dict for such use cases.
UPDATE: it's error-prone for users to do it manually, because some modules (e.g. quantization observers) by design expect to support loading checkpoints with incompatible shapes. It's hard for users to distinguish them from the unsupported incompatible shapes.
Some related discussions at https://github.com/pytorch/pytorch/issues/8282#issuecomment-520101919 | module: serialization,triaged,enhancement | low | Critical |
479,234,639 | godot | RemoteTransform2D scale only with global coordinates enabled is wrong | <!-- Please search existing issues for potential duplicates before filing yours:
https://github.com/godotengine/godot/issues?q=is%3Aissue
-->
**Godot version:**
<!-- Specify commit hash if non-official. -->
3.2dev, 3.1.1, 3.1, 3.0.6 and maybe others
**OS/device including version:**
<!-- Specify GPU model and drivers if graphics-related. -->
Windows 10
**Issue description:**
<!-- What happened, and what was expected. -->
It looks like when you get the scale from a `Transform2D`, the coordinates it uses aren't the ones one would expect, as per issue https://github.com/godotengine/godot/issues/21020
This makes `RemoteTransform2D` behave unexpectedly when `use global coordinates` is enabled (the default) and only the scale is selected.

**Steps to reproduce:**
Check the MRP
**Minimal reproduction project:**
<!-- Recommended as it greatly speeds up debugging. Drag and drop a zip archive to upload it. -->
[remotetransform2d_scale_issue.zip](https://github.com/godotengine/godot/files/3488578/remotetransform2d_scale_issue.zip)
| enhancement,topic:core,documentation | low | Critical |
479,242,079 | TypeScript | Flag 'instanceof' expressions that are provably always true or false | ## Suggestion
When refactoring code, TypeScript is generally very good at finding and reporting situations where existing code will break due to type changes. This allows "refactoring with confidence". This isn't surprising given that one of the primary goals of TypeScript is to "statically identify constructs that are likely to be errors".
There's one common case where TypeScript is "blind" and doesn't report errors when types are refactored. This case involves the use of the 'instanceof' operator. This operator can be used in situations where it makes no sense -- e.g. where the type specified on the LHS has no possible relation to the type specified on the RHS.
To help developers avoid this common programming error, I propose that TypeScript report an error for any 'instanceof' expression that is provably always true or false at compilation time. Any such expression will likely be unintended by the programmer and should be flagged as errors. At best, such operations represent unnecessary code that imposes runtime overhead.
## Example
```typescript
// ClassB is a subclass of ClassA.
class ClassA {}
class ClassB extends ClassA {}
// ClassC has nothing to do with either
// ClassA or ClassB.
class ClassC {
method1() {}
}
function function1(param1: ClassB) {
// Static analysis can prove this expression
// will always evaluate to false. It's a common
// source of programming errors, especially during
// refactoring.
if (param1 instanceof ClassC) {
// This call isn't even valid given that param1
// was declared as type ClassB.
param1.method1();
}
// Static analysis can prove this expression will
// always evaluate to true, so at best it's unnecessary
// runtime overhead and more commonly is a behavior
// that is unintended by the programmer.
if (param1 instanceof ClassA) {
// ...
}
}
```
Are there any legitimate uses of 'instanceof' where the result is statically provable to be true or false in all cases? I can't think of any, but it's possible I'm overlooking some specialized cases. If there are, this check could be added as an optional compiler switch to preserve the current behavior.
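For reference, a minimal sketch of the diagnostic TypeScript already emits for provably-false `===` comparisons, included here only as precedent for the proposed `instanceof` check (my own example, not from the original report):
```ts
declare const n: 1;
declare const s: "a";

// Error today:
// This condition will always return 'false' since the types '1' and '"a"' have no overlap.
if (n === s) {
    // unreachable
}
```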
| Suggestion,Awaiting More Feedback | low | Critical |
479,248,639 | pytorch | sccache crashes when building `Distribution.cu` on Windows | There are many occurrences of this build error in Azure Pipelines.
https://dev.azure.com/pytorch/PyTorch/_build/results?buildId=3891
https://dev.azure.com/pytorch/PyTorch/_build/results?buildId=3901
https://dev.azure.com/pytorch/PyTorch/_build/results?buildId=3695
```
[737/1919] Building NVCC (Device) object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cuda/torch_generated_Distributions.cu.obj
FAILED: caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cuda/torch_generated_Distributions.cu.obj
cmd.exe /C "cd /D C:\w\1\s\windows\pytorch\build\build\caffe2\CMakeFiles\torch.dir\__\aten\src\ATen\native\cuda && C:\w\1\s\windows\conda\envs\py3\Library\bin\cmake.exe -E make_directory C:/w/1/s/windows/pytorch/build/build/caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cuda/. && C:\w\1\s\windows\conda\envs\py3\Library\bin\cmake.exe -D verbose:BOOL=OFF -D build_configuration:STRING=Release -D generated_file:STRING=C:/w/1/s/windows/pytorch/build/build/caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cuda/./torch_generated_Distributions.cu.obj -D generated_cubin_file:STRING=C:/w/1/s/windows/pytorch/build/build/caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cuda/./torch_generated_Distributions.cu.obj.cubin.txt -P C:/w/1/s/windows/pytorch/build/build/caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cuda/torch_generated_Distributions.cu.obj.Release.cmake"
Distributions.cu
cl : Command line warning D9025 : overriding '/EHs' with '/EHa'
cl : Command line warning D9025 : overriding '/EHa' with '/EHs'
Distributions.cu
cl : Command line warning D9025 : overriding '/EHs' with '/EHa'
cl : Command line warning D9025 : overriding '/EHa' with '/EHs'
Distributions.cu
cl : Command line warning D9025 : overriding '/EHs' with '/EHa'
cl : Command line warning D9025 : overriding '/EHa' with '/EHs'
Distributions.cu
cl : Command line warning D9025 : overriding '/EHs' with '/EHa'
cl : Command line warning D9025 : overriding '/EHa' with '/EHs'
Distributions.cu
cl : Command line warning D9025 : overriding '/EHs' with '/EHa'
cl : Command line warning D9025 : overriding '/EHa' with '/EHs'
Distributions.cu
cl : Command line warning D9025 : overriding '/EHs' with '/EHa'
cl : Command line warning D9025 : overriding '/EHa' with '/EHs'
Distributions.cu
cl : Command line warning D9025 : overriding '/EHs' with '/EHa'
cl : Command line warning D9025 : overriding '/EHa' with '/EHs'
Distributions.cu
error: failed to execute compile
caused by: error reading compile response from server
caused by: Failed to read response header
caused by: An existing connection was forcibly closed by the remote host. (os error 10054)
CMake Error at torch_generated_Distributions.cu.obj.Release.cmake:279 (message):
Error generating file
C:/w/1/s/windows/pytorch/build/build/caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cuda/./torch_generated_Distributions.cu.obj
```
Any ideas, @yf225? | module: build,triaged,module: build warnings | low | Critical |
479,257,200 | go | crypto/tls: error with client certificate and X448 and X25519 curves | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.12.7 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes (`1.12.7`)
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary>
```
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/<redacted>/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/<redacted>/go"
GOPROXY="direct"
GORACE=""
GOROOT="/usr/local/go"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD="/home/<redacted>/go/src/test/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build069598509=/tmp/go-build -gno-record-gcc-switches"
```
</details>
### What did you do?
<!--
If possible, provide a recipe for reproducing the error.
A complete runnable program is good.
A link on play.golang.org is best.
-->
As far as I've been able to reproduce it, it only happens with an ECC client certificate and an RSA server certificate.
https://gist.github.com/cromefire/590eb9743dbadeca89c213b0aa1a2d58 (play.golang.org doesn't work with tcp it seems)
The same thing using `curl` works:
```
curl -vk --cert ecccert.pem --key ecckey.pem https://go-issue.cromefire.myds.me
```
The backend server (`Apache/2.4.39 (Ubuntu)`, with `OpenSSL 1.1.1c`) is using no special config:
```apache
<VirtualHost *:443>
# Skipped Name, logging and DocumentRoot
Include includes/ssl.conf # TLS certs, rsa ones
SSLProtocol TLSv1.2
SSLVerifyClient optional_no_ca
</VirtualHost>
```
For debugging purposes, the program creates `/tmp/keylog.txt`, which can be imported into Wireshark.
### What did you expect to see?
The expected result is no errors.
### What did you see instead?
```
rsa-ecc: ok
ecc-ecc: ok
nocert-ecc: ok
rsa-rsa: ok
ecc-rsa: Get https://go-issue.cromefire.myds.me: remote error: tls: illegal parameter
nocert-rsa: ok
``` | NeedsInvestigation | low | Critical |
479,265,196 | youtube-dl | Site request : paramountnetwork.es/peliculas | ## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2019.08.02. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that site you are requesting is not dedicated to copyright infringement, see https://yt-dl.org/copyright-infringement. youtube-dl does not support such sites. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
- Search the bugtracker for similar site support requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Finally, put x into all relevant boxes (like this [x])
-->
- [x] I'm reporting a new site support request
- [x] I've verified that I'm running youtube-dl version **2019.08.02**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that none of provided URLs violate any copyrights
- [x] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
- Single video: https://www.paramountnetwork.es/peliculas/euzp1h/biggles-el-viajero-del-tiempo
- Playlist: https://www.paramountnetwork.es/peliculas
## Description
Just another website that offers videos.
| site-support-request | low | Critical |
479,293,759 | node | Step Into for Generator Function triggers Continue | * **Version**: v12.4.0
* **Platform**: Linux cefn-bionic-thinkpad 4.15.0-50-generic #54-Ubuntu SMP Mon May 6 18:46:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
* **Subsystem**: Debugger protocol
<!-- Please provide more details below this comment. -->
When debugging, trying to STEP-IN to a generator function call causes the debugger session to CONTINUE.
This is inconsistent when...
* STEP-IN on a regular function call triggers STEP-IN (steps to first line in the called function)
* STEP-IN on a NON-function-call triggers STEP-OVER (steps to next line in the current function)
* STEP-IN on a NON-function-call at the end of a function triggers STEP-OUT (steps to a line following the original function-call)
The above three seem to me to be correct behaviour, consistent with using STEP-IN as a shortcut for _go-to-the-next-line-of-code_.
This inconsistency is a particular problem because it is impossible to judge from the interactive debugger session whether the apparent function call you are stepping into is a generator or a regular function. Therefore you might try to STEP-IN, then find that the debugger has chosen to CONTINUE because it was a generator call, making that stack call in the debugger session unrecoverable unless you had another breakpoint to catch it. The only workaround is to have a 100% accurate model of which calls are generators and Never call STEP-IN on a generator call.
Taking the below example, assuming just two manually-inserted Debugger breakpoints.
* One at `run()`
* One at: `console.log("Finished script")`. This is jumped to if you hit CONTINUE at any time.
Pressing the STEP-IN control in Chrome Inspector or VSCode ...
* STEP-IN on `run()` triggers STEP-IN
* STEP-IN on `const nothing = makeNothing()` triggers STEP-IN
* STEP-IN on `functionRunning = makeNothing.name` triggers STEP-OVER
* STEP-IN on `return` triggers STEP-OUT
* STEP-IN on `const sequence = makeSequence(5)` then surprisingly triggers CONTINUE - jumping to the next manually-inserted breakpoint at `console.log("Finished script")` instead of the expected behaviour - stepping over to the line `console.log("Arbitrary logging")`
```javascript
let lastFunctionName = null
function makeNothing() {
lastFunctionName = makeNothing.name
return
}
function* makeSequence(limit) {
lastFunctionName = makeSequence.name
for (let i = 0; i < limit; i++) {
yield i
}
}
function run() {
const nothing = makeNothing()
const sequence = makeSequence(5)
console.log("Arbitrary logging")
}
console.log("Starting script")
run()
console.log("X")
console.log("X")
console.log("X")
console.log("Finished script")
``` | v8 engine,inspector | low | Critical |
479,313,433 | youtube-dl | add support to Youtuner.co | ## Checklist
- [x] I'm reporting a new site support request
- [x] I've verified that I'm running youtube-dl version **2019.08.02**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that none of provided URLs violate any copyrights
- [x] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
- Single video: http://youtuner.co/s/147665
- Playlist: http://youtuner.co/channel/animecrazies.com.br
- Playlist: http://youtuner.co/category/animes
- Playlist: http://youtuner.co/index/results?s=anime
- Playlist: http://youtuner.co/
- Playlist: http://youtuner.co/index/results/animes
## Description
Youtuner is a Brazilian site for discovering and playing podcasts. There is a lot of free content available without a subscription. Can you please look into it and add support if possible?
Thanks | site-support-request | low | Critical |
479,324,970 | TypeScript | Correlated type constraint breaks under return type inference | <!-- 🚨 STOP 🚨 𝗦𝗧𝗢𝗣 🚨 𝑺𝑻𝑶𝑷 🚨
Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Even if you think you've found a *bug*, please read the FAQ first, especially the Common "Bugs" That Aren't Bugs section!
Please help us by doing the following steps before logging an issue:
* Search: https://github.com/Microsoft/TypeScript/search?type=Issues
* Read the FAQ: https://github.com/Microsoft/TypeScript/wiki/FAQ
Please fill in the *entire* template below.
-->
<!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. -->
**TypeScript Version:** 3.5.1
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:**
return type, generic, constraint, assignable, correlated type
**Code**
```ts
type XStr = {x:string};
type XNum = {x:number};
type U = XStr|XNum;
type Args = { str : XStr, num : XNum };
declare function foo<
ReturnT extends U,
ValueT extends ReturnT["x"]
> (
f : (args : Args) => ReturnT,
value : ValueT
) : void;
/*
Error as expected.
Type 'string | number' does not satisfy the constraint 'string'.
Type 'number' is not assignable to type 'string'.
*/
foo<XStr, string|number>(
(args:Args) => args.str,
""
);
//Inferred type, foo<XStr, string | number>
foo(
args => args.str,
//Expected: Error
//Actual: OK
"" as string|number
);
//Inferred type, foo<XStr, string>
foo(
//Added explicit type annotation to function params
(args:Args) => args.str,
/*
Error as expected.
Type 'string | number' does not satisfy the constraint 'string'.
Type 'number' is not assignable to type 'string'.
*/
"" as string|number
);
/////
/*
Error as expected.
Type '1' does not satisfy the constraint 'string'.
*/
foo<XStr, 1>(
(args:Args) => args.str,
1
);
//Inferred type, foo<XStr, 1>
foo(
args => args.str,
//Expected: Error
//Actual: OK
1
);
//Inferred type, foo<XStr, string>
foo(
//Added explicit type annotation to function params
(args:Args) => args.str,
/*
Error as expected.
Type '1' does not satisfy the constraint 'string'.
*/
1
);
```
**Expected behavior:**
I'm just calling it a **correlated type** because it reminds me of correlated subqueries from SQL.
1. The constraint type of `ValueT` is dependent on the type of `ReturnT`.
2. When `f` does not have parameters, or all parameters are **explicitly** annotated,
`ValueT` is inferred correctly.
3. When `f` has parameters that are **not** explicitly annotated,
`ValueT` is inferred **incorrectly**.
4. Attempting to explicitly set invalid type paramters will error as expected.
+ `foo<XStr, string|number>` should not be allowed
+ `foo<XStr, 1>` should not be allowed
**Actual behavior:**
+ `foo<XStr, string|number>` is allowed under inference
+ `foo<XStr, 1>` is allowed under inference
**Playground Link:**
[Playground](http://www.typescriptlang.org/play/#code/LAKALgngDgpgBADQMpgE5wLxwN4A8BcAzmgJYB2A5gL4DcoksiAcgK4C2mOBZ7ARjKlr1o8AKqdkaAD4JWbOuBFwAgqgqFO2OMXT5EKVABo4PDntns4QkKAAmMAMYAbAIap4AMxZkHYEgHsyOA9-fwAeUDgouAAlGDAWVDIAFTgYXDAYMlsNUUNI6IA1FycWGFT0zOyNOISk5IBtACJcJoBdUAA+OAAKAqiPOD0et3UhlTVCAEpMbtrElPyQaLgANxKy8eLS8tAZvVX-ElsFUAB6ACp+uABRVFR-dBcNdNhfGFsAOlBr5KUAch05AocCkJj4An+cFs-hgGjI-jA2hcfkIHggcDAAAt4A5AjoXOQkYDSJR-t9ltE-ox-qZ+KgoSR4Yi4M9CCQKGQXLwnPAwP5MQCgWSKRczqAQuFJEZtKSKFI6QJOn1KVERpN8Kp1DMMN1RoRPjolismk09gozmcAJJkDwCdy2QWwYySsLS4zCkFgxWoToS0Iqlb62asyaGtDG6KWm64N6ZWx6O4PVDXS3KXwsEp6ADyAGlrqbWRpPQqISmQFMLdbbfaPk6YC7Qm6DB65X6QJLA1GzspbPZHa8nCQHCQkQx4C4yAiwCiAkF+cFvL453AoG4XGxCNd1epNZMdXqw0bU1dVStbvdHkW0rHHPGKdcVtT4CTUMDQeC2PSoTC4SYWYQs5ohi2K4viaCEmQxKeuSj5UgCPqMsySJshyXI8nyArjnAr7ArBZ5igWTTXiWPrmj8ICWlRFGXNcSZXs8N5xh8D5ns+OEAIw-rCyHIqi6KYjicB4mQBJEjhMGiuKHZNu6cAccq276nu2ohvq4ZGNcHHkZR1Z2vcdbjo2UotvJ7adtcwa6qG6gaZGUTRre7wJheyapj2GZZnAeZaTplo2vpDr1sZzYRrKb6UOZAbub2-ZMUOI5jkok7TrOgSYgKXg+H46VrqgG5bmeO6ECp0xqUeEYnnBUT0U8LxOfeFHnvBNJcdCPH-kigH8SBQkiWJUESXK+ErIRZ7aRWChAA)
**Related Issues:**
https://github.com/microsoft/TypeScript/issues/32540#issuecomment-520193240
https://github.com/microsoft/TypeScript/issues/29133
A different, more complex example,
https://github.com/microsoft/TypeScript/issues/14829#issuecomment-520191642
-----
[Edit]
Can someone come up with a better name for this?
-----
I'm working on rewriting my type-safe SQL builder library and it relies on the return type of generic functions being inferred correctly. But it seems like return type inference just breaks in so many unexpected ways.
Anonymous callback functions are used a lot for building the `WHERE`, `ORDER BY`, `GROUP BY`, `HAVING`, `JOIN`, etc. clauses.
Since return type inference for generic functions is not robust, it's basically a blocker for me =( | Discussion | low | Critical |
479,332,991 | go | x/crypto/argon2: panic if keyLen == 0 | ### What version of Go are you using (`go version`)?
### Does this issue reproduce with the latest release?
Assuming Playground uses the latest Go version, yes.
### What operating system and processor architecture are you using (`go env`)?
### What did you do?
https://play.golang.org/p/k07nWDYZ7Zk
### What did you expect to see?
Should return empty byte slice
### What did you see instead?
```
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0xffffffff addr=0x0 pc=0xab662]
goroutine 1 [running]:
golang.org/x/crypto/blake2b.(*digest).Write(0x0, 0x432240, 0x4, 0x40, 0x0, 0xf0520, 0x40c080, 0xce380)
/tmp/gopath522840708/pkg/mod/golang.org/x/[email protected]/blake2b/blake2b.go:216 +0x22
golang.org/x/crypto/argon2.blake2bHash(0x15ee7c, 0x0, 0x0, 0x44e400, 0x400, 0x400)
/tmp/gopath522840708/pkg/mod/golang.org/x/[email protected]/argon2/blake2b.go:26 +0xc0
golang.org/x/crypto/argon2.extractKey(0x800000, 0x10000, 0x10000, 0x10000, 0x4, 0x0, 0x1, 0x0, 0x0, 0x0)
/tmp/gopath522840708/pkg/mod/golang.org/x/[email protected]/argon2/argon2.go:255 +0x2c0
golang.org/x/crypto/argon2.deriveKey(0x1, 0x414008, 0x1, 0x1, 0x414009, 0x1, 0x1, 0x0, 0x0, 0x0, ...)
/tmp/gopath522840708/pkg/mod/golang.org/x/[email protected]/argon2/argon2.go:117 +0x2e0
golang.org/x/crypto/argon2.Key(...)
/tmp/gopath522840708/pkg/mod/golang.org/x/[email protected]/argon2/argon2.go:75
main.main()
/tmp/sandbox040143225/prog.go:9 +0x100
```
| help wanted,NeedsFix | low | Critical |
479,335,195 | scrcpy | Request: Custom keyboard mapping | I would really like custom keyboard mapping controls. I have seen another post about this where someone said it would be too much work, but that thread is closed. Keyboard mapping would be amazing :) | feature request | high | Critical |
479,335,820 | pytorch | "To compact weights again call flatten_parameters()" is printed every step for every GPU | ## 🐛 Bug
This warning gets printed during every single forward pass in PyTorch 1.2. In PyTorch 1.1, it was only printed once.
```
/pytorch/aten/src/ATen/native/cudnn/RNN.cpp:1266: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
/pytorch/aten/src/ATen/native/cudnn/RNN.cpp:1266: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
/pytorch/aten/src/ATen/native/cudnn/RNN.cpp:1266: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
/pytorch/aten/src/ATen/native/cudnn/RNN.cpp:1266: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
/pytorch/aten/src/ATen/native/cudnn/RNN.cpp:1266: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
/pytorch/aten/src/ATen/native/cudnn/RNN.cpp:1266: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
/pytorch/aten/src/ATen/native/cudnn/RNN.cpp:1266: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
/pytorch/aten/src/ATen/native/cudnn/RNN.cpp:1266: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
```
| module: nn,module: rnn,triaged,module: data parallel | medium | Critical |
479,352,992 | vscode | [html] autoclose not working properly with multicursor | Only the first tag is completed correctly; all subsequent ones are wrong.
- VSCode Version: 1.36.1
- OS Version: Windows 10 1809 LTSC
Steps to Reproduce:
1. Open an HTML file
2. Select the six tags h1-h6 with multiple cursors
3. Type `>` to trigger auto-close completion
4. All of them are completed as `h1`
Does this issue occur when all extensions are disabled?: Yes
Screenshot:

| bug,html | low | Minor |
479,383,463 | flutter | CupertinoTabBar can't adapt to tablet layout | On iPad, the icons and titles of the tab bar items are arranged horizontally. In Flutter apps they are drawn vertically (icon over title).

```
Doctor summary (to see all details, run flutter doctor -v):
[√] Flutter (Channel master, v1.8.5-pre.116, on Microsoft Windows [Version 10.0.18950.1000], locale zh-CN)
[√] Android toolchain - develop for Android devices (Android SDK version 28.0.3)
[√] Visual Studio - develop for Windows (Visual Studio Community 2017 15.9.11)
[√] Android Studio (version 3.4)
[√] VS Code (version 1.37.0)
[√] Connected device (2 available)
• No issues found!
``` | framework,a: tablet,a: fidelity,f: cupertino,has reproducible steps,P2,found in release: 3.3,found in release: 3.6,team-design,triaged-design | low | Major |
479,335,820 | scrcpy | GUI Sidebar with all commands | Is it possible to add a sidebar to the program where you can use all the commands that are also available as shortcuts? It is nice to have shortcuts, but it would be cool to have them all as buttons too, for people like me who can't remember all of them and don't want to look into the command-line help every time.
Also, would it be possible to minimize the whole thing to the tray and not have the window in the taskbar anymore? I know this is difficult cross-platform, but maybe... | feature request | low | Minor |
479,387,679 | TypeScript | Imported values don't conflict with global module declarations (regression?) | Is this an intentionally changed behavior?
<!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. -->
**TypeScript Version:** 3.4.0-dev.20190809
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:**
**Code**
```ts
import assert from 'power-assert';
declare global {
const assert: typeof assert;
}
```
**Expected behavior:**
Circular reference error.
**Actual behavior:**
No error.
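For contrast, a sketch of the presumably intended, non-circular formulation, which aliases the import before referencing it in the global declaration (my own example):
```ts
import assertImport from 'power-assert';

declare global {
    const assert: typeof assertImport;
}
```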
**Playground Link:** <!-- A link to a TypeScript Playground "Share" link which demonstrates this behavior -->
**Related Issues:** <!-- Did you find other bugs that looked similar? -->
| Bug | low | Critical |
479,398,436 | go | cmd/go: support ~ in replace statements | I share some directories between machines with different user names. It would be useful to be able to add a statement like the following to a go.mod.
```
replace golang.org/x/net => ~/src/golang.org/x/net
``` | NeedsDecision,FeatureRequest,GoCommand,modules | low | Major |
479,446,152 | pytorch | Failed to build pytorch with NanoPi M4 | ## 🐛 Bug
```
Building wheel torch-1.3.0a0+d3f6d58
-- Building version 1.3.0a0+d3f6d58
cmake -GNinja -DBUILD_PYTHON=True -DBUILD_TEST=True -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/media/pi/AA3D-4D92/pytorch/torch -DCMAKE_PREFIX_PATH=/usr/lib/python3/dist-packages -DNUMPY_INCLUDE_DIR=/usr/local/lib/python3.6/dist-packages/numpy/core/include -DPYTHON_EXECUTABLE=/usr/bin/python3 -DPYTHON_INCLUDE_DIR=/usr/include/python3.6m -DPYTHON_LIBRARY=/usr/lib/libpython3.6m.so.1.0 -DTORCH_BUILD_VERSION=1.3.0a0+d3f6d58 -DUSE_CUDA=False -DUSE_DISTRIBUTED=True -DUSE_NUMPY=True /media/pi/AA3D-4D92/pytorch
-- The CXX compiler identification is GNU 7.4.0
-- The C compiler identification is GNU 7.4.0
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Not forcing any particular BLAS to be found
-- Performing Test COMPILER_WORKS
-- Performing Test COMPILER_WORKS - Failed
CMake Error at cmake/MiscCheck.cmake:52 (message):
Could not run a simple program built with your compiler. If you are trying
to use -fsanitize=address, make sure libasan is properly installed on your
system (you can confirm if the problem is this by attempting to build and
run a small program.)
Call Stack (most recent call first):
CMakeLists.txt:292 (include)
-- Configuring incomplete, errors occurred!
See also "/media/pi/AA3D-4D92/pytorch/build/CMakeFiles/CMakeOutput.log".
See also "/media/pi/AA3D-4D92/pytorch/build/CMakeFiles/CMakeError.log".
Traceback (most recent call last):
File "setup.py", line 756, in <module>
build_deps()
File "setup.py", line 321, in build_deps
cmake=cmake)
File "/media/pi/AA3D-4D92/pytorch/tools/build_pytorch_libs.py", line 61, in build_caffe2
rerun_cmake)
File "/media/pi/AA3D-4D92/pytorch/tools/setup_helpers/cmake.py", line 314, in generate
self.run(args, env=my_env)
File "/media/pi/AA3D-4D92/pytorch/tools/setup_helpers/cmake.py", line 143, in run
check_call(command, cwd=self.build_dir, env=env)
File "/usr/lib/python3.6/subprocess.py", line 291, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '-GNinja', '-DBUILD_PYTHON=True', '-DBUILD_TEST=True', '-DCMAKE_BUILD_TYPE=Release', '-DCMAKE_INSTALL_PREFIX=/media/pi/AA3D-4D92/pytorch/torch', '-DCMAKE_PREFIX_PATH=/usr/lib/python3/dist-packages', '-DNUMPY_INCLUDE_DIR=/usr/local/lib/python3.6/dist-packages/numpy/core/include', '-DPYTHON_EXECUTABLE=/usr/bin/python3', '-DPYTHON_INCLUDE_DIR=/usr/include/python3.6m', '-DPYTHON_LIBRARY=/usr/lib/libpython3.6m.so.1.0', '-DTORCH_BUILD_VERSION=1.3.0a0+d3f6d58', '-DUSE_CUDA=False', '-DUSE_DISTRIBUTED=True', '-DUSE_NUMPY=True', '/media/pi/AA3D-4D92/pytorch']' returned non-zero exit status 1.
```
## To Reproduce
Steps to reproduce the behavior:
```bash
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
USE_CUDA=0 sudo python3 setup.py install
```
## Expected behavior
build success
## Environment
```
Collecting environment information...
PyTorch version: 1.1.0a0
Is debug build: No
CUDA used to build PyTorch: None
OS: Ubuntu 18.04 LTS
GCC version: (Ubuntu/Linaro 7.4.0-1ubuntu1~18.04.1) 7.4.0
CMake version: version 3.10.2
Python version: 3.6
Is CUDA available: No
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Versions of relevant libraries:
[pip3] numpy==1.16.2
[pip3] torch==1.1.0a0
[pip3] torchvision==0.2.2.post3
[conda] Could not collect
```
| module: build,low priority,triaged | low | Critical |
479,466,034 | pytorch | torch.unique is inconsistent with NumPy's unique | ## 🐛 Bug
Hi! I recently wrote an MXNet version of `numpy.unique`. I looked at the implementations in NumPy and torch, and found that something may be wrong.
When `sorted` is requested and `dim` is not `None`, the result may be incorrect.
The fix is to use things like `np.moveaxis` instead of `transpose`.
Details are shown in https://github.com/numpy/numpy/issues/14244 and https://github.com/numpy/numpy/pull/14255.
cc @mruberry @rgommers @heitorschueroff | triaged,module: numpy,module: correctness (silent) | low | Critical |
479,532,457 | angular | Angular shouldn't encode URL characters not encoded by encodeURI(window.location/*.href*/) in location bar upon navigation events (page back/forward) | # 🐞 bug report
Possible expansion of scope for issue #32101: if this scope is deemed appropriate, the fix for both issues might be in one code change (made according to
```javascript
encodeURI(window.location/*.href*/)
```
behavior as mentioned here).
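For concreteness, a quick sketch of the reference behavior (my own example; the characters shown are just illustrative):
```ts
// encodeURI leaves URL-significant characters such as ?, &, =, ( and )
// untouched, and only percent-encodes the rest (here, the space):
encodeURI("https://example.com/a b?x=1&y=(2)");
// -> "https://example.com/a%20b?x=1&y=(2)"
```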
P.S. The rest of the info requested can be looked up from #32101: avoiding duplication of it here.
| type: bug/fix,freq1: low,area: router,state: confirmed,P3 | low | Critical |
479,589,237 | youtube-dl | Support Download from https://www.hotseatathome.com | <!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2019.08.02. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that site you are requesting is not dedicated to copyright infringement, see https://yt-dl.org/copyright-infringement. youtube-dl does not support such sites. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
- Search the bugtracker for similar site support requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Finally, put x into all relevant boxes (like this [x])
-->
- [x] I'm reporting a new site support request
- [x] I've verified that I'm running youtube-dl version **2019.08.02**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that none of provided URLs violate any copyrights
- [x] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
<!--
Provide all kinds of example URLs support for which should be included. Replace following example URLs by yours.
-->
- Single video: https://www.hotseatathome.com/members/the-hot-seat-core-principles/
- Single video: https://www.hotseatathome.com/members/mission-2/
- Single video: https://www.hotseatathome.com/members/the-hot-seat-advanced-principles/
## Description
<!--
Provide any additional information.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->
The site requires credentials to access its content. To get these credentials, one has to pay for the service.
Note: I tried the "cookies" approach and the --username/--password approach, but the site in question is unsupported.
| site-support-request,account-needed | low | Critical |
479,589,806 | go | net/http: RawPath shows inconsistent, case-sensitive behaviour with percentage encoded Unicode URI strings | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.12.7 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
Any environment
<details><summary><code>go env</code> Output</summary><br><pre>
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/user/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/user/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/go"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build606111517=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
```go
package main
import (
"fmt"
"html/template"
"net/http"
"net/url"
)
type content struct {
TplEncoded string
ManuallyEncoded template.URL
ShowPaths bool
RawPath string
Path string
}
func main() {
tpl, _ := template.New("test").Parse(`<!doctype html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<meta charset="utf-8" />
</head>
<body>
{{ if .ShowPaths }}
<p>RawPath = {{ .RawPath }}</p>
<p>Path = {{ .Path }}</p>
{{ else }}
<a href="/link/{{ .TplEncoded }}">Template encoded link</a><br />
<a href="/link/{{ .ManuallyEncoded }}">Manually encoded link</a>
<br />
<p>View this page's source to see the (lower/upper)case difference
in the links</p>
{{ end }}
</body>
</html>`)
// Renders the root with good and bad links.
http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
s := "😋" // Unicode emoji.
tpl.Execute(w, content{
// html/template encodes into lowercase characters.
TplEncoded: s,
// url.PathEscape encodes into uppercase characters.
ManuallyEncoded: template.URL(url.PathEscape(s)),
})
})
// This handler produces inconsistent RawPath based on (upper/lower)case encoding in the URI.
http.HandleFunc("/link/", func(w http.ResponseWriter, r *http.Request) {
tpl.Execute(w, content{
ShowPaths: true,
RawPath: r.URL.RawPath,
Path: r.URL.Path,
})
})
fmt.Println("Go to http://127.0.0.1:8080")
http.ListenAndServe(":8080", nil)
}
```
<!--
If possible, provide a recipe for reproducing the error.
A complete runnable program is good.
A link on play.golang.org is best.
-->
### What did you expect to see?
`url.PathEscape("😋") => %F0%9F%98%8B`
`/link/%F0%9F%98%8B` (A) and `/link/%f0%9f%98%8b` (B) (upper and lower case respectively) are equivalent as per RFC 3986. An `http.HandlerFunc()` handling either of the URLs is expected to show consistent behaviour.
### What did you see instead?
An HTTP handler that processes the identical URIs A and B behaves differently. A, which has uppercase characters, produces an empty `http.Request.URL.RawPath`, whereas B, which has lowercase characters, produces a non-empty `http.Request.URL.RawPath` (still in its escaped, lowercase form). This breaks Unicode URL handling in popular HTTP routers like chi and httprouter.
I discovered this inconsistency when using `html/template`, which escapes Unicode strings in `<a>` hrefs with lowercase hex characters, as opposed to `url.PathEscape`, which produces uppercase hex. | NeedsInvestigation | medium | Critical |
479,610,874 | pytorch | torch.fft crash when used with nn.DataParallel | ## 🐛 Bug
Calling `torch.fft` on a CUDA tensor in an `nn.DataParallel`-wrapped module segfaults.
## To Reproduce
Steps to reproduce the behavior:
Run the following script on PyTorch master (77c08aa46c3f3460b95b89cbe357b99180bc824d) on a machine with 2 or more GPUs:
```python
import torch
import torch.nn as nn
class FFTSEGV(nn.Module):
def forward(self, input):
return torch.fft(input, 2)
model = FFTSEGV()
tensor = torch.rand(4, 3, 2).cuda()
out = model.cuda(0)(tensor.cuda(0))
print(out)
# Uncomment this to avoid segfault.
# Maybe it forces some lazy initialization to happen the correct way...
#out = model.cuda(1)(tensor.cuda(1))
#print(out)
# This segfaults if cuFFT didn't initialize on cuda:1.
dmodel = nn.DataParallel(model)
print(dmodel(tensor))
```
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
## Expected behavior
It shouldn't crash, but it crashes. Very likely lazy initialization gone wrong.
<!-- A clear and concise description of what you expected to happen. -->
## Additional context
See https://discuss.pytorch.org/t/pytorch-fft-on-multiple-gpus/43532 for the original report.
<!-- Add any other context about the problem here. -->
| module: cuda,triaged,module: data parallel | low | Critical |
479,620,412 | opencv | Matrix size overflow in core/src/matrix.cpp | ##### System information (version)
- OpenCV : 4.1.0 (custom compilation) and the latest 4.1.1 (dowloaded from the release page for Windows)
- Operating System / Platform : Windows 64 Bit
- Compiler : Visual Studio 2015
##### Detailed description
```
OpenCV(4.1.0) Error: Assertion failed ((int)nelems >= 0) in cv::Mat::reserve,
file d:\librairies\opencv\opencv-4.1.0\modules\core\src\matrix.cpp, line 626
```
I wanted to create a matrix of size 1 951 054 560 using `cv::Mat::push_back` of `std::vector<float>`. Unfortunately, at the last `cv::Mat::reserve` in `core/src/matrix.cpp`, I got stuck on the `CV_Assert( (int)nelems >= 0 )`. I think it is because of an int32 overflow [there](https://github.com/opencv/opencv/blob/master/modules/core/src/matrix.cpp#L626).
Indeed, I had a matrix of 1 463 290 920 rows, which with a growth factor of 1.5 would become 2 194 936 380, above the int32 limit of 2 147 483 647.
##### Steps to reproduce
Some raw code to demonstrate it :
```c++
// I also tried vector<double> with the same outcome.
#include <iostream>
#include <vector>
#include <opencv2/core.hpp>
using namespace std;
using namespace cv;
int main()
{
vector<float> foo = vector<float>(1463290920, 0.0f);
vector<float> overflow = vector<float>(1, 0.0f);
Mat m;
m.push_back(foo);
m.push_back(overflow);
}
```
I tried the long long cast instead of the int one and it seems to be working :
```c++
#include <iostream>
#include <vector>
#include <algorithm> // for std::max
#include <opencv2/core.hpp>
using namespace std;
using namespace cv;
int main()
{
size_t r = 1463290920;
cout << (r * 3 + 1) / 2 << endl;
cout << max(r + 1, (r * 3 + 1) / 2) << endl;
cout << (long long)max(r + 1, (r * 3 + 1) / 2) << endl;
cout << ((long long) max(r + 1, (r * 3 + 1) / 2) >= 0) << endl;
}
``` | feature,priority: low,category: core | low | Critical |
479,668,806 | react | ErrorBoundary rendering multiple copies of itself when ref assignment fails | **Do you want to request a *feature* or report a *bug*?**
bug
**What is the current behavior?**
When an error occurs during the assignment of a `ref` (and maybe under other conditions), an error boundary wrapping that error may get confused and render itself multiple times inside the same parent. See https://codesandbox.io/s/stoic-fermi-6etqb, which renders:
```html
<div id="root">
<div class="boundary"><span>content</span></div>
<div class="boundary"><span>error</span></div>
</div>
```
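A minimal sketch of the setup as I understand it from the description (TSX; illustrative names, not the sandbox's exact code):
```tsx
import * as React from "react";

class Boundary extends React.Component<{ children?: React.ReactNode }, { error: boolean }> {
  state = { error: false };
  static getDerivedStateFromError() {
    return { error: true };
  }
  render() {
    return (
      <div className="boundary">
        {this.state.error ? <span>error</span> : this.props.children}
      </div>
    );
  }
}

// The child's ref callback throws while the ref is being assigned during commit.
function Content() {
  return (
    <span
      ref={() => {
        throw new Error("ref assignment failed");
      }}
    >
      content
    </span>
  );
}

// Rendering <Boundary><Content /></Boundary> into #root is the shape of the
// repro; the duplicated output above is what the linked sandbox produces.
```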
**What is the expected behavior?**
```html
<div id="root">
<div class="boundary"><span>error</span></div>
</div>
``` | Type: Bug,Component: Reconciler | low | Critical |
479,675,551 | flutter | [google_maps_flutter] Circle strokewidth is slightly off on iOS | The stroke appears a lot wider on iOS than on Android.
Tested on two devices with the same device pixel ratio (3.0).
## Steps to Reproduce
1. [Android](https://photos.app.goo.gl/udtaJpaLtdauTKAs8)
2. [iOS](https://i.imgur.com/d0lrLPe.png)
## Example Code:
```dart
class HomePage extends StatelessWidget {
@override
Widget build(BuildContext context) {
print(
'Device Ratio: ' + MediaQuery.of(context).devicePixelRatio.toString(),
);
return Scaffold(
body: Container(
color: Colors.green,
child: GoogleMap(
initialCameraPosition: CameraPosition(
target: const LatLng(47.6, 8.8796),
zoom: 15,
),
circles: Set<Circle>()
..add(Circle(
circleId: CircleId('hi2'),
center: LatLng(47.6, 8.8796),
radius: 50,
strokeWidth: 3,
strokeColor: Colors.black,
)),
markers: Set<Marker>()
..add(
Marker(
markerId: MarkerId('hi'),
position: LatLng(47.6, 8.8796),
),
),
),
),
);
}
}
```
## Logs
```
[✓] Flutter (Channel master, v1.8.5-pre.78, on Mac OS X 10.14.6 18G87, locale de-DE)
• Flutter version 1.8.5-pre.78 at /Users/mhein/Documents/flutter
• Framework revision 0f4ae3ff4e (4 days ago), 2019-08-08 08:25:37 +0100
• Engine revision f200ee13aa
• Dart version 2.5.0 (build 2.5.0-dev.1.0 f29f41f1a5)
[✓] Android toolchain - develop for Android devices (Android SDK version 28.0.3)
• Android SDK at /Users/mhein/Library/Android/sdk
• Android NDK location not configured (optional; useful for native profiling support)
• Platform android-28, build-tools 28.0.3
• Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1343-b01)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 10.2.1)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Xcode 10.2.1, Build version 10E1001
• CocoaPods version 1.7.0
[✓] Android Studio (version 3.4)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin version 37.1.1
• Dart plugin version 183.6270
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1343-b01)
[✓] VS Code (version 1.36.1)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.2.0
[✓] Connected device (2 available)
• FRD L09 • 73QDU17715005241 • android-arm64 • Android 7.0 (API 24)
• Android SDK built for x86 • emulator-5554 • android-x86 • Android 9 (API 28) (emulator)
• No issues found!
```
| platform-ios,a: quality,p: maps,package,has reproducible steps,P3,found in release: 2.0,found in release: 2.2,team-ios,triaged-ios | low | Minor |
479,706,070 | youtube-dl | fembed |
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2019.08.02. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that site you are requesting is not dedicated to copyright infringement, see https://yt-dl.org/copyright-infringement. youtube-dl does not support such sites. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
- Search the bugtracker for similar site support requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Finally, put x into all relevant boxes (like this [x])
-->
- [ ] I'm reporting a new site support request
- [ ] I've verified that I'm running youtube-dl version **2019.08.02**
- [ ] I've checked that all provided URLs are alive and playable in a browser
- [ ] I've checked that none of provided URLs violate any copyrights
- [ ] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
- Single video: https://www.fembed.com/v/zyvn5ykx8v1
- Single video: https://www.fembed.com/v/zyvn5ykx8v1
## Description
https://www.fembed.com/v/zyvn5ykx8v1
| site-support-request | low | Critical |
479,711,719 | pytorch | [feature request] Subset of eigenvalues/eigenvectors | ## 🚀 Feature
Computation of a subset of eigenvalues and eigenvectors
Inspired by [MATLAB eigs](https://it.mathworks.com/help/matlab/ref/eigs.html#bu2_q3e-sigma):
`o = torch.eigs(A, B, k, sigma)`
Solves the generalized eigenvalue problem `A * V = d * B * V`
- `k`: Number of eigenvalues to compute, specified as a positive scalar integer.
- `sigma`: The eigenvalues closest to the number sigma.
## Motivation
I need the `k` smallest eigenvalues of an `n * n` matrix; currently this is an intractable problem in PyTorch.
e.g. with `k=30` and `n=500`
MATLAB:
```matlab
[phi, e] = eigs(S, L, 30, -1e-5);
% Elapsed time is 0.074229 seconds.
```
NumPy:
```python
e, phi = scipy.sparse.linalg.eigs(S, 30, L, sigma=-1e-5)
# Elapsed time is 2.5 seconds.
```
PyTorch:
```python
e, phi = torch.symeig(C, eigenvectors=True)
# cpu
# Elapsed time is 5 seconds plus the time for the cholesky decomposition.
# cuda
# Elapsed time is 2 seconds plus the time for the cholesky decomposition.
```
## Pitch
I want to be able to compute the subset of eigenvalues/eigenvectors of a matrix that I need, without computing all the others.
## Alternatives
Computing all the eigenvalues/eigenvectors and then selecting the relevant ones.
## Usecase
The [Laplace-Beltrami](https://en.wikipedia.org/wiki/Laplace%E2%80%93Beltrami_operator) operator is a key tool in geometry processing, and its eigenvalues/eigenvectors are an optimal basis to represent smooth functions on surfaces.
For example, it's common to project functions onto the basis of LB eigenvectors, truncating the basis to the first `k` elements. This is equivalent to low-pass filtering with a Fourier series:

Moreover, the associated eigenvalues encode [properties of the surface](https://arxiv.org/abs/1811.11465), e.g. when sorted they lie on a line and the slope of this line is proportional to the surface's area.
If only `k` elements of the spectrum will be used, it is extremely wasteful to compute all the eigenvalues/eigenvectors of the Laplace-Beltrami operator, since, once discretized, their number equals the number of vertices in the mesh, yielding an intractable problem in practice.
| module: performance,feature,module: cpu,triaged | medium | Major |
479,739,552 | terminal | Unicode Right-To-Left override (U+202E) displaces text to the right terminal edge | <!--
🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
I ACKNOWLEDGE THE FOLLOWING BEFORE PROCEEDING:
1. If I delete this entire template and go my own path, the core team may close my issue without further explanation or engagement.
2. If I list multiple bugs/concerns in this one issue, the core team may close my issue without further explanation or engagement.
3. If I write an issue that has many duplicates, the core team may close my issue without further explanation or engagement (and without necessarily spending time to find the exact duplicate ID number).
4. If I leave the title incomplete when filing the issue, the core team may close my issue without further explanation or engagement.
5. If I file something completely blank in the body, the core team may close my issue without further explanation or engagement.
All good? Then proceed!
-->
<!--
This bug tracker is monitored by Windows Terminal development team and other technical folks.
**Important: When reporting BSODs or security issues, DO NOT attach memory dumps, logs, or traces to Github issues**.
Instead, send dumps/traces to [email protected], referencing this GitHub issue.
If this is an application crash, please also provide a Feedback Hub submission link so we can find your diagnostic data on the backend. Use the category "Apps > Windows Terminal (Preview)" and choose "Share My Feedback" after submission to get the link.
Please use this form and describe your issue, concisely but precisely, with as much detail as possible.
-->
# Environment
```none
Windows build number: 10.0.18956.1000
Windows Terminal version (if applicable): 0.3.2171.0
```
# Steps to reproduce
1. Create a file with the [Unicode Right-To-Left override](http://www.unicode-symbol.com/u/202E.html) character in its name, such as `harmless_file[U+202E]txt.exe`
2. Display this file's name in the Terminal by running `dir /b` in cmd or `ls` in PowerShell, etc.
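For convenience, a hypothetical one-liner to create such a file (`\u202e` is the RLO character):
```python
# Creates "harmless_file<U+202E>txt.exe" in the current directory.
open("harmless_file\u202etxt.exe", "w").close()
```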
# Expected behavior
- The file's name should appear as `harmless_fileexe.txt`
# Actual behavior
- The file's name appears as:
harmless_file exe.txt
Screenshot comparing Windows Terminal, conhost and File Explorer:

| Product-Conhost,Help Wanted,Area-Rendering,Issue-Bug,Product-Terminal,Issue-Task,Priority-2 | low | Critical |
479,748,296 | TypeScript | Types `number` and explicitly constrained `T extends unknown` shouldn't be comparable | ```ts
const num = 1;
function check<T extends unknown>(x: T) {
    return x === num;
}
check(num);
```
**Expected**: Error: This condition will always return 'false' since the types 'T' and '1' have no overlap.
**Actual**: No error.
Contrast this with the following example from #32768.
```ts
const num = 1;
function check<T>(x: T) {
    return x === num;
}
check(num);
``` | Suggestion,Awaiting More Feedback | medium | Critical |
479,770,418 | youtube-dl | Unsupported URL: 2x2tv.ru | <!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2019.08.02. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that site you are requesting is not dedicated to copyright infringement, see https://yt-dl.org/copyright-infringement. youtube-dl does not support such sites. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
- Search the bugtracker for similar site support requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Finally, put x into all relevant boxes (like this [x])
-->
- [x] I'm reporting a new site support request
- [x] I've verified that I'm running youtube-dl version **2019.08.02**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that none of provided URLs violate any copyrights
- [x] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
<!--
Provide all kinds of example URLs support for which should be included. Replace following example URLs by yours.
-->
- Single video: [https://2x2tv.ru/video/atomnyj-les/sezon-2/seriya-9-sekret-skuki/](url)
- LiveTV stream: [https://2x2tv.ru/online](url)
## Description
<!--
Provide any additional information.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->
All media examples above play fine in the Google Chrome browser.
| site-support-request | low | Critical |
479,781,144 | kubernetes | Probe.SuccessThreshold validation is contextual (bad) | Comments for this field (staging/src/k8s.io/api/core/v1/types.go) say "Must be 1 for liveness" and #77807 is adding startup in the same way.
This is kind of a gross API. It would be nice if we could just make that field work normally in those contexts (I am not sure why we don't want it to work, even if it is odd).
/kind bug
/sig node | kind/bug,priority/backlog,sig/node,lifecycle/frozen,triage/accepted | low | Critical |
479,784,129 | pytorch | TensorIterator stubs are designed for merge conflicts. | I ran into some merge conflicts with https://github.com/pytorch/pytorch/pull/23847 because people add their stubs to the end of the list, but this is guaranteed to cause merge conflicts as we port more ops to TensorIterator.
Instead, we should arrange these in alphabetical order (maybe apart from the "basic arithmetic" ops), and have clear comments saying that you are expected to add stubs in alphabetical order. | module: internals,triaged | low | Minor |
479,792,715 | vscode | [css] SCSS/SASS Auto Intellisense doesn't work after last entry | Version: 1.37.0 (system setup)
Commit: 036a6b1d3ac84e5ca96a17a44e63a87971f8fcc8
Date: 2019-08-08T02:33:50.993Z
Electron: 4.2.7
Chrome: 69.0.3497.128
Node.js: 10.11.0
V8: 6.9.427.31-electron.0
OS: Windows_NT x64 6.1.7601
Steps to Reproduce:
1. Create a .scss file, type an expected autocomplete trigger (e.g. `&:hover`), and notice that autocomplete works properly.

2. After the last block, enter a new block with `&` and IntelliSense is not activated

3. Before the first `&` block, enter a new `&:` and IntelliSense is activated properly

Same issue in nested blocks. Any attempt to do an `&` block with a pseudo-class after the first one appears to fail. The dev console shows no problems.
Does this issue occur when all extensions are disabled?: Yes
| bug,css-less-scss | low | Minor |
479,823,017 | rust | Error "cannot infer type" when using '?' in async block + bad diagnostic | This seems related to #42424, except I don't see an obvious workaround like in #42424.
```rust
#![feature(async_await)]

use std::io::Error;

fn make_unit() -> Result<(), Error> {
    Ok(())
}

fn main() {
    let fut = async {
        make_unit()?;
        Ok(())
    };
}
```
Fails with the error
```
error[E0282]: type annotations needed for `impl std::future::Future`
--> src/main.rs:11:9
|
10 | let fut = async {
| --- consider giving `fut` the explicit type `impl std::future::Future`, with the type parameters specified
11 | make_unit()?;
| ^^^^^^^^^^^^ cannot infer type
```
https://play.rust-lang.org/?version=nightly&mode=debug&edition=2018&gist=ea7e5f7b6e1637f6a39aee0b8209c99e
First of all, it seems like this shouldn't be an error--it should be able to deduce that the return type is `Result<(), io::Error>`, but also the diagnostic **does not work**. If you try to add `impl std::future::Future<Output=Result<(), Error>>` as a type annotation, it also fails to compile because `impl Trait` is only allowed as return types and argument types.
[Playground link](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2021&gist=e2dacce6293b9d0bb238a3df0d4f2877)
| A-diagnostics,T-compiler,A-inference,A-async-await,A-suggestion-diagnostics,AsyncAwait-Triaged,D-papercut,D-terse | medium | Critical |
479,863,057 | flutter | TabView swipe animates too slow compared to Android | <!-- Thank you for using Flutter!
If you are looking for support, please check out our documentation
or consider asking a question on Stack Overflow:
* https://flutter.dev/
* https://api.flutter.dev/
* https://stackoverflow.com/questions/tagged/flutter?sort=frequent
If you have found a bug or if our documentation doesn't have an answer
to what you're looking for, then fill our the template below. Please read
our guide to filing a bug first: https://flutter.dev/docs/resources/bug-reports
-->
## Use case
When you swipe through your tabs, it takes quite some time until the animation is finished, so you really have to wait until you can scroll down. It also slows down even more at the end of the animation. In native Android apps this animation is way faster...
## Proposal
Please make the animation similar to android.
Thanks in advance :)
| framework,f: material design,a: fidelity,f: scrolling,c: proposal,P3,team-design,triaged-design | low | Critical |
479,875,999 | pytorch | deprecate cuda arch 3.5/3.7 in nightlies | Now that we just finished releasing 1.2.0, we should take this perfect opportunity to kick out 3.5/3.7 (K40, K80 GPUs) in nightlies, and see how many people are affected.
cc @ezyang | module: binaries,triaged | low | Major |
479,889,545 | react | Verify that Dehydrated Boundaries (and SuspenseList) Works with DevTools | The fixture might be a good start https://github.com/facebook/react/tree/master/fixtures/ssr (enableSuspenseServerRenderer flag to try it).
It has a long suspending thing.
It doesn't have a SuspenseList yet but might be nice. | Component: Developer Tools,Type: Needs Investigation,React Core Team | medium | Minor |
479,890,550 | TypeScript | Configure from package.json | ## Search Terms
package.json, config, configuration, tsconfig
## Suggestion
Allow `package.json` as an alternate source for `tsconfig.json` options. To be clear, I am requesting that you please re-evaluate #6590 - it has been 2 years since that issue was posted, so interests may have changed and I believe this feature holds good value.
### UPDATE: 2022-04-28
**Please vote how you would like to see this feature implemented here:** https://github.com/microsoft/TypeScript/issues/32830#issuecomment-1112372061
## Use Cases
> What do you want to use this for?
Provide a broader configuration support and decluttering the project root of configuration files that can easily be moved to `package.json`.
> What shortcomings exist with current approaches?
Some users prefer to store all their configuration files in one larger file. Right now, that is not an option.
## Examples
Example of what the `package.json` would look like.
```
{
  "name": "example",
  "version": "1.0.0",
  "description": "This is an example",
  "license": "MIT",
  "tsconfig": {
    "compilerOptions": {
      "module": "commonjs",
      "moduleResolution": "node",
      "outDir": ".build",
      "pretty": true,
      "rootDir": "./",
      "sourceMap": true,
      "target": "ES5",
      "strict": true
    },
    "exclude": [
      "node_modules"
    ],
    "include": [
      "index.ts",
      "src/**/*"
    ]
  }
}
```
## Checklist
My suggestion meets these guidelines:
* ✅ This wouldn't be a breaking change in existing TypeScript/JavaScript code
* ✅ This wouldn't change the runtime behavior of existing JavaScript code
* ✅ This could be implemented without emitting different JS based on the types of the expressions
* ✅ This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* ✅ This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback | high | Critical |
479,938,818 | pytorch | Allow forward method to be defined with .define() in new TorchScript API | ## 🐛 Bug
Script modules support defining new methods from source code with the `.define()` call. With the new API it's impossible to define the `forward` method this way because JIT scripting happens before the `.define` call.
## To Reproduce
This code fails:
```
class Foo(torch.nn.Module):
    def __init__(self):
        super(Foo, self).__init__()

m = Foo()
m = torch.jit.script(m)
m.define("""
def forward(self):
    return torch.rand(3)
""")
```
Error: `RuntimeError: No forward method was defined on Foo()`
Same happens if another method refers to a method being defined
```
class Foo(torch.nn.Module):
    def __init__(self):
        super(Foo, self).__init__()

    def forward(self):
        return self.bla()

m = Foo()
m = torch.jit.script(m)
m.define("""
def bla(self):
    return torch.rand(3)
""")
```
## Expected behavior
This behavior worked fine with the old API of ScriptModule:
```
class Foo(torch.jit.ScriptModule):
    def __init__(self):
        super(Foo, self).__init__()
        self.define("""
def forward(self):
    return torch.rand(3)
""")

m = Foo()
```
Or with another method:
```
class Foo(torch.jit.ScriptModule):
    def __init__(self):
        super(Foo, self).__init__()
        self.define("""
def bla(self):
    return torch.rand(3)
""")

    @torch.jit.script_method
    def forward(self):
        return self.bla()

m = Foo()
```
cc @suo | oncall: jit,triaged,jit-backlog | low | Critical |
479,960,817 | pytorch | Confusing error message for Custom Class type mismatch | ## 🐛 Bug
When using user-defined classes, it's easy to miss the type annotations for members of the class (and Python won't error out). The only error occurs when an instance of the class is passed to one of the TorchScript functions, and that error is pretty confusing by itself.
## To Reproduce
```python
import torch
from torch import Tensor

@torch.jit.script
class Pair:
    def __init__(self, first, second):
        self.first = first
        self.second = second

@torch.jit.script
def sum_pair(p):
    # type: (Pair) -> Tensor
    return p.first + p.second

# works
p = Pair(torch.tensor([1]), torch.tensor([2]))
print(sum_pair(p))

# errors
p2 = Pair(1, 2)
print(sum_pair(p2))
```
Produces:
```
17
18 p2 = Pair(1, 2)
---> 19 print(sum_pair(p2))
RuntimeError: sum_pair() Expected a value of type '__torch__.Pair' for argument 'p' but instead found type 'Pair'.
Position: 0
Value: <__main__.Pair object at 0x7fb985f6f3c8>
Declaration: sum_pair(ClassType<Pair> p) -> (Tensor)
```
## Expected behavior
A better error message (ideally naming the individual fields whose types mismatch) would be nice.
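For context, a sketch of the explicit constructor annotation the report alludes to (hypothetical; assuming both fields are meant to hold tensors, a MyPy-style type comment pins the member types and should make the error point at the constructor arguments instead):
```python
import torch
from torch import Tensor

@torch.jit.script
class Pair:
    def __init__(self, first, second):
        # type: (Tensor, Tensor) -> None
        self.first = first
        self.second = second
```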
cc @suo | oncall: jit,triaged,jit-backlog | low | Critical |
479,982,391 | go | path/filepath: Glob fails if it has a wildcard and ends with / | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.12 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/black-hole/Library/Caches/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/black-hole/.gvm/pkgsets/go1.12/global"
GOPROXY=""
GORACE=""
GOROOT="/Users/black-hole/.gvm/gos/go1.12"
GOTMPDIR=""
GOTOOLDIR="/Users/black-hole/.gvm/gos/go1.12/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD="/dev/null"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/82/jkr30dwj24ncf6ng3tdv9kd40000gn/T/go-build925523187=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>
### What did you do?
<!--
If possible, provide a recipe for reproducing the error.
A complete runnable program is good.
A link on play.golang.org is best.
-->
https://play.golang.org/p/hOALcoEmrWz
### What did you expect to see?
Accurate return path
### What did you see instead?
No matches are returned when the pattern contains a wildcard and the last character is `/`.
### Other

| NeedsInvestigation | low | Critical |
480,012,455 | pytorch | torch.utils.tensorboard.SummaryWriter fails to flush at program exit | ```python
K = 10
import torch
import torch.utils.tensorboard
param = torch.randn(10, 10)
param_grad = torch.randn(10, 10)
norm, grad_norm = param.norm(), param_grad.norm()
ratio = grad_norm / (1e-9 + norm)
tensorboard = torch.utils.tensorboard.SummaryWriter('tb_bug')
for iteration in range(K):
    tensorboard.add_scalars('test', dict(norm = norm, grad_norm = grad_norm, ratio = ratio), iteration)
```
`tensorboard --logdir tb_bug` shows no data. When K = 100, TensorBoard shows the plot correctly
torch version is '1.2.0.dev20190804', tensorboard version is '1.14.0'
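A likely workaround (assuming the root cause is that pending events are never flushed before the process exits) is to flush or close the writer explicitly, reusing the variables from the snippet above:
```python
for iteration in range(K):
    tensorboard.add_scalars('test', dict(norm=norm, grad_norm=grad_norm, ratio=ratio), iteration)
tensorboard.close()  # flushes pending events and closes the event files
```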
Full output of `tensorboard --logdir tb_bug --inspect`:
```
TensorFlow installation not found - running with reduced feature set.
======================================================================
Processing event files... (this can take a few minutes)
======================================================================
Found event files in:
tb_bug
tb_bug/test_norm
tb_bug/test_grad_norm
tb_bug/test_ratio
These tags are in tb_bug:
audio -
histograms -
images -
scalars -
tensor -
======================================================================
Event statistics for tb_bug:
audio -
graph -
histograms -
images -
scalars -
sessionlog:checkpoint -
sessionlog:start -
sessionlog:stop -
tensor -
======================================================================
These tags are in tb_bug/test_norm:
audio -
histograms -
images -
scalars -
tensor -
======================================================================
Event statistics for tb_bug/test_norm:
audio -
graph -
histograms -
images -
scalars -
sessionlog:checkpoint -
sessionlog:start -
sessionlog:stop -
tensor -
======================================================================
These tags are in tb_bug/test_grad_norm:
audio -
histograms -
images -
scalars -
tensor -
======================================================================
Event statistics for tb_bug/test_grad_norm:
audio -
graph -
histograms -
images -
scalars -
sessionlog:checkpoint -
sessionlog:start -
sessionlog:stop -
tensor -
======================================================================
These tags are in tb_bug/test_ratio:
audio -
histograms -
images -
scalars -
tensor -
======================================================================
Event statistics for tb_bug/test_ratio:
audio -
graph -
histograms -
images -
scalars -
sessionlog:checkpoint -
sessionlog:start -
sessionlog:stop -
tensor -
======================================================================
``` | triaged,module: tensorboard | low | Critical |
480,023,807 | flutter | FlutterDriver: add ability to handle pesky system dialogs during automation | Example of dialog

It remains over the top of the Flutter app being automated for the duration :-(
I've tried recreating the emulated device, and hard reboots, but this consistently comes up during FlutterDriver test suites. The tests nearly always pass* despite this dialog being up before the first test goes through it and intermittently throughout the suite of tests. There are videos and StackOverflow postings on how to twiddle Android settings to prevent it, but they are for older versions of Android, or not applicable to my Galaxy-flavored setup. The build machine is a 2018 MacMini (Mojave, patched up to date, 8GB, plenty of SSD left).
**Could the Flutter dev team provide canonical advice as to how to handle system dialogs during FlutterDriver test-automation suites, please?**
Also - can we encourage FlutterDriver to be referred to as FlutterDriver and not just 'driver'? I'm a co-creator of Selenium 1.0. Selenium 2+ was an implementation called WebDriver (by Simon Stewart of ThoughtWorks, then Google). We ourselves don't call that 'driver' in conversation or documentation; could y'all encourage the same for increased clarity and google-ability, please?
| tool,d: api docs,t: flutter driver,c: proposal,P3,team-tool,triaged-tool | low | Major |
480,056,237 | pytorch | Tensorboard: Add disable flag for debugging | ## 🚀 Feature
For debugging, it's a bit annoying to have to comment out logging calls or have thousands of if/else statements when one does not want to log a run.
A simple `SummaryWriter(..., disabled=True)` flag would be helpful :)
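Until such a flag exists, a minimal sketch of a workaround (hypothetical `disabled` and `log_dir` names) that silently swallows every logging call:
```python
import torch.utils.tensorboard

class NoOpWriter:
    """Stand-in that ignores any SummaryWriter method call."""
    def __getattr__(self, name):
        return lambda *args, **kwargs: None

disabled = True  # e.g. driven by a command-line flag
log_dir = 'runs/debug'
writer = NoOpWriter() if disabled else torch.utils.tensorboard.SummaryWriter(log_dir)
writer.add_scalar('loss', 0.0, 0)  # no-op when disabled
```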
| triaged,enhancement,module: tensorboard | low | Critical |
480,196,518 | go | runtime: netpollWaiters typically not decremented | What is the semantic of [runtime.netpollWaiters](https://github.com/golang/go/blob/f686a2890b34996455c7d7aba9a0efba74b613f5/src/runtime/netpoll.go#L84)?
If it is supposed to track the number of goroutines waiting for a poll result, then it is implemented incorrectly (at least on Linux with epoll).
I see that [runtime.netpollWaiters](https://github.com/golang/go/blob/f686a2890b34996455c7d7aba9a0efba74b613f5/src/runtime/netpoll.go#L84) is incremented every time a new goroutine gets blocked on polling:
https://github.com/golang/go/blob/f686a2890b34996455c7d7aba9a0efba74b613f5/src/runtime/netpoll.go#L354-L363
and decremented only in `func netpollgoready(gp *g, traceskip int)`:
https://github.com/golang/go/blob/f686a2890b34996455c7d7aba9a0efba74b613f5/src/runtime/netpoll.go#L365-L368
Looks like `netpollgoready()` is called only from `internal/poll.runtime_pollSetDeadline()`, i.e. in some codepaths related to setting polling deadlines:
https://github.com/golang/go/blob/f686a2890b34996455c7d7aba9a0efba74b613f5/src/runtime/netpoll.go#L204-L205
And most frequently, a parked goroutine waiting for a poll result is awakened somewhere in `runtime.findrunnable()`:
https://github.com/golang/go/blob/61bb56ad63992a3199acc55b2537c8355ef887b6/src/runtime/proc.go#L2210-L2221
or `runtime.pollWork()`:
https://github.com/golang/go/blob/61bb56ad63992a3199acc55b2537c8355ef887b6/src/runtime/proc.go#L2395-L2409
Apparently the `atomic.Load(&netpollWaiters) > 0` condition in the `runtime.findrunnable()` and `runtime.pollWork()` functions referenced above is always true as soon as a single goroutine has waited for a poll result and been awakened from those functions.
I verified that `runtime.netpollWaiters` is increased with each wait of a goroutine on the network in an example of handling a TCP connection:
```
// tcp-server.go
package main

import (
    "bufio"
    "fmt"
    "log"
    "net"
    "strings"
)

func main() {
    fmt.Println("Launching server...")
    ln, _ := net.Listen("tcp", ":8081")
    conn, _ := ln.Accept()
    for {
        message, err := bufio.NewReader(conn).ReadString('\n')
        if err != nil {
            log.Fatal(err)
        }
        fmt.Print("Message Received:", string(message))
        newmessage := strings.ToUpper(message)
        conn.Write([]byte(newmessage + "\n"))
    }
}
```
```
$ dlv debug tcp-socket.go
Type 'help' for list of commands.
(dlv) p runtime.netpollWaiters
0
(dlv) c
Launching server...
received SIGINT, stopping process (will not forward signal)
> runtime.epollwait() /usr/lib/golang/src/runtime/sys_linux_amd64.s:675 (PC: 0x4619e0)
Warning: debugging optimized function
(dlv) p runtime.netpollWaiters
1
(dlv) c
Message Received:message 1 from netcat
Message Received:message 2 from netcat
received SIGINT, stopping process (will not forward signal)
> runtime.epollwait() /usr/lib/golang/src/runtime/sys_linux_amd64.s:675 (PC: 0x4619e0)
Warning: debugging optimized function
(dlv) p runtime.netpollWaiters
4
(dlv)
```
```
$ go version
go version go1.12.5 linux/amd64
```
Snippet from `go env`:
```
$ go env
GOARCH="amd64"
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
```
| NeedsInvestigation,compiler/runtime | low | Critical |
480,206,002 | kubernetes | Compute and propagate EndpointsLastChangeTriggerTime for all cases | This is important to implement [Networking programming latency SLI](https://github.com/kubernetes/community/blob/master/sig-scalability/slos/network_programming_latency.md). At the moment it's only changed when list of pods changes.
According to @wojtek-t, this issue would be resolved once server-side apply is implemented and we timestamp all changes. At the moment it's hardly feasible to do.
Tracking issue, so we don't forget to remove temporary workarounds.
/cc @mm4tt | kind/bug,sig/scalability,lifecycle/frozen | low | Major |
480,227,280 | nvm | No npm prefix set, but nvm requires 'nvm use --delete-prefix' | - Operating system and version:
<details>
<!-- do not delete the following blank line -->
```sh
$ lsb_release -a
LSB Version: 1.4
Distributor ID: Arch
Description: Arch Linux
Release: rolling
Codename: n/a
```
</details>
- `nvm debug` output:
<details>
<!-- do not delete the following blank line -->
```sh
$ nvm debug
nvm --version: v0.34.0
$SHELL: /bin/bash
$SHLVL: 2
$HOME: /home/paulk/
$NVM_DIR: '$HOME/.nvm'
$PATH: /usr/local/sbin:/usr/local/bin:/usr/bin:/opt/android-sdk/platform-tools:/usr/lib/jvm/default/bin:/usr/lib32/jvm/default/bin:/opt/ti/msp-flasher:/opt/ti/mspgcc/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl:$HOME/bin/:$HOME.gem/ruby/2.3.0/bin:/opt/android-sdk/platform-tools/:$HOME.bin:$HOMEbin/gyb:
$PREFIX: ''
$NPM_CONFIG_PREFIX: ''
$NVM_NODEJS_ORG_MIRROR: ''
$NVM_IOJS_ORG_MIRROR: ''
shell version: 'GNU bash, version 5.0.7(1)-release (x86_64-pc-linux-gnu)'
uname -a: 'Linux 5.1.15-zen1-1-zen #1 ZEN SMP PREEMPT Tue Jun 25 04:49:28 UTC 2019 x86_64 GNU/Linux'
OS version: Arch Linux ()
curl: /usr/bin/curl, curl 7.65.3 (x86_64-pc-linux-gnu) libcurl/7.65.3 OpenSSL/1.1.1c zlib/1.2.11 libidn2/2.2.0 libpsl/0.21.0 (+libidn2/2.2.0) libssh2/1.8.2 nghttp2/1.36.0
wget: /usr/bin/wget, GNU Wget 1.20.3 built on linux-gnu.
git: /usr/bin/git, git version 2.22.0
grep: /usr/bin/grep (grep --color=auto), grep (GNU grep) 3.3
awk: /usr/bin/awk, GNU Awk 4.2.1, API: 2.0 (GNU MPFR 4.0.2, GNU MP 6.1.2)
sed: /usr/bin/sed, sed (GNU sed) 4.7
cut: /usr/bin/cut, cut (GNU coreutils) 8.31
basename: /usr/bin/basename, basename (GNU coreutils) 8.31
rm: /usr/bin/rm, rm (GNU coreutils) 8.31
mkdir: /usr/bin/mkdir, mkdir (GNU coreutils) 8.31
xargs: /usr/bin/xargs, xargs (GNU findutils) 4.6.0
nvm current: system
which node: /usr/bin/node
which iojs: which: no iojs in (/usr/local/sbin:/usr/local/bin:/usr/bin:/opt/android-sdk/platform-tools:/usr/lib/jvm/default/bin:/usr/lib32/jvm/default/bin:/opt/ti/msp-flasher:/opt/ti/mspgcc/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl:$HOME/bin/:$HOME.gem/ruby/2.3.0/bin:/opt/android-sdk/platform-tools/:$HOME.bin:$HOMEbin/gyb:)
which npm: /usr/bin/npm
npm config get prefix: /usr
npm root -g: /usr/lib/node_modules
```
</details>
- `nvm ls` output:
<details>
<!-- do not delete the following blank line -->
```sh
$ nvm ls
v8.16.0
v10.16.2
-> system
node -> stable (-> v10.16.2) (default)
stable -> 10.16 (-> v10.16.2) (default)
iojs -> N/A (default)
unstable -> N/A (default)
lts/* -> lts/dubnium (-> v10.16.2)
lts/argon -> v4.9.1 (-> N/A)
lts/boron -> v6.17.1 (-> N/A)
lts/carbon -> v8.16.0
lts/dubnium -> v10.16.2
```
</details>
- How did you install `nvm`? (e.g. install script in readme, Homebrew):
nvm installed from https://aur.archlinux.org/packages/nvm/
My ~/.bashrc ends with:
<details>
<!-- do not delete the following blank line -->
```sh
export PATH="$PATH:$(yarn global bin)"
export NVM_DIR="$HOME/.nvm"
source /usr/share/nvm/init-nvm.sh
```
</details>
- What steps did you perform?
Anytime I run `nvm use {NODE_VERSION}` it complains
<details>
<!-- do not delete the following blank line -->
```sh
nvm is not compatible with the npm config "prefix" option: currently set to "/home/paulk/.nvm/versions/node/v{NODE_VERSION}"
Run `npm config delete prefix` or `nvm use --delete-prefix v{NODE_VERSION}` to unset it.
```
</details>
If I set a default alias, then the error message appears anytime I open a shell. NVM seems to work just fine if I use `nvm use --delete-prefix` every time I try to load a different NVM version.
- What did you expect to happen?
I expected nvm to silently switch to the specified node. I'd like to be able to set a default prefix.
- Is there anything in any of your profile files (`.bashrc`, `.bash_profile`, `.zshrc`, etc) that modifies the `PATH`?
Yes, but it's all before the lines that load nvm.
My ~/.npmrc contains only the `local-address=10.1.2.123` option. | feature requests,pull request wanted | low | Critical |
480,236,261 | TypeScript | null check of a const property incorrectly resolved | <!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. -->
**TypeScript Version:** 3.5.3 (also tried with `@next`)
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:** const symbol property "not assignable to type"
**Code**
```ts
/*
 * For reasons of how the JSON is converted, I'm using Symbols to hide certain properties.
 */
class Server {
    public auth: string | null = null;
}

/** `tsc` complains about this[AUTH_PROP] even though it must always be a string inside the `if()` */
const AUTH_PROP = Symbol();
class TestSymbol {
    private readonly [AUTH_PROP]: string | null = null;
    public readonly server: Server | null = null;

    public get auth(): string | undefined {
        if (this[AUTH_PROP] !== null) {
            return this[AUTH_PROP];
        }
        if (this.server && this.server.auth) {
            return this.server.auth;
        }
        return undefined;
    }
}

/** same thing with a string literal and it works */
class TestNoSymbol {
    private readonly _auth: string | null = null;
    public readonly server: Server | null = null;

    public get auth(): string | undefined {
        if (this['_auth'] !== null) {
            return this['_auth'];
        }
        if (this.server && this.server.auth) {
            return this.server.auth;
        }
        return undefined;
    }
}

/** same thing with a const string and it fails */
const AUTH_PROP_S = '_auth';
class TestConstString {
    private readonly [AUTH_PROP_S]: string | null = null;
    public readonly server: Server | null = null;

    public get auth(): string | undefined {
        if (this[AUTH_PROP_S] !== null) {
            return this[AUTH_PROP_S];
        }
        if (this.server && this.server.auth) {
            return this.server.auth;
        }
        return undefined;
    }
}
```
**Expected behavior:**
All 3 versions of this should be fine, and not throw a `tsc` error. The `const` values can never be changed so it must always be a `string` inside the `if()` statement.
**Actual behavior:**
```
Type 'string | null' is not assignable to type 'string | undefined'.
Type 'null' is not assignable to type 'string | undefined'.
```
**Playground Link:** https://is.gd/V7Jai9
**Related Issues:** not sure
| Needs Investigation | low | Critical |
480,238,286 | godot | Error spam Condition ' p_size < 0 ' is true. when setting internal_vertices in Polygon2D | **Godot version:**
3.2.dev.custom_build. 3ea33c0e4
**Issue description:**
Error spam when moving the mouse in the 2D viewport with a Polygon2D
```
ERROR: resize: Condition ' p_size < 0 ' is true. returned: ERR_INVALID_PARAMETER
At: ./core/cowdata.h:252.
```
**Minimal reproduction project:**
[The-worst-Godot-test-project2.zip](https://github.com/godotengine/godot/files/3497772/The-worst-Godot-test-project2.zip)
| bug,topic:core,confirmed | low | Critical |
480,243,578 | TypeScript | Strange any with export interface and variable of same name | <!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. -->
**TypeScript Version:**
3.5.3. Also reproduces on all Playground versions from 2.7.2 to 3.5.1. Does not reproduce on 2.4.1
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:**
interface export any duplicate
**Code**
```ts
export interface A {
}
const A = {
};
```
**Expected behavior:**
Quick info on `const A` shows type `{}`
**Actual behavior:**
Quick info on `const A` shows type `any`
If `export` is removed, the problem goes away.
**Playground Link:** <!-- A link to a TypeScript Playground "Share" link which demonstrates this behavior -->
[Playground](https://www.typescriptlang.org/play/#code/KYDwDg9gTgLgBASwHY2FAZgQwMbDgQTgG8AoAXxJOwiQGd5CBeY8gbhKA)
**Related Issues:**
https://github.com/microsoft/TypeScript/issues/31031
| Bug,Domain: Quick Info | low | Minor |
480,252,445 | create-react-app | Allow jest `verbose` configuration option | ### Is your proposal related to a problem?
<!--
Provide a clear and concise description of what the problem is.
For example, "I'm always frustrated when..."
-->
When executing tests that may output a number of log lines irrelevant to the test run, it would be nice to hide those by using Jest's built-in `verbose` option.
### Describe the solution you'd like
<!--
Provide a clear and concise description of what you want to happen.
-->
Add `verbose` to the whitelist for jest options in `createJestConfig.js`. After making this modification locally I haven't run into any negative side effects.
### Describe alternatives you've considered
<!--
Let us know about other solutions you've tried or researched.
-->
- Providing a custom logging interface seemed a bit excessive when this functionality is built into the test tool but is gated by a whitelist.
- Removing the whitelist entirely and allow freeform Jest configuration.
### Additional context
<!--
Is there anything else you can add about the proposal?
You might want to link to related issues here, if you haven't already.
-->
The suggestion to eject when trying to use a non-whitelisted jest configuration option seems a bit heavy-handed versus a link to create an issue about adding a new option or an explanation as to why there is a whitelist. | issue: proposal,needs triage | low | Minor |
480,252,541 | pytorch | hasSideEffects INTERNAL ASSERT FAILED when using .split method with JIT | When using the code snippet below, I get this error: `RuntimeError: kind_.is_prim() INTERNAL ASSERT FAILED at /pytorch/torch/csrc/jit/ir.cpp:904, please report a bug to PyTorch. Only prim ops are allowed to not have a registered operator but aten::cat doesn't have one either. We don't know if this op has side effects. (hasSideEffects at /pytorch/torch/csrc/jit/ir.cpp:904)`
```python
import torch
import numpy as np

@torch.jit.script
def get_loss(a, idxs):
    idxs = idxs.split(split_size=1, dim=1)
    b = a.index(idxs)
    return b.sum()

c = torch.zeros(3, 4, 5)
c.requires_grad = True
idxs = torch.tensor(np.array([[0, 1, 2],
                              [0, 0, 4],
                              [0, 2, 4],
                              [2, 3, 0]], dtype=np.long))
loss = get_loss(c, idxs)
```
If `idxs.split(split_size=1, dim=1)` is replaced with `list(idxs.split(split_size=1, dim=1))`, the error disappears.
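In other words, the working variant of the scripted function only differs by that one `list(...)` call:
```python
import torch

@torch.jit.script
def get_loss(a, idxs):
    idx_list = list(idxs.split(split_size=1, dim=1))  # materialize the tuple into a list
    b = a.index(idx_list)
    return b.sum()
```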
PyTorch version is 1.2.0
cc @ezyang @gchanan @suo | high priority,oncall: jit,triaged | low | Critical |
480,265,205 | flutter | CupertinoTimerPicker minor fidelity issues. | When compared with the native count down timer, `CupertinoTimerPicker` looks and behaves differently in a few places:
- The user should not be allowed to pick 0 hours 0 minutes. Attempting to do so causes the minute picker to roll up so it becomes 0 hours 1 minute (even when you are changing the hour picker), and the numbers in the minute picker become gray when the picker is close to 0 hours 0 minutes.
- The abbreviations should not end with a ".", i.e. "min." should be "min". The seconds picker doesn't exist for the native component, but in the same vein it should be "sec" instead of "sec.".
- Magnification should be enabled for the selected row of each column. | framework,f: date/time picker,a: fidelity,f: cupertino,has reproducible steps,P2,found in release: 3.3,found in release: 3.7,team-design,triaged-design | low | Minor |
480,270,840 | TypeScript | jsdoc object index signature syntax doesn't instantiate type variables | **Code**
```js
/** @template T */
class C {
/** @param {T} t */
constructor(t) {
/** @type {Object<string, T> } -- does not instantiate T */
this.ot = { p: t }
/** @type {{ [s: string]: T }} -- instantiates T */
this.st = { p: t }
}
}
var c = new C(1)
c.ot.p // should have type number, has type T
c.st.p // has type number
```
**Expected behavior:**
`c.ot.p : number`
**Actual behavior:**
`c.ot.p : T`
`Object<string, T>` has the syntax of a type alias, but doesn't do any of the instantiation that type aliases do, which is probably the problem. | Bug,Domain: JSDoc,Domain: JavaScript | low | Minor |
480,270,871 | rust | Once `impl_trait_in_bindings` is stable, suggest using it in local bindings | https://github.com/rust-lang/rust/pull/63507#discussion_r313274523 introduces a check to _not_ suggest `let foo: impl Trait`. Once this is valid, we should remove that gate. | C-enhancement,A-diagnostics,T-compiler,S-blocked,A-suggestion-diagnostics,F-impl_trait_in_bindings,requires-nightly | low | Minor |
480,275,870 | pytorch | Loading custom Torchscript C++ operators in python segfaults due to ABI compatibility issue between pytorch and libtorch | I followed [this](https://pytorch.org/tutorials/advanced/torch_script_custom_ops.html) tutorial and built my custom extension. Everything works with torch 1.1. With torch 1.2 (pytorch and libtorch), building succeeds, the code can be called from C++, but will segfault when I try to load the op in python with `torch.ops.load_library('xx.so')`.
This is what GDB says:
```
Thread 1 "python" received signal SIGSEGV, Segmentation fault.
0x00007ffff2c3119f in std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string(std::string const&) ()
from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
```
After some investigation, it seems that as long as the .so file contains a call to `torch::RegisterOperators` (or the deprecated `torch::jit::RegisterOperators`), loading the .so file in Python will trigger the segfault. The function being exported is irrelevant. Even an identity function passing a tensor through unchanged will do.
cc @suo | oncall: jit,triaged | low | Minor |
480,291,098 | TypeScript | Hovering over JSDoc annotation resolves differently to Intellisense | Issue Type: <b>Bug</b>
It's possible when hovering over the JSDoc annotation for it to resolve to a different object to what Intellisense will autocomplete to.
To re-create ...
```
// account.js
/**
 * @class
 */
class Account {
  constructor () {
    this.foo = 'foo'
  }
}

module.exports = {
  Account
}
```
```
// index.js
const { account } = require('./account.js')
/**
 * @param {Account} a
 */
const main = (a) => {
  a.foo = ''
}
```
Hovering over the JSDoc will resolve it to the `lib.d.ts` `Account` interface, but when typing `a.foo` in `main` it will autocomplete to the correct Account class.
VS Code version: Code 1.36.1 (2213894ea0415ee8c85c5eea0d0ff81ecc191529, 2019-07-08T22:56:38.504Z)
OS version: Darwin x64 18.6.0
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i7-6820HQ CPU @ 2.70GHz (8 x 2700)|
|GPU Status|2d_canvas: enabled<br>flash_3d: enabled<br>flash_stage3d: enabled<br>flash_stage3d_baseline: enabled<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>native_gpu_memory_buffers: enabled<br>oop_rasterization: disabled_off<br>protected_video_decode: unavailable_off<br>rasterization: enabled<br>skia_deferred_display_list: disabled_off<br>skia_renderer: disabled_off<br>surface_synchronization: enabled_on<br>video_decode: enabled<br>viz_display_compositor: disabled_off<br>webgl: enabled<br>webgl2: enabled|
|Load (avg)|2, 2, 2|
|Memory (System)|16.00GB (1.87GB free)|
|Process Argv||
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (10)</summary>
Extension|Author (truncated)|Version
---|---|---
scratchpad|awe|0.1.0
insert-unicode|bru|0.6.0
vscode-standardjs|che|1.2.3
gitlens|eam|9.8.5
nunjucks-template|ese|0.1.2
vscode-docker|ms-|0.7.0
addDocComments|ste|0.0.8
code-spell-checker|str|1.7.17
vscodeintellicode|Vis|1.1.8
vscode-todo-highlight|way|1.0.4
</details>
<!-- generated by issue reporter --> | Bug,Domain: JSDoc,Domain: Quick Info | low | Critical |
480,312,107 | pytorch | "PyTorch core" thread local flag | I propose that we define a thread local flag which is false (default value) in PyTorch user code, and true when we have called into "PyTorch core" code (defined to be the set of code we maintain in this codebase). The most obvious place this flag is flipped is when a user calls an operator we define. The semantics of some operators may behave differently depending on if we are in core or not.
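To make the shape of the mechanism concrete, a rough Python sketch (the real implementation would presumably be a C++ `thread_local` plus an RAII guard; all names here are hypothetical):
```python
import threading

_tls = threading.local()

def in_pytorch_core():
    # False by default, i.e. while executing user code.
    return getattr(_tls, "in_core", False)

class CoreGuard:
    """Flip the flag on entry into core code and restore it on exit."""
    def __enter__(self):
        self._prev = in_pytorch_core()
        _tls.in_core = True
    def __exit__(self, *exc):
        _tls.in_core = self._prev
```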
Here are some applications of the flag:
* #7535 is a longstanding, frequently requested feature to have a "global GPU flag" that causes all tensors to be allocated in CUDA by default. We have been wary of implementing this, as such a flag would also affect internal allocations in our library, which probably would not be able to handle this change correctly. With a PyTorch core flag, matters are simple: respect the global GPU flag if !core, and ignore it otherwise. We can similarly make the "default dtype is double" flag more safe this way.
* #23899 wishes to make a major backwards-compatibility breaking change to the stride-handling of some major operations. @VitalyFedyunin proposes that we introduce a flag to let users pick between which behavior they want; @gchanan is concerned it will be too difficult to maintain core code to work in both cases, in this regime. With a PyTorch core flag, respect the memory propagation flag if !core, and ignore it otherwise. | module: internals,triaged,enhancement | low | Minor |
480,314,457 | flutter | Refactor GPU surface APIs take into account the fact that an external view embedder may want to render to the root surface. | Currently, the external view embedder (which was authored later than the GPU surfaces) can only render into overlays. However, this prevents us from presenting a unified view of composition into the root layer as well as overlays without adding hacks to the GPU surfaces to bypass rendering into the root. These hacks come in the form of the `render_to_surface_ ` ivar on the `GPUSurfaceGL`. In the absence of these hacks, extra resources may need to be allocated unnecessarily. Notably, this hack is absent from the software surface.
The GPU surface APIs must be refactored to make the external view embedder a first class citizen of the rasterization subsystem. | platform-ios,engine,e: embedder,a: platform-views,P2,team-ios,triaged-ios | low | Minor |