| id (int64) | repo (68 classes) | title (string) | body (string, nullable) | labels (string) | priority (3 classes) | severity (3 classes) |
|---|---|---|---|---|---|---|
520,491,311 | rust | `--remap-path-prefix` does not apply to secondary files in diagnostics | Since #64151 added some new diagnostics, Rocket's UI tests have had mismatches in stderr that I can't seem to fix. In particular these diagnostics include "secondary file" paths that aren't easily normalizable or remappable.
I haven't tried to make a minimal reproduction yet, but this output should illustrate the problem:
```
error[E0277]: the trait bound `usize: rocket::http::uri::Ignorable<rocket::http::uri::Query>` is not satisfied
- --> $DIR/typed-uri-bad-type.rs:81:34
- |
-81 | uri!(other_q: rest = S, id = _);
- | ^ the trait `rocket::http::uri::Ignorable<rocket::http::uri::Query>` is not implemented for `usize`
- |
- = note: required by `rocket::http::uri::assert_ignorable`
+ --> foo/typed-uri-bad-type.rs:41:16
+ |
+41 | fn other_q(id: usize, rest: S) { }
+ | ^^^^^ the trait `rocket::http::uri::Ignorable<rocket::http::uri::Query>` is not implemented for `usize`
+...
+81 | uri!(other_q: rest = S, id = _);
+ | -------------------------------- in this macro invocation
+ |
+ ::: /home/jeb/code/Rocket/core/http/src/uri/uri_display.rs:467:40
+ |
+467 | pub fn assert_ignorable<P: UriPart, T: Ignorable<P>>() { }
+ | ------------ required by this bound in `rocket::http::uri::assert_ignorable`
(...)
status: exit code: 1
command: "rustc" "tests/ui-fail/typed-uri-bad-type.rs" "-L" "/tmp" "--target=x86_64-unknown-linux-gnu" "--error-format" "json" "-C" "prefer-dynamic" "-o" "/tmp/typed-uri-bad-type.stage-id" "--remap-path-prefix" "/home/jeb/code/Rocket=/Rocket" "--remap-path-prefix" "tests/ui-fail=foo" "-L" "crate=/home/jeb/code/Rocket/target/debug" "-L" "dependency=/home/jeb/code/Rocket/target/debug/deps" "--extern" "rocket_http=/home/jeb/code/Rocket/target/debug/deps/librocket_http-1faeb9d2934513de.rlib" "--extern" "rocket=/home/jeb/code/Rocket/target/debug/deps/librocket-44cf6b9b35acb885.rlib" "-L" "/tmp/typed-uri-bad-type.stage-id.aux" "-A" "unused"
```
Here I'm remapping `tests/ui-fail` to `foo` to demonstrate that `--remap-path-prefix` works at all. Remapping `/home/jeb/code/Rocket` to `/Rocket` is the real goal, because if I can get that to work we should have errors that are identical across build environments. However, it is not actually remapped in the output. Is `--remap-path-prefix` intended to apply to these?
My first attempt at this was to update compiletest to normalize `$SRC_DIR` (laumann/compiletest-rs#198), but that has some problems that I think the `--remap-path-prefix` approach would neatly avoid.
## Meta
`rustc --version --verbose`:
rustc 1.40.0-nightly (1423bec54 2019-11-05)
binary: rustc
commit-hash: 1423bec54cf2db283b614e527cfd602b481485d1
commit-date: 2019-11-05
host: x86_64-unknown-linux-gnu
release: 1.40.0-nightly
LLVM version: 9.0
| A-diagnostics,T-compiler,C-bug,A-reproducibility,D-diagnostic-infra | low | Critical |
520,510,005 | TypeScript | Allow type aliases to reference themselves in type argument positions |
**TypeScript Version:** 3.8.0-dev.20191105
**Search Terms:**
[circular](https://github.com/microsoft/TypeScript/issues?utf8=%E2%9C%93&q=is%3Aissue+is%3Aopen+circular)
**Code**
```ts
type A<T> = { x: T }
type R = A<R> // FAIL
type B = { x: B } // OK
```
**Expected behavior:**
Neither case produces an error.
**Actual behavior:**
Type R is rejected due to a circular reference.
**Playground Link:**
[Link](http://www.typescriptlang.org/play/?ts=3.8.0-dev.20191105&ssl=1&ssc=1&pln=64&pc=3#code/C4TwDgpgBAggPAFQHxQLxQN5QB4C4oJQC+AUKJFAEpqxyUoD0DUAYjAJIAyJZ40AQjSx4ogolCZQA8gGkgA)
**Related Issues:**
https://github.com/microsoft/TypeScript/pull/33050
Regression: https://github.com/microsoft/TypeScript/pull/33050#issuecomment-526637037 | Suggestion,Awaiting More Feedback | medium | Critical |
520,515,906 | flutter | Separate writeable files/cache from main flutter directory | I want to package Flutter for Nix/NixOS, but I cannot because the tool tries to write to its own install directory. It would be nice if Flutter did not attempt to write to the folder it is installed in, as that makes it very hard to package correctly. | c: new feature,tool,P2,team-tool,triaged-tool | low | Major |
520,516,992 | pytorch | torch.stack: bad shape error message | ## 🐛 Bug
torch.stack results in `RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 1 and 4 in dimension 1` errors for perfectly valid input. At first I thought I was misunderstanding how indexing works, but when I changed `dim` in the stack call to 1, I got `RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 1. Got 1 and 4 in dimension 0`, which is even more confusing.
## To Reproduce
Steps to reproduce the behavior:
1. Run this code snippet:
```python
import torch
class_probs = torch.zeros([4,1024,1024], device="cpu")
bg_probs = torch.zeros([1,1024,1024], device="cpu")
# class_probs = torch.zeros([4,1024,1024], device="cuda")
# bg_probs = torch.zeros([1,1024,1024], device="cuda")
print(bg_probs.shape, class_probs.shape)
combo = torch.stack((bg_probs, class_probs), dim=0)
print(combo.shape)
```
## Expected behavior
## Environment
PyTorch version: 1.3.0
Is debug build: No
CUDA used to build PyTorch: 10.1.243
OS: Ubuntu 16.04.6 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609
CMake version: version 3.5.1
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: GeForce RTX 2080 Ti
Nvidia driver version: 418.87.01
cuDNN version: Could not collect
Versions of relevant libraries:
[pip] numpy==1.17.2
[pip] torch==1.3.0
[pip] torchcontrib==0.0.2
[pip] torchfile==0.1.0
[pip] torchnet==0.0.4
[pip] torchvision==0.4.1a0+d94043a
[conda] blas 1.0 mkl
[conda] mkl 2019.4 243
[conda] mkl-service 2.3.0 py37he904b0f_0
[conda] mkl_fft 1.0.14 py37ha843d7b_0
[conda] mkl_random 1.1.0 py37hd6b4f25_0
[conda] pytorch 1.3.0 py3.7_cuda10.1.243_cudnn7.6.3_0 pytorch
[conda] torchcontrib 0.0.2 pypi_0 pypi
[conda] torchfile 0.1.0 pypi_0 pypi
[conda] torchnet 0.0.4 pypi_0 pypi
[conda] torchvision 0.4.1 py37_cu101 pytorch
## Additional context
The error happens for both CUDA and CPU tensors. Also, `torch.cat` works perfectly fine for the same operation!
cc @jlin27 @mruberry | module: docs,module: error checking,triaged | low | Critical |
520,518,949 | TypeScript | Suggestion: prompt rename after "convert named imports to namespace imports" |
## Search Terms
## Suggestion
Nearly every time I use the "convert to namespace imports" refactoring (code action), I do a rename immediately afterwards. It would be much more efficient if VS Code / TS automatically prompted me to rename the namespace as soon as the conversion is done, just like "extract to constant" does.
## Use Cases
## Examples
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Experience Enhancement | low | Critical |
520,538,353 | CS-Notes | Binary search: why is the interval 0 to len(nums) rather than len(nums)-1? | When using binary search to find a range, why is the initial value of h in [this problem](https://github.com/CyC2018/CS-Notes/blob/master/notes/Leetcode%20%E9%A2%98%E8%A7%A3%20-%20%E4%BA%8C%E5%88%86%E6%9F%A5%E6%89%BE.md#6-%E6%9F%A5%E6%89%BE%E5%8C%BA%E9%97%B4) set to len(nums)? In all the other problems it is set to len(nums)-1.
```java
private int binarySearch(int[] nums, int target) {
    int l = 0, h = nums.length; // note the initial value of h
    while (l < h) {
        int m = l + (h - l) / 2;
        if (nums[m] >= target) {
            h = m;
        } else {
            l = m + 1;
        }
    }
    return l;
}
```
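The reason is that this search runs over the half-open interval [l, h), and its answer can be nums.length itself, meaning "target is greater than every element" — a value that an initial h of nums.length - 1 could never produce. A small self-contained demo with hypothetical input values:

```java
public class Main {
    // Same search as above: first index whose value >= target, over the
    // half-open interval [l, h). h starts at nums.length so the result
    // can be nums.length when target exceeds every element.
    static int binarySearch(int[] nums, int target) {
        int l = 0, h = nums.length;
        while (l < h) {
            int m = l + (h - l) / 2;
            if (nums[m] >= target) {
                h = m;
            } else {
                l = m + 1;
            }
        }
        return l;
    }

    public static void main(String[] args) {
        int[] nums = {1, 3, 5};
        System.out.println(binarySearch(nums, 4)); // 2: first value >= 4
        System.out.println(binarySearch(nums, 9)); // 3 == nums.length: none found
    }
}
```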
https://github.com/CyC2018/CS-Notes/blob/master/notes/Leetcode%20%E9%A2%98%E8%A7%A3%20-%20%E4%BA%8C%E5%88%86%E6%9F%A5%E6%89%BE.md#6-%E6%9F%A5%E6%89%BE%E5%8C%BA%E9%97%B4 | question | medium | Minor |
520,548,900 | go | cmd/vet: export list of analysis passes used by vet |
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.13.4 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GO111MODULE=""
GOARCH="amd64"
GOBIN="/home/leighmcculloch/local/bin"
GOCACHE="/home/leighmcculloch/.cache/go-build"
GOENV="/home/leighmcculloch/.config/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GONOPROXY=""
GONOSUMDB=""
GOOS="linux"
GOPATH="/home/leighmcculloch/go"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/home/leighmcculloch/local/bin/go/1.13.4"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/home/leighmcculloch/local/bin/go/1.13.4/pkg/tool/linux_amd64"
GCCGO="gccgo"
AR="ar"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build882821970=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
I was looking to include all the same analysis passes of `go vet` into my own main package using `multichecker` similar to the pattern that I saw @FiloSottile use in FiloSottile/mkcert@82ea753aa2a8050fc7799c18afc81300d8547d1c.
### What did you expect to see?
I expected to find an exported slice/list of the analysis passes that vet uses, so that it is easy to build multicheckers on top of vet.
### What did you see instead?
I saw that the list of analysis passes is defined in go vet's main function, and that folks are just copying that list when building multicheckers that run additional checks on top of vet's checks.
https://github.com/golang/go/blob/78d4560793de65e21199d3c80e9c901833bdaeba/src/cmd/vet/main.go#L35-L58
### Ask
Could we export, in a package inside the `go` repo or the `tools` repo, a slice that holds the analysis passes that vet will run, and have vet reference that slice so that other tools can reference it too?
I'm able to submit a change doing this if this would be welcomed. | NeedsInvestigation,FeatureRequest,Analysis | low | Critical |
520,551,022 | material-ui | [material-ui] Provide more direct documentation access to props of "inherited" components | This actually is a duplicate of #7981, but that issue is quite old and I think this would be worth re-visiting.
## Summary 💡
<!-- Describe how it should work. -->
Many Material UI components delegate to other Material UI components.
There are two main flavors of this delegation:
1. Delegating to a root element wrapped by the main component. Any props not recognized by the main component will be passed down to this root element. Example: `Menu` wraps `Popover` which in turn wraps `Modal`.
2. Delegating to components other than the root element. Props are generally passed to these components via a prop on the main component with a name of `<InnerComponentName>Props`. Example: `Select` leverages `Menu` and receives props for the `Menu` element via its `MenuProps` prop.
It is the first case (delegation to a root element) for which I think the documentation could be considerably improved.
Currently the documentation forces you to follow links to these other components in order to discover the full set of props that a component supports. Many people, especially new users of the library, do not notice (or fully understand) the "Any other props supplied will be provided to the root element" portion and miss the fact that many more props may be supported by the component. I think it would be much better for users of the documentation if all of the props from "inherited" components were included directly. So in the case of `Menu`, the props for `Popover` and `Modal` would be included directly in the `Menu` documentation. It seems like it should be doable for [buildApi.js](https://github.com/mui-org/material-ui/blob/master/docs/scripts/buildApi.js) and [generateMarkdown.js](https://github.com/mui-org/material-ui/blob/master/docs/src/modules/utils/generateMarkdown.js) to be enhanced to automatically include the comprehensive props.
## Examples 🌈
<!--
Provide a link to the Material design specification, other implementations,
or screenshots of the expected behavior.
-->
[This Stack Overflow question](https://stackoverflow.com/questions/58781285/for-material-ui-how-can-i-know-what-the-properties-of-each-components-are) shows one example of users assuming that a component ([AppBar](https://material-ui.com/api/app-bar/) in this case) only supports the properties shown directly on its API page.
## Motivation 🔦
<!--
What are you trying to accomplish? How has the lack of this feature affected you?
Providing context helps us come up with a solution that is most useful in the real world.
-->
I understand that the current way of documenting the properties was an easier approach to implement, but given the place of "Better documentation" within the [roadmap priorities](https://material-ui.com/discover-more/roadmap/#priorities), I think it would be worth the additional complexity in the API doc generation (which is already aware of the inherited component) to allow users to find the comprehensive props more easily/quickly. | docs,package: material-ui,scope: docs-infra | low | Major |
520,563,422 | pytorch | RuntimeError: CUDA error: invalid device ordinal | model = model.to(device)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 426, in to
return self._apply(convert)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 202, in _apply
module._apply(fn)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 224, in _apply
param_applied = fn(param)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 424, in convert
return t.to(device, dtype if t.is_floating_point() else None, non_blocking)
RuntimeError: CUDA error: invalid device ordinal
| module: cuda,triaged | low | Critical |
520,574,738 | neovim | Windows: use AF_UNIX instead of named pipes | `AF_UNIX` is supported on Windows now, and has some advantages over named pipes:
https://devblogs.microsoft.com/commandline/af_unix-comes-to-windows/
libuv issue: https://github.com/libuv/libuv/issues/2537 | enhancement,platform:windows,system | low | Minor |
520,578,850 | rust | As of 1.37.0, `dylib` shared libraries no longer support interpositioning of functions defined in C | I encountered a stable-to-stable linking regression while working with a project that compiles to an x86-64 ELF shared library for Linux. The library exports a Rust interface, but also includes some wrappers of libc functions, written in C, that need to shadow the system implementations via interpositioning. I perform the final linking step using rustc, which works correctly under rustc 1.36.0 but subtly fails under 1.37.0: the libc wrapper functions are not exported as dynamic symbols, causing the program to behave differently at runtime. The offending patchset appears to be https://github.com/rust-lang/rust/pull/59752, and I've managed to construct a minimal example to illustrate the problem...
# Minimal example
The library consists of two files:
* `interpose.rs` defines a single-function Rust API:
```
#![crate_type = "dylib"]
pub fn relax() {
println!("Relax said the night guard");
}
```
* `exit.c` defines an implementation of `exit()` that should shadow the one in libc:
```
#include <stdio.h>
#include <stdlib.h>
void exit(int ign) {
(void) ign;
puts("We are programmed to receive");
}
```
The following short program, `hotel_california.rs`, will be used to test its behavior:
```
extern crate interpose;
use interpose::relax;
use std::os::raw::c_int;
extern {
// NB: Deliberately returns () instead of ! for the purpose of this example.
fn exit(_: c_int);
}
fn main() {
relax();
unsafe { exit(1); }
println!("You can check out any time you like but you can never leave");
}
```
# Expected behavior (past stable releases)
This is how the program used to behave when built with a stable compiler:
```
$ rustc --version
rustc 1.36.0 (a53f9df32 2019-07-03)
$ rustc -Cprefer-dynamic -Clink-arg=exit.o interpose.rs
$ rustc -L. -Crpath hotel_california.rs
$ ./hotel_california
Relax said the night guard
We are programmed to receive
You can check out any time you like but you can never leave
$ echo $?
0
```
Notice that the call to `exit()` gets intercepted and does not, in fact, exit the program.
# Broken behavior (as of 1.37.0 stable)
Newer versions of the compiler result in different program output:
```
$ rustc --version
rustc 1.37.0
$ rustc -Cprefer-dynamic -Clink-arg=exit.o interpose.rs
$ rustc -L. -Crpath hotel_california.rs
$ ./hotel_california
Relax said the night guard
$ echo $?
1
```
# Discussion: symbol table entries
The problem is evident upon examining the static and dynamic symbol tables of the `libinterpose.so` file. When built with rustc 1.36.0, we see that `exit` is exported in the dynamic symbol table (indicated by the `D`):
```
$ objdump -tT libinterpose.so | grep exit$
000000000000118c g F .text 000000000000001a exit
000000000000118c g DF .text 000000000000001a exit
```
In contrast, the output from rustc 1.37.0 doesn't list `exit` in the dynamic symbol table because the static symbol table lists it as a local symbol (`l`) rather than a global one (`g`):
```
$ objdump -tT libinterpose.so | grep exit$
000000000000118c l F .text 000000000000001a exit
```
# Discussion: linker invocation
I was curious to see how rustc was invoking cc to link the program, so I traced the command-line arguments by substituting the fake linker `false`. Here's with rustc 1.36.0:
```
$ rustc -Clinker=false -Cprefer-dynamic -Clink-arg=exit.o interpose.rs
error: linking with `false` failed: exit code: 1
|
= note: "false" "-Wl,--as-needed" "-Wl,-z,noexecstack" "-m64" "-L" "/home/solb/Desktop/rust-1.36/lib/rustlib/x86_64-unknown-linux-gnu/lib" "interpose.interpose.3a1fbbbh-cgu.0.rcgu.o" "interpose.interpose.3a1fbbbh-cgu.1.rcgu.o" "-o" "libinterpose.so" "interpose.54bybojgvbim5uqh.rcgu.o" "-Wl,-zrelro" "-Wl,-znow" "-nodefaultlibs" "-L" "/home/solb/Desktop/rust-1.36/lib/rustlib/x86_64-unknown-linux-gnu/lib" "-Wl,--start-group" "-L" "/home/solb/Desktop/rust-1.36/lib/rustlib/x86_64-unknown-linux-gnu/lib" "-lstd-9895e8982b0a79e7" "-Wl,--end-group" "-Wl,-Bstatic" "/tmp/user/1000/rustchJWjrY/libcompiler_builtins-38e90baf978bc428.rlib" "-Wl,-Bdynamic" "-ldl" "-lrt" "-lpthread" "-lgcc_s" "-lc" "-lm" "-lrt" "-lpthread" "-lutil" "-lutil" "-shared" "exit.o"
= note:
error: aborting due to previous error
```
And with rustc 1.37.0:
```
$ rustc -Clinker=false -Cprefer-dynamic -Clink-arg=exit.o interpose.rs
error: linking with `false` failed: exit code: 1
|
= note: "false" "-Wl,--as-needed" "-Wl,-z,noexecstack" "-m64" "-L" "/usr/lib/rustlib/x86_64-unknown-linux-gnu/lib" "interpose.interpose.3a1fbbbh-cgu.0.rcgu.o" "interpose.interpose.3a1fbbbh-cgu.1.rcgu.o" "-o" "libinterpose.so" "-Wl,--version-script=/tmp/user/1000/rustc7Re7af/list" "interpose.54bybojgvbim5uqh.rcgu.o" "-Wl,-zrelro" "-Wl,-znow" "-nodefaultlibs" "-L" "/usr/lib/rustlib/x86_64-unknown-linux-gnu/lib" "-Wl,--start-group" "-L" "/usr/lib/x86_64-linux-gnu" "-lstd-6c8733432f42c6a2" "-Wl,--end-group" "-Wl,-Bstatic" "/tmp/user/1000/rustc7Re7af/libcompiler_builtins-67541964815c9eb5.rlib" "-Wl,-Bdynamic" "-ldl" "-lrt" "-lpthread" "-lgcc_s" "-lc" "-lm" "-lrt" "-lpthread" "-lutil" "-lutil" "-shared" "-Wl,-soname=libinterpose.so" "exit.o"
= note:
error: aborting due to previous error
```
Notice the newly-added `-Wl,--version-script` flag, which has no knowledge of the symbols from the `exit.o` object file.
# Discussion: static library instead of bare object file
One might be tempted to work around the problem by telling rustc about the object file so it can keep the symbols it defines global. I tried this on rustc 1.36.0:
```
$ ar rs libexit.a exit.o
ar: creating libexit.a
$ rustc -Cprefer-dynamic -L. interpose.rs -lexit
$ rustc -L. -Crpath hotel_california.rs
```
This has a very surprising result: the exit symbol is not present at all in `libinterpose.so`, but it does exist somewhere (the LLVM bitcode for monomorphization, maybe?) that allows the compiler to statically link it into the **executable**:
```
$ objdump -tT libinterpose.so | grep exit$
$ objdump -tT hotel_california | grep exit$
0000000000001337 g F .text 000000000000001a exit
0000000000001337 g DF .text 000000000000001a Base exit
```
This is no good either because it leads to subtly different interposition behavior. For example:
* Before, `exit()` could be further shadowed by libraries loaded via the `LD_PRELOAD` environment variable. Building it directly into the executable breaks this.
* The `cc` and `ld` apply very different optimizations to `exit()` because it is now part of a PIE instead of a PIC object; depending on how wrapping is implemented, this can break it and even result in infinite recursion.
* If a C program links against `libinterpose.so`, it will no longer get the interposed version of `exit()`. This is a very real situation for my project, because it also exports a C API via Rust's FFI.
# Possible mitigation: expose a `-Climit_rdylib_exports` command-line switch
The simplest way to allow users to work around this would be to allow invokers of rustc to opt out of the change introduced by https://github.com/rust-lang/rust/pull/59752. However, the change is likely to have broken other use cases as well, so perhaps it needs to be revisited in more detail.
# See also
The same changeset seems to be causing problems with inline functions, as observed at https://github.com/rust-lang/rust/issues/65610. | A-linkage,P-medium,T-compiler,regression-from-stable-to-stable,C-bug | low | Critical |
520,580,319 | youtube-dl | match-filter with wildcard/regex | I can download all videos of a playlist when I use the complete playlist title in the match-filter option:
`--match-filter "playlist_title = 'playlist name'"`
But when I use only part of the title, it doesn't work. Are wildcards/regexes not supported here? They would be useful when a new playlist with a slightly different name is released from time to time. I want to download all the Champions League highlights from the SkyHD channel. | question | low | Minor |
520,588,828 | TypeScript | Non-null assertions infringe a responsibility of optional chaining | Since @RyanCavanaugh couldn't understand what the problem is, let me re-explain it.
In the following case, it is the responsibility of optional chaining to make the result type include `undefined`.
```ts
// a is string | null | undefined
const a = document.querySelector('_')?.textContent;
```
In the following case, the non-null assertion breaks the safety that optional chaining provides.
```ts
// a is string
const a = document.querySelector('_')?.textContent!;
```
It is obvious that optional chaining was not considered when the non-null assertion operator was designed. TypeScript should reconsider the design of the non-null assertion operator and decide what the best behavior is.
**TypeScript Version:** 3.7.x-dev.20191105
**Search Terms:**
**Code**
```ts
const a = document.querySelector('_')?.textContent!;
```
**Expected behavior:**
a is string | undefined.
**Actual behavior:**
a is string.
**Playground Link:**
**Related Issues:**
| Suggestion,Needs Proposal | medium | Critical |
520,591,820 | pytorch | Indexing with torch tensors and NumPy arrays is different | ## 🐛 Bug
When I use a `numpy.ndarray` boolean index on a tensor, it sometimes goes wrong when the tensor is relatively small.
## To Reproduce
Steps to reproduce the behavior:
```python
np.random.seed(1024)
A = torch.tensor(np.random.rand(26,54))
W = (A > 0.5).numpy()
A[W]
```
Then it reports an error: "IndexError: too many indices for tensor of dimension 2"
However, if you transpose both A and W, it works fine!
```python
A = A.t()
W = W.T
C = A[W]
```
## Expected behavior
It should return a 1d tensor consisting of the elements whose corresponding boolean index is True, as is done for the transposed tensor.
## Environment
Collecting environment information...
PyTorch version: 1.2.0
Is debug build: No
CUDA used to build PyTorch: 10.0
OS: Microsoft Windows 10 Pro
GCC version: (x86_64-posix-seh, Built by strawberryperl.com project) 8.3.0
CMake version: Could not collect
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: GeForce GTX 1050
Nvidia driver version: 431.70
cuDNN version: Could not collect
Versions of relevant libraries:
[pip] numpy==1.16.4
[pip] torch==1.2.0
[pip] torchvision==0.4.0
[conda] blas 1.0 mkl
[conda] mkl 2019.4 245
[conda] mkl-service 2.0.2 py36he774522_0
[conda] mkl_fft 1.0.14 py36h14836fe_0
[conda] mkl_random 1.0.2 py36h343c172_0
[conda] pytorch 1.2.0 py3.6_cuda100_cudnn7_1 pytorch
[conda] torchvision 0.4.0 py36_cu100 pytorch
cc @mruberry | triaged,module: numpy | low | Critical |
520,621,439 | youtube-dl | Add support for akvideo | ## Checklist
- [X] I'm reporting a new site support request
- [X] I've verified that I'm running youtube-dl version **2019.11.05**
- [X] I've checked that all provided URLs are alive and playable in a browser
- [X] I've checked that none of provided URLs violate any copyrights
- [X] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
- Single video: https://akvideo.stream/video/0bvtm479w9iw
- Playlist: no playlist
## Description
I have already developed an extractor; it works but is not very good.
AKVideo supports multiple formats: low, normal, and high resolution.
To extract all resolutions I need to fetch an extra webpage for each format after the main request, and AKVideo requires a sleep before each new URL can be generated.
Can I extract the real URL only after the user chooses a format?
For now I extract all the URLs and return them in the `formats` array.
```python
class AKVideoIE(InfoExtractor):
    _VALID_URL = r'https?://akvideo\.stream/video/(?P<id>\w+)'

    def _real_extract(self, url):
        video_id = self._match_id(url)
        webpage = self._download_webpage(url, video_id)
        title = ''
        formats = []
        videoinfo = re.findall(r'download_video\(\'' + video_id + '\',\'(\w)\',\'(.*)\'\)".*<td nowrap *>(\d+)x(\d+)', webpage)
        for info in videoinfo:
            webpage = self._download_webpage('https://akvideo.stream/dl?op=download_orig&id=' + video_id + '&mode=' + info[0] + '&hash=' + info[1], video_id)
            movie_url = re.findall(r'<a class="dwnb" href="(.*?)"', webpage)
            title = str(movie_url[0]).split("/")[-1]
            formats.append({'url': movie_url[0], 'height': int(info[3]), 'width': int(info[2])})
            time.sleep(3)
        self._sort_formats(formats)
        return {
            'id': video_id,
            'title': title,
            'description': title,
            'formats': formats,
        }
```
| site-support-request | low | Critical |
520,636,829 | flutter | Semantics link does not show link in SemanticsDebugger | Internal: b/144019142
FR form customer:money:
The elements show as '(button)' and '(textfield)'
However for Semantics link=true, '(link)' is not shown in the SemanticsDebugger. | framework,a: accessibility,a: debugging,customer: money (g3),P2,team-framework,triaged-framework | low | Critical |
520,637,411 | flutter | Exception while getting the endpoints for a selection | Internal: b/143844802
No repro instructions unfortunately. This happens to customer:quill clients sporadically.
It's likely that the text was updated on the TextField at the time of the error, as the last redux action received would have attempted to update TextField.text.
```
StateError: Bad state: No element
at _List.get:first (array.dart:121 )
at RenderEditable.getEndpointsForSelection (editable.dart:1264 )
at TextSelectionOverlay._buildToolbar (text_selection.dart:528 )
at _OverlayEntryState.build (overlay.dart:170 )
at StatefulElement.build (framework.dart:4040 )
at ComponentElement.performRebuild (framework.dart:3934 )
at Element.rebuild (framework.dart:3731 )
at StatefulElement.update (framework.dart:4113 )
at Element.updateChild (framework.dart:2886 )
at RenderObjectElement.updateChildren (framework.dart:4845 )
at MultiChildRenderObjectElement.update (framework.dart:5245 )
at Element.updateChild (framework.dart:2886 )
at _TheatreElement.update (overlay.dart:607 )
at Element.updateChild (framework.dart:2886 )
at ComponentElement.performRebuild (framework.dart:3954 )
``` | a: text input,c: crash,framework,customer: quill (g3),P2,team-framework,triaged-framework | low | Critical |
520,643,628 | godot | After bottom tab changes on run, pressing on a node which changes tab does not change back | **Godot version:**
_3.3.2 stable_, also happens in _3.2 beta 1_ and _3.1.1 stable_
**OS/device including version:**
All
**Issue description:**
After the bottom panel switches to Output when you run the game, clicking a node that normally opens its own bottom tab (e.g. AnimationPlayer) no longer switches the panel back. You then have to click another node and then the original one to re-open the desired tab.
**Steps to reproduce:**
Create a tab-opening node (e.g. AnimationPlayer), click on it so that it opens its bottom tab, run the game, then click on the node again.
**Minimal reproduction project:**
[Animation Save Demo.zip](https://github.com/godotengine/godot/files/3828869/Animation.Save.Demo.zip) | bug,topic:editor,confirmed,usability | low | Minor |
520,654,736 | pytorch | Half precision cdist | ## 🚀 Feature
Allow `torch.cdist` to work with half precision tensors
## Motivation
The new `torch.cdist` function doesn't accept half precision tensors as inputs. However, using half precision could speed up the computation of pairwise distances, especially when tensor cores are available to accelerate the matrix multiplications.
cc @ngimel | module: cuda,triaged,module: half,function request,module: distance functions | low | Minor |
520,671,118 | godot | Classes are not considered constants in GDScript | Godot 3.2 alpha1
The following examples all produce an `Expected a constant expression` error:
```gdscript
class Test:
    var a

const test = Test
const test2 = [Test]
const test3 = {"test": Test}
```
I wanted to use a class in a constant, but it's not possible, while it is possible with scripts obtained with `preload`. | bug,topic:gdscript | low | Critical |
520,708,440 | rust | Waker::will_wake() gets mostly defeated by executor optimizations | The Futures `Waker` API provides a [Waker::will_wake()](https://doc.rust-lang.org/beta/std/task/struct.Waker.html#method.will_wake) method which tries to determine whether the 2 `Waker`s are equivalent. It intends to allow `Future`s so determine whether they have to swap a `Waker` or have to can keep an already stored one which is equivalent. That can avoid some churn on atomic refcounts on those `Waker`s.
The `will_wake` method [is implemented by a `PartialEq` derive on `RawWaker`](https://doc.rust-lang.org/beta/src/core/task/wake.rs.html#279-281) - which means `will_wake()` will return `true` only if the pointer as well as the vtable in `RawWaker` are equivalent.
However futures executors are moving more and more towards a design where they lazily create the "real" storeable `Waker`, and only provide a `Waker` reference to tasks they poll. E.g. `futures-rs` uses [WakerRef](https://github.com/rust-lang-nursery/futures-rs/blob/master/futures-task/src/waker_ref.rs) [tokio does the same](https://github.com/tokio-rs/tokio/blob/d5c1119c881c9a8b511aa9000fd26b9bda014256/tokio/src/runtime/task/waker.rs#L9-L28). This avoids an atomic increment and decrement cycle for each task that an executor polls.
However this means the `RawWaker` that the executor passes through the `Context` parameter will now always be different from the one stored inside a `Future`, and `will_wake()` will return false. This causes the `Future` to update the `Waker` reference (2 atomic ops). If the executor polls a couple of sub-tasks which now all swap out their `Waker`s, the original executor optimization could even lead to a de-optimization - since the result is more atomic ops in other places.
Using the `will_wake()` method makes most sense exactly inside the `.poll()` method of a `Future` in order to determine whether a stored `Waker` needs to get updated. This is now the exact same point where the `WakerRef` would return the false negative. Therefore using `.will_wake()` is not that helpful given the state of the executor ecosystem.
So far for the description of the issue. The next question is whether this can be improved or solved. I don't think changing `PartialEq for RawWaker` - e.g. to only compare the data pointer - makes sense. It would likely only lead to false positives (`waker.will_wake(other) == true`), which then lead to missing wakeups, aka live-locks.
The original design of `RawWaker` plus `Vtable` contained a vtable entry for `will_wake()`. This would have definitely helped to solve the issue, since the executors would have been able to overwrite the check and to be able to associate `Waker`s and their `WakerRef`s. However this was removed for simplification purposes.
I think one possible outcome could be to keep this issue as a tracking issue for the deficit. And if there is ever a change to the vtable for other reasons to also add a `will_wake` method back. Up to then it's probably not worth an update.
| C-enhancement,T-libs-api,A-async-await,AsyncAwait-Triaged,C-optimization | low | Minor |
520,751,751 | neovim | API: listen only for cursor state without grid_lines | From: #11367
> 3\. Don't need `lines` / `grid_lines` event at all if there will be API to get HL states. So toggle/ui-option to get only cursor updates will be very good to save rpc bandwidth
Allow listening only for the cursor state, without `lines`/`grid_lines` events. This could be a ui-option like `ext_cursor` which would send updates for both screen line/col positions and buffer line/col positions. | enhancement,performance,api,ui | low | Minor
520,759,825 | rust | Missing object files when linking with LTO | ### Situation
I'm trying to add a dependency to [SPIRV-Cross](https://github.com/KhronosGroup/SPIRV-Cross) for Gecko, and I'm seeing [linking errors](https://treeherder.mozilla.org/logviewer.html#/jobs?job_id=274841436&repo=try&lineNumber=37327) (with undefined symbols) *only* in the "OS X Cross Compiled asan" job (cross-compiled macOS with address sanitizer and fuzzing). Non-fuzzing (non-asan) cross-compiled builds are fine.
This has been blocking a very [large and important change](https://phabricator.services.mozilla.com/D49458) to Gecko for about a week. I would appreciate any hints on how to approach this!
### Problem
The objects of SPIRV-Cross are supposed to reach the linker by the following path:
1. The C++ project is built into `libspirv-cross-rust-wrapper.a` by "cc-rs" that is invoked from the `build.rs` of [spirv_cross](https://github.com/kvark/spirv_cross/tree/wgpu) crate
2. That crate produces a regular Rust library that is statically linked with this `.a` file
3. It is a dependency of other crates, eventually making its way to the `gkrust` crate, which is the root.
4. `libgkrust.a` is a static C library built by Gecko
5. Gecko static libraries are linked into XUL dynamic library
I can inspect `libspirv-cross-rust-wrapper.a` with `llvm-objdump` and I see everything is in place as expected: [objdump-local.txt](https://github.com/rust-lang/rust/files/3829870/objdump-local.txt). I don't know how to properly inspect the `rlib` or `rmeta` of the Rust library products. I can, however, inspect `libgkrust.a` and see that some of the SPIRV-Cross objects didn't make it: [libgkrust-objdump.txt](https://github.com/rust-lang/rust/files/3829874/libgkrust-objdump.txt) (e.g. it has `spirv_cfg.o` but no `spirv_cross.o`). The selection appears to be arbitrary but consistent between compilations. From that, I conclude that the metadata about statically linked objects is not propagated through steps 2-3, which is what Cargo/Rustc are responsible for (!).
I'm able to reproduce the linking errors locally on Linux, by downloading the artifacts and enabling both the ASAN and fuzzing options in [mozconfig-cross.txt](https://github.com/rust-lang/rust/files/3829841/mozconfig-cross.txt). I confirmed that removing the "--enable-fuzzing" option makes it link fine.
I understand that this is a very big project, and it's not trivial to replicate the setup. However, so far, attempts to reproduce this on a reduced test case were not fruitful.
### Further observations
It appears that Gecko has only a handful of dependencies on C/C++ code from the vendored Rust crates, and all of them are either C or Obj-C, using `clang` for compiling (via the "cc-rs" crate). The SPIRV-Cross dependency is the only one in C++ using `clang++`, which makes it rather unique.
I'm also attaching the reduced verbose build log that only rebuilds spirv-cross related pieces:
[detailed-build.log](https://github.com/rust-lang/rust/files/3829865/detailed-build.log).
cc @jrmuizel @alexcrichton | A-linkage,O-macos,A-cross,T-compiler,C-bug | low | Critical |
520,880,283 | rust | Stabilize `#[bench]` and `Bencher`? | I’ll take the liberty of copying a section of @hsivonen’s [Rust 2020 blog post](https://hsivonen.fi/rust2020/):
> ## Non-Nightly Benchmarking
>
> The library support for the `cargo bench` feature has been in the state “[basically, the design is problematic, but we haven’t had anyone work through those issues yet](https://users.rust-lang.org/t/timeline-for-libtest-stability/3123/3)” since 2015. It’s a useful feature nonetheless. Like I said a year ago and the year before, it’s time to let go of the possibility of tweaking it for elegance and just let users use it on non-nighly Rust.
Indeed the existing benchmarking support has basically not changed in years, and I’m not aware of anyone planning to work on it. To keep reserving the right to make breaking changes is not useful at this point. [Custom test frameworks](https://github.com/rust-lang/rust/issues/50297) offer another way forward for when someone does want to work on better benchmarking.
So I’d like to propose a plan:
* Move `test::Bencher` to `std::bench::Bencher`. <del>I have a PR coming soon that</del> <ins>https://github.com/rust-lang/rust/pull/66290</ins> demonstrates that this is possible. This move avoids the need to stabilize the `test` crate.
* Optionally, make some surface-only API tweaks to `Bencher`. For example, the public `bytes` field could become a parameter to some method.
* Although not strictly necessary for changes to an unstable type, we can keep the existing `bytes` field and `iter` method unchanged as unstable + deprecated for a while.
* **Stabilize** the `#[bench]` attribute and just enough of `Bencher` to make it usable with `#[bench]`. (For example, no stable constructor.)
@rust-lang/libs, @rust-lang/lang, do you feel this needs an RFC? | T-lang,T-libs-api,B-unstable,A-libtest | medium | Major |
520,883,237 | flutter | [in_app_purchase] Not receiving receipt in the purchase process | We have implemented subscriptions using the in_app_purchase plugin and we detected that our users are experiencing problems activating the subscription. Our flow is the following:
When a user subscribes to our service we save the first receipt to our backend, and then we use server-to-server notifications to get the renewals, bind each renewal to the user, and extend their access to the service.
The problem we have detected is that a high percentage of users are not receiving the receipt during the purchase flow. It seems that the purchaseUpdatedStream is not receiving the receipt. This happens with a percentage of our Android users and with most of our iOS users. Why does this usually happen?
| p: in_app_purchase,package,team-ecosystem,P2,triaged-ecosystem | low | Major |
520,900,634 | pytorch | Tensorboard GPU Problems | ## 🐛 Bug
When I train a model without importing tensorboard everything works fine. When I import it I get the following warning:
> lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py:26: UserWarning:
> There is an imbalance between your GPUs. You may want to exclude GPU 2 which
> has less than 75% of the memory or cores of GPU 0. You can do so by setting
> the device_ids argument to DataParallel, or by setting the CUDA_VISIBLE_DEVICES
> environment variable.
> warnings.warn(imbalance_warn.format(device_ids[min_pos],
Without importing tensorboard, the training runs on one GPU. When importing tensorboard, the training runs on all four GPUs, which I don't want.
`print("torch.cuda.device_count() with tensorboard ",torch.cuda.device_count())`
torch.cuda.device_count() with tensorboard 4
`print("torch.cuda.device_count() without tensorboard ",torch.cuda.device_count())`
torch.cuda.device_count() without tensorboard 1
## Environment
PyTorch version: 1.1.0
Is debug build: No
CUDA used to build PyTorch: 9.0.176
OS: Debian GNU/Linux 9.9 (stretch)
GCC version: (GCC) 5.4.0
CMake version: Could not collect
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 9.0.176
GPU models and configuration:
GPU 0: Quadro P6000
GPU 1: TITAN X (Pascal)
GPU 2: GeForce GTX 1080 Ti
GPU 3: TITAN X (Pascal)
Nvidia driver version: 390.87
cuDNN version: Could not collect
Versions of relevant libraries:
[pip] numpy==1.16.4
[pip] pytorch-ignite==0.1.2
[pip] torch==1.1.0
[pip] torchvision==0.3.0
[conda] blas 1.0 mkl
[conda] ignite 0.1.2 py37_0 pytorch
[conda] mkl 2019.4 243
[conda] mkl-service 2.0.2 py37h7b6447c_0
[conda] mkl_fft 1.0.14 py37ha843d7b_0
[conda] mkl_random 1.0.2 py37hd81dba3_0
[conda] pytorch 1.1.0 py3.7_cuda9.0.176_cudnn7.5.1_0 pytorch
[conda] torchvision 0.3.0 py37_cu9.0.176_1 pytorch
tensorboard.__version__ 2.0.0
How can I import Tensorboard without changing anything?
Tried:
```
os.environ['CUDA_VISIBLE_DEVICES'] = 0
torch.cuda.set_device(0)
```
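(Editorial note, not from the reporter: one likely reason the snippet above fails is that `os.environ` values must be strings; assigning an `int` raises a `TypeError`. Also, `CUDA_VISIBLE_DEVICES` must be set before anything initializes CUDA, i.e. before the first import that touches the GPU. A minimal sketch of both points:)

```python
import os

# Assigning an int raises TypeError: os.environ only accepts strings.
try:
    os.environ['CUDA_VISIBLE_DEVICES'] = 0
except TypeError as exc:
    print('int value rejected:', exc)

# The string form works; it must be set before CUDA is first initialized,
# i.e. before the first torch/tensorboard import that touches the GPU.
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
assert os.environ['CUDA_VISIBLE_DEVICES'] == '0'
```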
Didn't work | module: cuda,triaged,module: tensorboard | low | Critical
520,904,329 | flutter | `CupertinoContextMenuAction` should have a `trailing` widget rather than just an `IconData trailingIcon` | ## Use case
Currently we can't change the icon color or use other widgets for the trailing slot.
## Proposal
https://github.com/flutter/flutter/blob/9e0df259df9c521d0ffbdf5ba664f6ab30143884/packages/flutter/lib/src/cupertino/context_menu_action.dart#L46
`IconData trailingIcon` -> `Widget trailing` | c: new feature,framework,c: API break,f: cupertino,c: proposal,P3,team-design,triaged-design,fyi-text-input | low | Minor |
520,918,652 | flutter | Dialog maxWidth is not configurable | The `Dialog` class uses a `ConstrainedBox` widget with `constraints: const BoxConstraints(minWidth: 280.0)`.
It would be useful to add `maxWidth` as a constructor parameter since currently all dialogs appear very wide on large screens (e.g. tablet or web). | framework,f: material design,c: proposal,P3,workaround available,team-design,triaged-design | low | Major |
520,934,362 | vscode | [themes] provide command to change theme | There is a command to open the theme prompt (`workbench.action.selectTheme`), however, I would like a command to change the theme (the name of which would be provided as an argument).
The reason I would like this is that I frequently switch between a dark and a light theme, depending on my mood, time of day, etc. Currently, I am forced to use the theme picker, but I would really like to create keyboard shortcuts to switch themes immediately. | feature-request,themes | low | Minor |
520,937,116 | flutter | Path.combine not implemented in the HTML backend | this code below works fine on flutter mobile
but on web it throws an error
```dart
import 'package:flutter/material.dart';
void main() => runApp(App());
class App extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
debugShowCheckedModeBanner: false,
home: Scaffold(
body: SafeArea(
child: Center(
child: Column(
mainAxisSize: MainAxisSize.min,
children: <Widget>[
UnicornOutlineButton(
strokeWidth: 2,
radius: 24,
gradient: LinearGradient(colors: [Colors.black, Colors.redAccent]),
child: Text('OMG', style: TextStyle(fontSize: 16)),
onPressed: () {},
),
SizedBox(width: 0, height: 24),
UnicornOutlineButton(
strokeWidth: 4,
radius: 16,
gradient: LinearGradient(
colors: [Colors.blue, Colors.yellow],
begin: Alignment.topCenter,
end: Alignment.bottomCenter,
),
child: Text('Wow', style: TextStyle(fontSize: 16)),
onPressed: () {},
),
],
),
),
),
),
);
}
}
class UnicornOutlineButton extends StatelessWidget {
final _GradientPainter _painter;
final Widget _child;
final VoidCallback _callback;
final double _radius;
UnicornOutlineButton({
@required double strokeWidth,
@required double radius,
@required Gradient gradient,
@required Widget child,
@required VoidCallback onPressed,
}) : this._painter = _GradientPainter(strokeWidth: strokeWidth, radius: radius, gradient: gradient),
this._child = child,
this._callback = onPressed,
this._radius = radius;
@override
Widget build(BuildContext context) {
return CustomPaint(
painter: _painter,
child: GestureDetector(
behavior: HitTestBehavior.translucent,
onTap: _callback,
child: InkWell(
borderRadius: BorderRadius.circular(_radius),
onTap: _callback,
child: Container(
constraints: BoxConstraints(minWidth: 88, minHeight: 48),
child: Row(
mainAxisSize: MainAxisSize.min,
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
_child,
],
),
),
),
),
);
}
}
class _GradientPainter extends CustomPainter {
final Paint _paint = Paint();
final double radius;
final double strokeWidth;
final Gradient gradient;
_GradientPainter({@required double strokeWidth, @required double radius, @required Gradient gradient})
: this.strokeWidth = strokeWidth,
this.radius = radius,
this.gradient = gradient;
@override
void paint(Canvas canvas, Size size) {
// create outer rectangle equals size
Rect outerRect = Offset.zero & size;
var outerRRect = RRect.fromRectAndRadius(outerRect, Radius.circular(radius));
// create inner rectangle smaller by strokeWidth
Rect innerRect = Rect.fromLTWH(strokeWidth, strokeWidth, size.width - strokeWidth * 2, size.height - strokeWidth * 2);
var innerRRect = RRect.fromRectAndRadius(innerRect, Radius.circular(radius - strokeWidth));
// apply gradient shader
_paint.shader = gradient.createShader(outerRect);
// create difference between outer and inner paths and draw it
Path path1 = Path()..addRRect(outerRRect);
Path path2 = Path()..addRRect(innerRRect);
var path = Path.combine(PathOperation.difference, path1, path2);
canvas.drawPath(path, _paint);
}
@override
bool shouldRepaint(CustomPainter oldDelegate) => oldDelegate != this;
}
```
Here is the full log:
```
Performing hot restart...
Reloaded application in 423ms.
══╡ EXCEPTION CAUGHT BY RENDERING LIBRARY ╞═════════════════════════════════════════════════════════
The following UnimplementedError was thrown during paint():
UnimplementedError
The relevant error-causing widget was:
CustomPaint org-dartlang-app:///packages/mywebapp/main.dart:64:12
When the exception was thrown, this was the stack:
package:dart-sdk/lib/_internal/js_dev_runtime/private/ddc_runtime/errors.dart 194:49 throw_
package:build_web_compilers/lib/ui/src/ui/canvas.dart 2200:5 combine
package:mywebapp/main.dart 115:21 paint
package:flutter/src/rendering/custom_paint.dart 531:12 [_paintWithPainter]
package:flutter/src/rendering/custom_paint.dart 572:7 paint
package:flutter/src/rendering/object.dart 2211:7 [_paintWithContext]
package:flutter/src/rendering/object.dart 184:12 paintChild
package:flutter/src/rendering/box.dart 2508:14 defaultPaint
package:flutter/src/rendering/flex.dart 948:7 paint
package:flutter/src/rendering/object.dart 2211:7 [_paintWithContext]
package:flutter/src/rendering/object.dart 184:12 paintChild
package:flutter/src/rendering/shifted_box.dart 70:14 paint
package:flutter/src/rendering/object.dart 2211:7 [_paintWithContext]
package:flutter/src/rendering/object.dart 184:12 paintChild
package:flutter/src/rendering/shifted_box.dart 70:14 paint
package:flutter/src/rendering/object.dart 2211:7 [_paintWithContext]
package:flutter/src/rendering/object.dart 184:12 paintChild
package:flutter/src/rendering/box.dart 2508:14 defaultPaint
package:flutter/src/rendering/custom_layout.dart 396:5 paint
package:flutter/src/rendering/object.dart 2211:7 [_paintWithContext]
package:flutter/src/rendering/object.dart 184:12 paintChild
package:flutter/src/rendering/proxy_box.dart 123:14 paint
package:flutter/src/material/material.dart 530:11 paint
package:flutter/src/rendering/object.dart 2211:7 [_paintWithContext]
package:flutter/src/rendering/object.dart 184:12 paintChild
package:flutter/src/rendering/proxy_box.dart 123:14 paint
package:flutter/src/rendering/object.dart 384:12 pushLayer
package:flutter/src/rendering/proxy_box.dart 1757:14 paint
package:flutter/src/rendering/object.dart 2211:7 [_paintWithContext]
package:flutter/src/rendering/object.dart 184:12 paintChild
package:flutter/src/rendering/proxy_box.dart 123:14 paint
package:flutter/src/rendering/object.dart 2211:7 [_paintWithContext]
package:flutter/src/rendering/object.dart 184:12 paintChild
package:flutter/src/rendering/proxy_box.dart 123:14 paint
package:flutter/src/rendering/object.dart 2211:7 [_paintWithContext]
package:flutter/src/rendering/object.dart 135:10 _repaintCompositedChild
package:flutter/src/rendering/object.dart 95:5 repaintCompositedChild
package:flutter/src/rendering/object.dart 201:7 [_compositeChild]
package:flutter/src/rendering/object.dart 182:7 paintChild
package:flutter/src/rendering/proxy_box.dart 123:14 paint
package:flutter/src/rendering/object.dart 2211:7 [_paintWithContext]
package:flutter/src/rendering/object.dart 184:12 paintChild
package:flutter/src/rendering/proxy_box.dart 921:16 paint
package:flutter/src/rendering/object.dart 2211:7 [_paintWithContext]
package:flutter/src/rendering/object.dart 184:12 paintChild
package:flutter/src/rendering/proxy_box.dart 123:14 paint
package:flutter/src/rendering/proxy_box.dart 2464:13 paint
package:flutter/src/rendering/object.dart 2211:7 [_paintWithContext]
package:flutter/src/rendering/object.dart 184:12 paintChild
package:flutter/src/rendering/proxy_box.dart 123:14 paint
package:flutter/src/rendering/object.dart 2211:7 [_paintWithContext]
package:flutter/src/rendering/object.dart 135:10 _repaintCompositedChild
package:flutter/src/rendering/object.dart 95:5 repaintCompositedChild
package:flutter/src/rendering/object.dart 201:7 [_compositeChild]
package:flutter/src/rendering/object.dart 182:7 paintChild
package:flutter/src/rendering/proxy_box.dart 123:14 paint
package:flutter/src/rendering/object.dart 2211:7 [_paintWithContext]
package:flutter/src/rendering/object.dart 184:12 paintChild
package:flutter/src/rendering/proxy_box.dart 123:14 paint
package:flutter/src/rendering/proxy_box.dart 3174:11 paint
package:flutter/src/rendering/object.dart 2211:7 [_paintWithContext]
package:flutter/src/rendering/object.dart 184:12 paintChild
package:flutter/src/rendering/box.dart 2508:14 defaultPaint
package:flutter/src/rendering/stack.dart 589:5 paintStack
package:flutter/src/rendering/stack.dart 597:7 paint
package:flutter/src/rendering/object.dart 2211:7 [_paintWithContext]
package:flutter/src/rendering/object.dart 184:12 paintChild
package:flutter/src/rendering/proxy_box.dart 123:14 paint
package:flutter/src/rendering/object.dart 2211:7 [_paintWithContext]
package:flutter/src/rendering/object.dart 184:12 paintChild
package:flutter/src/rendering/proxy_box.dart 123:14 paint
package:flutter/src/rendering/object.dart 2211:7 [_paintWithContext]
package:flutter/src/rendering/object.dart 184:12 paintChild
package:flutter/src/rendering/proxy_box.dart 123:14 paint
package:flutter/src/rendering/object.dart 2211:7 [_paintWithContext]
package:flutter/src/rendering/object.dart 184:12 paintChild
package:flutter/src/rendering/proxy_box.dart 123:14 paint
package:flutter/src/rendering/object.dart 2211:7 [_paintWithContext]
package:flutter/src/rendering/object.dart 184:12 paintChild
package:flutter/src/rendering/proxy_box.dart 123:14 paint
package:flutter/src/rendering/object.dart 2211:7 [_paintWithContext]
package:flutter/src/rendering/object.dart 184:12 paintChild
package:flutter/src/rendering/proxy_box.dart 123:14 paint
package:flutter/src/rendering/object.dart 2211:7 [_paintWithContext]
package:flutter/src/rendering/object.dart 184:12 paintChild
package:flutter/src/rendering/view.dart 210:14 paint
package:flutter/src/rendering/object.dart 2211:7 [_paintWithContext]
package:flutter/src/rendering/object.dart 135:10 _repaintCompositedChild
package:flutter/src/rendering/object.dart 95:5 repaintCompositedChild
package:flutter/src/rendering/object.dart 937:29 flushPaint
package:flutter/src/rendering/binding.dart 346:19 drawFrame
package:flutter/src/widgets/binding.dart 774:13 drawFrame
package:flutter/src/rendering/binding.dart 283:5 [_handlePersistentFrameCallback]
package:flutter/src/scheduler/binding.dart 1101:15 [_invokeFrameCallback]
package:flutter/src/scheduler/binding.dart 1040:9 handleDrawFrame
package:flutter/src/scheduler/binding.dart 849:7 <fn>
package:dart-sdk/lib/_internal/js_dev_runtime/private/isolate_helper.dart 48:19 internalCallback
The following RenderObject was being processed when the exception was fired: RenderCustomPaint#dc268 relayoutBoundary=up4:
creator: CustomPaint ← UnicornOutlineButton ← Column ← Center ← MediaQuery ← Padding ← SafeArea ←
_BodyBuilder ← MediaQuery ← LayoutId-[<_ScaffoldSlot.body>] ← CustomMultiChildLayout ←
AnimatedBuilder ← ⋯
parentData: offset=Offset(0.0, 0.0); flex=null; fit=null (can use size)
constraints: BoxConstraints(0.0<=w<=1920.0, 0.0<=h<=Infinity)
size: Size(88.0, 48.0)
This RenderObject had the following descendants (showing up to depth 5):
child: RenderSemanticsGestureHandler#078b7 relayoutBoundary=up5 NEEDS-PAINT
child: RenderPointerListener#fd406 relayoutBoundary=up6 NEEDS-PAINT
child: RenderSemanticsAnnotations#9008b relayoutBoundary=up7 NEEDS-PAINT
child: RenderMouseRegion#dec56 relayoutBoundary=up8 NEEDS-PAINT
child: RenderSemanticsGestureHandler#cea3a relayoutBoundary=up9 NEEDS-PAINT
════════════════════════════════════════════════════════════════════════════════════════════════════
Another exception was thrown: UnimplementedError
```
If I comment out this line, everything is OK and works fine, but I need to subtract one path from another. Actually, all path operations throw errors on Flutter web.
`var path = Path.combine(PathOperation.difference, path1, path2);`
Tested on macOS Catalina and Windows 10.
`flutter doctor -v`:
```
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 3.5)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin version 40.2.2
• Dart plugin version 191.8593
• Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b49-5587405)
[✓] Connected device (2 available)
• Chrome • chrome • web-javascript • Google Chrome 78.0.3904.97
• Web Server • web-server • web-javascript • Flutter Tools
• No issues found!
```
my original post on SO https://stackoverflow.com/questions/55395641/outlined-transparent-button-with-gradient-border-in-flutter/55638138 | c: crash,framework,customer: crowd,platform-web,a: images,customer: webeap,e: web_html,has reproducible steps,customer: web10,customer: thrive,P3,team: skip-test,team-web,triaged-web,found in release: 3.13,found in release: 3.17 | high | Critical |
521,013,506 | terminal | Feature request: 'age' output visually | Similar to #3519, when watching logs, or build processes, similar lines being shown repeatedly can often make it hard to determine if anythings happening (as each new page of data looks the same as the previous one)
#3519 mentions smooth scrolling, but it would also be useful to apply some visual effect to old output. **This is a bit unusual, I don't know of any other terminal that does this, but it would also be very useful**
E.g., reduce brightness by X% for output older than 5 minutes, and by Y% for output older than 15 minutes. | Issue-Feature,Area-Rendering,Area-Extensibility,Product-Terminal | low | Minor
521,041,088 | pytorch | Dispatch key reorganization | Motivation:
* Projects like XLA occasionally need the ability to override meta operators in PyTorch. Right now, it's not possible to do so using a normal override at XLA, as meta operators get processed before we get to the XLA override. This leads to things like the `generic_convolution` hack to make things work out.
* There is some confusion about what the priority of various "fallback" mechanisms are. In the current dispatcher, we have a per-operator default fallback kernel, as well as per-backend fallback. Which one takes precedence?
* Profiling and tracing are traditionally code generated as part of autograd, even though they are separate concerns. We should separate these concerns.
Right now, we define dispatch keys Variable, XLA, CPU; we attempt dispatches to concrete implementations in that order. The new pseudocode for dispatch looks like this:
```
# included_keys and excluded_keys are thread local
# get_dispatch_keys extracts dispatch keys from tensor arguments in args
# dispatch_set is an ordered set; first() retrieves the highest priority dispatch key in the set

def dispatch(op, args):
    dispatch_set = (get_dispatch_keys(args) & included_keys) - excluded_keys
    return dispatchWithSet(dispatch_set, op, args)

def dispatchWithSet(dispatch_set, op, args):
    dispatch_key = dispatch_set.first()
    if dispatch_key is None:
        raise RuntimeError("no dispatch key available")
    op_fn = op.table[dispatch_key]
    if op_fn is not None:
        # TODO: It would be convenient if dispatch_set were passed along to these
        # calls too, so redispatch could be written as dispatchWithSet(dispatch_set.remove(dispatch_key), op, args)
        return op_fn(args)
    fallback_fn = fallback_table[dispatch_key]
    return fallback_fn(dispatch_set, dispatch_key, op, args)

# reference implementations

def fallthrough_fn(dispatch_set, dispatch_key, op, args):
    return dispatchWithSet(dispatch_set.remove(dispatch_key), op, args)

def error_fn(dispatch_set, dispatch_key, op, args):
    raise RuntimeError("no implementation")
```
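A runnable toy model may make the priority ordering and fallthrough behavior above concrete (the names `dispatch_with_set`, `PRIORITY`, and the key set here are illustrative sketches, not the actual dispatcher):

```python
# Toy model of the proposed dispatch: keys in priority order, a per-op kernel
# table, a per-key fallback table, and a fallthrough that strips the current
# key and redispatches.
PRIORITY = ['Tracing', 'Autograd', 'XLA', 'CPU']  # highest priority first

def error_fn(keys, key, op, args):
    raise RuntimeError('no implementation of %s for %s' % (op['name'], key))

def fallthrough_fn(keys, key, op, args):
    return dispatch_with_set([k for k in keys if k != key], op, args)

fallback_table = {'Tracing': fallthrough_fn, 'Autograd': fallthrough_fn,
                  'XLA': error_fn, 'CPU': error_fn}

def dispatch_with_set(keys, op, args):
    if not keys:
        raise RuntimeError('no dispatch key available')
    key = keys[0]
    fn = op['table'].get(key)
    if fn is not None:
        return fn(args)
    return fallback_table[key](keys, key, op, args)

def dispatch(op, args, tensor_keys):
    # Intersect the tensor's keys with the global priority order.
    return dispatch_with_set([k for k in PRIORITY if k in tensor_keys], op, args)

add = {'name': 'add', 'table': {'CPU': lambda args: sum(args)}}
# Autograd has no 'add' kernel, so its fallthrough fallback skips it
# and dispatch lands on the CPU kernel.
assert dispatch(add, [1, 2], {'Autograd', 'CPU'}) == 3
```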
The ordering of dispatch keys in the set specify the priority of various handlers. The proposed new dispatch order is as follows:
1. Profiling
2. Tracing
3. XLAPreAutograd, MSNPUPreAutograd...
4. CompoundOperator
5. Autograd
6. XLA, MSNPU...
7. CPU
For each backend, you can register both a per-op handler and a backend-global handler; the per-op handler takes precedence. There is a default fallback; we suggest the default be to raise an error (to avoid unintentional fallthrough when it is not permissible).
Some things to note:
* Fallthrough performance is important. The reference implementation above processes fallthrough in a loop, but a better optimization is to maintain a set of fallthrough keys, and mask them out of the dispatch set immediately.
* Fallthrough does NOT add a dispatch key to the excluded set. If you need to redispatch and skip earlier layers which don't add themselves to the excluded key set (Compound, in particular), you MUST call `dispatchWithSet` and mask out the key you don't want to get redispatched to.
Explanation of the new keys
* CompoundOperator. Implementations of compound operators are registered at this layer. This is a replacement for the current "fallback" dispatch key. It occurs BEFORE variable, since CompoundOperators may call other CompoundOperators and Variables. Usually, CompoundOperators are terminal: there are never implementations for Variable/XLA/CPU, because a CompoundOperator will not redispatch; it will usually call other functions. Every tensor has CompoundOperator set.
* XLAPreAutograd. Backend-specific autograd implementations. Sometimes, a backend needs a different implementation of a derivative that was already defined in PyTorch, or needs to override a meta operator. In those cases, it needs a dispatch key that occurs before MetaOperator/Variable. So we insert a pre-autograd hook for every extensible backend here. XLA tensors would have both the XLA and XLAPreAutograd dispatch keys set.
* Autograd: This is just what Variable used to be, but with Profiling and Tracing removed
Some other changes related to this proposal:
* Our `variable_factories.h` currently always turns off VariableDispatch regardless of whether the op is compound or not. In other words, we treat all factory functions as non-compound ops, which is confusing. It'd be helpful to call compound factory ops without `AutoNonVariableTypeMode`. That way factory functions are no longer special; they're the same as normal ops.
**CHANGELOG**
* Jan 16, 2020. Updated to not privilege fallthrough or error reporting: they're expressible purely as fallthrough functions, but we special case them.
**APPENDIX: Optimized fallthrough.** This implementation applies the optimization of masking out fallthrough immediately, so that no loop is necessary.
```
def dispatch(op, args):
    dispatch_set = (get_dispatch_keys(args) & included_keys) - excluded_keys
    return dispatchWithSet(dispatch_set, op, args)

def dispatchWithSet(dispatch_set, op, args):
    dispatch_set_without_fallthrough = dispatch_set - op.fallthrough_keys
    dispatch_key = dispatch_set_without_fallthrough.first()
    if dispatch_key is None:
        raise RuntimeError("no dispatch key available")
    op_fn = op.table[dispatch_key]
    if op_fn is not None:
        return op_fn(args)
    fallback_fn = fallback_table[dispatch_key]
    if fallback_fn == error_fn:
        raise RuntimeError("no implementation")
    else:
        return fallback_fn(dispatch_set, dispatch_key, op, args)
``` | module: internals,triaged | low | Critical |
521,048,500 | go | encoding/json: tag `json:"-"` doesn't hide an embedded field | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.13.3 linux/amd64
$ gotip version
go version devel +696c41488a Mon Nov 11 15:37:55 2019 +0000 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes, see `gotip` version.
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/ainar/.cache/go-build"
GOENV="/home/ainar/.config/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GONOPROXY=""
GONOSUMDB=""
GOOS="linux"
GOPATH="/home/ainar/go"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/home/ainar/go/go1.13"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/home/ainar/go/go1.13/pkg/tool/linux_amd64"
GCCGO="/usr/bin/gccgo"
AR="ar"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD="/home/ainar/dev/tmp/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build199093238=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
``` go
type A struct {
Name string `json:"name"`
}
type B struct {
A
Name string `json:"-"`
}
```
https://play.golang.org/p/uKY3umGYEp3
### What did you expect to see?
Either:
``` none
<nil> {}
```
Or:
``` none
some error about tag conflict
```
### What did you see instead?
``` none
<nil> {"name":"1234"}
``` | NeedsInvestigation | medium | Critical |
521,088,349 | go | image/gif: decode fails with "gif: too much image data" |
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.13.4 windows/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
set GO111MODULE=
set GOARCH=amd64
set GOBIN=
set GOCACHE=C:\Users\User\AppData\Local\go-build
set GOENV=C:\Users\User\AppData\Roaming\go\env
set GOEXE=.exe
set GOFLAGS=
set GOHOSTARCH=amd64
set GOHOSTOS=windows
set GONOPROXY=
set GONOSUMDB=
set GOOS=windows
set GOPATH=E:\vagrant\go
set GOPRIVATE=
set GOPROXY=https://proxy.golang.org,direct
set GOROOT=c:\go
set GOSUMDB=sum.golang.org
set GOTMPDIR=
set GOTOOLDIR=c:\go\pkg\tool\windows_amd64
set GCCGO=gccgo
set AR=ar
set CC=gcc
set CXX=g++
set CGO_ENABLED=1
set GOMOD=
set CGO_CFLAGS=-g -O2
set CGO_CPPFLAGS=
set CGO_CXXFLAGS=-g -O2
set CGO_FFLAGS=-g -O2
set CGO_LDFLAGS=-g -O2
set PKG_CONFIG=pkg-config
set GOGCCFLAGS=-m64 -mthreads -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=C:\Users\User\AppData\Local\Temp\go-build365934466=/tmp/go-build -gno-record-gcc-switches
</pre></details>
### What did you do?
I have attached a zip file with three gif images that image/gif can't open (I get the message "gif: too much image data") but that other viewers can open fine. I assume they are actually invalid but perhaps image/gif could be changed to be more forgiving?
[unknown.zip](https://github.com/golang/go/files/3832483/unknown.zip) | NeedsInvestigation | low | Critical |
521,093,946 | go | crypto/tls: improve default performance of SupportsCertificate | As discussed in https://golang.org/cl/205059, SupportsCertificate requires c.Leaf to be set not to be extremely slow (because it needs to re-parse the leaf every time). This also impacts automatic selection from multiple Certificates candidates.
There are multiple solutions suggested on the CL, I will pick one and turn this into a proposal for further discussion. | Performance,NeedsDecision | low | Major |
521,097,233 | pytorch | `print` uses lots of GPU memory | ## 🐛 Bug
I'm not totally certain if this is a bug or just an oddity I don't understand, but calling Python's `print` function on a GPU tensor uses a lot of GPU memory.
That's probably a reasonably low priority bug by itself, but it's also kind of a strange one, so I thought I'd raise it in case it's indicative of something else going wrong somewhere.
## To Reproduce
Consider the following three code snippets.
```python
# Snippet 1
import torch
torch.cuda.reset_max_memory_allocated()
x = torch.rand(10000, device='cuda')
print(torch.cuda.max_memory_allocated())
```
```python
# Snippet 2
import torch
torch.cuda.reset_max_memory_allocated()
x = torch.rand(10000, device='cuda')
print(x.cpu())
print(torch.cuda.max_memory_allocated())
```
```python
# Snippet 3
import torch
torch.cuda.reset_max_memory_allocated()
x = torch.rand(10000, device='cuda')
print(x)
print(torch.cuda.max_memory_allocated())
```
On my machine the first two both print out `40448`, but the third prints out `173568`. Several times as much, just to print out a few numbers!
## Expected behavior
It's not clear to me why this should use so much memory. The second example demonstrates that this information can clearly be printed out without incurring any extra memory cost.
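Until the underlying allocation is understood, a tiny formatting helper built on the snippet 2 observation sidesteps the cost. `host_repr` is a made-up name, and the code assumes only the public `Tensor.is_cuda` / `Tensor.cpu()` API:

```python
def host_repr(t):
    # Move device tensors to host memory before formatting, so that
    # printing never triggers GPU-side scratch allocations (snippet 2).
    return repr(t.cpu() if getattr(t, "is_cuda", False) else t)
```

With this, `print(host_repr(x))` behaves like snippet 2 for CUDA tensors and is a plain `repr` for everything else.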
## Environment
- PyTorch Version (e.g., 1.0): tested with both 1.2.0 and 1.3.0
- OS (e.g., Linux): Linux
- How you installed PyTorch (`conda`, `pip`, source): conda
- Build command you used (if compiling from source): N/A
- Python version: 3.7.5
- CUDA/cuDNN version: 10.1
- GPU models and configuration: GeForce RTX 2080 Ti
- Any other relevant information: -
| module: printing,module: cuda,triaged,enhancement | low | Critical |
521,098,806 | pytorch | [feature request] Print some measure of fragmentation at CUDA out-of-memory | It would be nice to include by default some information about memory fragmentation into the default CUDA OOM exception message (and not just amount of allocated / cached memory). It may help to understand OOM's coming from variable batch size. | module: cuda,triaged,enhancement | low | Minor |
521,098,955 | rust | Add `RUST_BACKTRACE=full` to CI Docker images | Currently, none of the CI docker builders have backtraces enabled. This can make debugging a failed PR very time consuming - since the panic message itself is often useless, it can be necessary to run the entire CI script locally to generate a proper backtrace.
If all of the CI images had backtraces enabled, it would be possible to get a backtrace directly from the logs, saving PR authors a significant amount of time.
I wasn't sure the best way to go about this - I didn't want to duplicate `RUST_BACKTRACE` across every single `Dockerfile`, but there doesn't seem to be any infrastructure in place for sharing common things between the various `Dockerfile`s.
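A low-duplication option would be a single `ENV` line in a shared base image. This is only a sketch, under the assumption that the CI images can share a common base layer (which, per the above, may not exist today); otherwise the same line would have to be repeated in each `Dockerfile`:

```dockerfile
# Enable full backtraces for every build/test run in this image.
ENV RUST_BACKTRACE=full
```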
This came up in https://github.com/rust-lang/rust/pull/60026#issuecomment-552238786 | C-enhancement,T-infra | low | Critical |
521,117,086 | kubernetes | Add Container Storage Interface integration to conformance | <!-- Please only use this template for submitting enhancement requests -->
**What would you like to be added**:
Continuing the discussion in https://github.com/kubernetes/kubernetes/issues/65155#issuecomment-510247066, while we can't test provider-specific CSI drivers in conformance, we can test provider-agnostic drivers like CSI hostpath, and make sure that K8s distributions are properly supporting the CSI interface.
There are some challenges that need to be solved before we can test CSI in conformance:
* sig-storage has refactored some 30+ [tests](https://github.com/kubernetes/kubernetes/tree/master/test/e2e/storage/testsuites) to be driver-agnostic (can run against any driver, in-tree or CSI). However, that currently makes it difficult to promote those tests to conformance using the current conformance tooling because the tests case definitions are now a library shared by many drivers. We either need to: 1) enhance the current conformance tooling to handle conditional test cases based off of other inputs (such as the driver being tested) or 2) duplicate all of the test cases to a conformance-specific suite.
* CSI drivers need to be privileged in order to do mount propagation on kubelet. I forget if conformance tests allow privileged containers?
* CSI hostpath needs "/dev" access on the host to support raw block (filesystem PVC is fine, the driver just writes to the container fs). It may be that we can't test raw block in conformance.
* In addition, we have some test cases outside of our driver suite that uses the [mock driver](https://github.com/kubernetes/kubernetes/blob/master/test/e2e/storage/csi_mock_volume.go), and checks various CSI calls toggled by different settings (via K8s objects like CSIDriver, as well as CSI driver config options). Most of the tests validate that the right CSI calls are made by grepping mock driver logs for specific msgs since the mock driver isn't actually very functional (compared to the hostpath driver). Right now most of these test cases are testing beta features, but it would be a good idea to write a very basic test testing all the GA features. These test cases may be less functional than the generic driver suite since the mock driver doesn't actually write any data and most of our functional tests check for data persistence.
**Why is this needed**:
It is important to make sure that all K8s distributions support the CSI spec.
@kubernetes/sig-storage-feature-requests
cc @johnbelamaric | sig/storage,kind/feature,lifecycle/frozen | low | Major |
521,119,068 | godot | Consensus and media for translation partners | I put it here and not in the documentation because this problem affects the editor's translations.
Recently there was a problem between translation collaborators: a new user, not knowing the translation conventions we have been following, began to make changes without prior notice. This caused quite a lot of work reverting changes, and no small amount of discussion trying to make the user understand that this kind of change should be discussed before acting alone. The problem also made something else clear: we need a medium for reaching agreement between translation collaborators:
- There is already a Discord channel in Spanish, but it is general-purpose; when there is a debate about translations, it often gets lost among off-topic conversations, and I imagine the same will happen with the other languages. We could reach consensus directly in GitHub, but that medium seems more oriented to programming issues, editor problems and so on, so each language needs a medium that brings together all its collaborators and makes translation consensus easier.
- Another issue is that, precisely because Weblate is open, it is difficult to control when a new user joins and makes very drastic changes without consulting anyone. I do not know if permission levels can be set, but regardless of this, each language would need 1 or 2 moderators, so that those who focus on (for example) Spanish translations moderate the language they actually collaborate in, and we do not have people trying to mediate in a language they do not know.
Greetings and thank you for your attention. | discussion,topic:editor,documentation | low | Major |
521,137,064 | vscode | Add vscode.workspace.fs.createWriteStream(). | (see #84175)
For the Python extension, we currently use node's fs.createWriteStream() for a variety of purposes:
1. downloading multi-MB files (and tracking progress)
* calling into a third-party library that expects an `fs.WriteStream`
2. writing generated data out to a file a chunk at a time
3. writing log data out to a file, one entry at a time
For example:
* https://github.com/microsoft/vscode-python/blob/master/src/client/debugger/extension/adapter/logging.ts#L55
* https://github.com/microsoft/vscode-python/blob/master/src/client/common/net/fileDownloader.ts#L39
@jrieken | feature-request,api,file-io,api-proposal | medium | Critical |
521,162,159 | TypeScript | Drop emit of last semicolon in single-line object literal types | Tracking follow-up to #33402
When synthetic object literal type nodes are printed on a single line, they come out like
```ts
type X = { a: string; b: string; };
```
While working on semicolon formatting features, I briefly had the formatter remove the semicolon after `b: string;`. The result was well-received but I ultimately decided the logic didn’t belong in the formatter; if this was going to be the canonical formatting of single-line object type literals, they should simply come out of the emitter that way. | Suggestion | low | Minor |
521,168,590 | flutter | Refactor the Animation / section organizer demo in Flutter Gallery to use LinkedScrollView-like logic | The Animation / section organizer demo is the last place using the deprecated jumpToWithoutSettling. It's using it as a workaround to the way it was designed. It currently uses scroll notification listeners to force two unrelated lists to share a scroll position. A better solution would be to have them share a scroll position, in a similar way to how the LinkedScrollView test does its thing.
(I'm trying to remove jumpToWithoutSettling, since it's been deprecated since forever.) | team,framework,f: material design,f: scrolling,P2,c: tech-debt,team-design,triaged-design | low | Minor |
521,201,072 | TypeScript | Suggestion: Config files authored in JS should automatically have intellisense support | Some libraries (e.g. Babel, Webpack, ESLint) use JS files for their config files.
We don't automatically provide IntelliSense support for these - however, the existing definitions (at least for Babel and Webpack) do provide types that describe the module shape of the files, so they can be manually set up today with a JSDoc comment.
It may be possible to provide a mechanism for associating these config files with their @types packages, and allowing those packages to specify that a given type expresses the module shape of the config file.
Courtesy of @uniqueiniquity
First suggested here: https://developercommunity.visualstudio.com/content/idea/385119/better-webpack-support.html
| Suggestion | low | Minor |
521,231,700 | vscode | breadcrumb menu doesn't need to start in disclosed state on go files | Go can only have one package in a file so it's unnecessary (pointless) to force another click on the disclosure triangle to show the rest of the menu to see the functions you wanted to see to begin with.
<img width="459" alt="Screen Shot 2019-11-11 at 4 42 48 PM" src="https://user-images.githubusercontent.com/20665/68627465-6b06a780-04a3-11ea-90ab-5be34b5e5fa9.png">
| under-discussion,breadcrumbs | low | Minor |
521,232,702 | pytorch | [jit] Script class attributes aren't automatically added | See https://discuss.pytorch.org/t/jit-scripted-attributes-inside-module/60645
cc @suo | oncall: jit,triaged | low | Minor |
521,235,533 | go | math/big: apply simplification suggested in CL 73231 (or abandon) | Reminder issue: Decide whether to apply https://go-review.googlesource.com/c/go/+/73231 for Go 1.15 (early) or abandon.
Specifically: The change must be applied to the new function lehmerSimulate. Also, need to verify that there's no performance regression.
| NeedsDecision | low | Major |
521,235,619 | go | cmd/compile: infinite loop in -m=2 | This compilation unit goes into an infinite loop when compiled with `-m=2`:
```
package x
type Node struct {
Orig *Node
}
func (n *Node) copy() *Node {
x := *n
x.Orig = &x
return &x
}
func f(n0 *Node) {
n1 := n0.copy()
n2 := n1.copy()
sink = n2
}
var sink *Node
```
Distilled from cmd/compile/internal/gc, pointed out by @dr2chase . | NeedsFix | low | Major |
521,253,367 | go | x/tools/gopls: improve error message when using wrong go command with Go source distribution | #### What did you do?
From the [go/src/cmd](https://github.com/golang/go/tree/master/src/cmd) dir:
```
gopls -rpc.trace -v check compile/main.go
```
#### What did you expect to see?
The command succeeds.
#### What did you see instead?
The command times out after 30s. Output: [gopls.txt](https://github.com/golang/go/files/3833678/gopls.txt)
#### Build info
```
golang.org/x/tools/gopls 0.2.0
golang.org/x/tools/[email protected] h1:ddCHfScTYOG6auAcEKXCFN5iSeKSAnYcPv+7zVJBd+U=
github.com/BurntSushi/[email protected] h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
github.com/sergi/[email protected] h1:Kpca3qRNrduNnOQeazBd0ysaKrUJiIuISHxogkT9RPQ=
golang.org/x/[email protected] h1:8gQV6CLnAEikrhgkHFbMAEhagSSnXWGV915qUMm9mrU=
golang.org/x/[email protected] h1:FNzasIzfY1IIdyTs/+o3Qv1b7YdffPbBXyjZ5VJJdIA=
golang.org/x/[email protected] h1:9zdDQZ7Thm29KFXgAX/+yaf3eVbP7djjWp/dXAppNCc=
honnef.co/go/[email protected] h1:3JgtbtFHMiCmsznwGVTUWbgGov+pVqnlf1dEJTNAXeM=
```
#### Go info
```
go version go1.13.4 darwin/amd64
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/russell/Library/Caches/go-build"
GOENV="/Users/russell/Library/Application Support/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GONOPROXY=""
GONOSUMDB=""
GOOS="darwin"
GOPATH="/Users/russell/go"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/local/Cellar/go/1.13.4/libexec"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/local/Cellar/go/1.13.4/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
AR="ar"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD="/Users/russell/dev/go/go/src/cmd/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/bk/1vdjn7vd19x41x0ztsbvmw400000gq/T/go-build918707310=/tmp/go-build -gno-record-gcc-switches -fno-common"
```
| gopls,Tools | medium | Critical |
521,256,785 | godot | Modifying a bone's transform will freeze the pose | **Godot version:**
3.2 Beta 1
**OS/device including version:**
Windows 10
**Issue description:**
Afaik there was a breaking change in `Skeleton` where the function `set_bone_global_pose` has been renamed to `set_bone_global_pose_override` with two additional arguments `(amount, persistent)`
My understanding is, that `get_bone_global_pose` and `set_bone_global_pose` complemented each other. So that you could actually get a bone's transform by using `get_bone_global_pose`, modify for example the rotation and put it back to the skeleton's transforms by using `set_bone_global_pose`, like stated here in thise reddit topic: https://www.reddit.com/r/godot/comments/b1aqdq/how_to_set_global_y_rotation_of_a_3d_bone/
@fire recently pointed out, that for deprecation purposes, the old method will internally call the new `set_bone_global_pose_override` with arguments `amount = 1.0` and `persistent = true`.
https://github.com/godotengine/godot/issues/32637#issuecomment-539591516
**So here starts the problem / bug I'm facing:**
When I use the simplest setup (getting a bone's global pose, making no changes to it, and then setting it back with the new method), the bone then stands still. (I'm using AnimationTrees)
If you want to "unfreeze" bones, you explicitly must call `set_bone_global_pose_override` with `persistent = false`; then it will start moving again. If that is the purpose of the persistent argument, it kinda makes sense to me, but actually it doesn't (when I consider that `set_bone_global_pose_override` should replace the old function 1:1).
What I would expect here is, that at least the "normal" animation should be shown. This happens whether running in `_process` or `_physics_process`
**Steps to reproduce:**
Create a simple scene and add an animated character to it. Use one bone for transformation tests.
These lines of code should be enough to show the issue.
```gdscript
var bone_index = 0
var bone_transform:Transform = skeleton.get_bone_global_pose(bone_index)
skeleton.set_bone_global_pose_override(bone_index, bone_transform, 1.0, true)
```
My first guess is, that `get_bone_global_pose` should always get the global pose like it is without the overridden pose.
**Minimal reproduction project:**
[bone_bug.zip](https://github.com/godotengine/godot/files/3833706/bone_bug.zip)
Here a video showing the issue:
https://youtu.be/VfGeCb2KK6g
btw: Is this really a bug, or is the usage of the override different? Maybe I first need to subtract any other transforms? But then fire's comment would be false? I'm really unsure about this. | bug,discussion,topic:animation | low | Critical |
521,309,963 | create-react-app | Proposal: more advanced Service Worker support | ### Is your proposal related to a problem?
Currently, the Service Worker support is very limited because the `service-worker.js` is generated by CRA and users can't customize it.
It's bad cause Service Workers can do much more than we currently use today, and by their inherent design, *should* allow app-specific logic.
### Describe the solution you'd like
We're using `workbox-webpack-plugin` to generate the service worker via its `GenerateSW`.
Instead, we could switch to `InjectManifest`, and add a sample service worker file `src/service-worker.js` to all the app templates. Therefore users will be able to customize the behavior of the Service Worker by editing the `src/service-worker.js` file, and we can preserve the existing runtime defaults by putting the same logic at the `src/service-worker.js` template that `GenerateSW` generates by default.
Regarding TypeScript support, we should be able to transpile that service worker file too!
Additionally, we can benefit from this approach by disabling service worker generation logic if the `src/service-worker.js` is removed. This way we can solve the long-outstanding request to disable service worker generation.
### Describe alternatives you've considered
I've looked into issues #5369 and #5359. They both essentially propose providing more flexibility by allowing webpack config customization. I think that would be a more generic solution, but it has some significant downsides:
1. We'd have to support an unbounded set of build configurations. My proposal allows flexibility while only having a single configuration. This is arguably a strong point, and I've seen it a lot in the decision making of this project before. Yet, personally, I don't think it's that problematic.
2. We can provide better support using this proposal than by allowing webpack build configuration. By this, I mean that we can integrate `InjectManifest` better if we only have that, in contrast to supporting both `InjectManifest` and `GenerateSW`, or even allowing users to provide an arbitrary plugin there. We can add the logic to disable service worker generation naturally, plus TypeScript support and possibly other build-time service worker injectors if we ever need to.
3. This proposal will also simplify bug fixing cause there will be less possible configuration states that the user can put the build system into. This kind of repeats the first point (1).
The #7866 might provide flexible webpack build process configuration, and relieve the pain points with the current service workers support. However, this proposal is focusing on more advanced support for Service Worker - not just solving the current pain points - and, in my opinion, has value independently of whether #7866 (or similar) is accepted or not.
### Additional context
I'll be able to submit a PR if this proposal is accepted. | issue: proposal,needs triage | medium | Critical |
521,312,220 | TypeScript | Expose emitted module specifiers on AMD outfile emit | ## Search Terms
amd outfile module specifier
## Suggestion
Currently when utilising the compiler APIs to emit as AMD to a single outfile, the emitted module specifier in the outfile is not exposed, but does not match the source file `SourceFile.fileName` or any other attribute.
It appears that there are internals of the TypeScript emitter which determine the module specifier that is included output. For example, if given `/foo/bar/baz.ts` and `/foo/qat/qux.ts` being part of the same program, when emitted as AMD in a single bundle the module specifier, the outfile will `define()` a module named `bar/baz` and `qat/qux`. It appears that the algorithm used looks for the common root and removes the extension to determine the module specifier.
## Use Cases
When loading the bundle with an AMD loader, knowing what a module specifier is in the outfile is needed to be able to specify a module to load in a `require()` statement. So if we were automating a custom build, outputting a "bundle" and determining which module to `require()` to bootstrap the application, we have to "guess" and hope that the emitter doesn't change its algorithm of determining the module specifiers.
## Examples
I'm not totally sure, but a public API that would take a `SourceFile[]` and provide back a `string[]` array of module specifiers. This could be used in `Host.writeFile()` to determine what module specifiers are contained in the written file.
## Checklist
My suggestion meets these guidelines:
* [X] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [X] This wouldn't change the runtime behavior of existing JavaScript code
* [X] This could be implemented without emitting different JS based on the types of the expressions
* [X] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [X] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,In Discussion | low | Minor |
521,335,992 | flutter | Consider implementing support for Android granularity movement API for navigating text in accessibility nodes | Currently, Flutter for Android supports granularity movements for edit fields only.
It is important, however, that a screen reader user is able to navigate by characters, words, lines, etc, whenever a node contains text, including content descriptions.
/CC @darrenaustin | framework,a: accessibility,c: proposal,dependency: android,P3,team-framework,triaged-framework | low | Minor |
521,347,319 | pytorch | Compiled functions can't take variable number of arguments | ```
torch.jit.frontend.NotSupportedError: Compiled functions can't take variable number of arguments or use keyword-only arguments with defaults:
at /usr/local/lib/python3.6/dist-packages/torch/nn/parallel/data_parallel.py:138:32
def forward(self, *inputs, **kwargs):
~~~~~~~ <--- HERE
if not self.device_ids:
return self.module(*inputs, **kwargs)
for t in chain(self.module.parameters(), self.module.buffers()):
if t.device != self.src_device_obj:
raise RuntimeError("module must have its parameters and buffers "
"on device {} (device_ids[0]) but found one of "
"them on device: {}".format(self.src_device_obj, t.device))
```
As you can see, this varargs usage is in the torch library itself:
`/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/data_parallel.py:138:32`
cc @suo | oncall: jit,feature,triaged | low | Critical |
521,357,384 | flutter | iOS a11y bridge should respect `hasImplicitScrolling` | Android does this - right now, per @goderbauer, iOS exclusively looks at cacheExtent instead of also respecting hasImplicitScrolling.
It should be possible to set a cache extent on something but also set `hasImplicitScrolling` to false and have the iOS bridge respect that. | platform-ios,engine,a: accessibility,P3,team-ios,triaged-ios | low | Major |
521,387,029 | go | cmd/compile: export non-integer constants to debug info | ### What version of Go are you using (`go version`)?
go1.13.4
### Does this issue reproduce with the latest release?
I haven't tried with Go 1.14 yet
### What operating system and processor architecture are you using (`go env`)?
windows/amd64
### What did you do?
- use the code at https://github.com/dlsniper/democonstdbg
- use the latest Delve version from master
- place a breakpoint on line 23
- evaluate the following expression:
```
ot == AnotherConst
```
### What did you expect to see?
A boolean result
### What did you see instead?
```
could not find symbol value for AnotherConst
``` | NeedsInvestigation,Debugging,compiler/runtime | low | Critical |
521,441,044 | flutter | Prevent soft keyboard to show when physical keyboard is used | Hi,
I have a custom keyboard in my app and I have successfully prevented the default soft input panel (SIP) to show when I focus my text field. This is accomplished using two methods:
1. Using a custom focus node for the text field that has `@override bool consumeKeyboardToken() { return false; }`
2. Overriding the `void requestKeyboard()` in the text field state and hiding the keyboard.
See more here: https://github.com/flutter/flutter/issues/16863
This works as expected but the soft input panel is displayed as soon as I press a key on my physical keyboard (either my PC keyboard in the emulator or a bluetooth connected keyboard to my physical device).
Overriding a RawKeyboardListener and hiding the keyboard using `SystemChannels.textInput.invokeMethod('TextInput.hide');` makes it pop up and then disappear, causing a very irritating "flicker", and the entire layout of the application shifts around for a split second.
Hiding the keyboard by default when a physical keyboard is connected would be a great feature to add; I guess it's not implemented, as it has been asked about here without a response for the last 7 months: https://stackoverflow.com/questions/55754646/detect-if-a-physical-keyboard-is-connected-in-flutter | a: text input,framework,c: proposal,a: desktop,P3,team-text-input,triaged-text-input | medium | Critical |
521,454,076 | pytorch | error: ‘struct torch::jit::RegisterOperators’ has no member named ‘op’ | ```
static auto registry =
torch::jit::RegisterOperators()
.op("maskrcnn_benchmark::nms", &nms)
.op("maskrcnn_benchmark::roi_align_forward(Tensor input, Tensor rois, float spatial_scale, int pooled_height, int pooled_width, int sampling_ratio) -> Tensor", &ROIAlign_forward)
// .op("maskrcnn_benchmark::add_annotations", &add_annotations)
.op("maskrcnn_benchmark::upsample_bilinear", &upsample_bilinear);
```
these code no longer compatible with pytorch 1.4, how to resolve it?
cc @suo | oncall: jit,triaged | low | Critical |
521,477,808 | opencv | Drawing into a numpy view with a negative step in the channel dimension doesn't work |
##### System information (version)
- OpenCV => opencv-contrib-python-headless==4.1.1.26
- Operating System / Platform => OS X
##### Detailed description
I create three 3-channel images and draw a circle into them. In the second case I use the step trick `A[:, :, ::-1]` to flip the R and B channels in the image, emulating `cv.cvtColor` with `COLOR_BGR2RGB`. I then try to draw into that array. The result, as seen in the screenshot, is that nothing gets drawn. In the third case I explicitly make a copy of the channel-flipped array (which is actually a view) before drawing, and it works.

Expected is that cv.circle works in all three cases.
##### Steps to reproduce
```python
import cv2 as cv
import numpy as np
image1 = np.zeros((100, 100, 3), dtype='uint8')
cv.circle(image1, (50, 50), 10, (255, 255, 255), -1)
image2 = np.zeros((100, 100, 3), dtype='uint8')
image2 = image2[:, :, ::-1]
cv.circle(image2, (50, 50), 10, (255, 255, 255), -1)
image3 = np.zeros((100, 100, 3), dtype='uint8')
image3 = np.array(image3[:, :, ::-1])
cv.circle(image3, (50, 50), 10, (255, 255, 255), -1)
imshow(image1, image2, image3)
``` | feature,category: python bindings | low | Minor |
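One plausible explanation (my assumption, not confirmed against the OpenCV sources): `image2` above is a NumPy *view* with a negative stride in the channel dimension, which `cv::Mat` cannot wrap, so the drawing never reaches the caller's buffer. The strides make the difference visible, and `np.ascontiguousarray` is an idiomatic spelling of the explicit-copy workaround from the third case:

```python
import numpy as np

a = np.zeros((4, 4, 3), dtype="uint8")
view = a[:, :, ::-1]             # channel-reversed VIEW, not a copy
print(view.strides)              # (12, 3, -1): negative channel stride
fixed = np.ascontiguousarray(view)
print(fixed.strides)             # (12, 3, 1): safe to draw into with cv2
```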
521,640,346 | flutter | Video Player Plugin - Change to different Url dynamically | It would be great to be able to change to another Video file or Url without calling a new instance of the video player.
Right now, creating a new instance just fills up memory. You need to start a new activity or use a hacky workaround to dispose of the previous controller. | c: new feature,p: video_player,package,c: proposal,team-ecosystem,P3,triaged-ecosystem | medium | Major |
521,640,346 | rust | Compiler error doesn't show full query chain when evaluating const cycle | Consider the following code (both on stable 1.39.0 and nightly)
```rust
trait Const {
    const A: i32 = Self::B;
    const B: i32 = Self::A;
    const C: i32 = Self::A;
}

impl<T> Const for T {}

pub fn main() -> () {
    dbg!(i32::C);
}
```
There's a cycle evaluating the constants `C -> A -> B -> A -> ...`
The `dbg!(i32::C)` line is the offending code which causes this cycle to be evaluated.
However, the emitted error message doesn't actually reference the usage `dbg!(i32::C)` which caused the evaluation, as below:
```
   Compiling playground v0.0.1 (/playground)
error[E0391]: cycle detected when const-evaluating `Const::A`
 --> src/main.rs:4:24
  |
4 |     const A: i32 = Self::B;
  |                    ^^^^^^^
  |
note: ...which requires const-evaluating `Const::B`...
 --> src/main.rs:5:24
  |
5 |     const B: i32 = Self::A;
  |                    ^^^^^^^
  = note: ...which again requires const-evaluating `Const::A`, completing the cycle
note: cycle used when const-evaluating `Const::C`
 --> src/main.rs:6:24
  |
6 |     const C: i32 = Self::A;
  |                    ^^^^^^^

error: aborting due to previous error

For more information about this error, try `rustc --explain E0391`.
error: could not compile `playground`.
To learn more, run the command again with --verbose.
```
This error could probably be improved by having an additional `note:` showing that the cycle was used when evaluating `dbg!(i32::C)`.
cc @oli-obk | C-enhancement,A-diagnostics,T-compiler,A-const-eval | low | Critical |
521,682,349 | opencv | Feature Request: 3+ dimension GpuMat |
##### Detailed description
To my knowledge, OpenCV still has no implementation for GpuMats with more than 2 dimensions.
This has been a feature of `cv::Mat` for a long time, but for some reason it has never been applied to `GpuMat`, which creates a disconnect when transferring algorithms from CPU implementations to GPU implementations.
This is an example of a `cv::Mat` function that creates a multi-dimensional matrix:
https://docs.opencv.org/master/d3/d63/classcv_1_1Mat.html#a156df5a1326dd5c30b187b0e721a5f57
| category: gpu/cuda (contrib),RFC | low | Critical |
521,705,763 | pytorch | Improved detection of repeated observers | ## 🚀 Feature
We currently do not detect if the same observer/fake quant is reused multiple times during calibration and quantization. This is always an error because observers collect statistics and the statistics are unique for every invocation of a module in the forward call.
## Motivation
Users are hitting this issue, particularly with FloatFunctional modules, which are stateless in floating point but become stateful in fixed point.
Possible solution is to detect number of invocations of each observer module in a forward call. If any observer module is invoked more than once, an error should be reported.
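A minimal, torch-free sketch of the proposed check (names like `Observer` and `run_calibration` are illustrative, not PyTorch API; a real implementation would gather the counts with forward hooks on observer modules):

```python
from collections import Counter

class Observer:
    """Stand-in for a stateful observer / fake-quant module."""
    def __init__(self, name):
        self.name = name

    def __call__(self, x):
        # a real observer would update min/max statistics here
        return x

def run_calibration(invocations):
    """Count how many times each observer instance appears in one
    forward pass and raise if any instance is reused."""
    counts = Counter(id(obs) for obs in invocations)
    names = {id(obs): obs.name for obs in invocations}
    repeated = sorted(names[k] for k, n in counts.items() if n > 1)
    if repeated:
        raise RuntimeError(
            "observer(s) reused within a single forward pass: %s" % repeated)
    return counts

act_obs, weight_obs = Observer("act_obs"), Observer("weight_obs")
run_calibration([act_obs, weight_obs])               # fine: each used once
try:
    run_calibration([act_obs, weight_obs, act_obs])  # act_obs reused
except RuntimeError as err:
    print(err)
```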
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @dzhulgakov | oncall: quantization,low priority,triaged | low | Critical |
521,712,421 | flutter | SemanticsDebugger should use hasEnabledState/isEnabled to determine if something is disabled | The semantics debugger currently uses the presence of a tap handler to determine if a textfield or button is enabled/disabled. It should use the SemanticsFlags hasEnabledState/isEnabled to do that:
https://github.com/flutter/flutter/blob/master/packages/flutter/lib/src/widgets/semantics_debugger.dart
https://master-api.flutter.dev/flutter/dart-ui/SemanticsFlag/hasEnabledState-constant.html
https://master-api.flutter.dev/flutter/dart-ui/SemanticsFlag/isEnabled-constant.html | framework,a: accessibility,c: proposal,P3,team-framework,triaged-framework | low | Critical |
521,713,667 | create-react-app | A feature to use package.json imports in create-react-app. | **Problem**: While building an atomic design pattern with create-react-app and a component-driven folder structure, I want to use package.json imports to import my components in the app. Currently we can do absolute imports by creating an `.env` file (setting `NODE_PATH`) and importing relative to the source root.
But package.json imports don't work in create-react-app.
Suppose my folder structure is as follows:
```
-- components
   - header
     - leftheader.js
     - rightheader.js
     - header.js
     - package.json
```
And in the header folder's package.json:
```
{ "name" : "@header" }
```
For example, suppose App.js (at the root level) wants to import the header component from the header folder. I could do something like:
```
import Header from "@header/header"
```
Currently this is not supported in create-react-app. I think this feature would be great for large projects with a multilevel component tree.
| issue: proposal,needs triage | low | Minor |
521,724,229 | flutter | Add focusability to a11y of textfield | The Textfield widget should provide information about its focusability to the semantics tree.
/cc @gspencergoog @darrenaustin | a: text input,c: new feature,framework,f: material design,a: accessibility,P2,team-text-input,triaged-text-input | low | Minor |
521,747,835 | youtube-dl | Disney Plus |
## Checklist
- [x] I'm reporting a new site support request
- [x] I've verified that I'm running youtube-dl version **2019.11.05**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that none of provided URLs violate any copyrights
- [x] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
- Single video: https://www.disneyplus.com/video/79529cd0-f1cf-4eec-8b1d-ea1e1b76043b
## Description
This is newly launched in the United States.
| site-support-request,geo-restricted | low | Critical |
521,754,859 | pytorch | [jit] Dict construction fails at runtime | This throws at runtime; it should at least be a compile error.
This is a little confusing, since `IValue(std::tuple<Args...>)` is implemented, but there is no corresponding way to recover a `TypePtr` from a `std::tuple<Args...>`.
```cpp
#include <torch/script.h>
#include <iostream>
#include <memory>
int main(int argc, const char *argv[]) {
// Dict[int, Tuple[Tensor, Dict[int, Tuple[Tensor, Tensor]]]]
auto dict = c10::Dict<
int64_t,
std::tuple<
torch::Tensor,
c10::Dict<int64_t, std::tuple<torch::Tensor, torch::Tensor>>>>();
torch::IValue dict_ivalue(dict);
std::cout << dict_ivalue << "\n";
return 0;
}
```
cc @suo | oncall: jit,triaged | low | Critical |
521,765,883 | go | proposal: cmd/doc: module documentation | ### What did you see today?
Currently there is no place to put module-level documentation that is consumable via the toolchain.
### What would you like to see?
We should standardise a doc string format that reads like the `// Package xxx ...` syntax. This will follow the current godoc rules and allow for code blocks, etc.:
```go
// Module golang.org/x/sys provides OS specific interfaces for low level operations.
//
// ...
module golang.org/x/sys
go 1.13
```
We should add a new `-m` flag to put `go doc` in module mode, much like `go list -m` vs `go list`. This will only accept module import paths not packages, again like `go list`, and will display the module doc string, and full package listing.
```sh
$ go doc -m golang.org/x/sys
module golang.org/x/sys
Module golang.org/x/sys provides OS specific interfaces for low level operations.
...
package cpu // import "golang.org/x/sys/cpu"
package plan9 // import "golang.org/x/sys/plan9"
package unix // import "golang.org/x/sys/unix"
package main // import "golang.org/x/sys/unix/linux"
package windows // import "golang.org/x/sys/windows"
package main // import "golang.org/x/sys/windows/mkwinsyscall"
package svc // import "golang.org/x/sys/windows/svc"
package debug // import "golang.org/x/sys/windows/svc/debug"
package eventlog // import "golang.org/x/sys/windows/svc/eventlog"
package main // import "golang.org/x/sys/windows/svc/example"
package mgr // import "golang.org/x/sys/windows/svc/mgr"
package registry // import "golang.org/x/sys/windows/svc/registry"
```
### What alternatives did you consider?
#### `README`
The forthcoming discovery service, #33654, currently uses the repo's README for top level documentation, since this is a fairly strong pseudo-convention for most repos. Unfortunately, while this works well for a web ui, this syntax may not be as readable on the command line, or format well with godoc.
#### versions
It may be useful to list all known versions, but this implies that the list is up to date, and I would rather not impose a full `go get` flow. This could be gated behind `-all` or a new flag.
#### packages
##### noise
For complex packages, say `k8s.io/kubernetes`, listing all the packages may be really noisy. As such it may be useful to gate this data behind `-all`. That being said many protobuf generated packages suffer from the same noise.
##### stutter
The current syntax also suffers from stutter, because all package lines start `package xxx // import ...`. This format borrows from the package declaration syntax to be more legible to readers. It also feels like the type declarations that can be found in regular godoc:
```go
package unix // import "internal/syscall/unix"
const AT_REMOVEDIR = 0x200
const AT_SYMLINK_NOFOLLOW = 0x100
```
#### submodules
While they are not contained within the module, many customers may be confused about why they cannot find the `cloud.google.com/go/datastore` package within the `cloud.google.com/go` module. This could be displayed like so, in addition to being guarded behind `-all` or a custom flag:
```go
module cloud.google.com/go
package cloud // import "cloud.google.com/go"
package asset // import "cloud.google.com/go/asset/apiv1"
...
module cloud.google.com/go/bigquery
module cloud.google.com/go/bigtable
...
``` | Documentation,Proposal,Proposal-Hold | low | Critical |
521,769,267 | vscode | Git Merge Resolve Dialog | Issue Type: <b>Feature Request</b>
This is a request for VS Code, when a user resolves all of the merge conflicts in a file, to ask whether they would like to mark the entire file as resolved in Git.
VS Code version: Code 1.36.1 (2213894ea0415ee8c85c5eea0d0ff81ecc191529, 2019-07-08T22:56:38.504Z)
OS version: Darwin x64 19.0.0
<!-- generated by issue reporter --> | feature-request,merge-conflict | low | Minor |
521,776,940 | flutter | TimeOfDay.format returns 16:00 instead of 4:00 PM | ## Steps to Reproduce
With the following sample app and with the device time format set to 12-hour, the result of `TimeOfDay.fromDateTime(DateTime(2019, 1, 1, 16)).format(context)` will be `16:00` instead of the expected `4:00 PM`. If the supported locale is changed to `Locale('en', 'US')`, then we get the expected result.
```dart
import 'package:flutter/material.dart';
import 'package:flutter_localizations/flutter_localizations.dart';
Future<void> main() async {
runApp(MaterialApp(
localizationsDelegates: [
GlobalMaterialLocalizations.delegate,
],
supportedLocales: const [Locale('en', 'GB')],
home: TimeOfDayFormatIssue(),
));
}
class TimeOfDayFormatIssue extends StatelessWidget {
@override
Widget build(BuildContext context) => Scaffold(
body: Center(
child: Text(TimeOfDay.fromDateTime(DateTime(2019, 1, 1, 16)).format(context)),
),
);
}
```
**Target Platform:** iOS and Android
## Logs
```
flutter doctor -v
[✓] Flutter (Channel unknown, v1.10.7, on Mac OS X 10.14.6 18G95, locale en-NL)
• Flutter version 1.10.7 at /Users/user/IdeaProjects/hue_blue/dev/flutter
• Framework revision e70236e36c (6 weeks ago), 2019-10-02 09:32:30 -0700
• Engine revision 9e6314d348
• Dart version 2.6.0 (build 2.6.0-dev.0.0 1103600280)
[!] Android toolchain - develop for Android devices (Android SDK version 29.0.0-rc3)
• Android SDK at /Users/user/Library/Android/sdk
• Android NDK location not configured (optional; useful for native profiling support)
• Platform android-29, build-tools 29.0.0-rc3
• Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b49-5587405)
! Some Android licenses not accepted. To resolve this, run: flutter doctor --android-licenses
[✓] Xcode - develop for iOS and macOS (Xcode 11.2)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Xcode 11.2, Build version 11B52
• CocoaPods version 1.8.0
[✓] Android Studio (version 3.5)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin version 38.2.3
• Dart plugin version 191.8423
• Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b49-5587405)
[✓] IntelliJ IDEA Community Edition (version 2019.1)
• IntelliJ at /Applications/IntelliJ IDEA CE.app
• Flutter plugin version 35.2.2
• Dart plugin version 191.6183.88
[✓] Connected device (2 available)
• Miguel’s iPhone • 98b2b39514d586ebac7ee0537c84a7b23f8e46de • ios • iOS 13.2
• iPhone X • 50F6A837-0F74-41F7-BC8E-3B71EFE3DFD8 • ios • com.apple.CoreSimulator.SimRuntime.iOS-13-2 (simulator)
! Doctor found issues in 1 category.
```
| framework,f: material design,f: date/time picker,a: internationalization,f: cupertino,a: quality,customer: solaris,has reproducible steps,P2,found in release: 1.20,found in release: 2.3,team-design,triaged-design | low | Major |
521,782,681 | godot | Unsigned uint64_t overflow in Variant.cpp | **Godot version:**
3.2.beta.custom_build. 7d836a7cc
**OS/device including version:**
Ubuntu 19.10
**Issue description:**
The uint64_t argument is saved into an int64_t (`_data._int`), which has a different range of values: int64_t covers [-2^63, 2^63 - 1] while uint64_t covers [0, 2^64 - 1].
https://github.com/godotengine/godot/blob/62a09a2ee351430f9d55eee337691958e877cc68/core/variant.cpp#L2159-L2163
The overflow can be easily checked with this code:
```cpp
Variant::Variant(uint64_t p_int) {
if(p_int > 9223372036854775807U)
{
p_int--; // Put breakpoint here
p_int++;
}
type = INT;
_data._int = p_int;
}
```
The overflow seems to happen mostly in Color functions which return 4 x uint16_t.
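The same wraparound can be demonstrated outside the engine; this Python sketch (illustration only, not Godot code) reinterprets a uint64 bit pattern as a signed int64, which is effectively what storing into `_data._int` does (assuming two's complement):

```python
import struct

def as_int64(u):
    """Reinterpret an unsigned 64-bit value's bits as a signed int64."""
    return struct.unpack('<q', struct.pack('<Q', u))[0]

print(as_int64(2**63 - 1))  # 9223372036854775807: still representable
print(as_int64(2**63))      # -9223372036854775808: wraps negative
print(as_int64(2**64 - 1))  # -1: a full 0xFFFFFFFFFFFFFFFF bit pattern
```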

To fix this, it may be enough to add a `_uint` member to the union:
```cpp
union {
bool _bool;
int64_t _int;
uint64_t _uint;
double _real;
Transform2D *_transform2d;
::AABB *_aabb;
Basis *_basis;
Transform *_transform;
void *_ptr; //generic pointer
uint8_t _mem[sizeof(ObjData) > (sizeof(real_t) * 4) ? sizeof(ObjData) : (sizeof(real_t) * 4)];
} _data GCC_ALIGNED_8;
```
**Minimal reproduction project:**
https://github.com/qarmin/The-worst-Godot-test-project/archive/master.zip - should be enough | bug,discussion,topic:core | low | Minor |
521,788,191 | terminal | `TerminalInputModifierKeyTests` Don't work right on non-EN-US keyboard layouts | #### @j4james:
> Windows Terminal version (if applicable): commit [e2994ff](https://github.com/microsoft/terminal/commit/e2994ff8908f5a8418f24f2e2954a6c163f910b6)
>
>
> I'm used to having a few failures in the `TerminalInputModifierKeyTests`, and I know a couple of `TabTests` are blocked, but the rest of the tests I would usually expected to pass.
>
>
> In case you care about the `TerminalInputModifierKeyTests` too, I think those are failing for me because I have a UK keyboard - the test dies on the `VK_OEM_3` key. Here's a section of the test output where it fails:
>
> ```
> Testing Key 0xc0
> Expected, Buffer = "", ""
> Verify: SUCCEEDED(StringCchLengthW(s_pwszInputExpected, STRSAFE_MAX_CCH, &cInputExpected))
> Error: Verify: AreEqual(cInputExpected, records.size()): Verify expected and actual input array lengths matched. - Values (0, 1) [File: C:\Users\James\CPP\terminal\src\terminal\adapter\ut_adapter\inputTest.cpp, Function: Microsoft::Console::VirtualTerminal::InputTest::s_TerminalInputTestCallback, Line: 87]
> TAEF: A crash with exception code 0xC0000409 occurred in module "ConAdapter.Unit.Tests.dll" in process "te.processhost.exe" (pid:13732).
> ```
>
> This could be resolved by simply skipping that key the same way we skip `VK_OEM_2`, but it wouldn't surprise me if other international keyboards failed on other keys as well. I've been happy to accept that those are just expected failures for my particular setup.
(moved from #3536 ) | Product-Conhost,Issue-Bug,Area-CodeHealth | low | Critical |
521,826,936 | youtube-dl | Frontend Masters - Unable to log in |
## Checklist
- [ ] I'm reporting a broken site support issue
- [x] I've verified that I'm running youtube-dl version **2019.11.05**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [x] I've searched the bugtracker for similar bug reports including closed ones
- [x] I've read bugs section in FAQ
## Verbose log
```
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['-v', '-u', 'PRIVATE', '-p', 'PRIVATE', 'https://frontendmasters.com/courses/vue/']
[debug] Encodings: locale cp1252, fs utf-8, out utf-8, pref cp1252
[debug] youtube-dl version 2019.11.05
[debug] Python version 3.7.4 (CPython) - Windows-10-10.0.18362-SP0
[debug] exe versions: ffmpeg N-93544-g0a347ff422, ffprobe N-93544-g0a347ff422
[debug] Proxy map: {}
[FrontendMastersCourse] Downloading login page
[FrontendMastersCourse] Logging in
ERROR: Unable to log in; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; see https://yt-dl.org/update on how to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
Traceback (most recent call last):
File "c:\users\andre\appdata\local\programs\python\python37\lib\site-packages\youtube_dl\YoutubeDL.py", line 796, in extract_info
ie_result = ie.extract(url)
File "c:\users\andre\appdata\local\programs\python\python37\lib\site-packages\youtube_dl\extractor\common.py", line 529, in extract
self.initialize()
File "c:\users\andre\appdata\local\programs\python\python37\lib\site-packages\youtube_dl\extractor\common.py", line 433, in initialize
self._real_initialize()
File "c:\users\andre\appdata\local\programs\python\python37\lib\site-packages\youtube_dl\extractor\frontendmasters.py", line 32, in _real_initialize
self._login()
File "c:\users\andre\appdata\local\programs\python\python37\lib\site-packages\youtube_dl\extractor\frontendmasters.py", line 70, in _login
raise ExtractorError('Unable to log in')
youtube_dl.utils.ExtractorError: Unable to log in; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; see https://yt-dl.org/update on how to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
```
## Description
I was trying to download a course from Frontend Masters. I provided my login with the `-u` and `-p` flags but it gives me this error. I tried both Python 2 and Python 3, on Windows and Linux, with the same error. I also checked my credentials several times and they are correct; my Frontend Masters account is active and I can watch the videos in a browser.
| account-needed | low | Critical |
521,835,902 | flutter | On iOS, plugins should have access to a UIViewController | In any scenario (add-to-app or greenfield apps), we need a way of publishing an appropriate `UIViewController` to plugins so that it can be used with native APIs that push other native view controllers.
The current solutions are ugly. We either have to ask the app to provide it or we have to write ugly hacks within plugin code: https://github.com/flutter/plugins/blob/master/packages/google_sign_in/google_sign_in/ios/Classes/GoogleSignInPlugin.m#L210-L224.
Let's fix it. | platform-ios,engine,a: existing-apps,P3,a: plugins,team-ios,triaged-ios | low | Minor |
521,844,357 | rust | Reduce size of `target` directory | The size of the `target` directory is becoming a severe problem for me. I use rust at work, and we have many services written in rust. The `target` directory for each service is typically 2-3 GB for a fresh build, and increases without bound due to the lack of GC, so that after a short period you're looking at 4 GB per service.
For reference, the final size of the musl-based statically compiled binaries we deploy is about 100 MB each, or 40x smaller than the intermediate artefacts, and the majority of that is debug info (once we separate out debug info it's down to maybe 30 MB).
I have a decently sized SSD (500GB), but even so, this very quickly gets out of hand. For various reasons I can only really afford to give 10 GB of space dedicated to the rust services, which means I can only do local development on two at a time (the other services are run by pulling prebuilt docker images with no possibility for local changes) and this is assuming I regularly nuke my `target` directories.
If I compress a 4 GB target directory with 7-zip, it compresses to about 700 MB, so even if all of the data is required, there's quite a bit of scope for reducing the disk usage. Also, maybe some intermediate artefacts can be deleted entirely and just recreated as needed?
I've seen elsewhere the suggestion to share the target directory between services, but:
1) I don't think that will actually help much here - the versions of dependencies often differ slightly between services and even a single service can reach the 10 GB limit on its own if left unchecked.
2) Due to the lack of GC, it will become impossible to nuke the target directory for a single service, so the only option will be to nuke *everything* and then rebuild it all from scratch.
| C-enhancement,T-compiler,T-cargo | medium | Critical |
521,846,627 | TypeScript | [navtree] Only a single nameSpan listed for multiple declarations of interface |
**TypeScript Version:** 3.7.2
**Search Terms:**
- navtree
- tsserver
**Code**
Given the following TypeScript:
```ts
interface IFoo2 {
foo: any;
}
interface IFoo2 {
foo2: any;
}
```
1. Run a `navTree` request
**Bug:**
Two spans for `IFoo2` are returned, but only a single `nameSpan`:
```
{
"text": "<global>",
"kind": "script",
"kindModifiers": "",
"spans": [
{
"start": {
"line": 1,
"offset": 1
},
"end": {
"line": 8,
"offset": 2
}
}
],
"childItems": [
{
"text": "IFoo2",
"kind": "interface",
"kindModifiers": "",
"spans": [
{
"start": {
"line": 1,
"offset": 1
},
"end": {
"line": 3,
"offset": 2
}
},
{
"start": {
"line": 5,
"offset": 1
},
"end": {
"line": 7,
"offset": 2
}
}
],
"nameSpan": {
"start": {
"line": 1,
"offset": 11
},
"end": {
"line": 1,
"offset": 16
}
},
...
}
]
}
```
For VS Code's tooling, we'd like to have a list of all nameSpans.
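For illustration, one possible response shape (the `nameSpans` field name is a hypothetical suggestion, not an existing tsserver property) would return one name span per declaration:

```json
"nameSpans": [
    { "start": { "line": 1, "offset": 11 }, "end": { "line": 1, "offset": 16 } },
    { "start": { "line": 5, "offset": 11 }, "end": { "line": 5, "offset": 16 } }
]
```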
| Suggestion,In Discussion,Domain: TSServer | low | Critical |
521,873,965 | TypeScript | `!.` after `?.` should be warned | This is an anti-pattern under the current type system, and the compiler's quick fixes must not produce it. A `!.` after a `?.` is simply replaceable with `?.`, and should be replaced.
**TypeScript Version:** 3.7.x-dev.20191105
**Search Terms:**
**Code**
```ts
document.querySelector('_')!.textContent!.split('') // old code
document.querySelector('_')?.textContent!.split('') ?? 0 // string[]
document.querySelector('_')?.textContent?.split('') ?? 0 // string[] | 0
```
**Expected behavior:**
```ts
document.querySelector('_')!.textContent!.split('') // old code
document.querySelector('_')?.textContent!.split('') ?? 0 // unsafe, warning
document.querySelector('_')?.textContent?.split('') ?? 0 // safe
```
**Actual behavior:**
```ts
document.querySelector('_')!.textContent!.split('') // old code
document.querySelector('_')?.textContent!.split('') ?? 0 // unsafe, no warning
document.querySelector('_')?.textContent?.split('') ?? 0 // safe
```
| Suggestion,Awaiting More Feedback | low | Critical |
521,879,677 | pytorch | [FR] add generator= kwarg support for torch.randn and torch.rand | All stochastic functions should accept `generator=`. These should as well. | triaged,enhancement,module: random | low | Minor |
521,883,191 | react | Possibility to set min duration of Suspense fallback | **Do you want to request a *feature* or report a *bug*?**
Feature
**What is the current behavior?**
I have played a bit with Concurrent Mode and the Suspense API.
Really exciting features, and I look forward to using them in a stable release. Thank you for everything you are doing!
Regarding the `Suspense` component, could it be nice to have a property (both in Concurrent Mode and in "normal/synchronous" mode) which would allow us to set the minimum duration of the `Suspense` fallback UI in case the fallback UI ever gets rendered?
**What is the expected behavior?**
Let me do an example. Try clicking on the `Next` button in this codesandbox:
https://codesandbox.io/s/cold-monad-ifr29.
You will see that the `Suspense` fallback UI is rendered and stays in the tree just for a little moment (`~200ms`) because both promises resolve in `1200ms`, while `useTransition` has a `timeoutMs` of 1 second.
In my opinion, this is a bit unpleasant to the eye.
Wouldn't it be nicer if we could tell the `Suspense` component something like "If you ever render the fallback, show it for at least N millisec."? E.g.:
```jsx
...
function ProfilePage({ resource }) {
return (
<Suspense fallback={<h1>Loading profile...</h1>}
// If the fallback ever gets rendered,
// it will be shown for at least 1500 millisec.,
// even if the promise resolves right after rendering the fallback.
fallbackMinDurationMs={1500}>
<ProfileDetails resource={resource} />
<Suspense fallback={<h1>Loading posts...</h1>}>
<ProfileTimeline resource={resource} />
</Suspense>
</Suspense>
);
}
...
```
Consider an animated spinner used as the fallback of `Suspense`: if the promise resolves just a few milliseconds after the fallback is rendered, the spinner will appear and suddenly disappear without completing its animation cycle, showing an incomplete animation.
Whereas, if we could keep the spinner in the tree for at least `fallbackMinDurationMs` millisec. once rendered, we could improve its appearance in such cases.
The `Suspense` component responsible for rendering the fallback would have to wrap the caught Promise in a promise which would look something like this:
```js
function maxDelayFallbackPromise({
promise,
timeoutMs, // ---> This would be the value of `useTransition`'s `timeoutMs`
onFallback = () => {}, // ---> This code would run in case `timeoutMs` exceeds (i.e. when `Suspense`'s fallback UI is rendered)
fallbackMinDurationMs
} = {}) {
// Generate a unique identifier, like a string, a number, in order to identify which promise resolves first...
const uniqueIdentifier = `promise_value_${Math.random()}`
return Promise.race([
promise,
timeout(timeoutMs).then(() => uniqueIdentifier)
]).then(value => {
if (value === uniqueIdentifier) {
onFallback()
return minDelayPromise(promise, fallbackMinDurationMs)
}
else {
return value
}
})
}
```
Where `timeout` and `minDelayPromise` are:
```js
function timeout(delayMs) {
return new Promise(resolve => setTimeout(resolve, delayMs))
}
function minDelayPromise(promise, minDelay) {
return Promise.all([
promise,
timeout(minDelay)
]).then(([value]) => {
return value
})
}
```
This could also apply to the `isPending` flag of `useTransition`...
Do you think such a feature could improve the UX in such cases?
**UPDATE - 04/09/2022** - For anyone looking at this issue, there is a workaround to achieve this fallback min duration behaviour in React 17 🎉 , described here: https://github.com/facebook/react/issues/17351#issuecomment-1236303278
| Type: Feature Request,Component: Concurrent Features,Resolution: Backlog | medium | Critical |
521,898,017 | pytorch | MultiStepLR does not return good lr after load_state_dict | The param_group's lr's cannot be trusted if the optimizer state is not restored (and this can be okay, because optimizer buffers can double the checkpoint size).
In this line they are trusted if last_epoch is between milestones https://github.com/pytorch/pytorch/blob/master/torch/optim/lr_scheduler.py#L389
The closed_form formula is correct, since it uses bisect_right to recompute the lr value, but for some reason it is not called.
I'm not sure if this is a problem with some assumed up-to-date state of the optimizer param_groups' lr, a problem with the scheduler API, or something else...
My use case is: I use MultiStepLR to drop learning rate at some #iterations milestones. After loading a checkpoint, the iteration index is restored, but the optimizer state is not, so I was relying on MultiStepLR to recompute the lr from the `last_epoch` field which is way greater than the values in milestones, so lr should be dropped.
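The expected closed-form behavior can be sketched in plain Python. This is a hypothetical helper mirroring the formula, not the scheduler's actual code path: the lr is the base lr decayed by gamma once per milestone already passed, so it depends only on `last_epoch` and not on any restored optimizer state.

```python
from bisect import bisect_right

def closed_form_lr(base_lr, gamma, milestones, last_epoch):
    # Hypothetical helper mirroring MultiStepLR's closed-form formula:
    # decay base_lr by gamma once for every milestone already passed,
    # using only last_epoch; no optimizer buffers are required.
    return base_lr * gamma ** bisect_right(milestones, last_epoch)

# e.g. after restoring last_epoch=90 with milestones [30, 60], the lr
# should have been decayed twice, regardless of the optimizer checkpoint
restored_lr = closed_form_lr(0.1, 0.1, [30, 60], 90)
```

This is what relying on `last_epoch` alone would give after a checkpoint restore, and it is what the trusted param_group lr silently diverges from when the optimizer state is not saved.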
cc @vincentqb @ezyang | module: optimizer,triaged | medium | Critical |
521,989,081 | pytorch | Reliable way to identify RuntimeErrors (CUDA) | ## 🚀 Feature
Reliable way to check for CUDA out of memory (and CUDA Runtime errors in general).
## Motivation
Currently I see no way to reliably check for a CUDA out of memory error except parsing the exception message for
``CUDA out of memory``.
(After a quick grep on the pytorch sources this seems to work **at the moment**)
As this text may change in the future, I do not feel comfortable with this workaround, as it is bound to break.
In application code reliably detecting such an error seems crucial to me.
If there is a way to do so and I just did not find it, this issue may be a good place to document it.
## Pitch
A solution would be quite standard, e.g., RuntimeError subclasses or an attached error code.
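A sketch of what the RuntimeError-subclass approach could look like. The class names and hierarchy here are hypothetical (PyTorch provides no such classes today), and the numeric code is an assumption:

```python
# Hypothetical error hierarchy; PyTorch provides no such classes today.
class CudaError(RuntimeError):
    def __init__(self, message, code):
        super().__init__(message)
        self.code = code  # machine-checkable error code, instead of string parsing

class CudaOutOfMemoryError(CudaError):
    def __init__(self, message):
        # 2 is cudaErrorMemoryAllocation in the CUDA runtime (assumption)
        super().__init__(message, code=2)

# Application code could then catch the specific class rather than
# grepping the exception text:
try:
    raise CudaOutOfMemoryError("CUDA out of memory. Tried to allocate 2.00 GiB")
except CudaOutOfMemoryError:
    handled = "retry with a smaller batch"
```

Because the subclasses still derive from `RuntimeError`, existing `except RuntimeError` handlers would keep working unchanged.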
@soumith @albanD How do you folks think about this?
cc @ngimel | module: cuda,triaged,enhancement | low | Critical |
522,030,078 | pytorch | [FR] F.pad support syntax sugars for specifying the padding amount | Most of the time, the same amount of padding is applied on both sides of all spatial dimensions. Instead of writing `F.pad(x, tuple_of_size_twice_spatial_ndim)`, I would be happy to see the two following syntax sugars:
1. `F.pad(x, p : int)` applies `p` amount of padding on both sides of all spatial dimensions in `x`.
2. `F.pad(x, t : Tuple[int])`, if `t` is a tuple of size equal to the number of spatial dimensions in `x`, each value of `t` is used as the padding amount on **both** sides for the corresponding dimension. This would be consistent with the Conv and Pooling layers too.
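The two shorthands could be normalized into the flat tuple `F.pad` already accepts. A minimal sketch, where the helper name and exact behavior are illustrative rather than a proposed API:

```python
def normalize_pad(pad, spatial_ndim):
    # Hypothetical helper expanding the two proposed shorthands into the
    # flat tuple F.pad already accepts (last dimension first, two entries
    # per dimension).
    if isinstance(pad, int):
        # case 1: a single int pads both sides of every spatial dim
        return (pad,) * (2 * spatial_ndim)
    if len(pad) == spatial_ndim:
        # case 2: one value per spatial dim, applied to both sides,
        # matching the Conv/Pooling convention; F.pad's tuple starts
        # from the last dimension, hence the reversal
        return tuple(p for p in reversed(pad) for _ in range(2))
    return tuple(pad)  # already in F.pad's native format
```

One wrinkle a real PR would have to resolve: for inputs where the per-dim tuple length happens to match a valid native length, the two interpretations are ambiguous and the dispatch rule would need to be pinned down.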
I'd submit a PR if this is reasonable. | module: convolution,triaged,enhancement | low | Minor |
522,041,659 | go | cmd/dist: timeout hanging bootstrap | Because of a runtime bug, Android builders hang in the bootstrap process:
https://farmer.golang.org/temporarylogs?name=android-arm-corellium&rev=99957b6930c76b683dbca1ff4bcdd56e59b1e035&st=0xc009856f20
This issue is about implementing some sort of timeout in cmd/dist to ensure forward progress even in the face of hangs. For quite some time I thought the hangs were caused by a flaky Corellium network.
522,051,134 | pytorch | CPU and CUDA error messages are divergent in type promotion | Discovered while writing #29417.
test_non_promoting_ops was not running on CPU and CUDA. When I modified it to do so, it began failing because the error message it checks for is not the same across our native device types. I removed the check for the particular message as part of the PR; however, the error message should be the same on our native device types and should be checked for.
@gchanan | module: tests,triaged,module: type promotion | low | Critical |
522,055,145 | pytorch | Provide a mechanism to set global state per test in thread-safe manner | Global state, like the default tensor dtype, is set by some tests. This is unsafe if tests are running concurrently or the test author forgets to reset the former dtype (which has happened before, and broke running pytest on test_jit.py).
Ideally we would have a mechanism where we can set global state in a test that lets tests be run concurrently and safely.
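One possible shape for such a mechanism, sketched with a plain module-level global standing in for the real default dtype. All names here are hypothetical:

```python
import threading
from contextlib import contextmanager

_default_dtype = "float32"       # stand-in for the real global default dtype
_dtype_lock = threading.Lock()

def get_default_dtype():
    return _default_dtype

@contextmanager
def default_dtype(dtype):
    # Holding the lock for the duration serializes tests that touch this
    # global; the finally block restores the old value even if the test
    # body raises, so a forgotten reset can no longer leak across tests.
    global _default_dtype
    with _dtype_lock:
        previous = _default_dtype
        _default_dtype = dtype
        try:
            yield
        finally:
            _default_dtype = previous
```

The trade-off of this design is that tests depending on the global run one at a time, which is the only safe option for truly process-wide state; tests that never enter the context manager remain free to run concurrently.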
@gchanan | module: tests,triaged,enhancement | low | Minor |
522,087,890 | TypeScript | Warning or error when importing files from a project's own outDir | ## Search Terms
import, outDir, composit, TS5055
## Suggestion
Add a specific warning/error when trying to import something from the project's own outDir.
## Use Cases
Help track down files that wrongly import from the project's own dist directory.
I don't know if there are valid use cases where you would want to do this but I can only think of (refactor) errors.
## Examples
In a big refactor I moved a class from a project to a shared directory (compiled with composite: true, imported into projects with "references").
I made the following change without thinking because it satisfied the compiler.
```diff
- import mongoose from "../../shared/dist/mongo";
+ import mongoose from "../dist/mongo";
```
Everything seemed to work fine (no error was shown in vscode on the import line itself, and the now shared class worked) until the next week I notice the following errors when compiling the shared dir:
```
error TS5055: Cannot write file '.../shared/dist/interfaces/branch.d.ts' because it would overwrite input file.
error TS5055: Cannot write ...
```
I did not notice these errors while doing the refactor itself because the `tsc -w` for the shared directory was running in a forgotten terminal tab. A warning or error on the import line itself when doing something like this would save a lot of debugging time.
## Checklist
My suggestion meets these guidelines: (this is my guess, don't know exactly what the change would entail)
* [ ] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback | low | Critical |
522,119,415 | pytorch | Slow (20-50x) RNN tutorial/example when torch is installed using pip comp. to conda installation | ## ❓ Questions and Help
The example https://pytorch.org/tutorials/intermediate/char_rnn_classification_tutorial.html
runs slow on pytorch installations 1.[320].x[+cudaxxx ]
I first noticed this behavior on a really old version of torch, then updated. The example ran 50 times faster on a Colab notebook and on a Windows machine than on my workstation. I then tried different environments in Docker and found that the behavior is observable in all environments, with and without NVIDIA CUDA and across different distributions, when using pip to install torch.
Conda installations are running fine.
Did I do something wrong?
cc @ezyang @VitalyFedyunin @ngimel @mruberry @zou3519 | module: binaries,module: performance,module: rnn,triaged,module: mkl | medium | Major |
522,203,579 | flutter | TimePickerDialog is the wrong kind and behavior | ## Steps to Reproduce
```dart
showTimePicker( context: context)
```
1. A white line is visible below - the dialog does not cover the entire screen. You can set SystemUiOverlayStyle to the desired state before opening the dialog, but why can't this be the default?
2. Gesture navigation is enabled on my phone - that's why the dialog behaves strangely.
3. The system TimePicker vibrates every time the selected hour or minute changes. Why is this not the case in the current implementation?
```dart
const Duration _kVibrateCommitDelay = Duration(milliseconds: 100);
void _vibrate() {
switch (Theme.of(context).platform) {
case TargetPlatform.android:
case TargetPlatform.fuchsia:
_vibrateTimer?.cancel();
_vibrateTimer = Timer(_kVibrateCommitDelay, () {
HapticFeedback.vibrate();
_vibrateTimer = null;
});
break;
case TargetPlatform.iOS:
break;
}
}
```

**Target Platform: Android**
**Target OS version/browser: 10**
**Devices: Google Pixel 4**
## Logs
```
[✓] Flutter (Channel stable, v1.9.1+hotfix.6, on Mac OS X 10.15.1 19B88, locale en-BY)
• Flutter version 1.9.1+hotfix.6 at /Users/nikitadol/development/flutter
• Framework revision 68587a0916 (9 weeks ago), 2019-09-13 19:46:58 -0700
• Engine revision b863200c37
• Dart version 2.5.0
[!] Android toolchain - develop for Android devices (Android SDK version 29.0.2)
• Android SDK at /Users/nikitadol/Library/Android/sdk
• Android NDK location not configured (optional; useful for native profiling support)
• Platform android-29, build-tools 29.0.2
• Java binary at: /Users/nikitadol/Library/Application Support/JetBrains/Toolbox/apps/AndroidStudio/ch-0/191.5977832/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b49-5587405)
! Some Android licenses not accepted. To resolve this, run: flutter doctor --android-licenses
[✓] Xcode - develop for iOS and macOS (Xcode 11.2)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Xcode 11.2, Build version 11B52
• CocoaPods version 1.8.3
[✓] Android Studio (version 3.5)
• Android Studio at /Users/nikitadol/Library/Application Support/JetBrains/Toolbox/apps/AndroidStudio/ch-0/191.5977832/Android Studio.app/Contents
• Flutter plugin version 40.2.2
• Dart plugin version 191.8580
• Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b49-5587405)
[✓] IntelliJ IDEA Ultimate Edition (version 2019.2.4)
• IntelliJ at /Users/nikitadol/Applications/JetBrains Toolbox/IntelliJ IDEA Ultimate.app
• Flutter plugin version 39.0.5
• Dart plugin version 192.6603.23
[✓] Connected device (1 available)
• Pixel 4 • 99161FFAZ00D3L • android-arm64 • Android 10 (API 29)
! Doctor found issues in 1 category.
```
| framework,f: material design,f: date/time picker,a: fidelity,a: quality,has reproducible steps,P2,found in release: 3.3,found in release: 3.5,team-design,triaged-design | low | Major |
522,220,108 | godot | Tilemap not displaying correctly with vertex shader | **Godot version:**
3.2 beta
**OS/device including version:**
mac / android
**Issue description:**
I have a tilemap

the solid red area is initially off the screen and not visible. By using a vertex shader to apply a barrel distortion of sorts, the solid red area will ordinarily become visible.

This is working OK for me in 3.1.1 on my Mac and Android devices with the default quadrant size of 16.
In 3.2 beta 1 I am seeing this

It's always like that on my Android devices, but sometimes it's OK on the Mac until you resize the screen, at which point the left solid portion flickers on and off. I have tried changing the quadrant size but was unable to get it working as it does in 3.1.1.
**Steps to reproduce:**
run the attached project, if it looks okay try resizing the screen as well.
**Minimal reproduction project:**
[BARRELDIST.zip](https://github.com/godotengine/godot/files/3841259/BARRELDIST.zip)
| bug,topic:rendering,confirmed,regression | low | Minor |
522,221,184 | rust | ICE: Adding `-C save-temps` to incremental compile causes rustc_codegen_ssa::back::write panic | Steps to reproduce:
```
% mkdir temp
% cd temp
% touch lib.rs
% rustc +nightly lib.rs --crate-type=lib -C incremental=incr
% rustc +nightly lib.rs --crate-type=lib -C incremental=incr -C save-temps
thread '<unnamed>' panicked at 'assertion failed: `(left == right)`
left: `false`,
right: `true`', src/librustc_codegen_ssa/back/write.rs:872:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace.
error: internal compiler error: unexpected panic
note: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/blob/master/CONTRIBUTING.md#bug-reports
note: rustc 1.40.0-nightly (50f8aadd7 2019-11-07) running on x86_64-unknown-linux-gnu
note: compiler flags: -C incremental -C save-temps --crate-type lib
thread '<unnamed>' panicked at 'src/librustc_codegen_ssa/back/write.rs:1488: worker thread panicked', src/librustc/util/bug.rs:37:26
error: internal compiler error: unexpected panic
note: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/blob/master/CONTRIBUTING.md#bug-reports
note: rustc 1.40.0-nightly (50f8aadd7 2019-11-07) running on x86_64-unknown-linux-gnu
note: compiler flags: -C incremental -C save-temps --crate-type lib
thread 'rustc' panicked at 'src/librustc_codegen_ssa/back/write.rs:1758: panic during codegen/LLVM phase', src/librustc/util/bug.rs:\
37:26
error: internal compiler error: unexpected panic
note: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/blob/master/CONTRIBUTING.md#bug-reports
note: rustc 1.40.0-nightly (50f8aadd7 2019-11-07) running on x86_64-unknown-linux-gnu
note: compiler flags: -C incremental -C save-temps --crate-type lib
``` | I-ICE,E-needs-test,P-medium,T-compiler,A-incr-comp,C-bug,glacier | low | Critical |
522,224,646 | TypeScript | Incremental type narrowing, enabling gradual object initialization |
## Search Terms
delayed initialization, gradual initialization
## Suggestion
What if, on every assignment to either a variable or a property, the type of that value were narrowed? It's probably easiest explained with an example:
## Use Cases
It means that this code for gradually initializing an object would just work:
```typescript
type Person = { firstName: string, surname: string }
function f() : Person {
const p : Partial<Person> = {}
p.firstName = ""
p.surname = ""
return p; // Error: Type 'Partial<Person>' is not assignable to type 'Person'
}
```
Currently this code does not type check because the compiler doesn't change the type of `p` after the assignments to its properties.
My thinking is that, for any value `x` of type `X : { a : unknown }`,
```typescript
x.a = "some string"
```
would result in `x` having its type narrowed to:
```typescript
X & { a : string }
```
This means that the partial initialization example would work. If it works for variables as well as properties then so would this:
```typescript
function f(mode: "read" | "read/write") {
// ... Code ...
}
let s: string;
s = "read"
f(s)
```
## Checklist
My suggestion meets these guidelines:
* [X] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [X] This wouldn't change the runtime behavior of existing JavaScript code
* [X] This could be implemented without emitting different JS based on the types of the expressions
* [X] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [X] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback | low | Critical |
522,260,050 | go | cmd/compile: encoded pkg path shown in stack trace |
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version devel +99957b6930 Tue Nov 12 05:35:33 2019 +0000 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/myitcv/.cache/go-build"
GOENV="/home/myitcv/.config/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOINSECURE=""
GONOPROXY=""
GONOSUMDB=""
GOOS="linux"
GOPATH="/home/myitcv/gostuff"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/home/myitcv/gos"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/home/myitcv/gos/pkg/tool/linux_amd64"
GCCGO="gccgo"
AR="ar"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build670035236=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
Run the following [`testscript`](https://godoc.org/github.com/rogpeppe/go-internal/cmd/testscript) repro:
```
go run main.go
-- go.mod --
module mod.com
go 1.14
require (
golang.org/x/net v0.0.0-20190620200207-3b0461eec859 // indirect
gopkg.in/tomb.v2 v2.0.0-20161208151619-d5d1b5820637
)
-- main.go --
package main
import "gopkg.in/tomb.v2"
func main() {
done := make(chan struct{})
var t tomb.Tomb
t.Go(func() error {
panic("here")
close(done)
return nil
})
<-done
}
```
### What did you expect to see?
```
panic: here
goroutine 18 [running]:
main.main.func1(0x0, 0x0)
/home/myitcv/gostuff/src/github.com/myitcv/playground/main.go:9 +0x39
gopkg.in/tomb.v2.(*Tomb).run(0xc00018e000, 0xc00018a060)
/home/myitcv/gostuff/pkg/mod/gopkg.in/[email protected]/tomb.go:163 +0x38
created by gopkg.in/tomb.v2.(*Tomb).Go
/home/myitcv/gostuff/pkg/mod/gopkg.in/[email protected]/tomb.go:159 +0xba
exit status 2
```
### What did you see instead?
```
panic: here
goroutine 18 [running]:
main.main.func1(0x0, 0x0)
/home/myitcv/gostuff/src/github.com/myitcv/playground/main.go:9 +0x39
gopkg.in/tomb%2ev2.(*Tomb).run(0xc00018e000, 0xc00018a060)
/home/myitcv/gostuff/pkg/mod/gopkg.in/[email protected]/tomb.go:163 +0x38
created by gopkg.in/tomb%2ev2.(*Tomb).Go
/home/myitcv/gostuff/pkg/mod/gopkg.in/[email protected]/tomb.go:159 +0xba
exit status 2
```
Note the `%2e` in the stack trace.
cc @bcmills @randall77 @jayconrod | NeedsInvestigation | low | Critical |
522,293,326 | java-design-patterns | Microkernel architecture | ## Description
The Microkernel Architecture design pattern, also known as the plug-in architecture, is ideal for product-based applications that need to extend their core functionalities through plug-ins. This pattern is characterized by a minimal core system that contains the essential functionality of the application, and additional features or customizations are provided through independent plug-in modules. This approach promotes extensibility, flexibility, and maintainability of the software by isolating the core system from the custom processing logic.
### Main Elements:
1. **Core System**: The foundational part of the application containing the minimal functionality required to run the system.
2. **Plug-in Modules**: Independent components that add specialized processing or additional features to the core system. They should remain isolated from each other to avoid dependencies.
3. **Plug-in Registry**: A system that manages the available plug-in modules, providing information such as their names, data contracts, and access protocols.
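The three elements can be sketched as follows. Python is used as pseudocode here since the repository itself is Java, and all names are illustrative, not a proposed API:

```python
class PluginRegistry:
    """Tracks available plug-in modules so the core can discover them."""
    def __init__(self):
        self._plugins = {}

    def register(self, name, plugin):
        self._plugins[name] = plugin

    def get(self, name):
        return self._plugins.get(name)

class CoreSystem:
    """Minimal core: delegates specialized processing to registered plug-ins,
    and keeps working (pass-through) when no plug-in is installed."""
    def __init__(self, registry):
        self.registry = registry

    def process(self, name, data):
        plugin = self.registry.get(name)
        return plugin(data) if plugin else data

# Plug-ins are independent callables; they know nothing about each other.
registry = PluginRegistry()
registry.register("shout", lambda s: s.upper())
core = CoreSystem(registry)
```

The key property to preserve in a real implementation is the one-way dependency: plug-ins depend on the core's contract, never on each other.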
## References
1. Priyal Walpita's article on Microkernel Architecture: [Software Architecture Patterns: Microkernel Architecture](https://priyalwalpita.medium.com/software-architecture-patterns-microkernel-architecture-97cee200264e)
2. ["Software Architecture Patterns" by Mark Richards: Chapter 3 (Microkernel Architecture)](https://github.com/iluwatar/java-design-patterns/files/3841915/Software-Architecture-Patterns.pdf)
## Acceptance Criteria
1. **Core System Implementation**: Develop a core system with essential functionalities and a mechanism to interact with plug-in modules.
2. **Plug-in Module Integration**: Create at least two plug-in modules that demonstrate the extensibility of the core system by providing additional features.
3. **Plug-in Registry**: Implement a plug-in registry to manage the available plug-in modules, ensuring they can be dynamically discovered and loaded by the core system.
| info: help wanted,epic: pattern,epic: architecture,type: feature | low | Minor |
522,297,005 | java-design-patterns | Space-based architecture | ## Description
The Space-Based Architecture design pattern is aimed at addressing scalability and concurrency issues by minimizing factors that limit application scaling. This pattern is ideal for applications with high user load and variable concurrent user volumes. The key components include processing units and virtualized middleware, with the primary aim of distributing and replicating data in-memory across processing units, eliminating the traditional database bottleneck.
### Main Elements of the Pattern
1. **Processing Units**: These contain application components, in-memory data grids, optional asynchronous persistent stores for failover, and a data replication engine. They can be dynamically started or stopped based on user load.
2. **Virtualized Middleware**: This component includes:
- **Messaging Grid**: Manages request routing to available processing units.
- **Data Grid**: Manages data replication across processing units.
- **Processing Grid** (optional): Manages distributed request processing across different processing units.
- **Deployment Manager**: Monitors load conditions to dynamically scale processing units up or down.
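A toy illustration of the data grid replicating in-memory writes across processing units (Python as pseudocode; names are hypothetical, and real middleware additionally handles partitioning, failover, and asynchronous persistence):

```python
class ProcessingUnit:
    """Holds a partition of the in-memory data grid plus application code."""
    def __init__(self):
        self.data = {}

class DataGrid:
    """Replicates every write to all processing units, so any unit can
    serve any request without touching a central database."""
    def __init__(self, units):
        self.units = units

    def put(self, key, value):
        for unit in self.units:
            unit.data[key] = value

    def get(self, key, unit_index=0):
        # reads are local to whichever unit handled the request
        return self.units[unit_index].data[key]

units = [ProcessingUnit() for _ in range(3)]
grid = DataGrid(units)
grid.put("session:42", {"user": "ada"})
```

Because every unit holds a full replica here, the deployment manager can start or stop units under load without any data migration, which is what gives the pattern its elastic scalability.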
## References
- [Space-Based Architecture](https://en.wikipedia.org/wiki/Space-based_architecture)
- [Richards, Mark. "Software Architecture Patterns". O’Reilly Media, 2015.](https://github.com/iluwatar/java-design-patterns/files/3841915/Software-Architecture-Patterns.pdf)
## Acceptance Criteria
1. **Implementation of Processing Units**: Develop processing units that include application modules, in-memory data grids, and data replication engines.
2. **Middleware Components**: Implement virtualized middleware components such as messaging grid, data grid, and deployment manager to manage request routing, data replication, and dynamic scaling.
3. **Scalability and Performance Testing**: Conduct extensive testing to ensure the system can handle high concurrency and variable loads, achieving near-infinite scalability and high performance.
| info: help wanted,epic: pattern,epic: architecture,type: feature | low | Major |
522,321,138 | svelte | Examples need documentation | Some of the examples on the Svelte website are straightforward/intuitive but others need some explanation/documentation. For example, the "Deferred transitions" example is excellent but there are a lot of things going on and it's difficult to understand. In particular, what is the essential lesson of the example vs. understanding all the code needed to demonstrate it.
Therefore, I recommend:
- Review of each example
- Adding a README file to each example. Perhaps a README.svelte file could be added to each example.
- Adding more comments to each example.
e.g.
**App.svelte**
```
<script>
import README from './README.svelte';
let name = 'world';
</script>
<README />
<h1>Hello {name}!</h1>
```
**README.svelte**
```
<hr>
<h1>
README - Hello world
</h1>
This example shows how a variable in the script section can be used in a HTML template.
```

| stale-bot,temp-stale,documentation | low | Minor |
522,336,536 | react | "Should not already be working" in Firefox after a breakpoint/alert | **Do you want to request a *feature* or report a *bug*?**
Bug
**What is the current behavior?**
I'm seeing "Error: Should not already be working" after upgrading to React 16.11
**Which versions of React, and which browser / OS are affected by this issue? Did this work in previous versions of React?**
This is exclusively happening on an older version of Chrome, 68.0.3440 on Windows 7
I was unable to reproduce this in a VM environment but our Sentry is getting littered with these errors.
I know it's a long shot, but I wasn't able to find any information about this error anywhere, just a reference in the error codes file in React, so I thought it would be a good idea to report this just in case. I'm curious if anyone has seen this.
| Type: Bug,Difficulty: medium,Type: Needs Investigation,good first issue | high | Critical |
522,372,591 | go | net: better docs for PreferGo in Resolver | Guys, how are you?
I have a suggestion to improve doc about parameter `PreferGo` on struct Resolver based on a comment in another issue. https://github.com/golang/go/issues/19268#issuecomment-345384876
Suggestion adds more a small text:
```
When you set PreferGo to true, your Dial function will be used.
If you set PreferGo to false, your Dial function is ignored and
"/etc/resolv.conf" is used directly.
```
Reference Doc:
https://github.com/golang/go/blob/a38a917aee626a9b9d5ce2b93964f586bf759ea0/src/net/lookup.go#L120 | Documentation,help wanted,NeedsInvestigation | low | Major |
522,381,849 | pytorch | Tensor.nbytes() returns itemsize * numel for sparse tensors | scipy sparse tensors don't have an "nbytes". I guess you could argue for this or doing nnz * itemsize, but it seems best to just error in this case.
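The behavior being argued for, sketched against a toy stand-in rather than the real torch API (the stand-in's field names are illustrative):

```python
from collections import namedtuple

# Toy stand-in for a tensor; not the torch API.
Tensor = namedtuple("Tensor", ["layout", "numel", "itemsize"])

def nbytes(t):
    # Refuse to answer for sparse layouts instead of returning a
    # misleading itemsize * numel (which counts logical elements,
    # not stored ones).
    if t.layout != "strided":
        raise RuntimeError("nbytes is not well-defined for layout %r" % (t.layout,))
    return t.numel * t.itemsize
```

For a dense tensor the product is exactly the buffer size, so the check only changes behavior on the sparse path where the current answer is wrong.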
cc @ezyang @SsnL @albanD @zou3519 @gqchen @vincentqb | module: sparse,module: autograd,triaged,actionable,fixathon | low | Critical |
522,401,369 | pytorch | [build] gcc 7.4 needs CMAKE_CXX_FLAGS="-std=gnu++11" | ## 🐛 Bug
Since I can't use conda gcc 7.3 (https://github.com/pytorch/pytorch/issues/29093), I tried to build master with system gcc `7.4` and met
```
CMake Error at third_party/fbgemm/third_party/asmjit/CMakeLists.txt:100 (target_compile_features):
target_compile_features no known features for CXX compiler
"GNU"
version 7.4.0.
Call Stack (most recent call first):
third_party/fbgemm/third_party/asmjit/CMakeLists.txt:332 (asmjit_add_target)
```
I was able to fix it by explicitly selecting the C++11 standard, i.e., adding
```
CMAKE_CXX_FLAGS="-std=gnu++11",
```
to https://github.com/pytorch/pytorch/blob/f0dd7517f2c71a1ae566e75d935d6aefd651ff72/tools/setup_helpers/cmake.py#L289
Maybe I should submit this issue to `asmjit` repo? | module: dependency bug,module: build,triaged | low | Critical |