id | repo | title | body | labels | priority | severity
---|---|---|---|---|---|---|
481,832,743 | TypeScript | Provide TypeScript as an ESM | ## Search Terms
commonjs, esm, lib/typescript
## Suggestion
Currently `lib/typescript.js` and other `lib` files do not support being loaded as ES modules. They only support loading as a global script or a CommonJS module.
## Use Cases
Runtimes that want to load TypeScript as a module, and not into the global namespace, have to do some pre-processing with a bundler like webpack, rollup, etc. to be able to load TypeScript as an ES module.
In addition, there are useful CDNs like [Pika](https://www.pika.dev/cdn) which can parse npm packages, find the ES modules, and host an optimised version designed for loading in modern browsers.
## Examples
For example, it is currently impossible to load `lib/typescript.js` in Deno, as Deno only supports ESM and each `import` is assumed to be a module, so the `var ts` ends up scoped to the module. Loading TypeScript directly as a module in a browser would also become possible.
## Checklist
My suggestion meets these guidelines:
* [X] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [X] This wouldn't change the runtime behavior of existing JavaScript code
* [X] This could be implemented without emitting different JS based on the types of the expressions
* [X] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [X] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback | medium | Major |
481,835,570 | neovim | Excessive memory usage with large job output (OOM) | Given t-oom.vim:
```vim
function! F(...) abort
endfunction
let j = jobstart(
\ 'rg -H --no-heading --vimgrep --sort-files .', {
\ 'on_stdout': function('F'),
\ })
echom j
echom string(jobwait([j]))
```
Run it using `nvim -u t-oom.vim` in Neovim's source directory.
After ~15s it gets killed by the OOM killer for me.
> Out of memory: Killed process 2782 (nvim) total-vm:16207288kB, anon-rss:13903108kB, file-rss:2544kB, shmem-rss:0kB
This basically gets triggered through ripgrep's `--vimgrep` option repeating
lines for every match ("." in this case).
This produces a lot of output, but it seems like it does not get freed
internally after being passed to the output handler?
Using `stdout_buffered` results in a quicker OOM.
(I initially noticed this with vim-grepper, where it was triggered by a
legitimate query that apparently matched many times in JSON fixtures with very
long lines. In that case the output gets added to the quickfix list a lot, but
there it is limited to 1024 bytes/chars.)
This is on master (e56f62e9a). Neovim 0.3.8 shows the same behavior. | bug,needs:design,job-control,needs:discussion | low | Major |
481,852,809 | rust | rustc hangs/spins on example involving associated types | The following 5-line example causes `rustc` to spin:
```rust
pub struct Chicken<'a, T: Trait>(&'a Frog<'a, T::Item>);
pub struct Frog<'a, T: Trait>(&'a Chicken<'a, T>);
pub trait Trait {
type Item;
}
```
The spin happens with or without the `--edition=2018` flag.
This may be the same issue as #62430, just an even simpler reproduction case.
Output of `rustc --version --verbose`:
```
rustc 1.36.0 (a53f9df32 2019-07-03)
binary: rustc
commit-hash: a53f9df32fbb0b5f4382caaad8f1a46f36ea887c
commit-date: 2019-07-03
host: x86_64-apple-darwin
release: 1.36.0
LLVM version: 8.0
```
Using the [Playground](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2018&gist=48ead29154c28e6dd4d0384e4569782e), I also reproduced the issue with stable Rust 1.37.0, beta Rust 1.38.0-beta.1, and nightly Rust 1.39.0 2019-08-15. | A-trait-system,A-associated-items,T-compiler,C-bug,I-hang | low | Critical |
481,866,654 | rust | Non-exhaustive patterns lead to bogus "unused variable" / "dead code" warnings | Consider this code:
```rust
fn foo() -> Option<i32> { None }
pub fn bar(x: i32) {
match foo() {
Some(_) => return,
}
let _val = Box::new(x);
}
```
This generates the following diagnostics on current nightly:
```
warning: unreachable statement
--> src/lib.rs:8:5
|
8 | let _val = Box::new(x);
| ^^^^^^^^^^^^^^^^^^^^^^^
|
= note: `#[warn(unreachable_code)]` on by default
error[E0004]: non-exhaustive patterns: `None` not covered
--> src/lib.rs:4:11
|
4 | match foo() {
| ^^^^^ pattern `None` not covered
|
= help: ensure that all possible cases are being handled, possibly by adding wildcards or more match arms
warning: unused variable: `x`
--> src/lib.rs:3:8
|
3 | pub fn bar(x: i32) {
| ^ help: consider prefixing with an underscore: `_x`
|
= note: `#[warn(unused_variables)]` on by default
```
The "unused variable" and "dead code" lints are spurious, and not helpful. This only seems like dead code because there's a missing match arm. After adding `_ => {}` in the match, all diagnostics go away. | A-lints,T-compiler,C-bug | low | Critical |
481,869,668 | flutter | Flutter can not render calendar widgets in a single frame at 60fps | ## Details
I was trying to build a calendar app and I found that Flutter can not build calendars in an appropriate time (under 16ms), and using multiple calendar widgets on one page causes jank. The widget that I'm trying to build is a page that shows a complete year view. It's a common design in calendar apps such as Samsung calendar:

Here is a simple code and also I didn't include the logic. Every calendar widget is a simple combination of `Column`s and `Row`s.
```dart
import 'package:flutter/material.dart';
void main() => runApp(MyApp());
class MyApp extends StatelessWidget {
// This widget is the root of your application.
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
theme: ThemeData(
primarySwatch: Colors.blue,
),
home: TestScreen(),
);
}
}
class TestScreen extends StatelessWidget {
TestScreen() : controller = PageController(initialPage: 1000);
final PageController controller;
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: Text('Flutter Demo'),
),
body: PageView.builder(
controller: controller,
itemBuilder: (context, index) => YearWidget(),
)
);
}
}
class YearWidget extends StatelessWidget {
@override
Widget build(BuildContext context) {
return GridView.builder(
itemCount: 12,
addAutomaticKeepAlives: true,
gridDelegate: SliverGridDelegateWithFixedCrossAxisCount(
crossAxisCount: 2,
),
itemBuilder: (BuildContext context, int index) => CalendarWidget()
);
}
}
class CalendarWidget extends StatelessWidget {
@override
Widget build(BuildContext context) {
List<Widget> rows = <Widget>[];
for (var i = 0; i < 5; i++) {
rows.add(
Row(
children: List<Widget>.generate(7,
(int j) => Expanded(
child: Text((i * 7 + j + 1).toString(),
textAlign: TextAlign.center,),)),
)
);
}
return Container(
padding: EdgeInsets.all(10),
child: Column(
children: rows,
),
);
}
}
```
As you can see, the code is not very complicated, yet the performance overlay looks like this. Some frames take over 100ms to build:

It is also interesting that the Flutter date picker has a similar issue and can not render frames under 16ms:

## Logs
Output of `flutter analyze`:
```
Analyzing flutter_performance_issue...
No issues found! (ran in 20.8s)
```
Output of `flutter doctor -v`, with the device plugged in:
```
[√] Flutter (Channel stable, v1.7.8+hotfix.4, on Microsoft Windows [Version
10.0.18362.10013], locale en-US)
• Flutter version 1.7.8+hotfix.4 at C:\tools\flutter-sdk
• Framework revision 20e59316b8 (4 weeks ago), 2019-07-18 20:04:33 -0700
• Engine revision fee001c93f
• Dart version 2.4.0
[√] Android toolchain - develop for Android devices (Android SDK version 28.0.3) • Android SDK at C:\dev\Android\SDK\android-sdk-essential
• Android NDK location not configured (optional; useful for native profiling support)
• Platform android-28, build-tools 28.0.3
• ANDROID_HOME = C:\dev\Android\SDK\android-sdk-essential
• Java binary at: C:\Program Files\Android\Android Studio\jre\bin\java
• Java version OpenJDK Runtime Environment (build
1.8.0_152-release-1343-b01)
• All Android licenses accepted.
[√] Android Studio (version 3.4)
• Android Studio at C:\Program Files\Android\Android Studio
• Flutter plugin version 35.3.1
• Dart plugin version 183.6270
• Java version OpenJDK Runtime Environment (build
1.8.0_152-release-1343-b01)
[!] IntelliJ IDEA Ultimate Edition (version 2018.2)
• IntelliJ at C:\Program Files\JetBrains\IntelliJ IDEA 2018.2.5
X Flutter plugin not installed; this adds Flutter specific functionality.
X Dart plugin not installed; this adds Dart specific functionality.
• For information about installing plugins, see
https://flutter.dev/intellij-setup/#installing-the-plugins
[√] VS Code (version 1.37.1)
• VS Code at C:\Users\mahdi\AppData\Local\Programs\Microsoft VS Code
• Flutter extension version 2.22.3
[√] VS Code, 64-bit edition (version 1.34.0)
• VS Code at C:\Program Files\Microsoft VS Code
• Flutter extension version 2.22.3
[√] Connected device (1 available)
• SM G950F • 988a1b315143304a34 • android-arm64 • Android 9 (API 28)
! Doctor found issues in 1 category.
```
| platform-android,framework,c: performance,f: date/time picker,has reproducible steps,P2,found in release: 3.0,found in release: 3.1,team-android,triaged-android | low | Critical |
481,884,000 | scrcpy | Scripts to unlock the phone and to swipe up - mainly for non-touchscreen devices | I am testing scrcpy on a Lenovo T430 without a touchscreen. My phone is locked, and these are the steps to unlock it:
adb shell input keyevent 26 && adb shell input swipe 200 500 200 0 && adb shell input text "your pin"
The second command
adb shell input swipe 200 500 200 0
allows me to perform a swipe-up action, which is needed to get to the app drawer.
It could be useful to bind such events to a preconfigured key combination.
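The three adb invocations above could be wrapped in a small script. Here is a sketch in Python that only builds the argument lists (the coordinates and the PIN are placeholders taken from the steps above; the function name is invented, and actually running the commands is left out):

```python
def unlock_commands(pin, swipe=(200, 500, 200, 0)):
    """Build the three adb commands from the steps above as argument lists."""
    x1, y1, x2, y2 = swipe
    return [
        ["adb", "shell", "input", "keyevent", "26"],  # power/wake key
        ["adb", "shell", "input", "swipe", str(x1), str(y1), str(x2), str(y2)],
        ["adb", "shell", "input", "text", pin],
    ]
```

Each list could then be passed to `subprocess.run`, or the whole sequence bound to a key combination.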
Very nice piece of software, anyway.
| feature request | low | Minor |
481,889,605 | youtube-dl | --max-filesize may not work | ## Checklist
- [x] I'm reporting a broken site support issue
- [x] I've verified that I'm running youtube-dl version **2019.08.13**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [x] I've searched the bugtracker for similar bug reports including closed ones
- [x] I've read bugs section in FAQ
## Verbose log
```
:~/test# youtube-dl --max-filesize 20m https://www.youtube.com/watch?v=h2zkV-l_TbY
[youtube] h2zkV-l_TbY: Downloading webpage
[youtube] h2zkV-l_TbY: Downloading video info webpage
[download] Destination: ☕ Restaurant Ambience • 10H Busy Coffee Shop Background Noise-h2zkV-l_TbY.f313.webm
[download] 0.2% of 5.55GiB at 5.91MiB/s ETA 15:59^C
ERROR: Interrupted by user
```
## Description
Hello,
I noticed that the `--max-filesize` option no longer works with an updated youtube-dl.
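For reference, this is the behaviour I would expect, sketched generically in Python (this is not youtube-dl's actual code and the function name is invented): the cap should abort the download as soon as the accumulated size exceeds it, regardless of the download method.

```python
def stream_with_limit(chunks, max_filesize=None):
    # Abort as soon as the size cap would be exceeded,
    # instead of downloading the whole file.
    total = 0
    out = []
    for chunk in chunks:
        total += len(chunk)
        if max_filesize is not None and total > max_filesize:
            raise ValueError("exceeds --max-filesize, aborting")
        out.append(chunk)
    return b"".join(out)
```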
| bug,mpd,hls | low | Critical |
481,906,087 | rust | Some closures are not inlined in release mode | Consider the following code ([playground](https://play.rust-lang.org/?version=stable&mode=release&edition=2018&gist=e58aaff523a8948c15e6f60d11efd952)):
```
fn main() {
let err = Err(());
let _: usize = err.unwrap_or_else(|err| err_exit(err));
unreachable!();
}
fn err_exit(_: ()) -> ! {
std::process::exit(1);
}
```
When compiled with rustc 1.36, it gives the following assembly:
```
core::result::Result<T,E>::unwrap_or_else:
pushq %rax
callq playground::main::{{closure}}
ud2
playground::main:
pushq %rax
callq core::result::Result<T,E>::unwrap_or_else
ud2
playground::main::{{closure}}:
pushq %rax
callq playground::err_exit
ud2
playground::err_exit:
pushq %rax
movl $1, %edi
callq *std::process::exit@GOTPCREL(%rip)
ud2
```
Note how the closure is not inlined, even though it would be trivial to do so (replace ` callq playground::main::{{closure}}` with ` callq playground::err_exit`). | A-LLVM,I-slow,A-codegen,T-compiler,C-bug | low | Major |
481,908,111 | rust | Local rustdoc search doesn't work with --no-deps | I'm having an issue that is similar to https://github.com/rust-lang/docs.rs/issues/316, but it's about local docs.
I'm using `cargo doc --no-deps --open`, and on the opened page the search does not work. It shows "Loading search results..." and doesn't complete, just like in the link above.
The JS console says: `TypeError: paths is undefined (main.js:95:15458)`, and in the JS debugger I see the following: It's trying to access `rawSearchIndex['byteorder'].p`, but `rawSearchIndex['byteorder']` does not have `p`. It looks like the following:
```
rawSearchIndex['byteorder']: {…}
doc: "This crate provides convenience methods for encoding and decoding numbers in either [big-endian or little-endian order]."
items: (265) […]
paths: (5) […]
<prototype>: {…}
```
However, for my own crate it works correctly:
```
rawSearchIndex['climeta']: {…}
doc: ""
i: (2184) […]
p: (143) […]
<prototype>: {…}
```
Actually I guess there should be nothing about `byteorder` at all in the index, because I built the docs with `--no-deps`, and alas, when building without `--no-deps`, the search works!
Now, I tried to build again with `--no-deps`, and it still works. Apparently it was trying to load some old index file that had an outdated format ...
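For what it's worth, the two index shapes seen in the debugger differ only in key names. A loader tolerating both (a Python sketch using the field names from the debugger output above; rustdoc's actual loader is JavaScript) would sidestep the stale-file problem:

```python
def normalize_search_entry(entry):
    # Accept both the old long keys ("items"/"paths") and the
    # new short keys ("i"/"p") observed in the debugger.
    return {
        "doc": entry.get("doc", ""),
        "items": entry.get("i", entry.get("items", [])),
        "paths": entry.get("p", entry.get("paths", [])),
    }
```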
It's great when a problem gets solved while writing the issue report 😆, but I think this issue might still serve a purpose, so I'll open it anyway, if only for others to find it. And maybe rustdoc should try harder to not load outdated index files. | T-rustdoc,E-needs-test,C-bug,A-rustdoc-search | low | Critical |
481,915,659 | TypeScript | test-262 integration | ## Search Terms
test-262, test262
## Suggestion
Run [test-262](https://github.com/tc39/test262) on TS downlevel
## Use Cases
> What do you want to use this for?
- Verify that downleveling is spec-compliant to avoid issues when changing the test target.
- Possible benefits for TS team:
- Reduce triage overhead by catching semantic errors quickly
- Reduce duplicate effort by putting creative testing work into the test-262 project
## Examples
```sh
gulp runtests # runs relevant test-262 tests
```
## Checklist
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Experience Enhancement | low | Critical |
481,922,463 | go | proposal: runtime/pprof: add new WithLabels* function that requires fewer allocations | `runtime/pprof.Labels` is used in conjunction with `runtime/pprof.WithLabels` to set pprof labels in a context for performance profiling.
https://github.com/golang/go/blob/c485506b0aae298652448e80fca35036bfa755ac/src/runtime/pprof/label.go#L59
Adding information for fine-grained on-demand profiling of already running binaries should ideally be very efficient, so it can always stay enabled with minimal overhead. The current API could be made more efficient by requiring fewer heap allocations. Pprof information sourced from contexts added by census frameworks is used in large Go deployments on every RPC request, so small performance gains add up to a larger resource saving across many servers.
The current `runtime/pprof` API requires census frameworks such as OpenCensus to first convert their internal representation of key and value tag pairs (in a slice or map) to a slice of strings for input to `runtime/pprof.Labels`.
https://github.com/census-instrumentation/opencensus-go/blob/df6e2001952312404b06f5f6f03fcb4aec1648e5/tag/profile_19.go#L24
This requires at least one heap allocation for a variable number of labels. Then, internally, the `Labels` function constructs a `LabelSet` data structure, which requires another allocation (the case where this uses more than one allocation will be improved with [cl/181517](https://go-review.googlesource.com/c/go/+/181517)). All in all, this makes two heap allocations per context creation with pprof labels, which can potentially be avoided.
I propose to extend `runtime/pprof` to have an API that takes e.g. a mapping/iteration interface such that census frameworks can implement that interface on their internal tag representations (e.g. maps and slices with custom types) and `runtime/pprof` can then source the labels to be set in a new `runtime/pprof.WithLabels*` function without first requiring conversion between multiple internal and external data structures.
[cl/188499](https://go-review.googlesource.com/c/go/+/188499) is a quick prototype as an example of how this could look. Various other interface designs that reduce allocations are possible. Note that the `LabelSet` struct can't be changed to an interface itself (which would seem the cleaner approach) because that would not be API backwards compatible.
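To illustrate the proposed shape (sketched in Python for brevity; the real API would be Go, and all names here are invented): instead of flattening its tags into a string slice, the framework exposes an iteration callback that the pprof side can consume directly, with no intermediate slice:

```python
class TagMap:
    """Hypothetical census-framework tag container."""

    def __init__(self, tags):
        self._tags = dict(tags)

    def visit_labels(self, visit):
        # pprof-side code calls this instead of requiring an
        # intermediate flat list of alternating keys and values.
        for key, value in self._tags.items():
            visit(key, value)


def with_labels(source):
    """Stand-in for a WithLabels-style consumer on the pprof side."""
    labels = {}
    source.visit_labels(lambda k, v: labels.__setitem__(k, v))
    return labels
```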
/cc @aclements @randall77 @matloob | Proposal,Proposal-Hold | medium | Major |
481,927,301 | rust | Types in error message are incorrect when saying what trait is not implemented | When trying to implement `Product` for a newtype, the error message saying what I should implement was incorrect.
I tried this code:
```rust
struct MyInt(i32);
impl<__RhsT> ::core::ops::Mul<__RhsT> for MyInt
where
i32: ::core::ops::Mul<__RhsT, Output = i32>,
{
type Output = MyInt;
#[inline]
fn mul(self, rhs: __RhsT) -> MyInt {
MyInt(self.0.mul(rhs))
}
}
impl ::std::iter::Product for MyInt {
#[inline]
fn product<I: ::core::iter::Iterator<Item = Self>>(iter: I) -> Self {
iter.fold(
MyInt(::core::iter::empty::<i32>().product()),
::core::ops::Mul::mul,
)
}
}
```
I expected to see this happen:
It complains that there's no `Mul` implemented for `MyInt * MyInt`.
Instead, this happened: It complained about not being able to multiply `i32 * MyInt`
```
error[E0277]: cannot multiply `MyInt` to `i32`
--> tests/lib.rs:17:14
|
17 | iter.fold(
| ^^^^ no implementation for `i32 * MyInt`
|
= help: the trait `std::ops::Mul<MyInt>` is not implemented for `i32`
= note: required because of the requirements on the impl of `std::ops::Mul` for `MyInt`
error[E0277]: cannot multiply `MyInt` to `i32`
--> tests/lib.rs:19:13
|
19 | ::core::ops::Mul::mul,
| ^^^^^^^^^^^^^^^^^^^^^ no implementation for `i32 * MyInt`
|
= help: the trait `std::ops::Mul<MyInt>` is not implemented for `i32`
= note: required because of the requirements on the impl of `std::ops::Mul` for `MyInt`
error: aborting due to 2 previous errors
For more information about this error, try `rustc --explain E0277`.
error: Could not compile `derive_more`.
```
## The weird part
The correct error message is shown when you remove the unrelated Mul implementation.
```rust
struct MyInt(i32);
impl ::std::iter::Product for MyInt {
#[inline]
fn product<I: ::core::iter::Iterator<Item = Self>>(iter: I) -> Self {
iter.fold(
MyInt(::core::iter::empty::<i32>().product()),
::core::ops::Mul::mul,
)
}
}
```
## Meta
`rustc --version --verbose`:
rustc 1.36.0 (a53f9df32 2019-07-03)
binary: rustc
commit-hash: a53f9df32fbb0b5f4382caaad8f1a46f36ea887c
commit-date: 2019-07-03
host: x86_64-unknown-linux-gnu
release: 1.36.0
LLVM version: 8.0 | A-type-system,C-enhancement,A-diagnostics,T-compiler,A-suggestion-diagnostics,T-types | low | Critical |
481,942,333 | pytorch | Doesn't install the python module "torch" | I've built the FreeBSD package; here is the log: https://people.freebsd.org/~yuri/py36-pytorch-1.2.0.log
However, it didn't install the "torch" module, see the plist: https://people.freebsd.org/~yuri/py36-pytorch-1.2.0-plist.txt
The shar archive of the FreeBSD port: https://people.freebsd.org/~yuri/py-pytorch.shar
What could be wrong?
| module: build,triaged | low | Major |
481,959,329 | rust | Redundant semicolons are parsed as an empty tuple semicolon statement | ```rust
fn foo() {
let _ = 3;;;;
}
```
Given the above example, the rustc parser used to silently drop the redundant semicolons (`;;;`) from the AST. However, the parser recently changed how it treats the redundant semicolons: they are now parsed as a semicolon statement with an empty tuple:
```rust
Stmt {
node: StmtKind::Semi(Expr {
node: ExprKind::Tup(vec![]),
..
}),
..
}
```
Found this while updating rustc-ap-syntax to 562.0.0 (which is built from the commit fc8765d6d8623b2b5b4ca1d526ed1d7beb3fce18) in rustfmt. | A-parser,T-compiler,C-bug | low | Minor |
481,974,509 | rust | Warn about non-printable characters in string literals | Currently a string literal with control characters like `\0` or `\v` is accepted without any warnings. The only exception is `\r`, which gives a hard error.
It makes more sense to warn on all non-[printable](https://en.wikipedia.org/wiki/ASCII#Printable_characters) ASCII characters other than `\t` and `\n`.
Steps to fix:
1. Add `NonPrintableAscii` to [EscapeError](https://github.com/rust-lang/rust/blob/ef1ecbefb8719e408150738664d443a73f757ffd/src/librustc_lexer/src/unescape.rs#L11)
2. Produce this error somewhere around [here](https://github.com/rust-lang/rust/blob/ef1ecbefb8719e408150738664d443a73f757ffd/src/librustc_lexer/src/unescape.rs#L135-L137)
3. Add lexer-level [tests](https://github.com/rust-lang/rust/blob/ef1ecbefb8719e408150738664d443a73f757ffd/src/librustc_lexer/src/unescape/tests.rs)
4. Handle this "error" in [`unescape_error_reporting`](https://github.com/rust-lang/rust/blob/ef1ecbefb8719e408150738664d443a73f757ffd/src/libsyntax/parse/unescape_error_reporting.rs#L38). Note that, unlike other real errors, this one should be just a warning.
5. Adjust the affected ui tests
I am not sure how to make this warning work with `#[allow]` lint infrastructure: we definitely can't do this in the lexer.
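The predicate itself is simple. Here is a Python sketch of the check (the actual implementation would live in `librustc_lexer`; the function name is invented):

```python
def warn_non_printable(ch):
    # Warn on ASCII control characters and DEL, except the explicitly
    # allowed '\t' and '\n' ('\r' is already a hard error, not a warning).
    if ch in ("\t", "\n", "\r"):
        return False
    code = ord(ch)
    return code < 0x20 or code == 0x7F
```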
| C-enhancement,A-lints,A-diagnostics,A-parser,T-lang,T-compiler,E-medium | low | Critical |
481,989,089 | youtube-dl | Viki old subtitles sync |
## Checklist
- [x] I'm reporting a site feature request
- [x] I've verified that I'm running youtube-dl version **2019.08.13**
- [x] I've searched the bugtracker for similar site feature requests including closed ones
## Description
Some shows on the site Viki updated their subtitle links some time back because the original subtitles weren't synced properly. Some example shows are Dream High and Pinocchio.
For example:
[https://www.viki.com/videos/91001v-dream-high-episode-1](url) uses subtitles from the link [https://api.viki.io/v4/videos/91001v/subtitles/en.vtt?app=100000a&sig=8b1dd0339766408903c35af2f6cd1f4184af187c&stream_id=78495332&t=1566130938&token=kQ_BXDV5TLSHhRI9rKVRQmPru0024185034uti00j8h5lU000_01x](url),
but the one that gets downloaded is the other one. Both files are included in the zip; the one with "correct" in its name is the proper one. When comparing them you can see the difference in the timestamps.
[Dream High - 1x01 - Episode 1.en.forced.zip](https://github.com/ytdl-org/youtube-dl/files/3513013/Dream.High.-.1x01.-.Episode.1.en.forced.zip)
| subtitles,geo-restricted | low | Critical |
481,996,762 | go | x/crypto/scrypt: implementation not compliant with RFC 7914? | See: https://tools.ietf.org/html/rfc7914. In particular, [Section 2: scrypt Parameters](https://tools.ietf.org/html/rfc7914#section-2):
> The CPU/Memory cost parameter N ("costParameter") must be larger than 1, a power of 2, and less than 2^(128 * r / 8). The parallelization parameter p ("parallelizationParameter") is a positive integer less than or equal to ((2^32-1) * 32) / (128 * r).
Compare with: https://github.com/golang/crypto/blob/master/scrypt/scrypt.go#L200.
That is, it doesn't enforce `N < 2^(128 * r / 8)` as far as I can tell (I'm not fluent in Go). In the [Go playground](https://play.golang.org/p/M927ae0ZcQJ) I find that `N`'s upper limit when `r` is `1` is `16777215` such that `N=262144`, `r=1`, `p=8` won't cause scrypt to choke even though it should per the RFC.
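The Section 2 constraints are easy to encode. Here is a Python sketch of the missing validation (the function name is invented; the real check would go in the Go package):

```python
def rfc7914_params_ok(n, r, p):
    # N must be larger than 1, a power of 2, and less than 2^(128 * r / 8).
    if n <= 1 or n & (n - 1) != 0:
        return False
    if n >= 1 << (128 * r // 8):
        return False
    # p must be a positive integer <= ((2^32 - 1) * 32) / (128 * r).
    if p < 1 or p > ((1 << 32) - 1) * 32 // (128 * r):
        return False
    return True
```

With this check, the parameters from the report are rejected: `N=262144` is not below `2^16` when `r=1`.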
Context: https://github.com/ethereum/go-ethereum/issues/19977. | ExpertNeeded,NeedsInvestigation | low | Major |
482,000,301 | youtube-dl | Download from medici.tv as logged in user account |
## Checklist
- [x] I'm asking a question
- [x] I've looked through the README and FAQ for similar questions
- [x ] I've searched the bugtracker for similar questions including closed ones
## Question
I tried to download a video from medici.tv. The trailer video can be downloaded without any problem with youtube-dl. But I wanted to download the whole video, so I logged in to my account.
I also passed -u USERNAME and -p PASSWORD to youtube-dl, but still only the trailer video is downloaded. It seems that the trailer and the whole video have the same URL.
Thanks.
| question | low | Critical |
482,007,221 | flutter | AnimatedIcon textDirection has no effect on icon directionality | I set textDirection to TextDirection.rtl, but this makes no difference compared to setting it to TextDirection.ltr.
flutter doctor -v:
```
[✓] Flutter (Channel stable, v1.7.8+hotfix.4, on Linux, locale en_US.UTF-8)
• Flutter version 1.7.8+hotfix.4 at /home/abed/.local/flutter
• Framework revision 20e59316b8 (4 weeks ago), 2019-07-18 20:04:33 -0700
• Engine revision fee001c93f
• Dart version 2.4.0
[✓] Android toolchain - develop for Android devices (Android SDK version 29.0.2)
• Android SDK at /home/abed/Android/Sdk
• Android NDK location not configured (optional; useful for native profiling support)
• Platform android-29, build-tools 29.0.2
• Java binary at: /usr/bin/java
• Java version OpenJDK Runtime Environment (build 1.8.0_212-8u212-b03-0ubuntu1.18.04.1-b03)
• All Android licenses accepted.
[!] Android Studio (not installed)
• Android Studio not found; download from https://developer.android.com/studio/index.html
(or visit https://flutter.dev/setup/#android-setup for detailed instructions).
[✓] IntelliJ IDEA Community Edition (version 2019.2)
• IntelliJ at /opt/idea-IC
• Flutter plugin version 38.2.4
• Dart plugin version 192.6459
[✓] VS Code (version 1.36.1)
• VS Code at /usr/share/code
• Flutter extension version 3.2.0
[✓] Connected device (1 available)
• Android SDK built for x86 • emulator-5554 • android-x86 • Android 9 (API 28) (emulator)
! Doctor found issues in 1 category.
``` | framework,f: material design,a: quality,a: typography,has reproducible steps,P2,found in release: 3.7,found in release: 3.10,team-design,triaged-design | low | Minor |
482,007,607 | react | Refactor ProfilerContext to use reducer instead of multi-state | The `ProfilerContext` is currently comprised of several pieces of related state, each managed with `useState`. This necessitates awkward checks like [this](https://github.com/bvaughn/react-devtools-experimental/blob/4697f5b37967b85b2c844044aeebb5b1a740875d/src/devtools/views/Profiler/ProfilerContext.js#L126-L131) or even worse like [this](https://github.com/bvaughn/react-devtools-experimental/blob/4697f5b37967b85b2c844044aeebb5b1a740875d/src/devtools/views/Profiler/SnapshotSelector.js#L62-L73) or [this](https://github.com/bvaughn/react-devtools-experimental/blob/source/src/devtools/views/Profiler/Profiler.js#L71-L83).
This context should be refactored to use a single reducer (`useReducer`) like `TreeContext`. This is a bit more involved at the moment because of suspense and the `ProfilerContext` being higher level than the suspense cache. Although maybe we could work around this by using some sort of [subscription](https://github.com/bvaughn/react-devtools-experimental/blob/4697f5b37967b85b2c844044aeebb5b1a740875d/src/devtools/views/Profiler/ProfilerContext.js#L118-L124)?
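Sketched in Python with invented action and field names (the real code would be a JavaScript `useReducer` reducer), the point is that one reducer lets related pieces of state update atomically instead of requiring cross-checks between independent `useState` values:

```python
def profiler_reducer(state, action):
    # Selecting a commit resets the selected fiber in the same step,
    # so the two pieces of state can never get out of sync.
    kind = action["type"]
    if kind == "SELECT_COMMIT":
        return {**state, "commit_index": action["index"], "selected_fiber": None}
    if kind == "SELECT_FIBER":
        return {**state, "selected_fiber": action["id"]}
    raise ValueError("unknown action: " + kind)
```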
See related issues like #16441 and commit [4697f5b](https://github.com/bvaughn/react-devtools-experimental/commit/4697f5b37967b85b2c844044aeebb5b1a740875d). | Type: Enhancement,Component: Developer Tools,React Core Team | medium | Minor |
482,009,435 | godot | Path3D Curve's points and control points are indistinguishable |
**Godot version:** 3.1.1 stable
**Issue description:**
In the editor, the points in a curve and their respective control points (handles) are very similar (i.e. both are orange circles with a white border). After adding a lot of points, it's very difficult to identify the exact path without zooming in and out. If the level has a lot of props, it's very difficult to trace the path.
Any differentiation between those points (actual and control) would be helpful:
1) Color
2) Border/Outline
3) Shape
e.g. Inkscape uses shapes to distinguish them (and colors to indicate the active selection).
**Screenshot:**

**Steps to reproduce:**
1. Open a project
2. In a 3D scene add a Path node and add a lot of points to it
3. Try dragging control points out from many points(left-click & drag from a point by pressing Shift key)
4. Then try to distinguish the actual points from the control points
**Minimal reproduction project:**
[PathCurveIssue.zip](https://github.com/godotengine/godot/files/3513213/PathCurveIssue.zip)
| enhancement,topic:editor,usability,topic:3d | low | Critical |
482,013,059 | godot | Using `HashMap::get` with a null value crashes the engine when the key type is `const char *` | Godot 3.2 master
It crashes because it then uses `hash_djb2`, which doesn't check for `NULL`.
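The missing guard can be sketched in a few lines (an illustrative Python model of the C++ hash, not Godot's actual code; `None` stands in for a null `const char *`):

```python
def hash_djb2(s):
    """djb2 string hash with an explicit null guard.

    The engine's hash_djb2 dereferences the pointer unconditionally,
    which is what crashes when HashMap::get receives a null key.
    """
    if s is None:  # the check the C++ implementation lacks
        return 0
    h = 5381
    for ch in s:
        h = ((h << 5) + h + ord(ch)) & 0xFFFFFFFF  # h = h * 33 + c
    return h
```

With a guard like this, a null key would hash to a fixed bucket instead of crashing; an alternative fix would be to reject null keys in `HashMap::get` before hashing at all.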
https://github.com/godotengine/godot/blob/ef37f00525643e391e19b79f84fc6fd15762b3be/core/hashfuncs.h#L50-L60 | bug,topic:core,confirmed,crash | low | Critical |
482,013,264 | godot | FileAccessMemory::eof_reached() is likely off by one | Godot 3.2 master
Found out `eof_reached` is implemented this way in `FileAccessMemory`:
https://github.com/godotengine/godot/blob/ef37f00525643e391e19b79f84fc6fd15762b3be/core/io/file_access_memory.cpp#L136-L139
Which is off by one. EOF should trigger at `length`, not one after `length`.
It is further confirmed by this other function, which does check `index == length`:
https://github.com/godotengine/godot/blob/ef37f00525643e391e19b79f84fc6fd15762b3be/core/io/file_access_memory.cpp#L169-L172
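The off-by-one can be modeled in a few lines (a hypothetical sketch, not Godot code): after reading every byte, `pos == length`, so the `>` comparison still reports "not EOF".

```python
class MemoryFile:
    """Minimal model of FileAccessMemory's position/length bookkeeping."""

    def __init__(self, data):
        self.data = data
        self.pos = 0

    def eof_reached_current(self):
        # mirrors the current implementation: fires one byte too late
        return self.pos > len(self.data)

    def eof_reached_fixed(self):
        # EOF should trigger as soon as pos == length
        return self.pos >= len(self.data)
```

Reading all three bytes of `b"abc"` leaves `pos` at 3: the current check says EOF has not been reached, while the fixed one correctly says it has.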
This hasn't caused any bugs yet because this class isn't used much, and it is mostly used with `get_line()`, which checks for string null-termination and therefore doesn't hit a buffer overrun. | bug,topic:core,confirmed | low | Critical |
482,013,945 | rust | Transition rustc Parser to proc_macro token model | Currently, there are two different approaches for dealing with composite tokens like `>>` in rustc.
1. Keep tokens in composed form, and split into pieces, `>` and `>`, when necessary.
2. Keep tokens decomposed, with jointness information, and join tokens when necessary.
At the moment, the first approach is used by the parser, and the second approach is used by the proc_macro API. It would be good to move the parser to the decomposed approach as well, as it is somewhat more natural, more future-compatible (one can introduce new tokens) and having two of a thing is bad in itself!
Here are some relevant bits of the code that handle composed model:
* Composed tokens as produced by [rustc_lexer](https://github.com/rust-lang/rust/blob/71e2882973e63b9ddc837a61ac8631e6451d31a9/src/librustc_lexer/src/lib.rs#L271-L281)
* Composed tokens preserved by the [token cooking](https://github.com/rust-lang/rust/blob/71e2882973e63b9ddc837a61ac8631e6451d31a9/src/libsyntax/parse/lexer/mod.rs#L306)
* Here's the bit when we produce [a TokenTree](https://github.com/rust-lang/rust/blob/71e2882973e63b9ddc837a61ac8631e6451d31a9/src/libsyntax/parse/lexer/tokentrees.rs#L207-L210), consumed by the parser. Note that, although we are tracking jointness here, the tokens are composed.
* Here's the bit of the parser which [decomposes](https://github.com/rust-lang/rust/blob/71e2882973e63b9ddc837a61ac8631e6451d31a9/src/libsyntax/parse/parser.rs#L700-L736) tokens on the fly.
Here are the bits relevant to decomposed model:
* Gluing tokens in [TokenStreamBuilder](https://github.com/rust-lang/rust/blob/71e2882973e63b9ddc837a61ac8631e6451d31a9/src/libsyntax/tokenstream.rs#L412-L429)
* [Token::glue](https://github.com/rust-lang/rust/blob/71e2882973e63b9ddc837a61ac8631e6451d31a9/src/libsyntax/parse/token.rs#L554-L612)
Note that the `tt` matcher in `macro_rules` eats one composed token, and this affects the language specification.
That is, when we transition to the decomposed model, we'll need to fix [this code](https://github.com/rust-lang/rust/blob/71e2882973e63b9ddc837a61ac8631e6451d31a9/src/libsyntax/ext/tt/macro_parser.rs#L903-L905) to eat one *composed* token to maintain backwards compatibility.
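The decomposed model can be illustrated with a toy gluing pass (a hypothetical sketch, not rustc's actual token representation): each token carries a jointness flag, and composed tokens like `>>` only come into existence by gluing two adjacent joint `>` tokens.

```python
COMPOSABLE = {(">", ">"): ">>", ("<", "<"): "<<", ("=", "="): "=="}

def glue(tokens):
    """tokens: list of (text, is_joint_with_next) pairs."""
    out = []
    for text, joint in tokens:
        # only glue when the previous token is marked joint
        if out and out[-1][1] and (out[-1][0], text) in COMPOSABLE:
            prev, _ = out.pop()
            out.append((COMPOSABLE[(prev, text)], joint))
        else:
            out.append((text, joint))
    return [text for text, _ in out]
```

In `Vec<Vec<u8>>` the two closing `>` tokens are joint and may be glued for a `tt` matcher, whereas in `a > > b` they are separate and must stay decomposed.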
| C-cleanup,A-parser,T-compiler | low | Major |
482,014,580 | storybook | Addon-docs: Allow linking within mdx with addon-links | **Is your feature request related to a problem? Please describe.**
I can't seem to get linking with `addon-links` to work within an mdx file.
Neither of these options worked for me:

**Describe the solution you'd like**
Allow linking to sections, stories, and when implemented, articles, within an mdx file.
**Are you able to assist bring the feature to reality?**
No
| feature request,addon: docs,mdx | medium | Major |
482,015,269 | flutter | Support JIT mode in production/release builds. | ## Use case
Using flutter in environments where AOT isn't needed, like kiosk or desktop environments.
For certain use cases AOT adds little benefit, and can even impede performance (the JIT mode apparently can perform better).
## Proposal
Add the ability to build a 'release' mode of the flutter engine that only supports JIT. This would be basically the debug build without debugging features (observatory, etc) that might affect security and performance. | c: new feature,tool,t: gradle,P3,team-tool,triaged-tool | low | Critical |
482,019,104 | opencv | VNG demosaicing does not support 16U or Neon | ##### System information (version)
- OpenCV => 4.1.0
- Operating System / Platform => Ubuntu 16.04 64 Bit
- Compiler => GCC 5.4.0
##### Detailed description
VNG debayering options [do not support 16U images](https://github.com/opencv/opencv/blob/19a4b5137149e43fa85bef16707c40b535d195b4/modules/imgproc/src/demosaicing.cpp#L1719). The documentation is not clear about this. It would be great to see VNG support for 16U images. It would also be great to see adoption of universal intrinsics for arm SIMD support.
##### Steps to reproduce
Attempt to debayer/demosaic a 16U image with VNG:
```
terminate called after throwing an instance of 'cv::Exception'
what(): OpenCV(4.1.0) /path/to/opencv/modules/imgproc/src/demosaicing.cpp:1719: error: (-215:Assertion failed) depth == CV_8U in function 'demosaicing'
Aborted (core dumped)
``` | feature,category: imgproc,priority: low | low | Critical |
482,039,751 | terminal | Feature Request: ⬇️ button to appear when scrolled many pages up | # Description of the new feature/enhancement
Sometimes people using the Terminal need to scroll very far up, and read a lot of previous output. They then want to return immediately all the way to the bottom of the terminal, to see the latest output.
**Currently**: users scroll a lot to get to the bottom of the terminal.
**Proposed**: when scrolled more than X lines up, a ⬇️ button should appear that scrolls to UI all the way down to the latest output. I'll leave the UI discussion up to you.
| Issue-Feature,Area-UserInterface,Area-Extensibility,Product-Terminal | low | Minor |
482,041,701 | pytorch | Problematic handling of NaN and inf in grid_sample, causing segfaults, corrupted CUDA memory, and incorrect results | _This issue is an expansion of the issue reported in https://github.com/pytorch/pytorch/issues/19826.
The discussion there diagnoses the segfault that occurs in the vectorized 2D CPU kernel. This issue covers the wider problematic handling of `NaN` and `inf` in all versions of `grid_sample` kernels. For details on `inf`, see the comment below._
### Summary
The `grid_sample` function does not have proper handling of `NaN` values for in its grid input.
The 2D CPU version segfaults under certain conditions and parameters, as described in https://github.com/pytorch/pytorch/issues/19826, and with simplified examples below.
The other `grid_sample` kernels (3D CPU, and 2D/3D CUDA) do not segfault, but produce incorrect results under certain conditions when the grid contains a `NaN` value.
Proper handling would place a `NaN` in the output for every grid location that has a `NaN`.
### Segmentation fault in the CPU 2D kernel
This is covered and diagnosed by @SsnL at https://github.com/pytorch/pytorch/issues/19826, but I want to provide a simple example to reproduce the segfault behavior, and expand on the exact conditions in which it occurs.
- Here is a simple example to reproduce the segmentation fault:
```python
>>> image = torch.rand(1, 1, 3, 3, device='cpu')
>>> grid = torch.rand(1, 3, 3, 2, device='cpu')
>>> grid[:,1,1,0] = float('nan')
>>> torch.nn.functional.grid_sample(image, grid, padding_mode='border')
Segmentation fault (core dumped)
```
- This segfault does not, however, happen if both components of a grid point are `NaN`.
Example:
```python
>>> image = torch.rand(1, 1, 3, 3, device='cpu')
>>> grid = torch.rand(1, 3, 3, 2, device='cpu')
>>> grid[:,1,1,:] = float('nan')
>>> torch.nn.functional.grid_sample(image, grid, padding_mode='border')
tensor([[[[0.2587, 0.1807, 0.2114],
[0.1993, nan, 0.2673],
[0.2065, 0.1258, 0.2002]]]])
```
which is, in fact, the correct and desired behavior.
- The segfault occurs for padding modes `border` and `reflection`, but not for `zeros` (where it works correctly).
### The CUDA kernels (both 2D and 3D)
The CUDA kernel does not segfault. However, in `border` padding mode, it produces an incorrect result as if the `NaN` value were a `-1`.
Example:
```python
>>> image = torch.arange(9, 0, -1, dtype=torch.float, device='cuda').view(1,1,3,3)
tensor([[[[9., 8., 7.],
[6., 5., 4.],
[3., 2., 1.]]]], device='cuda:0')
# set grid to identity. Note: for old versions, drop the align_corners option
>>> grid = torch.nn.functional.affine_grid(torch.tensor([[[1.,0.,0.],[0.,1.,0.]]], device='cuda'), (1,1,3,3), align_corners=True)
>>> grid[:,1,1,0] = float('nan') # set the x-coordinate of the central grid point to NaN
>>> torch.nn.functional.grid_sample(image, grid, padding_mode='border', align_corners=True)
tensor([[[[9., 8., 7.],
[6., 6., 4.],
[3., 2., 1.]]]], device='cuda:0')
>>> grid[:,1,1,:] = float('nan') # set both coordinates of the central grid point to NaN
>>> torch.nn.functional.grid_sample(image, grid, padding_mode='border', align_corners=True)
tensor([[[[9., 8., 7.],
[6., 9., 4.],
[3., 2., 1.]]]], device='cuda:0')
```
Notice the result at the central output pixel. It behaves as if the `NaN` values of the grid were actually `-1`. Unlike the `border` padding mode, however, the `zeros` and `reflection` modes work correctly (produce a `NaN` in that pixel).
### The 3D CPU kernel
- The 3D CPU implementation also does not segfault, but unlike in the CUDA version, a `NaN` value is effectively treated as if it were a `+1` under the `border` padding mode.
```python
>>> image = torch.arange(27, 0, -1, dtype=torch.float, device='cpu').view(1,1,3,3,3)
tensor([[[[[27., 26., 25.],
[24., 23., 22.],
[21., 20., 19.]],
[[18., 17., 16.],
[15., 14., 13.],
[12., 11., 10.]],
[[ 9., 8., 7.],
[ 6., 5., 4.],
[ 3., 2., 1.]]]]])
# set grid to identity. Note: for old versions, drop the align_corners option
>>> grid = torch.nn.functional.affine_grid(torch.tensor([[[1.,0.,0.,0.],[0.,1.,0.,0.],[0.,0.,1.,0.]]], device='cpu'), (1,1,3,3,3), align_corners=True)
>>> grid[:,1,1,1,0] = float('nan') # set the x-coordinate of the central grid point to NaN
>>> torch.nn.functional.grid_sample(image, grid, padding_mode='border', align_corners=True)
tensor([[[[[27., 26., 25.],
[24., 23., 22.],
[21., 20., 19.]],
[[18., 17., 16.],
[15., 13., 13.],
[12., 11., 10.]],
[[ 9., 8., 7.],
[ 6., 5., 4.],
[ 3., 2., 1.]]]]])
>>> grid[:,1,1,1,:] = float('nan') # set all 3 coordinates of the central grid point to NaN
>>> torch.nn.functional.grid_sample(image, grid, padding_mode='border', align_corners=True)
tensor([[[[[27., 26., 25.],
[24., 23., 22.],
[21., 20., 19.]],
[[18., 17., 16.],
[15., 1., 13.],
[12., 11., 10.]],
[[ 9., 8., 7.],
[ 6., 5., 4.],
[ 3., 2., 1.]]]]])
```
Notice the result in the central output pixel is `13.` in the first case (as if the grid there is `[0,0,+1]`), and `1.` in the second case (as if the grid is `[+1,+1,+1]`). As mentioned above, the same thing on CUDA results in a `15.` and a `27.` (as if the grid at that point were `[0,0,-1]` and `[-1,-1,-1]`, respectively).
That output pixel should just be a `NaN`.
- The `zeros` and `reflection` padding modes on 3D CPU always produce a `0.` result in the output wherever there is a `NaN` in the grid. I am not yet sure how to explain this last one.
### Desired behavior
Every pixel in the output for which the corresponding grid point has a `NaN` in one of its components should come out to be a `NaN`.
An alternative behavior (not advocating for this - I'm just presenting it as an option, for completeness) is to fill in a border value wherever there is a `NaN`. For the `zero` padding mode, this would fill in a `0.`. For the `border` and `reflection` padding modes, it's not clear how this would work.
In any case, the behaviors should be standardized across the different kernels.
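The desired rule can be captured as a tiny reference predicate (plain Python, no torch, so it stays self-contained) that a test could evaluate against each kernel's output:

```python
import math

def must_be_nan(grid_point):
    """True if the output pixel for this grid point must be NaN:
    a NaN in any coordinate of the grid point propagates to the output."""
    return any(math.isnan(c) for c in grid_point)
```

Mapping this over the grid yields the exact mask of output pixels every kernel (2D/3D, CPU/CUDA, all padding modes) should fill with `NaN`.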
### (Partial) Diagnoses
The 2D CPU segfault issue is diagnosed in https://github.com/pytorch/pytorch/issues/19826, and I think I have a decent idea of what's going on in the CUDA and 3D CPU `border` mode. The 3D CPU `zeros` and `reflection` modes might need another look at the code to diagnose.
I thought it would be good to write out all these cases explicitly, as I did above, since it helps for reproducing these errors and fixing them.
Once they're fixed, this will also be a good list of test cases to verify that the issues are indeed fixed.
cc @ezyang @gchanan @zou3519 @bdhirsh @jbschlosser @anjali411 @ngimel | high priority,module: crash,module: cuda,triaged,module: interpolation | low | Critical |
482,045,079 | go | x/image/tiff: sony .arw files decode as a 0x0 image.Gray | ### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.13beta1 windows/amd64
</pre>
### Does this issue reproduce with the latest release?
yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
set GO111MODULE=
set GOARCH=amd64
set GOBIN=
set GOCACHE=C:\Users\Paolo\AppData\Local\go-build
set GOENV=C:\Users\Paolo\AppData\Roaming\go\env
set GOEXE=.exe
set GOFLAGS=
set GOHOSTARCH=amd64
set GOHOSTOS=windows
set GONOPROXY=
set GONOSUMDB=
set GOOS=windows
set GOPATH=C:\Users\Paolo\go
set GOPRIVATE=
set GOPROXY=https://proxy.golang.org,direct
set GOROOT=c:\go
set GOSUMDB=sum.golang.org
set GOTMPDIR=
set GOTOOLDIR=c:\go\pkg\tool\windows_amd64
set GCCGO=gccgo
set AR=ar
set CC=gcc
set CXX=g++
set CGO_ENABLED=1
set GOMOD=C:\Users\Paolo\Desktop\test\go.mod
set CGO_CFLAGS=-g -O2
set CGO_CPPFLAGS=
set CGO_CXXFLAGS=-g -O2
set CGO_FFLAGS=-g -O2
set CGO_LDFLAGS=-g -O2
set PKG_CONFIG=pkg-config
set GOGCCFLAGS=-m64 -mthreads -fmessage-length=0 -fdebug-prefix-map=C:\Users\Paolo\AppData\Local\Temp\go-build477435338=/tmp/go-build -gno-record-gcc-switches
</pre></details>
### What did you do?
I installed the latest golang.org/x/image
I tried it on multiple .ARW files from different models but the issue occurs with all of them.
Sample image:
[_DSC4438.zip](https://github.com/golang/go/files/3513625/_DSC4438.zip)
```go
package main
import (
"fmt"
"image"
"os"
_ "golang.org/x/image/tiff"
)
func main() {
f, err := os.Open("_DSC4438.ARW")
if err != nil {
panic(err)
}
img, format, err := image.Decode(f)
f.Close()
if err != nil {
panic(err)
}
fmt.Printf("format: %s image: %#v width: %d height: %d", format, img, img.Bounds().Dx(), img.Bounds().Dy())
}
```
Result of running the code:
```
format: tiff image: &image.Gray{Pix:[]uint8{}, Stride:0, Rect:image.Rectangle{Min:image.Point{X:0, Y:0}, Max:image.Point{X:0, Y:0}}} width: 0 height: 0
```
### What did you expect to see?
I expected to get an error when trying to decode the Sony .ARW raw image file, since Go doesn't know how to decode it.
### What did you see instead?
The image decode function returned no errors, and the image is incorrectly decoded as a 0 pixel image:
<pre>
&image.Gray{Pix:[]uint8{}, Stride:0, Rect:image.Rectangle{Min:image.Point{X:0, Y:0}, Max:image.Point{X:0, Y:0}}}
</pre>
The issue seems to have been introduced by https://github.com/golang/image/commit/7e034cad644213bc79b336b52fce73624259aeca since https://github.com/golang/image/commit/92942e4437e2b065806587df0f5d8afa565a8567 instead returns the following error when trying to decode the raw image:
<pre>
tiff: invalid format: BitsPerSample tag missing
</pre> | NeedsInvestigation | low | Critical |
482,047,190 | pytorch | Transformer Lack of Embedding Layer and Positional Encodings | The Transformer implementation docs (https://pytorch.org/docs/stable/nn.html?highlight=transformer#torch.nn.Transformer) state that they implement the original paper but fail to acknowledge that they don’t implement the following:
* Layer Norm as the default normalization option.
* Positional Encodings
* Embeddings before the encoding and decoding step
It’s fine that these are all not implemented directly in the module but making it more clear that they aren’t and were in the original paper would be helpful.
cc @ezyang @gchanan @zou3519 @bdhirsh @jbschlosser @brianjo @mruberry @albanD @walterddr @bhosmer @cpuhrsch @anjali411 @zhangguanheng66 @jlin27 | high priority,module: docs,feature,module: nn,triaged,needs design | medium | Critical |
482,063,287 | flutter | Convert existing Objective-C TUs to use ARC. | There is an unfortunate and confusing mix of TU's in the engine that use ARC and those that don't. Migration to ARC was not done on all TU's because of utility classes in FML that that were not ARC ready. We should just migrate to ARC fully. There are too many Objective-C TUs in the engine so the migration should not be super painful. | engine,P2,team-engine,triaged-engine | low | Minor |
482,083,671 | pytorch | subprocess.CalledProcessError: Compile source in NVIDIA TX2 | ## ❓ Questions and Help
### I got an error when compiling from source: after entering `python3 setup.py build`, the error below was displayed.
- **Environment:** JetPack 3.1, CUDA 8.0, cuDNN 6.0.1, Python 3.5
- Here is the full log:
```
Building wheel torch-1.3.0a0+354ecc4
-- Building version 1.3.0a0+354ecc4
cmake -GNinja -DBUILD_PYTHON=True -DBUILD_TEST=True -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/home/nvidia/usb_disk/pytorch/torch -DCMAKE_PREFIX_PATH=/usr/lib/python3/dist-packages -DNUMPY_INCLUDE_DIR=/usr/lib/python3/dist-packages/numpy/core/include -DPYTHON_EXECUTABLE=/usr/bin/python3 -DPYTHON_INCLUDE_DIR=/usr/include/python3.5m -DPYTHON_LIBRARY=/usr/lib/libpython3.5m.so.1.0 -DTORCH_BUILD_VERSION=1.3.0a0+354ecc4 -DUSE_CUDA=True -DUSE_DISTRIBUTED=True -DUSE_NUMPY=True /home/nvidia/usb_disk/pytorch
-- Performing Test COMPILER_WORKS
-- Performing Test COMPILER_WORKS - Failed
CMake Error at cmake/MiscCheck.cmake:52 (message):
  Could not run a simple program built with your compiler.  If you are trying
  to use -fsanitize=address, make sure libasan is properly installed on your
  system (you can confirm if the problem is this by attempting to build and
  run a small program.)
Call Stack (most recent call first):
  CMakeLists.txt:294 (include)
-- Configuring incomplete, errors occurred!
See also "/home/nvidia/usb_disk/pytorch/build/CMakeFiles/CMakeOutput.log".
See also "/home/nvidia/usb_disk/pytorch/build/CMakeFiles/CMakeError.log".
Traceback (most recent call last):
  File "setup.py", line 756, in <module>
    build_deps()
  File "setup.py", line 321, in build_deps
    cmake=cmake)
  File "/home/nvidia/usb_disk/pytorch/tools/build_pytorch_libs.py", line 60, in build_caffe2
    rerun_cmake)
  File "/home/nvidia/usb_disk/pytorch/tools/setup_helpers/cmake.py", line 314, in generate
    self.run(args, env=my_env)
  File "/home/nvidia/usb_disk/pytorch/tools/setup_helpers/cmake.py", line 143, in run
    check_call(command, cwd=self.build_dir, env=env)
  File "/usr/lib/python3.5/subprocess.py", line 581, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '-GNinja', '-DBUILD_PYTHON=True', '-DBUILD_TEST=True', '-DCMAKE_BUILD_TYPE=Release', '-DCMAKE_INSTALL_PREFIX=/home/nvidia/usb_disk/pytorch/torch', '-DCMAKE_PREFIX_PATH=/usr/lib/python3/dist-packages', '-DNUMPY_INCLUDE_DIR=/usr/lib/python3/dist-packages/numpy/core/include', '-DPYTHON_EXECUTABLE=/usr/bin/python3', '-DPYTHON_INCLUDE_DIR=/usr/include/python3.5m', '-DPYTHON_LIBRARY=/usr/lib/libpython3.5m.so.1.0', '-DTORCH_BUILD_VERSION=1.3.0a0+354ecc4', '-DUSE_CUDA=True', '-DUSE_DISTRIBUTED=True', '-DUSE_NUMPY=True', '/home/nvidia/usb_disk/pytorch']' returned non-zero exit status 1
```
| module: build,module: cuda,triaged | low | Critical |
482,091,388 | scrcpy | REQUEST: can you please add virtual navigation | can you please add an option for virtual navigation ?? i'm using gesture navigation mode on my MIUI 10 .. it's very hard to draw those gestures with mouse or touch-pad and i don't want to switch from full-screen mode on my phone. thanks in advance :) | feature request | low | Major |
482,094,144 | pytorch | PyTorch 1.2 'module' object has no attribute 'BFloat16StorageBase' | I installed PyTorch 1.2 from pip:
`pip2 install torch torchvision --user`
Then when I import torch, it got the following error:
```
Python 2.7.12 (default, Nov 12 2018, 14:36:49)
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/in4ight/.local/lib/python2.7/site-packages/torch/__init__.py", line 228, in <module>
    class BFloat16Storage(_C.BFloat16StorageBase, _StorageBase):
AttributeError: 'module' object has no attribute 'BFloat16StorageBase'
```
| triaged,module: undefined reference,module: vision | low | Critical |
482,097,134 | pytorch | NetworkX's Version | ## 📚 Documentation
I wonder if this is still the case:
https://github.com/pytorch/pytorch/blob/dfdb86a59577d8b0fc4565988a8ac01b5ecd339f/docker/caffe2/jenkins/common/install_python.sh#L143-L146
I tried the latest networkx with memonger and haven't hit an issue so far. | caffe2 | low | Minor |
482,161,481 | pytorch | Gloo scatter gives wrong result for stride != 1 | ## 🐛 Bug
`torch.distributed.scatter` on Gloo backend gives wrong result if input tensor stride != 1.
## To Reproduce
On worker 0:
```python
import torch.distributed
torch.distributed.init_process_group('gloo', world_size=2, rank=0, init_method='file:///tmp/pt-dist')
out = torch.zeros([3])
torch.distributed.scatter(out, [torch.linspace(0, 5, 6)[::2]]*2, 0)
print(out) # tensor([0., 1., 2.])
```
On worker 1:
```python
import torch.distributed
torch.distributed.init_process_group('gloo', world_size=2, rank=1, init_method='file:///tmp/pt-dist')
out = torch.zeros([3])
torch.distributed.scatter(out, [], 0)
print(out) # tensor([0., 1., 2.])
```
## Expected behavior
```
tensor([0., 2., 4.])
```
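The symptom is consistent with the Gloo backend copying the input buffer as if it were contiguous, ignoring the stride: a stride-unaware copy of 3 elements from the storage of `linspace(0, 5, 6)[::2]` yields exactly the wrong answer observed above. A dependency-free sketch of the two behaviors (a hypothetical model, not Gloo's code):

```python
def stride_aware(storage, n, stride):
    """What scatter should send: the strided view's elements."""
    return storage[: n * stride : stride]

def stride_unaware(storage, n, stride):
    """What a flat memcpy-style copy sends: the first n storage elements."""
    return storage[:n]
```

Until this is fixed, calling `.contiguous()` on each scatter input is a likely workaround (assumption: the bug is only in how the raw buffer is read, not in the collective itself).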
## Environment
PyTorch version: 1.2.0
Is debug build: No
CUDA used to build PyTorch: 10.0.130
OS: Ubuntu 18.04.3 LTS
GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
CMake version: version 3.10.2
Python version: 3.6
Is CUDA available: No
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Versions of relevant libraries:
[pip3] numpy==1.16.4
[pip3] torch==1.2.0
[pip3] torchvision==0.4.0
[conda] Could not collect
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera | oncall: distributed,module: bootcamp,triaged | low | Critical |
482,190,450 | vscode | [css] code completion for gradient functions | There are no code completions when editing the `linear-gradient` or `radial-gradient` functions of the CSS `background-image` property in VSCode.
- VSCode Version: 1.36.1
- OS Version: Windows 10
Steps to reproduce:
1. Open a CSS file
2. Edit the `background-image` property and write the `linear-gradient` function.
3. Write code inside the parentheses: it is only syntax-highlighted, with no completion hints
Does this issue occur when all extensions are disabled?: Yes

| feature-request,css-less-scss | low | Minor |
482,246,633 | go | proposal: encoding/json: opt-in for true streaming support | # Overview
I have long wanted proper streaming support in the `encoding/json` library. I’ve been doing some homework to understand the current state of things, and I think I’ve come to grips with most of it.
A number of previous issues relate to this topic: https://github.com/golang/go/issues/7872, https://github.com/golang/go/issues/11046, https://github.com/golang/go/issues/12001, https://github.com/golang/go/issues/14140
In a nutshell: The library implicitly guarantees that marshaling will never write an incomplete JSON object due to an error, and that during unmarshaling, it will never pass an incomplete JSON message to `UnmarshalJSON`, and this seems a reasonable, conservative default, but is not always the desired behavior.
Work toward this has been done on a couple of occasions, but abandoned or stalled for various reasons. See https://go-review.googlesource.com/c/go/+/13818/ and https://go-review.googlesource.com/c/go/+/135595
See also my related post on golang-nuts: https://groups.google.com/d/msg/golang-nuts/ABD4fTkP4Nc/bliIAAAeAQAJ
# The problem to be solved
Dealing with large JSON structures is inefficient, due to the internal buffering done by `encoding/json`. `json.NewEncoder` and `json.NewDecoder` appear to offer streaming benefits, but this is mostly an idiomatic advantage, not a performance one, as internal buffering still takes place.
To elaborate:
When encoding, even with `json.Encoder`, the entire object is marshaled into memory, before it is written to the `io.Writer`. This proposal allows writing the JSON output immediately, rather than waiting for the entire process to complete successfully first.
The same problem occurs in reverse--when reading a large JSON object: you cannot begin processing the result until the entire result is received.
# A naïve solution
I believe a simple solution (simple from the perspective of a consumer of the library--the internal changes are not so simple) would be to add two interfaces:
```go
type StreamMarshaler interface {
    MarshalJSONStream(io.Writer) error
}

type StreamUnmarshaler interface {
    UnmarshalJSONStream(io.Reader) error
}
```
During (un)marshaling, where `encoding/json` looks for `json.Marshaler` and `json.Unmarshaler` respectively, it will now look for (and possibly prefer) the new interfaces instead. Wrapping either the old or new interfaces to work as the other is a trivial matter.
With this change, and the requisite internal changes, it would be possible to begin streaming large JSON data to a server immediately, from within a `MarshalJSONStream()` implementation, for instance.
The drawback is that it violates the above mentioned promise of complete reads and writes, even with errors.
# Making it Opt-in
To accommodate this requirement, I believe it would be possible to expose the streaming functionality _only_ with the `json.Encoder` and `json.Decoder` implementations, and only when `SetDirect*` (name TBD, borrowed from https://go-review.googlesource.com/c/go/+/135595/8/src/encoding/json/stream.go#283) is enabled. So further, the following two functions would be added to the public API:
```go
func (*Encoder) SetDirectWrite()
func (*Decoder) SetDirectRead()
```
The default behavior, even when a type implements one of the new `Stream*` interfaces, will be to operate on an entire JSON object at once. That is to say, the Encoder will internally buffer `MarshalJSONStream`'s output, and process any error before continuing, and a decoder will read an entire JSON object into a buffer, then pass it to `UnmarshalJSONStream` only if there are no errors.
However, when `SetDirect*` is enabled, the library will bypass this internal buffering, allowing for immediate streaming to/from the source/destination.
Enabling streaming with the `SetDirect*` toggle could be enough to already experience a benefit for many users, even without the use of the additional interfaces above.
Toggling `SetDirect*` on will, of course, enable streaming for all types, not just those which implement the new interface above, so this could be considered a separate part of the proposal. In my opinion, this alone would be worth implementing, even if the new interface types above are done later or never.
# Internals
CLs 13818 and 135595 can serve as informative for this part of the discussion. I've also done some digging in the `encoding/json` package (as of 1.12) recently, for more current context.
A large number of internal changes will be necessary to allow for this. I started playing around with a few internals, and I believe this is doable, but will mean a lot of code churn, so will need to be done carefully, in small steps with good code review.
As an exercise, I have successfully rewritten `indent()` to work with streams, rather than on byte slices, and began doing the same with `compact()`. The `encodeState` type would need to work with a standard `io.Writer` rather than specifically a `bytes.Buffer`. This seems to be a bigger change, but not technically difficult. I know there are other changes needed--I haven't done a complete audit of the code.
An open question is how these changes might impact performance. My benchmarks after changing `indent()` showed no change in performance, but it wasn't a particularly rigorous test.
With the internals rewritten to support streams, it's just a matter of doing the internal buffering at the appropriate places, such as at API boundaries (i.e. in `Marshal()` and `Unmarshal()`), rather than as a built-in fundamental concept, and then, as described above, turning off that buffering when so configured.
# Final comments
To be clear, I am interested in working on this. I’m not just trying to throw out a “nice to have, now would somebody do this for me?” type of proposal. But I want to make sure I fully understand the history and context of this situation before I start too far down this rabbit hole.
I'm curious to hear the opinions of others who have been around longer. Perhaps such a proposal was already discussed (and possibly rejected?) in greater length than I can find in the above linked tickets. If so, please point me to the relevant conversation(s).
I am aware of several third-party libraries that offer some support like this, but most have various drawbacks (relying on code generation, or over-complex APIs). I would love to see this kind of support in the standard library.
If this general direction is approved, I think the first step is to break it into smaller parts that can be accomplished incrementally. I have given this thought, but so as not to jump the gun too much, will withhold my thoughts for a while, to allow proper discussion.
And one last aside: CL 13818 also added support for marshaling channels. That may or may not be a good idea (my personal feeling: probably not), but that can be addressed separately. | Proposal | medium | Critical |
482,299,954 | angular | Only partial bootstrap without warning when using `ngDoBootstrap()` and `@NgModule.bootstrap` components | # 🐞 bug report
### Is this a regression?
No, I don't think so.
### Description
Only a partial bootstrap happens when using `bootstrap` for entry components in a hybrid app. It looks like `ngDoBootstrap` is being ignored while `AppModule` contains an entry in `bootstrap`, but it also depends on how we use `bootstrapModule()`.
Is this intended behaviour? If so, please document it in the [upgrade guide](https://angular.io/guide/upgrade).
This should be fixed/simplified while keeping in mind:
- Ability to render a standalone Angular component without having to downgrade it
- Ability to render downgraded Angular component (as it is now)
## 🔬 Minimal Reproduction
### Bootstraps both frameworks, renders both AngularJS and Angular components with downgrading
```typescript
ng1Module.directive("appComponent", downgradeComponent({ component: AppComponent }));
@NgModule({
declarations: [
AppComponent
],
imports: [
BrowserModule,
UpgradeModule
],
providers: [],
entryComponents: [
AppComponent
]
})
export class AppModule {
public constructor(private upgrade: UpgradeModule) {}
public ngDoBootstrap(): void {
this.upgrade.bootstrap(document.documentElement, ["ng1Module"], { strictDi: true });
}
}
platformBrowser().bootstrapModule(AppModule);
```
### Bootstraps only Angular, only one component is being rendered
```typescript
@NgModule({
declarations: [
AppComponent
],
imports: [
BrowserModule,
UpgradeModule
],
providers: [],
bootstrap: [
AppComponent
]
})
export class AppModule {
public constructor(private upgrade: UpgradeModule) { }
public ngDoBootstrap(): void {
// Ignored?
this.upgrade.bootstrap(document.documentElement, ["ng1Module"], { strictDi: true });
}
}
platformBrowser().bootstrapModule(AppModule);
```
### Bootstraps both frameworks, renders both AngularJS and Angular components without downgrading
```typescript
@NgModule({
declarations: [
AppComponent
],
imports: [
BrowserModule,
UpgradeModule
],
providers: [],
bootstrap: [
AppComponent
]
})
export class AppModule {
public constructor(private upgrade: UpgradeModule) { }
public ngDoBootstrap(): void {
// Obsolete because its being ignored?
this.upgrade.bootstrap(document.documentElement, ["ng1Module"], { strictDi: true });
}
}
platformBrowser().bootstrapModule(AppModule).then((platformRef) => {
const upgrade = platformRef.injector.get(UpgradeModule);
upgrade.bootstrap(document.documentElement, ["ng1Module"], { strictDi: true });
});
```
## 🌍 Your Environment
**Angular Version:**
<pre><code>
@angular/[email protected]
@angular/[email protected]
@angular/[email protected]
@angular/[email protected]
@angular/[email protected]
@angular/[email protected]
@angular/[email protected]
@angular/[email protected]
@angular/[email protected]
</code></pre>
| hotlist: error messages,area: core,state: confirmed,P4 | low | Critical |
482,306,949 | terminal | Figure out what to do with keybindings whose characters could be triggered by multiple VKs | I tried to map ctrl+shift+| for a vertical split, but the keybinding changed to \\ (everything still works correctly). Additionally, I have a second backslash key between the left shift and z keys, and using it for this key combination doesn't work.
# Environment
Windows build number: 10.0.18963.0
Windows Terminal version (if applicable): 0.3.2171.0
Keyboard: Logitech K280e
# Steps to reproduce
1. Add binding:
`{
"command" : "splitVertical",
"keys" :
[
"ctrl+shift+|"
]
},`
2. Save and close settings
3. Close Terminal
4. Open Terminal and settings
5. Binding looks like this:
`{
"command" : "splitVertical",
"keys" :
[
"ctrl+shift+\\"
]
},`
No matter what, during these steps the left backslash key never works.
# Expected behavior
Keep the original symbol and accept the left backslash key as a trigger.
| Issue-Feature,Help Wanted,Area-Input,Product-Terminal | low | Major |
482,307,427 | rust | Unhelpful error message E0505 for self-referencing lifetime constraint | Given the following code:
```rust
trait Handler {}
struct Easy2<H: Handler>(H);
struct Context<'r>(Option<&'r [u8]>);
impl<'r> Handler for &'r mut Context<'r> {}
fn perform<'a>(ctx: &'a mut Context<'a>) {
Easy2(ctx);
}
fn build_response(_ctx: Context) {}
fn call() {
let mut ctx = Context(None);
perform(&mut ctx);
build_response(ctx);
}
```
Rust reports:
```
error[E0505]: cannot move out of `ctx` because it is borrowed
--> src/lib.rs:15:20
|
14 | perform(&mut ctx);
| -------- borrow of `ctx` occurs here
15 | build_response(ctx);
| ^^^
| |
| move out of `ctx` occurs here
| borrow later used here
```
This is a confusing error, and it doesn't help with diagnosing where the problem is. The real fix here is to remove the lifetime annotations on `perform` and have `impl Handler` use different lifetimes for the two places, but this is totally unobvious from the error message.
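For reference, here is a sketch of that fix, with the two lifetimes decoupled so the borrow of `ctx` ends before the move (the return values are added here only so the result can be checked):

```rust
trait Handler {}
struct Easy2<H: Handler>(H);
struct Context<'r>(Option<&'r [u8]>);

// Give the mutable reference and the borrowed data independent
// lifetimes, so the borrow of `ctx` is not tied to the data it holds.
impl<'a, 'r> Handler for &'a mut Context<'r> {}

fn perform(ctx: &mut Context<'_>) {
    let _ = Easy2(ctx);
}

fn build_response(ctx: Context) -> bool {
    ctx.0.is_none()
}

fn call() -> bool {
    let mut ctx = Context(None);
    perform(&mut ctx);
    build_response(ctx) // compiles: the borrow ended with `perform`
}

fn main() {
    assert!(call());
}
```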
It might be more helpful if the compiler could point out that `ctx` is still borrowed because it forms a self-reference due to `<'a>(ctx: &'a mut Context<'a>)` and the field.
And it might also be useful to warn on `impl<'r> Handler for &'r mut Context<'r>`. I'm not sure whether a declaration like this, where a self-reference can arise, has any legitimate use cases. It might be helpful to suggest giving them two different lifetimes. | C-enhancement,A-diagnostics,A-lifetimes,A-borrow-checker,T-compiler | low | Critical |
482,316,138 | rust | Type inference fail in a closure | Let's have a look at two similar closures:
```rust
fn foo(v: &Vec<i32>) -> i32 {
let x: usize = 0;
let condition = |idx| v[idx] > 0;
if condition(x) {0} else {1}
}
fn bar(v: &Vec<(i32, i32)>) -> i32 {
let x: usize = 0;
let condition = |idx| v[idx].0 > 0;
if condition(x) {0} else {1}
}
```
The first one is successfully compiled, but the second one requires additional type annotations: [playground 4](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=348436b25088e1bf3e53c52401aff118)
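A workaround sketch for the second case: annotating the closure parameter gives the compiler enough to check the indexing.

```rust
fn bar(v: &Vec<(i32, i32)>) -> i32 {
    let x: usize = 0;
    // Annotating `idx` sidesteps the inference failure.
    let condition = |idx: usize| v[idx].0 > 0;
    if condition(x) { 0 } else { 1 }
}

fn main() {
    assert_eq!(bar(&vec![(1, 2)]), 0);
    assert_eq!(bar(&vec![(-1, 2)]), 1);
}
```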
It seems that there is enough information for fully automatic type inference in both cases. Is it a compilation bug then? | A-closures,T-compiler,A-inference,C-bug | low | Critical |
482,335,959 | scrcpy | Request: when an emulator is running, and a device is connected, still auto-use the device | In this case it doesn't make sense that scrcpy will try to connect to the emulator. It should connect automatically directly to the device. | feature request | low | Minor |
482,408,480 | godot | Visual script: Class Constant list does not include Input | **Godot version:**
v3.1.stable.official
**OS/device including version:**
Windows 10 Enterprise 10.0.17134 Build 17134
**Issue description:**
I created a Class Constant node to get access to the MouseMode enum in the Input class, but the Input class does not show up in the list even though, for example, Performance does.
**Steps to reproduce:**
Create a Visual Script
Create a Class Constant node
Click Base Type in the inspector with the node selected
Search for Input and see it isn't there
| bug,confirmed,topic:visualscript | low | Major |
482,413,342 | TypeScript | [Feature Request] A way to specify preference to emit `typeof` a variable instead |
## Search Terms
emit, typeof, variable
## Suggestion
I'm just opening this issue so I have somewhere to refer to.
I don't necessarily think this is a good idea.
I'm just thinking about a problem I have aloud.
It would be nice if I had a way to annotate when I would prefer emit to use `typeof variable` instead of emitting the `variable`'s type.
```ts
type SomeType<T> = { x : T };
//Unsure of what syntax to use
type SomeType2<typeof T> = { x : T };
const variable = { prop : "value" };
/*
Expected:
type Foo = {
x: {
prop: string;
};
}
*/
type Foo = SomeType<typeof variable>;
/*
Expected:
type Foo2 = {
x: typeof variable;
}
*/
type Foo2 = SomeType2<typeof variable>;
/*
Expected:
type Bar = {
x: { prop : "value" };
}
*/
type Bar = SomeType<{ prop : "value" }>;
/*
Expected:
type Bar2 = {
x: { prop : "value" };
}
*/
type Bar2 = SomeType2<{ prop : "value" }>;
declare function f_1_1<T> (t : T) : SomeType<T>;
declare function f_1_2<T> (t : T) : SomeType2<T>;
declare function f_2_1<typeof T> (t : T) : SomeType<T>;
declare function f_2_2<typeof T> (t : T) : SomeType2<T>;
//Expected: { x: { prop: string } }
export const r_1_1 = f_1_1(variable);
//Call-site does not have preference for `typeof` emit
//Expected: { x: { prop: string } }
export const r_1_2 = f_1_2(variable);
//Expect to propagate preference to emit `typeof`
//Expected: { x: typeof variable }
export const r_2_1 = f_2_1(variable);
//Expected: { x: typeof variable }
export const r_2_2 = f_2_2(variable);
/*
Variable with nested properties
*/
const baz = { a : { b : { c : "hi" } } };
//Expect to propagate preference to emit `typeof`
//Expected: { x: typeof baz.a }
export const r_2_1 = f_2_1(baz.a);
//Expected: { x: typeof baz.a }
export const r_2_2 = f_2_2(baz.a);
```
It's just a **preference**. If such an emit can't be done, then it falls back to the default emit behaviour.
## Use Cases
My use case is that one of my subprojects initially spent 15s on check but 45s on emit.
A lot of it is because it's expanding the type of some variables (110+ lines) when it can just use `typeof variable` (1 line)
Right now, it's at 18s to check, 47s to emit.
If I can make it emit `typeof variable` for the parts where it is possible,
I can probably reduce my emit size and time by a lot.
-----
A SQL table has a bunch of columns, primary keys, candidate keys, etc.
For type-safe `JOIN` clauses and expressions in a query builder, these expressions, clauses, queries, etc. need to have information about the table(s) being used be part of their generic type param.
However, because TS always expands the type as much as possible, I end up with 110+ lines when a simple `typeof myTable` or `typeof myTable.columns` or `typeof import("../table").myTable` or `typeof import("../table").myTable.columns` would do in most cases.
I'm still trying to trim the amount of information each type needs about a table but there's just no escaping that a lot of information is still needed for SQL queries to be checked properly during compile-time.
-----
## Examples
See above suggestion.
## Checklist
My suggestion meets these guidelines:
* [X] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [X] This wouldn't change the runtime behavior of existing JavaScript code
* [X] This could be implemented without emitting different JS based on the types of the expressions
* [X] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [X] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Needs Investigation | low | Critical |
482,458,985 | flutter | Setting Google Maps key in Runtime | ## Use case
I have an app that should show a map for different clients by a white label application.
But I need to use a different key for each white-labeled app
## Proposal
1-Simple solution: add documentation for white label google maps key and SHA-1
2-Half solution: on iOS, the key could be set programmatically
3-Talk to the Android Maps developers to support a runtime key in SDK beta 3 | c: new feature,p: maps,package,c: proposal,team-ecosystem,P3,triaged-ecosystem | low | Minor |
482,470,065 | youtube-dl | Vider support |
## Checklist
- [x] I'm reporting a new site support request
- [x] I've verified that I'm running youtube-dl version **2019.08.13**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that none of provided URLs violate any copyrights
- [x] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
- Single video: https://vider.info/vid/+fn11es1
- Single video (embed): https://vider.info/embed/video/n11es1
## Description
Hello,
Please add support for vider.info,
Thx
| site-support-request | low | Critical |
482,498,227 | flutter | Textfield in Drawer is hidden behind keyboard when in focus | In this video, I click on the password text field.
https://youtu.be/-q6xS5Q3CKo | a: text input,framework,f: material design,f: scrolling,has reproducible steps,P2,found in release: 3.3,found in release: 3.7,team-design,triaged-design | low | Major |
482,504,621 | pytorch | Improve binary release for PyTorch domain library | ## 🚀 Feature
Based on the retro meeting following PyTorch 1.2.0 release, the team agreed to improve binary release process across PyTorch domain libraries:
+ torchvision @fmassa
+ torchtext @zhangguanheng66
+ torchaudio @vincentqb @jamarshon
A few general points:
+ Clear timeline for release to avoid the last minute stress.
+ Unified release scripts for domain libraries. Some common scripts could stay in the repo and are checked out for release branches.
+ Ship nightlies in the future?
+ Standardize the binary testing with a test suite
+ A general guideline for testing quality
Binary release for Windows
+ Better binary support for torchvision. Support Windows for torchtext, torchaudio
+ The Windows binary is actually under high demand, based on the feedback from torchvision
+ Nightlies?
CC @soumith @cpuhrsch @ezyang @peterjc123
cc @ezyang | module: binaries,triaged,better-engineering | medium | Critical |
482,539,346 | react | DevTools: An easier way to see all siblings | I have a particular pattern that I struggle with when navigating deep trees in devtools: I want to see all siblings of a node together.
Say I'm in the middle of something and I wonder what all the nodes on the same level are. It's super hard to actually get to that state. I wonder if we could tweak the "left" button to do that as an intermediate state.
* first press: collapse the current node
* second press: collapse all siblings (new)
* third press: move to the parent
Maybe this is too crazy :-) Or maybe there's another mechanic that can achieve the same effect. The goal here is to be able to make sense of the tree structure by going _upwards_. Currently implementation details of children prevent me from seeing it. (At least, with the "expand" mode on — which is now on by default.)
---
Originally reported by @gaearon via https://github.com/bvaughn/react-devtools-experimental/issues/384 | Type: Enhancement,Component: Developer Tools,React Core Team | low | Major |
482,540,711 | react | DevTools: Occasional FOUC when loading DevTools | Seems to only happen the first time DevTools is opened after being installed (or perhaps the first time after the browser is opened).
---
Originally reported via https://github.com/bvaughn/react-devtools-experimental/issues/381 | Type: Bug,Component: Developer Tools,React Core Team | medium | Minor |
482,542,595 | react | Remember saved component filters by url or in bookmarks | It would be cool if it supported saving filters based on URL or via bookmarks. I think it would be really useful for switching between different projects. Bookmarks seem like the better solution, because people usually have different stages like production/pre-production/local development.
---
Originally reported by @7rulnik via https://github.com/bvaughn/react-devtools-experimental/issues/359 | Type: Discussion,Component: Developer Tools,React Core Team | low | Minor |
482,543,940 | react | DevTools: React Native: Remember saved component filters between reloads | DevTools v4 added a pretty powerful new component filtering feature that enables devs to filter out components by type, name, or file system location. Because these filters can be a bit elaborate to create, they are saved between sessions to improve dev experience.
**Unfortunately, I don't think I am going to be able to support the persistence functionality for React Native.** (In other words, filters will be forgotten each time you reload the app.)
The reason for this is a mix of timing and context. The biggest limiting factor is the lack of a synchronous storage option. React Native has a couple of faux sync storage options, but they are just in-memory wrappers around an async storage layer and they require async initialization. That _could_ work if the React Native backend waited to initialize DevTools until it also initialized the async storage layer, _but_ this has implications on reload-and-profile support (#336).
---
Originally reported via https://github.com/bvaughn/react-devtools-experimental/issues/337 | Type: Enhancement,Component: Developer Tools,React Core Team | medium | Minor |
482,544,193 | react | DevTools: React Native: Support reload-and-profile | React DevTools v4 adds a new reload and profile feature to measure perf for application "mount" ([although it required a bit of hacking](https://github.com/bvaughn/react-devtools-experimental/pull/35)). I don't feel knowledgeable enough about React Native to tackle it, so my current plan is to just **not** support this feature for RN.
If we did decide to support it,I think we would need to solve the following:
1. A reload hook on the backend that worked for all bundle types (not just DEV).
2. Some assurance that the backend will be injected/initialized _before_ the first mount/commit (or a mechanism to delay the first commit, like we do in the browser).
3. Some way for third party code to request a production+profiling build ([similar to how DOM does it](https://fb.me/react-profiling)).
4. A sync storage mechanism (or some other way for DevTools could leave a flag for itself so it knows to begin profiling immediately after reload+connection).
---
Originally reported via https://github.com/bvaughn/react-devtools-experimental/issues/336 | Type: Enhancement,Component: Developer Tools,React Core Team | medium | Major |
482,547,222 | react | DevTools: Profiler: Show which hooks changed | # "Can you show which hooks changed?"
...is a question I've heard a couple of times with regard to the new Profiler change-tracking feature. This request is certainly understandable, but it presents a couple of challenges:
1. Identifying which hooks values change would require shallowly re-rendering each function component.
2. Identifying a hook in a non-ambiguous way requires displaying the full hooks tree structure, since hooks aren't named. (Alternatively, we could support named hooks, #16474)
Let's take a look at each of these below.
## 1 - Identifying which hooks values change
One of the challenges for DevTools when it comes to hooks is identifying custom hooks. Sebastian's [proposed solution](https://github.com/bvaughn/react-devtools-experimental/blob/master/src/backend/ReactDebugHooks.js) is that DevTools temporarily overrides React's hooks dispatcher while it shallowly re-renders the component. During the re-render, each time one of the built-in hooks is used, our override implementation parses the stack to identify "custom hooks" (functions higher up in the callstack that begin with "use"). After render is completed, we reassemble this information into a tree structure which DevTools can display.
Currently we only do this shallow render when a component is [inspected](https://github.com/bvaughn/react-devtools-experimental/blob/master/OVERVIEW.md#inspecting-an-element), but in order for us to track which hooks have changed while profiling, we would need to shallowly render _every_ component using hooks during the profiling session. Most likely we would have to do this during the performance-sensitive "commit" phase since that's when DevTools is notified of an update.
I think we could do better than re-running the above hooks override for every component on every commit if we:
* Created a map of Fiber to cached hooks tree structure.
* Lazily populate the above map (by shallow re-rendering) only when a component was updated for the first time.
* Compared Fiber `memoizedState`s to identify changes on future commits and map them back to the tree structure based on their position in the list structure. <sup>1</sup>
However, even with the above optimizations this would still add significant overhead to a performance sensitive phase.
<sup>1</sup> I think this should work but might also end up being complicated to implement.
## 2 - Identifying a hook
Although the variables that hooks values are assigned to are meaningfully named, the hooks themselves are unnamed. Because of this, DevTools has no feasible way of identifying a hook short of displaying the entire hooks tree structure. Consider the following example code:
```js
function useCustomHook(...) {
const [foo, setFoo] = useState(...);
// ...
}
function ExampleComponent(props) {
const [bar, setBar] = useState(...);
const [baz, setBaz] = useState(...);
const custom = useCustomHook(...);
// ...
}
```
The example above shows 4 hooks: three `useState` and one custom. Let's say that "foo" and "baz" changed in a particular render. How would DevTools identify this? It could just show "two state hooks" but that's not very helpful. I think the only way we could identify it would be to show the entire tree, and visually highlight which hooks in it have changed:
```
State
State *
CustomHook
State *
```
This is _okay_ but it's not great unless the developer is cross-referencing the component (and probably the custom hooks definition as well). To help with this, we could also _show the values_ but now we're adding more overhead in terms of tracking and bridge traffic.
## In summary
Clearly both of these challenges can be overcome but they are non-trivial to implement and they will certainly add more runtime overhead to the profiler. Because of this, it may be a while before we add this feature to the DevTools.
---
Originally reported via https://github.com/bvaughn/react-devtools-experimental/issues/312 | Type: Enhancement,Component: Developer Tools,React Core Team | high | Critical |
482,547,444 | react | DevTools: Component bookmarks | Product developers sometimes find it useful to jump back and forth between a few components in a tree. Currently this requires scrolling or using the selection tool. Maybe we could allow you to temporarily bookmark one or more components somehow? Then the existing Search interface could maybe be repurposed to let you step between bookmarked components (when there's no search text).
These bookmarks would probably not need to be persisted between reloads, so they could be associated with the specific in-memory element<sup>1</sup>.
<sup>1</sup> Although this association would be lost with a filter change.
---
Originally reported via https://github.com/bvaughn/react-devtools-experimental/issues/305 | Type: Enhancement,Type: Discussion,Component: Developer Tools,React Core Team | low | Minor |
482,550,567 | react | DevTools: Fix disabled hooks lint rule | Disabled via 00f6466
More context at https://github.com/bvaughn/react-devtools-experimental/pull/154#discussion_r275134664
---
Originally reported via https://github.com/bvaughn/react-devtools-experimental/issues/156 | Component: Developer Tools,React Core Team | medium | Minor |
482,550,722 | react | DevTools: Collect more info when profiling | Some feedback I've heard from a DevTools user (roughly transcribed by me):
> I'm trying to pinpoint those renders...with hooks, it's sometimes more unclear to me why something is rendering...I generally don't use devtools much anywhere. I use console.log. But Redux devtools worked really well for me because I could see when things were changing and what exactly changed.
Maybe we could add an opt-in mode (in Settings > Profiler) to collect more data when profiling about _why_ a component rendered. For example, if `props` or `state` changed, we could show which keys changed (just their names, not their values). Maybe we could do something similar for context and for hooks?
Then we could add this information to the right side panel for the selected fiber in the Profiler UI.
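A rough sketch of the kind of shallow key diff this would need (a hypothetical helper, not the actual DevTools implementation):

```javascript
// Hypothetical helper: report which top-level keys of props/state
// changed between renders (names only, not values).
function changedKeys(prev, next) {
  const keys = new Set([...Object.keys(prev), ...Object.keys(next)]);
  return [...keys].filter((key) => !Object.is(prev[key], next[key]));
}

// Logs the changed key name, "b":
console.log(changedKeys({ a: 1, b: 2 }, { a: 1, b: 3 }));
```

For hooks, the same idea would have to walk the `memoizedState` list instead, which is part of what makes the hooks case harder.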
---
Originally reported via https://github.com/bvaughn/react-devtools-experimental/issues/98 | Type: Enhancement,Component: Developer Tools,React Core Team | low | Minor |
482,550,920 | react | DevTools: Should Profiler surface the base duration? | Benoit shared feedback that it would be helpful to show the base duration for the tree (and/or selected element) to get a sense of the total cost over time. (Not sure yet what we'd call this.)
---
Originally reported via https://github.com/bvaughn/react-devtools-experimental/issues/55 | Type: Enhancement,Component: Developer Tools,React Core Team | medium | Minor |
482,551,102 | react | DevTools: Better linking between browser Performance tab and DevTools Profiler | Notes from a chat with Benoit:
It would be nice if starting profiling (or reload and profiling) in the browser auto-started profiling in DevTools.
It would also be nice if viewing a range of time in the Performance tab narrowed down the commits within the Profiler. At least starting the profilers at the same time would enable a manual match-up.
To my knowledge, none of the currently available APIs (including experimental) would enable us to support this level of integration.
---
If we were to make use of the DevTools protocol, we could [`Profiler.start`](https://vanilla.aslushnikov.com/?Profiler.start) and [`Profiler.stop`](https://vanilla.aslushnikov.com/?Profiler.stop) the built-in profiler in sync with React's own profiler. Chrome's profiler also dispatches [`Profiler.consoleProfileStarted`](https://vanilla.aslushnikov.com/?Profiler.consoleProfileStarted) and [`Profiler.consoleProfileFinished`](https://vanilla.aslushnikov.com/?Profiler.consoleProfileFinished) events which we _could_ use to follow the browser's profiler if we wanted to.
There do not appear to be any APIs/events for syncing the zoomed-in range.
---
Originally reported via https://github.com/bvaughn/react-devtools-experimental/issues/37 | Type: Enhancement,Component: Developer Tools,React Core Team | medium | Major |
482,552,635 | create-react-app | [Option] Disable automatic dependency installation | ### Is your proposal related to a problem?
No
### Describe the solution you'd like
It would be nice to be able to use create-react-app to create a new project without automatically installing all of the dependencies. Something like ```create-react-app app-name --skipInstall``` similar to how the [@angular/cli](https://angular.io/cli/new) works.
### Describe alternatives you've considered
It doesn't appear that there are any options for configuring create-react-app other than additional logging. Manually creating a React project from scratch would be counterproductive.
### Additional context
I find myself generating a lot of template/placeholder projects with create-react-app, without the need to run them right away. It would greatly speed up the process if I could optionally omit the dependency installation step.
| issue: proposal | low | Major |
482,554,044 | flutter | Flutter crashes when you press multiple buttons using a mouse on Chromebook and release out of window | ## Description
- Run flutter_gallery on a Chromebook and use a mouse
- Press both mouse buttons anywhere and hold
- Drag out of the window
- Release one of the buttons (or both buttons)
- Move back into the window
- App crashes
On the other hand, if you do the same thing but press only one button at the 2nd step, it doesn't crash.
## Investigation
This is the assertion that caused the crash:
```
E/flutter ( 3522): [ERROR:flutter/lib/ui/ui_dart_state.cc(148)] Unhandled Exception: 'package:flutter/src/gestures/converter.dart': Failed assertion: line 134 pos 20: '!state.down': is not true.
E/flutter ( 3522): #0 _AssertionError._doThrowNew (dart:core-patch/errors_patch.dart:40:39)
E/flutter ( 3522): #1 _AssertionError._throwNew (dart:core-patch/errors_patch.dart:36:5)
E/flutter ( 3522): #2 PointerEventConverter.expand (package:flutter/src/gestures/converter.dart:134:20)
E/flutter ( 3522): #3 _SyncIterator.moveNext (dart:core-patch/core_patch.dart:144:12)
E/flutter ( 3522): #4 ListQueue.addAll (dart:collection/queue.dart:655:25)
E/flutter ( 3522): #5 GestureBinding._handlePointerDataPacket (package:flutter/src/gestures/binding.dart:84:27)
E/flutter ( 3522): #6 _rootRunUnary (dart:async/zone.dart:1136:13)
E/flutter ( 3522): #7 _CustomZone.runUnary (dart:async/zone.dart:1029:19)
E/flutter ( 3522): #8 _CustomZone.runUnaryGuarded (dart:async/zone.dart:931:7)
E/flutter ( 3522): #9 _invoke1 (dart:ui/hooks.dart:263:10)
E/flutter ( 3522): #10 _dispatchPointerDataPacket (dart:ui/hooks.dart:172:5)
```
I added print statements of `event` at the beginning of `onGenericMotionEvent` and `onTouchEvent` in `AndroidTouchProcessor.java`. This is the log when you press two buttons, move out, release, then move back: (duplicate items removed)
```
I/System.out( 3604): onGenericMotionEvent: Change 3 buttons 0
I/System.out( 3604): onTouchEvent: Change 4 buttons 2
// When pressing the second button; while dragging
I/System.out( 3604): onTouchEvent: Change 5 buttons 3
// When releasing one of the buttons
I/System.out( 3604): onTouchEvent: Change 5 buttons 2
// Whether you release the other button or not, no more logs are added.
// When moving back into the window
I/System.out( 3604): onGenericMotionEvent: Change 3 buttons 0
```
The last line `onGenericMotionEvent: Change 3 buttons 0` dispatches a hover event, which is unexpected by `PointerEventConverter` because the previous event has not been ended by an up event or cancel event yet, leading to the assertion failure.
In contrast, this is the log when you press one button, move out, release, then move back:
```
I/System.out( 3604): onGenericMotionEvent: Change 3 buttons 0
I/System.out( 3604): onTouchEvent: Change 5 buttons 2
// When releasing the button
I/System.out( 3604): onTouchEvent: Change 6 buttons 0
```
Somehow this time an up event is dispatched.
## Environment
- Framework: ecf9748a77c86747935ba2ae59c7c19ff8bef8e1
- Engine: 5a33d6ab06fa2cc94cdd864ff238d2c58d6b7e4e
- Device: Chromebook Pixel 75.0.3770.129 | framework,engine,platform-chromebook,a: desktop,a: mouse,P2,team-framework,triaged-framework | low | Critical |
482,555,609 | rust | Cannot use `use Self::*;` inside a method | ```rust
enum A {}
impl A {
fn foo() {
use Self::*; // error: unresolved import `Self`
}
}
```
It's counterintuitive that this fails. At the very least (if it's not possible to fix this), we should special case the error message to explain the problem.
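In the meantime, a workaround sketch (assuming edition 2018) is to name the type itself instead of `Self`:

```rust
// Variants added here purely to make the workaround demonstrable.
enum A {
    X,
    Y,
}

impl A {
    fn describe(&self) -> &'static str {
        use A::*; // compiles, unlike `use Self::*;`
        match self {
            X => "x",
            Y => "y",
        }
    }
}

fn main() {
    assert_eq!(A::X.describe(), "x");
    assert_eq!(A::Y.describe(), "y");
}
```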
https://github.com/rust-lang/rust/issues/49683#issuecomment-458886172 may be related. | C-enhancement,A-diagnostics,A-resolve,T-lang,T-compiler,D-confusing | low | Critical |
482,555,925 | pytorch | Recommendations for Grid Sample/Affine Grid/Displacement Fields/Optical Flow | I compiled a list of recommendations (as well as outstanding issues needing attention) for improving the `grid_sample` and `affine_grid` functions (the functions that are used to form a spatial transformer module), to make them easier to use for a number of computer vision applications.
The list is not exhaustive, so feel free to help out by adding other known issues and/or recommendations.
Some of these are more quick-and-easy fixes that could be easily moved out of the way, while others require some more in-depth discussion and design (marked with [DISCUSSION]). For the latter category, I would appreciate discussion and feedback from some of the existing stakeholders such as @ssnl, @fmassa, @zou3519, @ailzhang, @soumith, and others.
### Bugs and Issues
- [x] [Fixed in #32829] Fix the border gradient issue, which causes training to occasionally get wild results for grid points at image borders. (originally #23925)
- [ ] [#24823] Fix handling of `NaN` and `inf` grid inputs, which causes segfaults and corrupted CUDA memory (also #19826, [forum/39837](https://discuss.pytorch.org/t/what-may-cause-runtime-error-in-grid-sample/39837)).
- [x] [Fixed in #24929] `affine_grid` does currently support generating 3D affine grids. Update the documentation to reflect it, and update the tests to cover the functionality. (originally #24821, [#24380](https://github.com/pytorch/pytorch/issues/24380#issuecomment-522294803), #24878)
- [x] [Fixed in #24929] `affine_grid` error messages: Produce a more useful error message when the affine matrix has the wrong shape or dtype (#12362), or when it is not a tensor.
- [ ] [#5565] `bilinear` mode for 3D `grid_sample` should really be `trilinear`. Personally, I would prefer to have just a single mode name for `linear`, `bilinear`, and `trilinear`, but this would bring it in line with `interpolate()`.
### Support Optical Flow More Directly
_These changes would greatly simplify the implementation of convolutional networks to do Optical Flow Estimation and Image Registration tasks._
- [x] [Fixed in #24929] Add `align_corners=False` option to make `grid_sample` resolution agnostic, which is important for predicting optical flow at less-than-full resolution (originally #20785, #23923)
- [ ] [#36108] Allow option for sampling using only the residual displacement/flow. Eliminates the need to constantly add an identity grid to the flow/displacement field, which is imprecise, slow, and very [prone to user error](https://discuss.pytorch.org/t/warp-video-frame-from-optical-flow/6013).
- [ ] [pending at #41241] Allow option for the grid/flow input to be in pixel units (absolute coordinates). This is much more natural for some applications, including image registration, where the grid-like input is actually a displacement field, and this eliminates the need for the user to convert to normalized [-1,1] coordinates, which, again, is prone to user error, (especially when users don’t know which setting of `align_corners` to target). (originally #36107)
Implementation: Combinations of these options (residual and/or pixel units) could be added to `grid_sample()`, or as suggested in #4085, put in a separate `flow_sample()` high-level function (which should probably use the same kernels as the existing `grid_sample()`). Either way, the kernel code needs to be changed very little in order to allow them. For pixel units, just skip the call to the `unnormalize` function on the grid. For residual displacement, just add the destination x,y coordinates to the (unnormalized) grid.
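As a reference for the conversion burden mentioned above, this is the pixel-to-normalized mapping users currently apply by hand before building a grid (pure-Python sketch; the function name is ours, and it mirrors the two `align_corners` conventions):

```python
def to_normalized(px, size, align_corners):
    """Map a pixel index to grid_sample's normalized [-1, 1] range."""
    if align_corners:
        # -1 and 1 refer to the centers of the corner pixels.
        return 2.0 * px / (size - 1) - 1.0
    # -1 and 1 refer to the outer edges of the corner pixels,
    # so pixel centers sit at +/-(1 - 1/size).
    return (2.0 * px + 1.0) / size - 1.0
```

Getting the `align_corners` branch wrong shifts every sample by up to half a pixel, which is exactly the kind of user error a pixel-unit option would remove.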
### Inconsistencies and Conventions
_These inconsistencies are not necessarily incorrect, so to speak, but some of these do tend to cause a lot of headaches and confusion._
- [ ] Change padding modes names in `grid_sample` to match the names of padding modes in [`torch.nn.functional.pad`](https://pytorch.org/docs/stable/nn.functional.html#torch.nn.functional.pad).
- [ ] [DISCUSSION] The function which on the user-facing python side is named `grid_sample`, is behind the scenes on both the python and the C++ side always named `grid_sampler` (notice the extra ‘r’) and `GridSampler`. Perhaps standardize it by renaming the behind-the-scenes versions to `grid_sample` (respectively `GridSample`), because this direction would not break backwards compatibility. Could still cause BC issue with saved models, though.
- [x] [Fixed in #24929] Add `align_corners` option so that `grid_sample` matches the indexing conventions of `interpolate` (originally #20785, #23923)
- [ ] [DISCUSSION] Switch from a “channels-last” to a “channels-first” convention in order to match the format of all other PyTorch tensors. That is, `[N, C, H, W]` rather than `[N, H, W, C]`, where `C` represents the grid components. Would eliminate the need to constantly call `permute()` to move channels back and forth. This would allow grids/flows/fields to be used normally in most other PyTorch functions, such as `interpolate`, among others. (The only other exceptions to the channels-first rule, as far as I know, are the FFT functions, which also expect channels-last `[N, H, W, C]`.)
Similarly, the components should be expressed as `(y,x)` instead of the current `(x,y)` to match the indexing order. This is currently [very](https://twitter.com/ducha_aiki/status/1024596593302011904) [confusing](https://github.com/pytorch/pytorch/issues/4223) to users and developers alike.
Downside: would be very BC-breaking.
- [ ] [DISCUSSION] Standardize support for half precision grids in CUDA kernels of `grid_sample`. This was partially implemented before, but in a way that caused problems for double precision (by casting everything to single precision floats). Perhaps there should be a conditional cast only if the input grid has less than single precision.
### New Features
- [ ] [#2732] Support broadcasting in `grid_sample` and `affine_grid`.
Specifically, for `affine_grid`, allow it to accept a single 2x3 2D affine matrix, or a single 3x4 3D affine matrix, with no batch dimension, and just produce a single-batch grid (or one with no batch dimension in conjunction with `grid_sample` broadcasting).
For `grid_sample`, apply broadcasting rules to the batch dimension. (Broadcasting doesn’t make sense in the spatial dimensions, of course.)
- [ ] [DISCUSSION] Allow `affine_grid` to accept a python list or a numpy `ndarray`, since when not used as part of a spatial transformer module, users often think of `affine_grid` as a constructor. For example, construct a grid tensor that rotates an image. Can possibly just have it call `torch.tensor()` on the input if not already a tensor.
- [ ] Add a cyclic/circular padding mode, where the image is treated as if repeated/tiled in each dimension
- [ ] [DISCUSSION: #25039] Add new interpolation modes, including `bicubic`, `area`, and possibly [`lanczos`](https://en.wikipedia.org/wiki/Lanczos_resampling).
- [ ] [DISCUSSION: #21457] Improve the ability of `grid_sample` to downsample while warping (that is, when the grid points are spaced out far apart). This would allow a single warp-downsample operation which is more capable than applying the operations in sequence.
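The cyclic/circular padding mode above amounts to wrapping out-of-range indices back into the image. In pure Python (function name ours, for illustration), this is a one-liner because `%` already wraps negative indices:

```python
def circular_index(i, n):
    """Wrap any integer index onto [0, n), as if the image were
    tiled along that axis."""
    # Python's % wraps negatives too, e.g. -1 % 5 == 4.
    return i % n
```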
### Refactoring
- [x] [Fixed in #24929] There is a ‘dispatcher’ for `affine_grid_generator` that is now left all alone in `nn/_functions/vision.py`, and its sole purpose is to decide if to use cuDNN. I don’t see why we need to keep `vision.py` around just for that. It could probably be moved either up directly into the `affine_grid` function of `nn/functional.py` or down into the C++ implementation.
- [ ] [#24470, #25014] Profile CUDA kernels vs. cuDNN kernels, and consider dropping use of cuDNN grid sampler and affine grid generator if they don’t give much extra value, as [suggested](https://github.com/pytorch/pytorch/issues/20785#issuecomment-496014108) by @soumith
- [ ] Vectorize the 3D CPU kernel of `grid_sample`. See if gives performance gain like it does for the 2D version at #10980.
- [ ] [DISCUSSION] Group together some of the code for `grid_sample` and `interpolate`. They have some shared behavior and kernel code (such as the bilinear/trilinear interpolation) that could potentially be standardized and factored out.
Feel free to open individual issues for any of these points in order to start a discussion (especially for the ones marked with DISCUSSION) and then link back to them here. | proposal accepted,triaged,module: interpolation | medium | Critical |
482,556,578 | react | DevTools: Crashes and warnings when quickly collapsing | This is weird. Happens if I select a deeply nested node and then long-press the "left" arrow.
```
Invalid index 154 specified; store contains 154 items.
Uncaught Invariant Violation: Expected to find a host parent. This error is likely caused by a bug in React. Please file an issue.
at ReactError (file:///Users/gaearon/p/react-devtools-experimental/shells/dev/build/devtools.js:8529:40)
```
It starts with "invalid index" and then gives me different React invariants or warnings depending on how lucky I am.

---
More weird symptoms:
<img width="889" alt="Screen Shot 2019-04-25 at 6 38 22 PM" src="https://user-images.githubusercontent.com/810438/56756281-5671f480-6789-11e9-8d0b-631a5217e63b.png">
---
<img width="832" alt="Screen Shot 2019-04-25 at 6 41 37 PM" src="https://user-images.githubusercontent.com/810438/56756492-c7191100-6789-11e9-8814-cb849590ee01.png">
---
This "fixes" it:
```diff
runWithPriority(UserBlockingPriority, () => dispatch(action));
- next(() => dispatch({ type: 'UPDATE_INSPECTED_ELEMENT_ID' }));
+ runWithPriority(UserBlockingPriority, () => dispatch({ type: 'UPDATE_INSPECTED_ELEMENT_ID' }));
},
```
So I suspect it's a bug with `Scheduler.next()`.
---
This also looks funky. Note how somewhere in the middle right pane gets "stuck" showing the same cycle of values:

---
React bug: https://github.com/facebook/react/issues/15512
---
Originally reported by @gaearon via https://github.com/bvaughn/react-devtools-experimental/issues/228 | Type: Bug,Component: Developer Tools,React Core Team | medium | Critical |
482,556,701 | react | DevTools: Re-enable postMessage transferable for faster ArrayBuffer transfers | I got this on FB.com sandbox:
<img width="815" alt="screen shot 2019-03-01 at 1 15 24 pm" src="https://user-images.githubusercontent.com/810438/53640457-26dcbb00-3c24-11e9-828f-a987ffeec4da.png">
---
Originally reported by @gaearon via https://github.com/bvaughn/react-devtools-experimental/issues/25 | Type: Bug,Component: Developer Tools,React Core Team | medium | Minor |
482,556,957 | react | DevTools: Check if accessibility regressions exist compared to old DevTools | Before this becomes stable, we need to check if we are regressing accessibility on any important existing interactions.
At least, we should probably make the tree view focusable.
---
Originally reported by @gaearon via https://github.com/bvaughn/react-devtools-experimental/issues/52 | Component: Developer Tools,React Core Team | medium | Minor |
482,557,333 | react | DevTools: Audit places where we change tags or disconnect alternates in React | Cases like https://github.com/bvaughn/react-devtools-experimental/issues/197 (where a dehydrated Suspense node turns into a regular one) produce confusing failures because we expect Fiber alternates to be "for life", whereas in practice they can actually get disconnected by React in some cases. (Search for "Disconnect" in ReactFiberBeginWork.)
Additionally, I think changing `tag` can also produce confusing failures if it changes from a value that was filtered out, to a value that is not filtered out.
We need to be more proactive about handling these cases when we make such changes to React, and we need to look at existing cases where this happens and whether we can handle them.
---
Originally reported by @gaearon via https://github.com/bvaughn/react-devtools-experimental/issues/198 | Component: Developer Tools,React Core Team | medium | Critical |
482,557,409 | react | DevTools: Components tree is sometimes unexpectedly empty after navigation | 1. Open FB page
2. Open Components tab
3. Change address bar to `https://reactjs.org` and press Enter
Expected: Components tab gets populated.
Actual:
<img width="783" alt="Screen Shot 2019-04-23 at 7 27 37 PM" src="https://user-images.githubusercontent.com/810438/56606380-00247a80-65fe-11e9-988c-2ad3e69eb579.png">
~~If I **inspect background page**, I see this:~~ (fixed by #229)
<img width="652" alt="Screen Shot 2019-04-23 at 7 27 12 PM" src="https://user-images.githubusercontent.com/810438/56606408-0b77a600-65fe-11e9-9f65-5502401b7e4a.png">
---
Originally reported by @gaearon via https://github.com/bvaughn/react-devtools-experimental/issues/217 | Type: Bug,Component: Developer Tools,React Core Team | low | Major |
482,557,603 | react | DevTools: Write tests for preserving selection | See https://github.com/bvaughn/react-devtools-experimental/pull/215. It would be nice to have test coverage for it.
---
I got stuck here:
```js
const Component = () => <div>Hi</div>;
act(() =>
ReactDOM.render(<Component />, document.createElement('div'))
);
const id = store.getElementIDAtIndex(0);
const rendererID = store.getRendererIDForElement(id);
act(() => {
global.bridge.send('selectElement', { id, rendererID });
})
```
This test fails on master because bridge object is shared between agent and store. Separating it and emulating having two bridges didn't work because of some regression in the Suspense test. I haven't dug into why because the stack trace display is obscured and points to the wrong line in the test. The stack trace display points to the wrong line likely because of regenerator code. The regenerator code is likely coming from `babel-preset-env` thinking we need to polyfill async/await. I don't know why `babel-preset-env` doesn't realize my Node already has async/await. At that point I punted on this.
---
Originally reported by @gaearon via https://github.com/bvaughn/react-devtools-experimental/issues/219 | Component: Developer Tools,React Core Team | medium | Minor |
482,558,777 | flutter | [flutter_tool] Add support for unknown/unsupported platforms for the plugin tools templates | | c: new feature,tool,P3,team-tool,triaged-tool | low | Minor |
482,578,906 | flutter | -[FlutterEngine maybeSetupPlatformViewChannels] is not thread-safe. | The platform view may have been collected in [the method call handler](https://github.com/flutter/engine/blob/99ee3c2b0df7911baa390fa897cefc836605d154/shell/platform/darwin/ios/framework/Source/FlutterEngine.mm#L305) which will cause the dereference of a collected value. | c: crash,platform-ios,engine,P2,team-ios,triaged-ios | low | Minor |
482,647,307 | pytorch | RuntimeError on PyTorch 1.2 under NVIDIA Nsight Systems | ## 🐛 Bug
Creating a pinned tensor in PyTorch 1.2 fails with `RuntimeError: Error in dlopen or dlsym: libcaffe2_nvrtc.so...` under NVIDIA Nsight Systems.
I tested both PyTorch versions 1.2 and 1.1. This problem happened only in PyTorch 1.2.
## To Reproduce
Steps to reproduce the behavior:
1. Install PyTorch 1.2
1. Install NVIDIA Nsight Systems 2019.3
1. Run `nsys profile -snone python -c 'import torch; torch.rand(100, pin_memory=True)'`.
```console
$ nsys profile -snone python -c 'import torch; torch.rand(100, pin_memory=True)'
WARNING: Backtraces will not be collected because sampling is disabled.
**** collection configuration ****
force-overwrite = false
stop-on-exit = true
export_sqlite = false
stats = false
delay = 0 seconds
duration = 0 seconds
inherit-environment = true
show-output = true
trace-fork-before-exec = false
sample_cpu = false
backtrace_method = LBR
trace_cublas = false
trace_cuda = true
trace_cudnn = false
trace_nvtx = true
trace_openacc = false
trace_vulkan = false
trace_opengl = true
trace_osrt = true
osrt-threshold = 0 nanoseconds
profile_processes = tree
application command = python
application arguments = -c import torch; torch.rand(100, pin_memory=True)
application working directory = /home/sublee
environment variables:
Collecting data...
Traceback (most recent call last):
File "<string>", line 1, in <module>
RuntimeError: Error in dlopen or dlsym: libcaffe2_nvrtc.so: cannot open shared object file: No such file or directory
Generating the file.
Capturing raw events...
2878 total events collected.
Saving diagnostics...
Saving qdstrm file to disk...
Finished saving file.
Importing the qdstrm file using /opt/nvidia/nsightsystems/nsightsystems-cli-2019.3.6/Host-x86_64/QdstrmImporter.
Importing...
Importing [==================================================100%]
Saving report to file "/home/sublee/nvidia_nsight_systems/report19.qdrep"
Report file saved.
Please discard the qdstrm file and use the qdrep file instead.
Removed /home/sublee/nvidia_nsight_systems/report19.qdstrm as it was successfully imported.
Please use the qdrep file instead.
```
`torch.rand(100, pin_memory=True)` failed with this error:
```python
Traceback (most recent call last):
File "<string>", line 1, in <module>
RuntimeError: Error in dlopen or dlsym: libcaffe2_nvrtc.so: cannot open shared object file: No such file or directory
```
## Expected behavior
`torch.rand(100, pin_memory=True)` should not fail under NVIDIA Nsight Systems.
- It doesn't fail in *PyTorch 1.1* under NVIDIA Nsight Systems.
- It doesn't fail in PyTorch 1.2 *without* NVIDIA Nsight Systems.
## Environment
```
Collecting environment information...
PyTorch version: 1.2.0
Is debug build: No
CUDA used to build PyTorch: 10.0.130
OS: Ubuntu 18.04.2 LTS
GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
CMake version: version 3.10.2
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 10.0.130
GPU models and configuration:
GPU 0: Tesla P40
GPU 1: Tesla P40
GPU 2: Tesla P40
GPU 3: Tesla P40
GPU 4: Tesla P40
GPU 5: Tesla P40
GPU 6: Tesla P40
GPU 7: Tesla P40
Nvidia driver version: 410.104
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.2
Versions of relevant libraries:
[pip3] numpy==1.16.4
[pip3] torch==1.2.0
[pip3] torchvision==0.4.0
[conda] blas 1.0 mkl
[conda] mkl 2019.4 243
[conda] mkl-include 2019.4 243
[conda] mkl-service 2.0.2 py36h7b6447c_0
[conda] mkl_fft 1.0.12 py36ha843d7b_0
[conda] mkl_random 1.0.2 py36hd81dba3_0
[conda] torch 1.2.0 pypi_0 pypi
[conda] torchvision 0.4.0 pypi_0 pypi
```
## Additional context
### Environment variables to reproduce
I tried to run the Python code with `LD_PRELOAD` and `QUADD_INJECTION_PROXY` set by NVIDIA Nsight Systems. The problem was simply reproduced.
```console
$ export LD_PRELOAD=/opt/nvidia/nsightsystems/nsightsystems-cli-2019.3.6/Target-x86_64/x86_64/libToolsInjectionProxy64.so
$ export QUADD_INJECTION_PROXY=OSRT
$ python -c 'import torch; torch.rand(100, pin_memory=True)'
Traceback (most recent call last):
File "<string>", line 1, in <module>
RuntimeError: Error in dlopen or dlsym: libcaffe2_nvrtc.so: cannot open shared object file: No such file or directory
```
<!--
### Specify `libcaffe2_nvrtc.so` path
In my system, `libcaffe2_nvrtc.so` is at `/opt/conda/lib/python3.6/site-packages/torch/lib/libcaffe2_nvrtc.so`. I specified the path at `LD_PRELOAD` when running NVIDIA Nsight Systems. Then the problem has been fixed.
```console
$ nsys profile -eLD_PRELOAD=/opt/conda/lib/python3.6/site-packages/torch/lib/libcaffe2_nvrtc.so -snone python -c 'import torch; torch.rand(100, pin_memory=True)'
WARNING: Backtraces will not be collected because sampling is disabled.
**** collection configuration ****
force-overwrite = false
stop-on-exit = true
export_sqlite = false
stats = false
delay = 0 seconds
duration = 0 seconds
inherit-environment = true
show-output = true
trace-fork-before-exec = false
sample_cpu = false
backtrace_method = LBR
trace_cublas = false
trace_cuda = true
trace_cudnn = false
trace_nvtx = true
trace_openacc = false
trace_vulkan = false
trace_opengl = true
trace_osrt = true
osrt-threshold = 0 nanoseconds
profile_processes = tree
application command = python
application arguments = -c import torch; torch.rand(100, pin_memory=True)
application working directory = /home/sublee
environment variables:
LD_PRELOAD=/opt/conda/lib/python3.6/site-packages/torch/lib/libcaffe2_nvrtc.so
Collecting data...
Generating the file.
Capturing raw events...
4597 total events collected.
Saving diagnostics...
Saving qdstrm file to disk...
Finished saving file.
Importing the qdstrm file using /opt/nvidia/nsightsystems/nsightsystems-cli-2019.3.6/Host-x86_64/QdstrmImporter.
Importing...
Importing [==================================================100%]
Saving report to file "/home/sublee/nvidia_nsight_systems/report20.qdrep"
Report file saved.
Please discard the qdstrm file and use the qdrep file instead.
Removed /home/sublee/nvidia_nsight_systems/report20.qdstrm as it was successfully imported.
Please use the qdrep file instead.
```
--> | module: cuda,triaged,module: third_party | low | Critical |
482,702,938 | react | Chrome's Custom Formatters | **Do you want to request a *feature* or report a *bug*?**
Feature
_Transferring feature request from the old repo https://github.com/facebook/react-devtools/issues/989_
Hi! Is there any plans on supporting [Chrome's custom formatters](https://docs.google.com/document/d/1FTascZXT9cxfetuPRT2eXPQKXui4nWFivUnS_335T3U/preview) to display custom data structures in readable format in React dev tools?
For example when debugging ClojureScript's immutable data structures we have a custom formatter that outputs data into the console in readable and inspectable format.
Here's how it looks

And here's how the data looks in the React Dev Tools inspector (basically the underlying implementation of the data structure as seen in plain JS)

I think this can be done for React Dev Tools since, once Custom Formatters are defined, they are applied everywhere in Chrome's Dev Tools where it's possible to inspect data.
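For context, a formatter is just an object Chrome reads from `window.devtoolsFormatters`. A minimal sketch of that shape (this `Point` formatter is hypothetical, not from ClojureScript's implementation):

```javascript
// Sketch of the custom-formatter object shape Chrome reads from
// window.devtoolsFormatters.
const pointFormatter = {
  header(obj) {
    // Return null to decline objects this formatter doesn't handle.
    if (obj === null || typeof obj !== "object" || !("x" in obj) || !("y" in obj)) {
      return null;
    }
    // The return value is JsonML: ["tagName", {attributes}, ...children].
    return ["span", {}, `Point(${obj.x}, ${obj.y})`];
  },
  hasBody() {
    return false;
  },
};

// In a page you would register it with:
//   window.devtoolsFormatters = [pointFormatter];
```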
| Type: Discussion,Component: Developer Tools | low | Critical |
482,702,948 | TypeScript | TSServer: Find all reference for per overloaded function type |
## Search Terms
Find references, overload, function, overloaded function
## Suggestion
Provide a mechanism to search references of a function with particular overloaded type
## Use Cases
In the current implementation, find-all-references for an overloaded function shows all references for every overload.
Sometimes we need to find references of a particular overload only (e.g. an overload is deprecated and we want to update all its use cases).
## Examples
```ts
/**
* @deprecated
**/
declare function fn():void; // overload 1
declare function fn(param:string):number; // overload 2
fn(); // use 1
const x = fn('a'); // use 2
const y = fn('b'); // use 3
```
Finding references for `overload 1` should list `use 1`.
Finding references for `overload 2` should list `use 2` and `use 3`.
## Checklist
My suggestion meets these guidelines:
* [X] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [X] This wouldn't change the runtime behavior of existing JavaScript code
* [X] This could be implemented without emitting different JS based on the types of the expressions
* [X] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [X] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback,Domain: Refactorings | low | Minor |
482,808,077 | create-react-app | disable TS check on start script | ### Is your proposal related to a problem?
<!--
Provide a clear and concise description of what the problem is.
For example, "I'm always frustrated when..."
-->
Yep. My project has some interface export files just for IDE type definitions (nothing imported in the project), but when I try to `npm start`, the script tells me to install TypeScript (unnecessarily) and creates a `tsconfig.json` file in the project's root dir. This is annoying.
### Describe the solution you'd like
<!--
Provide a clear and concise description of what you want to happen.
-->
Add some way (maybe a flag?) to disable this check.
### Describe alternatives you've considered
<!--
Let us know about other solutions you've tried or researched.
-->
(Write your answer here.)
### Additional context
<!--
Is there anything else you can add about the proposal?
You might want to link to related issues here if you haven't already.
-->
The project's team is not fluent in TypeScript, but we're applying it in some cases that involve sensitive data, just to get the IDE's code-time type hints (`/** @type...`). There is no reason to install the whole TS lib just for this.
| issue: proposal,needs triage | low | Minor |
482,821,805 | flutter | Expose Skia Paint.getTextPath | Flutter fails to expose https://skia.org/user/api/SkPaint_Reference#SkPaint_getTextPath
## Use case
Numerous. Using the text as a clipping path, or using it as a base for more complex painting operations. | c: new feature,engine,dependency: skia,P3,team-engine,triaged-engine | low | Minor |
482,847,199 | pytorch | tensorboard add_graph error | I updated torch to `1.2.0` and used `from torch.utils.tensorboard import SummaryWriter` to save the graph; it reports this error:
```
Only tensors or tuples of tensors can be output from traced functions(getOutput at /pytorch/torch/csrc/jit/tracer.cpp:208)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x33 (0x7f67d5059273 in /opt/anaconda3/lib/python3.7/site-packages/lib/libc10.so)
...
```
When I use torch 1.1.0 + cuda9.0, it doesn't report the error. | triaged,module: tensorboard | medium | Critical |
482,858,911 | opencv | OpenCV VideoCapture::read reads an empty matrix mid video |
##### System information
- OpenCV => 4.1
- Operating System / Platform => x86_64 GNU/Linux (Debian-based)
- Compiler => g++ (8.3.0)
##### Detailed description
OpenCV VideoCapture::read fails to read certain video file formats until the end.
Three test videos were tested (videos provided below):
rottest.avi (fps: 24; total frames: 337, reads 107 frames - 108th was unsuccessful)
rottest.mp4 (fps: 24; total frames: 337, reads 337 frames - 338th was unsuccessful; as expected)
rottest.webm (fps: 30; total frames: 423, reads 107 frames - 108th was unsuccessful)
All videos have the same content, only underwent format conversion.
The expected behavior is for rottest.webm to successfully read 423 frames and fail on 424th,
while for rottest.avi is should read 337 frames successfully and fail on the 338th frame.
To be more specific, in all instances I have checked manually and the call `cap.read(curr_img);` returns false on the unsuccessful frame numbers given above.
I did not try to visualize the images using cv::imshow but can do that as well if someone considers it necessary.
##### Steps to reproduce
testvideoin.cpp
```
#include <opencv2/videoio.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
#include <string>
int main(int argc, char const *argv[])
{
if (argc<2)
return 0;
std::string path(argv[1]);
cv::VideoCapture cap(path);
cv::Mat curr_img;
unsigned frame_no=1;
bool success=false;
std::cout << "Video total frame number: " << static_cast<int>(cap.get(cv::CAP_PROP_FRAME_COUNT)) << std::endl;
success=cap.read(curr_img);
while (!curr_img.empty())
{
std::cout << "Frame no: " << frame_no <<"; SUCCESS: "<< success << std::endl;
success=cap.read(curr_img);
if (!success)
curr_img=cv::Mat();
frame_no++;
}
std::cout <<"Frame no: " << frame_no << "; Image empty: " << curr_img.empty() << std::endl;
return 0;
}
```
Compilation:
g++ -std=c++14 -Wall -Wextra -g -O0 -I/usr/local/include/opencv4 -L/usr/local/lib -o testvideoin -lopencv_core -lopencv_videoio -lopencv_imgproc -lopencv_imgcodecs testvideoin.cpp
Test set: https://drive.google.com/open?id=1FdyrYWi7qNGB3KSaELLQDZ2ugAnxp2OL
(rottest.webm, rottest.avi, rottest.mp4) | bug,category: videoio,RFC,future,confirmed | low | Major |
482,902,861 | go | cmd/gc: go:nointerface pragma is undocumented | The Go compiler has a pragma
//go:nointerface
As far as I can tell, it prevents a method from being used to satisfy an interface. However, I cannot find any (public) documentation on it.
Even https://golang.org/src/cmd/compile/internal/gc/lex.go, which has comments for the other pragmas, does not describe this one. | Documentation,help wanted,NeedsInvestigation | low | Minor |
482,929,302 | go | doc: do not embed style in release-note HTML files | When styles are embedded in release-note HTML files, keeping the same style across all versions may require embedding the same styles in the previous files as well.
[The embedded style](https://github.com/golang/go/blob/d6ffc1d8394d6f6420bb92d79d320da88720fbe0/doc/go1.6.html#L12) in go1.6 html seems to cause #33718.
I think it is better not to embed styles in release-note HTML files, and to write them in a stylesheet instead. | Documentation,NeedsFix | low | Major |
482,967,645 | vue | Scoped CSS attribute is reused or discarded when switching between components with scoped CSS | ### Version
2.6.10
### Reproduction link
- Functional components: [https://github.com/sin1ght/test](https://github.com/sin1ght/test)
- Regular components and slots: https://codesandbox.io/embed/vue-template-3pnsx
### Steps to reproduce
After `npm run serve`, click the toggle button and find that `child` has no style.
The `child` and `child2` components are reused; `child`'s `data-v-*` attribute disappears, causing the style to disappear.
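For context, a rough sketch of why a missing `data-v-*` attribute kills the style: Vue compiles each scoped rule into an attribute selector, so an element that loses the attribute no longer matches the rule (the function name below is ours, for illustration only):

```javascript
// Vue's scoped CSS rewrites each selector to require the component's
// scope attribute; an element rendered without that attribute no
// longer matches the rewritten rule.
function scopeSelector(selector, scopeAttr) {
  return `${selector}[${scopeAttr}]`;
}

// e.g. a scoped `.child { background: black; }` effectively becomes
// `.child[data-v-abc123] { background: black; }`.
```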
### What is expected?
Child should have a black background scoped style
### What is actually happening?
Child without style
<!-- generated by vue-issues. DO NOT REMOVE --> | bug,contribution welcome | low | Major |
482,980,626 | flutter | Disable crash reporting if the flutter_tool checkout is dirty | This can be the source of confusing crash reports that are not possible to debug. | c: new feature,team,tool,P2,team-tool,triaged-tool | low | Critical |
482,984,604 | flutter | Add any experimental flags enabled to crash reports | This will help with narrowing down the cause of tool crashes, reproducing and fixing them. | c: new feature,team,tool,P3,team-tool,triaged-tool | low | Critical |
482,985,760 | pytorch | Shared Dataset Functionality | ## 🚀 Feature
We want to build a unified data pipeline interface that offers building blocks for others to build on with the following objectives:
* Standardize datasets across domains.
* Offer flexible building blocks that can be combined to obtain other datasets.
* Enable datasets that do not fit in memory.
* Share code among domains.
* Facilitate parallel loading and processing of data.
* Decouple data loading and preprocessing/transformation.
* Offer static typing for datasets
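The "flexible building blocks" objective could look roughly like the following in pure Python (all class names here are ours, a sketch rather than a proposed API): each stage wraps another iterable, so stages chain without ever materializing the whole dataset in memory.

```python
class ListSource:
    """Leaf stage: yields items from an in-memory sequence."""
    def __init__(self, items):
        self.items = items

    def __iter__(self):
        return iter(self.items)


class Mapped:
    """Applies a per-item transform lazily."""
    def __init__(self, source, fn):
        self.source, self.fn = source, fn

    def __iter__(self):
        for item in self.source:
            yield self.fn(item)


class Batched:
    """Groups consecutive items into fixed-size batches."""
    def __init__(self, source, size):
        self.source, self.size = source, size

    def __iter__(self):
        batch = []
        for item in self.source:
            batch.append(item)
            if len(batch) == self.size:
                yield batch
                batch = []
        if batch:
            yield batch  # trailing partial batch


pipeline = Batched(Mapped(ListSource(range(5)), lambda x: x * x), size=2)
```

Iterating `pipeline` yields `[0, 1]`, `[4, 9]`, `[16]`; because each stage re-delegates to its source, the pipeline is also re-iterable.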
## Motivation
* The Domains currently each have their own non-standard dataset structure that may also download the data. This duplicates effort and adds complexity for the user.
* A common bottleneck when generating datasets is reading the data. We want to offer an interface that enables reading the data and running initial preprocessing while maximizing utilization of the available computing resources.
* We may want to leverage specialized libraries such as NVIDIA DALI.
## Additional Information
* [torch.utils.data](https://github.com/pytorch/pytorch/blob/master/torch/utils/data/)
* [tf.data](https://www.tensorflow.org/beta/guide/data) (e.g. uses dictionary for data point iteration)
* fast.ai's [basic_data](https://docs.fast.ai/basic_data.html) and [data_block](https://docs.fast.ai/data_block.html)
* [tnt](https://github.com/pytorch/tnt/blob/master/torchnet/dataset/dataset.py)
* [torchnet](https://github.com/torchnet/torchnet/tree/master/dataset)
* [~~torchdata~~](https://pypi.org/project/torchdata/)
Datasets:
* pytorch/text#624 pytorch/text#610 pytorch/audio#303 new datasets in domains
* pytorch/vision#1193 wants to select which metadata to return
* Internal: [overview](https://fb.quip.com/vlWwA35cmq0t) [torchtext](https://fb.quip.com/LncwAsC1cUZt) [core](https://fb.quip.com/B0PeACndlZEE) [torchvision](https://fb.quip.com/WGsUApsce6xN)
* [safe datasets](https://github.com/msamogh/nonechucks)
Dataloader:
* [torchaudio background iterator](https://github.com/pytorch/audio/blob/master/torchaudio/datasets/utils.py#L314)
* #24915 wants to re-use worker processes
* [FastDataLoader](https://github.com/pytorch/pytorch/issues/15849#issuecomment-573921048)
* [python 3.8 shared memory](https://docs.python.org/3/library/multiprocessing.shared_memory.html)
* Internal: [torchdata](https://fb.quip.com/ekJJAsYqMG7X) [gil](https://docs.google.com/document/d/1InJP79dWTIYj-xGVU65Y2r-K2HeL6t2l1xGDfKTU4Rw/edit#) [experiment](https://fb.quip.com/imVLAOdyJfAI) [DataLoader+Iterable](https://fb.workplace.com/groups/2162019300778793/permalink/3398854433474998/)
Features:
* #12672 wants to move collate_fn functionality to datasets
* #26547 wants distributed random sampling
* #28743 for sampler for iterable datasets
* pytorch/vision#1315 wants to apply an instance of random transform sequence to many images
cc @SsnL @fmassa @zhangguanheng66 @vincentqb @mrshenli | module: dataloader,triaged,better-engineering | low | Major |
483,008,387 | rust | Cargo can't find std crates using local-rust-root on macOS. | <!-- Thanks for filing a 🐛 bug report 😄! -->
**Problem**
<!-- A clear and concise description of what the bug is. -->
<!-- including what currently happens and what you expected to happen. -->
Cargo won't find std or core crates if I try to bootstrap using the following command:
```
cd /opt/mxe/tmp-rustc-x86_64-apple-darwin18.7.0//rustc-1.36.0-src && ./configure --prefix=/opt/mxe/usr --enable-vendor --default-linker=gcc --disable-codegen-tests --disable-docs --release-channel=stable --llvm-root=/opt/mxe/usr/libexec/llvm-8.0.0 --build=x86_64-apple-darwin --local-rust-root=/opt/mxe/tmp-rust-stage0-x86_64-apple-darwin --set=target.x86_64-apple-darwin.cc=gcc --set=target.x86_64-apple-darwin.cxx=g++ --set=target.x86_64-apple-darwin.linker=gcc --set=build.python=python2.7
configure: processing command line
configure:
configure: rust.default-linker := gcc
configure: build.vendor := True
configure: rust.channel := stable
configure: build.docs := False
configure: rust.codegen-tests := False
configure: install.prefix := /opt/mxe/usr
configure: target.x86_64-apple-darwin.cc := gcc
configure: target.x86_64-apple-darwin.cxx := g++
configure: target.x86_64-apple-darwin.linker := gcc
configure: build.python := python2.7
configure: build.build := x86_64-apple-darwin
configure: build.rustc := /opt/mxe/tmp-rust-stage0-x86_64-apple-darwin/b ...
configure: build.cargo := /opt/mxe/tmp-rust-stage0-x86_64-apple-darwin/b ...
configure: target.x86_64-apple-darwin.llvm-config := /opt/mxe/usr/libexec/llvm-8. ...
configure: build.configure-args := ['--prefix=/opt/mxe/usr', '--enable-vendor', ' ...
configure:
configure: writing `config.toml` in current directory
configure:
configure: run `python /opt/mxe/tmp-rustc-x86_64-apple-darwin18.7.0/rustc-1.36.0-src/x.py --help`
configure:
cd /opt/mxe/tmp-rustc-x86_64-apple-darwin18.7.0//rustc-1.36.0-src && /opt/Xcode.app/Contents/Developer/usr/bin/make -j6 -w rustc-stage1 VERBOSE=1 BOOTSTRAP_ARGS="-v -j6"
```
I don't run into this issue if I add `--enable-local-rust`, but then something else happens: configure.py overrides `build.rustc` like so:
```
cd /opt/mxe/tmp-rustc-x86_64-apple-darwin18.7.0//rustc-1.36.0-src && ./configure --prefix=/opt/mxe/usr --enable-vendor --default-linker=gcc --disable-codegen-tests --disable-docs --release-channel=stable --llvm-root=/opt/mxe/usr/libexec/llvm-8.0.0 --build=x86_64-apple-darwin --enable-local-rust --set=target.x86_64-apple-darwin.cc=gcc --set=target.x86_64-apple-darwin.cxx=g++ --set=target.x86_64-apple-darwin.linker=gcc --set=build.python=python2.7 --local-rust-root=/opt/mxe/tmp-rust-stage0-x86_64-apple-darwin
configure: processing command line
configure:
configure: rust.default-linker := gcc
configure: build.vendor := True
configure: rust.channel := stable
configure: build.docs := False
configure: rust.codegen-tests := False
configure: install.prefix := /opt/mxe/usr
configure: target.x86_64-apple-darwin.cc := gcc
configure: target.x86_64-apple-darwin.cxx := g++
configure: target.x86_64-apple-darwin.linker := gcc
configure: build.python := python2.7
configure: build.build := x86_64-apple-darwin
configure: build.rustc := /opt/mxe/tmp-rust-stage0-x86_64-apple-darwin/b ...
configure: build.cargo := /opt/mxe/tmp-rust-stage0-x86_64-apple-darwin/b ...
configure: target.x86_64-apple-darwin.llvm-config := /opt/mxe/usr/libexec/llvm-8. ...
configure: build.rustc := /opt/local/bin/rustc
configure: build.configure-args := ['--prefix=/opt/mxe/usr', '--enable-vendor', ' ...
configure:
configure: writing `config.toml` in current directory
configure:
configure: run `python /opt/mxe/tmp-rustc-x86_64-apple-darwin18.7.0/rustc-1.36.0-src/x.py --help`
configure:
```
The second bit is evidently not related to Cargo itself, I guess, but it's a bit puzzling. Anyway, why won't Cargo find the crates in the local-rust-root unless I specifically omit `--enable-local-rust`?
**Notes**
Output of `cargo version`: 0.37.0
<!-- Also, any additional context or information you feel may be relevant to the issue. -->
<!-- (e.g rust version, OS platform/distribution/version, target toolchain(s), release channel.. -->
| O-macos,T-bootstrap,requires-custom-config | low | Critical |
483,018,603 | godot | Godot leaks a lot of things if started with a nonexistent scene path on the command line | Godot 3.2 71a6d2cd17b9b48027a6a36b4e7b8adee0eb373c
I launched Godot with a command-line argument so that it starts a specific scene. The path was wrong, and Godot failed to load it, as expected. But it leaked a bunch of things afterwards:
```
Godot Engine v3.2.dev.custom_build.71a6d2cd1 - https://godotengine.org
Using GLES3 video driver
OpenGL ES 3.0 Renderer: GeForce GTX 1060 6GB/PCIe/SSE2
WASAPI: wFormatTag = 65534
WASAPI: nChannels = 2
WASAPI: nSamplesPerSec = 48000
WASAPI: nAvgBytesPerSec = 384000
WASAPI: nBlockAlign = 8
WASAPI: wBitsPerSample = 32
WASAPI: cbSize = 22
WASAPI: detected 2 channels
WASAPI: audio buffer frames: 1962 calculated latency: 44ms
Loading resource: res://default_bus_layout.tres
CORE API HASH: -8489505122774148893
EDITOR API HASH: -1606062373362734782
Loading resource: res://default_env.tres
Loading resource: res://ddd.gd
Loaded builtin certs
Loading resource: res://conversion_test/conversion_test.tscn
ERROR: ResourceFormatLoaderText::load_interactive: Condition ' err != OK ' is true. returned: Ref<ResourceInteractiveLoader>()
At: scene\resources\resource_format_text.cpp:1228
ERROR: Failed loading resource: res://conversion_test/conversion_test.tscn
At: core\io\resource_loader.cpp:283
ERROR: Failed loading scene: res://conversion_test/conversion_test.tscn
At: main\main.cpp:1752
ERROR: SelfList<class GDScriptFunction>::List::~List: Condition ' _first != 0 ' is true.
At: C:\Projects\Godot\Engine\godot_fork\core/self_list.h:111
ERROR: SelfList<class GDScript>::List::~List: Condition ' _first != 0 ' is true.
At: C:\Projects\Godot\Engine\godot_fork\core/self_list.h:111
WARNING: ObjectDB::cleanup: ObjectDB Instances still exist!
At: core\object.cpp:2098
Leaked instance: ViewportTexture:1165 - Resource name: Path:
Leaked instance: Viewport:1162 - Node name: root
Leaked instance: World2D:1163 - Resource name: Path:
Leaked instance: World:1166 - Resource name: Path:
Leaked instance: BulletPhysicsDirectSpaceState:1167
Leaked instance: MultiplayerAPI:1168
Leaked instance: Physics2DDirectSpaceStateSW:1164
Leaked instance: Environment:1171 - Resource name: Path: res://default_env.tres
Leaked instance: GDScriptNativeClass:1054
Leaked instance: ProceduralSky:1170 - Resource name: Path: res://default_env.tres::1
Leaked instance: SceneTree:1161
Leaked instance: GDScript:1172 - Resource name: Path: res://ddd.gd
Leaked instance: Node:1173 - Node name: DDD
ERROR: ResourceCache::clear: Resources Still in use at Exit!
At: core\resource.cpp:445
Orphan StringName: tree_changed
Orphan StringName: draw_line
Orphan StringName: get_frames_drawn
Orphan StringName: _server_disconnected
Orphan StringName: add_child
Orphan StringName: network_peer_connected
Orphan StringName: frame
Orphan StringName: PRIMITIVE_LINES
Orphan StringName: color
Orphan StringName: _texts
Orphan StringName: World2D
Orphan StringName: BulletPhysicsDirectSpaceState
Orphan StringName: _network_peer_connected
Orphan StringName: ProceduralSky
Orphan StringName: _lines
Orphan StringName: _network_peer_disconnected
Orphan StringName: node_added
Orphan StringName: new
Orphan StringName: flags_unshaded
Orphan StringName: GDScriptNativeClass
Orphan StringName: World
Orphan StringName: TEXT_LINGER_FRAMES
Orphan StringName: queue_free
Orphan StringName: Viewport
Orphan StringName: a
Orphan StringName: b
Orphan StringName: network_peer_disconnected
Orphan StringName: LINES_LINGER_FRAMES
Orphan StringName: vertex_color_use_as_albedo
Orphan StringName: node
Orphan StringName: _vp_gui_input1162
Orphan StringName: node_removed
Orphan StringName: _vp_input1162
Orphan StringName: key
Orphan StringName: Physics2DDirectSpaceStateSW
Orphan StringName: begin
Orphan StringName: add_vertex
Orphan StringName: set_color
Orphan StringName: append
Orphan StringName: text
Orphan StringName: pop_back
Orphan StringName: root
Orphan StringName: _vp_unhandled_key_input1162
Orphan StringName: keys
Orphan StringName: MultiplayerAPI
Orphan StringName: material_override
Orphan StringName: erase
Orphan StringName: Environment
Orphan StringName: Node
Orphan StringName: _line_material
Orphan StringName: _get_line_material
Orphan StringName: _connection_failed
Orphan StringName: SceneTree
Orphan StringName: connected_to_server
Orphan StringName: node_renamed
Orphan StringName: _process
Orphan StringName: connection_failed
Orphan StringName: _vp_unhandled_input1162
Orphan StringName: DDD
Orphan StringName: set_text
Orphan StringName: res://ddd.gd
Orphan StringName: _init
Orphan StringName: _label
Orphan StringName: _connected_to_server
Orphan StringName: ViewportTexture
Orphan StringName: GDScript
Orphan StringName: server_disconnected
Orphan StringName: end
Orphan StringName: delta
StringName: 69 unclaimed string names at exit.
``` | bug,topic:core,confirmed | low | Critical |
483,018,613 | go | context: Remove allocation discussion from WithValue documentation | This is a proposed package documentation change. I'm happy to submit a code change with this update, if it makes sense. I took the liberty of abbreviating the questions in the template.
### What version of Go are you using (`go version`)?
Documentation in the [latest source of context.go (commit d6ffc1d8394d6f6420bb92d79d320da88720fbe0)](https://github.com/golang/go/blob/d6ffc1d8394d6f6420bb92d79d320da88720fbe0/src/context/context.go)
### What does the current documentation say
WithValue: "To avoid allocating when assigning to an interface{}, context keys often have concrete type struct{}. Alternatively, exported context key variables' static type should be a pointer or interface."
Current documentation: https://tip.golang.org/pkg/context/#WithValue
Code: https://github.com/golang/go/blob/d6ffc1d8394d6f6420bb92d79d320da88720fbe0/src/context/context.go#L476-L479
### What should it say
Those two sentences should be removed. With Go >= 1.9, it no longer matters. To verify, I ran the following test with different versions of Go on a VM with a command like:
```docker run --workdir=/wtf/test -v $HOME:/wtf -ti --rm golang:1.12 go test -bench=. .```
I tested each release from 1.12 through to 1.8. With version 1.8, this mattered a lot. It no longer does. From the output below, you can see that using an `int` key is slower, but does not allocate. The other choices (`interface{}`, pointer, custom string), all appear to be equivalent.
I think it would simplify the package documentation to omit this.
This was previously changed after the discussion in https://github.com/golang/go/issues/17826 . My test is based on that one.
#### Go 1.12
```
goos: linux
goarch: amd64
BenchmarkInterfaceKey-2 1000000000 2.64 ns/op 0 B/op 0 allocs/op
BenchmarkIntKey-2 300000000 4.56 ns/op 0 B/op 0 allocs/op
BenchmarkStringKey-2 1000000000 2.64 ns/op 0 B/op 0 allocs/op
BenchmarkCustomStringKey-2 1000000000 2.64 ns/op 0 B/op 0 allocs/op
BenchmarkEmptyStructKey-2 1000000000 2.63 ns/op 0 B/op 0 allocs/op
BenchmarkPtrKey-2 1000000000 2.63 ns/op 0 B/op 0 allocs/op
```
#### Go 1.9
```
goos: linux
goarch: amd64
BenchmarkInterfaceKey-2 1000000000 2.66 ns/op 0 B/op 0 allocs/op
BenchmarkIntKey-2 300000000 5.28 ns/op 0 B/op 0 allocs/op
BenchmarkStringKey-2 1000000000 2.64 ns/op 0 B/op 0 allocs/op
BenchmarkCustomStringKey-2 1000000000 2.63 ns/op 0 B/op 0 allocs/op
BenchmarkEmptyStructKey-2 1000000000 2.66 ns/op 0 B/op 0 allocs/op
BenchmarkPtrKey-2 1000000000 2.67 ns/op 0 B/op 0 allocs/op
```
#### Go 1.8
```
BenchmarkInterfaceKey-2 300000000 4.37 ns/op 0 B/op 0 allocs/op
BenchmarkIntKey-2 50000000 30.6 ns/op 8 B/op 1 allocs/op
BenchmarkStringKey-2 30000000 47.6 ns/op 16 B/op 1 allocs/op
BenchmarkCustomStringKey-2 30000000 47.6 ns/op 16 B/op 1 allocs/op
BenchmarkEmptyStructKey-2 100000000 15.1 ns/op 0 B/op 0 allocs/op
BenchmarkPtrKey-2 300000000 4.57 ns/op 0 B/op 0 allocs/op
```
### Test code
```go
package test
import (
"context"
"testing"
)
type key interface{}
var keyInterface key = 0
type keyIntType int
var keyInt keyIntType = 0
type List struct{}
type emptyStruct struct{}
var emptyStructKey = emptyStruct{}
const stringKey = "somestring"
type customStringKeyT string
const customStringKey = customStringKeyT("customstring")
var someString = "hello"
var ptrKey *string = &someString
func BenchmarkInterfaceKey(b *testing.B) {
b.ReportAllocs()
ctx := context.Background()
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
ctx.Value(keyInterface)
}
})
}
func BenchmarkIntKey(b *testing.B) {
b.ReportAllocs()
ctx := context.Background()
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
ctx.Value(keyInt)
}
})
}
func BenchmarkStringKey(b *testing.B) {
b.ReportAllocs()
ctx := context.Background()
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
ctx.Value(stringKey)
}
})
}
func BenchmarkCustomStringKey(b *testing.B) {
b.ReportAllocs()
ctx := context.Background()
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
ctx.Value(customStringKey)
}
})
}
func BenchmarkEmptyStructKey(b *testing.B) {
b.ReportAllocs()
ctx := context.Background()
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
ctx.Value(emptyStructKey)
}
})
}
func BenchmarkPtrKey(b *testing.B) {
b.ReportAllocs()
ctx := context.Background()
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
ctx.Value(ptrKey)
}
})
}
```
| Documentation,NeedsDecision | low | Major |
483,024,050 | flutter | Allow setting the paint used on icons | For example, if you wanted to draw a stroked icon rather than filled, it is not possible without custom code today. We should allow specifying a foreground paint on icons, which would allow for specifying shaders or stroke/fill styles. | c: new feature,framework,P2,team-framework,triaged-framework | low | Minor |
483,044,684 | node | http2 compat missing writableFinished | http2 compat is missing `writableFinished` for `OutgoingMessage`.
I tried fixing this myself but the whole finished flow for http2 compat confuses me.
- `finished` seems to be looking at a lot of different stuff (e.g. what does abort have to do with finish?).
- `'finish'` is emitted on `stream.on('close', ...)`.
- Sometimes we listen to `this.on('finish', ...)` and sometimes `stream.on('finish', ...)`.
I think someone with a clearer understanding needs to look at this. | http2 | low | Minor |
483,046,865 | flutter | Support full-screen for GLFW embedding | Maybe this implementation can be useful for full-screen apps.
This also requires minimize, maximize, and close handling APIs for the app, so that we can perform such tasks within a full-screen app. | engine,a: desktop,e: glfw,P3,team-linux,triaged-linux | low | Minor |
483,085,177 | godot | Mouse input remains disabled even after re-enabling viewport input | **Godot version:**
3.1.1.stable.mono.official
**OS/device including version:**
Windows 7 Professional Service Pack 1
**Issue description:**
After calling SetDisableInput(true), if I try to re-enable viewport input with SetDisableInput(false), it won't be processed until I minimize and restore the test window.
**Steps to reproduce:**
-Place a button on the scene;
-Disable the root viewport input;
-Enable the viewport input back, you should now see that, at a first look, the button doesn't catch the "hovered" state anymore, it cannot be clicked either, to solve this simply minimize the window and then recover the window back and the button will catch the mouse.
-After 3.2-alpha1 the input is recovered only if the mouse passes beyond the application canvas (see posts below)
**Minimal reproduction project:**
[InputBug.zip](https://github.com/godotengine/godot/files/3699253/InputBug.zip) | bug,topic:input | low | Critical |
483,085,672 | pytorch | Successive Layer Normalization in nn.Transformer | In the nn.transformer.py module, the Transformer*Layer objects always have a layer norm at the very end of their forward method. However, the main Transformer object passes additional layer norms to both the TransformerEncoder and TransformerDecoder, effectively computing layer norm twice after the encoder, and twice after the decoder.
This seems wrong, I'm not sure there is a need for the extra layer norms inside Transformer.
Let me know if I'm missing something!
cc @SsnL | module: nn,triaged | low | Major |
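As a quick numerical sanity check of the redundancy claim, here is a minimal plain-Python sketch (no torch; affine parameters left at their identity defaults, gamma=1 and beta=0): applying layer norm a second time barely changes the output, since the first pass already produced zero mean and near-unit variance.

```python
import math

def layer_norm(x, eps=1e-5):
    # Normalize a single vector to zero mean and (near-)unit variance.
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return [(v - mean) / math.sqrt(var + eps) for v in x]

x = [0.5, -1.2, 3.3, 0.0]
once = layer_norm(x)
twice = layer_norm(once)
max_diff = max(abs(a - b) for a, b in zip(once, twice))
print(max_diff < 1e-4)  # True: the second normalization is essentially a no-op
```

With learned (non-identity) gamma/beta the two norms are no longer exactly redundant, but stacking them still looks unintentional.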
483,086,655 | pytorch | Consider not checking in autogenerated core/{Tensor.h,TensorMethods.h} | ATen/core/Tensor.h, ATen/core/TensorMethods.h are autogenerated and checked in. However, checking in autogenerated files is a recipe for merge conflicts; I've ran into 3 in the past two weeks. I think this is because git thinks it can merge two of these files from different branches together but they get merged together in a way that isn't consistent with how they get autogenerated.
Potential solutions:
- stop checking in the autogenerated files
- Have some sort of landing hook (I'm not sure if this is possible) where if the generated file doesn't match what is checked in, then we block the land.
cc @gchanan @ezyang | module: build,module: cpp,triaged | low | Major |
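A minimal sketch of the second option, with hypothetical file names and a simulated codegen step (a real check would run the actual generator and compare against the checked-in `ATen/core` headers, failing the land on any diff):

```shell
set -e
workdir=$(mktemp -d)

# Simulate the checked-in copy and the freshly regenerated copy.
printf 'Tensor v1\n' > "$workdir/Tensor.h.checked_in"
printf 'Tensor v2\n' > "$workdir/Tensor.h.generated"   # pretend codegen output

# Block the land if the checked-in file is stale relative to codegen.
if diff -q "$workdir/Tensor.h.checked_in" "$workdir/Tensor.h.generated" >/dev/null; then
  echo "generated files are up to date"
else
  echo "stale generated file: block the land"
fi
```

In a git-based CI this collapses to regenerating in place and running `git diff --exit-code` on the generated paths.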
483,091,799 | go | x/mobile: Issues converting Data to []byte in Swift | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.12.9 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes.
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/Cellar/go/1.12.9/libexec"
GOTMPDIR=""
GOTOOLDIR="/usr/local/Cellar/go/1.12.9/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/gg/29c0x6691455vgkxwl2hrpgc0000gn/T/go-build354245371=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>
### What did you do?
The Go source code
```go
package gocode
import "fmt"
type Bytes struct {
elements []byte
}
func NewBytes(elements []byte) *Bytes {
fmt.Println("Constructor: ", elements)
return &Bytes{elements: elements}
}
func (bytes *Bytes) GetElements() []byte {
fmt.Println("Getter: ", bytes.elements)
return bytes.elements
}
```
is compiled into an iOS framework using `$GOPATH/bin/gomobile bind -target ios -o Gocode.framework gocode`.
In a minimal iOS application there is the Swift code
```swift
import Foundation
import Gocode
let data = Data(repeating: 0, count: 8)
print("Data: \([UInt8](data))")
let bytes = GocodeNewBytes(data)!
print("Elements: \([UInt8](bytes.getElements()!))")
print("Data: \([UInt8](data))")
```
which can be compiled and run just fine. However, the printed values are not what I would expect.
### What did you expect to see?
```
Data: [0, 0, 0, 0, 0, 0, 0, 0]
Constructor: [0 0 0 0 0 0 0 0]
Getter: [0 0 0 0 0 0 0 0]
Elements: [0, 0, 0, 0, 0, 0, 0, 0]
Data: [0, 0, 0, 0, 0, 0, 0, 0]
```
### What did you see instead?
```
Data: [0, 0, 0, 0, 0, 0, 0, 0]
Constructor: [0 0 0 0 0 0 0 0]
Getter: [184 144 247 1 1 0 0 0]
Elements: [184, 144, 247, 1, 1, 0, 0, 0]
Data: [0, 0, 0, 0, 0, 0, 0, 0]
```
The data is correctly passed to the Go functions and back to Swift, but somewhere in the construction of the `Bytes` object it somehow changes. If I rewrite the constructor in Go as
```go
func NewBytes(elements []byte) *Bytes {
fmt.Println("Constructor: ", elements)
tmp := make([]byte, len(elements))
copy(tmp, elements)
return &Bytes{elements: tmp}
}
```
it works as anticipated. I'm not sure whether this is a bug or expected-but-unpleasant behavior due to Go's and Swift's handling of values and references. If the latter is true, I really have no idea how to solve this issue on the Swift side in order to use Go libraries that take `[]byte` at any interface function. | ExpertNeeded,NeedsInvestigation,mobile | low | Critical |
483,148,353 | flutter | Move desktop target platform artifacts out of host platform artifact directory. | The debug desktop target platform is currently cohabitating with the host platform artifacts of the same name. The debug target platform artifacts should be moved to `platform-debug` to match the reset of the artifact repository.
This could be done via a soft transition by upload the artifacts to both directories, and then removing the old upload once the tool is updated. | tool,a: desktop,P3,c: tech-debt,team-tool,triaged-tool | low | Critical |
483,167,908 | pytorch | [RPC] Make ProcessGroupAgent send task non-blocking | @xush6528 pointed out in #23968 that we should make send task non-blocking in `ProcessGroupAgent`. It currently [waits](https://github.com/pytorch/pytorch/blob/0bf63f483a9cf4fbb63c680cb9b71e2dd09a110a/torch/csrc/distributed/rpc/process_group_agent.cpp#L181) until both preamble and payload send finishes. We could use a separate send GC thread that captures the `preamble`, `payload` and `ProcessGroup::Work` (all as `std::shared_ptr`) in a GC work, keep them in a queue, wait for completion in order, and destruct `preamble` and `payload` tensors.
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera | todo,triaged,module: rpc | low | Major |
483,184,728 | rust | The rules for how non-Send local variables "infect" an async function, making its Future type non-Send also, are stricter than they need to be. | Here's an example which I think should compile, but which doesn't (cargo 1.39.0-nightly 3f700ec43 2019-08-19):
```rust
#![feature(async_await)]
fn require_send<T: Send>(_: T) {}
struct NonSendStruct { _ptr: *mut () }
async fn my_future() {
let nonsend = NonSendStruct { _ptr: &mut () };
async {}.await;
}
fn main() {
require_send(my_future()); // error: `*mut ()` cannot be sent between threads safely
}
```
The error is "`*mut ()` cannot be sent between threads safely", which is to say `my_future()` is `!Send`. I'm surprised by that, because the `nonsend` variable is never used after the `.await` point, and it's not `Drop`. Some other notes:
- Adding a `drop(nonsend)` call after the `let nonsend = ...` line doesn't help.
- This _does_ compile if swap the two lines in `my_future`. That is, if `nonsend` is created after the `.await` point, the future is still `Send`.
Are there any future plans to have the rustc look more closely at which locals do or do not need to be stored across an `.await`? | T-lang,C-feature-request,A-async-await,AsyncAwait-Triaged | medium | Critical |
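For what it's worth, the usual workaround today is to confine the non-`Send` value to an inner block so it is dropped before the `.await` point; this minimal sketch (using `null_mut` in place of the original `&mut ()`) compiles on current compilers, where async/await is stable:

```rust
fn require_send<T: Send>(_: T) {}

struct NonSendStruct {
    _ptr: *mut (),
}

async fn my_future() {
    {
        let _nonsend = NonSendStruct { _ptr: std::ptr::null_mut() };
        // _nonsend is dropped at the end of this block, before any await point
    }
    async {}.await;
}

fn main() {
    require_send(my_future()); // compiles: nothing non-Send lives across .await
    println!("ok");
}
```

The compiler's analysis is scope-based rather than last-use-based, which is why `drop(nonsend)` alone does not help: the variable's scope still extends across the `.await`.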
483,224,354 | flutter | Provide a way to configure the open/close animation curves for the BottomSheet widget | <!-- Thank you for using Flutter!
Please check out our documentation first:
* https://flutter.dev/
* https://api.flutter.dev/
If you can't find the answer there, please consider asking a question on
the Stack Overflow Web site:
* https://stackoverflow.com/questions/tagged/flutter?sort=frequent
Please don't file a GitHub issue for support requests. GitHub issues are
for tracking defects in the product. If you file a bug asking for help, we
will consider this a request for a documentation update.
-->
| c: new feature,framework,a: animation,f: material design,c: proposal,P3,team-design,triaged-design | low | Critical |