id (int64, 393k–2.82B) | repo (stringclasses, 68 values) | title (stringlengths 1–936) | body (stringlengths 0–256k) | labels (stringlengths 2–508) | priority (stringclasses, 3 values) | severity (stringclasses, 3 values) |
---|---|---|---|---|---|---|
497,130,323 | flutter | Check if route contains a specific coordinates or not |
## Use case
I am using Google Maps to draw a route between a source and a destination. Along with that, I need to check whether a given lat/lng coordinate falls on the path being drawn.
## Proposal
I need to know whether PolyUtil is available in Flutter, and if so, how to install and use it.
To be specific, this is what I am looking for:
`PolyUtil.isLocationOnPath(LatLng point, java.util.List<LatLng> polyline, boolean geodesic, double tolerance)`
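For what it's worth, `PolyUtil` comes from the Android Maps utility library rather than from Flutter itself, and community ports of similar geometry helpers exist on pub.dev. Purely as an illustration of what `isLocationOnPath` computes (shown in Python rather than Dart, since only the geometry matters here), a minimal sketch using a planar approximation might look like the following; the function names and the tolerance value are my own, not part of any package:
```python
import math

def point_segment_distance_m(p, a, b):
    """Approximate distance in meters from point p to segment a-b (all (lat, lng) tuples).

    Uses a local equirectangular projection centered on p; PolyUtil itself uses
    spherical geometry, so treat this only as a rough illustration.
    """
    m_per_deg_lat = 111_320.0
    m_per_deg_lng = 111_320.0 * math.cos(math.radians(p[0]))

    def to_xy(q):  # offset of q from p, in meters
        return ((q[1] - p[1]) * m_per_deg_lng, (q[0] - p[0]) * m_per_deg_lat)

    ax, ay = to_xy(a)
    bx, by = to_xy(b)
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    # Parameter of the projection of p (the origin) onto the segment, clamped to [0, 1].
    t = 0.0 if seg_len_sq == 0 else max(0.0, min(1.0, (-ax * dx - ay * dy) / seg_len_sq))
    return math.hypot(ax + t * dx, ay + t * dy)

def is_location_on_path(point, polyline, tolerance_m=10.0):
    """True if point lies within tolerance_m meters of any segment of the polyline."""
    return any(point_segment_distance_m(point, a, b) <= tolerance_m
               for a, b in zip(polyline, polyline[1:]))
```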
| c: new feature,p: maps,package,c: proposal,team-ecosystem,P3,triaged-ecosystem | low | Critical |
497,211,721 | rust | Incorrect suggestion given private function | In `constants.rs` I have a private function: `fn defaultConfig() -> bool { ... }`.
If I import all members into another file using: `use crate::constants::*`, then call `defaultConfig`, I get the following error:
```
error[E0425]: cannot find function `defaultConfig` in this scope
--> src/playgame.rs:28:18
|
28 | let config = defaultConfig();
| ^^^^^^^^^^^^^ not found in this scope
help: possible candidate is found in another module, you can import it into scope
|
4 | use crate::constants::defaultConfig;
|
```
However, the suggested fix won't help. After adding `use crate::constants::defaultConfig`, the compiler then suggests adding `pub` to `defaultConfig`. Suggesting `crate::constants::defaultConfig` isn't useful when we already have `use crate::constants::*`, so the compiler should go straight to the suggestion about adding `pub`. | A-diagnostics,A-visibility,T-compiler,C-bug,A-suggestion-diagnostics,D-invalid-suggestion | low | Critical |
497,222,976 | terminal | Support multi-code-point characters in TerminalInput::HandleKey | # Description of the new feature/enhancement
Starting with #2836 key events containing combinations like <kbd>Shift</kbd><kbd>.</kbd> (here: US keyboard layout) will be mapped to their matching, potentially non-ASCII, counterparts (here: <kbd>></kbd>).
This is achieved using the [`ToUnicodeEx` method](https://docs.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-tounicodeex).
But this leads to a problem: `ToUnicodeEx` can potentially return multiple code points.
[`TerminalInput::HandleKey`](https://github.com/microsoft/terminal/blob/8afc5b2f596335b47ecc89172ccd9820ec510579/src/terminal/input/terminalInput.cpp#L364) and its [`KeyEvent`](https://github.com/microsoft/terminal/blob/8afc5b2f596335b47ecc89172ccd9820ec510579/src/types/inc/IInputEvent.hpp#L120) parameter only accept a single code point though.
`TerminalInput` and `KeyEvent` should be modified to accommodate multi-code-point characters (e.g. by replacing `wchar_t` with `std::wstring`). | Issue-Feature,Product-Conhost,Area-Input,Product-Terminal | low | Minor |
497,233,535 | flutter | App crashes on hot restart with custom FlutterApplication |
Hot restart crashes when defining a custom `FlutterApplication` class. A custom class is needed for some plugins, such as the `android_alarm_manager`.
## Steps to Reproduce
1. Clone the `android_alarm_manager` example app
1. Run with `flutter run --verbose`
1. Hot restart multiple times (it never crashed on the first hot restart for me)
## Logs
```
[+4193 ms] Performing hot restart...
[ +54 ms] Scanned through 443 files in 50ms
[ +3 ms] Syncing files to device Wileyfox Swift...
[ +1 ms] Scanning asset files
[ +7 ms] <- reset
[ +1 ms] Compiling dart to kernel with 0 updated files
[ +8 ms] <- recompile package:android_alarm_manager_example/main.dart b0d7dbd5-3aad-445b-81c5-3109bbb64628
[ +1 ms] <- b0d7dbd5-3aad-445b-81c5-3109bbb64628
[ +9 ms] -> result c3961f4c-514f-4f6d-960b-769bb2476be9
[ +361 ms] -> c3961f4c-514f-4f6d-960b-769bb2476be9
[ +2 ms] -> c3961f4c-514f-4f6d-960b-769bb2476be9 build\app.dill 0
[ +6 ms] Updating files
[+1123 ms] DevFS: Sync finished
[ +2 ms] Syncing files to device Wileyfox Swift... (completed in 1,525ms)
[ +6 ms] Synced 17.3MB.
[ +2 ms] <- accept
[ ] Sending to VM service: getIsolate({isolateId: isolates/3272278510078987})
[ +7 ms] Sending to VM service: getIsolate({isolateId: isolates/843250359330915})
[ +2 ms] Sending to VM service: getIsolate({isolateId: isolates/2521375301490347})
[ +26 ms] Result: {type: Isolate, id: isolates/3272278510078987, name: main, number: 3272278510078987, _originNumber: 3272278510078987, startTime:
1569259838857, _heaps: {new: {type: HeapSpace, name: new, vmName: Scavenger, collections: 0, avgCollectionPeriodMillis...
[ +2 ms] Sending to VM service: resume({isolateId: isolates/3272278510078987})
[ +15 ms] Result: {type: Isolate, id: isolates/843250359330915, name: main, number: 843250359330915, _originNumber: 843250359330915, startTime:
1569259838699, _heaps: {new: {type: HeapSpace, name: new, vmName: Scavenger, collections: 0, avgCollectionPeriodMillis: 0...
[ +17 ms] Result: {type: Isolate, id: isolates/2521375301490347, name: main, number: 2521375301490347, _originNumber: 2521375301490347, startTime:
1569259838100, _heaps: {new: {type: HeapSpace, name: new, vmName: Scavenger, collections: 0, avgCollectionPeriodMillis...
[ +6 ms] Error 105 received from application: Isolate must be runnable
[ +7 ms] {request: {method: resume, params: {isolateId: isolates/3272278510078987}}, details: Isolate must be runnable before this request is made.}
[ +2 ms] Performing hot restart... (completed in 1,680ms)
[ +4 ms] Restarted application in 1,688ms.
[ +9 ms] Sending to VM service: _flutter.runInView({viewId: _flutterView/0x7f6c4f3f20, mainScript:
file:///data/user/0/io.flutter.plugins.androidalarmmanagerexample/code_cache/exampleMNGMMD/example/lib/main.dart.dill, packagesFile:
file:///data/user/0/io.flutter.plugins.androidalarmmanagerexample/code_cache/exampleMNGMMD/example/.packages, assetDirectory:
file:///data/user/0/io.flutter.plugins.androidalarmmanagerexample/code_cache/exampleMNGMMD/example/build/flutter_assets})
[ +3 ms] Sending to VM service: _flutter.runInView({viewId: _flutterView/0x7f6c504f20, mainScript:
file:///data/user/0/io.flutter.plugins.androidalarmmanagerexample/code_cache/exampleMNGMMD/example/lib/main.dart.dill, packagesFile:
file:///data/user/0/io.flutter.plugins.androidalarmmanagerexample/code_cache/exampleMNGMMD/example/.packages, assetDirectory:
file:///data/user/0/io.flutter.plugins.androidalarmmanagerexample/code_cache/exampleMNGMMD/example/build/flutter_assets})
[ +6 ms] Sending to VM service: _flutter.runInView({viewId: _flutterView/0x7f8904c120, mainScript:
file:///data/user/0/io.flutter.plugins.androidalarmmanagerexample/code_cache/exampleMNGMMD/example/lib/main.dart.dill, packagesFile:
file:///data/user/0/io.flutter.plugins.androidalarmmanagerexample/code_cache/exampleMNGMMD/example/.packages, assetDirectory:
file:///data/user/0/io.flutter.plugins.androidalarmmanagerexample/code_cache/exampleMNGMMD/example/build/flutter_assets})
[ +10 ms] Application finished.
hot restart failed to complete
#0 throwToolExit (package:flutter_tools/src/base/common.dart:28:3)
#1 TerminalHandler._commonTerminalInputHandler (package:flutter_tools/src/resident_runner.dart:1067:11)
<asynchronous suspension>
#2 TerminalHandler.processTerminalInput (package:flutter_tools/src/resident_runner.dart:1117:13)
<asynchronous suspension>
#3 _rootRunUnary (dart:async/zone.dart:1132:38)
#4 _CustomZone.runUnary (dart:async/zone.dart:1029:19)
#5 _CustomZone.runUnaryGuarded (dart:async/zone.dart:931:7)
#6 _BufferingStreamSubscription._sendData (dart:async/stream_impl.dart:336:11)
#7 _BufferingStreamSubscription._add (dart:async/stream_impl.dart:263:7)
#8 _SyncBroadcastStreamController._sendData (dart:async/broadcast_stream_controller.dart:375:20)
#9 _BroadcastStreamController.add (dart:async/broadcast_stream_controller.dart:250:5)
#10 _AsBroadcastStreamController.add (dart:async/broadcast_stream_controller.dart:474:11)
#11 _rootRunUnary (dart:async/zone.dart:1132:38)
#12 _CustomZone.runUnary (dart:async/zone.dart:1029:19)
#13 _CustomZone.runUnaryGuarded (dart:async/zone.dart:931:7)
#14 _BufferingStreamSubscription._sendData (dart:async/stream_impl.dart:336:11)
#15 _BufferingStreamSubscription._add (dart:async/stream_impl.dart:263:7)
#16 _SinkTransformerStreamSubscription._add (dart:async/stream_transformers.dart:68:11)
#17 _EventSinkWrapper.add (dart:async/stream_transformers.dart:15:11)
#18 _StringAdapterSink.add (dart:convert/string_conversion.dart:236:11)
#19 _StringAdapterSink.addSlice (dart:convert/string_conversion.dart:241:7)
#20 _Utf8ConversionSink.addSlice (dart:convert/string_conversion.dart:312:20)
#21 _ErrorHandlingAsciiDecoderSink.addSlice (dart:convert/ascii.dart:252:17)
#22 _ErrorHandlingAsciiDecoderSink.add (dart:convert/ascii.dart:238:5)
#23 _ConverterStreamEventSink.add (dart:convert/chunked_conversion.dart:72:18)
#24 _SinkTransformerStreamSubscription._handleData (dart:async/stream_transformers.dart:120:24)
#25 _rootRunUnary (dart:async/zone.dart:1132:38)
#26 _CustomZone.runUnary (dart:async/zone.dart:1029:19)
#27 _CustomZone.runUnaryGuarded (dart:async/zone.dart:931:7)
#28 _BufferingStreamSubscription._sendData (dart:async/stream_impl.dart:336:11)
#29 _BufferingStreamSubscription._add (dart:async/stream_impl.dart:263:7)
#30 _SyncStreamControllerDispatch._sendData (dart:async/stream_controller.dart:764:19)
#31 _StreamController._add (dart:async/stream_controller.dart:640:7)
#32 _StreamController.add (dart:async/stream_controller.dart:586:5)
#33 _Socket._onData (dart:io-patch/socket_patch.dart:1791:41)
#34 _rootRunUnary (dart:async/zone.dart:1136:13)
#35 _CustomZone.runUnary (dart:async/zone.dart:1029:19)
#36 _CustomZone.runUnaryGuarded (dart:async/zone.dart:931:7)
#37 _BufferingStreamSubscription._sendData (dart:async/stream_impl.dart:336:11)
#38 _BufferingStreamSubscription._add (dart:async/stream_impl.dart:263:7)
#39 _SyncStreamControllerDispatch._sendData (dart:async/stream_controller.dart:764:19)
#40 _StreamController._add (dart:async/stream_controller.dart:640:7)
#41 _StreamController.add (dart:async/stream_controller.dart:586:5)
#42 new _RawSocket.<anonymous closure> (dart:io-patch/socket_patch.dart:1339:33)
#43 _NativeSocket.issueReadEvent.issue (dart:io-patch/socket_patch.dart:860:14)
#44 _microtaskLoop (dart:async/schedule_microtask.dart:41:21)
#45 _startMicrotaskLoop (dart:async/schedule_microtask.dart:50:5)
#46 _runPendingImmediateCallback (dart:isolate-patch/isolate_patch.dart:116:13)
#47 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:173:5)
[ +31 ms] DevFS: Deleting filesystem on the device (file:///data/user/0/io.flutter.plugins.androidalarmmanagerexample/code_cache/exampleMNGMMD/example/)
[ +1 ms] Sending to VM service: _deleteDevFS({fsName: example})
```
## Flutter analyze
```
Analyzing example...
No issues found! (ran in 13.7s)
```
## VSCode Output
Crash on second hot reload
```
Restarted application in 2,628ms.
I/flutter (29380): [2019-09-23 10:28:42.220606] main run
I/flutter (29380): [2019-09-23 10:28:42.349451] main run
Error -32601 received from application: Method not found
method not available: ext.flutter.platformOverride
Error -32601 received from application: Method not found
Error -32601 received from application: Method not found
method not available: ext.flutter.inspector.setPubRootDirectories
Error -32601 received from application: Method not found
method not available: ext.flutter.inspector.isWidgetCreationTracked
Error -32601 received from application: Method not found
method not available: ext.flutter.platformOverride
Error -32601 received from application: Method not found
Error -32601 received from application: Method not found
method not available: ext.flutter.inspector.setPubRootDirectories
Error -32601 received from application: Method not found
method not available: ext.flutter.inspector.isWidgetCreationTracked
Application finished.
Exited (sigterm)
```
## Flutter Doctor
```
[✓] Flutter (Channel stable, v1.9.1+hotfix.2, on Microsoft Windows [Version 10.0.18362.356], locale en-US)
• Flutter version 1.9.1+hotfix.2 at C:\app\dev\sdk\flutter
• Framework revision 2d2a1ffec9 (2 weeks ago), 2019-09-06 18:39:49 -0700
• Engine revision b863200c37
• Dart version 2.5.0
[✓] Android toolchain - develop for Android devices (Android SDK version 29.0.2)
• Android SDK at C:\app\dev\sdk\android-sdk
• Android NDK location not configured (optional; useful for native profiling support)
• Platform android-29, build-tools 29.0.2
• Java binary at: C:\app\editor\android-studio\jre\bin\java
• Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b03)
• All Android licenses accepted.
[✓] Android Studio (version 3.5)
• Android Studio at C:\app\editor\android-studio
• Flutter plugin version 39.0.3
• Dart plugin version 191.8423
• Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b03)
[✓] VS Code (version 1.38.1)
• VS Code at C:\Users\<user>\AppData\Local\Programs\Microsoft VS Code
• Flutter extension version 3.4.1
[✓] Connected device (1 available)
• Wileyfox Swift • b6d5f2fa • android-arm64 • Android 7.1.2 (API 25)
• No issues found!
```
| c: crash,tool,t: hot reload,P2,team-tool,triaged-tool | low | Critical |
497,238,115 | rust | Tracking issue for reserved impl `impl<T> From<!> for T` | ## Background
This is a **tracking issue** for a temporary limitation related to the `From` trait and the `!` type. Specifically, we wish to eventually add an impl like the following:
```rust
impl<T> From<!> for T { }
```
We cannot do so now because it would overlap with existing impls: specifically, with the impl `impl<T> From<T> for T`, as well as with impls of the form `impl<T> From<T> for Foo<T>`, which exist for a number of smart pointer types. There are some plans for how we might add such an impl in the future, described below.
## What is allowed today
Currently you are permitted to add impls of `From<!>` for your own types:
```rust
struct LocalType;
impl From<!> for LocalType { }
```
This is true even though such impls will overlap our planned addition: after all, we already have a number of overlapping cases to deal with.
However, you are **not** permitted to assume that `From<!>` is **not** implemented. If that double negative threw you for a loop, consider this example (which will not compile):
```rust
struct LocalType;
trait SomeTrait { }
impl<T: From<!>> SomeTrait for T { }
impl SomeTrait for LocalType { }
```
Here, the two impls do not presently overlap. This is because `LocalType: From<!>` is not implemented. However, if we were to add the `impl<T> From<!> for T` impl that we would like to add, then these two impls would start to overlap, and your code would stop compiling. Thus we say that this program assumes that `From<!>` is **not** implemented -- because it cannot pass the coherence check unless that is the case. This is precisely the sort of case that is not currently allowed. For more information, see [RFC 1023](https://rust-lang.github.io/rfcs/1023-rebalancing-coherence.html), which introduced the rules limiting negative reasoning.
## How might we add the reserved impl in the future?
The precise mechanism to permit us to add the `From<!> for T` impl is not yet clear. The current "plan of record" is to extend the ["marker trait mechanism"](https://github.com/rust-lang/rust/issues/29864) to accommodate the idea of impls whose entire body consists of unreachable methods and to permit overlap.
cc #64631 -- the internal rustc mechanism used to achieve this limitation
| A-trait-system,T-lang,C-tracking-issue,S-tracking-perma-unstable | low | Minor |
497,247,689 | node | Impossible to catch error during tls.connect(duplex) | * **Version**: 10.16.3
* **Platform**: Ubuntu 16.04.1
* **Subsystem**: tls
There seems to be no way to catch a synchronous error from the underlying duplex stream during a `tls.connect` operation.
Code sample:
```javascript
const stream = require('stream');
const tls = require('tls');
const async = false;
process.on('uncaughtException', e=>console.log('uncaught: '+e));
const socket = new stream.Duplex({
read(size){},
write(data, encoding, cb){
let error = new Error('intended error');
if (async)
setTimeout(()=>cb(error), 1000);
else
cb(error);
},
});
socket.on('error', e=>console.log('socket error: '+e));
const tls_socket = tls.connect({socket});
tls_socket.on('error', e=>console.log('tls_socket error: '+e));
```
Expected output:
```
socket error: Error: intended error
tls_socket error: Error: intended error
```
Actual output:
```
socket error: Error: intended error
uncaught: Error: intended error
```
Changing `async` to true solves the issue.
Can we allow passing an `onError` handler to the `tls.connect` method, so that it is set before calling the `_start()` method of TLSSocket? | tls | low | Critical |
497,282,202 | opencv | SIGABRT by cv::Exception on ocl.cpp:4908 -> CV_Assert(u->origdata == data); | I'm having a lot of intermittent but frequent aborts raised by this assertion:
https://github.com/opencv/opencv/blob/47007224445af3dce8dadb11174df14d81fd5a34/modules/core/src/ocl.cpp#L4908
inside a loop doing face detection and tracking. It usually happens on the tracking part, FaceEngine.hpp:135
I'm not sure if it's a real bug or a misuse on my side, but some sort of race condition is involved on the OpenCV/OCL library/driver side, since it is not exactly 100% reproducible (it's somewhere around 50~70%) and depends on run modes and input timing.
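As a diagnostic idea of my own (not something from the original report), it may help to rule the OpenCL path in or out by disabling it and re-running the loop; the assert lives in `ocl.cpp`, so if the abort disappears the T-API/OpenCL interaction is implicated. In Python that is a one-liner (the C++ equivalent is `cv::ocl::setUseOpenCL(false)`):
```python
import cv2

cv2.ocl.setUseOpenCL(False)   # force the non-OpenCL code path
print(cv2.ocl.useOpenCL())    # should now report False
```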
You can find attached the outputs of clinfo, getBuildInformation() and a sample project.
[opencv-repro.tar.gz](https://github.com/opencv/opencv/files/3643700/opencv-repro.tar.gz)
[clinfo.txt](https://github.com/opencv/opencv/files/3643702/clinfo.txt)
[getBuildInformation.txt](https://github.com/opencv/opencv/files/3643703/getBuildInformation.txt)
To reproduce the error, run the sample using the "StaticFrameProvider" (as provided). Please try several times, alternating between "run" and "debug" modes if using an IDE (I'm using CLion), although it also fails if built and run from the command line.
You can also change main.cpp line 36 with the "OpenCvFrameProvider" that uses opencv::VideoCapture.
When using VideoCapture, the behavior becomes even weirder in an IDE: launching the program in debug mode will most likely (but not always) result in an abort the first time a cv::Tracker is updated. On the other hand, if launched in "run" mode, it will only result in an abort if the very first frame contains a face, but not if a face appears later in front of the camera.
I tried making copies and clones of the "colorFrame" variable (lenna image or frame from cv::VideoCapture over a webcam) before passing it to the Tracker but it doesn't help much. | category: ocl | low | Critical |
497,283,914 | go | x/tools/benchmark: support parsing custom metrics | ### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.13 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes.
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GO111MODULE="on"
GOARCH="amd64"
GOBIN="/Users/jsternberg/go/pkg/bin/github.com/influxdata/flux"
GOCACHE="/Users/jsternberg/Library/Caches/go-build"
GOENV="/Users/jsternberg/Library/Application Support/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GONOPROXY=""
GONOSUMDB=""
GOOS="darwin"
GOPATH="/Users/jsternberg/go"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/local/Cellar/go/1.13/libexec"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/local/Cellar/go/1.13/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
AR="ar"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD="/Users/jsternberg/go/src/github.com/influxdata/flux/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/w5/25fg3zv56y7cgd20q4fx_24m0000gn/T/go-build317574403=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>
### What did you do?
I used [ReportMetric](https://godoc.org/testing#B.ReportMetric) when running a benchmark and attempted to use `benchcmp` to compare the benchmarks.
### What did you expect to see?
I expected `benchcmp` to add an additional section showing the change in numbers for the custom benchmarks.
I then took a look at `golang.org/x/tools/benchmark/parse` to see if that functionality had been added or if the library had been updated so I could write the tool myself, but I see that the code there has not been updated to either read or expose the new custom metrics.
It would be very helpful to update this tool to support these custom metrics. We are beginning to use them to track other metrics and we would like to see how these other metrics change between two changesets.
### What did you see instead?
Nothing. | NeedsInvestigation,Tools | low | Critical |
497,299,063 | flutter | MergeSemantics crashes when wrapping a recognizer-containing TextSpan | When wrapping a `RichText` (and its `TextSpan` children) with `MergeSemantics`, a crash will be triggered if one of the `TextSpan`s has a `recognizer` set.
## Steps to Reproduce
Run this:
```dart
import 'package:flutter/gestures.dart';
import 'package:flutter/material.dart';
import 'package:url_launcher/url_launcher.dart' as launcher;
void main() => runApp(MyApp());
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) => MaterialApp(
home: MyHomePage(title: 'Demo'));
}
class MyHomePage extends StatefulWidget {
MyHomePage({Key key, this.title}) : super(key: key);
final String title;
@override
_MyHomePageState createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(title: Text(widget.title)),
body: Center(
child: MergeSemantics(
child: RichText(
text: TextSpan(
style: TextStyle(color: Colors.black87),
children: <TextSpan>[
TextSpan(text: 'Visit'),
TextSpan(
text: 'Flutter',
style: TextStyle(color: Colors.blue),
recognizer: TapGestureRecognizer()
..onTap = () => launcher.launch('https://flutter.dev/'),
),
],
),
),
)
),
);
}
}
```
## Logs
```
flutter: ══╡ EXCEPTION CAUGHT BY SCHEDULER LIBRARY ╞══════════════════════════════════════════════════
flutter: The following assertion was thrown during a scheduler callback:
flutter: 'package:flutter/src/semantics/semantics.dart': Failed assertion: line 2607 pos 16: 'node.parent ==
flutter: null || !node.parent.isPartOfNodeMerging || node.isMergedIntoParent': is not true.
flutter:
flutter: Either the assertion indicates an error in the framework itself, or we should provide substantially
flutter: more information in this error message to help you determine and fix the underlying cause.
flutter: In either case, please report this assertion by filing a bug on GitHub:
flutter: https://github.com/flutter/flutter/issues/new?template=BUG.md
flutter:
flutter: When the exception was thrown, this was the stack:
flutter: #2 SemanticsOwner.sendSemanticsUpdate
package:flutter/β¦/semantics/semantics.dart:2607
flutter: #3 PipelineOwner.flushSemantics
package:flutter/β¦/rendering/object.dart:1039
flutter: #4 RendererBinding.drawFrame
package:flutter/β¦/rendering/binding.dart:344
flutter: #5 WidgetsBinding.drawFrame
package:flutter/β¦/widgets/binding.dart:776
flutter: #6 RendererBinding._handlePersistentFrameCallback
package:flutter/β¦/rendering/binding.dart:279
flutter: #7 SchedulerBinding._invokeFrameCallback
package:flutter/β¦/scheduler/binding.dart:1040
flutter: #8 SchedulerBinding.handleDrawFrame
package:flutter/β¦/scheduler/binding.dart:982
flutter: #9 SchedulerBinding.scheduleWarmUpFrame.<anonymous closure>
package:flutter/β¦/scheduler/binding.dart:791
flutter: #18 _Timer._runTimers (dart:isolate-patch/timer_impl.dart:382:19)
flutter: #19 _Timer._handleMessage (dart:isolate-patch/timer_impl.dart:416:5)
flutter: #20 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:172:12)
flutter: (elided 10 frames from class _AssertionError, package dart:async, and package dart:async-patch)
flutter: ════════════════════════════════════════════════════════════════════════════════════════════
[VERBOSE-2:ui_dart_state.cc(148)] Unhandled Exception: Bad state: Future already completed
#0 _AsyncCompleter.complete (dart:async/future_impl.dart:39:31)
#1 WidgetsBinding.drawFrame.<anonymous closure>
package:flutter/β¦/widgets/binding.dart:769
#2 _rootRunUnary (dart:async/zone.dart:1136:13)
#3 _CustomZone.runUnary (dart:async/zone.dart:1029:19)
#4 _CustomZone.runUnaryGuarded (dart:async/zone.dart:931:7)
#5 _invoke1 (dart:ui/hooks.dart:265:10)
#6 _reportTimings (dart:ui/hooks.dart:203:3)
[VERBOSE-2:ui_dart_state.cc(148)] Unhandled Exception: Bad state: Future already completed
#0 _AsyncCompleter.complete (dart:async/future_impl.dart:39:31)
#1 WidgetsBinding.drawFrame.<anonymous closure>
package:flutter/β¦/widgets/binding.dart:769
#2 _rootRunUnary (dart:async/zone.dart:1136:13)
#3 _CustomZone.runUnary (dart:async/zone.dart:1029:19)
#4 _CustomZone.runUnaryGuarded (dart:async/zone.dart:931:7)
#5 _invoke1 (dart:ui/hooks.dart:265:10)
#6 _reportTimings (dart:ui/hooks.dart:203:3)
════════ Exception caught by scheduler library ═════════════════════════════════
The following assertion was thrown during a scheduler callback:
'package:flutter/src/semantics/semantics.dart': Failed assertion: line 2607 pos 16: 'node.parent == null || !node.parent.isPartOfNodeMerging || node.isMergedIntoParent': is not true.
Either the assertion indicates an error in the framework itself, or we should provide substantially more information in this error message to help you determine and fix the underlying cause.
In either case, please report this assertion by filing a bug on GitHub:
https://github.com/flutter/flutter/issues/new?template=BUG.md
When the exception was thrown, this was the stack
#2 SemanticsOwner.sendSemanticsUpdate
package:flutter/β¦/semantics/semantics.dart:2607
#3 PipelineOwner.flushSemantics
package:flutter/β¦/rendering/object.dart:1039
#4 RendererBinding.drawFrame
package:flutter/β¦/rendering/binding.dart:344
#5 WidgetsBinding.drawFrame
package:flutter/β¦/widgets/binding.dart:776
#6 RendererBinding._handlePersistentFrameCallback
package:flutter/β¦/rendering/binding.dart:279
...
════════════════════════════════════════════════════════════════════════════════
════════ Exception caught by scheduler library ═════════════════════════════════
The following assertion was thrown during a scheduler callback:
'package:flutter/src/semantics/semantics.dart': Failed assertion: line 2607 pos 16: 'node.parent == null || !node.parent.isPartOfNodeMerging || node.isMergedIntoParent': is not true.
Either the assertion indicates an error in the framework itself, or we should provide substantially more information in this error message to help you determine and fix the underlying cause.
In either case, please report this assertion by filing a bug on GitHub:
https://github.com/flutter/flutter/issues/new?template=BUG.md
When the exception was thrown, this was the stack
#2 SemanticsOwner.sendSemanticsUpdate
package:flutter/β¦/semantics/semantics.dart:2607
#3 PipelineOwner.flushSemantics
package:flutter/β¦/rendering/object.dart:1039
#4 RendererBinding.drawFrame
package:flutter/β¦/rendering/binding.dart:344
#5 WidgetsBinding.drawFrame
package:flutter/β¦/widgets/binding.dart:776
#6 RendererBinding._handlePersistentFrameCallback
package:flutter/β¦/rendering/binding.dart:279
...
════════════════════════════════════════════════════════════════════════════════
```
```
$ flutter analyze
Analyzing playground...
No issues found! (ran in 2.5s)
```
```
$ flutter doctor -v
Downloading android-arm-profile/darwin-x64 tools... 0.6s
Downloading android-arm-release/darwin-x64 tools... 0.3s
Downloading android-arm64-profile/darwin-x64 tools... 0.3s
Downloading android-arm64-release/darwin-x64 tools... 0.3s
[✓] Flutter (Channel master, v1.10.6-pre.33, on Mac OS X 10.14.6 18G95, locale en-US)
• Flutter version 1.10.6-pre.33 at /Users/katelovett/github/flutter
• Framework revision 961f1b746d (3 hours ago), 2019-09-23 10:09:38 -0700
• Engine revision b875c7a5ff
• Dart version 2.6.0 (build 2.6.0-dev.0.0 7c1821c4aa)
[✓] Android toolchain - develop for Android devices (Android SDK version 28.0.3)
• Android SDK at /Users/katelovett/Library/Android/sdk
• Android NDK location not configured (optional; useful for native profiling support)
• Platform android-28, build-tools 28.0.3
• Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1343-b01)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 10.1)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Xcode 10.1, Build version 10B61
• CocoaPods version 1.7.2
[✓] Android Studio (version 3.4)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin version 36.1.1
• Dart plugin version 183.6270
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1343-b01)
[✓] IntelliJ IDEA Ultimate Edition (version 2019.1.2)
• IntelliJ at /Applications/IntelliJ IDEA.app
• Flutter plugin version 39.0.2
• Dart plugin version 191.8423
[✓] VS Code (version 1.38.1)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.4.1
[✓] Connected device (1 available)
• SixthSense • 1c017dd332591e8a3553c3a7ec3f7a6daf2d1d27 • ios • iOS 12.4.1
• No issues found!
```
| c: crash,platform-ios,framework,a: accessibility,a: typography,platform-linux,a: desktop,has reproducible steps,P2,found in release: 3.0,found in release: 3.1,team-framework,triaged-framework | low | Critical |
497,309,621 | pytorch | torch::nn::Sequential not compatible with torch::nn::RNN | ## 🐛 Bug
(Note from @VitalyFedyunin: the C++ implementation fails to report input/hidden size errors properly and returns cryptic asserts instead. The actual error is hidden in this line https://github.com/pytorch/pytorch/blob/420b37f3c67950ed93cd8aa7a12e673fcfc5567b/aten/src/ATen/native/cudnn/RNN.cpp#L1231 and in similar code for MIOpen.)
-------------
Hello,
I am very new to PyTorch and trying to create a simple example using the C++ frontend. After a long time of trying, I found out that I cannot get `torch::nn::Sequential` to work with `torch::nn::RNN`, nor could I find any example code online where these two are combined.
## To Reproduce
To reproduce the issue, use the following simple example
```c++
#include <torch/torch.h>
int main(int /*argc*/, char* /*argv*/[]) {
// Use GPU when present, CPU otherwise.
torch::Device device(torch::kCPU);
if (torch::cuda::is_available()) {
device = torch::Device(torch::kCUDA);
std::cout << "CUDA is available! Training on GPU." << std::endl;
}
torch::nn::Sequential time_serie_detector(
torch::nn::RNN(torch::nn::RNNOptions(1, 10).dropout(0.2).layers(2).tanh()));
time_serie_detector->to(device);
std::cout << time_serie_detector << std::endl;
auto x = torch::ones(1).toBackend(c10::Backend::CUDA);
auto a = torch::ones(10).toBackend(c10::Backend::CUDA);
std::cout << "x = " << x << std::endl;
std::cout << "a = " << a << std::endl;
time_serie_detector->forward(x, a);
time_serie_detector->zero_grad();
return 0;
}
```
The above code compiles but gives run-time error:
```
terminate called after throwing an instance of 'c10::IndexError'
what(): Dimension out of range (expected to be in range of [-1, 0], but got 2) (maybe_wrap_dim at ../../c10/core/WrapDimMinimal.h:20)
```
If I declare `time_serie_detector` as `torch::nn::RNN` without using `torch::nn::Sequential`, no error occurs.
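An aside that is not part of the original report: the "Dimension out of range" message suggests the input and hidden tensors are the wrong rank, independent of the `Sequential` question. The Python `nn.RNN` API (which mirrors the C++ options used above) expects 3-D tensors; a minimal working sketch for the same configuration:
```python
import torch
import torch.nn as nn

# Same options as the C++ example: input_size=1, hidden_size=10, 2 layers, tanh, dropout 0.2.
rnn = nn.RNN(input_size=1, hidden_size=10, num_layers=2, nonlinearity='tanh', dropout=0.2)

x = torch.ones(5, 3, 1)       # (seq_len, batch, input_size)
h0 = torch.zeros(2, 3, 10)    # (num_layers, batch, hidden_size)

output, hn = rnn(x, h0)
print(output.shape, hn.shape)  # torch.Size([5, 3, 10]) torch.Size([2, 3, 10])
```
The C++ module likewise appears to expect a 3-D input and a 3-D initial hidden state, so `torch::ones(1)` and `torch::ones(10)` would need to be reshaped accordingly.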
## Expected behavior
I expected `torch::nn::Sequential` to be compatible with `torch::nn::RNN`.
## Environment
PyTorch version: libtorch1.2.0
Is debug build: N/A
CUDA used to build PyTorch: N/A
OS: Ubuntu 18.04.3 LTS
GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
CMake version: version 3.13.3
Python version: 2.7
Is CUDA available: Yes
CUDA runtime version: 10.2
GPU models and configuration: GPU 0: GeForce GTX 1060 3GB
Nvidia driver version: 430.26
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.3
cc @yf225 | module: cpp,module: nn,triaged | low | Critical |
497,338,435 | rust | std::process::Command doesn't follow Unix signal safety in forked child | **Update (2022-12-12)**: The issue description is outdated, but [some problems remain](https://github.com/rust-lang/rust/issues/64718#issuecomment-1346099608).
These lines from spawn can cause a deadlock. It's wrong to assume that you can lock in the parent and then unlock in the child. See man 2 fork and man 7 signal-safety on Linux.
```rust
// Whatever happens after the fork is almost for sure going to touch or
// look at the environment in one way or another (PATH in `execvp` or
// accessing the `environ` pointer ourselves). Make sure no other thread
// is accessing the environment when we do the fork itself.
//
// Note that as soon as we're done with the fork there's no need to hold
// a lock any more because the parent won't do anything and the child is
// in its own process.
let result = unsafe {
    let _env_lock = sys::os::env_lock();
    cvt(libc::fork())?
};
```
Qumulo has a custom thread library which exposes this issue.
| P-medium,I-unsound,C-bug,E-needs-mcve,T-libs,O-unix,A-process | low | Major |
497,370,280 | go | cmd/go: provide package path for main packages to cmd/compile | Can cmd/go provide cmd/compile with the full package path to the source package, even when compiling main packages?
When benchmarking cmd/compile changes, it's useful to key stuff by `myimportpath` (i.e., the `-p` command-line flag) and just spit everything across an entire "go build -a std cmd" build into a single file, and then let `benchcmp` or `benchstat` handle it.
But this currently doesn't work for main packages, because cmd/go sets `-p main` for these:
https://github.com/golang/go/blob/7eef0ca17ae09ae40027dcc78138179e0ed19b10/src/cmd/go/internal/work/gc.go#L50-L56
So you end up with a bunch of "BenchmarkFoo:main" lines, which muddle the benchcmp/benchstat output.
I figure two main options:
1. Change cmd/go to just set `-p` to the package path regardless, and cmd/compile can rewrite it to `"main"` where/if necessary. (Looking briefly at `myimportpath`, some uses would be unaffected; but DWARF and the new ABI stuff might be impacted.)
2. Add another command-line flag for cmd/go to provide the package path, which cmd/compile can use for tagging benchmarking data with instead.
/cc @rsc @aclements | NeedsDecision,FeatureRequest | low | Minor |
497,373,848 | terminal | Add Registry Key Check/Set in Feature Tests | We need to add a way to check and set the registry keys in the feature tests.
One important registry key to check/set is the wrap key. | Product-Conhost,Issue-Task,Area-CodeHealth | low | Minor |
497,374,172 | go | x/perf/cmd/benchstat: GitHub markdown table output | GitHub supports tables in markdown format: https://help.github.com/en/articles/organizing-information-with-tables
It would be handy if benchstat could easily output in this format for pasting benchstat output into issues. | NeedsInvestigation,FeatureRequest | low | Major |
497,384,175 | pytorch | Support FPGA Xilinx | Hello, World!
We are a group intending to accelerate some PyTorch operations on Xilinx UltraScale FPGAs. However, we are a little lost as to where to begin porting the functions.
From what we could see, we think we can start from the CUDA implementation and modify it to use the OpenCL API and add an FPGA device type and components _(Streams, Storage, Tensors, ...)_.
We would like some guidance on the right way to proceed. Would you be so kind as to help us?
Thank you. | triaged,module: backend | medium | Major |
497,389,678 | go | go/packages: return information on missing imports | `gopls` currently has some handling for packages that were imported but not found by `go/packages`. This is something that `go/packages` could return more effectively, either through its errors or through an additional field. | NeedsFix | low | Critical |
497,428,767 | flutter | [video_player] black (first?) frame on iOS |
The `video_player` plugin has difficulty ~looping~ loading video without artifacts on iOS but works fine on Android. On iOS the first frame of the video is black, and it shows black again on loop. It's as yet unclear whether it's overwriting the first frame, or whether some black frames are inserted.
Devices used to validate: Pixel 2 (Android 9/10), iPhone 7 (iOS 12.x, 13), iPad Pro 2018 (iOS 12.x).
## Steps to Reproduce
MCVE: Fix the pub.dev [example code](https://pub.dev/packages/video_player#example) to use a working video URL (bigbuckbunny is down). I used https://res.cloudinary.com/demo/video/upload/f_mp4/dog.mp4. Load the app. The first frame will load as a solid black image on iOS, but will show the first frame of the video on Android.
To see the black frame flash on loop on iOS, simply modify the [example code](https://pub.dev/packages/video_player#example) to include a `setLooping(true)` in `initState()`. Note that this works fine on Android.
```dart
import 'package:video_player/video_player.dart';
import 'package:flutter/material.dart';
void main() => runApp(VideoApp());
class VideoApp extends StatefulWidget {
@override
_VideoAppState createState() => _VideoAppState();
}
class _VideoAppState extends State<VideoApp> {
VideoPlayerController _controller;
@override
void initState() {
super.initState();
_controller = VideoPlayerController.network(
'https://res.cloudinary.com/demo/video/upload/f_mp4/dog.mp4')
..initialize().then((_) {
// Ensure the first frame is shown after the video is initialized, even before the play button has been pressed.
setState(() {});
});
_controller.setLooping(true);
_controller.setVolume(0.0);
}
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Video Demo',
home: Scaffold(
body: Center(
child: _controller.value.initialized
? AspectRatio(
aspectRatio: _controller.value.aspectRatio,
child: VideoPlayer(_controller),
)
: Container(),
),
floatingActionButton: FloatingActionButton(
onPressed: () {
setState(() {
_controller.value.isPlaying
? _controller.pause()
: _controller.play();
});
},
child: Icon(
_controller.value.isPlaying ? Icons.pause : Icons.play_arrow,
),
),
),
);
}
@override
void dispose() {
super.dispose();
_controller.dispose();
}
}
```
## Logs
```
No exceptions present.
```
```
> ~/D/video_player_test flutter analyze 20:04:33
Analyzing video_player_test...
No issues found! (ran in 2.7s)
```
```
> ~/D/video_player_test flutter doctor -v 20:04:41
[✓] Flutter (Channel stable, v1.9.1+hotfix.2, on Mac OS X 10.14.6 18G95, locale en-US)
• Flutter version 1.9.1+hotfix.2 at /Users/deg/Development/flutter
• Framework revision 2d2a1ffec9 (2 weeks ago), 2019-09-06 18:39:49 -0700
• Engine revision b863200c37
• Dart version 2.5.0
[✓] Android toolchain - develop for Android devices (Android SDK version 29.0.2)
• Android SDK at /Users/deg/Library/Android/sdk
• Android NDK location not configured (optional; useful for native profiling support)
• Platform android-29, build-tools 29.0.2
• Java binary at: /Users/deg/Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b49-5587405)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 11.0)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Xcode 11.0, Build version 11A420a
• CocoaPods version 1.7.5
[✓] Android Studio (version 3.5)
• Android Studio at /Users/deg/Applications/Android Studio.app/Contents
• Flutter plugin version 39.0.3
• Dart plugin version 191.8423
• Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b49-5587405)
[✓] VS Code
• VS Code at /Users/deg/Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.4.1
[✓] Connected device (1 available)
• bigbubble • 00008027-001A51A036EB002E • ios • iOS 12.4.1
• No issues found!
```
Thanks for considering! | platform-ios,p: video_player,package,has reproducible steps,P2,found in release: 2.2,team-ios,triaged-ios | low | Critical |
497,462,460 | vscode | [html] Jump to after next opening HTML tag | It would be great if there were a keyboard shortcut which moved the cursor to within the next HTML element, right after the opening tag. This would be really useful, because Emmet is great for building out the structure, but then I have to type in the data and it's a lot of mouse clicking / arrow keys.
For example, say I am starting here:

and I want to move the cursor here:

I'd like to do this with one keyboard shortcut instead of 5 arrow presses (I know Ctrl can be used with arrow keys to navigate between words, but it is still an issue). | feature-request,html | low | Minor |
497,465,771 | go | cmd/go: clarify error message when importing a package that could be (but isn't) in the main module | ### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.13 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes.
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code></summary><br><pre>
$ go env
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/johnrinehart/Library/Caches/go-build"
GOENV="/Users/johnrinehart/Library/Application Support/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GONOPROXY=""
GONOSUMDB=""
GOOS="darwin"
GOPATH="/Users/johnrinehart/go"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/local/go"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/darwin_amd64"
GCCGO="gccgo"
AR="ar"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD="/Users/johnrinehart/MyGo/playground/test/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/wg/8x33rs4j5d7bgr5z58_4ql0m0000gn/T/go-build825724138=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>
### What did you do?
I imported a path that was not a valid package (because of a combination of build tags, test files, `documentation` package files, and/or no files present), but was a valid path to a directory on a filesystem (one which contained other packages). `go build ./...`, `go test ./...`, and `go mod tidy` report an error in finding the path to the package within the module.
### What did you expect to see?
I guess it would have been nice to see both of these things in the output (as @Helcaraxan and @thepudds suggested in [Gophers slack (#modules)](https://gophers.slack.com/archives/C9BMAAFFB/p1569212040171800)):
1. There's no such module online
1. No *.go files matching current build constraints
### What did you see instead?
Before the module was pushed to a GitHub repository (everything local)
```
https://gist.github.com/johnrichardrinehart/2d9944249e4d1e0f17c608149d50ff39
```
and after the module was pushed to a repository (`github.com/johnrichardrinehart/a`) instead of (`github.com/a`, as in the gist):
```
johnrinehart@modie test (master) $ go mod tidy
go: finding github.com/johnrichardrinehart/a/folder latest
github.com/johnrichardrinehart/a/folder/pkg imports
github.com/johnrichardrinehart/a/folder: no matching versions for query "latest"
```
| help wanted,NeedsFix,modules | low | Critical |
497,479,999 | opencv | Cannot read ONNX model, error in reshape_layer.cpp, function 'computeShapeByReshapeMask' | #### System information (version)
- OpenCV => 4.1.1 (master branch, commit a74fe2ec01d9218d06cb7675af633fc3f409a6a2)
- Operating System / Platform => Debian Linux 64 Bit
- Compiler => gcc version 6.3.0 20170516 (Debian 6.3.0-18+deb9u1)
##### Detailed description
Error while trying to read pretrained ONNX DNN model file:
> cv2.error: OpenCV(4.1.2-pre) /media/sdb2/opencv/modules/dnn/src/layers/reshape_layer.cpp:125: error: (-5:Bad argument) Copy dim[3] (which has zero size) is out of the source shape bounds in function 'computeShapeByReshapeMask'
##### Steps to reproduce
```python
net = cv.dnn.readNetFromONNX('/media/sdb2/temp/mask_rcnn_R_50_FPN_1x.onnx')
```
Model file is available online at the [page](https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/mask-rcnn) ([direct link](https://onnxzoo.blob.core.windows.net/models/opset_10/mask_rcnn/mask_rcnn_R_50_FPN_1x.onnx) )
I also tried the model from [another page](https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/faster-rcnn) and got the same error.
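One hedged suggestion that is not in the original report: the `onnx` Python package can confirm the downloaded file is itself well-formed and report which opset it targets, which helps separate a corrupt download from a missing feature in OpenCV's ONNX importer:
```python
import onnx

model = onnx.load('/media/sdb2/temp/mask_rcnn_R_50_FPN_1x.onnx')
onnx.checker.check_model(model)   # raises if the graph is structurally invalid
print(model.opset_import)         # opset version(s) the importer must support
```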
| feature,category: dnn,category: dnn (onnx) | low | Critical |
497,536,537 | opencv | Ill-defined distortion model found by calibrateCamera | Hi everyone,
The calibrateCamera() function can return distortion coefficients `k1, k2... k6` that lead to a distortion model that is ill-defined. The denominator of the radial distortion part `(1 + k4 r^2 + k5 r^4 + k6 r^6)` can have roots `r_i` such that `r_i^2 = ((u-cx)/fx)^2 + ((v-cy)/fy)^2` for some real-valued image point coordinates `(u, v)` within the image boundaries.
As a consequence, all image points located in the vicinity of this ellipse may be attributed an incorrect value by the remap function, as the rectification maps will contain invalid coordinates.
This was observed after calibrating with a set of images in which the chessboard was placed mostly near the image center, so my guess is that the lack of constraints further from the center could explain why the optimization could converge towards such a solution. A workaround is to check at the end of the calibration process that the radial distortion model is well-defined for all image pixel coordinates, i.e. compute the roots of the polynomial in the denominator.
My question is: when the rational distortion model is used, shouldn't the optimization be constrained in a way that prevents convergence towards such ill-defined sets of coefficients?
Here are the parameters found:
```
fx:530.416 fy:530.429 cx:511.626 cy:383.541
k1:22.4427 k2:-18.5172 p1:0.000102929 p2:0.000468845 k3:-4.44312 k4:22.8537 k5:-10.515 k6:-12.7093
width:1024 height:768
```
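A minimal sketch of that workaround, using NumPy and the parameters listed above (the threshold logic is my own framing of the check, not an OpenCV API):
```python
import numpy as np

# Rational-model denominator: 1 + k4*r^2 + k5*r^4 + k6*r^6.
# With s = r^2 this is the cubic k6*s^3 + k5*s^2 + k4*s + 1.
fx, fy, cx, cy = 530.416, 530.429, 511.626, 383.541
k4, k5, k6 = 22.8537, -10.515, -12.7093
width, height = 1024, 768

roots = np.roots([k6, k5, k4, 1.0])
s_candidates = roots[np.isreal(roots) & (roots.real > 0)].real  # candidate r^2 values

# Largest normalized radius reachable inside the image (attained at a corner).
corners = np.array([[0, 0], [width - 1, 0], [0, height - 1], [width - 1, height - 1]], float)
r_max = np.max(np.hypot((corners[:, 0] - cx) / fx, (corners[:, 1] - cy) / fy))

bad_radii = np.sqrt(s_candidates[s_candidates <= r_max ** 2])
print('denominator vanishes at normalized r =', bad_radii)  # non-empty => ill-defined model
```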

##### Steps to reproduce
I can provide calibration images upon request.
##### System information (version)
- OpenCV => 4.1.0
- Operating System / Platform => Linux 64 Bit
- Compiler => GCC 8.3.1 | category: calib3d | low | Minor |
497,591,519 | svelte | Passing values from slot to parent | From the documentation of slots it seems it should be possible to bind values of a component to a slot:
> Slots can be rendered zero or more times, and can pass values back to the parent using props. The parent exposes the values to the slot template using the `let:` directive.
but it seems that the real situation is different: this [REPL](https://svelte.dev/repl/21a2324cc6be46348e514db137353ce8?version=3.12.1) triggers the error `Cannot bind to a variable declared with the let: directive (10:32)`.
**Expected behavior**
Binding a variable in a slot, which is bound to a variable in the parent component, should work normally as it would if I manually substituted the slot content inside the container.
**Severity**
This undermines the possibility of developing a lot of components that take care of boilerplate code for my application, so in my case it effectively blocks my usage of Svelte for the project. | feature request | high | Critical |
497,635,505 | react | [eslint-plugin-react-hooks] allow configuring custom hooks as "static" |
**Do you want to request a *feature* or report a *bug*?**
Feature/enhancement
**What is the current behavior?**
Currently the eslint plugin is unable to understand when the **return value** of a custom hook is static.
Example:
```jsx
import React from 'react'
function useToggle(init = false) {
const [state, setState] = React.useState(init)
const toggleState = React.useCallback(() => { setState(v => !v) }, [])
return [state, toggleState]
}
function MyComponent({someProp}) {
const [enabled, toggleEnabled] = useToggle()
const handler = React.useCallback(() => {
toggleEnabled()
doSomethingWithTheProp(someProp)
}, [someProp]) // exhaustive-deps warning for toggleEnabled
return <button onClick={handler}>Do something</button>
}
```
**What is the expected behavior?**
I would like to configure `eslint-plugin-react-hooks` to tell it that `toggleEnabled` is static and doesn't need to be included in a dependency array. This isn't a huge deal but more of an ergonomic papercut that discourages writing/using custom hooks.
As for how/where to configure it, I would be happy to add something like this to my .eslintrc:
```js
{
"staticHooks": {
"useToggle": [false, true], // first return value is not stable, second is
"useForm": true, // entire return value is stable
}
}
```
Then the plugin could have an additional check [after these 2 checks](https://github.com/facebook/react/blob/8b580a89d6dbbde8a3ed69475899addef1751116/packages/eslint-plugin-react-hooks/src/ExhaustiveDeps.js#L228-L231) that tests for custom names.
**Which versions of React, and which browser / OS are affected by this issue? Did this work in previous versions of React?**
All versions of eslint-plugin-react-hooks have the same deficiency.
## Please read my first comment below and try my fork if you are interested in this feature! | Type: Enhancement,Component: ESLint Rules | high | Critical |
497,673,271 | TypeScript | compilerOptions support TDZ error option | According to PR #32221, to prevent a problem like the following:
```typescript
(function () {
function cleanup() {
console.log(A); // throw error
}
cleanup();
const A = 1;
}());
```
this snippet will compile into:
```javascript
(function () {
function cleanup() {
console.log(A); // log undefined
}
cleanup();
var A = 1;
}());
```
The error from the first snippet is silently lost in the compiled output, which hides the TDZ violation. Can `tsc` support a TDZ-error option like [babel-plugin-transform-block-scoping](https://babeljs.io/docs/en/next/babel-plugin-transform-block-scoping.html)? It would notify people who write such a pattern and prevent further weird problems.
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [ ] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [ ] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback | low | Critical |
497,676,410 | youtube-dl | Wicked Weasel |
## Checklist
- [x] I'm reporting a new site support request
- [x] I've verified that I'm running youtube-dl version **2019.09.12.1**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that none of provided URLs violate any copyrights
- [x] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
- Single video: g:\temp\ffmpeg\youtube-dl -F https://wickedweasel.com/en-us/blog/september-2019/sheer-naughty-wicked-weasel-lingerie-try-on-haul?utm_source=ActiveCampaign&utm_medium=email&utm_content=%5BVID%5D+Naughty+Lingerie+Try+On+Haul+%F0%9F%98%88&utm_campaign=Video+-+Naughty+Lingerie+try+on+haul+EDM+%2823%2F09%2F19%29
## Description
Traceback (most recent call last):
File "__main__.py", line 19, in <module>
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmphopugs4t\build\youtube_dl\__init__.py", line 474, in main
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmphopugs4t\build\youtube_dl\__init__.py", line 464, in _real_main
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmphopugs4t\build\youtube_dl\YoutubeDL.py", line 2010, in download
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmphopugs4t\build\youtube_dl\YoutubeDL.py", line 796, in extract_info
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmphopugs4t\build\youtube_dl\extractor\common.py", line 530, in extract
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmphopugs4t\build\youtube_dl\extractor\generic.py", line 2284, in _real_extract
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmphopugs4t\build\youtube_dl\extractor\common.py", line 627, in _request_webpage
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmphopugs4t\build\youtube_dl\YoutubeDL.py", line 2229, in urlopen
File "C:\Python\Python34\lib\urllib\request.py", line 470, in open
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmphopugs4t\build\youtube_dl\utils.py", line 2633, in http_response
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmphopugs4t\build\youtube_dl\utils.py", line 2575, in deflate
zlib.error: Error -5 while decompressing data: incomplete or truncated stream
'utm_medium' is not recognized as an internal or external command,
operable program or batch file.
'utm_content' is not recognized as an internal or external command,
operable program or batch file.
'utm_campaign' is not recognized as an internal or external command,
operable program or batch file. | site-support-request | low | Critical |
497,705,224 | electron | sendInputEvent doesn't work for iframe/webview | I'm trying to interact with an embedded video _using manual mouse input_ via `webContents.sendInputEvent` api. It stopped working in 5-0-x.
### Issue Details
* **Electron Version:**
* 5-0-x, 6-0-x
* **Operating System:**
* Win10 (1903), macOS 10.14.6
* **Last Known Working Electron version:**
* 4.2.10
### Expected Behavior
`sendInputEvent` should produce the same effects as OS input
### Actual Behavior
`sendInputEvent` doesn't interact with `<iframe>` or `<webview>`
### To Reproduce
```js
const { app, BrowserWindow } = require('electron')
// Fails with 5.0.10
// Works with 4.2.10
app.once('ready', () => {
const { webContents: wc } = new BrowserWindow({
width: 800,
height: 600
})
wc.once('did-finish-load', () => {
setTimeout(() => {
wc.sendInputEvent({ type: 'mouseMove', x: 600, y: 300 })
}, 1000)
setTimeout(() => {
wc.sendInputEvent({ type: 'mouseDown', x: 600, y: 300, button: 'right' })
}, 2000)
setTimeout(() => {
wc.sendInputEvent({ type: 'mouseUp', x: 600, y: 300, button: 'right' })
}, 2100)
})
wc.on('context-menu', (event, params) => {
console.log(params)
})
wc.loadURL('https://www.w3schools.com/html/tryit.asp?filename=tryhtml_youtubeiframe')
})
```
### Additional Information
The test above manually triggers a right click on the embedded video, which should
- activate the iframe on the first mouse move
- open the _YouTube context menu_ instead of the main frame's
| platform/windows,platform/macOS,bug :beetle:,status/confirmed,5-0-x,6-1-x,10-x-y,component/webcontents,25-x-y,26-x-y | medium | Major |
497,828,301 | pytorch | [jit] Bad error when instantiating TorchScript class with incorrect types | If the Python object has a different type for an attribute than the TorchScript class, then the error is a pybind miscast:
in `__init__` of a module:
```python
self.x = BalancedPositiveNegativeSampler(1, 2)
```
```python
@torch.jit.script
class BalancedPositiveNegativeSampler(object):
"""
This class samples batches, ensuring that they contain a fixed proportion of positives
"""
def __init__(self, batch_size_per_image, positive_fraction):
# type: (int, float)
"""
Arguments:
batch_size_per_image (int): number of elements to be selected per image
positive_fraction (float): percentage of positive elements per batch
"""
self.batch_size_per_image = batch_size_per_image
self.positive_fraction = positive_fraction
```
```
RuntimeError: Unable to cast Python instance of type <class 'int'> to C++ type 'torch::autograd::Variable'
```
cc @suo | oncall: jit,triaged,jit-backlog | low | Critical |
497,863,544 | neovim | :terminal output slows down editor when handling large output |
- `nvim --version`: v0.5.0-dev
- Operating system/version: Ubuntu 19.04
- Terminal name/version: gnome-terminal
- `$TERM`: xterm-256color
### Steps to reproduce using `nvim -u NORC`
```
nvim -u NORC
:vs | term
i yes
```
### Actual behaviour
Other neovim buffers slow down (navigation, editing, etc.)
### Expected behaviour
Other neovim buffers work as usual
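### Workaround
In the meantime, a rough workaround (based on the `pv` idea in the suggestion below; assumes `pv` is installed) is to rate-limit the noisy command inside the embedded shell itself, e.g. instead of plain `yes` run:
```
yes | pv -qL 1000
```
This caps the output at roughly 1000 bytes/s so the terminal buffer cannot flood the editor.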
### Suggestion
Would it be possible to add an option to limit the rate of buffer stdout and stderr? Similar to piping all output through [pv](https://linux.die.net/man/1/pv) -qL [rate]. | enhancement,performance,terminal | low | Major |
497,865,881 | go | proposal: net: add BufferedPipe (buffered Pipe) | If one wants to connect two Go libraries which use the `net` interfaces, the only cross platform way to do it with the standard library is to use the loopback. The only in-process way to do it is to create a socket pair with syscall.Socketpair and go from FD -> `*os.File` -> `net.Conn`. This is a fairly manual process, and while local to the machine and fully in-memory, does use a non-cross-platform kernel feature to do the heavy lifting.
A native Go transport would be more ideal, but expecting users to implement their own is maybe not reasonable as the net.Conn and net.Listener interfaces are difficult to implement correctly. I propose that we add a canonical and cross-platform in-memory transport in either `net` or `x/net`.
Further, I propose the following interface:
```go
func NewConnPair(addr1, addr2 net.Addr) (net.Conn, net.Conn)
```
```go
func NewPacketConnPair(addr1, addr2 net.Addr) (net.PacketConn, net.PacketConn)
```
```go
func NewDialerListenerPair(dialAddr, listenAddr net.Addr) (func(context.Context) (net.Conn, error), net.Listener)
```
Maybe it would be better to create new exported types and return those instead of the interfaces (e.g. `MemoryConn`, `MemoryPacketConn`, and `MemoryListener`)? That seems like it would be more consistent with the `net` package.
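For context, the closest existing primitive is the unbuffered `net.Pipe`, which only covers a single synchronous in-memory `Conn` pair (every write blocks until the peer reads) and has no `Listener`/`Dialer` or `PacketConn` form:
```go
package main

import (
	"fmt"
	"net"
)

func main() {
	c1, c2 := net.Pipe() // in-memory, synchronous, unbuffered net.Conn pair
	go func() {
		c1.Write([]byte("ping")) // blocks until the other side reads
		c1.Close()
	}()
	buf := make([]byte, 4)
	n, _ := c2.Read(buf)
	fmt.Println(string(buf[:n])) // "ping"
}
```
A buffered variant with listener/dialer and packet forms is what this proposal asks for.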
CC @rsc | Proposal,Proposal-Hold | medium | Critical |
497,897,084 | pytorch | [jit] Default args don't work with TorchScript classes | ```python
import torch
@torch.jit.script
class X(object):
def __init__(self, a=2, y=3):
self.a = a
self.y = y
@torch.jit.script
def fn():
return X()
```
```
__init__(ClassType<X> self, Tensor a, Tensor y) -> (None):
Argument a not provided.
:
at ../test.py:11:11
@torch.jit.script
def fn():
return X()
~ <--- HERE
```
cc @suo | oncall: jit,triaged,jit-backlog | low | Minor |
497,927,855 | opencv | Make OpenCV an Official Emscripten Port | Hello,
OpenCVJS should be added to the [Emscripten port list]( https://github.com/emscripten-ports) so there will be official support for using OpenCV in C++ when compiling C++ to WASM using Emscripten.
From the [emscripten page]( https://emscripten.org/docs/compiling/Building-Projects.html#building-projects) here is what we need to do to get an official build:
> Adding more ports is fairly easy. Basically, the steps are
> - Make sure the port is open source and has a suitable license.
> - Add it to emscripten-ports on github. The ports maintainers can create the repo and add the relevant developers to a team for that repo, so they have write access.
> - Add a script to handle it under tools/ports/ (see existing code for examples) and use it in tools/ports/__init__.py.
> - Add testing in the test suite.
> Build system issues
> Build system self-execution
> Some large projects generate executables and run them in order to generate input for later parts of the build process (for example, a parser may be built and then run on a grammar, which then generates C/C++ code that implements that grammar). This sort of build process causes problems when using Emscripten because you cannot directly run the code you are generating.
> The simplest solution is usually to build the project twice: once natively, and once to JavaScript. When the JavaScript build procedure fails because a generated executable is not present, you can then copy that executable from the native build, and continue to build normally. This approach was successfully used for compiling Python (see tests/python/readme.md for more details).
> In some cases it makes sense to modify the build scripts so that they build the generated executable natively. For example, this can be done by specifying two compilers in the build scripts, emcc and gcc, and using gcc just for generated executables. However, this can be more complicated than the previous solution because you need to modify the project build scripts, and you may have to work around cases where code is compiled and used both for the final result and for a generated executable.
| priority: low,category: javascript (js) | low | Minor |
497,940,094 | rust | Parallel rustc spends a lot of time creating threads | [After some discussion on zulip](https://rust-lang.zulipchat.com/#narrow/stream/187679-t-compiler.2Fwg-parallel-rustc/topic/slowdown.20compiling.20Cargo/near/176509042) it looks like `rayon` will immediately spawn all threads for the thread pool on startup, but this isn't necessarily suitable for rustc's use case. In a profile I captured of compiling Cargo, the first set of crates being compiled spent a huge amount of time just spawning threads.
I ran `perf` at the start of `cargo build` and killed it after a second or so, and the profile output of this (cleaned up) aggregated across all builds was:
<img width="277" alt="Capture" src="https://user-images.githubusercontent.com/64996/65552706-b3193d00-deea-11e9-9576-87469103d327.PNG">
I think that roughly means that 80% of the cpu time was spent transitively spawning threads/processes near the start of the build, and that seems a bit excessive!
cc @Mark-Simulacrum @Zoxc | C-enhancement,I-compiletime,T-compiler,WG-compiler-parallel | low | Major |
497,948,376 | pytorch | [RFC] RRef Protocol | With @pritamdamania87 @gqchen @aazzolini @satgera @xush6528 @zhaojuanmao
Master Design Doc:
* Distributed Model Parallel Design: #23110
Main RRef PRs: #25499 #25169
### Background
RRef stands for Remote REFerence. Each RRef is owned by a single worker
(i.e., owner) and can be used by multiple users. The owner stores the real
data referenced by its RRefs. RRef needs to support fast and scalable RPC.
Hence, in the design, we avoid using a single global master to keep RRef states,
instead owners will keep track of the global reference counts
for its RRefs. Every RRef can be uniquely identified by a global `RRefId`,
which is assigned at the time it is first created either on a user or on the
owner.
On the owner worker, there is only one `OwnerRRef` instance, which contains the
real data, while on user workers, there can be as many `UserRRef`s as necessary,
and `UserRRef` does not hold the data. All usage on the `OwnerRRef` should
retrieve the unique `OwnerRRef` instance using the globally unique ``RRefId``.
A `UserRRef` will be created when it is used as an
argument or return value in `dist.rpc` or `dist.remote` call, but RRef forking
and reference counting (RC) are completely transparent to applications. Every
`UserRRef` will also have its globally unique `ForkId`.
### Assumptions
#### Transient Network Failures
The RRef design aims to handle transient network failures by retrying messages.
Node crashes or permanent network partition is beyond the scope. When those
incidents occur, the application may take down all workers, revert to the
previous checkpoint, and resume training.
#### Non-idempotent UDFs
We assume UDFs are not idempotent and therefore cannot be retried. However,
internal RRef control messages will be made idempotent and retryable.
#### Out of Order Message Delivery
We do not assume message delivery order between any pair of nodes, because both
sender and receiver are using multiple threads. There is no guarantee on which
message will be processed first.
### RRef Lifetime
The goal of the protocol is to delete an `OwnerRRef` at an appropriate time. The
right time to delete an `OwnerRRef` is when there are no living `UserRRef`s and
Python GC also agrees to delete the `OwnerRRef` instance on the owner. The tricky
part is to determine if there are any living `UserRRef`s.
A user can get a UserRRef in three situations:
* (1). Receiving a UserRRef from the owner.
* (2). Receiving a UserRRef from another user.
* (3). Creating a new UserRRef owned by another worker.
(1) is the simplest case where the owner initiates the fork, and hence it can
easily increment local RC. The only requirement is that any `UserRRef` must
notify the owner before destruction. Hence, we need the first guarantee:
**G1. The owner will be notified when any `UserRRef` is deleted.**
As messages might arrive delayed or out of order, we need one more
guarantee to make sure the delete message is not sent out too soon. Let us
first introduce a new concept. If A sends an RPC to B that involves an RRef, we
call the RRef on A the parent RRef and the RRef on B the child RRef.
**G2. Parent RRef cannot be deleted until the child RRef is confirmed by the owner.**
Under (1), where the caller is `UserRRef` and callee is `OwnerRRef`, it simply
means that the user will not send out the delete message until all previous
messages are ACKed. Note that ACKed does not mean the owner finishes executing
the function, instead, it only means the owner has retrieved its local
`OwnerRRef` and about to pass it to the function, which is sufficient to keep
the `OwnerRRef` alive even if the delete message from the user arrives at the
owner before the function finishes execution.
With (2) and (3), it is possible that the owner only partially knows the RRef
fork graph or not even knowing it at all. For example, the RRef could be
constructed on a user, and before the owner receives the RPC call, the
creator user might have already shared the RRef with other users, and those
users could further share the RRef. One invariant is that the fork graph of
any RRef is always a tree rooted at the owner, because forking an RRef always
creates a new RRef instance, and hence every RRef has a single parent. One
nasty detail is that when an RRef is created on a user, technically the owner
is not its parent but we still consider it that way and it does not break the
argument below.
The owner's view on any node (fork) in the tree has three stages:
1) unknown β 2) known β 3) deleted.
The owner's view on the entire tree keeps changing. The owner deletes its
`OwnerRRef` instance when it thinks there are no living `UserRRefs`, i.e., when
`OwnerRRef` is deleted, all `UserRRef`s could be either indeed deleted or
unknown. The dangerous case is when some forks are unknown and others are
deleted.
G2 trivially guarantees that no parent `UserRRef` Y can be deleted before the
owner knows all of Y's children `UserRRef`s.
However, it is possible that the child `UserRRef` Z may be deleted before the
owner knows its parent Y. More specifically, this can happen when all of Z's
messages are processed by the owner before all messages from Y, including the
delete message. Nevertheless, this does not cause any problem. Because, at least
one of Y's ancestors will be alive, and it will
prevent the owner from deleting the `OwnerRRef`. Consider the following example:
```
OwnerRRef -> A -> Y -> Z
```
`OwnerRRef` forks to A, then A forks to Y, and Y forks to Z. Z
can be deleted without `OwnerRRef` knowing Y. However, the `OwnerRRef`
will at least know A, as the owner directly forks the RRef to A. A won't die
before the owner knows Y.
Things get a little trickier if the RRef is created on a user:
```
OwnerRRef
^
|
A -> Y -> Z
```
If Z calls `to_here` on the `UserRRef`, the owner at least knows A when Z is
deleted, because otherwise `to_here` wouldn't finish. If Z does not call
`to_here`, it is possible that the owner receives all messages from Z before
any message from A and Y. In this case, as the real data of the `OwnerRRef` has
not been created yet, there is nothing to be deleted either. It is the same as if Z
did not exist at all. Hence, it's still OK.
### Protocol Scenarios
Let's now discuss how above two guarantees translate to the protocol.
#### User to Owner RRef as Return Value
```python
import torch
import torch.distributed.rpc as rpc
# on worker A
rref = rpc.remote('B', torch.add, args=(torch.ones(2), 1))
# say the rref has RRefId 100 and ForkId 1
rref.to_here()
```
In this case, the `UserRRef` is created on the user A, then it is passed to the
owner B together with the `remote` message, and then the owner creates the
`OwnerRRef`. The method `dist.remote` returns immediately, meaning that the
`UserRRef` can be forked/used before the owner knows about it.
On the owner, when receiving the `dist.remote` call, it will create the
`OwnerRRef`, and immediately returns an ACK to acknowledge `{100, 1}`. Only
after receiving this ACK can A delete its `UserRRef`. This involves both
**G1** and **G2**. **G1** is obvious. For **G2**, the `OwnerRRef` is a child
of the `UserRRef`, and the `UserRRef` is not deleted until it receives the ACK
from the owner.
<img width="571" alt="user_to_owner_ret" src="https://user-images.githubusercontent.com/16999635/69164772-98181300-0abe-11ea-93a7-9ad9f757cd94.png">
[mermaid source](https://raw.githubusercontent.com/mrshenli/pytorch/image/RRefUserOwner.txt)
The diagram above shows the message flow. Note that the first two messages from
A to B (`remote` and `to_here`) may arrive at B in any order, but the final
delete message will only be sent out when: 1) B acknowledges user RRef
`{100, 1}` (**G2**), and 2) Python GC agrees to delete the local `UserRRef`
instance.
#### User to Owner RRef as Argument
```python
import torch
import torch.distributed.rpc as rpc
# on worker A and worker B
def func(rref):
pass
# on worker A
rref = rpc.remote('B', torch.add, args=(torch.ones(2), 1))
# say the rref has RRefId 100 and ForkId 1
rpc.rpc_async('B', func, args=(rref, ))
```
In this case, after creating the `UserRRef` on A, A uses it as an argument in a
followup RPC call to B. A will keep `UserRRef {100, 1}` alive until it receives
the acknowledge from B (**G2**, not the return value of the RPC call).
This is necessary because A should not send out the delete message until all
previous `rpc`/`remote` calls are received, otherwise, the `OwnerRRef` could be
deleted before usage as we do not guarantee message delivery order. This is done
by creating a child `ForkId` of `rref`, holding them in a map until receives the
owner confirms the child `ForkId`. The figure below shows the message flow.
<img width="565" alt="user_to_owner_arg" src="https://user-images.githubusercontent.com/16999635/69164845-b67e0e80-0abe-11ea-93fa-d24674e75a2b.png">
[mermaid source](https://raw.githubusercontent.com/mrshenli/pytorch/image/RRefUserOwnerArg.txt)
Note that the `UserRRef` could be deleted on B before `func` finishes or even
starts. However this is OK, as at the time B sends out ACK for the child
`ForkId`, it already acquired the `OwnerRRef` instance, which would prevent it
been deleted too soon.
#### Owner to User
Owner to user is the simplest case, where the owner can update reference
counting locally, and does not need any additional control message to notify
others. Regarding **G2**, it is same as the parent receives the ACK from the
owner immediately, as the parent is the owner.
```python
import torch
from torch.distributed.rpc import RRef
import torch.distributed.rpc as rpc
# on worker B and worker C
def func(rref):
pass
# on worker B, creating a local RRef
rref = RRef("data")
# say the rref has RRefId 100
rpc.rpc_async('C', func, args=(rref, ))
```
<img width="568" alt="owner_to_user" src="https://user-images.githubusercontent.com/16999635/69164921-c990de80-0abe-11ea-9250-d32ad00cf4ae.png">
[mermaid source](https://raw.githubusercontent.com/mrshenli/pytorch/image/RRefOwnerUser.txt)
The figure above shows the message flow. Note that when the `OwnerRRef` exits
scope after the `rpc_async` call, it will not be deleted, because internally
there is a map to hold it alive if there is any known forks, in which case is
`UserRRef {100, 2}`. (**G2**)
#### User to User
This is the most complicated case where caller user (parent `UserRRef`), callee
user (child `UserRRef`), and the owner all need to get involved.
```python
import torch
import torch.distributed.rpc as rpc
# on worker A and worker C
def func(rref):
pass
# on worker A
rref = rpc.remote('B', torch.add, args=(torch.ones(2), 1))
# say the rref has RRefId 100 and ForkId 1
rpc.rpc_async('C', func, args=(rref, ))
```
In order to guarantee that the `OwnerRRef` is not deleted before the callee user
uses it, the caller user holds its own `UserRRef` until it receives the ACK from
the callee user to confirm the child `UserRRef`, and the callee user will not
send out that ACK until the owner confirms its `UserRRef` (**G2**). The figure
below shows the message flow.
<img width="564" alt="user_to_user" src="https://user-images.githubusercontent.com/16999635/69164971-d6adcd80-0abe-11ea-971d-6b7af131f0fd.png">
[mermaid source](https://raw.githubusercontent.com/mrshenli/pytorch/image/RRefUserShare.txt)
When C receives the child `UserRRef` from A, it sends out a fork request to
the owner B. Later, when the B confirms the `UserRRef` on C, C will perform two
actions in parallel: 1) send out the child ACK to A and 2) run the UDF.
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini | triaged,module: rpc | low | Critical |
497,976,551 | go | proposal: x/sync: pass errgroup.WithContext's derived context directly | Since API changes are something that is now possible to do with module versions, I thought it would be worth mentioning one that gnaws at me pretty frequently.
We use contexts heavily in my code-base, and it comes up moderately often that we want a cancellable or deadline-respecting `sync.WaitGroup`, and for this purpose authors tend to turn to `x/sync/errgroup`. This often works out fine as long as the scope of the `errgroup.Group` is confined to a single function, but it also regularly will be used with something somewhat stateful. These frequently follow a certain pattern, the core elements of which are exemplified by this contrived example:
```go
type ServiceRunner struct {
group *errgroup.Group // *Group as a field
groupCtx context.Context // context as a field (heavily discouraged)
}
func (sr *ServiceRunner) Run(server *servers.Server) {
sr.group.Go(func() error { return server.Run(sr.groupCtx) }) // wrapper that calls Go
}
func (sr *ServiceRunner) Wait(ctx context.Context) error {
select {
case <-ctx.Done():
return ctx.Err()
case <-sr.groupCtx.Done():
sr.group.Wait() // wrapper that calls Wait and/or ctx.Done
return sr.groupCtx.Err()
}
}
```
I assert (without evidence) that this boilerplate is pretty common among context-respecting code that interacts with `errgroup.Group`, and in fact the above could be almost directly converted into a utility package, but then we would have ContextGroup wrapping errgroup wrapping WaitGroup... which feels excessive.
I have also observed that the context returned by `WithContext` is occasionally misused for code other than the goroutines spawned by `Go`, in some cases by directly shadowing the `ctx` variable, which often results in spooky action at a distance where one failure causes code in another part of the application to have its context cancelled.
So, I propose that the API for errgroup split out the context- and non-context APIs:
```go
type Group struct { /* ... */ }
func (Group) Go(func() error) { /* ... */ }
func (Group) Wait() error { /* ... */ }
type ContextGroup struct { /* ... */ }
func WithContext(ctx context.Context) *ContextGroup { /* ... */ }
func (ContextGroup) Go(func(context.Context) error) { /* ... */ }
func (ContextGroup) Wait(context.Context) error { /* ... */ }
```
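A sketch of how the proposed split might read at a call site (`ContextGroup` and these signatures exist only in this proposal, so this cannot compile today):
```go
cg := errgroup.WithContext(ctx) // would return *ContextGroup under this proposal
cg.Go(func(ctx context.Context) error {
	return server.Run(ctx) // the derived context is passed in explicitly
})
// Wait itself takes a context, covering the "cancelable wait" boilerplate above.
if err := cg.Wait(ctx); err != nil {
	log.Printf("run failed: %v", err)
}
```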
Unfortunately, this is definitely a backward-incompatible change, and one for which there is probably little chance for a mechanical rewrite unless ContextGroup had a mechanism for retrieving its context. | Proposal | low | Critical |
497,982,467 | go | net/http: Client round-robin across persistent connections | When connecting directly to a service spread across multiple hosts with DNS "load-balancing" it would be ideal if `http.Client` could round-robin requests over persistent connections to the multiple hosts listed in the A record.
Right now, if you want to ensure that load is balanced the only practical option is to disable HTTP keepalives.
One way this could be done is to have idle connection pools for each different A record, and do a DNS resolution for every request to decide which pool to use for that request.
This round-robin mode should likely be opt-in (or opt-out). When enabled, given a DNS record like the following:
```
example.local 60 IN A 10.0.0.1
example.local 60 IN A 10.0.0.2
```
`http.Client` would send roughly half the requests to .1 and half to .2, even when using persistent connections, and even if the client only ever sends requests serially.
---
Until this is fixed, if someone else also needs this you can take a look at https://github.com/CAFxX/balancer.
| NeedsInvestigation,FeatureRequest | low | Major |
497,983,700 | flutter | Add an ABI stability check to the embedder API. | c: new feature,team,engine,e: embedder,P3,team-engine,triaged-engine | low | Minor |
|
498,006,734 | go | proposal: spec: extended type inference for make and new | ### Rationale
Currently in Go, type inference works in one direction and one direction only: From the values of an assignment to the variable declarations. In other words, given some expression with a statically-determinable type, which is basically every expression, the type of a declaration can be omitted. This allows for a lot of the perceived lightness of Go over many statically typed languages.
However, there are a number of situations where this is _not_ possible, most of which have to do with `make()` and `new()`, both of which are unique, not including some stuff in the `unsafe` package, in that they take a type name as an argument. Normally this is a non-issue, as that type can be used to determine the return of the expression, thus allowing for inference in the normal manner:
```go
m := make(map[string]interface{})
s := make([]string, 0, 10)
p := new(int)
```
Sometimes the variable _must_ have a separately declared type, though, such as in the case of struct fields:
```go
type Example struct {
m map[string]interface{}
}
func NewExample() *Example {
return &Example{
m: make(map[string]interface{}),
}
}
```
This leads to unwanted repetition of the type name, making later alteration more awkward. In particular, I thought of this while reading through one of the many proposals about ways to make anonymous structs more useful with channels, and I realized that the following pattern could get very annoying, exacerbating the existing issue:
```go
type Example struct {
c chan struct {
v string
r chan<- int
}
}
func NewExample() *Example {
return &Example{
c: make(chan struct {
v string
r chan<- int
}),
}
}
```
### Proposal
I propose the introduction of a new keyword, say `typeof`, which takes a variable, function name, or a struct field identifier and essentially acts as a stand-in for a standard type name, possibly with the restriction of _only_ being available as the argument to a `make()` or `new()` call. For example,
```go
return &Example{
m: make(typeof Example.m),
}
```
This would allow a type name to be pegged to an already declared variable type elsewhere.
### Alternative
Alternatively, `make()` and `new()` could allow for the aforementioned types of identifiers directly, such as
```go
return &Example{
m: make(Example.m),
}
```
This has the advantage of being backwards compatible, but is potentially less flexible if one wants to extend the functionality elsewhere later, such as to generics. | LanguageChange,Proposal,dotdotdot,LanguageChangeReview | medium | Critical |
498,011,573 | flutter | initialize Google maps plugin by providing API key at runtime | Is there any way to provide the Google Maps API key from the code base rather than the manifest? I would like to be able to read the API key from our server before initializing Google Maps.
Thank you! | c: new feature,p: maps,customer: product,package,c: proposal,team-ecosystem,P2,triaged-ecosystem | low | Critical |
498,056,507 | pytorch | Matrix corresponding to convolution by a 2D kernel (convmtx2) | ## π Feature
A function `convmtx2` such that
convmtx2(kernel, image.shape[1:]) @ image.flatten() == torch.nn.functional.conv2d(image[None], kernel)[0].flatten()
for any tensors `image` of shape `(in_channels, image_height, image_width)` and `kernel` of shape `(out_channels, in_channels, kernel_height, kernel_width)`.
## Motivation
Like its [MATLAB counterpart](https://www.mathworks.com/help/images/ref/convmtx2.html), `convmtx2` computes the matrix corresponding to convolution by a 2D kernel. This matrix is a doubly block [Toeplitz matrix](https://en.wikipedia.org/wiki/Toeplitz_matrix#Discrete_convolution).
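For reference, a naive construction that satisfies the identity above is possible today by convolving each standard basis vector (this is my own sketch, not an existing PyTorch API, and it costs one `conv2d` call per input element, so it is only useful for testing or small images):
```python
import torch
import torch.nn.functional as F

def convmtx2(kernel, input_shape):
    """Naive sketch: build the doubly block Toeplitz matrix column by column.

    kernel: (out_channels, in_channels, kH, kW)
    input_shape: (in_channels, H, W)
    Returns M such that M @ image.flatten() == conv2d(image[None], kernel)[0].flatten().
    """
    n_in = int(torch.tensor(input_shape).prod())
    eye = torch.eye(n_in, dtype=kernel.dtype)
    cols = []
    for i in range(n_in):
        basis = eye[i].reshape(1, *input_shape)       # one-hot "image"
        cols.append(F.conv2d(basis, kernel).flatten())  # column i of the matrix
    return torch.stack(cols, dim=1)                    # (out_elems, in_elems)
```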
This feature has been requested elsewhere. See [here](https://discuss.pytorch.org/t/obtaining-toeplitz-matrix-for-convolution/52968/3) and [here](https://stackoverflow.com/questions/56702873/is-there-an-function-in-pytorch-for-converting-convolutions-to-fully-connected-n) for example. | feature,module: nn,triaged | medium | Major |
498,105,458 | pytorch | Importing tensorboard jams CUDA device selection | ## π Bug
On a multi-GPU environment, it is common to set `os.environ['CUDA_VISIBLE_DEVICES']` to select one GPU. However I have found that importing `torch.utils.tensorboard` leads to an unexpected behavior in this context.
I have two Titan Xp GPUs on an Ubuntu 16.04.5 server. Setting `os.environ['CUDA_VISIBLE_DEVICES'] = '1'` had all memory allocations and computations happen precisely on the second Titan GPU, until I integrated tensorboard into my code today. Now with the same code, memory allocation and computation happens on the first Titan GPU.
Below are the screenshots I took. The first Titan GPU is running something, which allocates 8742MB is GPU memory. The second Titan GPU has 10MB allocated by default (I have no idea why). Refer to the reproduction steps below.
(Screenshot 1)
<img width="1440" alt="Screenshot 2019-09-25 3:58:17 PM" src="https://user-images.githubusercontent.com/29395896/65578067-102af680-dfb0-11e9-8ddf-21eb126c60c2.png">
(Screenshot 2)
<img width="1440" alt="Screenshot 2019-09-25 3:58:55 PM" src="https://user-images.githubusercontent.com/29395896/65578069-11f4ba00-dfb0-11e9-9941-1edb5fd3950d.png">
## To Reproduce
Steps to reproduce the behavior:
1. Run `python reproduction_script.py`.
1. Check that memory is allocated on the second GPU. (Screenshot 1)
1. Uncomment the fourth line `import torch.utils.tensorboard`.
1. Run `python reproduction_script.py` again.
1. Check that memory is allocated on the *first* GPU. (Screenshot 2)
Below is the `reproduction_script.py` code:
```python
import os
import torch
# import torch.utils.tensorboard
os.environ['CUDA_VISIBLE_DEVICES'] = '1'
device = torch.cuda.current_device()
gpu_tensor = torch.ones((200, 200, 200), device=device)
input('Enter anything to terminate script.')
```
## Expected behavior
I expect that setting `os.environ['CUDA_VISIBLE_DEVICES']` to the index of a specific GPU and allocating tensors on the `device` returned by `torch.cuda.current_device()` makes all memory allocations and computations happen on that specific GPU.
## Environment
```
Collecting environment information...
PyTorch version: 1.2.0
Is debug build: No
CUDA used to build PyTorch: 10.0.130
OS: Ubuntu 16.04.5 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609
CMake version: Could not collect
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: TITAN X (Pascal)
GPU 1: TITAN X (Pascal)
Nvidia driver version: 418.56
cuDNN version: Could not collect
Versions of relevant libraries:
[pip] numpy==1.16.4
[pip] torch==1.2.0
[pip] torchvision==0.4.0a0+6b959ee
[conda] blas 1.0 mkl
[conda] mkl 2019.4 243
[conda] mkl-service 2.3.0 py36he904b0f_0
[conda] mkl_fft 1.0.14 py36ha843d7b_0
[conda] mkl_random 1.0.2 py36hd81dba3_0
[conda] pytorch 1.2.0 py3.6_cuda10.0.130_cudnn7.6.2_0 pytorch
[conda] torchvision 0.4.0 py36_cu100 pytorch
```
## Additional context
`conda list tb-nightly` prints
```
# packages in environment at /home/jaywonchung/anaconda3/envs/meta:
#
# Name Version Build Channel
tb-nightly 2.1.0a20190924 pypi_0 pypi
```
| triaged,module: tensorboard | low | Critical |
498,202,881 | kubernetes | Ephemeral storage doesn't account for deleted files with open handles |
**What happened**:
A pod that creates large files in the ephemeral storage (either in an explicit `emptyDir` volume or just by writing to the container's filesystem) and then deletes them while keeping an open file handle will cause serious stability issues on the node while making it very hard for an operator to find out that it's the root cause. From kubelet's perspective the pod's `ephemeral-storage` usage will be close to zero, so it'll go and evict other pods first while the offending pod continues using more and more disk space. None of the reported metrics (e.g. cadvisor's container_fs_usage_bytes) will report this pod's usage correctly, so the only way to even find it is manually running `lsof -a +L1` on the node. This also prevents Ephemeral Storage limits from working correctly, making it impossible for cluster operators to use those to protect nodes from faulty applications.
**What you expected to happen**:
A pod's filesystem usage is accounted for correctly, even if disk space is used by deleted files. When the node runs out of disk space, the offending pod is evicted.
**How to reproduce it (as minimally and precisely as possible)**:
Deploy a pod that'll create large files, delete them while still keeping the handle open and writing no logs or any other data. Wait until the node runs out of disk space. Observe that `kubelet` tries to evict all other pods instead (since they'll consume non-zero space for their logs), then gives up and the node dies completely.
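A minimal pod sketch I would use to reproduce this (image, names and sizes are arbitrary):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: deleted-file-leak
spec:
  containers:
  - name: leaker
    image: busybox
    command:
    - sh
    - -c
    # Open one fd to a file, delete the file, then keep writing through the
    # open fd: the space is still consumed, but the pod's measured usage stays ~0.
    - |
      exec 3>/tmp/big
      rm /tmp/big
      while true; do
        head -c 1048576 /dev/zero >&3
        sleep 1
      done
    resources:
      limits:
        ephemeral-storage: 100Mi   # never enforced, because the usage is invisible
```
While this runs, the reported ephemeral-storage usage for the pod stays near zero even though `df` on the node keeps shrinking; as described above, only `lsof -a +L1` reveals where the space went.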
**Environment**:
- Kubernetes version (use `kubectl version`): 1.14.6
- Cloud provider or hardware configuration: [kubernetes-on-aws](https://github.com/zalando-incubator/kubernetes-on-aws/)
- OS (e.g: `cat /etc/os-release`): Ubuntu 18.04.3 LTS
- Kernel (e.g. `uname -a`): 4.15.0-1048-aws
| kind/bug,sig/node,help wanted,priority/important-longterm,lifecycle/frozen,needs-triage | medium | Critical |
498,216,576 | pytorch | setup.py install error | When I ran `python setup.py install`, the following error occurred.
------------------------------------------------------------------
make: *** No rule to make target 'install'. Stop.
Building wheel torch-1.3.0a0+a395c31
-- Building version 1.3.0a0+a395c31
cmake --build . --target install --config Release -- -j 12
Traceback (most recent call last):
File "setup.py", line 756, in <module>
build_deps()
File "setup.py", line 320, in build_deps
cmake=cmake)
File "/home/****/****/****/DFG/pytorch/tools/build_pytorch_libs.py", line 59, in build_caffe2
cmake.build(my_env)
File "/home/****/****/****/DFG/pytorch/tools/setup_helpers/cmake.py", line 334, in build
self.run(build_args, my_env)
File "/home/****/****/****/DFG/pytorch/tools/setup_helpers/cmake.py", line 142, in run
check_call(command, cwd=self.build_dir, env=env)
File "/usr/lib/python2.7/subprocess.py", line 541, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--target', 'install', '--config', 'Release', '--', '-j', '12']' returned non-zero exit status 2 | module: build,triaged | low | Critical |
498,223,876 | terminal | Optionally persist font size changes when changed with CTRL+Mousewheel |
# CTRL+MouseWheel font size changes should optionally persist to profiles.json
I move between monitors a lot and this means I change monitor geometries, resolutions, and pixel pitches frequently. Frequently enough that I need to change the font size in Terminal almost daily, and it would be lovely if my font size changes "stick" for the day. The multiple round trips required to change the font size to get it exactly where I want it via `profiles.json` are a bit tedious. So, for some, and for me definitely, changes to font size should persist when I use CTRL+MouseWheel.
I know this goes against the folks who want this to **never** persist, so perhaps a new keyboard shortcut is in order, or perhaps the behavior of CTRL+MouseWheel should be exposed in the `keybindings` section of `profiles.json` as a `command`. An additional `command` could be created (which persists the font size when you zoom) and could then be chosen by those who want this behavior.
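To make the shape of that idea concrete, here is a purely hypothetical keybindings sketch. The command name is invented for this request and does not exist today, and I am glossing over where exactly the keybindings array lives in `profiles.json`:
```json
{
    "keybindings": [
        { "command": "adjustFontSizeAndPersist", "keys": [ "ctrl+shift+=" ] }
    ]
}
```
Users who prefer the current non-persisting behavior would simply keep the existing command bound instead.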
Persisting the font size when it is changed with CTRL+MouseWheel is the behavior in Sublime Text and I've gotten quite used to this over time, and now I miss it (I especially miss it in VScode, but that's another issue.)
Thank you. | Area-Settings,Product-Terminal,Issue-Task | low | Critical |
498,232,832 | create-react-app | Add scss import pattern to allow importing all css/scss files from folder | ### Is your proposal related to a problem?
Since I'm using scss modules, I have a main scss file with some common rules that's being imported in a .jsx file. In this main scss file, I need to import all scss files from a folder. They work as template options: each imported scss file is one specific template, and the set can grow over time, increasing the number of templates. Today, I need to import every single file like this:
**mainTemplate.module.scss**
```
// must import every single template
@import "./templates/template01.scss";
@import "./templates/template02.scss";
// ... some files later ...
@import "./templates/template20.scss";
```
### Describe the solution you'd like
I'd like to achieve the same above with the following syntax:
**mainTemplate.module.scss**
```
// one single import ...
@import "./templates/*";
// ... and it's done!
```
### Describe alternatives you've considered
I've found this Stack Overflow question:
https://stackoverflow.com/questions/44646201/create-react-app-how-do-i-import-all-scss-files-from-a-directory/44646609#44646609
Which refers to this npm package:
https://www.npmjs.com/package/import-glob-loader
### Additional context
Nothing to add
| issue: proposal,needs triage | low | Major |
498,240,724 | go | cmd/compile: inline functions that are called only once | ### What version of Go are you using (`go version`)?
<pre>
$ go version
go version devel +211932b Mon Sep 23 22:33:23 2019 +0000 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes.
### What did you do?
Take the following functions as an example:
```go
func f(n int) int {
var sum int
for i := 0; i < n; i++ {
sum += aux(i)
}
return sum
}
func aux(x int) (res int) {
// Long function that goes over the inline budget
// ...
return res
}
```
If it is known that function `aux` is only called once (in this case, in `f`), in spite of `aux` going over the inline budget, I think it would be beneficial to inline it in the caller.
I imagine there are many situations where developers extract a piece of code from a loop into a new function and then only call it once.
| Performance,NeedsDecision,compiler/runtime | low | Major |
498,254,012 | three.js | Anaglyph 3D - set it to zero parallax: 0 discrepancy at camera (currently set to negative parallax) | It's rather amazing that threejs has an anaglyph component - thanks heaps, this is a fantastic tool!
One question regarding anaglyph parallax setting: currently it is using negative parallax, as in, the zero point is further away from the viewer, and there is disparity both at a distance zero and far away (see screenshot for the problem, taken from the bottom of the screen, which has the white line starting 0 distance away. Further details about zero/negative parallax here: http://paulbourke.net/stereographics/stereorender/)

It would be more pleasing to the eye to have zero disparity where the camera is, so everything appears to be "behind" the computer screen.
After a quick look, it appears that the matrix values would need to change in https://github.com/mrdoob/three.js/blob/master/examples/js/effects/AnaglyphEffect.js to achieve this? I'm not entirely sure.
Any help would be appreciated! :) | Addons | low | Minor |
498,280,881 | node | http.ClientRequest with Upgrade header sometimes hangs on first request | * **Version**: 12.10.0 (also checked on 12.9.1)
* **Platform**: macOS 10.14.6 β Darwin 18.7.0 Darwin Kernel Version 18.7.0: Tue Aug 20 16:57:14 PDT 2019; root:xnu-4903.271.2~2/RELEASE_X86_64 x86_64
* **Subsystem**: http
Connecting to a newly started `http.Server` with a `http.ClientRequest` using `Connection: Upgrade` occasionally (~0.2% probability) hangs indefinitely. The request never reaches the server (no `connection` event is fired), but strangely the same issue does not appear when using a raw `net.Socket` on the client and sending equivalent data. The issue also does not appear if the server is bound to `127.0.0.1` rather than the default `0.0.0.0`.
The following code replicates the issue reliably; typically it will freeze after ~500 iterations:
```js
const http = require('http');
async function test() {
const server = http.createServer();
server.on('connection', () => {
process.stdout.write(`${Date.now()} | - connection\n`);
});
server.on('upgrade', (req, socket) => {
socket.end('HTTP/1.1 404 Not Found\r\n\r\n');
});
await new Promise((resolve) => server.listen(0, resolve));
const { port } = server.address();
await new Promise((resolve) => {
const req = new http.ClientRequest({
port,
headers: {
Connection: 'Upgrade',
Upgrade: 'x'
}
});
req.once('close', resolve);
req.end();
});
await new Promise((resolve) => server.close(resolve));
}
(async () => {
for (let i = 0; ; i++) {
process.stdout.write(`${Date.now()} | Attempt ${i}\n`);
await test();
}
})();
```
---
Changes which make the issue disappear:
- Change `server.listen` call to `server.listen(0, '127.0.0.1', resolve)`
- Change `http.ClientRequest` block to:
```js
await new Promise((resolve) => {
const s = new net.Socket();
s.on('close', resolve);
s.on('data', () => {});
s.connect(port, 'localhost', () => {
s.write([
'GET / HTTP/1.1',
'Connection: Upgrade',
'Upgrade: websocket',
'',
'',
].join('\r\n'));
s.end();
});
});
```
- Remove headers, and change related server event (`server.on('request', (req, res) => res.end());`)
---
Changes which *do not* make the issue disappear:
- adding a delay between `server.listen` and the request (tested with up to 1 second)
---
When the issue appears, even the server-side `connection` event does not fire.
This was found while debugging a suite of tests with occasional failures. Initially reported to https://github.com/websockets/ws/issues/1635 | http,macos | low | Critical |
498,308,527 | go | spec: clarify when calling recover stops a panic |
### What version of Go are you using (`go version`)?
<pre>
go version go1.13 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What did you do?
```golang
package main
import "fmt"
func demo() {
defer func() {
defer func() {
// recover panic 2
fmt.Println("panic", recover(), "is recovered")
}()
// recover panic 1
defer fmt.Println(" (done).")
defer recover()
defer fmt.Print("To recover panic 1 ...")
defer fmt.Println("now, two active panics coexist")
panic(2) // If this line is commented out, then the program will not crash.
}()
panic(1)
}
func main() {
demo()
}
```
### What did you expect to see?
Not crash.
### What did you see instead?
Crashes. | Documentation,NeedsInvestigation | medium | Critical |
498,310,521 | kubernetes | Add startup resource requirements |
**What would you like to be added**:
I would like startup resource requirements to be added. Now we have `requests` and `limits`, and I want something like `startup-request` for CPU and RAM.
**Why is this needed**:
Some applications use a lot of RAM and CPU during startup but not much while running. Today, such applications have to declare high `requests` just to be able to start. And once the app is running and using few resources, it still blocks the unused capacity so that other applications cannot use it.
For example, there is an app which requires 1 GB RAM and 1 CPU during startup. Once it has finished all preparations and started working in normal mode, it requires only 200 MB RAM and 200m CPU. Today, to make it deployable to Kubernetes we need to set resource requests of 1 GB RAM and 1 CPU, which is honest. But while the app is running it uses only a fifth of the requested resources, and other applications cannot claim the rest.
What I am suggesting for these cases is to introduce `startup-request`, which would apply until `startupProbe` (introduced in 1.16) returns `OK`. Once the app finishes startup, Kubernetes frees the unneeded resources (making them available for other apps) and the application is left only with the resources specified in the `requests` section.
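A purely illustrative manifest sketch of what this could look like (the `startup-request` field is part of this proposal and does not exist in any Kubernetes API today):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: heavy-startup-app
spec:
  containers:
  - name: app
    image: example/app:latest
    startupProbe:
      httpGet:
        path: /healthz
        port: 8080
    resources:
      # Proposed: reserved only until the startupProbe succeeds.
      startup-request:
        memory: "1Gi"
        cpu: "1"
      # Existing semantics: what the app needs in steady state.
      requests:
        memory: "200Mi"
        cpu: "200m"
      limits:
        memory: "1Gi"
        cpu: "1"
```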
| kind/feature,sig/apps,needs-triage | medium | Critical |
498,310,521 | pytorch | Google Summer of Code | @soumith Will PyTorch take part in GSoC next year? | triaged | low | Minor |
498,321,334 | TypeScript | Higher order type inference doesn't work with overloads | @ahejlsberg
**TypeScript Version:** [email protected]
**Search Terms:**
**Code**
```ts
interface Curried2<a, b, z> {
(a: a, b: b): z;
(a: a): (b: b) => z;
}
interface Curry {
<a, b, z>(f: (a: a, b: b) => z): Curried2<a, b, z>;
}
declare const curry: Curry;
curry(<T extends 0, U extends 1>(n: T, m: U) => n + m)(1)(0);
curry(<T extends 0, U extends 1>(n: T, m: U) => n + m)(1, 0);
```
Also the following doesn't work:
```ts
interface Curried2<a, b, z> {
(a: a): (b: b) => z;
(a: a, b: b): z;
}
```
When removing overloads as follows:
```ts
interface Curried2<a, b, z> {
(a: a, b: b): z;
}
```
or
```ts
interface Curried2<a, b, z> {
(a: a): (b: b) => z;
}
```
These work correctly.
**Expected behavior:**
Infer type parameters correctly and all parameter values are rejected correctly.
**Actual behavior:**
no error.
**Playground Link:**
**Related Issues:**
| Suggestion | low | Critical |
498,323,584 | TypeScript | Allow async functions to return union type T | Promise<T> |
## Search Terms
"the return type of an async function or method must be the global Promise type"
"typescript promise union type"
## Suggestion
You can explicitly declare a function's return type as a union that includes a `Promise<T>`.
While this works when manually managing promises, it results in a compiler error when using the async/await syntax.
For example, a function can have a return type `T | Promise<T>`. As the developer of an abstraction layer, this allows you to have an abstract return type that can be handled in a typesafe way but doesn't dictate implementation to consumers.
This improves developer ergonomics for consumers of a library without reducing the safety of the type system by allowing developers to change implementation as the system evolves while still meeting the requirements of the abstraction layer.
This only works, currently, if the developer explicitly manages the promises. A developer may start with something like this:
```
type ActionResponse<T> = T | Promise<T>;
function getCurrentUsername(): ActionResponse<string> {
return 'Constant Username';
}
async function logResponse<T>(response: ActionResponse<T>): Promise<void> {
const responseValue = await response;
console.log(responseValue);
}
logResponse(getCurrentUsername());
// Constant Username
```
Then, if the consumer of `logResponse` switches to a promise based method, there's no need to change the explicit return type:
```
function getCurrentUsername(): ActionResponse<string> {
// return 'Constant Username';
return Promise.resolve('Username from Database');
}
// Username from Database
```
However, if the consumer of `logResponse` prefers to use async/await instead of manually managing promises, this no longer works, yielding a compiler error instead:
> The return type of an async function or method must be the global Promise<T> type.
One workaround is to always return promises even when dealing non-async code:
```
async function getCurrentUsername(): Promise<string> {
return 'Constant Username';
// return Promise.resolve('Username from Database');
}
```
Another workaround is to use an implicit return type:
```
async function getCurrentUsername() {
return Promise.resolve('Username from Database');
}
```
These do get around the issue for sure, but they impose restrictions on consumers of the abstraction layer causing it to leak into implementation.
It seems valuable for the behavior to be consistent between using async/await and using `Promise` directly.
## Use Cases
This feature would be useful for developers who are building abstraction layers and would like to provide an abstract return type that could include promises. Some likely examples are middlewares, IoC containers, ORMs, etc.
In my particular case, it's with [inversify-express-utils](https://github.com/inversify/inversify-express-utils) where the [action invoked can be either async or not](https://github.com/inversify/inversify-express-utils/blob/master/src/server.ts#L252-L253) and the resulting behavior doesn't change.
## Examples
```
// this type
type ActionResponse<T> = T | Promise<T>;
// supports this function
function getCurrentUsername(): ActionResponse<string> {
return 'Constant Username';
}
// as it evolves over time into this function
async function getCurrentUsername(): ActionResponse<string> {
return Promise.resolve('Username from Database');
}
// and is handled transparently by functions like this
async function logResponse<T>(response: ActionResponse<T>): Promise<void> {
const responseValue = await response;
console.log(responseValue);
}
logResponse(getCurrentUsername());
```
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback | medium | Critical |
498,337,138 | TypeScript | Disallowing `throw`ing expressions that aren't assignable to `Error` | ## Search Terms
throw, throwing non-error
## Suggestion
Add typechecking to `throw` statement, allowing it to only throw `Error`s and `any`s.
Code currently allowed, but with strong potential for bugs:
```
throw null;
throw undefined;
throw 'foobar';
throw '';
throw 42;
throw {foo: 'bar'};
throw potentiallyNullable; // with βstrictNullChecks
```
After implementing this feature request, such code wouldn't be allowed.
## Use Cases
Even if it's possible to throw non-errors in JavaScript code, it's extremely confusing and hard to debug. Failing to properly handle such cases happened even to [Mozilla](https://bugzilla.mozilla.org/show_bug.cgi?id=1099071).
## Examples
Forbidden code:
```
throw null;
throw undefined;
throw 'foobar';
throw 42;
throw {foo: 'bar'};
throw potentiallyNullable; // with βstrictNullChecks
```
Allowed:
```
throw new Error();
throw null as any;
```
## Checklist
My suggestion meets these guidelines:
* [ ] This wouldn't be a breaking change in existing TypeScript/JavaScript code - it can break existing TypeScript code.
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback | low | Critical |
498,357,192 | angular | Optional Routing Parameters issue | # π bug report
### Affected Package
@angular/router
### Is this a regression?
Yes, working in angularjs 1.x
### Description
Optional Routing Parameters issue
Route declaration
const routes: Routes = [
{ path: '', component: SearchpagesComponent ,
children: [
{
path: 'near/:nearSearch/:val1/:val2/:name/:filter', loadChildren: './near-search/near-search.module#NearSearchModule',
}
]
}
];
RouterLink config
<a [routerLink]="['/near/','','37','122','San-Francisco','All']"> Goto Near Page - San Francisco</a>
With this we are able to achieve the optional param and load the component; the URL generated by routerLink is below, which is expected:
http://localhost:3000/near//37/122/San-Francisco/All
but when we refresh or reload page it's not working and redirecting to http://localhost:3000/
- The two forward slashes after "near" ("//") are required and the URL should remain the same.
- Expected fix: if we pass the `nearSearch` param as null, the component should be loaded and the empty segment ('//') should persist in the URL.
## π₯ Exception or Error
Redirecting back to home page
## π Your Environment
**Angular Version:**
Angular 5
| type: bug/fix,freq2: medium,area: router,state: confirmed,router: URL parsing/generation,P3 | medium | Critical |
498,466,979 | go | doc: document/reassert that last two releases are supported equally | The policy for what does and doesn't get backported is currently documented at https://golang.org/wiki/MinorReleases.
> Our default decision should always be to not backport, but fixes for security issues, serious problems with no workaround, and documentation fixes are backported to the most recent two release branches, if applicable to that branch.
>
> Fixes for experimental ports are generally not backported.
>
> A "serious" problem is one that prevents a program from working at all. "Use a more recent stable version" is a valid workaround, so very few fixes will be backported to both previous releases.
This is pretty vague, and it diverges from practice in quite a few ways. With @dmitshur, we discussed a more complete framework that reflects the current reality.
I propose we document it at https://golang.org/doc/devel/release.html#policy (in keeping with #34038) with the following text.
/cc @golang/osp-team
---
### Backporting policy
**Most recent major release.** The following changes are eligible for backporting to the latest major release.
* Security fixes, in their own release, according to the [security policy](https://golang.org/security).
* Fixes for serious problems with no workaround. A "serious" problem is one that prevents a program from working at all. This includes, for example, miscompilation issues.
* Early fixes for regressions and issues in new functionality. As the release matures, the bar for these changes gets higher: while the first minor release in a series will accept most fixes for user-visible regressions, after five months they will be mostly rejected. As more of the ecosystem upgrades, fixing regressions with workarounds becomes less and less valuable, and the tradeoff with stability shifts.
* Documentation changes that fix incorrect public docs.
Fixes for experimental ports are generally not backported.
**Previous major release.** The only fixes eligible for backporting to the previous major release (for example, to Go 1.10, once Go 1.11 has been released) are those that address external changes that would make the release unusable. This includes security fixes and platform compatibility fixes (for example, if a new version of an OS breaks Go programs).
We consider upgrading to the latest major release a valid workaround, and the purpose of maintaining the previous major release is only not to force users to upgrade unexpectedly, so pre-existing serious issues and regressions are only fixed in the latest major release. | Documentation,Proposal,Proposal-Accepted | high | Minor |
498,514,936 | go | x/build: add more race detector builders | We should have one for every supported race detector config.
We currently have:
```
darwin/amd64
freebsd/amd64
linux/amd64
windows/amd64
```
We'd need to add:
```
netbsd/amd64
linux/arm64
linux/ppc64le
```
@bradfitz
When attempting to rebuild the .syso files as part of #33309 , I ran into these additional platforms where race.bash does not pass, usually because of dumb reasons that would be easily caught and fixed if we had builders for them.
Builders would also be useful to test my fix for #33309 .
| Builders,NeedsFix,new-builder | low | Minor |
498,550,359 | go | encoding/json: Unmarshal & json.(*Decoder).Token report different values for SyntaxError.Offset for the same input | ### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.13 freebsd/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes.
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/max/.cache/go-build"
GOENV="/home/max/.config/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="freebsd"
GONOPROXY=""
GONOSUMDB=""
GOOS="freebsd"
GOPATH="/home/max/Projet/go"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/local/go"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/freebsd_amd64"
GCCGO="gccgo"
AR="ar"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build707409473=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
https://play.golang.org/p/zkVEGITpYIo
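(The linked playground is not reproduced in this report; the following is a hedged sketch of the kind of comparison it makes — the malformed input is a hypothetical stand-in, and the exact offsets depend on the input.)
```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

func main() {
	data := []byte(`{"a": 1x}`) // hypothetical malformed input

	// Offset reported by Unmarshal.
	var v interface{}
	if err := json.Unmarshal(data, &v); err != nil {
		if serr, ok := err.(*json.SyntaxError); ok {
			fmt.Println("Unmarshal offset:", serr.Offset)
		}
	}

	// Offset reported by (*Decoder).Token on the same input.
	dec := json.NewDecoder(bytes.NewReader(data))
	for {
		_, err := dec.Token()
		if err == nil {
			continue
		}
		if serr, ok := err.(*json.SyntaxError); ok {
			fmt.Println("Token offset:", serr.Offset)
		}
		break
	}
}
```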
### What did you expect to see?
The same Offset for the two error cases, as it is the same input.
### What did you see instead?
Different Offset. | help wanted,NeedsInvestigation | low | Critical |
498,555,742 | flutter | Use Raw Input api to support >5 mouse clicks and other devices | We might need this API to support joysticks and game controllers, and to distinguish between different devices of the same kind (e.g. two mice connected) | framework,platform-windows,a: desktop,P3,team-windows,triaged-windows | low | Minor |
498,558,401 | flutter | TextField/TextFormField labelText and hintText should be right-aligned with TextDirection.rtl | Right-to-left writing is not applied to the `labelText` of `TextFormField`/`TextField`.
I tried `textAlign` and `textDirection`; although they work for `hintText` and the user input, they don't work for the `labelText`.

[textfield.txt](https://github.com/flutter/flutter/files/3655101/textfield.txt)
```dart
TextFormField(
textAlign: Language.lang == 'NotArabic' ?TextAlign.left:TextAlign.right,
validator: (value) {
return Validators().validateEmail(value);
},
controller: _emailController,
decoration: new InputDecoration(
hintStyle: new TextStyle(color: Colors.grey[500],),
labelStyle: new TextStyle(color: Colors.grey[900],),
hintText: 'hint Email',
labelText: 'label Email',
fillColor: Colors.white70),
textDirection:
Language.lang == 'NotArabic' ? TextDirection.ltr : TextDirection.rtl,
)
``` | a: text input,c: new feature,framework,f: material design,a: internationalization,P2,team-text-input,triaged-text-input | low | Major |
498,587,256 | PowerToys | Bring back the floating toolbars in taskbar | # Summary of the new feature/enhancement
A long time ago and until Windows 7, people could create **toolbars** on the taskbar and then **drag them off the taskbar**, either leaving them floating on the desktop or **attaching them on a different desktop edge** β like the top, left or right. Those toolbars _**could even be set to auto-hide**_ so they wouldn't waste space.
Users could have specific toolbars for **shortcuts**, for showing the **drives** (my personal favourite) in "My Computer" or the contents of any other directory. That was a really powerful and very useful feature.
I remember it got _deprecated_ on Windows Vista, but if memory serves there was a little "trick" you could do to have it work. By Windows 7, it was gone.
Please bring it back!
<!--
A clear and concise description of what the problem is that the new feature would solve.
Describe why and how a user would use this new functionality (if applicable).
-->
# Proposed technical implementation details (optional)
I don't have a proposal, but I'm sure the code allowing the toolbars to be detached from the taskbar and re-attached to other monitor edges is still in there, somewhere.
<!--
A clear and concise description of what you want to happen.
-->
| Idea-New PowerToy,Product-Tweak UI Design | low | Major |
498,601,573 | flutter | ReorderableListView requires child keys to be identical across builds | Internally ReorderableListView creates a GlobalObjectKey per child, where that key's value is the child's key.
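A minimal Dart sketch (illustrative only; the string-valued key is a hypothetical example) of the equal-but-not-identical situation this leads to, as explained below:
```dart
import 'package:flutter/foundation.dart';

void main() {
  // Two keys as a widget might create them on consecutive build passes.
  final first = ValueKey<String>('row-1');
  final second = ValueKey<String>('row-1');

  print(first == second);          // true: equal by value
  print(identical(first, second)); // false: distinct instances, so the
                                   // wrapping GlobalObjectKeys compare unequal
}
```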
GlobalObjectKey uses the identical compare function to determine equality. This means that if child keys are regenerated on each build pass, the ReorderableListView subtree will rebuild, even if the keys compare equal to the previous pass. | framework,f: material design,P2,team-design,triaged-design | low | Major |
498,617,312 | rust | Tracking issue for RFC 2523, `#[cfg(version(..))]` | This is a tracking issue for `#[cfg(version(..))]` (rust-lang/rfcs#2523).
**Steps:**
- [x] Implement the RFC (cc @rust-lang/compiler -- can anyone write up mentoring instructions?)
- [ ] Adjust documentation ([see instructions on rustc-guide][doc-guide])
- [ ] Stabilization PR ([see instructions on rustc-guide][stabilization-guide])
[stabilization-guide]: https://rust-lang.github.io/rustc-guide/stabilization_guide.html#stabilization-pr
[doc-guide]: https://rust-lang.github.io/rustc-guide/stabilization_guide.html#documentation-prs
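For reference, a usage sketch (hedged: this assumes the RFC's `version("X.Y")` form and the nightly `cfg_version` feature gate; it says nothing about the current implementation status):
```rust
#![feature(cfg_version)]

#[cfg(version("1.42"))]
mod imp {
    // Code that relies on functionality available since Rust 1.42.
}

#[cfg(not(version("1.42")))]
mod imp {
    // Fallback for older toolchains.
}
```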
**Unresolved questions:**
- [x] What is `cfg(version(...))` relative to in terms of nightly compilers?
We could check against what `rustc --version` says, e.g. nightly being `1.40.0`, beta being `1.39.0`, and stable being `1.38.0`. We could also have `cfg(version(...))` at most be relative to the beta compiler. See the RFC for a longer discussion.
- Resolved in https://github.com/rust-lang/rust/issues/64796#issuecomment-625474439
- [ ] Should we also support `version = "..."` so that crates having a MSRV below when `version(...)` was stabilized can use the flag?
- [ ] Dependency updates cause language changes (https://github.com/rust-lang/rust/issues/79010) | B-RFC-approved,T-lang,C-tracking-issue,S-blocked,F-cfg_version | high | Critical |
498,617,958 | rust | Tracking issue for RFC 2523, `#[cfg(accessible(::path::to::thing))]` | This is a tracking issue for `#[cfg(accessible(::path::to::thing))]` (rust-lang/rfcs#2523).
## Steps
- [ ] Implement the RFC: partially done in https://github.com/rust-lang/rust/pull/69870
- [ ] Adjust documentation ([see instructions on rustc-guide][doc-guide])
- [ ] Stabilization PR ([see instructions on rustc-guide][stabilization-guide])
[stabilization-guide]: https://rust-lang.github.io/rustc-guide/stabilization_guide.html#stabilization-pr
[doc-guide]: https://rust-lang.github.io/rustc-guide/stabilization_guide.html#documentation-prs
## Status
From [this comment](https://github.com/rust-lang/rust/issues/64797#issuecomment-625803760)
- the surface of the feature was implemented in #69870 as an attribute `#[cfg_accessible(path)] item`. The attribute can configure or unconfigure the `item` and wait until the predicate "`path` is accessible" becomes determinate.
- the predicate itself is not implemented, it either returns truth if the path is certainly available, or indeterminacy if we need to try again later, or reports an error otherwise. So the attribute is not usable in practice yet.
- desugaring of `#[cfg(accessible)]` into `#[cfg_accessible]` is not implemented, we need to consider doing or not doing it only when everything else is implemented.
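To make the attribute surface described in the first bullet concrete, a sketch (hedged: the feature-gate name is an assumption, and per the status above the accessibility predicate is not functional yet, so this is illustrative only):
```rust
#![feature(cfg_accessible)]

// Kept only if the path is determined to be accessible; configured out
// otherwise (or an error is reported if the path is certainly absent).
#[cfg_accessible(std::net::TcpStream)]
fn connect_over_tcp() {
    // ...
}
```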
## Unresolved questions:
None so far. | B-RFC-approved,T-lang,C-tracking-issue,F-cfg_accessible,S-tracking-ready-to-stabilize,S-tracking-impl-incomplete | high | Critical |
498,627,033 | pytorch | [BUG Report] Integrate libtorch into ffmpeg but memory leak happened! | ## 🐛 Bug
```
<!-- A clear and concise description of what the bug is. -->
==58309== 3,120 (1,800 direct, 1,320 indirect) bytes in 15 blocks are definitely lost in loss record 14,673 of 15,041
==58309== at 0x4C2A4C3: operator new(unsigned long) (vg_replace_malloc.c:344)
==58309== by 0x287075D3: c10::cuda::CUDACachingAllocator::THCCachingAllocator::malloc(void**, unsigned long, CUstream_st*) (in /data/source/pytorch/build/lib/libc10_cuda.so)
==58309== by 0x28708A49: c10::cuda::CUDACachingAllocator::CudaCachingAllocator::allocate(unsigned long) const (in /data/source/pytorch/build/lib/libc10_cuda.so)
==58309== by 0xC0C5DE5: THCStorage_resize (in /data/source/pytorch/build/lib/libtorch.so)
==58309== by 0x9A51EAB: at::native::empty_strided_cuda(c10::ArrayRef<long>, c10::ArrayRef<long>, c10::TensorOptions const&) (in /data/source/pytorch/build/lib/libtorch.so)
==58309== by 0xBF9C018: at::CUDAType::empty_strided(c10::ArrayRef<long>, c10::ArrayRef<long>, c10::TensorOptions const&) (in /data/source/pytorch/build/lib/libtorch.so)
==58309== by 0xA14729C: at::TensorIterator::allocate_outputs() (in /data/source/pytorch/build/lib/libtorch.so)
==58309== by 0xA148A59: at::TensorIterator::build() (in /data/source/pytorch/build/lib/libtorch.so)
==58309== by 0xA148CA6: at::TensorIterator::binary_op(at::Tensor&, at::Tensor const&, at::Tensor const&, bool) (in /data/source/pytorch/build/lib/libtorch.so)
==58309== by 0x9F8E535: at::native::add(at::Tensor const&, at::Tensor const&, c10::Scalar) (in /data/source/pytorch/build/lib/libtorch.so)
==58309== by 0xBF9D0F0: at::CUDAType::add(at::Tensor const&, at::Tensor const&, c10::Scalar) (in /data/source/pytorch/build/lib/libtorch.so)
==58309== by 0xB987737: torch::autograd::VariableType::add(at::Tensor const&, at::Tensor const&, c10::Scalar) (in /data/source/pytorch/build/lib/libtorch.so)
## To Reproduce
```
Steps to reproduce the behavior:
1. compile libtorch from pytorch source master and tag v1.2.0.
2. integrate libtorch to ffmpeg and compile ffmpeg debug version(not release version)
3. detect memory leak and that happend!
## Environment
```
PyTorch version: 1.1.0
Is debug build: No
CUDA used to build PyTorch: 9.0.176
OS: CentOS Linux 7 (Core)
GCC version: (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39)
CMake version: version 3.14.0
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 9.0.176
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
Nvidia driver version: 410.79
cuDNN version: Could not collect
Versions of relevant libraries:
[pip] numpy==1.16.4
[pip] torch==1.1.0
[pip] torchsummary==1.5.1
[pip] torchvision==0.3.0
[conda] magma-cuda90 2.5.0 1 pytorch
[conda] mkl 2019.4 243 defaults
[conda] mkl-include 2019.4 243 defaults
[conda] mkldnn 0.16.1 0 mingfeima
[conda] pytorch 1.1.0 py3.6_cuda9.0.176_cudnn7.5.1_0
[conda] torchsummary 1.5.1 pypi_0 pypi
[conda] torchvision 0.3.0 py36_cu9.0.176_1
```
## Additional context
<!-- Add any other context about the problem here. -->
| module: build,triaged,module: vision | low | Critical |
498,640,505 | flutter | Crashing when trying to get dart extension on VSCode | Flutter crash report; please file at https://github.com/flutter/flutter/issues.
## command
flutter packages get
## exception
NoSuchMethodError: NoSuchMethodError: The method '[]' was called on null.
Receiver: null
Tried calling: []("androidPackage")
```
#0 Object.noSuchMethod (dart:core-patch/object_patch.dart:51:5)
#1 _validateFlutter (package:flutter_tools/src/flutter_manifest.dart:381:22)
#2 _validate (package:flutter_tools/src/flutter_manifest.dart:317:9)
#3 FlutterManifest._createFromYaml (package:flutter_tools/src/flutter_manifest.dart:43:34)
#4 FlutterManifest.createFromString (package:flutter_tools/src/flutter_manifest.dart:38:12)
#5 FlutterManifest.createFromPath (package:flutter_tools/src/flutter_manifest.dart:32:12)
#6 FlutterProject._readManifest (package:flutter_tools/src/project.dart:181:34)
#7 FlutterProjectFactory.fromDirectory (package:flutter_tools/src/project.dart:34:53)
#8 FlutterProject.fromDirectory (package:flutter_tools/src/project.dart:64:78)
#9 FlutterProject.fromPath (package:flutter_tools/src/project.dart:72:50)
#10 PackagesGetCommand.usageValues (package:flutter_tools/src/commands/packages.dart:81:55)
<asynchronous suspension>
#11 FlutterCommand.verifyThenRunCommand (package:flutter_tools/src/runner/flutter_command.dart:484:21)
<asynchronous suspension>
#12 FlutterCommand.run.<anonymous closure> (package:flutter_tools/src/runner/flutter_command.dart:407:33)
<asynchronous suspension>
#13 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:154:29)
<asynchronous suspension>
#14 _rootRun (dart:async/zone.dart:1124:13)
#15 _CustomZone.run (dart:async/zone.dart:1021:19)
#16 _runZoned (dart:async/zone.dart:1516:10)
#17 runZoned (dart:async/zone.dart:1463:12)
#18 AppContext.run (package:flutter_tools/src/base/context.dart:153:18)
<asynchronous suspension>
#19 FlutterCommand.run (package:flutter_tools/src/runner/flutter_command.dart:397:20)
#20 CommandRunner.runCommand (package:args/command_runner.dart:197:27)
<asynchronous suspension>
#21 FlutterCommandRunner.runCommand.<anonymous closure> (package:flutter_tools/src/runner/flutter_command_runner.dart:402:21)
<asynchronous suspension>
#22 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:154:29)
<asynchronous suspension>
#23 _rootRun (dart:async/zone.dart:1124:13)
#24 _CustomZone.run (dart:async/zone.dart:1021:19)
#25 _runZoned (dart:async/zone.dart:1516:10)
#26 runZoned (dart:async/zone.dart:1463:12)
#27 AppContext.run (package:flutter_tools/src/base/context.dart:153:18)
<asynchronous suspension>
#28 FlutterCommandRunner.runCommand (package:flutter_tools/src/runner/flutter_command_runner.dart:356:19)
<asynchronous suspension>
#29 CommandRunner.run.<anonymous closure> (package:args/command_runner.dart:112:25)
#30 new Future.sync (dart:async/future.dart:224:31)
#31 CommandRunner.run (package:args/command_runner.dart:112:14)
#32 FlutterCommandRunner.run (package:flutter_tools/src/runner/flutter_command_runner.dart:242:18)
#33 run.<anonymous closure>.<anonymous closure> (package:flutter_tools/runner.dart:63:22)
<asynchronous suspension>
#34 _rootRun (dart:async/zone.dart:1124:13)
#35 _CustomZone.run (dart:async/zone.dart:1021:19)
#36 _runZoned (dart:async/zone.dart:1516:10)
#37 runZoned (dart:async/zone.dart:1500:12)
#38 run.<anonymous closure> (package:flutter_tools/runner.dart:61:18)
<asynchronous suspension>
#39 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:154:29)
<asynchronous suspension>
#40 _rootRun (dart:async/zone.dart:1124:13)
#41 _CustomZone.run (dart:async/zone.dart:1021:19)
#42 _runZoned (dart:async/zone.dart:1516:10)
#43 runZoned (dart:async/zone.dart:1463:12)
#44 AppContext.run (package:flutter_tools/src/base/context.dart:153:18)
<asynchronous suspension>
#45 runInContext (package:flutter_tools/src/context_runner.dart:58:24)
<asynchronous suspension>
#46 run (package:flutter_tools/runner.dart:50:10)
#47 main (package:flutter_tools/executable.dart:65:9)
<asynchronous suspension>
#48 main (file:///C:/src/flutter/packages/flutter_tools/bin/flutter_tools.dart:8:3)
#49 _startIsolate.<anonymous closure> (dart:isolate-patch/isolate_patch.dart:303:32)
#50 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:172:12)
```
## flutter doctor
```
[✓] Flutter (Channel stable, v1.9.1+hotfix.2, on Microsoft Windows [Version 10.0.17134.1006], locale en-US)
• Flutter version 1.9.1+hotfix.2 at C:\src\flutter
• Framework revision 2d2a1ffec9 (3 weeks ago), 2019-09-06 18:39:49 -0700
• Engine revision b863200c37
• Dart version 2.5.0
[✓] Android toolchain - develop for Android devices (Android SDK version 29.0.1)
• Android SDK at C:\Users\rosna\AppData\Local\Android\Sdk
• Android NDK location not configured (optional; useful for native profiling support)
• Platform android-29, build-tools 29.0.1
• ANDROID_HOME = C:\Users\rosna\AppData\Local\Android\Sdk
• Java binary at: C:\Program Files\Android\Android Studio\jre\bin\java
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1343-b01)
• All Android licenses accepted.
[!] Android Studio (version 3.4)
• Android Studio at C:\Program Files\Android\Android Studio
✗ Flutter plugin not installed; this adds Flutter specific functionality.
✗ Dart plugin not installed; this adds Dart specific functionality.
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1343-b01)
[✓] VS Code (version 1.38.1)
• VS Code at C:\Users\rosna\AppData\Local\Programs\Microsoft VS Code
• Flutter extension version 3.4.1
[✓] Connected device (1 available)
• Android SDK built for x86 • emulator-5554 • android-x86 • Android 8.0.0 (API 26) (emulator)
! Doctor found issues in 1 category.
```
| c: crash,tool,platform-windows,a: first hour,P2,team-tool,triaged-tool | low | Critical |
498,776,028 | flutter | ChangeNotifier.notifyListeners does not block execution if one listener fails | Consider the following code:
```dart
void main() {
final notifier = ValueNotifier(0);
notifier.addListener(() {
throw Error();
});
notifier.notifyListeners();
print('reached');
}
```
Then the `print('reached')` is executed even though the previous line throws an exception.
This is usually unexpected, and unless one closely follows the console logs, it can be missed
Here's an example of a developer being surprised by this behavior: https://github.com/rrousselGit/provider/issues/228#issuecomment-535414020
To solve this, the idea would be to make `notifyListeners` throw if _at least one listener_ threw an exception.
| c: new feature,framework,d: api docs,c: proposal,P2,team-framework,triaged-framework | low | Critical |
498,822,259 | godot | glTF importer ignores "-colonly" import hint on Blender empty objects | **Godot version:**
Both 3.1.1 and 202440a
**OS/device including version:**
Ubuntu 19.10
**Issue description:**
Documentation on [import hints](https://docs.godotengine.org/en/latest/getting_started/workflow/assets/importing_scenes.html#import-hints) states that:
> Option β-colonlyβ can also be used with Blenderβs empty objects. On import, it will create a StaticBody with a collision node as a child.
However, when importing a glTF file with such an empty into Godot, it just turns into a _Spatial_ node as such:

A Collada file exported from the same Blender scene shows the expected behavior:

where `Cube3-TestColonly` is a _StaticBody_ and `BoxShape` is a _CollisionShape_
**Steps to reproduce:**
* Import a glTF file exported from a Blender scene that includes an empty having `-colonly` appended to its name
* Observe that the empty has turned into a _Spatial_ node instead of a _StaticBody_
**Minimal reproduction project:**
[glTF import test.zip](https://github.com/godotengine/godot/files/3657466/glTF.import.test.zip)
The project directory also includes the Blender (2.80) file used to export the glb and dae (with _Better Collada exporter_) files.
The node of interest is `Cube3-TestColonly`, other ones are here to showcase other kind of import hints. | bug,confirmed,topic:import,topic:3d | medium | Critical |
498,858,215 | TypeScript | TS3.6 regression: Map constructor overloads | **TypeScript Version:** 3.7.0-dev.20190917
**Search Terms:** Map constructor overload array concatenation
**Code**
```ts
const M1 = new Map([
['k', {a: 1, b: 'asd'}],
['k', {a: 2, b: 'asd'}],
]);
const M2 = new Map([
...M1,
['k', {a: 1}],
]);
```
**Expected behavior:**
Success, constructs a `const M2: Map<string, {a: number;}>`.
**Actual behavior:**
```
input.ts(6,20): error TS2769: No overload matches this call.
The last overload gave the following error.
Argument of type '([string, { a: number; b: string; }] | [string, { a: number; }])[]' is not assignable to parameter of type 'readonly (readonly [string, { a: number; b: string; }])[]'.
Type '[string, { a: number; b: string; }] | [string, { a: number; }]' is not assignable to type 'readonly [string, { a: number; b: string; }]'.
Type '[string, { a: number; }]' is not assignable to type 'readonly [string, { a: number; b: string; }]'.
Types of property '1' are incompatible.
Property 'b' is missing in type '{ a: number; }' but required in type '{ a: number; b: string; }'.
```
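For what it's worth, a possible workaround (not from the report; assumes an ES2015+ target so the Map spread is allowed): supplying the type arguments explicitly avoids inferring the entry type from the mixed array literal.
```ts
const M2 = new Map<string, { a: number }>([
  ...M1,
  ['k', { a: 1 }],
]);
```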
**Playground Link:**
https://www.typescriptlang.org/play/#code/MYewdgzgLgBAsgRhgXhmApgd3gQwA4AUA2gFAwxEDkA1pQDQwDeOAXDAgwEZuU4QAmlAL4BdOmQo16TVjABMXHn0GjxIgJQBuEiVCRYcOSjRZchUuQB01xOPJVaDZmwSqSGkkA
**Related Issues:**
None found.
| Needs Investigation | low | Critical |
499,249,195 | pytorch | Statically checked tensor shapes | ## 🚀 Feature
(Long term request, mostly to gather feedback on our current experiment)
We would like to extend the `Tensor` class with the description of its dimensions shape and values, to enable static checking of tensor operations w.r.t. to shapes (e.g. detecting an illegal call to `Tensor.mm` statically rather than with a runtime exception). The new syntax would look like `Tensor[int32, dim0, dim1, dim2]`
## Motivation
At the moment, PyTorch is more or less untyped: everything is a `Tensor` and there is no information whatsoever on the dimensions of these tensors. By being more descriptive, we could statically check (i.e. at compile time, rather than at runtime) that tensor operations are executed on arguments with the right shape. For example, we could catch this kind of error during type checking:
```
T0 : Tensor[int32, D3, D4] = ...
T1 : Tensor[int32, D5, D5] = ...
T2 = Tensor.mm(T0, T1) # mismatch: D4 != D5
```
This could save lots of computer time (less runtime errors) along with debugging time.
## Pitch
Recently [Pyre](https://github.com/facebook/pyre-check) added experimental support for variadic type variables, which allows describing the shape of tensors.
It allowed me to write some initial stubs for PyTorch where the tensor type has a documented shape. This shape can be checked statically by Pyre to prevent most misuse of PyTorch operators.
As an example, I took [this script](https://github.com/pytorch/examples/blob/29c2ed8ca6dc36fc78a3e74a5908615619987863/regression/main.py#L43) and translated it into this [typed version](https://github.com/facebook/pyre-check/blob/master/examples/pytorch/sources/linear_regression.py). The main stubs are located [here](https://github.com/facebook/pyre-check/blob/master/examples/pytorch/stubs/_torch/__init__.pyi).
We already got some very positive feedback from the Python types community last Friday during Facebook MPK's Python meetup. So now I'm asking the PyTorch community :D
Known limitations: this is an early draft of the project, so we can't type everything at the moment. For example, we only support simple broadcasting (like `Tensor.__add__` when the rhs is a scalar; nothing for `Tensor.matmul` yet). Also, there are some functions that just can't be statically checked (like `Tensor.cat`) and which require manual annotation.
## Alternatives
Currently known alternatives are all runtime checks (like the Named Tensor proposal), which address the same problem, but still at runtime, which could be less efficient when programs run for several hours/days.
## Additional context
I don't expect PyTorch to migrate to this solution right now, I'm gathering feedback on the experiment to see where to go next. Our next stop is to support broadcasting, and I would gladly have some direction on which killer feature we should try to support next. | module: internals,feature,triaged | high | Critical |
498,862,344 | youtube-dl | New Pokémon TV Player Support | <!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2019.09.12.1. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that site you are requesting is not dedicated to copyright infringement, see https://yt-dl.org/copyright-infringement. youtube-dl does not support such sites. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
- Search the bugtracker for similar site support requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Finally, put x into all relevant boxes (like this [x])
-->
- [x] I'm reporting a new site support request
- [x] I've verified that I'm running youtube-dl version **2019.09.12.1**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that none of provided URLs violate any copyrights
- [x] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
<!--
Provide all kinds of example URLs support for which should be included. Replace following example URLs by yours.
-->
- Single video: https://watch.pokemon.com/en-gb/player.html?id=1b0e462fc0184fbfb8d239956c0e0e4f
- Single video: https://watch.pokemon.com/de-de/player.html?id=b85ebd49197e49259c4f01780b0585f7
## Description
<!--
Provide any additional information.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->
Pokémon TV got updated and is no longer supported by youtube-dl. Would it be possible to add support for the new version of the site?
| site-support-request | low | Critical |
498,874,841 | puppeteer | Feature Request: Composition Events (Keyboard API) | We're using Puppeteer at WordPress, where we've created a custom editor. One thing we keep running into is bugs around input composition. Every now and then things break, and we're not able to create tests for it with Puppeteer. It would be great if Puppeteer provided an additional `keyboard` API to simulate composing characters with the keyboard.
https://w3c.github.io/uievents/#events-composition-types
In other words, this will not produce `è` in Puppeteer:
```js
await page.keyboard.down( 'Alt' );
await page.keyboard.type( '`' );
await page.keyboard.up( 'Alt' );
await page.keyboard.type( 'e' );
```
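A possible partial workaround with the existing API (a sketch, not from the report; it inserts the composed character directly rather than emitting composition events, so it does not exercise the IME path this request is about):
```js
await page.keyboard.sendCharacter( 'è' );
```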
Thanks for the consideration! | feature,confirmed | low | Critical |
498,902,845 | flutter | Failed: ./flutter/tools/gn --android --runtime-mode debug --android-cpu x86 | My Engine branch is v1.9.1-hotfixes
When I try to generate android_debug_x86 with the command
```
./flutter/tools/gn --android --runtime-mode debug --android-cpu x86
```
The failure message is as follows:
```
Generating GN files in: out/android_debug_x86
ERROR Unresolved dependencies.
//flutter/lib/snapshot:generate_snapshot_bin(//build/toolchain/android:clang_x86)
needs //third_party/dart/runtime/bin:gen_snapshot(//build/toolchain/mac:clang_x86)
//flutter/runtime:dart_snapshot_kernel_dart_snapshot_runtime_fixtures(//build/toolchain/android:clang_x86)
needs //third_party/dart/runtime/bin:dart(//build/toolchain/mac:clang_x86)
//flutter/shell/platform/embedder:dart_snapshot_kernel_dart_snapshot_fixtures(//build/toolchain/android:clang_x86)
needs //third_party/dart/runtime/bin:dart(//build/toolchain/mac:clang_x86)
//third_party/dart/runtime/bin:generate_snapshot_bin(//build/toolchain/android:clang_x86)
needs //third_party/dart/runtime/bin:gen_snapshot(//build/toolchain/mac:clang_x86)
//third_party/dart/utils/kernel-service:frontend_server(//build/toolchain/android:clang_x86)
needs //third_party/dart/runtime/bin:dart(//build/toolchain/mac:clang_x86)
//third_party/dart/utils/kernel-service:frontend_server(//build/toolchain/android:clang_x86)
needs //third_party/dart/runtime/vm:kernel_platform_files(//build/toolchain/mac:clang_x86)
//third_party/dart/utils/kernel-service:kernel-service_snapshot(//build/toolchain/android:clang_x86)
needs //third_party/dart/runtime/bin:dart(//build/toolchain/mac:clang_x86)
//third_party/dart/utils/kernel-service:kernel_service_bytecode_dill(//build/toolchain/android:clang_x86)
needs //third_party/dart/runtime/vm:kernel_platform_files(//build/toolchain/mac:clang_x86)
//third_party/dart/utils/kernel-service:kernel_service_dill(//build/toolchain/android:clang_x86)
needs //third_party/dart/runtime/vm:kernel_platform_files(//build/toolchain/mac:clang_x86)
```
and I found the commit id "609dd1c93d4e339472fc75d25db80b66ccea2355", which removes support for the i386 variant of the Mac toolchain (#300).
Why? | engine,d: api docs,P2,team-engine,triaged-engine | low | Critical |
498,939,963 | godot | Listener doesn't output sound if there's no Camera on the scene | Hit this when trying to use `AudioStreamPlayer3D` in a 2-D world to get its advanced panning/rotation support. Essentially I'm syncing the X and Z coordinates of `AudioStreamPlayer3D` nodes with the X and Y coordinates of a `Node2D` parent, as well as their rotations. This seems to work OK for sound sources, but it failed for `Listener` and took days plus help to debug.
Essentially, I can't have a `Listener` as a child of a `Node2D` (or even a `Spatial`) and get audio. Instead, I have to:
* Create a separate `Viewport`.
* Create a `Camera` as a child of the viewport. I don't want 3-D rendering, so I hope this prevents any visible rendering.
* Create a `Listener` as a child of that camera.
* Enable the 3-D listener property on the parent Viewport.
Once I've done that, I can sync `AudioStreamPlayer3D` nodes with a `Node2D` parent. I can also sync the `Listener` with another `Node2D`, but I can't use the parent-child relationship. I have to tag it.
I looked through the listener source, and don't immediately see where it would fail if there isn't a `Camera` in its ancestry.
[Here](https://github.com/ndarilek/godot-audio-failure) is a fairly minimal reproduction of what I tried and failed. The reproduction uses a `Spatial` as the `Listener` parent, but it'd also be nice if I could use a `Node2D`. I figured a `Spatial` parent would be the least likely to fail though, but it did.
Using a custom built Godot 3.1.1.
Thanks. | bug,discussion,confirmed,topic:audio | low | Critical |
498,955,780 | vscode | [api] Allow extensions to determine if a position is within a fold | This has been the most demanded feature of VSCodeVim: [VSCodeVim/Vim#1004](https://github.com/VSCodeVim/Vim/issues/1004) for over 3 years.
The main problem with Folds and Vim is that some motions will skip right over folded areas (like moving up/down). We need to know if we are in a folded area so we can iterate these motions until we are out of the fold.
An API like ``vscode.window.activeTextEditor.getAllFoldedRegions(): vscode.Range[]`` would be ideal.
An API like ``vscode.window.activeTextEditor.isPositionInFold(position: vscode.Position): boolean`` would also be great.
* * *
This is technically a duplicate for [22276](https://github.com/microsoft/vscode/issues/22276), but that was closed because the roadmap at the time couldn't include this ticket.
Maybe it's time to revisit this? It has been blocking the most demanded feature of one of the most popular plugins out there, for 3+ years. | feature-request,api,VIM | high | Critical |
499,003,094 | go | cmd/go: misleading 'use of internal package not allowed' in GOPATH mode when a subtree vendors its own package | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.12.7 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
### What operating system and processor architecture are you using (`go env`)?
RHEL 8 (Ootpa)
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/srinivas/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/srinivas/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/go"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build192096706=/tmp/go-build -gno-record-gcc-switches"
Dep version
dep:
version : v0.5.4
build date :
git hash :
go version : go1.12.7
go compiler : gc
platform : linux/amd64
features : ImportDuringSolve=false
</pre>
</details>
### What did you do?
<pre>
git clone https://github.com/wavefrontHQ/wavefront-kubernetes-collector.git
cd wavefrontHQ/wavefront-kubernetes-collector
dep ensure
make
</pre>
<!--
If possible, provide a recipe for reproducing the error.
A complete runnable program is good.
A link on play.golang.org is best.
-->
### What did you expect to see?
I was expecting the build to be successful
### What did you see instead?
<pre>
go vet -composites=false ./...
cmd/test-driver/main.go:13:2: use of internal package github.com/wavefrontHQ/wavefront-kubernetes-collector/vendor/github.com/wavefronthq/wavefront-kubernetes-collector/internal/options not allowed
cmd/wavefront-collector/main.go:21:2: use of internal package github.com/wavefrontHQ/wavefront-kubernetes-collector/vendor/github.com/wavefronthq/wavefront-kubernetes-collector/internal/agent not allowed
cmd/wavefront-collector/main.go:22:2: use of internal package github.com/wavefrontHQ/wavefront-kubernetes-collector/vendor/github.com/wavefronthq/wavefront-kubernetes-collector/internal/configuration not allowed
</pre>
| NeedsFix | low | Critical |
499,009,717 | electron | BrowserWindow's size gets clamped to monitor's bounds | ### Preflight Checklist
* [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/master/CONTRIBUTING.md) for this project.
* [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/master/CODE_OF_CONDUCT.md) that this project adheres to.
* [x] I have searched the issue tracker for an issue that matches the one I want to file, without success.
### Issue Details
* **Electron Version:** 5.0.7
* **Operating System:** Windows 10
### Expected Behavior
In my new-window handler, I set the size of the window to something that exceeds the size of the monitor that the window is about to be opened on, and then set the position (via `options`) to move it to another monitor. I expect my position and size to be respected.
### Actual Behavior
Position is respected and the window moves to the correct place, but the window size is actually clamped to the size of the initial monitor that the window was about to be launched on.
### To Reproduce
Not exactly my case, but a similar behavior can be reproduced via:
```
const { app, BrowserWindow } = require('electron')
function createWindow() {
const mainWindow = new BrowserWindow({x: 0, y: 0, width: 5000, height: 1350});
console.log(mainWindow.getBounds());
}
app.on('ready', createWindow)
```
It logs `{ x: 0, y: 0, width: 2560, height: 1350}` which is the maximum size of the monitor on which it was launched.
The first time I launched it, I guess it was going to create it on the second monitor instead of the first, and so the width/height was set to the second monitor's bounds even though the window was moved to the first monitor (0,0).
Now this might due to the fact that Chrome clamps window bounds, so if you try launching a window via `window.open` with a width/height greater than the monitor, the window's size will be clamped to the monitor size.
So if there is a bug, the bug is that the BrowserWindow's size is the initial monitor's size, not the size of the monitor that we've repositioned the window to. | platform/windows,bug :beetle:,status/confirmed,5-0-x,component/BrowserWindow,7-1-x,10-x-y | medium | Critical |
499,053,422 | godot | Godot produces confusing error when a variable identifier shadows a keyword | **Godot version:**
3.1.1
**Issue description:**
Godot doesn't allow scripts to create identifiers which shadow keywords; this is good and makes sense. However, Godot serves an error which doesn't help the developer notice that they have misused a keyword, which leads to confusion.
> Parser Error: Expected identifier for local variable name.
**Steps to reproduce:**
1. Attempt to use a reserved keyword as a variable name
2. The interpreter serves an error which, on the surface, doesn't appear to be related to the actual issue
> Parser Error: Expected identifier for local variable name.
**Minimal reproduction project:**
```
extends Node
func _ready():
var floor = "test" # error
``` | enhancement,topic:gdscript | low | Critical |
499,063,732 | flutter | Embedder API should warn when software and GL renderer backing stores are mixed. | Currently, it is the embedder's responsibility to be consistent with the client rendering API of the backing stores being used with the custom compositor. However, mixing them will lead to significant performance issues (if it works at all in the case of a software root surface renderer config and an OpenGL compositor backing store).
The embedder API must make sure the client rendering API of the compositor backing stores matches the root renderer config and return an error otherwise. | engine,e: embedder,P2,team-engine,triaged-engine | low | Critical |
499,128,390 | TypeScript | Investigate altering extension priorities for wildcard loading | Our rules for loading files from wildcard matches prioritize `.d.ts` files over `.json` files (and at the same priority as `.js` files). Now that we support declaration files for `.json` and `.js` files, this is undesirable, as when declarations are emitted in-place (ie, with `emitDeclarationsOnly`), subsequent compilations using wildcard lookups will load the generated declaration files over the original source `.js` (maybe) and `.json` (definitely) files. Fixing this would require rejiggering the extension priority machinery in `src\compiler\utilities.ts`, so that declaration files are truly the lowest priority extension to load. [This comment thread](https://github.com/microsoft/TypeScript/pull/32372#discussion_r328195950) has a bit more discussion on the topic. A cursory glance also leads me to believe that extension priorities for `extraFileExtensions` as passed into `getSupportedExtensions` are largely unhandled, so there's probably a fair chunk of work here to revisit this system.
In the meantime, if you encounter an issue likely caused by this discrepancy, the workaround is pretty simple: Use the `outDir` compiler option. | Bug,Fix Available | low | Minor |
499,134,357 | flutter | flutter_root/testing requires dart runtime to be linked in separately to build embedding test. | When we build an embedding unit test executable, we usually have to link
```
"$flutter_root/testing"
"//third_party/dart/runtime:libdart_jit"
```
together. Otherwise, the build will fail.
We should refactor flutter_root/testing or create a new target that includes both groups so that we do not have to explicitly link both targets. | engine,e: embedder,P3,team-engine,triaged-engine | low | Minor |
499,139,029 | TypeScript | Rewrite `getAccessibleSymbolChain` for performance | `getAccessibleSymbolChain` is one of the oldest parts of the compiler - remaining today mostly unchanged (barring support for new features) from when it was used in the old text-based declaration emitter. `getAccessibleSymbolChain` is used to, given a `[symbol, scope]` pair, find a series of symbols whose exports can be accessed to lookup the symbol. Today, this is by and large uncached, so the fail case, when a symbol is _not_ accessible, causes a traversal of every publicly reachable symbol in a program - this process is then repeated for every symbol that needs to be named, which means you end up spending a _very_ long time traversing symbol structures. On normal TypeScript code, the inefficiency is less outsized, as we only need to invoke this code when we generate inferred types in declaration emit, which is more rare in TS output than in JS output, however you can still find projects where its influence is large (some of the bad declaration-emit performance in @AnyhowStep's samples traces back to this).
As far as the direction of the fix goes - I have some ideas. Today, we enumerate all possibilities and just backtrack to see what works - instead, I imagine using a (cacheable) set-like structure to check if the symbol is even accessible from a given symbol, and then use a hierarchy of these sets to guide chain creation. | Domain: Declaration Emit,Experience Enhancement,Domain: Performance | low | Major |
499,155,031 | vscode | ctrl + click "go to definition" clashes with ctrl + click "follow link" | @AngusWR commented on [Wed Sep 25 2019](https://github.com/microsoft/vscode-python/issues/7591)
## Environment data
- VS Code version: Version: 1.38.1
- Extension version (available under the Extensions sidebar): Version: 2019.9.34911
- OS and version: Windows 10 Pro 1903 18362.356
- Python version (& distribution if applicable, e.g. Anaconda): 3.6.7
## Expected behaviour
ctrl + click on a string representation of a url opens the url in chrome
## Actual behaviour
ctrl + click on a string representation of a url opens the url in chrome and also goes to the definition for str in builtins.pyi
Holding ctrl and hovering the url:

After clicking the url

I haven't been able to find a way of changing the key binding for the "go to definition" feature. Any advice?
---
@karrtikr commented on [Thu Sep 26 2019](https://github.com/microsoft/vscode-python/issues/7591#issuecomment-535717671)
For changing keybindings, go to `File` -> `Preferences` -> `Keyboard shortcuts`. Search for `Go to definition`.
Anyways, this is controlled by VSCode, not the extension. So I am transferring the issue.
| bug,editor-symbols | medium | Major |
499,164,228 | TypeScript | How to pass along .d.ts comment on function that returns a class extending React.Component | From: https://stackoverflow.com/questions/58088678/how-to-pass-along-d-ts-comment-on-function-that-returns-a-class-extending-react?noredirect=1#comment102571859_58088678
I've got a .d.ts file I'm writing for an npm module teselagen-react-components that is a set of functions/react components. Here is a subset of the definition file in question:
```js
import * as React from "react";
/**
*
* I'm a comment
*/
declare class GenericSelect extends React.Component<GenericSelectProps, any> { }
export interface GenericSelectProps {
layout: string;
}
declare function createGenericSelect() {
return GenericSelect
}
```
In another module I have a file like so:
```js
//./myOtherProject/someFile.js
import { createGenericSelect, GenericSelect } from "teselagen-react-components";
const GenSel = createGenericSelect()
return <GenSel />
return <GenericSelect />
```
The issue I'm seeing is that when I hover `<GenericSelect />` I am able to see the comment:
[![GenericSelect with comment][1]][1]
But when I hover the <GenSel/> component, I can't see it!
[![GenSel with no comment][2]][2]
If instead of this line:
```
/**
*
* I'm a comment
*/
declare class GenericSelect extends React.Component<GenericSelectProps, any> { }
```
I had
```
/**
*
* I'm a comment
*/
declare function GenericSelect { }
```
Then I do see the comment!
[![GenSel() with comment][3]][3]
Why?
Thanks!
[1]: https://i.stack.imgur.com/85DEX.png
[2]: https://i.stack.imgur.com/K8X47.png
[3]: https://i.stack.imgur.com/tOpg6.png | Bug,Domain: Quick Info | low | Major |
499,178,541 | go | go/internal/gcimporter: single source of truth for decoder logic | It's somewhat tedious to have to maintain go/internal/gcimporter in both the standard repo and in x/tools. Can we find a better solution here?
E.g., we vendor a bunch of other x/ repos into the main build. Could we do that for x/tools/go/internal/gcimporter too?
I know go/internal/gcimporter is more aggressive about pruning backwards compatibility code than x/tools/go/internal/gcimporter, but I'd think we could still use the same code base and just either use different entry points into it, or build it in different ways.
/cc @griesemer @alandonovan | NeedsInvestigation | low | Minor |
499,187,007 | flutter | No scrollbar in (infinite) list widget. | While following and completing the instructions on [Write your first app](https://flutter.dev/docs/get-started/codelab#step-4-create-an-infinite-scrolling-listview), the resulting application does have an infinite scrolling list, but there is no (updating) scroll bar to show the "progress" of the list.
I would expect a scroll bar allowing to navigate from the first element to the "last element in memory", and updating as new content keeps loading. | c: new feature,framework,f: scrolling,platform-web,a: desktop,P2,team-framework,triaged-framework | low | Major |
499,213,929 | pytorch | conv2d Memory usage is too large, pytorch 1.1.0 | ## 🐛 Bug
In some specific situations, such as batch_size = 32, in/out_channels = 128, h = 1, w = 128 and kernel_size = 7, memory usage is too large.
If kernel_size = 5, memory usage is a few MB;
but if kernel_size = 7, memory usage can reach 10 GB!
## To Reproduce
You can run this code and test different situations.
```
import torch as t
import numpy as np
from torch import nn
def main():
print(t.__version__)
kernel_size = 7 # 5
conv = nn.Conv2d(128, 128, kernel_size, 1, (kernel_size - 1) // 2, bias = False)
conv.cuda()
x = t.ones([32, 128, 1, 128], device = 'cuda')
conv(x)
a = input()
if __name__ == '__main__':
main()
```
## Environment
pytorch version: 1.1.0
OS: Ubuntu
CUDA Version: 9.0
cudnn Version: 7.0.5 | module: dependency bug,module: cudnn,module: memory usage,module: convolution,triaged | low | Critical |
499,217,041 | youtube-dl | youtube-dl auto-completion is not working in Ubuntu 18.04 | <!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- Look through the README (http://yt-dl.org/readme) and FAQ (http://yt-dl.org/faq) for similar questions
- Search the bugtracker for similar questions: http://yt-dl.org/search-issues
- Finally, put x into all relevant boxes (like this [x])
-->
- [x] I'm asking a question
- [x] I've looked through the README and FAQ for similar questions
- [x] I've searched the bugtracker for similar questions including closed ones
## Question
<!--
Ask your question in an arbitrary form. Please make sure it's worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient.
-->
In the past, I used youtube-dl and the auto-completion feature was working fine. Now I am using Ubuntu 18.04 and installed youtube-dl manually with curl. youtube-dl is working fine but it doesn't complete any command. I even tried pip and distro installations but nothing is working for me. Any help? | question | low | Critical |
499,246,780 | TypeScript | Language service doesn't output neither errors nor crashes | In recent months, the language service hasn't output errors even when it crashes. So developers can report neither bugs nor crashes of the language service, and this behavior hides type errors.
<!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. -->
**TypeScript Version:** 3.4.0-dev.20190926
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:**
**Code**
1. Reproduce #33617.
1. VSCode must output crash logs of language service but actually doesn't output anywhere in the output panel.
1. Make a type error like #33616.
1. This type error must be reported on VSCode but actually not, because probably language service has crashed.
1. VSCode must output errors of language service but actually doesn't output anywhere in the output panel.
**Expected behavior:**
**Actual behavior:**
**Playground Link:** <!-- A link to a TypeScript Playground "Share" link which demonstrates this behavior -->
**Related Issues:** <!-- Did you find other bugs that looked similar? -->
| Needs Investigation | low | Critical |
499,249,195 | pytorch | Batched Dataloader | ## 🚀 Feature
Add a mode to `Dataset` that enables fetching data in batches instead of item-by-item.
## Motivation
If model training takes relatively small individual examples as an input, as in the case of training on tabular data, the Python interpreter overhead of fetching data becomes so large that it hinders training performance. In other words, training becomes CPU-bound (even with multiprocessing enabled).
This came up in a real scenario of the StarSpace model from FAIR.
## Pitch
Add an optional `__getbatch__` method to the `Dataset` that's analogous to `__getitem__` but takes a collection of indices as an input. Make the `Dataloader` aware of `BatchedDataset`. Once the `Dataloader` recognizes that `__getbatch__` is present, that method is used for fetching data, one batch at a time.
As a result, the user gains the ability to pass data in batches end-to-end and avoid the high cost (per byte read) of the Python interpreter.
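A minimal sketch of the proposed shape (hedged: `__getbatch__` is the hypothetical hook proposed above, not an existing torch API; the tensor-backed dataset is only an illustration):
```
import torch
from torch.utils.data import Dataset

class TabularDataset(Dataset):
    def __init__(self, features, labels):
        self.features = features  # e.g. one pre-loaded float tensor
        self.labels = labels

    def __len__(self):
        return len(self.features)

    def __getitem__(self, index):        # existing per-item path
        return self.features[index], self.labels[index]

    def __getbatch__(self, indices):     # proposed batched path
        idx = torch.as_tensor(indices)
        return self.features[idx], self.labels[idx]
```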
I implemented a variant of batch loading for aforementioned StarSpace model and got the training down from 5.5 days to under 24 hours. The person who originally implemented it used standard PyTorch data loading abstractions and fall into the trap of low performance.
This is a type of issue anybody working on e.g. tabular data will be running into. Unfortunately, there's no natural way out given current PyTorch abstractions.
## Alternatives
Implement this on top of existing abstractions by "smuggling" batches of values wrapped as a single value and unwrapping them in a custom collate function. The code that I provide below is fairly subtle and a bit hacky (abusing current abstractions). The code is fully functional and used in production, though.
Edit: I also found this: https://github.com/pytorch/pytorch/pull/19228 which is a different way of implementing what I need. The downside of IterableDataset is that it essentially throws out the window the nice decomposition into Dataset, Sampler and Dataloader. Suddenly, you're responsible for implementing all of the logic. Having said that, this is a big improvement over the rather hacky solution I posted below.
cc @SsnL | feature,module: dataloader,triaged | low | Major |
499,306,174 | terminal | Add support for roaming settings.json or storing it elsewhere |
<hr>
> **Note**: π Pinned comment: **https://github.com/microsoft/terminal/issues/2933#issuecomment-536652883**
<hr>
# Description of the new feature/enhancement
I have three different computers that I use for work. I keep my PowerShell profile in a GitHub repository and dot source it in my local PowerShell profile. That way I will only need to do a `git pull` on my profile repository to get all changes propagated on each computer.
It would be great if I could do something similar with my Windows Terminal settings by simply stating in my local settings.json file that the "real" settings could be found somewhere else.
(It could even be that the loaded profile acts like a base for the settings on the computer so things could be overridden, but that is a completely different issue that remains to be opened)
I'm aware that I might have overlooked something here, but would love to start a discussion about this since I suspect that I'm not the only one with this "problem".
# Proposed technical implementation details (optional)
```json
{
"$schema": "https://aka.ms/terminal-profiles-schema",
"globals": {
"loadProfileFrom": "C:/Source/git/profiles/terminal_profile.json"
}
``` | Issue-Feature,Help Wanted,Area-Settings,Product-Terminal | high | Critical |
499,328,204 | go | cmd/go: go get -insecure requires GIT_SSL_NO_VERIFY | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.13 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/root/.cache/go-build"
GOENV="/root/.config/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GONOPROXY=""
GONOSUMDB=""
GOOS="linux"
GOPATH="/go"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/local/go"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
AR="ar"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD="/go/test/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build128427995=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
<!--
If possible, provide a recipe for reproducing the error.
A complete runnable program is good.
A link on play.golang.org is best.
-->
```console
$ docker run -ti --rm golang:1.13 bash
$ mkdir test && cd test && go mod init test
$ export GOPRIVATE=github.xxxx.xxxxx.corp
$ go get github.xxxx.xxxxx.corp/myorg/go
go get github.xxxx.xxxxx.corp/myorg/go: unrecognized import path "github.xxxx.xxxxx.corp/myorg/go" (https fetch: Get https://github.xxxx.xxxxx.corp/myorg/go?go-get=1: x509: certificate signed by unknown authority)
$ go get -insecure -x -v github.xxxx.xxxxx.corp/myorg/go
# get https://github.xxxx.xxxxx.corp/?go-get=1
# get https://github.xxxx.xxxxx.corp/myorg?go-get=1
# get https://github.xxxx.xxxxx.corp/myorg/go?go-get=1
# get //github.xxxx.xxxxx.corp/myorg/go?go-get=1: 200 OK (0.068s)
get "github.xxxx.xxxxx.corp/myorg/go": found meta tag get.metaImport{Prefix:"github.xxxx.xxxxx.corp/myorg/go", VCS:"git", RepoRoot:"https://github.xxxx.xxxxx.corp/myorg/go.git"} at //github.xxxx.xxxxx.corp/myorg/go?go-get=1
mkdir -p /go/pkg/mod/cache/vcs # git3 https://github.xxxx.xxxxx.corp/myorg/go.git
# lock /go/pkg/mod/cache/vcs/1e0d9b889f3416a56ea37502ad1137f6723e61f8260c10aaf3fb8c45d44204fe.lock
mkdir -p /go/pkg/mod/cache/vcs/1e0d9b889f3416a56ea37502ad1137f6723e61f8260c10aaf3fb8c45d44204fe # git3 https://github.xxxx.xxxxx.corp/myorg/go.git
cd /go/pkg/mod/cache/vcs/1e0d9b889f3416a56ea37502ad1137f6723e61f8260c10aaf3fb8c45d44204fe; git init --bare
0.005s # cd /go/pkg/mod/cache/vcs/1e0d9b889f3416a56ea37502ad1137f6723e61f8260c10aaf3fb8c45d44204fe; git init --bare
cd /go/pkg/mod/cache/vcs/1e0d9b889f3416a56ea37502ad1137f6723e61f8260c10aaf3fb8c45d44204fe; git remote add origin -- https://github.xxxx.xxxxx.corp/myorg/go.git
0.003s # cd /go/pkg/mod/cache/vcs/1e0d9b889f3416a56ea37502ad1137f6723e61f8260c10aaf3fb8c45d44204fe; git remote add origin -- https://github.xxxx.xxxxx.corp/myorg/go.git
cd /go/pkg/mod/cache/vcs/1e0d9b889f3416a56ea37502ad1137f6723e61f8260c10aaf3fb8c45d44204fe; git ls-remote -q origin
# get //github.xxxx.xxxxx.corp/?go-get=1: 200 OK (0.101s)
0.111s # cd /go/pkg/mod/cache/vcs/1e0d9b889f3416a56ea37502ad1137f6723e61f8260c10aaf3fb8c45d44204fe; git ls-remote -q origin
# get //github.xxxx.xxxxx.corp/myorg?go-get=1: 200 OK (1.277s)
go get github.xxxx.xxxxx.corp/myorg/go: git ls-remote -q origin in /go/pkg/mod/cache/vcs/1e0d9b889f3416a56ea37502ad1137f6723e61f8260c10aaf3fb8c45d44204fe: exit status 128:
fatal: unable to access 'https://github.xxxx.xxxxx.corp/myorg/go.git/': server certificate verification failed. CAfile: none CRLfile: none
$ GIT_SSL_NO_VERIFY=1 go get -insecure github.xxxx.xxxxx.corp/myorg/go
go: finding github.xxxx.xxxxx.corp/myorg/go latest
go: downloading github.xxxx.xxxxx.corp/myorg/go v0.0.0-20190903123812-3090d622918c
go: extracting github.xxxx.xxxxx.corp/myorg/go v0.0.0-20190903123812-3090d622918c
```
### What did you expect to see?
I would expect that the option `-insecure` also disables the SSL verification for git.
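For reference, hedged sketches of workarounds that avoid the failure today; the URL-scoped `http.<url>.sslVerify` setting is a documented git option, and the host is the redacted one from the example above:
```console
$ GIT_SSL_NO_VERIFY=1 go get -insecure github.xxxx.xxxxx.corp/myorg/go
$ # or, scoped to the corporate host only:
$ git config --global http.https://github.xxxx.xxxxx.corp/.sslVerify false
$ go get -insecure github.xxxx.xxxxx.corp/myorg/go
```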
### What did you see instead?
`go get -insecure` fails and I have to enable `GIT_SSL_NO_VERIFY` to be able to download my dependency. | NeedsInvestigation | low | Critical |
499,335,004 | terminal | Feature Request: Icon buttons to start relevant shell types | Feature request: Add icon buttons to the Windows Terminal title bar for the installed shell types (instead of the current drop-down menu). Each button would start the relevant shell type. The buttons could have a tooltip/mouse-over label or description for the shell type.
The feature would enable one-click start of the desired shell type. Currently, for anything other than the default shell type, it takes one click to open the drop-down and one more to pick the desired shell type from the list.
Then consider deprecating:
- the + button (start default shell).
- the drop down menu of the shell types
| Issue-Feature,Area-UserInterface,Area-Extensibility,Product-Terminal | low | Minor |
499,385,703 | vue | Prevent Vue.use from being used without an options parameter unless it's declared optional (with `?`) | ### What problem does this feature solve?
**Reproduction link**
Please see: https://tinyurl.com/y5mlgqqh
**Steps to reproduce**
1. write a plugin
2. overload `Vue.use` (see the link above for details)
**What is expected?**
Compilation fails because the types don't match.
**What is actually happening?**
It compiles without errors
### What does the proposed API look like?
A possible solution is to change the `use` signature in `vue/types/vue`'s `VueConstructor` to:
` use(plugin: PluginObject<unknown> | PluginFunction<unknown>, ...options: unknown[]): VueConstructor<V>;`
See: https://tinyurl.com/y6anfs8b
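A minimal sketch of the mismatch (the plugin name and options shape are mine, not from the reproduction): the plugin declares a required options type, yet the current `use` typing lets the call compile without any options.
```ts
import Vue, { PluginObject } from "vue";

interface MyOptions {
  endpoint: string;
}

const MyPlugin: PluginObject<MyOptions> = {
  install(vue, options) {
    // `options` arrives as `MyOptions | undefined`, so the plugin has to guard.
    vue.mixin({
      data: () => ({ endpoint: options ? options.endpoint : "" }),
    });
  },
};

// Compiles today even though the plugin expects `MyOptions`.
Vue.use(MyPlugin);
```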
<!-- generated by vue-issues. DO NOT REMOVE --> | improvement,typescript | low | Critical |
499,433,382 | material-ui | [material-ui] useMediaQuery('print') doesn't work when print started from window.print | - [X] The issue is present in the latest release.
- [X] I have searched the [issues](https://github.com/mui-org/material-ui/issues) of this repository and believe that this is not a duplicate.
## Current Behavior
Demo: https://codesandbox.io/embed/gifted-bohr-cxrdm
Chrome:
`useMediaQuery('print')` matches only when printing is started with `Ctrl+P` or `Print` chosen from the browser's menu. When printing is started via the `window.print()` method, it doesn't match. (See the headings in the sample.)
Firefox:
`useMediaQuery('print')` never matches.
## Expected Behavior
I expect `useMediaQuery('print')` to match in all "printing situations", regardless of whether printing is started by a browser user action or programmatically. And it should work in Firefox in general.
In the attached demo, I expect the "This should be hidden with useMediaQuery." paragraph to be hidden in all printing situations.
## Steps to Reproduce
See demo: https://codesandbox.io/embed/gifted-bohr-cxrdm
Try to print via browser (`Ctrl+P` or `Print` from main menu) or via button "print via window.print" to see how it behaves.
## Context
I'm trying to hide some elements for printing. Some blocks can be wrapped with `<Box displayPrint="none">`, and that works. Sometimes I need a lower-level flag that tells me whether this is the print view, so I want to use `useMediaQuery('print')` to avoid adding extra `div`s to the DOM (which is what `Box` does).
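As a stopgap, a hedged sketch of a CSS-only approach (the component and class names are mine): the print stylesheet is always evaluated by the print engine regardless of how printing was triggered, so it sidesteps the matchMedia listener problem described above.
```jsx
import React from "react";
import { makeStyles } from "@material-ui/core/styles";

const useStyles = makeStyles({
  screenOnly: {
    "@media print": {
      display: "none",
    },
  },
});

export default function ScreenOnlyNote() {
  const classes = useStyles();
  return (
    <p className={classes.screenOnly}>This should be hidden with useMediaQuery.</p>
  );
}
```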
## Your Environment
| Tech | Version |
| ----------- | ------- |
| Material-UI | v4.4.3 |
| React | 16.8.4 |
| Browser | Firefox 69.0.1, Chrome 77.0.3865.90 |
| docs,external dependency,hook: useMediaQuery,ready to take | low | Major |
499,440,053 | flutter | Snapping ScrollPhysics | Hi,
After some tinkering and Googling all over the place I became super frustrated with the API / lack of documentation for `ScrollPhysics`.
On Android you can use what's called a [SnapHelper](https://developer.android.com/reference/android/support/v7/widget/LinearSnapHelper) inside your RecyclerView (analogous to a ListView in Flutter) that will automatically snap to a certain position.
The SnapHelper does this through a position-based API.
You can ask which View is currently in your chosen ViewPort and get its position and ask the RecyclerView to animate to that position.
Flutter, on the other hand, wants us to work with logical pixels, which makes this common, otherwise trivial pattern difficult to implement.
All the _solutions_ I found use items with a fixed width/height inside the list and don't account for fling gestures.
What if items are not equally sized?
What if you don't know the size the item will be at?

tl;dr How to implement this in Flutter so it works for any item in a ListView? | framework,a: fidelity,f: scrolling,a: quality,c: proposal,P3,team-framework,triaged-framework | low | Major |
499,449,816 | angular | Using ++ and -- in Angular expressions | <!--
Oh hi there!
To expedite issue processing please search open and closed issues before submitting a new one.
Existing issues often contain information about workarounds, resolution, or progress updates.
-->
# Feature request
### Relevant Package
<!-- Can you pin-point one or more @angular/* packages that are relevant for this feature request? -->
This request is for the **compiler**
### Description
I just realized that using the `++` or `--` operators directly in a `(click)` binding doesn't work.
Blitz here: [Blitz](https://stackblitz.com/edit/angular-34xtgk)
`ng v` output (first lines)
Angular CLI: 7.2.0
Node: 10.14.2
OS: win32 x64
Angular: 7.2.0
### Describe the solution you'd like
I'd like to write it like this. If I remember correctly, it worked in AngularJS (`ng-click`).
```
<button (click)="count++">
ADD
</button>
```
### Describe alternatives you've considered
Have you considered any alternative solutions or workarounds?
Well, the alternatives are not as concise as the code above.
But of course it's possible to overcome this limit; some forms that compile today are shown below.
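For example (assuming `count` is a field and `increment()` a method on the component):
```html
<button (click)="count = count + 1">ADD</button>
<button (click)="increment()">ADD</button>
```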
| feature,workaround1: obvious,freq2: medium,area: core,core: basic template syntax,P4,feature: under consideration,feature: votes required | medium | Critical |
499,545,052 | pytorch | AnyValueTest.CorrectlyAccessesIntWhenCorrectType UBSAN failure: downcast of address 0x60300105d750 which does not point to an object of type 'Holder<const int>' Sep 27 00:01:03 0x60300105d750: note: object is of type 'torch::nn::AnyModule::Value::Holder<int>' | In https://github.com/pytorch/pytorch/pull/26927 I turn on libtorch tests in our ASAN build. I subsequently get this UBSAN error
```
Sep 27 00:01:02 [ RUN ] AnyValueTest.CorrectlyAccessesIntWhenCorrectType
Sep 27 00:01:03 /var/lib/jenkins/workspace/caffe2/../torch/csrc/api/include/torch/nn/modules/container/any.h:258:15: runtime error: downcast of address 0x60300105d750 which does not point to an object of type 'Holder<const int>'
Sep 27 00:01:03 0x60300105d750: note: object is of type 'torch::nn::AnyModule::Value::Holder<int>'
Sep 27 00:01:03 5a 02 00 43 e8 7c fd dd 92 55 00 00 e0 7c d7 e0 7a 7f 00 00 05 00 00 00 be be be be 00 00 00 00
Sep 27 00:01:03 ^~~~~~~~~~~~~~~~~~~~~~~
Sep 27 00:01:03 vptr for 'torch::nn::AnyModule::Value::Holder<int>'
Sep 27 00:01:03 #0 0x5592dd02fa81 in int const* torch::nn::AnyModule::Value::try_get<int const>() (/var/lib/jenkins/workspace/build/bin/test_api+0x8f6a81)
Sep 27 00:01:03 #1 0x5592dd003b28 in AnyValueTest_CorrectlyAccessesIntWhenCorrectType_Test::TestBody() (/var/lib/jenkins/workspace/build/bin/test_api+0x8cab28)
Sep 27 00:01:03 #2 0x5592dd81c51a in void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) (/var/lib/jenkins/workspace/build/bin/test_api+0x10e351a)
Sep 27 00:01:03 #3 0x5592dd7cb00a in testing::Test::Run() (/var/lib/jenkins/workspace/build/bin/test_api+0x109200a)
Sep 27 00:01:03 #4 0x5592dd7cd952 in testing::TestInfo::Run() (/var/lib/jenkins/workspace/build/bin/test_api+0x1094952)
Sep 27 00:01:03 #5 0x5592dd7cf87a in testing::TestCase::Run() (/var/lib/jenkins/workspace/build/bin/test_api+0x109687a)
Sep 27 00:01:03 #6 0x5592dd7f1c84 in testing::internal::UnitTestImpl::RunAllTests() (/var/lib/jenkins/workspace/build/bin/test_api+0x10b8c84)
Sep 27 00:01:03 #7 0x5592dd821c51 in bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) (/var/lib/jenkins/workspace/build/bin/test_api+0x10e8c51)
Sep 27 00:01:03 #8 0x5592dd7f0729 in testing::UnitTest::Run() (/var/lib/jenkins/workspace/build/bin/test_api+0x10b7729)
Sep 27 00:01:03 #9 0x5592dcf19527 in main (/var/lib/jenkins/workspace/build/bin/test_api+0x7e0527)
Sep 27 00:01:03 #10 0x7f7aae41c82f in __libc_start_main /build/glibc-LK5gWL/glibc-2.23/csu/../csu/libc-start.c:291
Sep 27 00:01:03 #11 0x5592dcf18ed8 in _start (/var/lib/jenkins/workspace/build/bin/test_api+0x7dfed8)
Sep 27 00:01:03
Sep 27 00:01:03 SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior /var/lib/jenkins/workspace/caffe2/../torch/csrc/api/include/torch/nn/modules/container/any.h:258:15 in
```
Looks like a const problem?
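For what it's worth, a minimal standalone illustration (not the PyTorch sources) of the class of error UBSAN is reporting: `typeid` ignores top-level const, so a type check comparing `const int` against the stored `int` passes, but the subsequent downcast targets `Holder<const int>` while the constructed object is `Holder<int>`.
```cpp
#include <cassert>
#include <typeinfo>

struct Placeholder { virtual ~Placeholder() = default; };
template <typename T> struct Holder : Placeholder { T value{}; };

template <typename T>
const T* try_get(Placeholder& stored, const std::type_info& stored_type) {
  if (typeid(T) == stored_type) {
    // UB flagged by UBSAN when T = const int but the object is Holder<int>:
    return &static_cast<Holder<T>&>(stored).value;
  }
  return nullptr;
}

int main() {
  Holder<int> h;
  h.value = 5;
  assert(typeid(const int) == typeid(int));  // top-level const is ignored
  const int* p = try_get<const int>(h, typeid(int));
  return (p != nullptr && *p == 5) ? 0 : 1;
}
```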
cc @yf225 | module: build,module: cpp,triaged | low | Critical |
499,477,671 | pytorch | Lint rule to test for creation of tensor in native/ without options() | A longstanding hazard in native function writing is to forget to pass `options()` of an appropriate tensor to intermediates you build. Failing to do so can lead to Tensor-Variable confusion. Example: #26966
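A hypothetical first cut at the regex-based lint proposed below; the directory and the factory-function list are assumptions:
```bash
# Flag tensor-factory calls under native/ that don't mention options() or an
# explicit dtype on the same line (crude, but a starting point).
grep -rnE 'at::(empty|zeros|ones|full|empty_like)\(' aten/src/ATen/native \
  | grep -vE 'options\(\)|dtype\('
```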
We should have a lint rule that detects if you mess this up. Probably could be as simple as a regex. | module: build,module: lint,triaged,better-engineering | low | Minor |
499,538,664 | vscode | Expanding semantic theming to support semantically embedded languages for colorization | ## Overview
Languages like Razor (and I imagine HTML for custom attributes) typically have scenarios where portions of the document are semantically a different language.
In Razor this happens frequently through the use of TagHelpers or in Blazor:
```Razor
<form asp-antiforgery="ViewBag.ShouldRenderAntiforgery">
...
</form>
```
In this example we'd expect `ViewBag.ShouldRenderAntiforgery` to be C#. The ways TagHelpers (things that apply to HTML and change the semantic language of the right-hand side of an attribute) can be customized by users are limitless, so we need full control over telling the IDE which things are C# and which aren't.
## Ideas on how to implement
Over in the [issue](https://github.com/microsoft/vscode/issues/77133) discussing general semantic colorization, the proposal was to have an API similar to:
```TypeScript
interface SemanticHighlightRangeProvider {
provideHighlightRanges(doc,...): [ Range, TokenType, TokenModifier[] ][];
}
```
The proposed approach can also be used to enable semantic language colorization without enabling an entire language's extensions for a subset of a document.
To do this one could expand on the `ThemeDefinition` and add a `language` parameter:
```TypeScript
interface TokenStyle {
foreground?: Color | ColorFunction
style?: bold | italic | underline
language?: string
}
```
This would enable LanguageServers to mark a chunk of text in an editor as a Token that associates with a specific language. This would work similarly to how tooltips/completion descriptions etc. work when specifying pieces of text that should be colorized as a certain language.
So in the first example, `asp-antiforgery="ViewBag.ShouldRenderAntiforgery"`, Razor's language server would mark the entire `ViewBag.ShouldRenderAntiforgery` span with a Razor-specific token whose token style is:
```JSON
{
"language": "csharp"
}
```
This would enable future scenarios where, if Razor wanted to go above and beyond to provide semantic colorization of specific tokens, it could do so in an additive fashion on top of the existing theme. | under-discussion,semantic-tokens | low | Minor |
499,545,052 | godot | Problem relocating a vertex of Curve2D | **Godot version:**
3.1.1 mono
**OS/device including version:**
win10 64bits
**Issue description:**
The sprite's movement starts to behave strangely after 5 to 10 seconds.
This seems to happen because one of the Curve2D's vertices is being moved every frame.
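Since the script is only attached as a screenshot below, here is a hedged reconstruction of the kind of per-frame vertex update being described (the point index and offset are made up):
```gdscript
extends Path2D

# Move one control point of the curve a little every frame.
func _process(delta):
    var p = curve.get_point_position(1)
    curve.set_point_position(1, p + Vector2(10 * delta, 0))
```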
**Steps to reproduce:**
Simply run the scene and observe the sprite moving along the Curve2D; when the second lap begins, the sprite's movement starts to jump.
**Minimal reproduction project:**
[recorrePaths.zip](https://github.com/godotengine/godot/files/3663775/recorrePaths.zip)

Code of sprite node
 | bug,topic:core,confirmed | low | Minor |