id | repo | title | body | labels | priority | severity
---|---|---|---|---|---|---|
458,132,124 | svelte | Docs: Add an example of modifying complex store data | People seem to be asking at regular intervals about how to update complex data in a store and, while it is relatively straightforward, we could add an example somewhere, maybe in the tutorial. | popular,documentation | medium | Critical |
458,140,355 | flutter | Uncaught error handler is not threadsafe in Fuchsia embedder | In the `flutter_runner` embedder for Fuchsia, we landed a mitigation to reduce the likelihood of a crash in the uncaught error handler when an uncaught exception is thrown immediately at/during shutdown. Related patch is https://fuchsia-review.googlesource.com/c/topaz/+/292831.
The error handling as originally landed was unsafe. Specifically, in the case where the unhandled error handler was triggered during shutdown, there was a race condition which could cause a crash in the following scenario:
1. `Runner::OnApplicationTerminate()` is triggered, which posts a task to the application's platform thread that will free the `Application` instance and terminate the platform thread.
2. Before that task is serviced, the unhandled error handler is called (by `hooks.dart` -> `window.cc` -> `ui_dart_state.cc`) on the UI thread.
3. The kill task is serviced and the `Application` dtor and `Thread::Quit()` are called, terminating the platform thread.
4. The unhandled error handler attempts to post a task to the platform thread, whose thread was killed in step 3. This triggers a crash.
Fixing this requires a mechanism for the message loop to know that the associated thread has been terminated out from under it.
This patch adds a mitigation for this scenario, but remains non-threadsafe/racy. We pass the unhandled error handler a weak pointer to the `Application` and check it before posting a task to the platform thread. This has two issues:
1. `WeakPtr` isn't threadsafe (nor is it intended to be), and assumes that all operations occur on a single thread. We're checking its value (which is mutated on the platform thread) on the UI thread without synchronization.
2. Even with a guarantee that the `WeakPtr` state were synchronized, there's a window between when we check the weak pointer and when we post to the platform thread in which application shutdown and thread destruction may occur.
This unsafe mitigation was landed in order to unblock a high priority bug ([FL-256](https://fuchsia.atlassian.net/browse/FL-256)) on a short schedule, and a proper refactoring is required to make this properly threadsafe.
/cc @cbracken @chinmaygarde | c: crash,customer: fuchsia,engine,P2,team-engine,triaged-engine | low | Critical |
458,146,290 | terminal | If we fail to open the settings file, display a message if it existed | A follow up from #1325.
In #1325's case, the file _did_ exist, but we failed to open it. In this case, we should display some sort of error dialog to the user, instead of blowing away the user settings with the defaults. | Area-UserInterface,Product-Terminal,Issue-Task | low | Critical |
458,152,961 | flutter | Make --observatory-port work for Add to app | Today, you can specify command line arguments in an iOS project that get passed through to the Engine and used, e.g. you can tell it to use a specific host port like so:

This isn't possible on Android, though it should be fixable by doing something like the following:
Framework (tooling):
1. Update packages/flutter_tools/lib/src/android/android_device.dart to pass another `--ei` flag with the port number through when starting the application
2. Possibly simplify the logic when this flag is set so that the port forwarding just uses the expected ports rather than trying to figure out the port.
Engine (Android embedding):
1. Update shell/platform/android/io/flutter/embedding/engine/FlutterShellArgs.java and shell/platform/android/io/flutter/embedding/FlutterActivityDelegate.java to be aware of this extra intent information, and pass it through as a shell argument. Bonus points if we can just merge the functionality of these classes.
/cc @xster @matthew-carroll @blasten @jason-simmons | platform-android,tool,a: debugging,c: proposal,P3,team-android,triaged-android | low | Major |
458,169,015 | TypeScript | Support --incremental with --module AMD | ## Search Terms
- module
- incremental
- AMD
## Suggestion
Support using the `--incremental` flag when outputting AMD modules.
## Use Cases
Projects that use the AMD module format should be able to get the performance benefits that come with `--incremental`. In local testing, `--incremental` makes a large difference in compilation speed when outputting ES6 modules, but made no difference for AMD.
## Examples
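For illustration (not from the original issue), this is the kind of configuration the request would make work, with flag names as documented among tsc's compiler options:

```json
{
  "compilerOptions": {
    "module": "amd",
    "outFile": "./dist/app.js",
    "incremental": true,
    "tsBuildInfoFile": "./dist/.tsbuildinfo"
  }
}
```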
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback | low | Major |
458,222,237 | terminal | Support "Key Chords" in keybindings | Currently, we only support pressing a single key+modifiers to activate a keybinding action.
However, many text editors support "key chords" where the chord is a combination of multiple keys pressed in sequence. For example, in Visual Studio, the default keybinding for "Comment Code" is the chord `[ctrl+c, ctrl+k]`.
We've already prepared for serializing these chords in the array of keybindings, but we don't support more than one key at a time. We'd have to somehow check whether a key is the start of a chord, and only dispatch the action when all keys for the chord have been pressed.
Lots of questions:
* [ ] What happens if an action is bound to `[ctrl+c, ctrl+k]` and another is bound to `[ctrl+c]`?
* [ ] If the first key of a chord is pressed, but the second key _isn't_ bound to an action, should we write both keys to the input?
* [ ] If the first key of a chord is pressed, but then nothing is pressed for a long time, should we time out?
* [ ] What's the best way of finding the right keybinding for a chord?
- Iterating over all of them when a key is pressed, until the chord is dispatched seems awful (but trivial to do).
- Maybe we could build a tree of keychords, or a list of trees?
- My CS542 senses are tingling and suggesting that this might be the case to use a state machine.
- Is a state machine really different than a tree in this case?
| Issue-Feature,Help Wanted,Area-Settings,Product-Terminal | medium | Critical |
458,238,310 | flutter | Replacing TabController mid-animation throws exception | When trying to dynamically update the number of tabs with a new TabController, an exception is thrown because the animation library attempts to reach an AnimationController that is no longer available.
## Steps to Reproduce
Here's a [gist](https://gist.github.com/shihaohong/b5723a5902c6549cddbf6f1908255b83) with a reproducible sample.
1. Get the TabBarView or TabBar mid-animation
2. Add/Remove tab while mid-animation

## Logs
```
I/flutter ( 4483): #12 ScrollPositionWithSingleContext.setPixels
package:flutter/…/widgets/scroll_position_with_single_context.dart:83
I/flutter ( 4483): #13 BallisticScrollActivity.applyMoveTo
package:flutter/…/widgets/scroll_activity.dart:547
I/flutter ( 4483): #14 BallisticScrollActivity._tick
package:flutter/…/widgets/scroll_activity.dart:534
I/flutter ( 4483): #15 _AnimationController&Animation&AnimationEagerListenerMixin&AnimationLocalListenersMixin.notifyListeners
package:flutter/…/animation/listener_helpers.dart:124
I/flutter ( 4483): #16 AnimationController._tick
package:flutter/…/animation/animation_controller.dart:765
I/flutter ( 4483): #17 Ticker._tick
package:flutter/…/scheduler/ticker.dart:228
I/flutter ( 4483): #18 _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding._invokeFrameCallback
package:flutter/…/scheduler/binding.dart:1016
I/flutter ( 4483): #19 _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding.handleBeginFrame.<anonymous closure>
package:flutter/…/scheduler/binding.dart:934
I/flutter ( 4483): #20 __InternalLinkedHashMap&_HashVMBase&MapMixin&_LinkedHashMapMixin.forEach (dart:collection-patch/compact_hash.dart:367:8)
I/flutter ( 4483): #21 _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding.handleBeginFrame
package:flutter/…/scheduler/binding.dart:932
I/flutter ( 4483): #22 _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding._handleBeginFrame
package:flutter/…/scheduler/binding.dart:866
I/flutter ( 4483): #26 _invoke1 (dart:ui/hooks.dart:250:10)
I/flutter ( 4483): #27 _beginFrame (dart:ui/hooks.dart:177:3)
I/flutter ( 4483): (elided 3 frames from package dart:async)
I/flutter ( 4483):
I/flutter ( 4483): The AnimationController notifying listeners was:
I/flutter ( 4483): AnimationController#a39b9(▶ 343.604; for BallisticScrollActivity)
I/flutter ( 4483): ════════════════════════════════════════════════════════════════════════════════════════════════════
``` | c: regression,framework,f: material design,f: scrolling,has reproducible steps,a: null-safety,P2,found in release: 3.3,found in release: 3.7,team-design,triaged-design | low | Major |
458,253,171 | flutter | [iOS 13] New Text Editing Gestures | Several new gestures were announced at WWDC related to editing text:
| Gesture | Confirmed on iOS (not just iPadOS)? |
| -------------------------------------------|------|
| 3 finger pinch to copy | ✅ |
| Double 3 finger pinch to cut | |
| 3 finger drop to paste | |
| 3 finger swipe left to undo | ✅ |
| 3 finger swipe right to redo | ✅ |
| 3 finger tap for shortcut menu | |
| 3 finger double tap to select sentence | ❌ |
| 4 finger double tap to select paragraph | ❌ |
| Shake to undo | ✅ | | a: text input,platform-ios,framework,a: fidelity,f: cupertino,customer: crowd,P2,team-framework,triaged-framework | low | Major |
458,257,906 | rust | Cannot coerce dyn AsRef | Really weird example:
```rust
use std::path::Path;
fn main() {
let boxed_path: Box<Path> = Path::new("test").to_path_buf().into_boxed_path();
let as_ref_works: &Path = boxed_path.as_ref();
//let lots_of_question_marks: Box<dyn AsRef<Path>> = boxed_path;
}
```
Uncommenting the last line gives:
```
error[E0277]: the size for values of type `[u8]` cannot be known at compilation time
--> src/main.rs:5:56
|
5 | let lots_of_question_marks: Box<dyn AsRef<Path>> = boxed_path;
| ^^^^^^^^^^ borrow the `Path` instead
|
= help: within `std::path::Path`, the trait `std::marker::Sized` is not implemented for `[u8]`
= note: to learn more, visit <https://doc.rust-lang.org/book/ch19-04-advanced-types.html#dynamically-sized-types-and-the-sized-trait>
= note: required because it appears within the type `std::path::Path`
= note: required for the cast to the object type `dyn std::convert::AsRef<std::path::Path>`
error: aborting due to previous error
```
I genuinely don't know what's going on here, but this seems to be a bug in both coercion *and* diagnostics.
This is on the latest nightly. | A-diagnostics,T-lang,T-compiler,C-bug,A-coercions | low | Critical |
458,261,533 | opencv | Crash in Faster-RCNN Object Detection | Hello all,
I've been struggling with this for a while and I decided to share my issue here.
I am trying to run custom object detection and I have the following system:
System information (version)
TensorFlow => 1.13
Opencv => 4.1.0
Openvino 2019 R1
OS: Raspbian
##### Detailed description
I have trained a custom inference graph using the following Faster R-CNN model: http://download.tensorflow.org/models/object_detection/faster_rcnn_resnet50_coco_2018_01_28.tar.gz
with the dnn module
`
net = cv2.dnn.readNetFromTensorflow('frozen_inference_graph.pb','frozen_inference_graph.pbtxt')
`
I get the error:
`
terminate called after throwing an instance of 'InferenceEngine::details::InferenceEngineException'
what(): ConfidenceThreshold parameter is wrong in layer detection_out. It should be > 0.
Aborted
`
I have tried running the intel model optimizer creating an xml and bin using readNet instead
`
net = cv2.dnn.readNet('frozen_inference_graph.xml','frozen_inference_graph.bin')
`
However, that way I get a different type of error:
`
detections = net.forward()
cv2.error: OpenCV(4.1.0-openvino) /home/jenkins/workspace/OpenCV/OpenVINO/build/opencv/modules/dnn/src/dnn.cpp:2299: error: (-215:Assertion failed) inp.total() in function 'allocateLayers'
`
Before posting, I checked two similar posts about this:
https://answers.opencv.org/question/212026/netforward-crash-in-faster-rcnn-object-detection-sample/
and
https://github.com/opencv/opencv/issues/13751
From the first post, I can confirm that it works with
`
net.setPreferableBackend(DNN_BACKEND_OPENCV);
`
but for some reason it runs very slowly. I am using
`
net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)
`
Also, I don't seem to have the nodes mentioned in the second post.
The source code, .pb, pbtxt, xml and bin file can be obtained from here -
https://drive.google.com/drive/folders/11HQGpYzHlaMX4kXn6JSymcbFF41zk1vX
I'd appreciate any help!
Thank you for your time. | bug,category: dnn | low | Critical |
458,273,298 | TypeScript | Conflicting definitions for Set constructor causes unexpected default generic in TS 3.5 | **TypeScript Version:** 3.4.5 vs versions after 3.5
**Search Terms:** set constructor setconstructor unknown
**Code**
```ts
let s = new Set();
```
**Expected behavior:**
In TS 3.4: we expect the default chosen param to give us a `Set<{}>`.
In TS 3.5: we expect a `Set<unknown>`.
That change was an intended change in TS 3.5.
Or, if Set has a default generic arg, I expect the same default chosen in 3.4 and 3.5.
**Actual behavior:**
In TS 3.4, this actually was a `Set<any>`!
It appears there are multiple overloads of the Set constructor.
lib.es2015.collection.d.ts says:
```
new <T = any>(values?: ReadonlyArray<T> | null): Set<T>;
```
while lib.es2015.iterable.d.ts says:
```
new <T>(iterable?: Iterable<T> | null): Set<T>;
```
Note that one has `T = any` and the other doesn't.
So somehow TS 3.4 vs TS 3.5 decided to change whether to obey the `any` default, I think? | Suggestion,In Discussion | low | Major |
458,279,202 | TypeScript | [Feature Request] Preserve comments when using Extract<keyof T, string> |
## Search Terms
mapped type, preserve comment, keyof, Extract
## Suggestion
```ts
type Mapped<T> = {
[k in keyof T]: ["some transformation", T[k]]
};
interface IFoo {
/** The string */
x: string;
/** The number */
y: number;
}
declare const mappedIFoo: Mapped<IFoo>;
//Tooltip shows "The string" as the comment
mappedIFoo.x
//////////////////////////
type Mapped2<T> = {
[k in Extract<keyof T, string>]: ["some transformation", T[k]]
};
declare const mapped2IFoo: Mapped2<IFoo>;
//Tooltip DOES NOT show "The string" as the comment
mapped2IFoo.x
```
[Playground](http://www.typescriptlang.org/play/#src=type%20Mapped%3CT%3E%20%3D%20%7B%0D%0A%20%20%5Bk%20in%20keyof%20T%5D%3A%20%5B%22some%20transformation%22%2C%20T%5Bk%5D%5D%0D%0A%7D%3B%0D%0A%0D%0Ainterface%20IFoo%20%7B%0D%0A%20%20%2F**%20The%20string%20*%2F%0D%0A%20%20x%3A%20string%3B%0D%0A%20%20%2F**%20The%20number%20*%2F%0D%0A%20%20y%3A%20number%3B%0D%0A%7D%0D%0Adeclare%20const%20mappedIFoo%3A%20Mapped%3CIFoo%3E%3B%0D%0A%2F%2FTooltip%20shows%20%22The%20string%22%20as%20the%20comment%0D%0AmappedIFoo.x%0D%0A%0D%0A%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%0D%0Atype%20Mapped2%3CT%3E%20%3D%20%7B%0D%0A%20%20%5Bk%20in%20Extract%3Ckeyof%20T%2C%20string%3E%5D%3A%20%5B%22some%20transformation%22%2C%20T%5Bk%5D%5D%0D%0A%7D%3B%0D%0A%0D%0Adeclare%20const%20mapped2IFoo%3A%20Mapped2%3CIFoo%3E%3B%0D%0A%2F%2FTooltip%20DOES%20NOT%20show%20%22The%20string%22%20as%20the%20comment%0D%0Amapped2IFoo.x)
I'd like it if the fields of the mapped type could somehow preserve the comments of the fields of `T`, even after using `Extract<keyof T, string>`.
## Use Cases
In my projects, there are many cases where I only want to deal with string keys and not `symbol|number` keys. So, I use `Extract<keyof T, string>` a lot. However, this does not preserve comments and makes me sad =(
## Examples
```ts
//--noImplicitAny
type Mapped<T extends { [k: string]: any }> = {
[k in keyof T]: ["some transformation", T[k]]
};
const someSymbol: unique symbol = Symbol();
interface IFoo {
/** The string */
x: string;
/** The number */
y: number;
1: "i am a number";
[someSymbol] : "i am a symbol"
}
declare const mappedIFoo: Mapped<IFoo>;
//Tooltip shows "The string" as the comment
mappedIFoo.x;
//Is allowed, because we use `keyof T`
mappedIFoo[1];
//Is allowed, because we use `keyof T`
mappedIFoo[someSymbol];
//////////////////////////
type Mapped2<T extends { [k : string] : any }> = {
[k in Extract<keyof T, string>]: ["some transformation", T[k]]
};
declare const mapped2IFoo: Mapped2<IFoo>;
//Expected: Tooltip shows "The string" as the comment
//Actual: Tooltip DOES NOT show "The string" as the comment
mapped2IFoo.x;
//Expected: is not allowed
//Actual : Is not allowed; OK!
mapped2IFoo[1];
//Expected: is not allowed
//Actual : Is not allowed; OK!
mapped2IFoo[someSymbol];
```
[Playground](http://www.typescriptlang.org/play/#src=%2F%2F--noImplicitAny%0D%0Atype%20Mapped%3CT%20extends%20%7B%20%5Bk%3A%20string%5D%3A%20any%20%7D%3E%20%3D%20%7B%0D%0A%20%20%5Bk%20in%20keyof%20T%5D%3A%20%5B%22some%20transformation%22%2C%20T%5Bk%5D%5D%0D%0A%7D%3B%0D%0Aconst%20someSymbol%3A%20unique%20symbol%20%3D%20Symbol()%3B%0D%0A%0D%0Ainterface%20IFoo%20%7B%0D%0A%20%20%2F**%20The%20string%20*%2F%0D%0A%20%20x%3A%20string%3B%0D%0A%20%20%2F**%20The%20number%20*%2F%0D%0A%20%20y%3A%20number%3B%0D%0A%20%201%3A%20%22i%20am%20a%20number%22%3B%0D%0A%20%20%5BsomeSymbol%5D%20%3A%20%22i%20am%20a%20symbol%22%0D%0A%7D%0D%0Adeclare%20const%20mappedIFoo%3A%20Mapped%3CIFoo%3E%3B%0D%0A%2F%2FTooltip%20shows%20%22The%20string%22%20as%20the%20comment%0D%0AmappedIFoo.x%3B%0D%0A%2F%2FIs%20allowed%2C%20because%20we%20use%20%60keyof%20T%60%0D%0AmappedIFoo%5B1%5D%3B%0D%0A%2F%2FIs%20allowed%2C%20because%20we%20use%20%60keyof%20T%60%0D%0AmappedIFoo%5BsomeSymbol%5D%3B%0D%0A%0D%0A%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%0D%0Atype%20Mapped2%3CT%20extends%20%7B%20%5Bk%20%3A%20string%5D%20%3A%20any%20%7D%3E%20%3D%20%7B%0D%0A%20%20%5Bk%20in%20Extract%3Ckeyof%20T%2C%20string%3E%5D%3A%20%5B%22some%20transformation%22%2C%20T%5Bk%5D%5D%0D%0A%7D%3B%0D%0A%0D%0Adeclare%20const%20mapped2IFoo%3A%20Mapped2%3CIFoo%3E%3B%0D%0A%2F%2FExpected%3A%20Tooltip%20shows%20%22The%20string%22%20as%20the%20comment%0D%0A%2F%2FActual%3A%20%20%20Tooltip%20DOES%20NOT%20show%20%22The%20string%22%20as%20the%20comment%0D%0Amapped2IFoo.x%3B%0D%0A%2F%2FExpected%3A%20is%20not%20allowed%0D%0A%2F%2FActual%20%20%3A%20Is%20not%20allowed%3B%20OK!%0D%0Amapped2IFoo%5B1%5D%3B%0D%0A%2F%2FExpected%3A%20is%20not%20allowed%0D%0A%2F%2FActual%20%20%3A%20Is%20not%20allowed%3B%20OK!%0D%0Amapped2IFoo%5BsomeSymbol%5D%3B)
## Checklist
My suggestion meets these guidelines:
* [X] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [X] This wouldn't change the runtime behavior of existing JavaScript code
* [X] This could be implemented without emitting different JS based on the types of the expressions
* [X] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [X] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Experience Enhancement | low | Critical |
458,364,487 | vue | Only last element is accessible using ref in template-based functional components inside v-for loops | ### Version
2.6.10
### Reproduction link
[https://codesandbox.io/s/vue-template-bqe44?fontsize=14](https://codesandbox.io/s/vue-template-bqe44?fontsize=14)
### Steps to reproduce
1. Create a functional component
2. Use that functional component inside a v-for
3. Add a ref binding to the functional component to be able to get the DOM element in parent component
### What is expected?
The ref in the parent component should be an array of DOM elements, like it is an array of vNodes when applying these steps to a non-functional component
### What is actually happening?
Only the last DOM element of the v-for is bound to the ref. It's also just a single element and not an array.
---
There's no warning about overriding the previously existing ref.
| improvement,has workaround | low | Minor |
458,367,111 | pytorch | Mysterious Tensor Indexing Problem | ## 🐛 Bug
Indexing into a tensor with a 2D list of indices fails when the number of indices is less than 32, but works once there are 32 or more.
## To Reproduce
```python
import torch
def index_test(n):
M = torch.tensor([0.]*n) # Trivial example for illustrative purposes
idxes = [(a,) for a in range(n)]
return M[idxes]
index_test(31) # Note that this fails
index_test(32) # But this works
```
`n=31` fails with:
```
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-169-222ec1878948> in <module>
6 return M[idxes]
7
----> 8 index_test(31)
<ipython-input-169-222ec1878948> in index_test(n)
4 M = torch.tensor([0.]*n) # Trivial example for illustrative purposes
5 idxes = [(a,) for a in range(n)]
----> 6 return M[idxes]
7
8 index_test(31)
IndexError: too many indices for tensor of dimension 1
```
## Expected behavior
I expected this indexing to work the same for any number of indices. This is problematic in my actual code as the size of the data varies.
## Environment
PyTorch version: 1.1.0
Is debug build: No
CUDA used to build PyTorch: 10.0.130
OS: Ubuntu 16.04.6 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609
CMake version: Could not collect
Python version: 3.6
Is CUDA available: No
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Versions of relevant libraries:
[pip3] numpy==1.16.4
[pip3] pytorch-pretrained-bert==0.6.2
[pip3] torch==1.1.0
[conda] Could not collect
## Additional context
cc @ezyang @gchanan @zou3519 @mruberry @rgommers @heitorschueroff | high priority,module: error checking,triaged,module: numpy,module: advanced indexing,module: ux | low | Critical |
458,374,889 | create-react-app | Error: spawn cmd.exe ENOENT using WSL since 9.0.0 | ### Is this a bug report?
Yes
### Did you try recovering your dependencies?
Yes
### Which terms did you search for in User Guide?
cmd event.js ENOENT
### Environment
[Windows WSL]:
System:
OS: Linux 4.4 Ubuntu 18.04.1 LTS (Bionic Beaver)
CPU: (12) x64 Intel(R) Core(TM) i7-8850H CPU @ 2.60GHz
Binaries:
Node: 8.11.4 - ~/.nvm/versions/node/v8.11.4/bin/node
Yarn: 1.15.2 - ~/.yarn/bin/yarn
npm: 5.6.0 - ~/.nvm/versions/node/v8.11.4/bin/npm
Browsers:
Chrome: Not Found
Firefox: 64.0
npmPackages:
react: 16.8.6
react-dom: 16.8.6
react-scripts: 3.0.1
npmGlobalPackages:
create-react-app: Not Found
### Steps to Reproduce
0. react-dev-utils v 8.0.0
1. Upgrade to v.9.0.0
2. yarn install
3. yarn start
4. events.js:183
throw er; // Unhandled 'error' event
^
Error: spawn cmd.exe ENOENT
### Expected Behavior
app should start using v 9.0.0
### Actual Behavior

### Reproducible Demo
Simply run Ubuntu or another distro on Windows WSL and upgrade to 9.0.0; the same holds for any project I tried using v9.0.0.
| issue: needs investigation | medium | Critical |
458,390,343 | vscode | [folding] Fold current level |
When I want to view the method summary of a class, I expect to be able to fold all methods of the class. I found these Fold commands.

I can use the `Fold Level X` command to meet my needs, but sometimes it's tedious to count the level correctly. I found an extension with this feature in issue #20217, but I still hope for a built-in command such as `Fold Current Level` that folds the code at the current level (according to the cursor position).

| feature-request,editor-folding | low | Major |
458,393,609 | flutter | Webview_flutter issue inside other scroll views | When using the webview_flutter widget inside another scroll view, it throws an exception:
```
══╡ EXCEPTION CAUGHT BY RENDERING LIBRARY ╞═════════════════════════════════════════════════════════
I/flutter ( 4342): The following assertion was thrown during performResize():
I/flutter ( 4342): RenderAndroidView object was given an infinite size during layout.
I/flutter ( 4342): This probably means that it is a render object that tries to be as big as possible, but it was put
I/flutter ( 4342): inside another render object that allows its children to pick their own size.
I/flutter ( 4342): The nearest ancestor providing an unbounded height constraint is:
I/flutter ( 4342): RenderIndexedSemantics#28d7e relayoutBoundary=up2 NEEDS-LAYOUT NEEDS-PAINT
I/flutter ( 4342): creator: IndexedSemantics ← NotificationListener<KeepAliveNotification> ← KeepAlive ←
I/flutter ( 4342): AutomaticKeepAlive ← SliverList ← Viewport ← IgnorePointer-[GlobalKey#583d5] ← Semantics ←
I/flutter ( 4342): Listener ← _GestureSemantics ←
I/flutter ( 4342): RawGestureDetector-[LabeledGlobalKey<RawGestureDetectorState>#91bcc] ← Listener ← ⋯
I/flutter ( 4342): parentData: index=0; layoutOffset=0.0 (can use size)
I/flutter ( 4342): constraints: BoxConstraints(w=411.4, 0.0<=h<=Infinity)
I/flutter ( 4342): semantic boundary
I/flutter ( 4342): size: MISSING
I/flutter ( 4342): index: 0
I/flutter ( 4342): The constraints that applied to the RenderAndroidView were:
I/flutter ( 4342): BoxConstraints(w=411.4, 0.0<=h<=Infinity)
I/flutter ( 4342): The exact size it was given was:
I/flutter ( 4342): Size(411.4, Infinity)
```
I have tried many solutions, even CustomScrollView with slivers, but nothing helps. | c: crash,framework,a: platform-views,has reproducible steps,P3,platform-views: vd,workaround available,a: plugins,team-framework,triaged-framework,found in release: 3.19,found in release: 3.21 | low | Major |
458,415,305 | godot | [GDScript] TextEdit cursor_get_column() and cursor_get_line() misbehaving | **Godot version:**
3.1.1 stable official
**OS/device including version:**
Windows 10 x64 version 1709
**Issue description:**
The in-editor embedded TextEdit plugin functions cursor_get_column() and cursor_get_line() always return the last possible column and line, respectively. The bug is editor-only; in-game both return the actual cursor position, as expected.
**Steps to reproduce:**
1. Embed TextEdit as editor plugin.
2. Connect signal "cursor_changed" from TextEdit to function and pass cursor_get_column() and cursor_get_line() as arguments.
3. Try to print or anyhow reference the arguments in said function.
4. See the bug yourself.
**Minimal reproduction project:**
http://www.mediafire.com/file/37gc6eqnamk5jn1/TextEdit+bug.zip
| bug,topic:editor,topic:plugin | low | Critical |
458,564,119 | flutter | Expose a way to detect System UI overlay changes. | Currently [`SystemChrome`](https://api.flutter.dev/flutter/services/SystemChrome-class.html) doesn't expose any methods to check whether the status bar and/or system navigation buttons are visible/hidden.
After some digging I found out that both Android and iOS expose similar native functionality (iOS only for the status bar, for obvious reasons).
I believe those should be exposed to the core Flutter framework.
Something along the lines of:
```dart
await SystemChrome.getEnabledSystemUIOverlays(); // returns List<SystemUiOverlay>
SystemChrome.onSystemUiOverlaysChanged = (List<SystemUiOverlay> overlays) {
if(overlays.contains(whatever)) {
doWhatever();
}
}
``` | c: new feature,platform-android,platform-ios,framework,would be a good package,c: proposal,a: layout,P3,team-framework,triaged-framework | low | Major |
458,593,758 | vscode | Allow QuickPicks to show right-aligned text like "recently opened" in the command palette | Extensions can't currently render text like "recently opened" here, but it'd be nice if we could.
<img width="659" alt="Screenshot 2019-06-20 at 12 09 46 pm" src="https://user-images.githubusercontent.com/1078012/59845114-77536200-9354-11e9-9d3f-c434b4820465.png">
This was raised in https://github.com/microsoft/vscode/issues/75236 and taken as a feature request, but the OP closed it, so here's a new request (cc @chrmarti) :-)
| feature-request,quick-pick | low | Major |
458,601,861 | flutter | How to change the background color of a BottomNavigationBarItem (child) when the user presses it? Not change the whole navbar color | I want each menu item to change color when it is clicked, not the entire background of the nav bar.
Like the example below:
[Here](https://i.stack.imgur.com/dXGaz.png)
When a user clicks on the "Near Me" menu, the box on that menu changes color, but the background box on the other menus stays black. Similarly, clicking another menu changes its color to red without changing the color of the menus that are not clicked (they stay black).
This is my code:
```
int _page = 0;
final List<Widget> _children = [
MainPage(),
MainPage(),
MainPage(),
MainPage(),
MainPage(),
];
...
bottomNavigationBar: Theme(
data: Theme.of(context).copyWith(
canvasColor: Color(0xFF3B3D58),
primaryColor: Colors.white,
textTheme: Theme.of(context).textTheme.copyWith(
caption: TextStyle(color: Colors.grey)
)
),
child: BottomNavigationBar(
type: BottomNavigationBarType.fixed,
currentIndex: _page,
items: [
BottomNavigationBarItem(
icon: Icon(Icons.home),
title: Text('Home'),
),
BottomNavigationBarItem(
icon: Icon(Icons.add_circle),
title: Text('Add Place'),
),
BottomNavigationBarItem(
icon: Icon(Icons.near_me),
title: Text('Near Me'),
),
BottomNavigationBarItem(
icon: Icon(Icons.favorite),
title: Text('Favorite'),
),
BottomNavigationBarItem(
icon: Icon(Icons.more_horiz),
title: Text('More'),
),
],
onTap: onTabTapped,
),
),
```
And this is the result of my code:
[Here](https://i.stack.imgur.com/SaUz1.png) | c: new feature,framework,f: material design,d: api docs,P3,workaround available,team-design,triaged-design | medium | Critical |
458,626,918 | godot | Strange visibility button highlight when project is start | **Godot version:**
3.2.dev.custom_build. c6507933a
**Issue description:**
When the project is running (and probably when errors occur), and I try to click the visibility button, the highlight shows up in unexpected places.

| bug,topic:editor,confirmed | low | Critical |
458,633,948 | rust | u8::reverse_bits is too slow | While upgrading the `bitintr` crate I re-ran its benchmarks and found out that the stable implementation there is much faster than the stabilized `u8::reverse_bits` intrinsic available on nightly.
I'm comparing this implementation of `u8::reverse_bits`:
```rust
fn rbit_u8(x: u8) -> u8 {
(((((x as u64) * 0x80200802_u64) & 0x0884422110_u64) * 0x0101010101_u64)
>> 32) as u8
}
```
vs `u8::reverse_bits`.
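As a quick sanity check of the multiply-and-mask constants (done here in Python purely for illustration, not as part of the benchmark), the expression can be compared against a naive per-bit reversal over the whole input domain:

```python
# Illustrative Python port of rbit_u8; the final `& 0xFF` plays the role of `as u8`.
def rbit_u8(x: int) -> int:
    return ((((x * 0x80200802) & 0x0884422110) * 0x0101010101) >> 32) & 0xFF

# Naive reference: reverse the 8-bit binary string representation.
def naive_reverse(x: int) -> int:
    return int(f"{x:08b}"[::-1], 2)

# The trick agrees with the reference for every possible input.
assert all(rbit_u8(v) == naive_reverse(v) for v in range(256))
print("ok")  # prints: ok
```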
My benchmark there isn't super tight; each iteration calls reverse_bits on all [0, 255] integers:
```rust
fn u8_runner<F: Fn(u8) -> u8>(bench: &mut Bencher, f: F) {
bench.iter(|| {
for v in 0..=u8::max_value() {
bencher::black_box(f(bencher::black_box(v)));
}
})
}
#[bench]
fn rbit_u8_std(bench: &mut Bencher) {
    u8_runner(bench, |x| x.reverse_bits())
}
#[bench]
fn rbit_u8_self(bench: &mut Bencher) {
    u8_runner(bench, |x| rbit_u8(x))
}
```
On my laptop (x86_64 1.8 GHz i5), I'm getting 343 ns/iter for `rbit_u8`, while for `u8::reverse_bits` I'm getting 619 ns/iter. Dividing by 256, that's 1.34 (mine) vs 2.42 (libstd) ns per bit-reverse.
All of this somehow rings a bell; the `bitintr` crate had a benchmark specifically for this operation, and it was previously comparing its own implementations against `core::intrinsic::bitreverse`, and it had a workaround for using its own implementation even when the user was on nightly and explicitly enabled using `core::intrinsics` via an `unstable` cargo feature. I guess I should have written a comment back then. | A-LLVM,I-slow,O-x86_64,T-compiler,C-bug,O-x86_32 | low | Major |
458,642,772 | rust | Please support using wait4 for Child and Command | On Linux, the `wait4` system call provides all the same information as `waitpid`, but additionally returns a `struct rusage` containing the resource usage of the child process. I'd love to have Child and Command provide a platform-specific extension trait that uses `wait4`, and then returns the `struct rusage` in addition to the exit status. | O-linux,T-libs-api,C-feature-request | low | Minor |
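For reference, Python's standard library already exposes this call as `os.wait4` on Unix; a small sketch (illustrative only, not a proposed Rust API) of the extra information `wait4` returns compared to `waitpid`:

```python
import os

pid = os.fork()
if pid == 0:
    os._exit(7)  # child: exit immediately with status 7

# wait4 = waitpid + resource accounting: the same exit-status information,
# plus a struct rusage for the terminated child.
wpid, status, usage = os.wait4(pid, 0)
print(wpid == pid, os.WEXITSTATUS(status), usage.ru_maxrss >= 0)  # prints: True 7 True
```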
458,645,949 | rust | Lint for `&String::into(): String` on final use of input string | PR #59825 added a `impl From<&String> for String`.
While perusing the `rustc` source, I am now seeing instances of `String` values being constructed and then passed via-reference into a function that wants a `String`. These would not have compiled without the change added by PR #59825.
It might be worthwhile to try to make a lint that detects the last use of a `String` that is passed by reference (`&`) into this conversion method (which effectively hides the clone taking place).
(You can see discussion of the hidden clone on #59827; I'm not seeking debate on the merits of PR #59825. I just am wondering if we could easily highlight the unnecessary clones via a lint.) | A-lints,T-lang | low | Minor |
458,665,631 | flutter | Return the current extent to which a drawer is opened |
## Use case
I have a use case where I need to know to what extent the drawer is open while the user is dragging it open.
Android has:
https://developer.android.com/reference/android/support/v4/widget/DrawerLayout.DrawerListener.html#onDrawerSlide%28android.view.View,%20float%29
I wanted something similar in Flutter
## Proposal
I've done the research, and the mechanism for this nearly exists.
In https://github.com/flutter/flutter/blob/master/packages/flutter/lib/src/material/drawer.dart
There exists in DrawerControllerState
```dart
_controller = AnimationController(duration: _kBaseSettleDuration, vsync: this)
..addListener(_animationChanged)
..addStatusListener(_animationStatusChanged);
```
with
```dart
void _animationChanged() {
setState(() {
// The animation controller's state is our build state, and it changed already.
});
}
```
We could just update the DrawerCallback to send this `_controller.value`, which would be the offset required.
The question that arises though is should
`typedef DrawerCallback = void Function(bool isOpened);`
be updated to
`typedef DrawerCallback = void Function(bool isOpened, double offset);`
or
`typedef DrawerCallback = void Function(DrawerUpdate update);`
where DrawerUpdate is a class with boolean and double data members
This would be a breaking change either way.
If someone could let me know which the team would prefer, I'll be willing to make the change and raise a PR. | c: new feature,framework,f: material design,P3,team-design,triaged-design | low | Critical |
458,705,986 | pytorch | nn.modules.functional.h does not support optional arguments | ## 🐛 Bug
FunctionalImpl binds a function one-to-one with its arguments and does not allow adding a new optional argument. If you add a non-optional argument, it will break BC.
## To Reproduce
Steps to reproduce the behavior:
1. I changed the signature of the avg_pool2d method, adding an optional argument
2. Got failure:
```
Jun 19 19:06:25 /opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/nn/pimpl.h:66:41: required from 'torch::nn::ModuleHolder<Contained>::ModuleHolder(Head&&, Tail&& ...) [with Head = at::Tensor (&)(const at::Tensor&, c10::ArrayRef<long int>, c10::ArrayRef<long int>, c10::ArrayRef<long int>, bool, bool, c10::optional<long int>); Tail = {int, int, int, bool, bool}; <template-parameter-2-3> = void; Contained = torch::nn::FunctionalImpl]'
Jun 19 19:06:25 /opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/nn/modules/functional.h:101:1: required from here
Jun 19 19:06:25 /usr/include/c++/5/functional:1410:7: error: static assertion failed: Wrong number of arguments for function
Jun 19 19:06:25 static_assert(sizeof...(_BoundArgs) == sizeof...(_Args),
Jun 19 19:06:25 ^
Jun 19 19:06:25 In file included from /opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/nn/modules.h:8:0,
Jun 19 19:06:25 from /opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:6,
Jun 19 19:06:25 from /opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/all.h:6,
Jun 19 19:06:25 from /opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/torch.h:3,
Jun 19 19:06:25 from /var/lib/jenkins/workspace/vision/torchvision/csrc/models/densenet.h:4,
Jun 19 19:06:25 from /var/lib/jenkins/workspace/vision/torchvision/csrc/models/densenet.cpp:1:
Jun 19 19:06:25 /opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/nn/modules/functional.h: In instantiation of 'torch::nn::FunctionalImpl::FunctionalImpl(SomeFunction, Args&& ...) [with SomeFunction = at::Tensor (*)(const at::Tensor&, c10::ArrayRef<long int>, c10::ArrayRef<long int>, c10::ArrayRef<long int>, bool, bool, c10::optional<long int>); Args = {int, int, int, bool, bool}; <template-parameter-1-3> = void]':
Jun 19 19:06:25 /opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/nn/pimpl.h:66:41: required from 'torch::nn::ModuleHolder<Contained>::ModuleHolder(Head&&, Tail&& ...) [with Head = at::Tensor (&)(const at::Tensor&, c10::ArrayRef<long int>, c10::ArrayRef<long int>, c10::ArrayRef<long int>, bool, bool, c10::optional<long int>); Tail = {int, int, int, bool, bool}; <template-parameter-2-3> = void; Contained = torch::nn::FunctionalImpl]'
Jun 19 19:06:25 /opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/nn/modules/functional.h:101:1: required from here
Jun 19 19:06:25 /opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/nn/modules/functional.h:73:41: error: no matching function for call to 'std::function<at::Tensor(at::Tensor)>::function(std::_Bind_helper<false, at::Tensor (*&)(const at::Tensor&, c10::ArrayRef<long int>, c10::ArrayRef<long int>, c10::ArrayRef<long int>, bool, bool, c10::optional<long int>), const std::_Placeholder<1>&, int, int, int, bool, bool>::type)'
Jun 19 19:06:25 std::forward<Args>(args)...)) {
```
## Expected behavior
It should be possible to add optional arguments to the function without breaking BC.
| module: cpp,module: nn,triaged | low | Critical |
458,763,042 | flutter | CircleBorder sizing doesn't match the size of the enclosed Text | ## Steps to Reproduce
Create a `RaisedButton` like this:
```dart
RaisedButton(
child: Text('Click me'),
onPressed: () {},
shape: CircleBorder(),
)
```
The button does become circular, but it's sized according to the height of the text, not the width:

The only workaround I have found is to wrap the whole thing in a SizedBox and hard-code a specific height that is large enough. This is a poor solution, as the hardcoded value depends on the concrete text.
## Version
```
flutter --version
Flutter 1.5.4-hotfix.2 • channel stable • https://github.com/flutter/flutter.git
Framework • revision 7a4c33425d (7 weeks ago) • 2019-04-29 11:05:24 -0700
Engine • revision 52c7a1e849
Tools • Dart 2.3.0 (build 2.3.0-dev.0.5 a1668566e5)
| ~/dev @ mit-macbookpro3 (mit)
``` | framework,f: material design,has reproducible steps,P2,found in release: 3.3,workaround available,found in release: 3.7,team-design,triaged-design | low | Major |
458,769,906 | pytorch | Consolidate definition of operators/gradients where possible | In working on #21088, there were cases where code changes needed to be made that were repetitive and could be error-prone. We could probably simplify/merge some of this code.
To modify an operator we need to update:
* Tensor.h, TensorMethods.h, and Type.h for C++ interfaces.
* native_functions.yaml for the primary interface definition used by codegen.
* aten/src/ATen/native/*Ops.cpp for the C++ operator implementation
* external changes for xla/msnpu extensions
* derivatives.yaml to define the c++ gradient
* shape_analysis.cpp to define the dtype/shape returned for the operator in the jit graph.
* symbolic_script.cpp to update the torch script definition of the operator and gradient
* symbolic_variable.h to update the jit graph for the operator
* symbolic_opset9.py to update the specification of the operator in onnx.
## Motivation
This isn't such a concern for compile-checked interface definitions like TensorMethods.h vs Tensor.h. Larger concerns are, for example:
* We have gradient implementations in both torchscript (symbolic_script.cpp) and c++ (in derivatives.yaml). It's possible for one to be inconsistent, and harder to test.
* Shape analysis determines dtypes of returned tensors using its own separate logic in parallel to the operator implementation. Changes made necessary by my modification of an operator were not caught by tests, and dtypes returned were already incorrect for some operators. A single path to determining the correct shape/dtype would make this more robust.
* It could be faster/easier to contribute changes if developers working on one part of the code don't necessarily need to be familiar with all of the code. Updating an onnx opset is somewhat challenging for someone unfamiliar with onnx.
## Pitch
* Gradients: Investigate if it's possible to generate the C++ gradient from torchscript or vice versa.
* Shapes analysis: Find some alternate way to determine dtype/shape returned. tracing?
* SymbolicVariable: This mostly seems derived from the operator interface definition. I'm not sure if codegen would make sense here.
* Interfaces: consider merging native_functions.yaml and derivatives.yaml to reduce repetition.
* Onnx: ???
## Alternatives
I don't know the best way to address these concerns, but I believe we could investigate and make some improvements to what we have. Some of this work (shape analysis?) may already be in progress.
cc @ezyang @bhosmer @smessmer @ljk53 @bdhirsh @ailzhang | module: internals,triaged,better-engineering | low | Critical |
458,808,072 | TypeScript | Babel-generated module with exports["default"] not importable |
**TypeScript Version:** 3.6.0-dev.20190620
**Search Terms:** default, exports, interop, babel
**Code**
```ts
// TS code
import Something from 'some-module';
```
```js
// node_modules/some-module/index.js, produced by `@babel/preset-env` defaults
"use strict";
Object.defineProperty(exports, "__esModule", {
value: true
});
exports["default"] = void 0;
// Bunch of injected interop/polyfill stuff
function TheMainExport() {
}
exports["default"] = TheMainExport;
```
**Expected behavior:**
TypeScript should be able to import the default export
**Actual behavior:**
This error occurs:
> Module '...node_modules/some-module/index.js' has no default export.
**More details:**
When the problem module is built with these `@babel/preset-env` options:
```
{
"targets": {
"esmodules": true
}
}
```
...then the emitted JavaScript uses the syntax `exports.default` instead of `exports["default"]`. Just manually changing the original JS to use the dot syntax allows TypeScript to see the module. For some reason, the bracket notation is not treated the same way.
Since TS importing from modules generated by babel is a common scenario, perhaps it ought to treat `exports["default"]` the same as `exports.default`.
Compiler flags used:
```
--allowJs
--allowSyntheticDefaultImports
--jsx preserve
--lib dom,es2017
--module esnext
--moduleResolution node
--noEmit
--noUnusedLocals
--noUnusedParameters
--preserveConstEnums
--sourceMap
--strict
--target esnext
--maxNodeModuleJsDepth 1
```
| Bug | low | Critical |
458,818,839 | scrcpy | Cursor and keyboard - DeskDock features | A feature I enjoy from DeskDock (no longer supported) is showing the mouse on the phone screen. The keyboard for DeskDock also works great and allows shortcuts to work (such as CTRL + left/right/backspace.)
Thank you! | feature request | low | Major |
458,834,875 | pytorch | Have a different way to check if gradient was computed in the optimizer (not checking for None) | ## 🚀 Feature
The feature I want is to be able to check if the gradient was actually computed or not for a specific parameter in the optimizer, instead of checking if the gradient is None (as done currently).
## Motivation
The motivation in my case comes from trying to implement some code that is doing multi-task learning. On each training iteration I'm getting a batch from a single task and computing gradients for that task-specific head (and the shared backbone for all tasks), while heads for other tasks are not supposed to be updated. This is indeed what happens on the first iteration, but after we have computed gradients for each task at least once, each one of them will be updated on *every* iteration, even when no gradients were computed for that task-specific head. The reason is that most optimizers will only ignore weights if their gradient is None, and otherwise will proceed to update them. Thus, after the gradients have been computed once, they will never be None again (since we set them to zero in optimizer.zero_grad), and on each iteration the current momentum and weight decay will be applied to weights for which we didn't actually compute gradients.
In my opinion this is very dangerous behavior because this is likely not what the user expects to happen, but the model could still work fine and train to some non-trivial accuracy even though each task weights receive additional weight decay and momentum updates. While it might not matter much in some cases, when the tasks are trained for different number of iterations (e.g. one task has 10 times more updates than the other), this might significantly harm performance. And there might be other cases besides multi-task learning, when not every weight is supposed to be updated on each iteration, where this behavior of the optimizer can cause some unexpected results without users even noticing this.
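A toy sketch of why this happens (plain Python, not PyTorch's actual optimizer code): once a gradient is zero rather than None, the momentum and weight-decay terms still move the weight, and only a None gradient is skipped:

```python
# Toy SGD-with-momentum step, illustrative only.
def sgd_step(param, lr=0.1, momentum=0.9, weight_decay=0.01):
    if param["grad"] is None:          # the only case optimizers skip today
        return
    g = param["grad"] + weight_decay * param["value"]
    param["buf"] = momentum * param["buf"] + g
    param["value"] -= lr * param["buf"]

p = {"value": 1.0, "grad": 1.0, "buf": 0.0}
sgd_step(p)                  # the task was trained once: buf is now non-zero

p["grad"] = 0.0              # what zero_grad() leaves behind: zero, not None
before = p["value"]
sgd_step(p)
print(p["value"] != before)  # prints: True — weight moved via momentum + decay

p["grad"] = None             # only None actually skips the parameter
before = p["value"]
sgd_step(p)
print(p["value"] == before)  # prints: True
```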
## Pitch
I am not sure how gradient flow is implemented internally in PyTorch, but conceptually it should be possible (and probably not too complicated) to have a way to check if the gradient was actually computed for each weight after the last call to .backward on the loss. And then this could be used to check if the optimizer should update weights or not, instead of checking for None.
## Alternatives
As an alternative solution, I can explicitly set gradients for weights that are not supposed to be updated to None on each iteration. However, it seems like that will introduce additional computational overhead on each iteration, because new memory will have to be allocated for the gradient storage.
Another solution could be to have separate optimizers for each task. However, in this case the optimizer buffers for shared backbone weights will be separate. This means that, for example, momentum will be accumulated separately for each task gradients and not merged together and this might also lead to suboptimal performance. I could also have a separate optimizer for shared weights, but this becomes more complicated when certain tasks share additional weights between each other (for example, part of the head could be shared between two tasks and not shared with other tasks, while backbone is shared across all tasks).
Another solution could be to set learning rate to zero on each iteration for the parameters that should not be updated. However, in this case, their momentum buffers will still get updated and this will result in different performance.
The solution I think I will end up doing is to modify the optimizer code to explicitly pass weights that should not be updated on the current iteration. This is also not ideal, because it is cumbersome to check which weights should and should not be updated on the current iteration, and I have to look at the names of the weights for this. And I also have to add this new logic for each optimizer I plan to use for my models.
Let me know if there is some other, easy to implement solution for this problem. However, even if that is the case, I think it might still be important to implement it by default in PyTorch or at least mention this in the documentation, or raise some kind of warning for this behavior. Because this problem will not lead to an immediate error and thus it is easy to overlook this and get unexpected results that could potentially harm performance of the models quite significantly.
| feature,module: autograd,module: optimizer,triaged | low | Critical |
458,848,247 | TypeScript | Add a flag for disabling all extensions | Sometimes, server extensions cause server crashes. After a limited number of restarts/retries in the default state, it might make sense for the editor to make a final attempt with a no-extensions flag. | Suggestion,In Discussion,API,Domain: TSServer | low | Critical |
458,881,935 | pytorch | SigAbort while running the Caffe2 unit test - thread_init_test - built on Clang7 +glibc 2.23 | ## 🐛 Bug
SigAbort while running the Caffe2 unit test - thread_init_test - built with Clang 7 + glibc 2.23
## To Reproduce
Run the Caffe2 unit tests
.jenkins/caffe2/test.sh
## Execution dump
[----------] Global test environment tear-down
[==========] 1 test from 1 test case ran. (1 ms total)
[ PASSED ] 1 test.
+ for test in '$(find "$cpp_test_dir" -executable -type f)'
+ case "$test" in
++ basename /root/.local/lib/python2.7/site-packages/torch/test/thread_init_test
+ LD_LIBRARY_PATH=/root/.local/lib/python2.7/site-packages/torch/lib
+ /root/.local/lib/python2.7/site-packages/torch/test/thread_init_test --gtest_output=xml:/root/caffe2/caffe2_tests/cpp/thread_init_test.xml
terminate called after throwing an instance of 'c10::Error'
what(): (at::get_num_threads()) == (given_num_threads) INTERNAL ASSERT FAILED at /root/caffe2/aten/src/ATen/test/thread_init_test.cpp:15, please report a bug to PyTorch. (test at /root/caffe2/aten/src/ATen/test/thread_init_test.cpp:15)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x47 (0x7f01ad6d7cd7 in /root/.local/lib/python2.7/site-packages/torch/lib/libc10.so)
frame #1: test(int) + 0x43e (0x4032ee in /root/.local/lib/python2.7/site-packages/torch/test/thread_init_test)
frame #2: main + 0xec (0x40365c in /root/.local/lib/python2.7/site-packages/torch/test/thread_init_test)
frame #3: __libc_start_main + 0xea (0x7f019ec0b71a in /lib64/libc.so.6)
frame #4: _start + 0x29 (0x402df9 in /root/.local/lib/python2.7/site-packages/torch/test/thread_init_test)
.jenkins/caffe2/test.sh: line 31: 7888 Aborted (core dumped) LD_LIBRARY_PATH="$ld_library_path" "$test" --gtest_output=xml:"$gtest_reports_dir/$(basename $test).xml"
## Expected behavior
No abort
## Environment
- PyTorch Version (e.g., 1.0): latest upstream code
- OS (e.g., Linux): Ubuntu 18.04
- How you installed PyTorch (`conda`, `pip`, source): source
- Build command you used (if compiling from source): .jenkins/caffe2/build.sh
- Python version: python 2.7
- CUDA/cuDNN version: -
- GPU models and configuration: ROCm 2.4
- Any other relevant information:
## Additional context
Problem seen only in the environment - Clang 7 + glibc 2.23
| caffe2 | low | Critical |
458,904,645 | flutter | `flutter run` should pause and retry on iOS device lock | Customer request from Q2 survey:
>Please make `flutter run` wait if the iOS device is locked, instead of exiting. Because Flutter exits, you have to rebuild everything again. | platform-ios,tool,customer: crowd,t: xcode,P3,team-ios,triaged-ios | low | Major |
458,926,547 | pytorch | Cannot update part of the parameters in DistributedDataParallel. | ## 🐛 Bug
When I use multiple GPUs while the loss is calculated from only part of the parameters, I get the following error. Using only one GPU works well.
## To Reproduce
Steps to reproduce the behavior:
Define a network in which the loss only depends on part of the parameters. We get:
```
RuntimeError: Expected to have finished reduction in the prior iteration before starting a
new one. This error indicates that your module has parameters that were not used in
producing loss. You can enable unused parameter detection by (1) passing the keyword
argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`;
(2) making sure all `forward` function outputs participate in calculating loss. If you
already have done the above two steps, then the distributed data parallel module wasn't
able to locate the output tensors in the return value of your module's `forward` function.
Please include the loss function and the structure of the return value of `forward` of
your module when reporting this issue (e.g. list, dict, iterable).
(prepare_for_backward at /pytorch/torch/csrc/distributed/c10d/reducer.cpp:429)
```
## Expected behavior
## Environment
PyTorch version: 1.2.0.dev20190620
CUDA used to build PyTorch: 9.0.176
OS: CentOS Linux release 7.5.1804 (Core)
GCC version: (crosstool-NG 1.23.0.449-a04d0) 7.3.0
CMake version: version 2.8.12.2
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: GeForce GTX 1080
GPU 1: GeForce GTX 1080
GPU 2: GeForce GTX 1080
GPU 3: GeForce GTX 1080
Nvidia driver version: 396.26
cuDNN version: Could not collect
Versions of relevant libraries:
[pip3] msgpack-numpy==0.4.3.2
[pip3] numpy==1.15.4
[pip3] pytorch-pretrained-bert==0.4.0
[pip3] torch==1.0.1.post2
[pip3] torchfile==0.1.0
[pip3] torchtext==0.4.0
[pip3] torchvision-nightly==0.2.1
[conda] pytorch-pretrained-bert 0.6.2 pypi_0 pypi
[conda] torch-nightly 1.2.0.dev20190620 pypi_0 pypi
[conda] torchfile 0.1.0 pypi_0 pypi
[conda] torchtext 0.4.0 pypi_0 pypi | oncall: distributed,triaged | medium | Critical |
458,926,604 | go | cmd/go: test coverage output twice |
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.12.6 linux/amd64
</pre>
### What did you do?
```
$ go test -json -cover ./...
```
### What did you expect to see?
Coverage statements output only once, or one of the events identified differently from the other.
```
{"Time":"2019-06-20T18:51:29.178821431-04:00","Action":"output","Package":"gotest.tools/assert/cmp","Output":"ok \tgotest.tools/assert/cmp\t(cached)\tcoverage: 91.1% of statements\n"}
{"Time":"2019-06-20T18:51:29.183457861-04:00","Action":"output","Package":"gotest.tools/assert","Output":"ok \tgotest.tools/assert\t(cached)\tcoverage: 85.2% of statements\n"}
```
### What did you see instead?
Coverage statements output twice.
```
{"Time":"2019-06-20T18:51:29.178811583-04:00","Action":"output","Package":"gotest.tools/assert/cmp","Output":"coverage: 91.1% of statements\n"}
{"Time":"2019-06-20T18:51:29.178821431-04:00","Action":"output","Package":"gotest.tools/assert/cmp","Output":"ok \tgotest.tools/assert/cmp\t(cached)\tcoverage: 91.1% of statements\n"}
{"Time":"2019-06-20T18:51:29.183452652-04:00","Action":"output","Package":"gotest.tools/assert","Output":"coverage: 85.2% of statements\n"}
{"Time":"2019-06-20T18:51:29.183457861-04:00","Action":"output","Package":"gotest.tools/assert","Output":"ok \tgotest.tools/assert\t(cached)\tcoverage: 85.2% of statements\n"}
```
Related to #23036, if one of the events could be identified differently the duplicate output would be easy to filter out.
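Until then, one way to suppress the duplicates downstream (an illustrative Python filter, assuming the bare lines always begin with `coverage:`) is to drop the standalone coverage events and keep the `ok` summary lines, which repeat the same figure:

```python
import json

# Two of the events from the report above, abbreviated.
raw = '''\
{"Action":"output","Package":"gotest.tools/assert/cmp","Output":"coverage: 91.1% of statements\\n"}
{"Action":"output","Package":"gotest.tools/assert/cmp","Output":"ok \\tgotest.tools/assert/cmp\\t(cached)\\tcoverage: 91.1% of statements\\n"}
'''

kept = [
    ev for ev in map(json.loads, raw.splitlines())
    # Drop the bare coverage line; the "ok" summary repeats the same figure.
    if not (ev["Action"] == "output" and ev["Output"].startswith("coverage:"))
]
for ev in kept:
    print(ev["Output"], end="")  # only the "ok ..." line survives
```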
I noticed that `go test -v` has the same behaviour, but I'm not sure why. | help wanted,NeedsInvestigation | low | Minor |
458,970,561 | rust | if/while Some(n) = &mut foo sugar will leak a temporary mutable borrow to current scope in particular situation | [Same Code in the Playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=b8592ca63a5f3b37a838cf65f3e665be)
```rust
pub struct Demo {
foo: Option<Box<Demo>>,
}
// Compiles
pub fn foo1(mut a: &mut Demo) {
if let Some(ref mut b) = a.foo {
a = b;
}
a.foo = None;
}
// Does not compile
pub fn foo2(mut a: &mut Demo) {
if let Some(b) = &mut a.foo {
a = b;
}
a.foo = None;
}
// Same Error with foo2
pub fn foo3(mut a: &mut Demo) {
if let Some(ref mut b) = &mut a.foo {
a = b;
}
a.foo = None;
}
```
What exactly is the difference between foo1 and foo2?
>ADD: I think foo1 will desugar into something like this:
```rust
pub fn foo1_desugered(mut a: &mut Demo) {
match a.foo {
Some(ref mut _1) => a = _1 ,
None => ()
}
a.foo = None;
}
```
and foo2 will be like:
```rust
pub fn foo2_desugered(mut a: &mut Demo) {
let mut _t1 = &mut a.foo;
match _t1 {
Some(_1) => a = _1 ,
None => ()
}
a.foo = None;
}
```
>ADD#2 And if I explicitly add a scope:
```rust
// Does not compile
pub fn foo2(mut a: &mut Demo) {
{
if let Some(b) = &mut a.foo {
a = b;
}
}
a.foo = None;
}
pub fn foo2_desugered(mut a: &mut Demo) {
{
let mut _t1 = &mut a.foo;
match _t1 {
Some(_1) => a = _1 ,
None => ()
}
}
a.foo = None;
}
```
same error :( | A-lifetimes,A-borrow-checker,T-compiler,C-bug | medium | Critical |
459,001,064 | flutter | Add key for BottomNavigationBarItem | I have checked this link:
https://github.com/flutter/flutter/blob/master/packages/flutter/test/material/bottom_navigation_bar_test.dart
It uses `tester.tap` to drive one of the BottomNavigationBarItems, but it is not very convenient if the app supports localization.
I think adding a key would make testing more flexible. | a: tests,framework,f: material design,P2,team-design,triaged-design | low | Major |
459,032,702 | rust | Output executable crashes with relocation-model=static | The following code crashes with "relocation-model=static" on Linux x86-64, reproducibly.
```rust
#![feature(nll)]
fn main() {
fn divide(numerator: f64, denominator: f64) -> Option<f64> {
if denominator == 0.0 {
None
} else {
Some(numerator / denominator)
}
}
let result = divide(2.0, 3.0);
match result {
Some(x) => println!("Result: {}", x),
None => println!("Cannot Divide by 0"),
}
}
``` | O-linux,T-compiler,C-bug | low | Critical |
459,044,657 | go | proposal: cmd/go: allow 'go get -u' to upgrade the targets of replacements |
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.12.5 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/mj/Library/Caches/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/mj/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/Cellar/go/1.12.5/libexec"
GOTMPDIR=""
GOTOOLDIR="/usr/local/Cellar/go/1.12.5/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/xb/dsd0_2b92x7bsl92xlzj6fbm0000gn/T/go-build803671561=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>
### What did you do?
go get -u github.com/dfang/auth
### What did you expect to see?
```
replace github.com/qor/auth => github.com/dfang/auth v0.0.0-20190621054412-96c274f7c597
```
just update/replace the hash with the updated one in go.mod
### What did you see instead?
the hash didn't change, but one dependency was added:
```
require(
....
github.com/dfang/auth v0.0.0-<hash>
.....
)
```
For now I have to copy the hash from `require`, delete that line, and update the hash in `replace`.
| Proposal,NeedsInvestigation,FeatureRequest,modules | medium | Critical |
459,137,118 | electron | Improve `app.showAboutPanel()` on Linux |
### Preflight Checklist
* [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/master/CONTRIBUTING.md) for this project.
* [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/master/CODE_OF_CONDUCT.md) that this project adheres to.
* [x] I have searched the issue tracker for a feature request that matches the one I want to file, without success.
### Problem Description
It's nice that `app.showAboutPanel()` works on Linux now, but how it works could be improved.
### Proposed Solution
#### 1. Add defaults
> On Linux, values must be set in order to be shown; there are no defaults.
This just makes it hard to use, since the macOS version has defaults. I don't see why you couldn't have the defaults and save people time. Electron already has the information:
- `applicationName`: Should default to `productName` in package.json.
- `applicationVersion`: Should default to `version` in package.json.
- `copyright`: Should default to `Copyright © 2019 Author Name`, where `Author Name` is the `author.name` field in package.json, if it exists.
- `website`: Should default to the `homepage` field in package.json, if it exists.
- `iconPath`: Should default to the app's icon.
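All of these defaults map onto fields that a typical package.json already carries, for example (illustrative values only):

```json
{
  "productName": "My App",
  "version": "1.2.3",
  "author": { "name": "Author Name" },
  "homepage": "https://example.com"
}
```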
#### 2. Add `credits` support for Linux
The setting is currently only available on macOS.
https://developer.gnome.org/gtk3/stable/GtkAboutDialog.html#gtk-about-dialog-set-authors
https://developer.gnome.org/gtk3/stable/GtkAboutDialog.html#gtk-about-dialog-add-credit-section
#### 3. Resize the icon to the correct size
The `iconPath` option should accept an image of any size and resize it to the correct size.
This is what I'm currently seeing:
<img width="1218" alt="Screen Shot 2019-06-21 at 17 13 02" src="https://user-images.githubusercontent.com/170270/59917531-6554ea80-944c-11e9-8a91-32e1c368062d.png">
### Alternatives Considered
Not using the native dialog and make a custom one with HTML and CSS, but I would strongly prefer my apps to look and act as native as possible.
### Additional Information
Electron 5.0.5
---
// @codebytere
| enhancement :sparkles: | low | Minor |
459,179,270 | go | cmd/go: running `go generate` with go tool with different GOROOT fails | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.12 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/eliben/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/eliben/eli/go"
GOPROXY="https://proxy.golang.org"
GORACE=""
GOROOT="/usr/local/go"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD="/home/eliben/eli/golang-go/src/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build618171788=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
1. Created a fresh clone of the Go repository.
2. `$ cd src/cmd/compile/internal/gc`
3. Edit `syntax.go` to add a new AST node type `OFOOBAR`
```
$ grep -C 4 OFOOBAR syntax.go
// Node ops.
const (
OXXX Op = iota
OFOOBAR
// names
ONAME // var or func name
ONONAME // unnamed arg or return value: f(int, string) (int, error) { etc }
```
4. Ran `go generate`
```
$ go generate
stringer: writing output: open class_string.go: permission denied
/usr/local/go/src/cmd/compile/internal/gc/go.go:43: running "stringer": exit status 1
```
Expected `op_string.go` to be updated with the new node type.
Also ran:
```
$ go generate syntax.go
```
No errors, but the generated `op_string.go` does not have the new node type.
### What did you expect to see?
Expected the new node type to appear in `op_string.go` after running `go generate`
| NeedsInvestigation,GoCommand | low | Critical |
459,200,871 | go | cmd/go: go fmt fails in symlinked directory | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.12 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
If latest release is go 1.12 then yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/rof/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/rof/"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/go"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build784063998=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
<!--
If possible, provide a recipe for reproducing the error.
A complete runnable program is good.
A link on play.golang.org is best.
-->
I have one directory which is a clone of my git repo. Let's call it `project`.
I have a symbolic link to that directory. Let's call it `link`.
This works fine
```
$ cd src/github.com/.../project
$ go fmt
```
This does not work fine
```
$ cd link
$ go fmt
stat ../src/github.com/.../main.go: no such file or directory
exit status 2
```
### What did you expect to see?
I expected `go fmt` to work with no error.
### What did you see instead?
I saw a `no such file or directory` error, but the file exists.
### Additional information
```
$ strace go fmt |& grep \\.\\./src
waitid(P_PID, 10294, stat ../src/github.com/.../main.go: no such file or directory
$ strace go fmt |& grep \\.\\./src
futex(0xe474e8, FUTEX_WAIT_PRIVATE, 0, NULLstat ../src/github.com/.../main.go: no such file or directory
```
`go fmt` in the `link` directory fails 100% of the time.
50% of the time, `futex` fails.
50% of the time, `waitid` fails.
This runs in the docker environment of my cloud CI, Codeship. I am reporting the bug here since everything else works fine.
Thank you | NeedsInvestigation | low | Critical |
459,222,249 | go | x/tools/imports: pathological behaviour when package result is in same module | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version devel +44c9354c5a Fri Jun 21 05:21:30 2019 +0000 linux/amd64
$ go list -m golang.org/x/tools
golang.org/x/tools v0.0.0-20190620191750-1fa568393b23
$ go list -m golang.org/x/tools/gopls
golang.org/x/tools/gopls v0.0.0-20190620191750-1fa568393b23
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GO111MODULE="on"
GOARCH="amd64"
GOBIN="/home/myitcv/gostuff/src/github.com/myitcv/govim/cmd/govim/.bin"
GOCACHE="/home/myitcv/.cache/go-build"
GOENV="/home/myitcv/.config/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GONOPROXY=""
GONOSUMDB=""
GOOS="linux"
GOPATH="/home/myitcv/gostuff"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/home/myitcv/gos"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/home/myitcv/gos/pkg/tool/linux_amd64"
GCCGO="gccgo"
AR="ar"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD="/home/myitcv/gostuff/src/github.com/myitcv/govim/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build649018026=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
Per discussion with @heschik on Slack.
> The other case is where I know a qualified identifier refers to a package in the current module, `imports` still heads off to scan the module cache
>
> It might make sense to kick off the two queries concurrently, i.e. current module and module cache, but the latter will take a _long_ time to come back with ~5G of code
>
> But I would expect the latter query to be cancelled when the former returns a match, which will be very fast.
On a machine with a large amount of code in `$GOPATH/pkg/mod`, the following script will roughly show the slow performance. I _think_ this is representative of the general case; other specific examples which are harder to reproduce exactly, exhibit even worse behaviour but are, I think, the same class of problem.
```
# install goimports
env GOPROXY=https://proxy.golang.org
go install golang.org/x/tools/cmd/goimports
# test goimports
env GOPROXY=$GOMODPROXY
cd mod
# run goimports
exec goimports -w main.go
# check result
cmp main.go main.go.golden
-- go.mod --
module goimports
require golang.org/x/tools v0.0.0-20190620191750-1fa568393b23
-- mod/go.mod --
module mod.com
-- mod/main.go --
package main
func main() {
p.Println("")
}
-- mod/main.go.golden --
package main
import "mod.com/p"
func main() {
p.Println("")
}
-- mod/p/p.go --
package p
func Println(s string) {}
```
On my machine (~5GB of code in the module cache), I see the following:
```
$ testscript -v -e GOPATH goimports_same_module_package.txt
...
# install goimports (0.091s)
> env GOPROXY=https://proxy.golang.org
> go install golang.org/x/tools/cmd/goimports
# test goimports (0.000s)
> env GOPROXY=$GOMODPROXY
> cd mod
$WORK/mod
# run goimports (0.959s)
> exec goimports -w main.go
# check result (0.000s)
> cmp main.go main.go.golden
PASS
```
### What did you expect to see?
`goimports` taking next to no time at all, because there is a "match" within the current module.
I _think_ `imports` needs to prioritise its search according to module distance from the main module, falling back to the module cache as a last resort.
Furthermore, I don't think the query against the module cache should be kicked off until modules reachable from the main module (including standard library of course) have been evaluated. Reason being, with a large amount of code in the module cache, the processing and IO cost can be huge.
### What did you see instead?
`goimports` taking a long time, ~10 seconds for some particularly bad cases. In `gopls`, if you are invoking `imports` via "format-on-save", which is a blocking operation, this is particularly painful.
cc for FYI @stamblerre @ianthehat
| NeedsInvestigation | low | Critical |
459,301,352 | go | x/net/http2: handle the case of more content received than was declared by server in content-length headers | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
1.12
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/asd/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/asd/MEGA/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/lib/go"
GOTMPDIR=""
GOTOOLDIR="/usr/lib/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build964683646=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
running this code:
https://gist.github.com/anysz/e32dfd5bd501da3836ee4960f2620f68
### What did you expect to see?
data decoded correctly, as with the HTTP/1 client
### What did you see instead?
panic: net/http: server replied with more than declared Content-Length; truncated
workaround:
`GODEBUG=http2client=0 go run` works fine
| NeedsDecision | low | Critical |
459,308,433 | flutter | Slight flicker on the screen when loading from app overview on Android | When switching between apps to an app created by Flutter on an Android 9.0 device, there may be a brief shadowy flicker of the app's screen (the screen is not completely black).
This can be reproduced by any app as far as I am aware.
1. Create a basic app by using `flutter create app` followed by `cd app` and `flutter run` on an Android 9.0 device (it might also affect other versions, I have not tested it on any other devices, but I heard that other people are able to reproduce it on their device).
2. Launch the application and go back into the running app overview.
3. Select another app that is open.
4. Wait a little bit (this seems to make the bug more likely to happen).
5. Go back into the overview and select the flutter app.
6. A quick slight flicker will appear on the screen. | platform-android,engine,customer: crowd,c: rendering,has reproducible steps,P2,found in release: 3.7,found in release: 3.9,team-android,triaged-android | low | Critical |
459,316,920 | godot | Inspector leaks memory when inspecting Remote scene tree | <!-- Please search existing issues for potential duplicates before filing yours:
https://github.com/godotengine/godot/issues?q=is%3Aissue
-->
**Godot version:**
<!-- Specify commit hash if non-official. -->
3.1.1
**OS/device including version:**
<!-- Specify GPU model and drivers if graphics-related. -->
Windows 7 Pro SP1
**Issue description:**
<!-- What happened, and what was expected. -->
Memory is leaked when a node from Remote scene tree is inspected.
https://www.dropbox.com/s/y488ah1jg9fmww4/inspector_leak_remote_scenetree.mp4?dl=0
**Steps to reproduce:**
Create any node, run the game and inspect this node in the Inspector.
You may also inspect 'root'. It leaks more memory.
**Minimal reproduction project:**
<!-- Recommended as it greatly speeds up debugging. Drag and drop a zip archive to upload it. -->
Not necessary. | bug,topic:core,topic:editor,confirmed | low | Critical |
459,373,301 | rust | Tidy doesn't check for unused extern crates | Since 44a01b8a54b078d15620d1133b94ee21ee7a6915, tidy no longer emits unused crate warnings -- i.e., Cargo.toml has a dependency but the associated library does not. Due to use of 2018 edition implicit imports, this is no longer an easy thing to do (any file within the crate can use the crate, and extern crate isn't required).
I'm thinking we can probably just remove this tidy lint for now in favor of something like https://github.com/rust-lang/rust/issues/57274 in the future. | T-bootstrap | low | Minor |
459,382,043 | go | cmd/cgo: documentation for C.GoString could be more complete | Documentation for `C.GoString()`, `C.GoStringN()`, and `C.GoBytes()` should probably explicitly document behaviour for "special" values (e.g. `NULL` pointer, negative integers).
https://golang.org/cmd/cgo/#hdr-Go_references_to_C
#### System details
```
go version go1.12.6 linux/amd64
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/nschelle/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/nschelle/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/go"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
GOROOT/bin/go version: go version go1.12.6 linux/amd64
GOROOT/bin/go tool compile -V: compile version go1.12.6
uname -sr: Linux 4.4.0-146-generic
Distributor ID: Ubuntu
Description: Ubuntu 16.04.6 LTS
Release: 16.04
Codename: xenial
/gnu/store/h90vnqw0nwd0hhm1l5dgxsdrigddfmq4-glibc-2.28/lib/libc.so.6: GNU C Library (GNU libc) stable release version 2.28.
gdb --version: GNU gdb (GDB) 8.3
```
| Documentation,help wanted,NeedsFix,compiler/runtime | low | Major |
459,392,787 | terminal | A few StateMachine unit tests don't fully test what they purport to | I'm not sure if this is right place to report this, because it's not a bug in the app itself, but it's an issue I noticed in the unit tests, which I think may be considered a bug.
As a specific example, consider the [`TestCursorKeysMode`](https://github.com/microsoft/terminal/blob/9b92986b49bed8cc41fde4d6ef080921c41e6d9e/src/terminal/parser/ut_parser/OutputEngineTest.cpp#L1162-L1177) test in OutputEngineTest.cpp:
```c++
TEST_METHOD(TestCursorKeysMode)
{
StatefulDispatch* pDispatch = new StatefulDispatch;
VERIFY_IS_NOT_NULL(pDispatch);
StateMachine mach(new OutputStateMachineEngine(pDispatch));
mach.ProcessString(L"\x1b[?1h", 5);
VERIFY_IS_TRUE(pDispatch->_fCursorKeysMode);
pDispatch->ClearState();
mach.ProcessString(L"\x1b[?1l", 5);
VERIFY_IS_FALSE(pDispatch->_fCursorKeysMode);
pDispatch->ClearState();
}
```
The second half of this test is assumedly meant to prove that the given escape sequence will reset the cursor keys mode. However, the `_fCursorKeysMode` flag is false by default, so you could replace that escape sequence with almost anything, including removing it altogether, and the test would still pass.
There are similar problems with `TestCursorBlinking`, `TestCursorVisibility`, and possibly also `TestAltBufferSwapping`.
When I added a similar test, I got around the problem by explicitly setting the flag I was testing to the opposite of what was expected, but I'm not sure if that is the best approach. Another possibility might be using `std::optional<bool>` instead of just a `bool`, so the default value could then be a `nullopt`, and it wouldn't be possible to pass the test based on the default value alone. The downside of that approach is the `VERIFY` steps become a little more complicated. | Product-Conhost,Help Wanted,Issue-Task,Area-CodeHealth | low | Critical |
459,393,877 | youtube-dl | Is it possible to abort if one of the formats is unavailable? | ## Checklist
- [x] I'm asking a question
- [x] I've looked through the README and FAQ for similar questions
- [x] I've searched the bugtracker for similar questions including closed ones
## Question
When I request a download from Youtube in formats "137,140" (for video and audio, without merging, because it is lengthy and useless), the following happens:
- Trying to download 137... not available, whatever, moving on
- Trying to download 140... success!
Your video is successfully downloaded!
(...but there is no video, only audio.)
Is it possible to do anything about this situation?
- Maybe add an option to abort if one of the requested formats is not available?
- Maybe add an option to disable merging when downloading in "137+140" form?
- Maybe download the video in both formats at the same time, merging them on the fly, and then saving only the output file? | question | low | Critical |
459,415,327 | flutter | Inline Widgets should have modified/ignore safe area changes for keyboard. | When nesting a whole app inline, bringing up the keyboard causes the app to display a big empty area corresponding to the safe area.
This safe area may or may not be relevant for the inlined app.

| a: text input,framework,P2,team-framework,triaged-framework | low | Minor |
459,419,656 | godot | Convex decomposition leads to inaccurate shape for simple meshes | **Godot version:** 05a0a68c72cc16c443301398ab93e8d838401ac0
**OS/device including version:**
Windows 10 x86_64
**Issue description:**
Using the old quickhull method, when using the Mesh->Create Convex Collision Sibling, a simple scaled cube mesh would produce a perfect convex hull collision shape. Now, with the new convex decomposition, the same simple cube mesh produces an inaccurate shape:

This issue is not that much of a problem for this example (only a small difference), but for more complex convex shapes it becomes a major problem.
**Steps to reproduce:**
Import this gltf file to a godot project: [cube.zip](https://github.com/godotengine/godot/files/3316424/cube.zip)
Use Mesh->Create Convex Collision Sibling(s)
See that the collision shape does not line up with the mesh instance (use wireframe mode for a better look)
**Minimal reproduction project:**
[test.zip](https://github.com/godotengine/godot/files/3316426/test.zip)
| bug,topic:core,confirmed,usability,topic:3d | medium | Major |
459,432,884 | terminal | Add support for turning off cursor blink | I don't like the blinking cursor in the terminal,
so I have tried to stop the cursor from blinking via the settings, but there is no option to do it.
How can I disable cursor blinking? | Help Wanted,Area-TerminalControl,Area-Settings,Product-Terminal,Issue-Task,Priority-2 | high | Critical |
459,434,454 | TypeScript | IntelliSense signature help forgets inferred parameter names in completed call | <!-- 🚨 STOP 🚨 𝗦𝗧𝗢𝗣 🚨 𝑺𝑻𝑶𝑷 🚨
Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Even if you think you've found a *bug*, please read the FAQ first, especially the Common "Bugs" That Aren't Bugs section!
Please help us by doing the following steps before logging an issue:
* Search: https://github.com/Microsoft/TypeScript/search?type=Issues
* Read the FAQ: https://github.com/Microsoft/TypeScript/wiki/FAQ
Please fill in the *entire* template below.
-->
Per https://github.com/microsoft/TypeScript/issues/31845#issuecomment-504615727, posting this as a new ticket for posterity.
<!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. -->
**TypeScript Version:** 3.6.0-dev.20190621
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:** n/a
**Code**
```ts
function frob<T, A extends any[]>(f: (...args: A) => T, ...args: A)
{
return f(...args);
}
function blub(foo: string, bar: number) {}
// press Ctrl+Shift+Space inside the parentheses here:
frob(blub, "banana slamma", 812);
```
Parameter names are correctly displayed while initially typing out the `frob()` function call:

**However**, once all the arguments are filled out, it reverts to generic names such as `args_0`, `args_1`, etc. ☹️

**Expected behavior:**
Parameter names originating from an inferred parameter tuple should be preserved in IntelliSense.
**Actual behavior:**
Parameter names are preserved while typing out the function call, but once the call is completed it reverts to using generic parameter names and remains that way for all subsequent invocations of signature help for that call (unless an argument is deleted).
**Playground Link:** <!-- A link to a TypeScript Playground "Share" link which demonstrates this behavior -->
[⏯ Click here](https://typescript-play.js.org/#code/GYVwdgxgLglg9mABMATnARgHgCoBpECCiApgB5TFgAmAzogIZgCeA2gLoB8AFMAFyJcAdMPooA5jX4EAlIgC8HRHkTDBoiVOkAoAN5bEBxCmJQQKJMCEjxNaQG4tAXy1bQkWAkToANiHQ84OH4aKBQYMDF8dFF+MBAAW3RiFFkdZy0AegzEAAdjGjoAYVDvAGoAZQALGGAoCpz6CGJEcJoYKmaoSuaG4zAu4hpBxG7jXlc0fx8-fAAiaLBGekQab3p4+PpZ-AAOAEYAJnstIA)
**Related Issues:** <!-- Did you find other bugs that looked similar? -->
#31845 | Bug | low | Critical |
459,441,656 | go | net: a smart way to check internal/poll ErrTimeout | Hello guys.
It looks like we have no smart way to check the internal/poll ErrTimeout when we set the deadline on the connection.
The `net.Error` interface is not good enough as following
```go
// An Error represents a network error.
type Error interface {
	error
	Timeout() bool   // Is the error a timeout?
	Temporary() bool // Is the error temporary?
}
```
The `Timeout()` method may mislead the caller into thinking it is a deadline error when the error is actually caused by a `syscall.Errno` (`e == EAGAIN || e == EWOULDBLOCK || e == ETIMEDOUT`) rather than the internal/poll ErrTimeout.
Maybe we need a smart way to check the deadline error?
Many thanks in advance:)
| NeedsInvestigation,FeatureRequest | low | Critical |
459,446,867 | create-react-app | can not create react app Unexpected end of JSON input while parsing near '...webpack-dev-middlewar' | <!--
PLEASE READ THE FIRST SECTION :-)
-->
### Is this a bug report?
(write your answer here)
yes
<!--
If you answered "Yes":
Please note that your issue will be fixed much faster if you spend about
half an hour preparing it, including the exact reproduction steps and a demo.
If you're in a hurry or don't feel confident, it's fine to report bugs with
less details, but this makes it less likely they'll get fixed soon.
In either case, please fill as many fields below as you can.
If you answered "No":
If this is a question or a discussion, you may delete this template and write in a free form.
Note that we don't provide help for webpack questions after ejecting.
You can find webpack docs at https://webpack.js.org/.
-->
### Did you try recovering your dependencies?
<!--
Your module tree might be corrupted, and that might be causing the issues.
Let's try to recover it. First, delete these files and folders in your project:
* node_modules
* package-lock.json
* yarn.lock
Then you need to decide which package manager you prefer to use.
We support both npm (https://npmjs.com) and yarn (http://yarnpkg.com/).
However, **they can't be used together in one project** so you need to pick one.
If you decided to use npm, run this in your project directory:
npm install -g npm@latest
npm install
This should fix your project.
If you decided to use yarn, update it first (https://yarnpkg.com/en/docs/install).
Then run in your project directory:
yarn
This should fix your project.
Importantly, **if you decided to use yarn, you should never run `npm install` in the project**.
For example, yarn users should run `yarn add <library>` instead of `npm install <library>`.
Otherwise your project will break again.
Have you done all these steps and still see the issue?
Please paste the output of `npm --version` and/or `yarn --version` to confirm.
-->
(Write your answer here.)
There is actually no package.json preinstalled .
### Which terms did you search for in User Guide?
<!--
There are a few common documented problems, such as watcher not detecting changes, or build failing.
They are described in the Troubleshooting section of the User Guide:
https://facebook.github.io/create-react-app/docs/troubleshooting
Please scan these few sections for common problems.
Additionally, you can search the User Guide itself for something you're having issues with:
https://facebook.github.io/create-react-app/
If you didn't find the solution, please share which words you searched for.
This helps us improve documentation for future readers who might encounter the same problem.
-->
(Write your answer here if relevant.)
### Environment
<!--
To help identify if a problem is specific to a platform, browser, or module version, information about your environment is required.
This enables the maintainers quickly reproduce the issue and give feedback.
Run the following command in your React app's folder in terminal.
Note: The result is copied to your clipboard directly.
`npx create-react-app --info`
Paste the output of the command in the section below.
-->
(paste the output of the command here)
$ npx create-react-app react-spa-demo
npx: installed 91 in 46.343s
Creating a new React app in D:\WebstormProjects\react-spa-demo.
Installing packages. This might take a couple of minutes.
Installing react, react-dom, and react-scripts...
npm ERR! Unexpected end of JSON input while parsing near '...webpack-dev-middlewar'
npm ERR! A complete log of this run can be found in:
npm ERR! C:\Users\lenovo\AppData\Roaming\npm-cache\_logs\2019-06-22T07_18_33_539Z-debug.log
Aborting installation.
npm install --save --save-exact --loglevel error react react-dom react-scripts has failed.
Deleting generated file... package.json
Deleting react-spa-demo/ from D:\WebstormProjects
Done.
### Steps to Reproduce
<!--
How would you describe your issue to someone who doesn’t know you or your project?
Try to write a sequence of steps that anybody can repeat to see the issue.
-->
(Write your steps here:)
1. npx create-react-app react-spa-demo
2.
3.
### Expected Behavior
<!--
How did you expect the tool to behave?
It’s fine if you’re not sure your understanding is correct.
Just write down what you thought would happen.
-->
(Write what you thought would happen.)
create this app successfully without exceptions
### Actual Behavior
<!--
Did something go wrong?
Is something broken, or not behaving as you expected?
Please attach screenshots if possible! They are extremely helpful for diagnosing issues.
-->
(Write what happened. Please add screenshots!)
$ npx create-react-app react-spa-demo
npx: installed 91 in 46.343s
Creating a new React app in D:\WebstormProjects\react-spa-demo.
Installing packages. This might take a couple of minutes.
Installing react, react-dom, and react-scripts...
npm ERR! Unexpected end of JSON input while parsing near '...webpack-dev-middlewar'
npm ERR! A complete log of this run can be found in:
npm ERR! C:\Users\lenovo\AppData\Roaming\npm-cache\_logs\2019-06-22T07_18_33_539Z-debug.log
Aborting installation.
npm install --save --save-exact --loglevel error react react-dom react-scripts has failed.
Deleting generated file... package.json
Deleting react-spa-demo/ from D:\WebstormProjects
Done.
### Reproducible Demo
<!--
If you can, please share a project that reproduces the issue.
This is the single most effective way to get an issue fixed soon.
There are two ways to do it:
* Create a new app and try to reproduce the issue in it.
This is useful if you roughly know where the problem is, or can’t share the real code.
* Or, copy your app and remove things until you’re left with the minimal reproducible demo.
This is useful for finding the root cause. You may then optionally create a new project.
This is a good guide to creating bug demos: https://stackoverflow.com/help/mcve
Once you’re done, push the project to GitHub and paste the link to it below:
-->
(Paste the link to an example project and exact instructions to reproduce the issue.)
<!--
What happens if you skip this step?
We will try to help you, but in many cases it is impossible because crucial
information is missing. In that case we'll tag an issue as having a low priority,
and eventually close it if there is no clear direction.
We still appreciate the report though, as eventually somebody else might
create a reproducible example for it.
Thanks for helping us help you!
-->
| issue: needs investigation | high | Critical |
459,461,835 | terminal | Feature Request: Smoothly scroll in new lines | # Summary of the new feature/enhancement
Some hardware terminals allowed smoothly scrolling in new lines. It should be possible to implement this kind of feature in a software terminal emulator as well.
YouTube demonstration (VT525 smooth scrolling):
https://www.youtube.com/watch?v=Iju_pOQM0a0 | Issue-Feature,Help Wanted,Area-Rendering,Area-TerminalControl,Product-Terminal | medium | Critical |
459,472,143 | rust | Lifetime error's span is extremely unhelpful | # Without explicit lifetime
```
error[E0621]: explicit lifetime required in the type of `expr`
--> typescript/checker/src/analyzer/expr.rs:14:84
|
14 | pub(super) fn type_of<'e>(&'e self, expr: &Expr) -> Result<TypeRef<'e>, Error> {
| _______________________________________________-----________________________________^
| | |
| | help: add explicit lifetime `'e` to the type of `expr`: `&'e swc_ecma_ast::Expr`
15 | | let span = expr.span();
16 | |
17 | | match *expr {
... |
340 | | }
341 | | }
| |_____^ lifetime `'e` required
```
# With explicit lifetime
```
error[E0623]: lifetime mismatch
--> typescript/checker/src/analyzer/expr.rs:14:91
|
14 | pub(super) fn type_of<'e, 'a>(&'e self, expr: &'a Expr) -> Result<TypeRef<'e>, Error> {
| ___________________________________________________--------_____--------------------------_^
| | |
| | this parameter and the return type are declared with different lifetimes...
15 | | let span = expr.span();
16 | |
17 | | match *expr {
... |
340 | | }
341 | | }
| |_____^ ...but data from `expr` is returned here
```
Error's span points the whole function, and in my case, it's 300 lines.
| C-enhancement,A-diagnostics,A-lifetimes,T-compiler,E-needs-mcve,D-papercut | low | Critical |
459,476,828 | terminal | Feature: Add support for Infinite Scrollback | # Summary of the new feature/enhancement
I know at least gnome-terminal supports infinite scrollback as a checkbox option in profile settings; it would be great to have that here (I'd love to be able to replace X server+gnome-terminal for WSL).
I checked to make sure that there wasn't an issue for this already, and I saw that you have a TODO comment in the code for this, so I figured I'd open a GitHub issue so that you can use the 👍 to figure out how to rank it.
<!--
A clear and concise description of what the problem is that the new feature would solve.
Describe why and how a user would use this new functionality (if applicable).
-->
# Proposed technical implementation details (optional)
Looks like the plan is for users to set HistorySize to `-1`
https://github.com/microsoft/terminal/blob/900d0c3cce39fa191e16b0224fa32c3441f8de24/src/cascadia/TerminalCore/Terminal.cpp#L88
| Issue-Feature,Area-TerminalControl,Product-Terminal | high | Critical |
459,494,553 | flutter | Support shorter dev/testing cycles |
## Use case
For those of us who use TDD or rely heavily on unit testing, we need to be able to re-run our tests and view the coverage report quickly. Opening the browser to view the coverage report takes too long to support short dev cycles.
## Proposal
Pull some features/use-cases from the [Jest](https://jestjs.io/docs/en/cli#running-from-the-command-line) testing framework like watching, filtering, and coverage output.
| a: tests,c: new feature,framework,P3,team-framework,triaged-framework | low | Critical |
459,497,532 | godot | select multiple particle2d and change same property bug | **Godot version:**
3.1.1
**OS/device including version:**
mac 10.14.4
**Issue description:**
Can't change a property on all selected Particles2D nodes at the same time.
**Steps to reproduce:**
1. Select two or three Particles2D nodes.
2. Change the gravity on them.
3. Check each node's gravity; only one particle was actually modified.
**Suggestions:**
1. [good] Change the property on all selected Particles2D nodes at the same time.
2. [bad] Hide all properties when multiple nodes are selected.
| enhancement,topic:editor,usability,topic:particles | low | Critical |
459,514,240 | terminal | Mouse pointer should be text and not arrow one |
# Environment
```none
Windows build number: 10.0.18362.175
Windows Terminal version (if applicable): 0.2.1715.0
Any other software?
```
# Steps to reproduce
Move the mouse over some text inside the terminal.
# Expected behavior
The mouse pointer should be the text-selection cursor (Ꮖ), since the text is selectable, not the default arrow (↖) pointer.
# Actual behavior
It's the default arrow pointer that's used.
| Help Wanted,Area-TerminalControl,Product-Terminal,Issue-Task | medium | Critical |
459,518,205 | terminal | Feature Request: CornerRadius on Cursor Shape | # Summary of the new feature/enhancement
When setting the cursor shape to **filledBox** or **vintage**, it would be nice if you could round the corners of that shape.
# Proposed technical implementation details (optional)
If the cursor is drawn from a shape, then you could add a setting such as `cursorCornerRadius: "2, 2, 2, 2"` and apply it to that shape. If, however, the cursor is drawn from a glyph in the font, then perhaps Cascadia Code could provide rounded boxes for the cursors.
| Help Wanted,Area-Rendering,Area-Settings,Product-Terminal,Issue-Task,good first issue | low | Major |
459,529,706 | rust | E0515 searching internet for the phrase "owned value" does not give helpful results | https://doc.rust-lang.org/error-index.html#E0515
> Consider returning an owned value instead:
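For context, a minimal sketch of what "owned value" means here: returning a reference to a function-local value triggers E0515, and returning the owning `String` itself is the suggested fix.

```rust
// This version fails with E0515: the String is dropped when the function
// returns, so a reference to it cannot escape.
// fn broken() -> &'static str {
//     let s = String::from("hello");
//     &s
// }

// Returning an owned value transfers ownership to the caller instead.
fn fixed() -> String {
    String::from("hello")
}

fn main() {
    println!("{}", fixed());
}
```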
Searching for this phrase does not define it within the first page of a google search. | C-enhancement,A-docs | low | Critical |
459,542,018 | scrcpy | QtScrcpy | Amazing project. Thank you!
I reimplemented scrcpy with a different technology stack (Qt + OpenGL) and added a few things:
- an interface for interaction.
- connecting multiple devices to the same application
Of course I want to share it. [QtScrcpy](https://github.com/barry-ran/QtScrcpy)
 | gui | low | Major |
459,562,282 | TypeScript | Allow inference of class generics when declaring an interface | ## Search Terms
interface, inference, extending class
## Suggestion
I'd like to be able to strongly type an interface such that when it is used on a class that has a generic I can infer the type of that generic to then use in the declaration of the interface.
## Use Cases
I want to use this within a library that exposes both an abstract class and an optional interface that developers can opt-in to use. The abstract class has one generic parameter, and in order to correctly type the interface without duplicate generic declaration I want to be able to infer the abstract class' generic type.
Currently the inheriting class declarations must duplicate their generic type declaration.
This issue is similar to https://github.com/microsoft/TypeScript/issues/26242 I think, but I don't see how the resolution of that issue will fix this one.
## Examples
```ts
abstract class Base<T> {
// implementation omitted for brevity
}
// ⬇️this is the proposed new syntax ⬇️
interface Remap<T> extends Base<infer U> {
remap(input: U): T;
}
class Foo extends Base<string> implements Remap<number> {
public remap(input: string): number {
return input.length; // e.g. derive a number from the string
}
}
```
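For contrast, a sketch of what the lack of inference forces today: the input type has to be declared twice and kept in sync by hand (names here are illustrative).

```typescript
abstract class Base2<T> {
  // implementation omitted for brevity
}

// Without inference, the interface needs its own copy of the input type:
interface Remap2<U, T> {
  remap(input: U): T;
}

// ...and every class must repeat the `string` type parameter it already
// passed to Base2:
class Foo2 extends Base2<string> implements Remap2<string, number> {
  public remap(input: string): number {
    return input.length;
  }
}

console.log(new Foo2().remap("hello")); // 5
```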
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback | low | Minor |
459,565,536 | vscode | Snippets: Scope by specific file or pattern | Global snippets already support being scoped by language and language file snippets are naturally scoped. I would like to expand that "scoping" logic to specific files by matching the filename, for example.
A good use case: let's say I want to create snippets for a travis.yml file. Right now what I do is create the snippet scoped to the YAML language.
The problem is that it will appear for every YAML file which will pollute the autocomplete with not very useful results if you have snippets for multiple kinds of files that have the same extension. (I don't want to see "travis" related snippets in a "gitlab-ci.yml" file for example) as they are not applicable.
I think this is most critical for configuration files where they have a very specific scope and that you can easily restrict by looking at the filename.
Going further, we could even expand this concept for other kinds of snippets. For example, I can have some project level snippets for building queries and I only want to enable them, in Repository classes and not in Controller classes. You could scope the files it applies using a regex pattern, for example.
I guess this shouldn't be very hard to do and it would vastly improve the suggestions quality of the snippets.
I see this as adding a new field similar to "scope", called "context" or something like it, where you can specify a regex that will be matched against the filename.
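For illustration, a Travis-scoped snippet under this proposal might look like the following, where `context` is the proposed (hypothetical) field and everything else is today's snippet format:

```json
{
  "Travis language entry": {
    "scope": "yaml",
    "context": "^\\.?travis\\.ya?ml$",
    "prefix": "travis-lang",
    "body": ["language: ${1:node_js}"]
  }
}
```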
| help wanted,feature-request,snippets | medium | Critical |
459,583,760 | rust | Use getrandom crate for retrieving system entropy? | Right now `std` and [`getrandom`](https://github.com/rust-random/getrandom) essentially duplicate each other. And Rust already depends on `rand` (via `tempfile`) which in v0.7 will use `getrandom`. So I think it makes sense to use a single implementation, to keep work on correctly retrieving system entropy focused in one place.
Note that right now I do not propose exposing the `getrandom` API as part of `std` or introducing a lang item; that's a separate discussion, see rust-random/getrandom#21. Also, it will probably be better to wait until rust-random/getrandom#13 is closed, which should happen relatively soon.
cc @dhardy @tarcieri | C-enhancement,T-libs-api | medium | Major |
459,591,851 | terminal | Feature Request - Extend Scheme Features | # Summary of the new feature/enhancement
Allow stylistic properties of a profile to be set by a scheme so these properties do not need to be duplicated among profiles. These properties should also have global default values.
# Proposed technical implementation details (optional)
Suggested properties:
* `acrylicOpacity`
* `useAcrylic`
* `fontFace`
* `fontSize`
* `cursorShape`
* `padding`
Additionally, any of the properties set by a scheme should be able to be overridden by the profile, as is the case for `background`.
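For example, a scheme carrying these properties might look like this (the style keys are hypothetical additions; only the color-related keys exist today):

```json
{
    "name": "Campbell Acrylic",
    "background": "#0C0C0C",
    "foreground": "#CCCCCC",
    "useAcrylic": true,
    "acrylicOpacity": 0.75,
    "fontFace": "Cascadia Code",
    "fontSize": 10,
    "cursorShape": "filledBox",
    "padding": "8, 8, 8, 8"
}
```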
The style of the UI would be determined in this way:
1. Check for properties in the profile
2. Check for any missing properties in the scheme
3. Set any remaining properties using their global defaults | Issue-Feature,Area-Settings,Product-Terminal | low | Minor |
459,601,690 | neovim | digraph insertion replaces next character temporarily |
- `nvim --version`: v0.4.0-474-gdfb7f6b34
- Vim (version: 8.1) behaves differently? no
- Operating system/version: Ubuntu
- Terminal name/version: conhost 18362
- `$TERM`: xterm-256color
### Steps to reproduce using `nvim -u NORC`
```
nvim -u NORC
ix←^k
```
### Actual behaviour
The previously entered `x` is overwritten with the digraph-in-progress `?`
### Expected behaviour
The `?` should be inserted where the new digraph is expected to appear, rather than overwriting the next character. In my case, the digraph I was inserting was based on the next character, which I could no longer see. | enhancement,display,core | low | Minor |
459,604,265 | flutter | Improve rendering of rectangular icons | ## The problem
Any non-trivial app is likely to use external icon packages, like FontAwesome, which do not always have perfectly square icons. Unfortunately, rectangular icons are not guaranteed to be rendered correctly, and in fact they are not. For example, they are not properly centered: https://github.com/flutter/flutter/issues/24054.
Another thing you may consider a "problem" is that different fonts may have uneven sizes (e.g. FontAwesome). In fact, the centering issue above is related to the fact that some FontAwesome rectangular icons are wider than `size` pixels at `fontSize=size`.
## Proposal
Solving the centering problem is trivial: just pass `textAlign: TextAlign.center` to the `RichText` in `Icon.build`. I have a PR ready for that, but I wanted to discuss an alternative solution before submitting it.
If you consider the uneven sizes between fonts a problem, or you simply want to avoid overflowing the `SizedBox`, then one could wrap the `RichText` in a `FittedBox` widget. The problem of uneven font sizes is not completely solved, but it is certainly alleviated.
Is there any particular reason for not using a `FittedBox`? Does it affect performance in a significant way?
Let me know which solution you prefer (if any) and I'll submit a PR. | framework,f: material design,a: quality,customer: crowd,P2,team-design,triaged-design | low | Major |
459,607,018 | godot | Better node boundary box identification | There might be some confusion when working with certain nodes that involve placing control points (such as Path2D) and the node's boundary box:

This is what a newly created Path2D node looks like. Is the red dot a curve control point? Not knowing beforehand, I'd say yes, but creating a new point makes the original one disappear. Or, if grid snap is enabled, a new one appears alongside the old one (which isn't even a control point).

Continuing to add more points leads to even more of them showing up, often overlapping, which is quite confusing.



I think a better way of handling this would be differently colored/shaped boundary rectangle points, or an option to disable them. | enhancement,topic:editor,usability,topic:2d | low | Minor |
459,609,119 | create-react-app | Allow recovery from invalid ~/.yarn/bin/create-react-app | ### Is this a bug report?
This is an enhancement request (your template doesn't cover this, as enhancement requests are neither bugs nor questions/discussion).
### Did you try recovering your dependencies?
Doesn't apply afaik (see steps to reproduce)
### Which terms did you search for in User Guide?
None, I'm suggesting an enhancement from the "Don't make me think" perspective of design. Any search result would be irrelevant afaik.
### Environment
```
> npx create-react-app --info
npx: Installierte 91 in 10.266s
Environment Info:
System:
OS: Linux 5.0 Ubuntu 19.04 (Disco Dingo)
CPU: (8) x64 Intel(R) Core(TM) i7-3632QM CPU @ 2.20GHz
Binaries:
Node: 10.15.2 - /usr/bin/node
Yarn: 1.16.0 - /usr/bin/yarn
npm: 6.9.0 - /usr/local/bin/npm
Browsers:
Chrome: Not Found
Firefox: 67.0.3
npmPackages:
react: Not Found
react-dom: Not Found
react-scripts: Not Found
npmGlobalPackages:
create-react-app: Not Found
```
### Steps to Reproduce
1. change into `/tmp`
2. run `yarn create react-app my-app` following the `README` or this project
### Expected Behavior
The tool should create the react app and handle all recoverable errors gracefully, or present choosable actions to the user to handle errors.
### Actual Behavior
```
> yarn create react-app my-app
yarn create v1.16.0
[1/4] Resolving packages...
[2/4] Fetching packages...
[3/4] Linking dependencies...
[4/4] Building fresh packages...
success Installed "[email protected]" with binaries:
- create-react-app
error An unexpected error occurred: "ENOENT: no such file or directory, chmod '/home/richter/.yarn/bin/create-react-app'".
info If you think this is a bug, please open a bug report with the information provided in "/home/richter/.config/yarn/global/yarn-error.log".
info Visit https://yarnpkg.com/en/docs/cli/create for documentation about this command.
```
`/home/richter` is my `HOME` directory
This is caused by `/home/richter/.yarn/bin/create-react-app` being a symlink pointing to a nonexistent file:
```
> stat ~/.yarn/bin/create-react-app
File: /home/richter/.yarn/bin/create-react-app -> ../../.config/yarn/global/node_modules/.bin/create-react-app
Size: 60 Blocks: 0 IO Block: 4096 symbolic link
Device: fd03h/64771d Inode: 3289638326 Links: 1
Access: (0777/lrwxrwxrwx) Uid: ( 1000/ richter) Gid: ( 1000/ richter)
Access: 2019-06-23 21:27:31.034193522 +0200
Modify: 2019-06-23 21:23:15.094737846 +0200
Change: 2019-06-23 21:23:15.094737846 +0200
Birth: -
```
Since removal of `~/.yarn/bin/create-react-app` doesn't fix the issue, there seems to be a tool or dependency misbehaving. This might be a user error, but as I pointed out, my interest is in letting users in the same situation "not think".
`npx create-react-app my-app` successfully creates a react app.
### Reproducible Demo
Trivial based on steps to reproduce. | issue: proposal | low | Critical |
459,609,870 | pytorch | ASSERT FAILED at /pytorch/torch/csrc/autograd/generated/VariableType_0.cpp:12721 | ## 🐛 Bug
I am trying to run code for a TCAV score from Google Brain. I am using this code [here](https://github.com/MotJuMi/TCAV) for a PyTorch implementation. I am running into the following error which suggests I report the bug. This is all being done on Google Colab as well.
This is the error contained in the actual TCAV code
```
W0624 22:30:33.434192 140182358902656 deprecation_wrapper.py:119] From /content/tcav.py:330: The name tf.logging.info is deprecated. Please use tf.compat.v1.logging.info instead.
W0624 22:30:40.640806 140182358902656 lazy_loader.py:50]
The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
* https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
* https://github.com/tensorflow/addons
* https://github.com/tensorflow/io (for I/O related ops)
If you depend on functionality not listed there, please file an issue.
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-20-48f4c99b8232> in <module>()
11 num_random_exp=num_random_exp)
12
---> 13 results = mytcav.run()
17 frames
/content/tcav.py in run(self, num_workers, run_parallel)
179 for i, param in enumerate(self.params):
180 tf.logging.info('Running param %s of %s' % (i, len(self.params)))
--> 181 results.append(self._run_single_set(param))
182 tf.logging.info('Done running %s params. Took %s seconds...' % (len(
183 self.params), time.time() - now))
/content/tcav.py in _run_single_set(self, param)
226 i_up = self.compute_tcav_score(
227 mymodel, target_class_for_compute_tcav_score, cav_concept,
--> 228 cav_instance, acts[target_class][cav_instance.bottleneck])
229 val_directional_dirs = self.get_directional_dir(
230 mymodel, target_class_for_compute_tcav_score, cav_concept,
/content/tcav.py in compute_tcav_score(mymodel, target_class, concept, cav, class_acts, run_parallel, num_workers)
75 concept,
76 class_id),
---> 77 class_acts)
78 return sum(directions) / float(len(class_acts))
79 else:
/usr/lib/python3.6/multiprocessing/pool.py in map(self, func, iterable, chunksize)
264 in a list that is returned.
265 '''
--> 266 return self._map_async(func, iterable, mapstar, chunksize).get()
267
268 def starmap(self, func, iterable, chunksize=None):
/usr/lib/python3.6/multiprocessing/pool.py in get(self, timeout)
642 return self._value
643 else:
--> 644 raise self._value
645
646 def _set(self, i, obj):
/usr/lib/python3.6/multiprocessing/pool.py in worker(inqueue, outqueue, initializer, initargs, maxtasks, wrap_exception)
117 job, i, func, args, kwds = task
118 try:
--> 119 result = (True, func(*args, **kwds))
120 except Exception as e:
121 if wrap_exception and func is not _helper_reraises_exception:
/usr/lib/python3.6/multiprocessing/pool.py in mapstar(args)
42
43 def mapstar(args):
---> 44 return list(map(*args))
45
46 def starmapstar(args):
/content/tcav.py in <lambda>(act)
74 cav,
75 concept,
---> 76 class_id),
77 class_acts)
78 return sum(directions) / float(len(class_acts))
/content/tcav.py in get_direction_dir_sign(mymodel, act, cav, concept, class_id)
38 """
39 # Grad points in the direction which DECREASES probability of class
---> 40 grad = np.reshape(mymodel.get_gradient(act, [class_id], cav.bottleneck), -1)
41 dot_prod = np.dot(grad, cav.get_direction(concept))
42 return dot_prod < 0
/content/model.py in get_gradient(self, acts, y, bottleneck_name)
31 cutted_model = self.get_cutted_model(bottleneck_name).to(device)
32 cutted_model.eval()
---> 33 outputs = cutted_model(inputs)
34
35 # y=[i]
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
491 result = self._slow_forward(*input, **kwargs)
492 else:
--> 493 result = self.forward(*input, **kwargs)
494 for hook in self._forward_hooks.values():
495 hook_result = hook(self, input, result)
/content/model.py in forward(self, x)
131 y = y.view(y.size(0), -1)
132
--> 133 y = self.layers[i](y)
134 return y
135
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
491 result = self._slow_forward(*input, **kwargs)
492 else:
--> 493 result = self.forward(*input, **kwargs)
494 for hook in self._forward_hooks.values():
495 hook_result = hook(self, input, result)
/usr/local/lib/python3.6/dist-packages/torchvision/models/inception.py in forward(self, x)
195
196 def forward(self, x):
--> 197 branch3x3 = self.branch3x3(x)
198
199 branch3x3dbl = self.branch3x3dbl_1(x)
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
491 result = self._slow_forward(*input, **kwargs)
492 else:
--> 493 result = self.forward(*input, **kwargs)
494 for hook in self._forward_hooks.values():
495 hook_result = hook(self, input, result)
/usr/local/lib/python3.6/dist-packages/torchvision/models/inception.py in forward(self, x)
350
351 def forward(self, x):
--> 352 x = self.conv(x)
353 x = self.bn(x)
354 return F.relu(x, inplace=True)
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
491 result = self._slow_forward(*input, **kwargs)
492 else:
--> 493 result = self.forward(*input, **kwargs)
494 for hook in self._forward_hooks.values():
495 hook_result = hook(self, input, result)
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py in forward(self, input)
336 _pair(0), self.dilation, self.groups)
337 return F.conv2d(input, self.weight, self.bias, self.stride,
--> 338 self.padding, self.dilation, self.groups)
339
340
RuntimeError: weight__impl_saved == weight_.getIntrusivePtr() ASSERT FAILED at /pytorch/torch/csrc/autograd/generated/VariableType_0.cpp:12721, please report a bug to PyTorch.
SEARCH STACK OVERFLOW
```
Is this an issue with PyTorch, or is the code I am using creating the issue?
Thank you very much!
**Updated Info**
PyTorch Version: 1.0.1
I am working in Google Colab w/ Python 3.6.
Because I am working in Google Colab it does not seem like it is possible for me to get a GDB backtrace but this is just based on my Google search.
| triaged,module: assert failure | low | Critical |
459,615,235 | terminal | Feature Request: Option for ColorTool to read color theme from registry like conhost | # Summary of the new feature/enhancement
The new Windows Terminal should have an option to set "HKCU\Console" as the source of its color theme instead of specifying the colors in the JSON.
If this were the default setting, it would make transitioning easier, as a user's previously customized theme would just carry over to WT. Even if it were not the default, the option would still be nice, enabling the user to easily theme both conhost and WT consistently from one place instead of having to maintain two configurations (registry for conhost, JSON for WT).
# Proposed technical implementation details (optional)
"Source Registry settings" (or a similar name) should be added as an option for the theme. If a settings GUI is planned, it would sit right alongside the other color schemes inside the theme-selection combobox. Alternatively, it could be a checkbox above the theme-selection combobox that disables the combobox when checked. Either way, when this option is chosen, WT would source its color configuration from the ColorTableXX and CursorColor keys in HKCU\Console.
| Issue-Feature,Product-Colortool,Help Wanted,Area-Settings | low | Minor |
459,618,513 | TypeScript | parseInt: use more concrete type for `radix` argument | ## Search Terms
parseInt, radix
## Suggestion
From MDN docs:
`radix: An integer between 2 and 36 that represents the radix`
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/parseInt#Parameters
TS definition (lib.es5.d.ts):
```ts
declare function parseInt(s: string, radix?: number): number;
```
It would be safer to use a more concrete type for radix:
```ts
type TRadix =
2|3|4|5|6|7|8|9|10|11|12|13|14|15|16|17|18|19|20|21|22|23|24|25|26|27|28|29|30|31|32|33|34|35|36
function safeParseInt(s: string, radix?: TRadix) {
return parseInt(s, radix)
}
```
## Use Cases
```ts
['1', '7', '11'].map(safeParseInt) // error -> Type 'number' is not assignable to type 'TRadix' ("strictFunctionTypes": true)
```
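The runtime footgun this would catch at compile time is the classic `map(parseInt)` behavior: `map` passes the element index as the second argument, and `parseInt` silently accepts it as a radix.

```typescript
// map calls parseInt(value, index, array); the index becomes the radix.
const result = ['1', '7', '11'].map(parseInt);
// parseInt('1', 0)  -> 1   (radix 0 falls back to base 10)
// parseInt('7', 1)  -> NaN (radix 1 is outside the valid 2..36 range)
// parseInt('11', 2) -> 3   (parsed as binary)
console.log(result); // [1, NaN, 3]
```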
## Examples
```ts
['1', '7', '11'].map((item) => safeParseInt(item, 10)) // OK
const config = {
nested: {
number: '42',
radix: 10,
},
} as const;
safeParseInt(config.nested.number, config.nested.radix) // OK
```
## Checklist
My suggestion meets these guidelines:
* [ ] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback | low | Critical |
459,663,422 | ant-design | [Message] - Option 'closableOnHover' to prevent closing of message affected by mouse | - [x] I have searched the [issues](https://github.com/ant-design/ant-design/issues) of this repository and believe that this is not a duplicate.
### What problem does this feature solve?
The current version of message will not close according to the set 'duration' while the mouse hovers over it. Having an option to enable/disable this behavior would be great.
### What does the proposed API look like?
Add an option to the current message, such as 'closableOnHover'.
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | help wanted,Inactive | low | Minor |
459,712,166 | go | x/crypto/acme: Bad error description if website reject authorization query | 1. The website rejects the authorization request from Let's Encrypt with a 403 error.
2. Let's Encrypt can't authorize the domain and returns an error.
3. The library's error description after WaitAuthorization, as returned by err.Error(), is "acme: authorization error for : " <-- PROBLEM
The ideal error description would look like this:
"acme: authorization error for 'domain': website reject authorization error with 403 status"
```golang
fmt.Printf("%#v", err)
```
```
&acme.AuthorizationError{URI:"https://acme-staging.api.letsencrypt.org/acme/challenge/XXX/YYY", Identifier:"", Errors:[]error(nil)}
```
curl https://acme-staging.api.letsencrypt.org/acme/challenge/XXX/YYY
```
{
"type": "http-01",
"status": "invalid",
"error": {
"type": "urn:acme:error:unauthorized",
"detail": "Invalid response from http://ZZZ/.well-known/acme-challenge/HfS3oxj9D7WXST1VKtnDMxeLxTvgn62oLWwYWw-SplQ [XXX]: \"\u003c!DOCTYPE HTML PUBLIC \\\"-//IETF//DTD HTML 2.0//EN\\\"\u003e\\n\u003chtml\u003e\u003chead\u003e\\n\u003ctitle\u003e403 Forbidden\u003c/title\u003e\\n\u003c/head\u003e\u003cbody\u003e\\n\u003ch1\u003eForbidden\u003c/h1\u003e\\n\u003cp\"",
"status": 403
},
"uri": "https://acme-staging.api.letsencrypt.org/acme/challenge/XXX/YYY",
"token": "ASD-ASD",
"validationRecord": [
{
"url": "http://XXX/.well-known/acme-challenge/AAA",
"hostname": "XXX",
"port": "80",
"addressesResolved": [
"YYY"
],
"addressUsed": "ZZZ"
}
]
}
``` | NeedsInvestigation | low | Critical |
459,784,834 | pytorch | Automatic rank selection when using file:// initialization method | ## 🚀 Feature
Process rank can be automatically determined when using the `file://` initialization method.
## Motivation
This used to be possible prior to version 1.0. There is no specific reason NOT to support this, and it makes for easier usage when running multiple processes on the same machine.
## Pitch
In `torch/distributed/rendezvous.py` we can remove the check for the `rank` argument. If it is not specified, we can determine rank by doing an atomic add on the key `rank` (for example). The first process to get there will be rank 0. The key will increment up to `world_size - 1`.
Be sure to also update the docs at `docs/source/distributed.rst` and the changelog.
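The atomic-add idea can be sketched like this; the in-memory `Store` below is a hypothetical stand-in for the real shared store backing the `file://` rendezvous, not the actual torch.distributed code:

```python
import threading

class Store:
    """Stand-in for the key-value store shared by all rendezvous participants."""
    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}

    def add(self, key, amount):
        # Atomic add, mirroring the semantics of the distributed store.
        with self._lock:
            self._data[key] = self._data.get(key, 0) + amount
            return self._data[key]

def assign_rank(store):
    # The first process to arrive gets rank 0, the next gets 1, and so on,
    # incrementing up to world_size - 1.
    return store.add("rank", 1) - 1

store = Store()
print([assign_rank(store) for _ in range(4)])  # [0, 1, 2, 3]
```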
Motivated by https://discuss.pytorch.org/t/automatic-rank-assignment-in-init-process-group/44554 | oncall: distributed,triaged,enhancement | low | Minor |
459,793,147 | terminal | Feature Request - MP4 backgrounds | # Summary of the new feature/enhancement
Currently, animated GIF backgrounds are supported. I'm not sure what other formats are, but MP4 would be great. I'm using a ~110 MB GIF and it uses ~10% CPU; the equivalent MP4 would be much smaller.
| Issue-Feature,Help Wanted,Area-Settings,Product-Terminal | low | Major |
459,801,821 | go | x/tools/gopls: provide function that gives list of candidate imports matching pattern |
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version devel +44c9354c5a Fri Jun 21 05:21:30 2019 +0000 linux/amd64
$ go list -m golang.org/x/tools
golang.org/x/tools v0.0.0-20190620191750-1fa568393b23
$ go list -m golang.org/x/tools/gopls
golang.org/x/tools/gopls v0.0.0-20190620191750-1fa568393b23
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GO111MODULE="on"
GOARCH="amd64"
GOBIN="/home/myitcv/gostuff/src/github.com/myitcv/govim/cmd/govim/.bin"
GOCACHE="/home/myitcv/.cache/go-build"
GOENV="/home/myitcv/.config/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GONOPROXY=""
GONOSUMDB=""
GOOS="linux"
GOPATH="/home/myitcv/gostuff"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/home/myitcv/gos"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/home/myitcv/gos/pkg/tool/linux_amd64"
GCCGO="gccgo"
AR="ar"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD="/home/myitcv/gostuff/src/github.com/myitcv/govim/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build670036660=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
Much like `gopls` offers completion candidates, it should provide a function that allows editors to tab-complete/fuzzy match/whatever packages to import.
For example, in `govim` we want to provide the command `GOVIMAddImport` such that when we type:
```
:GOVIMAddImport en<Tab>
```
(where `<Tab>` is us looking to complete the import), we should be able to call `gopls` and get a list of import candidates matching `en` (according to some algorithm)
Reference: https://github.com/myitcv/govim/issues/317
---
cc @stamblerre @ianthehat
| NeedsInvestigation,FeatureRequest,gopls,Tools | medium | Critical |
459,826,131 | godot | [Bullet] Adding a high amount of physics bodies is significantly slower with Bullet compared to GodotPhysics | Godot version: Godot 3.1.1
OS/device including version: Windows 10
Issue description:
When I add 100,000 StaticBodies to the scene, each with a CollisionShape, and set `disable` to true on each CollisionShape, I get 1 FPS. But when I switch to GodotPhysics, I get a stable 60 FPS.
Even better, if I use GodotPhysics and set `disable` to false on each CollisionShape, I still get 60 FPS (while with Bullet it is 1 FPS).
Steps to reproduce:
Do what I have written in the issue description above.
Here is a little sample project. Camera is not needed
[Link](http://www.lathuys.cba.pl/QuickSample.zip) | bug,topic:physics,topic:3d,performance | low | Major |
459,903,717 | vue | Can't use the new v-slot syntax inside a template tag that is there only for conditional purposes | ### Version
2.6.10
### Reproduction link
[https://codesandbox.io/s/vue-template-9r28t](https://codesandbox.io/s/vue-template-9r28t)
### Steps to reproduce
Visit the repo and check out the template compilation error for `App.vue`. Remove the `<template v-else>` to get rid of the error.
### What is expected?
The template compiles properly
### What is actually happening?
The template doesn't compile
---
Vue should allow using slots inside a template tag that is inside a component. Without this, the new slot syntax severely limits how one can put content into slots in more complex scenarios, and forces you to repeat yourself across multiple template tags.
<!-- generated by vue-issues. DO NOT REMOVE --> | improvement | low | Critical |
459,924,215 | flutter | Separate callbacks for TextEditingController's text and selection | ## Use case
I need to re-populate a list with search results as the user types.
When the text field gains or loses focus, the TextEditingController's listener is triggered as if the actual content had changed.
## Proposal
Change TextEditingController so that it has two callbacks instead of one.
I assume it should also fix https://github.com/flutter/flutter/issues/34847
| a: text input,c: new feature,framework,P3,team-framework,triaged-framework | low | Minor |
459,969,830 | TypeScript | import ConstJson from './config.json' as const; |
## Suggestion
The ability to get const types from a JSON configuration file.
If the json is:
```json
{
"appLocales": ["FR","BE"]
}
```
I want to import the JSON and get a `{appLocales: "FR" | "BE"}` type instead of `string`
As [suggested here](https://github.com/microsoft/TypeScript/issues/32063#issuecomment-1708847986), this could rely on ES import attributes (stage 4, ES2025, [TypeScript 5.3](https://www.typescriptlang.org/docs/handbook/release-notes/typescript-5-3.html)):
```ts
import data from "./data.json" with { type: "json", const: true };
```
## Use Cases
The current approach gives a type that is too broad (`string`). I understand that this makes sense as a default, but having the possibility to import a narrower type would be helpful: it would let me avoid maintaining both a runtime locale list and a union type containing the values already in that list, keeping my type and my runtime values in sync.
It is possible to [type a JSON file](https://www.totaltypescript.com/override-the-type-of-a-json-file) explicitly using [`allowArbitraryExtensions: true`](https://www.typescriptlang.org/tsconfig/#allowArbitraryExtensions) and a `.d.json.ts` file, but this doesn't allow inferring the type from the JSON directly; it requires explicit typing and maintenance.
## Checklist
My suggestion meets these guidelines:
* [X] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [X] This wouldn't change the runtime behavior of existing JavaScript code
* [X] This could be implemented without emitting different JS based on the types of the expressions
* [X] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [X] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
## Links:
This feature has been mentioned:
- by @qm3ster and @maartenth https://github.com/microsoft/TypeScript/pull/29510#issuecomment-456833107
- by @m-b-davis https://github.com/Microsoft/TypeScript/issues/26552#issuecomment-496925984
## Search Terms
json const assertion import attributes with allowArbitraryExtensions | Suggestion,Awaiting More Feedback | high | Critical |
459,979,066 | pytorch | [dataloader] SIGCHLD handler should poll the queue for exception first | Otherwise it may cause weird exception prints like
```
#Traceback (most recent call last):
# File "<string>", line 1, in <module>
# File "/miniconda/lib/python3.7/multiprocessing/spawn.py", line 105, in spawn_main
# exitcode = _main(fd)
# File "/miniconda/lib/python3.7/multiprocessing/spawn.py", line 115, in _main
# self = reduction.pickle.load(from_parent)
#AttributeError: Can't get attribute 'Dataset' on <module '__mp_main__' from #'/deepspeech.pytorch/convasr/bug_.py'>
#Traceback (most recent call last):
# File "/miniconda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 512, in _try_get_batch
# data = self.data_queue.get(timeout=timeout)
# File "/miniconda/lib/python3.7/multiprocessing/queues.py", line 104, in get
# if not self._poll(timeout):
# File "/miniconda/lib/python3.7/multiprocessing/connection.py", line 257, in poll
# return self._poll(timeout)
# File "/miniconda/lib/python3.7/multiprocessing/connection.py", line 414, in _poll
# r = wait([self], timeout)
# File "/miniconda/lib/python3.7/multiprocessing/connection.py", line 920, in wait
# ready = selector.select(timeout)
# File "/miniconda/lib/python3.7/selectors.py", line 415, in select
# fd_event_list = self._selector.poll(timeout)
# File "/miniconda/lib/python3.7/site-packages/torch/utils/data/_utils/signal_handling.py", line 63, in handler
# _error_if_any_worker_fails()
#RuntimeError: DataLoader worker (pid 6082) exited unexpectedly with exit code 1. Details are #lost due to multiprocessing. Rerunning with num_workers=0 may give better error trace.
#During handling of the above exception, another exception occurred:
#Traceback (most recent call last):
# File "bug_.py", line 15, in <module>
# next(iter(loader))
# File "/miniconda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 577, in #__next__
# idx, batch = self._get_batch()
# File "/miniconda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 554, in #_get_batch
# success, data = self._try_get_batch()
# File "/miniconda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 520, in #_try_get_batch
# raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str))
# RuntimeError: DataLoader worker (pid(s) 6082) exited unexpectedly
```
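The proposed ordering can be sketched independently of torch internals (names here are hypothetical, not the actual DataLoader API): before raising the generic worker-failure error, the handler first polls the result queue so that any exception a worker managed to report is surfaced instead:

```python
import queue

def handle_worker_failure(data_queue, raise_generic_error):
    """Sketch of the proposed ordering: poll the result queue for an
    exception the worker reported before falling back to the generic
    "DataLoader worker exited unexpectedly" error."""
    try:
        item = data_queue.get_nowait()
    except queue.Empty:
        item = None
    if isinstance(item, Exception):
        raise item  # surface the worker's real traceback
    raise_generic_error()  # nothing was reported; fall back to generic error
```

With this ordering, the confusing "Details are lost due to multiprocessing" message would only appear when the worker died without reporting anything.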
cc @vadimkantorov | module: dataloader,triaged | low | Critical |
459,999,127 | TypeScript | please document project references and outdir | Really really unclear documentation regarding the behavior of "project references" and outDir.
I can't believe how unclear this is -- please please tell me how I'm supposed to do this ...
```
> [email protected] build /Users/ncohen/software/itsacheckmate.com/Web
> tsc --build --verbose
[11:01:04 AM] Projects in this build:
* lib/pdf-api/tsconfig.json
* tsconfig.json
[11:01:04 AM] Project 'lib/pdf-api/tsconfig.json' is out of date because output file 'lib/pdf-api/src/DoordashFailureSamples.js' does not exist
[11:01:04 AM] Building project '/Users/ncohen/software/itsacheckmate.com/Web/lib/pdf-api/tsconfig.json'...
[11:01:07 AM] Project 'tsconfig.json' is out of date because oldest output 'lib/pdf-api/src/DoordashPdfSemantics.js' is older than newest input 'lib/pdf-api'
[11:01:07 AM] Building project '/Users/ncohen/software/itsacheckmate.com/Web/tsconfig.json'...
[11:01:09 AM] Updating unchanged output timestamps of project '/Users/ncohen/software/itsacheckmate.com/Web/tsconfig.json'...
```
```
ncohen@breathe-book ~/s/i/Web> cat tsconfig.json
{
"references": [{ "path": "./lib/pdf-api" }]
}
```
```
ncohen@breathe-book ~/s/i/Web> cat ./lib/pdf-api/tsconfig.json
{
"extends": "../../tsconfig_base.json",
"include": ["./src/**/*.ts"],
"exclude": ["node_modules"]
}
```
```
ncohen@breathe-book ~/s/i/Web> cat ./tsconfig_base.json
{
"compilerOptions": {
"composite": true,
"outDir": "compiled-js",
"target": "es2016",
"lib": ["es2016"],
"module": "commonjs",
"moduleResolution": "node",
"sourceMap": true,
"experimentalDecorators": false,
"pretty": true,
"strict": true,
"strictFunctionTypes": true,
"noFallthroughCasesInSwitch": true,
"noImplicitAny": true,
"noImplicitReturns": true,
"forceConsistentCasingInFileNames": true,
"strictNullChecks": true,
"declaration": true,
"declarationMap": true
}
}
```
Why are compiled files being written to the ./lib/pdf-api/src/ directory? How can I make that NOT happen? Or if I have to have something written to my src directory (for some reason...?), can I have only a single file written there which I can reliably gitignore?
<img width="606" alt="image" src="https://user-images.githubusercontent.com/434034/60038106-97df2d00-9670-11e9-9cd7-aa412bf60d48.png">
```
ncohen@breathe-book ~/s/i/Web> tsc --version
Version 3.5.2
``` | Docs | low | Critical |
460,011,777 | TypeScript | Some types are incorrectly assignable from a generic distributive conditional type | **TypeScript Version:** 3.5.1
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:** generic conditional distributive assignable from
**Code**
```ts
type Keys<T extends object> = T extends unknown ? keyof T : never;
function f<T extends object>(keys: keyof T, moreKeys: Keys<T>) {
keys = moreKeys;
}
// With this instantiation, `keys` is typed 'a',
// and `moreKeys` is typed 'a' | 'b' | 'c'. The latter
// should not be assignable to the former, but it happens
// in the generic context of the function `f`.
f<{ a: any, b: any } | { a: any, c: any }>('a', 'b');
```
**Expected behavior:**
`Keys<T>` should not be assignable to `keyof T` since `Keys<T>` is distributive and will produce a different type than `keyof T` when `T` is a union type.
Note that the reverse of this assignment, `moreKeys = keys`, errors for this exact reason; without a concrete `T`, it’s hard to tell what the relationship should be, so we simply have no relationship in place.
**Actual behavior:**
The unsafe assignment is allowed.
[**Playground Link**](https://www.typescriptlang.org/play/#src=type%20Keys%3CT%20extends%20object%3E%20%3D%20T%20extends%20unknown%20%3F%20keyof%20T%20%3A%20never%3B%0D%0A%0D%0Afunction%20f%3CT%20extends%20object%3E(keys%3A%20keyof%20T%2C%20moreKeys%3A%20Keys%3CT%3E)%20%7B%0D%0A%20%20%20%20keys%20%3D%20moreKeys%3B%0D%0A%7D%0D%0A%0D%0A%2F%2F%20With%20this%20instantiation%2C%20%60keys%60%20is%20typed%20'a'%2C%0D%0A%2F%2F%20and%20%60moreKeys%60%20is%20typed%20'a'%20%7C%20'b'%20%7C%20'c'.%20The%20latter%0D%0A%2F%2F%20should%20not%20be%20assignable%20to%20the%20former%2C%20but%20it%20happens%0D%0A%2F%2F%20in%20the%20generic%20context%20of%20the%20function%20%60f%60.%0D%0Af%3C%7B%20a%3A%20any%2C%20b%3A%20any%20%7D%20%7C%20%7B%20a%3A%20any%2C%20c%3A%20any%20%7D%3E('a'%2C%20'b')%3B%0D%0A)
| Bug | low | Critical |
460,028,204 | neovim | Decouple cursor from UI, allow more instances. | <!-- Before reporting: search existing issues and check the FAQ. -->
This is a feature request.
- `nvim --version`: all, up to current (v0.4.0-1078)
- Vim (version: ) behaves differently? No. Same behavior in Vim
- Operating system/version: all
- Terminal name/version: n/a
- `$TERM`: n/a
### Steps to reproduce using `nvim -u NORC`
```
nvim -u NORC
# Alternative for shell-related problems:
n/a
# env -i TERM=ansi-256color "$(which nvim)"
n/a
```
### Actual behaviour
There is only one cursor, tied inextricably with the user interface (*input* as well as output). Trying to perform Vim commands that require moving the cursor around is tricky if side-effects are undesirable. This limits the abilities of functions and plugins that are meant to work in the background or otherwise refrain from bothering the user.
### Expected behaviour
A cursor class with different kinds of instances. Some new instance properties might be visibility, or which other cursor to echo, or a function to call upon to transform the output of a particular cursor (taking all data since a given event would be more powerful than per-keystroke). Only one of them can be the “real” one (make it a static property of the class), and it must be visible and undeletable. Visible secondary cursors that echo the primary cursor would be equivalent to the multi-cursor functionality found in some other editors, especially with the addition of a new search flag or a new type of search--one that allows limiting results to a visual selection or other ranges would probably be desirable. A command allowing users to input a position for a new cursor would allow users to implement such capability themselves, but built-in UI support would make it available to users who prefer a vanilla neovim experience.
Visible secondary cursors that output based on arbitrary code (perhaps tied to what occurs with the primary cursor) would have uses, as well. One example is extremely powerful snippets that can change other fields’ contents automatically in any way wanted (e.g. based on the type of data typed into a particular field).
Grouping cursor objects together in hierarchies with effective range data (in user code) would allow for such things as nested snippets that stay “snippety” when the user leaves the range of the snippet-generated text. If cursor data were preservable in yanked text (but how would this be implemented?), snippets could even maintain this ability after shuffling text around. Multiple puts would then probably have to automatically make new cursors, rather than just keeping the cursors around even when they’re only pointing to positions in registers. Yanks to the clipboard probably wouldn’t be a good place to preserve this data, unless the OS clipboard can be leveraged to paste data that doesn’t need to preserve all meta data (we see such things when pasting formatted text into a plain text file, for example).
Many of the plugins and user code that emulate this kind of behavior could be dramatically simplified. For one thing, making all buffer changes in one insert be part of the same node in the undo history would spare users and plugin writers from having to implement this behavior themselves. Direct support in the nvim binary would likely speed up these applications as well.
It’s possible that such a feature as this would be unnecessary, since direct editing at arbitrary positions is already possible. However, decoupled cursors would still have some advantages. For one, editing text that shifts around future desired input points requires the script to be able to re-assess where the desired point has moved to. Cursors would track their position with changes to the buffer. Further, decoupling code in a codebase is good in general, especially when talking about links between internal behavior and the UI--though admittedly, I haven’t actually taken a look at the cursor code myself. Maybe it’s already decoupled, but as far as I know, there’s still only one instance, without the same sets of properties that I’ve suggested. | multicursor | low | Major |
460,041,838 | TypeScript | JavaScript rename not working for JSDoc param name for function assignment | Issue Type: <b>Bug</b>
1. Try to rename **`item`** (at any place)
1. **`item`** in code successfully renamed while JSDoc name stayed the same
```js
/**
* @param {*} item
*/
let f = function (item) {}
```
VS Code version: Code - Insiders 1.36.0-insider (9ff8ae037e8e6109d65e4b5e3eb3dc60cc187e21, 2019-06-21T07:52:00.716Z)
OS version: Windows_NT x64 10.0.18362 | Bug | low | Critical |
460,050,914 | pytorch | [jit] Optional type refinement on non-named expressions | Right now we only do type refinement on `TK_VAR`s in the compiler; we should expand this to be more general. The examples below don't currently work:
Refinement on an attribute:
```python
from typing import Optional

import torch
from torch import jit

class M(torch.jit.ScriptModule):
    def __init__(self):
        super(M, self).__init__()
        self.id = jit.Attribute(2, Optional[int])

    @jit.script_method
    def forward(self):
        a = 2
        if self.id is not None:
            a += self.id
        return a

m = M()
```
Refinement on a constant:
```python
import torch

class M(torch.jit.ScriptModule):
    __constants__ = ['x']

    def __init__(self, x):
        super(M, self).__init__()
        self.x = x

    @torch.jit.script_method
    def forward(self, a):
        attr = self.x
        if attr is not None:
            return a + attr
        else:
            return a

M(None)
```
cc @gmagogsfm | oncall: jit,triaged,TSRootCause:TypeRefinement,TSUsability | low | Minor |
460,053,535 | material-ui | Add ScrollSpy component | <!-- Checked checkbox should look like this: [x] -->
- [x] This is not a v0.x issue. <!-- (v0.x is no longer maintained) -->
- [x] I have searched the [issues](https://github.com/mui-org/material-ui/issues) of this repository and believe that this is not a duplicate.
## Expected Behavior 🤔
Like [zurb's Magellan](https://foundation.zurb.com/sites/docs/magellan.html): highlight a navbar item based on scroll position and/or on whether an element is in the viewport.
## Examples 🌈
See link above
## Context 🔦
I would like to create this component for MUI! I'm happy to submit a PR with some guidance :)
## Benchmark
- https://getbootstrap.com/docs/4.3/components/scrollspy/
- https://superprops-gatsby-tarex.redq.now.sh/saas/#1
- https://github.com/reactstrap/reactstrap/issues/82
- https://get.foundation/sites/docs/magellan.html | new feature,waiting for 👍 | medium | Critical |
460,058,249 | terminal | Feature Request: Mouse/Touch/Pointer Bindings (like middle-click paste, right-click context menu, etc.) | # Summary of the new feature/enhancement
Expand settings to be able to define remappable mouse bindings. Arguably, different touch events should fall under this too. So let's just call this "pointer bindings" for now.
# Proposed technical implementation details (optional)
Mouse bindings are a bit trickier than keybindings in that the mouse has a location where the event occurs. For example, right-clicking a tab should have a different effect on the terminal than right-clicking the terminal.
As a _super_ early idea, consider this format:
```js
"pointerbindings": [
{
"device": "mouse",
"event": "rightClick",
"where": "tab",
"command": "flyoutMenu"
},
{
"device": "mouse",
"event": "doubleClick",
"where": "terminal",
"command": "wordSelection"
},
{
"device": "touch",
"event": "swipe",
"where": "terminal",
"command": "scroll"
}
]
```
We definitely need a spec for this because it'll be a bit hefty. We'll also need to update a decent amount of documentation (particularly settings schema) to be able to describe what combinations are acceptable (again, the JSON mentioned up here is just me rambling about a potential implementation.)
There may be overlap between some commands in keybindings. Be sure to think that through.
###### Mike notes:
_we should go back and collect up all the threads we've said "this would be a good mouse bindings feature" below_
----
###### spec draft
# Mouse bindings
## Abstract
We've had numerous requests to configure how the mouse behaves in the Terminal.
The original behavior was a simple duplication of how conhost behaved: a right
click will copy the selection if there is one, or paste the clipboard if there
isn't. Over time, we've accumulated a number of scenarios that we believe can
all be addressed by allowing more fine-grained mouse binding support. However,
upon further review, ultimately, we don't really need _deep_ mouse binding
support.
### Scenarios
The following is a list of all the feature requests we've linked to mouse
bindings in the past, grouped into categories of related requests:
### Change how mouse multi-click selects
* [ ] [#7173] Multiple sets of word delimiters for double-click selection
* [ ] [#9881] Limit triple-click selection to command field only
* [ ] [#6511] Multi-click selection granularity
* [ ] [#3196] Feature Request: Smart Double-click Selection (regeces?)
### Change the action that L/M/R-mouse performs
* [ ] [#7646] xterm-style selection, paste on middle click, copy on mouse release
* [ ] [#10802] - `VK_XBUTTON1/2`, etc.
* [ ] [#6250] - separate "Paste Clipboard" vs "Paste Selection" actions
* [x] [#3337] - Add a right-click context menu
### Other
These are smaller, independent features that could all have an individual setting (if needed)
* [ ] [#11710] Request: Setting to disable zooming with ctrl+mousewheel
* [ ] [#13598] Add option to configure URL clicking behavior
* [ ] [#11906] Option to disable "Pinch to Zoom"
* [ ] [#6182] Fast scroll by holding down Alt while scrolling with mouse wheel
* [ ] Block selection by default (without `alt`) (see mail thread "RE: How to disable line wrap selection in Terminal")
* [ ] [#17610] Configure block select to use ctrl, not alt
## Solution design
Following the above scenarios, solutions are proposed below:
### Change how mouse multi-click selects
Across the requests here, we've got the following requests:
> * double-click: selects a "word" between 2 same delimiters
> * triple-click: selects an entire string separated by spaces
> * 4-click: entire line
> Currently, Ctrl+A will select the entire command/text input, however, triple
> clicking (mouse version of Select All) selects the entire line (including the
> prompt). GIF shows selecting with mouse vs with keyboard:
> ...
> I would like the triple click action to align to the Ctrl+A selection method.
> Could we maybe add shift double click to select using alternate word
> delimiters?
> I was really thinking more of regex though, because it can be a good starting
> point for implementing more advanced features like type-specific smart
> highlighting and hyperlinking of terminal text, not just smart selection.
To boil this down, users want to be able to configure the behavior of double,
triple, and quadruple clicks. The most common request is to change the
delimiters for double-click selection. But users also want to be able to
configure the delimiters to _change_ on
<kbd>Shift</kbd>/<kbd>Alt</kbd>/<kbd>Ctrl</kbd> clicks.
```json
"mouse": {
"clicks": {
{ "click": "double", "command": { "action": "expandSelection", "delimeters": " /\\()\"'-.,:;<>~!@#$%^&*|+=[]{}~?\u2502" } }
{ "click": "shift+double", "command": { "action": "expandSelection", "delimeters": " " } }
{ "click": "triple", "command": { "action": "expandSelection", "regex": "^.*$" } }
}
}
```
Alternatively,
```json
"mouse": {
"doubleClick": " /\\()\"'-.,:;<>~!@#$%^&*|+=[]{}~?\u2502",
"tripleClick": { "regex": "^.*$" }
}
```
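However it ends up being spelled in settings, delimiter-driven double-click expansion reduces to scanning outward from the clicked cell until a delimiter is hit — a rough sketch (in Python for brevity; the function name and delimiter set are illustrative, not the Terminal's actual code):

```python
def expand_selection(line, pos, delimiters=" /\\()\"'-.,:;<>!@#$%^&*|+=[]{}~?"):
    # Grow the half-open range [start, end) outward from the clicked
    # position until either edge of the line or a delimiter is reached.
    start = pos
    while start > 0 and line[start - 1] not in delimiters:
        start -= 1
    end = pos
    while end < len(line) and line[end] not in delimiters:
        end += 1
    return line[start:end]

print(expand_selection("cd /home/user/projects", 6))
# -> 'home'
```

Swapping the delimiter set on <kbd>Shift</kbd>/<kbd>Alt</kbd>/<kbd>Ctrl</kbd> clicks would then just mean calling the same routine with a different `delimiters` argument.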
[#3337]: https://github.com/microsoft/terminal/issues/3337
[#6182]: https://github.com/microsoft/terminal/issues/6182
[#6250]: https://github.com/microsoft/terminal/issues/6250
[#6511]: https://github.com/microsoft/terminal/issues/6511
[#7173]: https://github.com/microsoft/terminal/issues/7173
[#7646]: https://github.com/microsoft/terminal/issues/7646
[#9881]: https://github.com/microsoft/terminal/issues/9881
[#10802]: https://github.com/microsoft/terminal/issues/10802
[#11710]: https://github.com/microsoft/terminal/issues/11710
[#11906]: https://github.com/microsoft/terminal/issues/11906
[#13598]: https://github.com/microsoft/terminal/issues/13598
[#17610]: https://github.com/microsoft/terminal/issues/17610
| Help Wanted,Area-Input,Area-TerminalControl,Area-Settings,Product-Terminal,Issue-Scenario | high | Critical |
460,064,725 | angular | [Elements] Slots working for an angular Element, but not Component |
# 🐞 bug report
### Affected Package
not sure
### Is this a regression?
No, I think this has been there for a while (see https://github.com/angular/angular/issues/24859)
### Description
With Elements supporting Shadow DOM v1, Angular elements appropriately slot content passed to them. However, when trying to use the same Angular **component** in an Angular app, the slot won't work.
## 🔬 Minimal Reproduction
https://stackblitz.com/edit/angular-slot-sd1
## 🌍 Your Environment
**Angular Version:**
<pre><code>
7.2.15
</code></pre>
**Anything else relevant?**
| type: bug/fix,area: core,area: elements,state: confirmed,core: content projection,P3 | low | Critical |
460,100,869 | godot | Using class_name in a newly created script does not register it in the node creation dialog, if under addons/ (unless part of an enabled editor plugin) | Godot 3.1.1
Windows 10 64 bits
I suspect there is a scenario where it doesn't get registered at all. I was writing a new prototype this evening and the script still hasn't been registered.
Here is what I remember doing:
1) Create a new project (no new scene)
2) Create a subfolder `addons/zylann.scatter/`
3) Right-click in the FileSystem dock and choose "New Script" in that folder
4) Write the following (I wrote it by hand, so I might have made a few errors along the way, but the end result was the same):
```gdscript
tool
class_name Scatter3D
extends Spatial

var _scenes = []

func _ready():
	# TODO Temporary test
	_scenes.append(load("res://props/placeholder_tree.tscn"))
```
5) Save the script (there is still no scene in the project; I don't use the default Ctrl+S, instead I save the script only)
6) Create a new scene, open the node creation dialog, and search for `Scatter`: it's nowhere to be found, despite the class name being recognized by the script editor and saved in `project.godot`.
Even after restarting the editor, the node can't be found. No particular messages in the console. Did I miss something obvious?

Here is the project; you should be able to see what happens directly, since restarting doesn't fix it:
[Scatter.zip](https://github.com/godotengine/godot/files/3322688/Scatter.zip)
Edit:
It would seem any script I create under `addons/zylann.scatter/` never gets registered, but scripts do get registered if they are under the project root.
According to TheDuriel it's intended. Registration works in other systems, just not in the node dialog. I guess it's because plugins can be enabled/disabled, so it would make sense for them to no longer clutter the dialogs. However, it's not documented... | bug,topic:core,topic:gdscript,confirmed,topic:plugin | medium | Critical |
460,137,604 | node | Report not generated if `--abort-on-uncaught-exception` is also given | * **Version**: `master`
* **Platform**: `Ubuntu 18.04`
* **Subsystem**: `report`
Not sure if that's supposed to work (I would expect so, it works with [node-report](https://github.com/nodejs/node-report) on Node.js v10.x), but when I use `--abort-on-uncaught-exception` with `--experimental-report --report-uncaught-exception` or `--experimental-report --report-on-fatalerror` I only get a core dump (I don't get a report file before crashing):
With `--abort-on-uncaught-exception`:
```
$ ./node --experimental-report --abort-on-uncaught-exception --report-uncaught-exception -e "function lala() { throw new Error() } lala()"
Uncaught Error
FROM
lala ([eval]:1:19)
[eval]:1:39
Script.runInThisContext (vm.js:123:20)
Object.runInThisContext (vm.js:313:38)
Object.<anonymous> ([eval]-wrapper:9:26)
Module._compile (internal/modules/cjs/loader.js:779:30)
evalScript (internal/process/execution.js:80:25)
internal/main/eval_string.js:23:3
[1] 9342 illegal hardware instruction (core dumped) ./node --experimental-report --abort-on-uncaught-exception -e
$ ls
core
```
Without `--abort-on-uncaught-exception`:
```
$ ./node --experimental-report --report-uncaught-exception -e "function lala() { throw new Error() } lala()"
Writing Node.js report to file: report.20190624.161756.9421.0.001.json
Node.js report completed
[eval]:1
function lala() { throw new Error() } lala()
^
Error
at lala ([eval]:1:25)
at [eval]:1:39
at Script.runInThisContext (vm.js:123:20)
at Object.runInThisContext (vm.js:313:38)
at Object.<anonymous> ([eval]-wrapper:9:26)
at Module._compile (internal/modules/cjs/loader.js:779:30)
at evalScript (internal/process/execution.js:80:25)
at internal/main/eval_string.js:23:3
$ ls
core report.20190624.161756.9421.0.001.json
``` | report | low | Critical |
460,143,948 | pytorch | [RFC] NestedTensor - 0.0.1 | ## Motivation
Many fields manipulate collections of Tensors of different shapes. For example, paragraphs of text, images of different sizes or audio files of different lengths. We don't have a first class abstraction that enables the concurrent manipulation of collections of this type of data. We further often need to batch arbitrary data and operations for efficiency.
## Definition
A NestedTensor, just like a torch.Tensor, has a dtype, layout, device and dimension.
In general, there are two cases for which NestedTensors provide computational representations.
**Lists of Tensors**
Each constituent Tensor of the list it represents, if any, must match the NestedTensor's dtype, layout and device. The dimension of a constituent Tensor must be one less than the dimension of the NestedTensor. An empty list of Tensors yields a NestedTensor of dimension one.
**Lists of NestedTensors**
Each constituent NestedTensor must match the outer NestedTensor's dtype, layout and device. The dimension of a constituent NestedTensor must be one less than the dimension of the outer NestedTensor.
You cannot have a list of Tensors and NestedTensors, that is, you cannot mix them.
## Goals and current code
**Convenience:** Remove tedious manual workarounds
Sometimes research can be unblocked by or inspired by convenient abstractions. New code may be built on top of NestedTensors if it's general enough and easy enough to use (discoverable). We want to make sure that we solve existing problems well, but also want it to be general enough to inspire new work.
```
# This does a lot of work to use our awkward packed sequence abstraction
def forward(self, sent_tuple):
    # sent_len: [len_1, ..., len_bsize]
    # sent: (seqlen x bsize x worddim)
    sent, sent_len = sent_tuple
    # Sort by length (keep idx)
    sent_len_sorted, idx_sort = np.sort(sent_len)[::-1], np.argsort(-sent_len)
    sent_len_sorted = sent_len_sorted.copy()
    idx_unsort = np.argsort(idx_sort)
    idx_sort = torch.from_numpy(idx_sort).cuda() if self.is_cuda() \
        else torch.from_numpy(idx_sort)
    sent = sent.index_select(1, idx_sort)
    # Handling padding in Recurrent Networks
    sent_packed = nn.utils.rnn.pack_padded_sequence(sent, sent_len_sorted)
    sent_output = self.enc_lstm(sent_packed)[0]  # seqlen x batch x 2*nhid
    sent_output = nn.utils.rnn.pad_packed_sequence(sent_output)[0]
    # Un-sort by length
    idx_unsort = torch.from_numpy(idx_unsort).cuda() if self.is_cuda() \
        else torch.from_numpy(idx_unsort)
    sent_output = sent_output.index_select(1, idx_unsort)
```
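The sort/unsort bookkeeping in the snippet above (the pair of argsorts) is exactly the kind of detail NestedTensor would absorb; isolated in plain Python (a standalone sketch, not library code) it is:

```python
def sort_unsort_indices(lengths):
    # Indices that sort descending by length, plus the permutation that
    # restores the original order afterwards (i.e. np.argsort(idx_sort)).
    idx_sort = sorted(range(len(lengths)), key=lambda i: -lengths[i])
    idx_unsort = sorted(range(len(lengths)), key=lambda i: idx_sort[i])
    return idx_sort, idx_unsort

idx_sort, idx_unsort = sort_unsort_indices([3, 5, 2])
batch = ["a", "b", "c"]
sorted_batch = [batch[i] for i in idx_sort]       # ['b', 'a', 'c']
restored = [sorted_batch[i] for i in idx_unsort]  # ['a', 'b', 'c']
```

Every caller of packed sequences has to re-derive this dance; a first-class abstraction would do it once, internally.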
**Correctness:** Abstract the subtle details
Many users run into subtle issues when padding and masking manually. We can hide these details and provide an implementation that deals with all these subtleties.
Example 1
```
# Currently users have to worry about setting their masked values to
# the correct number to not influence reductions such as max.
def neginf(dtype):
    """Returns a representable finite number near -inf for a dtype."""
    if dtype is torch.float16:
        return -NEAR_INF_FP16
    else:
        return -NEAR_INF
[...]
outputs_of_interest = embedding_layer[:, 1:, :]
# The user carries around a mask to indicate regions that aren't relevant
# This mask needs to be updated for each layer.
mask = (~attention_mask[:, 1:]).type(dtype).unsqueeze(2) * neginf(dtype)
sumed_embeddings = torch.sum(outputs_of_interest * mask, dim=1)
nb_elems = torch.sum(attention_mask[:, 1:].float(), dim=1).unsqueeze(1)
embeddings = sumed_embeddings / nb_elems
# This is a very complicated way of getting torch.mean(outputs_of_interest, dim=1)
# for variably sized entries.
[...]
```
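The underlying pitfall this code is working around — padding values leaking into reductions — is easy to demonstrate with plain Python lists:

```python
rows = [[8, 1, 3, 4], [5, 0, 9]]  # a variably sized "batch"
width = max(len(r) for r in rows)

# Zero-pad to a rectangle, then take the mean over the padded width:
padded = [r + [0] * (width - len(r)) for r in rows]
naive = [sum(r) / width for r in padded]   # [4.0, 3.5]  <- second row is wrong

# What users actually want divides by each row's true length:
correct = [sum(r) / len(r) for r in rows]  # [4.0, 4.666...]
```

A NestedTensor-aware `mean(dim=1)` could compute the second result directly, with no mask or divisor to get wrong.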
Example 2
```
tensor *= mask.unsqueeze(-1).float()
for i in range(self.n_layers):
    # The user manually carries around a mask to indicate regions
    # that aren't relevant. Each layer has custom mask support
    # which the user had to implement
    tensor = self.layers[i](tensor, mask)
# It's very easy to write -1e20 instead of 1e-20, which makes a big difference
divisor = mask.float().sum(dim=1).unsqueeze(-1).clamp(min=1e-20)
output = tensor.sum(dim=1) / divisor
```
**Performance:** Take advantage of sparsity and ease batching
There are obvious gains in writing specialized kernels. RNNs already take advantage of that via packed sequences, which was introduce by a vendor library. We can take advantage of this more broadly via a centralized abstraction. It also allows us to replace for loops or other obviously inefficient constructs.
```
def forward(self, images, targets=None):  # GeneralizedRCNNTransform.forward
    # If the images are small this likely causes resource underutilization.
    for i in range(len(images)):
        image = images[i]
        target = targets[i] if [...]
        [...]
        image = self.normalize(image)
        image, target = self.resize(image, target)
        images[i] = image
        if targets is not None:
            targets[i] = target
    image_sizes = [img.shape[-2:] for img in images]
    images = self.batch_images(images)
    image_list = ImageList(images, image_sizes)
    return image_list, targets
```
## Scope of this document
For this first version we're treating NestedTensors effectively as static containers (akin to Python tuples) of their constituent Tensors. That means any operations for explicit shape manipulation, such as broadcasting or select, are not part of this discussion and will be brought up as part of 0.0.2.
## Python repr() string representation
A NestedTensor of two one-dimensional Tensors of length three and four.
```
nestedtensor([
    tensor([8, 1, 3, 4]),
    tensor([5, 0, 9])
])
```
A NestedTensor of NestedTensors each containing Tensors of different lengths.
```
nestedtensor([
nestedtensor([
tensor([8, 1, 3, 4]),
tensor([5, 0, 9])
]),
nestedtensor([
tensor([2, 4]),
tensor([5, 0, 9, 10, 2])
])
])
```
## Construction/Conversion
For this discussion we use the Python frontend for notation.
### Construction
```
$ a = [torch.rand(2, 3), torch.rand(4, 5)]
$ b = torch.nestedtensor(a)
```
The constructor also accepts tuples.
```
$ a = (torch.nestedtensor([torch.rand(2, 3)]),
torch.nestedtensor([torch.rand(4, 5)]))
$ b = torch.nestedtensor(a)
```
The level of nesting is inferred from the input. The constructor always copies. Whatever you pass into the constructor will share no data with what the constructor returns. This matches torch.tensor's behavior.
If given a NestedTensor or Tensor it will return a detached copy, which is consistent with the behavior of torch.tensor. Remember that you cannot mix Tensors and NestedTensors within a given list.
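As an illustration of the copy semantics, here is a hypothetical pure-Python stand-in (tuples model Tensors, lists model nesting; `NestedSketch` is an invented name for illustration, not a proposed API):

```python
import copy

# Hypothetical stand-in for the constructor semantics described above:
# tuples of numbers model Tensors, lists model levels of nesting.
class NestedSketch:
    def __init__(self, data):
        # The constructor always copies; the result shares no data
        # with its input, mirroring torch.tensor's behavior.
        self._data = copy.deepcopy(list(data))

    def unbind(self):
        # Tuple of constituents: nested lists become NestedSketch
        # instances, leaf tuples are returned as-is.
        return tuple(NestedSketch(c) if isinstance(c, list) else c
                     for c in self._data)

a = [(1, 2, 3), (4, 5)]
b = NestedSketch(a)
a[0] = (9, 9)                       # mutating the input...
assert b.unbind()[0] == (1, 2, 3)   # ...does not affect the copy
```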
### Conversion/unbind()
Converting from NestedTensors to nested tuples
```
$ a = [ \
[torch.rand(2, 3), torch.rand(4, 5)], \
[torch.rand(6, 7)] \
]
$ b = torch.nestedtensor(a)
$ b1 = b.unbind() # Tuple of 2 NestedTensors
$ b2 = b1[0].unbind() # Tuple of 2 Tensors
```
A user can retrieve the constituent Tensors via unbind. Unbind is currently used by torch to turn Tensors into tuples of Tensors. Unbind always returns a tuple of views. We do not (yet) support a dimension argument to unbind for NestedTensors, because it forces us to argue about shape.
## Operator generalization
Operator generalization is the process of expanding input constraints and semantics to NestedTensors. Note that this also includes features such as aliasing (creating views). Every Tensor operator, if meaningfully generalizable, may be implemented or must be flagged as not implemented and raise an exception.
We directly add support for NestedTensors to operations in torch and there will be no specialized package.
As a general policy, we will be conservative in the operations we support and introduce them slowly for this more general case. In some cases the advanced semantics are obvious, in others they will be heavily driven by potential applications and current issues.
### Generalization strategy
We can find that many operations of torch.Tensor can be expressed as a combination of others. We can minimize the amount of work we need to do in order to get good operator coverage by identifying core operations and generalizing them. This will then also act as a forcing function to yield sane and consistent semantics, because the combination of these core operations will also imply combined semantics and input constraints.
## Core operations
### nested_size()
As motivation, let us consider an example of two paragraphs of word ids.
```
$ words = \
[
[
tensor([0, 2, 4, 6]),
tensor([1, 3, 5])
],
[
tensor([7, 9]),
tensor([11]),
tensor([13, 15, 17, 19]),
]
]
$ words = torch.nestedtensor(words)
$ words.nested_size()
(
(
torch.Size([4]),
torch.Size([3])
),
(
torch.Size([2]),
torch.Size([1]),
torch.Size([4])
)
)
```
nested_size() returns a container of torch.Sizes, just like a NestedTensor is a container of torch.Tensors. We could also introduce a torch.NestedSize; the benefit is debatable, but it would increase consistency.
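The recursion behind nested_size() can be sketched over the same pure-Python stand-in (tuples model 1-d Tensors, lists model nesting; `nested_size` here is an illustration, not the real implementation):

```python
# Sketch of nested_size(): leaves report their own torch.Size-like
# shape, nesting levels report a tuple of their constituents' results.
def nested_size(data):
    if isinstance(data, tuple):          # leaf "1-d Tensor"
        return (len(data),)
    return tuple(nested_size(c) for c in data)

words = [
    [(0, 2, 4, 6), (1, 3, 5)],
    [(7, 9), (11,), (13, 15, 17, 19)],
]
assert nested_size(words) == (((4,), (3,)), ((2,), (1,), (4,)))
```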
Note: Someone might call a dimension ragged or batched if for a given dim `size(dim)` doesn't match across all Tensor constituents. We say that this dimension is heterogeneous.
### nested_dim()
```
$ words.nested_dim()
(
(1, 1),
(1, 1, 1)
)
```
Similar to nested_size, nested_dim returns a nested representation. Due to the constraints on the constituent Tensors all numbers will be equal, but this might change in the future if we relax some of them.
## Tensor-wise ops
### Definition
Let's define `z = f(Tensor x_1, Tensor x_2,..., Tensor x_n)` to be an operation on n Tensors. Let's say data is a list of n valid inputs to the constructor.
```
# data is a list that represents inputs to our constructors
# Example:
# data = [(torch.rand(2, 3), torch.rand(5, 4)), (torch.rand(7, 9),)]
inputs = [torch.nestedtensor(data_i) for data_i in data]
def tw_f(inputs, f):
    if isinstance(inputs[0], torch.Tensor):
        return f(*inputs)
    # Turn this list of NestedTensors into a list of NestedTensor constituents
    unbound_inputs = [input.unbind() for input in inputs]
    result = []
    for i in range(len(unbound_inputs[0])):
        result.append(tw_f([unbound_input[i] for unbound_input in unbound_inputs], f))
    return torch.nestedtensor(result)
result = tw_f(inputs, f)
```
We call an operator *tw(f), tensor-wise with respect to f*, if it is, semantically, implemented via the above code.
**We impose two input constraints for this operation (for now).**
1. The resulting list of Tensors or NestedTensors (called result in the above snippet) needs to form a valid NestedTensor. This typically requires that all n input NestedTensors be of the same dtype, layout and dimension for the construction to succeed. We make this a strict requirement.
2. Each NestedTensor constructed from data_i must have the same nesting structure. This means nested_dim must match across all n input NestedTensors.
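The recursion above can be exercised without torch; in this sketch tuples model Tensors, lists model nesting, and `f` maps n leaf tuples to one leaf tuple (all names are illustrative only):

```python
# Runnable pure-Python rendition of the tw_f recursion: recurse through
# matching nesting levels, apply f once the Tensor leaves are reached.
def tw_f(inputs, f):
    if isinstance(inputs[0], tuple):            # reached Tensor leaves
        return f(*inputs)
    unbound = [list(inp) for inp in inputs]     # "unbind" each level
    return [tw_f([u[i] for u in unbound], f)
            for i in range(len(unbound[0]))]

add = lambda x, y: tuple(p + q for p, q in zip(x, y))
a = [[(1, 2), (3,)], [(4, 5, 6)]]
b = [[(10, 20), (30,)], [(40, 50, 60)]]
assert tw_f([a, b], add) == [[(11, 22), (33,)], [(44, 55, 66)]]
```

Note that both input constraints show up directly in the sketch: the nesting structures must match for the index-wise recursion to line up.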
### Example: torch.mm
```
$ a = torch.nestedtensor(
[
[
rand(3, 5)
],
[
rand(1, 8),
rand(7, 3)
]
])
$ b = torch.nestedtensor(
[
[
rand(5, 3)
],
[
rand(8, 2),
rand(3, 4)
]
])
$ torch.matmul(a, b).nested_size()
(
(
torch.Size([3, 3]),
),
(
torch.Size([1, 2]),
torch.Size([7, 4])
)
)
```
We can think of the NestedTensor version of mm as the tensor-wise operation based on mm.
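The shape bookkeeping implied by this example can be sketched as follows (a hypothetical helper operating on nested sizes only, not part of the proposal):

```python
# Given nested sizes of two inputs to tensor-wise mm, compute the
# nested size of the result: (n, k) @ (k, m) -> (n, m) at each leaf.
def mm_nested_size(sa, sb):
    if isinstance(sa[0], int):        # leaf torch.Size, e.g. (3, 5)
        assert sa[1] == sb[0], "inner dimensions must match"
        return (sa[0], sb[1])
    return tuple(mm_nested_size(x, y) for x, y in zip(sa, sb))

sa = (((3, 5),), ((1, 8), (7, 3)))
sb = (((5, 3),), ((8, 2), (3, 4)))
assert mm_nested_size(sa, sb) == (((3, 3),), ((1, 2), (7, 4)))
```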
**List of supported tensor-wise operations**
What follows is a list of categories of operations we consider tensor-wise. All of these operations may be implemented as part of 0.0.1. Some of these stand in place for Python class methods (such as __eq__) and may also double for a module level function, such as torch.add, and method function, such as torch.Tensor.add_.
1. Unary point-wise operations such as cos, exp and digamma
2. Binary operations such as add and mul, but limited to working without Tensor broadcasting
3. mm, mv, addmm and other BLAS operations
4. Spectral operations such as fft and ifft
5. Specific nn layers such as Embedding/EmbeddingBag or RNN
If there is any doubt whether we want to treat these functions as tensor-wise in the future, we should not implement them as such for now.
Note that we explicitly punt on broadcasting semantics within this RFC simply because they can be viewed as implicit shape manipulation of a NestedTensor (even if it's only of a Tensor constituent) and we want to discuss that separately.
A set of heuristics for inclusion in this list is:
1. The operator is independent of the sizes of the Tensor inputs (e.g. torch.cos)
2. The operator does not care about the sizes of the Tensor inputs as long as they match (e.g. torch.add)
3. The operator currently does not have batching or broadcasting semantics (e.g. torch.mm or torch.fft)
4. The operator currently has ad-hoc or awkward variable length input support that can be replaced by NestedTensors
## Example applications
### Embedding
```
def embedding_bag(
input,
weight,
offsets=None,
max_norm=None, norm_type=2,
scale_grad_by_freq=False, mode='mean',
sparse=False, per_sample_weights=None)
```
Currently, if input is of size (B, N) it will be treated as B sequences of length N. The offsets are ignored. The function returns a Tensor of size (B, em), where em is the embedding dimension. If input is 1d, then offsets is used to slice up the input vector. The offsets are currently used to emulate variable length sequences.
Using NestedTensor we can simply do away with offsets. The input is a 2d NestedTensor of size ((n_1,), (n_2,), ..., (n_k,)) and the output will be of size ((em,), (em,), ..., (em,)). This is the tensor-wise operation based on EmbeddingBag applied to a (1, N) Tensor. If there are B Tensor constituents in the input NestedTensor we will simply return a torch.Tensor of size (B, em).
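A minimal sketch of the offsets-free semantics, using plain Python lists in place of Tensors (`embedding_bag_mean` is an invented name for illustration):

```python
# Sketch of the offsets-free embedding_bag: each input row is a
# variable-length list of ids; output row k is the mean of the
# embedding vectors of row k's ids.
def embedding_bag_mean(rows, weight):
    em = len(weight[0])               # embedding dimension
    out = []
    for ids in rows:
        acc = [0.0] * em
        for i in ids:
            for d in range(em):
                acc[d] += weight[i][d]
        # Clamp the divisor so empty bags yield a zero vector.
        out.append([v / max(len(ids), 1) for v in acc])
    return out

weight = [[0.0, 0.0], [1.0, 2.0], [3.0, 4.0]]
assert embedding_bag_mean([[1, 2], [0]], weight) == [[2.0, 3.0], [0.0, 0.0]]
```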
### RNNs
A NestedTensor of 1-dimensional Tensors can be converted into a PackedSequence. RNN functions accept PackedSequences. As such, we can treat NestedTensors as a replacement for PackedSequence with the additional benefits of supporting empty Tensor constituents and entries not sorted by length. PackedSequence is another example of ad-hoc support for variable length sequences, however via a more sophisticated, separate class.
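To illustrate the correspondence, here is a sketch of packing variable-length sequences into the `(data, batch_sizes)` layout that PackedSequence uses; unlike `pack_padded_sequence`, it tolerates unsorted and empty entries (pure Python, illustrative only):

```python
# Pack a "NestedTensor" of 1-d sequences into (data, batch_sizes):
# sort by length descending, then emit one time step at a time,
# keeping only the sequences that are still live at that step.
def pack(seqs):
    order = sorted(range(len(seqs)), key=lambda i: -len(seqs[i]))
    data, batch_sizes = [], []
    t = 0
    while True:
        step = [seqs[i][t] for i in order if len(seqs[i]) > t]
        if not step:
            break
        data.extend(step)
        batch_sizes.append(len(step))
        t += 1
    return data, batch_sizes

data, bs = pack([[1, 2], [3, 4, 5], []])
assert bs == [2, 2, 1]
assert data == [3, 1, 4, 2, 5]
```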
### cross_entropy
```
cross_entropy(
input,
target,
weight=None,
size_average=None,
ignore_index=-100,
reduce=None,
reduction='mean')
```
cross_entropy has implicit support for variable length data via the ignore_index flag. We can implement support for variable length inputs explicitly via NestedTensors, if we treat it as a tensor-wise operation based on cross_entropy for a single sequence.
## Implementation
### List of NestedTensors
Lists of NestedTensors can be implemented by adding metadata on top of a NestedTensor of Tensors. As such they can be implemented separately or even concurrently with future work. We may also restrict ourselves to a few levels of nesting to ease implementation.
### List of Tensors
The simplest implementation implements a NestedTensor of Tensors via a list of Tensors. Operations are implemented via for-loops using existing implementations or via simple extensions.
### List of Tensors: Python list
**Advantages**
- Simplest implementation
- May benefit from for-loop JIT optimizations
**Disadvantages**
- Does not take advantage of inherent parallelism. Very slow.
- Really only useful as a prototype
### Lists of Tensors: Data tensor plus mask tensor
We may start out with a dense tensor and a mask tensor. For performance concerns we may raise a warning on low load factors.
**Advantages**
- All pointwise n-ary operations are immediately implemented by applying them to the data tensor.
- Many torch.nn operations are implemented in torch methods and functions, which yields immediate coverage.
- Many operations can be implemented in terms of others, e.g. BLAS in terms of matmul and index_select via masked_select.
- Autograd support comes out of the box, since this can be written as a wrapper.
- In many cases an existing user implementation uses padding plus masking, so a replacement doesn't yield a performance regression, but surely makes it easier for the user and more widely available.
- Any dtype, layout and device is immediately supported.
- Vendor libraries that require torch.Tensor for high performance operations such as matrix multiply still can be used.
- Provides a central location for various existing implementations that use padding and masking.
**Disadvantages**
- Doesn't take advantage of sparsity and adds a lot of memory overhead due to the mask.
- Cannot support a relaxation of the dtype, layout or device constraints, because there has to be one underlying data Tensor.
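A sketch of the representation under discussion, with plain lists standing in for the data and mask Tensors (names are illustrative):

```python
# Data-plus-mask representation: pad variable-length rows into a
# rectangle and keep a boolean mask of valid entries. Any pointwise
# op can then be applied to the padded data Tensor directly.
def to_data_and_mask(rows, pad=0.0):
    width = max((len(r) for r in rows), default=0)
    data = [list(r) + [pad] * (width - len(r)) for r in rows]
    mask = [[True] * len(r) + [False] * (width - len(r)) for r in rows]
    return data, mask

data, mask = to_data_and_mask([[1.0, 2.0, 3.0], [4.0]])
assert data == [[1.0, 2.0, 3.0], [4.0, 0.0, 0.0]]
assert mask == [[True, True, True], [True, False, False]]
```

The memory-overhead disadvantage is visible directly: the second row stores three slots for one element.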
### List of Tensors: at::TensorList
**Advantages**
- [at::TensorList](https://github.com/pytorch/pytorch/blob/65b00aa5972e23b2a70aa60dec5125671a3d7153/aten/src/ATen/core/Type.h#L39) is a C++ structure already in place and used
- We can use TensorList as a dispatch mechanism to specialized kernels for a vector of Tensors (as a list of views) and without moving any data around.
- Can use nested parallelism or other constructs to parallelize across constituents as a default implementation.
**Disadvantages**
- TensorList requires the allocation of a lot of Tensor structs, which can come with allocation overhead.
- Does not make it easy to treat the underlying memory as a single flat buffer, which might prohibit using existing operations without significant changes or a (parallelized) for-loop
### List of Tensors: Dense Packed data layout
Eventually we will want to use a packed data layout to benefit from sparsity. For this we discuss two cases.
1. A NestedTensor of Tensors
2. A NestedTensor of NestedTensors
We will restrict ourselves to NestedTensors with dense layout, regular C dtype and allocated on a device with pointer arithmetic (e.g. CPU or CUDA GPU).

(Figure omitted here.) For a visual refresher on the memory layout of dense Tensors, and for credit for the original picture, see http://blog.ezyang.com/2019/05/pytorch-internals/.
**Case 1:**
Let's consider a NestedTensor called t of the following form.
```
nestedtensor([
tensor([...]),
tensor([...]),
...
tensor([...])
])
```
Let's say there are K Tensors and let's call the kth Tensor in this sequence t_k. We can characterize its size by `(n_k^1, n_k^2, ..., n_k^N)` and strides `(s_k^1, s_k^2, ..., s_k^N)`. In concordance with torch.Tensor all strides are non-negative. Call the pointer to the underlying memory data_k. Each logical datapoint maps to a datapoint in the underlying memory, but two logical data points may map to the same memory location if, e.g., a stride is 0.
For each logical element of t_k there exists a unique sequence of indices `(i_k^1, i_k^2, ..., i_k^N)` that maps to the underlying memory location via `data_k + \sum_{n = 1}^{N} i_k^n * s_k^n`. The maximum offset is `s_k^0 = \sum_{n = 1}^{N} (n_k^n - 1) * s_k^n`.
Pick one logical element x of t. WLOG we can assume x is a part of t_k. Then we can map to the underlying data via `data_k + \sum_{n = 1}^{N} i_k^n * s_k^n` and associate index `(k, i_k^1, i_k^2, ..., i_k^N)`.
Now, to have a packed data representation we can, during construction or at other times, union the memory of each data_k into a single contiguous memory region pointed to by data. Then we can define a mapping from each logical element with index `(k, i_k^1, i_k^2, ..., i_k^N)` to the underlying data via `data + \sum_{i = 0}^{k - 1} s_i^0 + \sum_{n = 1}^{N} i_k^n * s_k^n`.
In conclusion, we simply unify the underlying data into a single memory region. We only need K further values `s_k^0` to store the offsets and implement this memory layout. This also makes it much easier to provide views and adapt current implementations for regular Tensors. An initial implementation might not unify the data and simply store a vector of Tensors with metadata representing the nesting on top.
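A sketch of Case 1 for 1-d, unit-stride constituents, where the per-constituent offsets play the role of the `s_k^0` values above (pure Python, illustrative only):

```python
# Packed layout sketch: the data of all K constituents lives in one
# flat buffer, plus K offsets. Element (k, i) of the NestedTensor
# lives at buffer[offsets[k] + i] (unit strides assumed).
def pack_flat(tensors):
    buf, offsets, pos = [], [], 0
    for t in tensors:
        offsets.append(pos)
        buf.extend(t)
        pos += len(t)
    return buf, offsets

buf, off = pack_flat([[8, 1, 3, 4], [5, 0, 9]])
assert buf == [8, 1, 3, 4, 5, 0, 9]
assert off == [0, 4]
assert buf[off[1] + 2] == 9    # element (k=1, i=2)
```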
**Case 2:**
Consider a NestedTensor of the following form.
```
nestedtensor([
nestedtensor([...]),
nestedtensor([...]),
...
nestedtensor([...])
])
```
Let's say there are K NestedTensors and let's call the kth NestedTensor in this sequence nt_k. We can then proceed just as we did with our Tensors and introduce a new index i_k^0 with corresponding offset s_k^0 that equals the maximum offset of each constituent.
**A note on metadata**
However, note that adding another level of nesting due to additional NestedTensors does not change the amount of underlying allocated memory. It simply increases the amount of metadata which is used to represent the structure on top of the collection of Tensors. We might, in the future, want to combat this by introducing homogeneity annotations per dimension (e.g., nested_size() must return the same number for each torch.Size at a given dimension). This will allow us to store a single stride instead of one stride per constituent and might also make it easier to leverage existing kernels. We can also use this to compress the metadata we need to store for the Tensor constituents.
**Advantages**
- Due to our homogeneity constraint on dtype, device and layout much of the current code should be easy to extend. In particular, TensorIterators should be easy to extend for pointwise operations with additional striding information (vector of vector of strides)
- Might concurrently resolve discussions around blocked memory supports (vector of vector of strides)
**Disadvantages**
- Requires deep integration and possibly extension of core structures
- We can run into issues with allocation overhead if we store this as a std::vector of Tensors.
- Some vendor library operations require regular torch.Tensors. High performance operations such as matrix multiplication might require new kernels, but this may be eased with homogeneity annotations.
## Next steps
As next steps we can implement a basic prototype located in torch.prototype.nestedtensor. It'll be a Python wrapper around two torch.Tensors, one data Tensor and one mask Tensor. It allows construction from a list of torch.Tensors, i.e. no nesting. Dispatch is implemented via monkey-patching torch on import of torch.prototype.nestedtensor.torch and using isinstance to distinguish between torch.Tensor and torch.NestedTensor. It also comes with a test suite within test/test_nestedtensor.py.
| triaged,module: batching | high | Major |
460,182,005 | go | net: mass tcp dial timeout with the concurrency more than 10000 | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.12.1 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN="/home/zhoudazhuang/gobin/"
GOCACHE="/home/zhoudazhuang/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/zhoudazhuang/db11/jm/pro"
GOPROXY=""
GORACE=""
GOROOT="/home/zhoudazhuang/usr/local/go1.12.1/go"
GOTMPDIR=""
GOTOOLDIR="/home/zhoudazhuang/usr/local/go1.12.1/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build631445118=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
My tested code:
Server:
```
func main() {
l, err := net.Listen("tcp", ":8888")
if err != nil {
log.Println("listen error:", err)
return
}
for {
c, err := l.Accept()
if err != nil {
log.Println("accept error:", err)
break
}
go c.Close()
// start a new goroutine to handle
// the new connection.
//log.Println("accept a new connection")
//go handleConn(c)
}
}
```
Client:
```
package main
import (
"fmt"
"github.com/jessevdk/go-flags"
"net"
"os"
"sync"
"time"
)
var args struct {
	Addr        string `short:"a" required:"yes" description:"server address"`
	Concurrence int    `short:"c" required:"yes" description:"number of concurrent requests"`
	Count       int    `short:"t" required:"yes" description:"number of test rounds"`
}
func main() {
argParser := flags.NewNamedParser("tcp_test", flags.PassDoubleDash)
argParser.AddGroup("Mock parameters", "", &args)
_, err := argParser.Parse()
if err != nil {
fmt.Println(err)
argParser.WriteHelp(os.Stdout)
os.Exit(1)
}
	fmt.Printf("Test parameters: %+v\n", args)
	wg := sync.WaitGroup{}
	for j := 0; j < args.Count; j++ {
		for i := 0; i < args.Concurrence; i++ {
			wg.Add(1)
			go func() {
				defer wg.Done()
				time.Sleep(time.Second * 5)
				conn, err := net.DialTimeout("tcp", args.Addr, time.Second*3)
				if err != nil {
					fmt.Printf("dial err: %+v j=%d:\n", err, j)
					return
				}
				time.Sleep(time.Millisecond * 10)
				conn.Close()
			}()
		}
	}
fmt.Println("start wait")
wg.Wait()
fmt.Println("ok")
}
```
The command: ./Test -a 127.0.0.1:8888 -c 10000 -t 3
╰─># cat /proc/sys/net/ipv4/ip_local_port_range
4096 65535
╰─># ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 386849
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 10000000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 1000000
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
╰─># free -h
total used free shared buffers cached
Mem: 94G 85G 9.3G 2.0M 54M 42G
-/+ buffers/cache: 42G 51G
Swap: 0B 0B 0B
╰─># lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-26xx v4
Stepping: 1
CPU MHz: 3192.606
BogoMIPS: 6385.21
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 4096K
NUMA node0 CPU(s): 0-15
NUMA node1 CPU(s): 16-31
<!--
If possible, provide a recipe for reproducing the error.
A complete runnable program is good.
A link on play.golang.org is best.
-->
### What did you expect to see?
no tcp dial timeout.
### What did you see instead?
dial err: dial tcp 127.0.0.1:8888: i/o timeout j=3:
dial err: dial tcp 127.0.0.1:8888: i/o timeout j=3:
dial err: dial tcp 127.0.0.1:8888: i/o timeout j=3:
dial err: dial tcp 127.0.0.1:8888: i/o timeout j=3:
| NeedsInvestigation | low | Critical |