Dataset columns:
id: int64 (393k, 2.82B)
repo: stringclasses (68 values)
title: stringlengths (1-936)
body: stringlengths (0-256k, βŒ€)
labels: stringlengths (2-508)
priority: stringclasses (3 values)
severity: stringclasses (3 values)
507,510,214
flutter
Support customizable page transition durations
The latest Android 10 activity transition occurs over 400ms. Currently, `MaterialPageRoute` has a transition duration that is set to a constant of 300ms. We should create a new parameter so that developers can modify/customize the transition duration if they'd like. This also allows Flutter's Android 10 transition to have the correct transition duration. - [ ] Add `MaterialPageRoute.transitionDuration` parameter - [ ] If possible, add a new `PageTransitionsTheme.transitionDuration` property that is correctly looked up by all page route classes so that it needs to only be specified in one place instead of wherever a new page route is defined.
framework,f: material design,f: cupertino,P2,team-design,triaged-design
low
Minor
507,518,387
godot
[Mono] ResourceSaver.Save() not working with custom classes
**Godot version:** Godot Engine v3.2.alpha.mono.custom_build.119bf2372 **OS/device including version:** Windows 10 **Issue description:** Using C#, ResourceSaver.Save() does not work with custom classes. **Steps to reproduce:** The MRP contains a custom scene and a custom resource. Running the project will save 4 files into `res://save/` 1. custom_resource.tres - created via `new CustomResource();` 2. loaded_custom_resource.tres - created via `GD.Load<>("..").New()` 3. CustomNode.tscn - created via `new CustomNode();` 4. InstancedCustomNode.tscn - created via `GD.Load<>("..").Instance()` Both custom classes contain an exported string variable `variable` which is "default" by default and is modified to "modified" before saving. You will see that saved files created via `new Constructor()` have the wrong script path and files created via `GD.Load()` + `New()/Instance()` have the correct script path but the exported variable contains the default value `"default"` despite having been modified. **Minimal Reproduction Project:** [resource_saver.zip](https://github.com/godotengine/godot/files/3751666/resource_saver.zip)
bug,confirmed,topic:dotnet
low
Major
507,539,701
pytorch
Traced resnet101 leaks memory during `forward`
## πŸ› Bug Memory consumption increases when running inference with torch script module. ## To Reproduce code samples: ``` import os import psutil import torch import torchvision traced_script_module = torch.jit.trace(torchvision.models.resnet101(pretrained=True), torch.rand(1, 3, 224, 224)) process = psutil.Process(os.getpid()) print(process.memory_info().rss / 1e9) for i in range(10): samples = torch.rand(32, 3, 224, 224) predictions = traced_script_module(samples) print(predictions) process = psutil.Process(os.getpid()) print(process.memory_info().rss / 1e9) ``` output ``` 0.595202048 tensor([[-1.5747, 0.6994, -0.7375, ..., 0.8081, 1.2429, -0.5155], [ 1.2828, -1.1425, -0.2093, ..., -1.0598, 1.4675, 0.2916], [-2.8575, 0.1437, -0.9690, ..., -1.2187, 2.2080, 1.3188], ..., [ 0.2986, 0.6763, 1.1985, ..., 1.7018, 0.4733, 0.8496], [-0.8345, -1.2992, -0.7214, ..., -0.6559, 1.0582, -0.1334], [-0.7516, -0.8837, 0.1562, ..., -0.5259, -0.7300, 0.5229]], grad_fn=<DifferentiableGraphBackward>) 5.99044096 tensor([[-1.5746, -1.8324, -1.2751, ..., -1.7721, 0.2030, 1.1744], [ 1.7051, -0.3812, -2.5524, ..., -1.3434, 1.6557, 0.2199], [ 0.0429, 0.2007, 1.2892, ..., -1.7945, -0.8777, 0.0374], ..., [-0.6182, -0.0681, 0.4620, ..., 0.0140, 0.1492, 0.3290], [-0.0735, 0.3230, -0.7785, ..., -0.8295, 0.1701, 0.3623], [-1.1048, -0.9883, 0.0532, ..., -1.6516, -0.7215, 0.1046]], grad_fn=<DifferentiableGraphBackward>) 8.926531584 tensor([[-3.3263, -0.3973, -2.1202, ..., -0.9237, 1.4103, 1.6226], [-1.8091, -1.1055, -2.0259, ..., -0.3219, 0.2350, 0.8425], [-1.7041, -0.3857, -1.3425, ..., 2.0590, 1.9119, 0.3337], ..., [-0.2145, -0.0698, -1.4803, ..., 2.4058, 1.6146, 1.4400], [ 0.7031, 0.0917, 1.5251, ..., 1.4080, 1.0682, 0.9095], [-1.8758, 0.2686, -0.0282, ..., 0.2069, 0.5055, 0.7498]], grad_fn=<DifferentiableGraphBackward>) 9.022865408 tensor([[-1.5912, -1.1433, -2.2109, ..., -1.6980, 3.1191, -0.4465], [-1.2363, -0.8868, -0.2385, ..., -1.9240, 0.8099, 0.6127], [-2.8276, -0.5101, 0.5070, ..., -2.0766, -1.3493, 1.4947], ..., [-1.4246, -1.2391, -1.4561, ..., -1.8967, 1.7254, 0.9091], [-0.5911, -0.2010, -1.3293, ..., 1.2857, 0.5298, -0.3506], [-2.2977, -0.2686, -1.9806, ..., -0.6035, 1.0328, 1.3565]], grad_fn=<DifferentiableGraphBackward>) 10.608922624 tensor([[ 0.9014, 1.0406, -0.0942, ..., 0.7062, -0.9783, -1.1471], [-1.0258, -0.0566, -1.1202, ..., -0.5074, -0.1088, -0.1099], [-0.2558, 1.1689, 1.3797, ..., -2.4711, 0.0562, 1.8224], ..., [ 0.2017, -0.3057, -0.5456, ..., 0.9477, 1.2708, 0.4418], [-1.5674, 0.1624, -1.1115, ..., 1.0431, 1.0180, -0.0383], [-2.1499, -0.7444, -1.4897, ..., 0.6625, 0.8116, 2.2398]], grad_fn=<DifferentiableGraphBackward>) 10.608922624 tensor([[-0.7774, 0.4201, -3.0641, ..., -2.9160, 2.4548, -0.6954], [-1.5430, -0.3068, -2.7503, ..., -2.2520, 0.5887, 0.9933], [-0.6780, 0.1065, 1.2061, ..., -2.1829, 0.6028, -0.0989], ..., [-1.1435, -0.7770, -1.7809, ..., -1.6382, -0.0364, 0.1657], [ 0.0038, -0.7613, -1.7814, ..., -0.9964, 1.4763, 1.1532], [-0.1706, -0.0197, 0.8526, ..., -1.4628, 2.2953, 1.3528]], grad_fn=<DifferentiableGraphBackward>) 10.608922624 ``` ## Expected behavior memory consumption stays constant as the first iteration. ## Environment (or fill out the checklist below manually). Collecting environment information... 
PyTorch version: 1.3.0 Is debug build: No CUDA used to build PyTorch: 10.1.243 OS: Ubuntu 16.04.2 LTS GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609 CMake version: version 3.5.1 Python version: 3.6 Is CUDA available: No CUDA runtime version: No CUDA GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA Versions of relevant libraries: [pip3] numpy==1.13.3 [pip3] numpydoc==0.6.0 [pip3] torch==0.1.12.post1 [pip3] torchvision==0.1.8 [conda] torch 1.3.0 <pip> [conda] torchvision 0.4.1 <pip> cc @ezyang @gchanan @zou3519 @jerryzh168 @suo
module: memory usage,triaged
low
Critical
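For the PyTorch issue above, a hedged diagnostic sketch: the printed predictions carry `grad_fn=<DifferentiableGraphBackward>`, so one plausible contributor to the growth is retained autograd state rather than (or in addition to) a leak in the traced module itself. Running the same loop under `torch.no_grad()` isolates that factor; this is a probe, not a confirmed fix.

```python
import os
import psutil
import torch
import torchvision

traced = torch.jit.trace(
    torchvision.models.resnet101(pretrained=True),
    torch.rand(1, 3, 224, 224),
)

process = psutil.Process(os.getpid())
with torch.no_grad():  # outputs carry no grad_fn, no autograd graph is kept alive
    for _ in range(10):
        samples = torch.rand(32, 3, 224, 224)
        predictions = traced(samples)
        print(process.memory_info().rss / 1e9)
```

If memory still grows under `no_grad`, the leak is more likely in the JIT/fuser caching path than in autograd.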
507,631,696
flutter
The platform in editable.dart should probably be obtained from the BuildContext
The platform in [editable.dart](https://github.com/flutter/flutter/blob/b9817b4f595c9ee83bc0625efa60a19599565d63/packages/flutter/lib/src/rendering/editable.dart) should probably be obtained from the `BuildContext`. https://github.com/flutter/flutter/blob/b9817b4f595c9ee83bc0625efa60a19599565d63/packages/flutter/lib/src/rendering/editable.dart#L1607 https://github.com/flutter/flutter/blob/b9817b4f595c9ee83bc0625efa60a19599565d63/packages/flutter/lib/src/rendering/editable.dart#L1665
a: text input,framework,c: proposal,P2,team-framework,triaged-framework
low
Minor
507,668,009
youtube-dl
How to download a video from Hotstar which has more than one audio track
<!-- ###################################################################### WARNING! IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE ###################################################################### --> ## Checklist <!-- Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl: - Look through the README (http://yt-dl.org/readme) and FAQ (http://yt-dl.org/faq) for similar questions - Search the bugtracker for similar questions: http://yt-dl.org/search-issues - Finally, put x into all relevant boxes (like this [x]) --> - [x] I'm asking a question - [x] I've looked through the README and FAQ for similar questions - [x] I've searched the bugtracker for similar questions including closed ones ## Question <!-- Ask your question in an arbitrary form. Please make sure it's worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. --> How do I download a video with more than one audio track from hotstar.com or any other website? For example, I am using hotstar.com here: https://www.hotstar.com/in/movies/mangalyaan-indias-mission-to-mars/1770005017 This video has 3 or 4 audio tracks, but when I download it I get only the first audio track.
question
low
Critical
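For the question above, a hedged sketch of the usual youtube-dl workflow for picking a specific audio track: list the formats first, then request an explicit video+audio combination by format code. The selector string below ("bestvideo+bestaudio") is an assumption; to get a non-default audio track, replace "bestaudio" with the numeric audio format code printed by the listing step (site support and geo-restrictions permitting).

```python
import youtube_dl

url = "https://www.hotstar.com/in/movies/mangalyaan-indias-mission-to-mars/1770005017"

# Equivalent to `youtube-dl -F <url>`: print every available video/audio format.
with youtube_dl.YoutubeDL({"listformats": True}) as ydl:
    ydl.download([url])

# Equivalent to `youtube-dl -f "bestvideo+bestaudio" <url>`; swap "bestaudio"
# for a specific audio-only format code taken from the listing above.
with youtube_dl.YoutubeDL({"format": "bestvideo+bestaudio"}) as ydl:
    ydl.download([url])
```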
507,699,537
vue
<template> tag inside a v-pre directive will be ignored if v-pre is not used at the staticRoot
### Version 2.6.10 ### Reproduction link [https://jsfiddle.net/vuetest/c5uw870y/3/](https://jsfiddle.net/vuetest/c5uw870y/3/) ### Steps to reproduce 1. Click the jsfiddle link 2. You will see the result is {{msg}}, which is incorrect ### What is expected? Render the template tag as an HTML element; the DOM should look like this: ```html <div> <p> <template> #document-fragment <span>{{msg}}</span> </template> </p> </div> ``` ### What is actually happening? The template tag disappeared: ```html <div> <p> <span>{{msg}}</span> </p> </div> ``` --- When v-pre is used at the staticRoot, the template renders correctly; this was fixed by [#8146](https://github.com/vuejs/vue/pull/8146). See also: https://jsfiddle.net/vuetest/c5uw870y/5/ When v-pre is not used at the staticRoot, the template tag will be skipped; the relevant source code in `vue/src/compiler/codegen/index.js` is: ```js export function genElement (el: ASTElement, state: CodegenState): string { // ... } else if (el.tag === 'template' && !el.slotTarget && !state.pre) { // the template tag falls into this branch, which directly renders its children instead of the tag itself return genChildren(el, state) || 'void 0' } // ... ``` How to solve this problem: ```js export function genElement (el: ASTElement, state: CodegenState): string { if (el.parent) { el.pre = el.pre || el.parent.pre; // add this line state.pre = el.pre; } ``` I'm not familiar with creating a PR with tests; could someone else help me? <!-- generated by vue-issues. DO NOT REMOVE -->
regression,has workaround
low
Minor
507,709,939
rust
Make the documentation about `#![allow(unused)]` more visible
I like to build functions as a series of parts, for example I might write the following skeleton, then plan to fill in the loop ``` fn sorter(x: Vec<Vec<isize>>, modifier: bool) -> Vec<isize> { let mut out = vec![]; for i in x { } out } ``` If I run a cargo check on this, just to check for typos, I get: ``` warning: unused variable: `i` --> src/playgame.rs:35:9 | 35 | for i in x {} | ^ help: consider prefixing with an underscore: `_i` | = note: `#[warn(unused_variables)]` on by default warning: unused variable: `modifier` --> src/playgame.rs:33:31 | 33 | fn sorter(x: Vec<Vec<isize>>, modifier: bool) -> Vec<isize> { | ^^^^^^^^ help: consider prefixing with an underscore: `_modifier` warning: variable does not need to be mutable --> src/playgame.rs:34:9 | 34 | let mut out = vec![]; | ----^^^ | | | help: remove this `mut` | = note: `#[warn(unused_mut)]` on by default warning: function is never used: `sorter` --> src/playgame.rs:33:1 | 33 | fn sorter(x: Vec<Vec<isize>>, modifier: bool) -> Vec<isize> { | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | = note: `#[warn(dead_code)]` on by default ``` I feel it should be possible to encapsulate a set of warnings which represent "unused/dead", and have a way of disabling them while developing. I realise this could be abused by some people, but I find it hard to spot the important issues between these unused/dead issues while developing.
C-enhancement,A-lints,A-diagnostics,T-compiler,A-docs
low
Minor
507,710,412
terminal
"less" pager renders in the wrong color when started at the bottom of the screen from powershell
# Environment ```none Windows build number: 10.0.18362.356 Windows Terminal version (if applicable): 0.5.2762.0 Any other software? less pager (https://chocolatey.org/packages/Less) ``` # Steps to reproduce Open PowerShell Tab `less <somefile>` will show the file in foreground color `gci` `less <somefile>` will show the file in black color # Expected behavior `<somefile>` should be shown in foreground color # Actual behavior Somehow, after "gci" the file is shown in black. This does not happen with Cmd (note: at least when I tried to reproduce it, but I believe it had happened previously with cmd as well) and it doesn't happen in PowerShell or Cmd standalone (same note applies).
Help Wanted,Area-Rendering,Product-Conpty,Area-Output,Issue-Bug,Priority-3
low
Major
507,740,955
flutter
Support `pub upgrade --dry-run`
The issue has mostly been resolved on the dart side already and it actually supports the `--dry-run` option https://github.com/dart-lang/sdk/issues/15243. However, flutter does not. So in a dart project one can run: ``` pub upgrade --dry-run ``` But with a flutter project you can't do that. If you try using the pub command from dart sdk, you get this error: ``` Resolving dependencies... Because card_communicator depends on flutter_test any from sdk which doesn't exist (the Flutter SDK is not available), version solving failed. Flutter users should run `flutter pub get` instead of `pub get`. ``` And on the other hand: ``` $ flutter pub upgrade --dry-run Could not find an option named "dry-run". Run 'flutter -h' (or 'flutter <command> -h') for available flutter commands and options. ``` So, yeah would be super nice to get this feature also available for flutter side. @munificent you were heavily involved with the dart side of things. Maybe you know how this could be made to work for flutter? It sounds like the heavy lifting for this has already been done and supporting this on flutter side might be almost trivial?
c: new feature,tool,P3,team-tool,triaged-tool
low
Critical
507,742,224
flutter
Text decoration underline is not aligned with some characters or spaces
## Steps to Reproduce Setting text decoration to underline or overline, the decoration is not aligned in some languages' characters or space. I thought there should be an issue here but didn't find any. Is this considered to be a feature which is intended or it's a bug? <img width="438" alt="Screenshot_2019-10-16 17 41 52_Vj9vK6" src="https://user-images.githubusercontent.com/14291993/66907949-d1cba900-f03c-11e9-869c-606f1cd43b15.png"> <img width="395" alt="Screenshot_2019-10-16 17 42 03_7bFvkz" src="https://user-images.githubusercontent.com/14291993/66907953-d2fcd600-f03c-11e9-9c8b-cf678d6f6f30.png"> <img width="580" alt="Screenshot_2019-10-16 17 42 08_8r9PF4" src="https://user-images.githubusercontent.com/14291993/66907957-d4c69980-f03c-11e9-8055-41d16b747f23.png"> **Target Platform:**Android/iOS **Target OS version/browser:**any **Devices:**iPhone, OnePlus phones ``` [βœ“] Flutter (Channel beta, v1.10.7, on Mac OS X 10.15 19A583, locale zh-Hans-CN) [βœ“] Android toolchain - develop for Android devices (Android SDK version 28.0.3) [βœ“] Xcode - develop for iOS and macOS (Xcode 11.0) [βœ“] Android Studio (version 3.5) [βœ“] Connected device (2 available) β€’ No issues found! ```
framework,engine,a: internationalization,a: quality,a: typography,has reproducible steps,P2,found in release: 3.7,found in release: 3.9,team-engine,triaged-engine
low
Critical
507,751,961
rust
`unused_variables` and `unused_assignments` lints do not catch array usage
I recently had a problem where I was accidentally implicitly copying an array with `if let` bindings, and the writes to the original array were never made (I needed to add `ref` in my bindings). I thought it was strange that there was no warnings for this, and noticed this is true for these lints in general. Here's a small reproduction (also on the [playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=d33b43de0a7f7bb853ff77e855bf8937)). ```rust fn main() { // Write but never read. No warning is emitted let mut arr = [0; 5]; arr[0] = 4; [0][0] = 0; // Write but never read, the following warnings are emitted // warning: variable `x` is assigned to, but never used // --> src/main.rs:7:13 // | // 7 | let mut x = 4; // | ^ // | // = note: `#[warn(unused_variables)]` on by default // = note: consider using `_x` instead // warning: value assigned to `x` is never read // --> src/main.rs:8:5 // | // 8 | x = 12; // | ^ // | // = note: `#[warn(unused_assignments)]` on by default // = help: maybe it is overwritten before being read? let mut x = 4; x = 12; } ``` I would expect these lints to fire here, is this expected behavior? I tried a quick search and wasn't able to find an existing issue.
C-enhancement,A-lints,T-lang,T-compiler,A-array
low
Critical
507,803,345
svelte
Documentation crosslinking
Following [this conversation](https://github.com/sveltejs/svelte/issues/3694#issuecomment-541454741) I've started a spreadsheet to track down what parts of the API, the tutorial and the examples can eventually be interlinked [1]. Once we'll have figured out all the links we can figure out eventually how to add links in a non-intrusive/usable way in the site, depending on the links volume. Assigning to myself but please feel free to get onboard and help as this seems quite a bit of work (we can use comments in the spreadsheet for ease of use). The json files extracted from the site as of now are at [2]. ---- These are the sections of the 3 types of doc, each having 2 columns on the right to add correspondences. If a section in `/docs` has a correspondence in `/tutorial` we can add an "x" in the "Tut" column and link it to the correspondent cell under "Tutorial". For example these cells are linked: <img width="1020" alt="Screenshot 2019-10-16 at 12 41 43" src="https://user-images.githubusercontent.com/1309648/66916070-64a31e00-f012-11e9-8239-ac94e28f4034.png"> If we'll find sections with multiple links we can add more "x cells", say for example like this: <img width="384" alt="Screenshot 2019-10-16 at 12 44 53" src="https://user-images.githubusercontent.com/1309648/66916286-d0858680-f012-11e9-84eb-37423b15462a.png"> For ease of use: - Cmd-K to link it to some target cell - select the target cell or type in the target cell coordinate (`E11`) - save [1] https://docs.google.com/spreadsheets/d/14bdp_XNm-n9opbymT_F1T7t0j9Edwl8NSK5G9gweiCU/edit#gid=0 [2] https://gist.github.com/mindrones/bac9331855f29476598ed274be330118
site,documentation
low
Minor
507,838,561
youtube-dl
Add URL's from "Comedy Central Brasil" to extractor comedycentral.py
<!-- ###################################################################### WARNING! IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE ###################################################################### --> ## Checklist <!-- Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl: - First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2019.10.16. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED. - Search the bugtracker for similar feature requests: http://yt-dl.org/search-issues. DO NOT post duplicates. - Finally, put x into all relevant boxes (like this [x]) --> - [x] I'm reporting a feature request - [x] I've verified that I'm running youtube-dl version **2019.10.16** - [x] I've searched the bugtracker for similar feature requests including closed ones ## Description <!-- Provide an explanation of your issue in an arbitrary form. Please make sure the description is worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. Provide any additional information, suggested solution and as much context and examples as possible. --> Comedycentral.py extractor currently only supports cc.com URLs, and I would like support for Comedy Central Brasil URLs to be added (https://www.comedycentral.com.br) So when they need the credentials I email them I've been searching for other open issues and saw that I already had # 20754, but it had no return and I also noticed that youtube-dl already has support for Comedy Central, except for Brazil =============================================================================== [debug] System config: [] [debug] User config: [] [debug] Custom config: [] [debug] Command-line args: ['-v', '-F', 'https://www.comedycentral.com.br/episodios/1lejgv/roast-de-alec-baldwin-roast-of-alec-baldwin-temporada-1-ep-1'] [debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8 [debug] youtube-dl version 2019.10.16 [debug] Python version 3.6.3 (CPython) - Linux-4.15.0-30deepin-generic-x86_64-with-debian-9.0 [debug] exe versions: avconv 3.2.12-1, avprobe 3.2.12-1, ffmpeg 3.2.12-1, ffprobe 3.2.12-1, rtmpdump 2.4 [debug] Proxy map: {} [generic] roast-de-alec-baldwin-roast-of-alec-baldwin-temporada-1-ep-1: Requesting header WARNING: Falling back on generic information extractor. [generic] roast-de-alec-baldwin-roast-of-alec-baldwin-temporada-1-ep-1: Downloading webpage [generic] roast-de-alec-baldwin-roast-of-alec-baldwin-temporada-1-ep-1: Extracting information ERROR: Unsupported URL: https://www.comedycentral.com.br/episodios/1lejgv/roast-de-alec-baldwin-roast-of-alec-baldwin-temporada-1-ep-1 Traceback (most recent call last): File "/usr/local/lib/python3.6/site-packages/youtube_dl/YoutubeDL.py", line 796, in extract_info ie_result = ie.extract(url) File "/usr/local/lib/python3.6/site-packages/youtube_dl/extractor/common.py", line 530, in extract ie_result = self._real_extract(url) File "/usr/local/lib/python3.6/site-packages/youtube_dl/extractor/generic.py", line 3349, in _real_extract raise UnsupportedError(url) youtube_dl.utils.UnsupportedError: Unsupported URL: https://www.comedycentral.com.br/episodios/1lejgv/roast-de-alec-baldwin-roast-of-alec-baldwin-temporada-1-ep-1
request
low
Critical
507,847,123
godot
Cannot load texture imported as TextureAtlas
**Godot version:** Godot Engine v3.2.alpha2.official **OS/device including version:** Windows 10 (1709) **Issue description:** It exists in filesystem, but can't be used After opening the project: ``` ERROR: get_multiple_md5: Condition ' !f ' is true. Continuing..: At: core/os/file_access.cpp:667 ERROR: Cannot open file 'res://.import/icon.png-487276ed1e3a0c39cad0279d744ee560.res'. At: core/io/resource_format_binary.cpp:1035 ``` When trying to select (double click) it: ``` ERROR: Cannot open file 'res://.import/icon.png-487276ed1e3a0c39cad0279d744ee560.res'. At: core/io/resource_format_binary.cpp:986 ERROR: Failed loading resource: res://.import/icon.png-487276ed1e3a0c39cad0279d744ee560.res. At: core/io/resource_loader.cpp:278 ERROR: Failed loading resource: res://icon.png. At: core/io/resource_loader.cpp:278 ERROR: load_resource: Condition ' !res.is_valid() ' is true. returned: ERR_CANT_OPEN At: editor/editor_node.cpp:656 ``` **Steps to reproduce:** Image from https://godotengine.org/article/atlas-support-returns-godot-3-2 ![image](https://user-images.githubusercontent.com/43543909/66921749-7b576e00-f02e-11e9-8571-a95eb90b4795.png) **Minimal reproduction project:** [BrokenAtlas.zip](https://github.com/godotengine/godot/files/3734424/BrokenAtlas.zip)
bug,topic:core,confirmed,topic:import
low
Critical
507,884,642
flutter
Stylus doesn't trigger PointerDownEvent
## The main problem When you use a stylus as an input device, sometimes a PointerDownEvent gets triggered and then for the next 5-30sec the application doesn't handle any pointer inputs(it probably just freezes). Then I get a notice from the system that the app isn't responding and in that moment everything works as it should. The Problem then only reoccurs after you restart the Application. Note: As long as I don't use the stylus everything works fine. The main strange thing about this is that it only happens sometimes. ## The second problem When I press the button on my SPen and then touch the display, no pointer down events are triggered at all. ## Steps to Reproduce 1. Create a new Flutter Project and paste the content of the test application into main.dart (https://pastebin.com/PDuDVFQU) 2. Load the application onto your test device 3. Run the app (It seems to me, that this problem occurs with a higher probability if the data of the app is deleted beforehand, and the app is started from the device itself) Target Platform: Android Target OS version: 9 Devices: Samsung tab S4 (SM T830); Samsung Tab S3 (SM T820) It seems like the problem occurs more frequently on the S4. But this could also be my imagination. ## Logs Log file: https://pastebin.com/9GFaVMUq In line 1278 it says I didn't do anything for 20711 ms even though I instantly put the pen down again after I lifted it. Flutter analyze doesn't return any issues (after you remove the curly brackets in the Strings which don't affect the problem) <!-- Finally, paste the output of running `flutter doctor -v` here. --> ``` [βœ“] Flutter (Channel beta, v1.10.7, on Linux, locale en_US.UTF-8) β€’ Flutter version 1.10.7 at /home/ich/programming-libraries/flutter-sdk/flutter β€’ Framework revision e70236e36c (2 weeks ago), 2019-10-02 09:32:30 -0700 β€’ Engine revision 9e6314d348 β€’ Dart version 2.6.0 (build 2.6.0-dev.4.0 1103600280) [!] Android toolchain - develop for Android devices (Android SDK version 28.0.3) β€’ Android SDK at /home/ich/Android/Sdk β€’ Android NDK location not configured (optional; useful for native profiling support) β€’ Platform android-28, build-tools 28.0.3 β€’ Java binary at: /home/ich/.local/share/JetBrains/Toolbox/apps/AndroidStudio/ch-0/191.59002 03/jre/bin/java β€’ Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b49-5587405) ! Some Android licenses not accepted. To resolve this, run: flutter doctor --android-licenses [!] Android Studio (version 3.5) β€’ Android Studio at /home/ich/.local/share/JetBrains/Toolbox/apps/AndroidStudio/ch-0/191.59002 03 βœ— Flutter plugin not installed; this adds Flutter specific functionality. βœ— Dart plugin not installed; this adds Dart specific functionality. β€’ Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b49-5587405) [!] VS Code (version 1.39.1) β€’ VS Code at /usr/share/code βœ— Flutter extension not installed; install from https://marketplace.visualstudio.com/items?itemName=Dart-Code.flutter [βœ“] Connected device (1 available) β€’ SM T830 β€’ ce11182b6346556a1c7e β€’ android-arm64 β€’ Android 9 (API 28) ! Doctor found issues in 3 categories. ``` Note: The doctor is wrong. I don't use VSCode and AndroidStudio is up to date with version 3.5.1 Thanks in advance for your help. - Rimaito
e: device-specific,platform-android,framework,f: gestures,P2,found in release: 1.20,team-android,triaged-android
low
Critical
507,886,151
rust
Support index size != pointer width
## Preliminaries `usize` is the pointer-sized unsigned integer type [1]. It is also Rust's index type for slices and loops; this definition works well when pointer size corresponds to the space of indexable objects (most targets today). Informally, `uintptr_t == size_t`. Note that the target pointer width is indisputably set by the LLVM data layout string. It would be correct to say that it is currently impossible to have `usize` different to `target_pointer_width` without breaking numerous assumptions in rustc [2, 3]. Unfortunately, `uintptr_t == size_t` doesn't hold for all architectures. For context, I've worked toward (not active) compiling Rust for MIPS/CHERI (CHERI128) [4]. This target has 128-bit capability pointers (as in layout string), and a 64-bit processor and address space. I also assume that we don't want programmers messing with pointers in Safe Rust, and that they shouldn't have to care how a pointer (or reference) is represented/manipulated by an architecture. ## Problem I think that more than one type is necessary here, to distinguish between the "index" or "size" component of a pointer (a la `size_t`), and the space required to contain a pointer (`uintptr_t`). To me, the ideal solution is to change `usize` to be in line with `size_t` and not `uintptr_t`. As @briansmith [notes](https://github.com/rust-lang/rfcs/issues/1748#issuecomment-475368575), this would be a breaking semantic change. I claim that this is only problematic on architectures where `uintptr_t != size_t`. As such, code breakage from changing this assumption is constrained to targets where the code was _already_ broken. Why not have a 128-bit `usize`? This _is_ technically feasible, and it's the basis of my compilation of Rust for CHERI. But: * Bounds checks explode from 2 instructions to 7. Yes, this occurs with optimisation on, but no, I haven't profiled it on real-world applications. * rustc tries to index into LLVM intrinsics such as `memcpy` with 128-bit integers. This isn't defined in the backend, and _arguably shouldn't be defined_. I will not be the last person to wonder why `memcpy` doesn't generate any instructions. * The address space is 64 bits. `ptr as int` gives an LLVM `i64`, which can't be cast/isn't comparable to an `i128`; again there is no good reason to manipulate 128-bit integers here. Likewise when calling `inttoptr`, which is a valid instruction even if the result can't be dereferenced [5]. It may not be necessary to define and expose a `uintptr_t` type. It's optionally defined in C; I'm not sure programmers want to use such a type, and it could be relegated to the compiler. I haven't thought about this seriously, though. **The key issue is the conflict between index size and pointer width.** How can we resolve this conflict, and support architectures with index size != pointer width? (or: why isn't this a problem at all?) ## Other questions **Is this a better kind of broken?** I don't know, that's what this issue is for. What is certain is that lots of libc-using code probably depends on `usize == uintptr_t == size_t` and that these will break in either case. **Is provenance a problem?** From my experience with the Rust compiler, no [6]. Integers (`usize`) are never cast back to pointers and dereferenced. We already know this at some level (rust-lang/unsafe-code-guidelines#52). This suggests no fundamental link between indexing (i.e. `usize`) and pointer width. 
**Will we really see 128-bit pointers in our lifetime?** I don't speak with authority on CHERI, but 64 bits definitely isn't enough for the "usual" 48-bit address space there [7]. **But CHERI breaks the C specification; how can we discuss this issue in terms of C types?** This issue really isn't about CHERI [8], or C. I won't speculate on the C specification or whether it's helpful for Rust. I use C types as the people likely to engage with this issue are familiar with them. **What about LLVM address spaces?** This is a whole new can of worms. I believe rustc will only use one LLVM address space, and in particular won't support two address spaces with different pointer widths. This is an issue for CHERI in hybrid capability mode, but also of supporting any architecture with multiple address spaces. [AVR-Rust](https://github.com/avr-rust/) probably cares about address spaces and may have some expertise here. ## Related * The question of whether `usize == uintptr_t` (rust-lang/libc#1400) * Assuming that `usize` == `size_t` will break C FFI code (rust-lang/unsafe-code-guidelines#99). This isn't a problem _per se_, but we almost encourage wrong assumptions in unsafe code. * The problem of `usize` being linked to the bitness of the architecture (rust-lang/unsafe-code-guidelines#152) * This very fragile code to print out pointer width demonstrates the level of assumption in the Rust codebase (rust-lang/rust#56567); also related: rust-lang/rfcs#1748 ## Notes [1] From https://doc.rust-lang.org/std/primitive.usize.html [2] As remarked by @gnzlbg in https://github.com/rust-lang/libc/issues/1400#issuecomment-502308097; this related problem is a bit subtle and quite complex. [3] It isn't clear (to me!) whether this is primarily a compiler implementation problem or a semantic problem, but that is not the subject of this issue. [4] This issue does not motivate support of a particular architecture, though there has been community interest in CHERI. [5] This is relevant when finding out the size of an object, for example. While generating instructions to extend or truncate the integers is possible, this seems a silly use of cycles at compile time (and possibly runtime). [6] My experience is limited to rustc (c. 1.35 nightly), libcompiler_builtins, libcore, and liballoc. Some modification was needed to make this work, but no egregious violations. [7] See [CHERI Concentrate](https://www.cl.cam.ac.uk/research/security/ctsrd/pdfs/2019tc-cheri-concentrate.pdf) for an overview of the considerations. [8] In particular I'm not asking for help in porting Rust to CHERI, or any other platform. However, _I would like support for other architectures to be technically possible_. _(edits because I accidentally posted early)_
A-type-system,T-lang,C-feature-request,needs-rfc,T-types
medium
Critical
507,914,316
pytorch
[discussion] Smarter version of torch.reshape (can avoid realloc in some cases)
Imagine the following situation: ```python a = torch.rand(2,3,4) a_ = a.transpose(-1, -2) b_ = someltwiseinplacefunc(a_.reshape(2, -1)) # reshape seems to reallocate (checked by data_ptr), view will error out b = b_.view_as(a) # or b_.view_as(a_) (currently view_as copies dimension order, disregarding the strides, though) ``` Currently `a_.reshape(2, -1)` seems to reallocate, but that's not necessary for all operations, especially elementwise ones (given that one will frequently reshape the result back). These things happen in C++ sometimes. I guess currently the solution is manual handling of strides. If it doesn't reallocate, the semantics seem different, but it may still be worthwhile to allow flattening of contiguous chunks of memory without reallocation, ignoring the striding dimension order.
triaged,function request,module: viewing and reshaping
low
Critical
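A small probe of the behaviour discussed above, checking `data_ptr()` to see when `reshape` keeps the original storage and when it reallocates:

```python
import torch

a = torch.rand(2, 3, 4)

c = a.reshape(2, -1)                    # contiguous input: reshape returns a view
print(a.data_ptr() == c.data_ptr())     # True, same storage

a_t = a.transpose(-1, -2)               # non-contiguous view of a
b = a_t.reshape(2, -1)                  # here reshape has to materialise a copy
print(a_t.data_ptr() == b.data_ptr())   # False, new storage was allocated
```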
507,941,340
opencv
Inconsistent behaviour in cvtColor
When converting from RGB to Lab, cvtColor will apply a gamma curve to the input data before doing the conversion, which implicitly means that the conversion function expects the input to be in a non-linear space. When converting from RGB to XYZ however, no linearization step is performed which makes the use of cvtColor inconsistent.
category: imgproc,RFC
low
Minor
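A small probe of the inconsistency described above, assuming OpenCV's Python bindings (`cv2`) with float32 RGB input in [0, 1]; per the report, RGB to Lab handles the sRGB gamma internally (it expects non-linear input) while RGB to XYZ applies only the linear matrix transform:

```python
import cv2
import numpy as np

rgb = np.random.rand(8, 8, 3).astype(np.float32)

lab = cv2.cvtColor(rgb, cv2.COLOR_RGB2Lab)  # treats input as gamma-encoded sRGB
xyz = cv2.cvtColor(rgb, cv2.COLOR_RGB2XYZ)  # treats input as already linear

print(lab.shape, xyz.shape)
```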
507,942,784
pytorch
Move options logic in _like functions from parsing layer to implementations
We have fairly complex parsing logic for _like functions (e.g. empty_like) for dealing with dtypes, etc. This was necessary when we didn't have optional TensorOptions parameters, because we had no way of distinguishing "default dtype" from "the dtype wasn't specified" once we got past the parser level. But this isn't true anymore. So we should just write the logic we want in the ATen function. I also don't know if JIT handles these things correctly, but it either doesn't or has similar complex logic for working around this issue.
triaged
low
Minor
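For context on the issue above, a minimal illustration of the distinction the parsing layer currently has to encode, i.e. "dtype not specified" (inherit from the source tensor) versus an explicitly passed dtype:

```python
import torch

x = torch.zeros(3, dtype=torch.int32)

y = torch.empty_like(x)                       # dtype not specified: inherits int32 from x
z = torch.empty_like(x, dtype=torch.float64)  # dtype given explicitly: overrides x's dtype

print(y.dtype, z.dtype)  # torch.int32 torch.float64
```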
507,947,384
godot
Godot crashes when using CurveTexture::set_curve
**Godot version:** 3.2.alpha.custom_build. 1fed266bf **OS/device including version:** Ubuntu 19.04 **Issue description:** The minimal project is very, very big because Godot crashes very rarely, sometimes after 5s but sometimes doesn't crash after 10 min. Backtrace: ``` core/vmap.h:86:21: runtime error: member access within null pointer of type 'const struct Pair' core/vmap.h:86:21: runtime error: member call on null pointer of type 'const struct Target' core/vmap.h:86:17: runtime error: member access within null pointer of type 'const struct Pair' core/object.h:449:74: runtime error: member access within null pointer of type 'const struct Target' handle_crash: Program crashed with signal 11 Dumping the backtrace. Please include this when reporting the bug on https://github.com/godotengine/godot/issues [1] godots() [0x140a2b0] (/mnt/KubuntuWolne/godot/platform/x11/crash_handler_x11.cpp:54) [2] /lib/x86_64-linux-gnu/libc.so.6(+0x43f60) [0x7f1616784f60] (??:0) [3] Object::Signal::Target::operator<(Object::Signal::Target const&) const (/mnt/KubuntuWolne/godot/core/object.h:449) [4] VMap<Object::Signal::Target, Object::Signal::Slot>::_find(Object::Signal::Target const&, bool&) const (/mnt/KubuntuWolne/godot/./core/vmap.h:86) [5] VMap<Object::Signal::Target, Object::Signal::Slot>::insert(Object::Signal::Target const&, Object::Signal::Slot const&) (/mnt/KubuntuWolne/godot/./core/vmap.h:120) [6] VMap<Object::Signal::Target, Object::Signal::Slot>::operator[](Object::Signal::Target const&) (/mnt/KubuntuWolne/godot/./core/vmap.h:200) [7] Object::connect(StringName const&, Object*, StringName const&, Vector<Variant> const&, unsigned int) (/mnt/KubuntuWolne/godot/core/object.cpp:1483) [8] CurveTexture::set_curve(Ref<Curve>) (/mnt/KubuntuWolne/godot/scene/resources/texture.cpp:1731 (discriminator 4)) [9] MethodBind1<Ref<Curve> >::call(Object*, Variant const**, int, Variant::CallError&) (/mnt/KubuntuWolne/godot/./core/method_bind.gen.inc:775 (discriminator 12)) [10] Object::call(StringName const&, Variant const**, int, Variant::CallError&) (/mnt/KubuntuWolne/godot/core/object.cpp:921 (discriminator 1)) [11] Variant::call_ptr(StringName const&, Variant const**, int, Variant*, Variant::CallError&) (/mnt/KubuntuWolne/godot/core/variant_call.cpp:1096 (discriminator 1)) [12] GDScriptFunction::call(GDScriptInstance*, Variant const**, int, Variant::CallError&, GDScriptFunction::CallState*) (/mnt/KubuntuWolne/godot/modules/gdscript/gdscript_function.cpp:1085) [13] GDScriptInstance::call_multilevel(StringName const&, Variant const**, int) (/mnt/KubuntuWolne/godot/modules/gdscript/gdscript.cpp:1180) [14] Node::_notification(int) (/mnt/KubuntuWolne/godot/scene/main/node.cpp:58) [15] Node::_notificationv(int, bool) (/mnt/KubuntuWolne/godot/./scene/main/node.h:46 (discriminator 14)) [16] CanvasItem::_notificationv(int, bool) (/mnt/KubuntuWolne/godot/./scene/2d/canvas_item.h:166 (discriminator 3)) [17] Node2D::_notificationv(int, bool) (/mnt/KubuntuWolne/godot/./scene/2d/node_2d.h:38 (discriminator 3)) [18] Object::notification(int, bool) (/mnt/KubuntuWolne/godot/core/object.cpp:933) [19] SceneTree::_notify_group_pause(StringName const&, int) (/mnt/KubuntuWolne/godot/scene/main/scene_tree.cpp:963) [20] SceneTree::idle(float) (/mnt/KubuntuWolne/godot/scene/main/scene_tree.cpp:515 (discriminator 3)) [21] Main::iteration() (/mnt/KubuntuWolne/godot/main/main.cpp:1976) [22] OS_X11::run() (/mnt/KubuntuWolne/godot/platform/x11/os_x11.cpp:3192) [23] godots(main+0x342) [0x1401ba4] 
(/mnt/KubuntuWolne/godot/platform/x11/godot_x11.cpp:57) [24] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xeb) [0x7f1616767b6b] (??:0) [25] godots(_start+0x2a) [0x14017aa] (??:?) ``` **Steps to reproduce:** Read project manual - https://github.com/qarmin/The-worst-Godot-test-project#the-worst-godot-test-project Run project **Minimal reproduction project:** [The-worst-Godot-test-project.zip](https://github.com/qarmin/The-worst-Godot-test-project/archive/8f27b1de0293fd91169eb5e39a3cd032882ce0d2.zip)
bug,topic:core,crash
low
Critical
507,967,613
TypeScript
`static abstract` methods and properties
This is a continuation of #14600 which had two separate features proposed in the same issue (static members in interfaces *and* abstract static class members) ## Search Terms static abstract method property properties implement concrete ## Suggestion Currently, this code is illegal: ```ts abstract class A { static abstract doSomething(): void; } // Should be OK class B extends A { static doSomething() { } } // Should be an error; non-abstract class failed to implement abstract member class C extends A { } ``` It should be legal to have `abstract static` (`static abstract`?) members. ## Use Cases (what are they?) ## Unresolved Questions **What calls of abstract static methods are allowed?** Let's say you wrote a trivial hierarchy ```ts abstract class A { static abstract doSomething(): void; } class B extends A { static doSomething() { } } ``` For an expression `x.doSomething()`, what are valid `x`s? #### Option 1: All of them Because `this` isn't generic in `static` members, we should simply allow all invocations of abstract static methods. Otherwise code like this would be illegal: ```ts abstract class A { static abstract initialize(self: A): void; static createInstance() { const a = new this(); this.initialize(a); return a; } } ``` However, this means that TypeScript would miss straight-up crashes: ```ts // Exception: 'this.initialize' is not a function A.createInstance(); ``` * **Pros**: Ergonomic * **Cons**: Literally allows the runtime-crashing code `A.doSomething()`, which seems like a fairly large design deficit #### Option 2: None of them Allowing crashes is bad, so the rule should be that `static abstract` methods simply don't exist from a type system perspective except to the extent that they enforce concrete derived class constraints: ```ts abstract class A { static abstract doSomething(): void; } class B extends A { static doSomething() { } } // Error, can't call abstract method A.doSomething(); // This call would work, but it'd still be an error const Actor: typeof A = B; Actor.doSomething(); function indirect(a: { doSomething(): void }) { a.doSomething(); } // Error, can't use abstract method 'doSomething' to satisfy concrete property indirect(A); // OK indirect(B); ``` This is unergonomic because it'd be impossible to write a function that dealt with an arbitrary complex constructor function without tedious rewriting: ```ts abstract class Complicated { static abstract setup(): void; static abstract print(): void; static abstract ship(): void; static abstract shutdown(): void; } function fn(x: typeof Complicated) { // Error, can't call abstract method x.setup(); // Error, can't call abstract method x.print(); // Error, can't call abstract method x.ship(); // Error, can't call abstract method x.shutdown(); } ``` We know this is a problem because people get tripped up by it constantly when they try to `new` an abstract class: https://www.reddit.com/r/typescript/comments/bcyt07/dynamically_creating_instance_of_subclass/ https://stackoverflow.com/questions/57402745/create-instance-inside-abstract-class-of-child-using-this https://stackoverflow.com/questions/49809191/an-example-of-using-a-reference-to-an-abstract-type-in-typescript https://stackoverflow.com/questions/53540944/t-extends-abstract-class-constructor https://stackoverflow.com/questions/52358162/typescript-instance-of-an-abstract-class https://stackoverflow.com/questions/53692161/dependency-injection-of-abstract-class-in-typescript https://stackoverflow.com/questions/52358162/typescript-instance-of-an-abstract-class For 
`abstract` constructor signatures, the recommended fix of using `{ new(args): T }` is pretty good because a) you need to be explicit about what arguments you're actually going to provide anyway and b) there's almost always exactly one signature you care about, but for `static abstract` methods/properties this is much more problematic because there could be *any* number of them. This also would make it impossible for concrete `static` methods to invoke `abstract static` methods: ```ts abstract class A { static abstract initialize(self: A): void; static createInstance() { const a = new this(); // Error this.initialize(a); return a; } } ``` On the one hand, this is *good*, because `A.createInstance()` definitely *does* crash. On the other hand, this literally the exact kind of code you *want* to write with abstract methods. One solution would be the existence of an `abstract static` method *with a body*, which would be allowed to invoke other `abstract static` methods but would be subject to invocation restrictions but *not* require a derived class implementation. This is also confusing because it would seem like this is just a "default implementation" that would still require overriding (that is the bare meaning of `abstract`, after all): ```ts abstract class A { abstract static initialize() { console.log("Super class init done; now do yours"); } } // No error for failing to provide `static initialize() {`, WAT? class B extends A { } ``` An alternative would be to say that you can't call *any* `static` method on an `abstract` class, even though that would ban trivially-OK code for seemingly no reason: ```ts abstract class A { static foo() { console.log("Everything is fine"); } } // Can't invoke, WAT? A.foo(); ``` * **Pros**: Correctly prevents all crashes * **Cons**: Extremely unergonomic at use cases; effectively bans concrete `static` methods from calling same-class `abstract` methods #### Option 3: Indirection is sufficient Why not just split the baby and say that the *direct* form `A.doSomething()` is illegal, but `expr.doSomething()` where `expr` is of type `typeof A` is OK as long as `expr` isn't *exactly* `A`. This creates the *dread inconsistency* that a trivial indirection is sufficient to defeat the type system and cause a crash: ```ts // Error; crash prevented! A.doSomething(); const p = A; // OK, crashes, WAT? p.doSomething(); ``` It's also not entirely clear what "indirection" means. Technically if you write ```ts import { SomeStaticAbstractClass as foo } from "./otherModule"; foo.someAbstractMethod(); ``` then `foo` isn't *exactly* the declaration of SomeStaticAbstractClass itself - it's an alias. But there isn't really anything distinguishing that from `const p = A` above. * **Pros**: Catches "bad by inspection" instances while still allowing "maybe it works" code * **Cons**: Extremely inconsistent; simply appears to function as if TypeScript has a bug in it. Unclear what sufficient indirection means in cases of e.g. module imports #### Option 4: Indirection, but with generics Maybe a trivial indirection as described in Option 3 isn't "good enough" and we should require you to use a constrained generic instead: ```ts // Seems like you're maybe OK function fn<T extends typeof A>(x: T) { x.doSomething(); } // Good, OK fn(B); // A fulfills typeof A, fair enough, crashes, WAT? 
fn(A); ``` This turns out to be a bad option because many subclasses don't actually meet their base class static constraints due to constructor function arity differences: ```ts abstract class A { constructor() { } foo() { } } class B extends A { constructor(n: number) { super(); } bar() { } } function fn<T extends typeof A>(ctor: T) { // Want to use static methods of 'ctor' here } // Error, B's constructor has too many args fn(B); ``` This isn't even code we *want* people to write -- a generic type parameter used in exactly one position is something we explicitly discourage because it doesn't "do anything". * **Pros**: Maybe a slightly better variant of option 3 * **Cons**: Just a more complicated system with the same failure modes #### Option 5: Something else? Anyone able to square this circle? ## Checklist My suggestion meets these guidelines: * [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code * [x] This wouldn't change the runtime behavior of existing JavaScript code * [x] This could be implemented without emitting different JS based on the types of the expressions * [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.) * [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
Suggestion,Needs Proposal
high
Critical
507,971,517
pytorch
Stable docs show dead __config__ section link
## πŸ“š Documentation Hey folks, there is a `__config__` section showing up in the docs which leads to a 404 page; see - https://pytorch.org/docs/stable/index.html - https://pytorch.org/docs/stable/__config__.html See screenshot below ![torch-config](https://user-images.githubusercontent.com/527241/66940551-6b975400-f045-11e9-9bb4-555399ab72fe.png) Should probably be removed or the linked page fixed; thanks!
module: docs,triaged
low
Minor
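For reference, the dead link corresponds to a module that does exist at runtime; a quick way to see what that page is meant to document, assuming the `torch.__config__` helpers present in recent releases:

```python
import torch

print(torch.__config__.show())           # compiler and library build configuration
print(torch.__config__.parallel_info())  # threading backend details
```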
508,028,059
pytorch
Provide function similar to cdist that returns dist_p^p
## πŸš€ Feature Provide a function similar to cdist (or add an option to cdist) that would compute \sum_k (x_ik - x_jk)^p instead of (\sum_k (x_ik - x_jk)^p)^(1/p) ## Motivation The function proposed above is differentiable at zero, while norm is not. Requested in #25799 ## Alternatives Raising existing cdist output to power p does not solve the problem of differentiability at 0 cc @ezyang @balandat, @ifedan
triaged,function request,module: distance functions
low
Major
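Until such an option exists, a hedged workaround sketch: compute \sum_k |x_ik - x_jk|^p directly via broadcasting, skipping the final 1/p root that makes the norm non-differentiable at zero distance. The helper name is hypothetical.

```python
import torch

def cdist_pow_p(x, y, p=2.0):
    """Pairwise sum_k |x_ik - y_jk|^p for x of shape (n, d) and y of shape (m, d)."""
    diff = x.unsqueeze(1) - y.unsqueeze(0)  # (n, m, d) broadcasted differences
    return diff.abs().pow(p).sum(dim=-1)    # (n, m), no final 1/p root

x = torch.randn(5, 3, requires_grad=True)
y = x.detach().clone()                      # guarantees some exactly-zero distances
cdist_pow_p(x, y).sum().backward()          # gradients stay finite at zero distance
print(x.grad)
```

Note this materialises an (n, m, d) intermediate, so it trades memory for the differentiability that raising `cdist`'s output to the power p cannot recover.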
508,049,644
rust
Extend overlapping ranges lint to cover cases with more than a single element overlapping
#64007 introduces an overlapping ranges lint that triggers only if the beginning or the end overlaps with another arm's pattern, like in `0..10`/`9..20`. [There should be a conversation](https://github.com/rust-lang/rust/pull/64007#issuecomment-542628737) on whether it should also trigger when the overlap is beyond that, like in `0..10`/`5..15`.
A-lints,I-needs-decision,T-lang,A-exhaustiveness-checking
low
Major
508,087,853
flutter
How can I set a value on my `CupertinoDatePicker`?
I use a CupertinoDatePicker in a form. My form has a "Reset all" button, but I didn't find a way to reset the value in the picker.
c: new feature,framework,f: date/time picker,f: cupertino,c: proposal,P3,team-design,triaged-design
low
Major
508,109,579
pytorch
Named Tensor: Support -1 in `unflatten`.
It would be convenient to write `tensor.unflatten("h", (("height", 8), ("q", -1))).shape`, similar to `view`. cc @zou3519
triaged,module: named tensor
low
Minor
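A hedged workaround sketch in the meantime, using the same named-tensor `unflatten` form as above but computing the inferred size by hand (the dimension names are taken from the issue):

```python
import torch

tensor = torch.rand(2, 32, names=("n", "h"))

# Desired: tensor.unflatten("h", (("height", 8), ("q", -1)))
# Manual equivalent until -1 is supported: infer the remaining size explicitly.
q = tensor.size("h") // 8
out = tensor.unflatten("h", (("height", 8), ("q", q)))
print(out.names, out.shape)  # ('n', 'height', 'q') torch.Size([2, 8, 4])
```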
508,111,509
pytorch
Named tensor: align_to align_as error messages
For new users `align_as` and `align_to` seem synonymous. If you write `align_to(tensor)` it should either fail nicely or call align_as. Currently you get a very scary error message (if you don't know what ellipses are). ``` /usr/local/lib/python3.6/dist-packages/torch/_namedtensor_internals.py in is_ellipsis(item) 53 return item == '...' 54 else: ---> 55 return item == Ellipsis or item == '...' 56 57 TypeError: eq() received an invalid combination of arguments - got (ellipsis), but expected one of: * (Tensor other) didn't match because some of the arguments have invalid types: (!ellipsis!) * (Number other) didn't match because some of the arguments have invalid types: (!ellipsis!) ``` cc @zou3519
triaged,module: named tensor
low
Critical
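For context, a minimal sketch of the distinction that trips new users up, assuming the named-tensor API as of 1.3: `align_as` takes another tensor, `align_to` takes dimension names (or `...`), and passing a tensor to `align_to` is what lands in the ellipsis check quoted in the traceback above:

```python
import torch

a = torch.rand(2, 3, names=("n", "c"))
b = torch.rand(3, 2, names=("c", "n"))

print(a.align_as(b).names)         # takes a tensor -> ('c', 'n')
print(a.align_to("c", "n").names)  # takes names (or '...') -> ('c', 'n')
# a.align_to(b) is the confusing case: the tensor reaches is_ellipsis() and
# triggers the eq()/ellipsis TypeError instead of a clear error message.
```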
508,135,757
flutter
[flutter_android_lifecycle] Throw when obtaining lifecycle from a too old engine
`FlutterLifecycleAdapter.getLifecycle` should throw if running with an engine version that doesn't support the hidden lifecycle API. As that API isn't available on stable, we're temporarily not throwing so that the example app works on stable. After the next stable release we should be able to make `getLifecycle` throw.
platform-android,package,P2,c: tech-debt,p: flutter_plugin_android_lifecycle,team-android,triaged-android
low
Minor
508,143,843
godot
File.open_encrypted_with_pass() returns ERR_FILE_UNRECOGNIZED
**Godot version:** 3.1.1.stable-official **OS/device including version:** Windows 7 **Issue description:** If you try to read or write a file using `File.open_encrypted_with_pass()` it returns `ERR_FILE_UNRECOGNIZED` and doesn't say much more. Windows cmd displays: ``` ERROR: open_and_parse: Condition ` magic != 0x43454447 ` is true. returned: ERR_FILE_UNRECOGNIZED At: core/io/file_access_encrypted.cpp:65 ``` **Steps to reproduce:** ``` var file : File = File.new() var open_result = file.open_encrypted_with_pass("user://user.json", File.WRITE_READ, OS.get_unique_id()) print("Open result ", open_result) ```
bug,topic:core,confirmed
low
Critical
508,151,837
godot
Audio clips when changing volume of audio bus
**Godot version:** 3.1.1 stable **OS:** Windows 10 Pro version 1903 build 18362.418 **Device:** ASUS ROG Zephyrus S GX502GW (using both integrated laptop speaker and OnePlus USB-C bullet earbuds) **Issue description:** Changing the volume of an audio bus using `AudioServer.set_bus_volume_db()` or through the visual interface (Audio bottom tab) produces very audible clipping noises. Comparatively, changing the volume_db property on an AudioStreamPlayer produces virtually zero clipping. **Steps to reproduce:** Create a scene with an ASP and set it to the master channel. Load some music file on it, set `Playing` to true and play with the volume property knob. Notice virtually no sound clipping. Now open the audio tab on the bottom and play the same sound file. Try changing the volume of the master bus with the vertical bar. Sound will clip a lot as you change the volume. *Bugsquad edit (keywords for easier searching): crack, cracking*
bug,topic:editor,confirmed,topic:audio
low
Critical
508,180,795
pytorch
[JIT] Allow user to provide aliasing information on input tensors and model parameters
## πŸš€ Feature <!-- A clear and concise description of the feature proposal --> We need to create API that allows user to provide aliasing information on input tensors and model parameters. Graph mode does not own parameters and takes them as input through `prim::GetAttr` and `ClassType<xxx>`. User has full visibility of aliasing information on direct tensor inputs and model parameters. It is possible for user to provide aliasing information when scripting their model. In default configuration where there's no user-feed aliasing information, `AliasDb` could be conservative and preserve current assumption that all inputs could be aliasing each other. The proposed feature would not change existing behavior. ## Motivation <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too --> It is only rare case where inputs and parameters could be aliasing each other. In such cases the alias relationships should be well-aware to the module creator/user. Hence it is reasonable to expect the module creator/user to provide such information to graph compiler. Aliasing information on input tensor and model parameters is beneficial to graph optimization. It allows identify aliases on inputs, which are not visible from within the graph. E.g. it could remove many wildcard (`AliasDb` concept of could aliasing anything) in our current `AliasDb` resulted from `prim::GetAttr`, when related alias information is provided for retrieved attributes. As long as we support in-place operator in JIT, we'll always have input-output tensor (e.g. tensors feed to in-place operator), which would prevent any re-ordering or nodes across it if it is a wildcard. ## Pitch <!-- A clear and concise description of what you want to happen. --> Given a `torch.nn.Module` defined as: ```python class MyModule(torch.nn.Module): def __init__(self, inp_c, out_c, inter_c): super(MyModule, self).__init__() self.fc = torch.nn.modules.Linear(inp_c, int_c) self.fc_dup = torch.nn.modules.Linear(int_c, out_c) self.fc_dup.weight = self.fc.weight # exposed function, aliasing info expected from user @torch.jit.export def exposed_func(self, tensor1 : torch.Tensor, tensor2 : torch.Tensor) -> torch.Tensor: tensor1 += 1e-5 return tensor2 / tensor1 # aliasing info expected from user def forward(self, inp : torch.Tensor) -> torch.Tensor: inp = self.fc(inp) inp = self.fc_dup(inp) out = inp.relu_() # This call instance of `exposed_func` is within the graph, # aliasing info on inputs should be deduced through analysis # instead of passed in by user. return self.exposed_func(inp, out) ``` With current scripting API, we could add another field `alias_info` in `torch.jit.script` function to receive aliasing information. ``` script_model = torch.jit.script(MyModule, alias_info = alias_info) ``` where `alias_info` could be ``` Dict{ 'forward' : [['0.fc.weight', '0.fc_dup.weight'], ], 'exposed_func' : [[]], } ``` Note: 1. For each entry in the dictionary: key matches the compiled method, to which the aliasing information is provided; values are a list of ValueSet, which each is a group of values that are aliases to each other. 1.1 We refer to inputs directly using their index; 1.2 We use scoped name to refer to parameters and parameters in nested `torch.nn.Modules`. The example here uses `0.fc.weight`, which is roughly `input_0 (MyModule) -> attribute["fc"] (Linear) -> attribute["weight"] (Parameter)` 2. 
We have provided `alias_info` for `exposed_func`. This is only used when we compile `exposed_func` directly, which is exposed through annotation `torch.jit.export`. When `exposed_func` is compiled recursively (nested inside `forward`), aliasing information should be deduced by alias analysis in the graph instead of passed in by the user. 3. We provide `alias_info` for both exposed function `forward` and `exposed_func`. If we have a third exposed function here, for which there's no provided alias_info, `AliasDb` could keep the old assumption that all inputs could be aliasing each other. cc @suo
triage review,oncall: jit,triaged
low
Minor
508,182,990
flutter
[Feature Request] SnackBar offset option
<!-- Thank you for using Flutter! If you are looking for support, please check out our documentation or consider asking a question on Stack Overflow: * https://flutter.dev/ * https://api.flutter.dev/ * https://stackoverflow.com/questions/tagged/flutter?sort=frequent If you have found a bug or if our documentation doesn't have an answer to what you're looking for, then fill our the template below. Please read our guide to filing a bug first: https://flutter.dev/docs/resources/bug-reports --> ## Use case Changing SnackBar offset <!-- Please tell us the problem you are running into that led to you wanting a new feature. Is your feature request related to a problem? Please give a clear and concise description of what the problem is. Describe alternative solutions you've considered. Is there a package on pub.dev/flutter that already solves this? --> ## Proposal I read `snack_bar.dart`, I want to remove it. `padding: const EdgeInsets.fromLTRB(15.0, 5.0, 15.0, 10.0),` some people don't want the `padding` <!-- Briefly but precisely describe what you would like Flutter to be able to do. Consider attaching images showing what you are imagining. Does this have to be provided by Flutter directly, or can it be provided by a package on pub.dev/flutter? If so, maybe consider implementing and publishing such a package rather than filing a bug. -->
c: new feature,framework,f: material design,P3,team-design,triaged-design
low
Critical
508,185,552
pytorch
New Stochastic Optimization Algorithms in Pytorch
## 🚀 Feature I would like to suggest new stochastic optimizer additions to PyTorch. ### For non-convex loss functions It is known that adaptive stochastic optimizers like Adam, Adagrad and RMSprop can fail to converge. Moreover, the test error of the trained models can be larger than that of models trained by SGD, even if they attain lower training error than with SGD - in other words, they can overfit. See [[1](http://papers.nips.cc/paper/7003-the-marginal-value-of-adaptive-gradient-methods-in-machine-learning.pdf)]. Recently, AdaBound was proposed [[2](https://openreview.net/pdf?id=Bkg3g2R9FX)] to overcome the above-mentioned problems of the adaptive optimizers. The algorithm combines the best of both worlds: (a) it makes fast progress initially like the adaptive methods, and (b) it attains similar or better test accuracy than SGD. In terms of code size and complexity, it is close to Adam's. ### For convex loss functions While PyTorch is generally used for deep learning purposes, I see it as a tensor-based Machine Learning (ML) library with automatic differentiation that can run on both CPU and GPU and allows easy implementation of both convex and non-convex ML models. As such, I think an ML engineer working on convex ML models would be interested to see faster state-of-the-art stochastic optimizers implemented in PyTorch, such as the variance-reduced optimizers. Among them, SVRG [[3](https://papers.nips.cc/paper/4937-accelerating-stochastic-gradient-descent-using-predictive-variance-reduction.pdf)] is the most widely known. In terms of code size and complexity, it is somewhat close to SGD's, with a few modifications (see the sketch below). ### Alternatives I recommended SVRG since it is the most popular variance-reduced algorithm. There are more complex variance-reduced algorithms that are faster than SVRG, and if they are of interest I can suggest some. Also, I would like to know if there is interest in batch optimizers other than LBFGS, which is already in PyTorch. For large-scale ML models, there are cases where LBFGS requires more memory than the system can offer. An alternative would be to try out accelerated gradient descent, which consumes the same amount of memory as batch gradient descent while performing significantly faster. Accelerated gradient descent's code size and complexity are close to batch gradient descent's, and as such it is much easier to maintain than LBFGS. Lastly, if there is interest in other specific optimizers, I would be interested to hear about them. I can try implementing them. ## References [1] http://papers.nips.cc/paper/7003-the-marginal-value-of-adaptive-gradient-methods-in-machine-learning.pdf [2] https://openreview.net/pdf?id=Bkg3g2R9FX [3] https://papers.nips.cc/paper/4937-accelerating-stochastic-gradient-descent-using-predictive-variance-reduction.pdf cc @vincentqb
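To make the SVRG suggestion concrete, here is a minimal sketch of the update rule from [3] on a toy least-squares problem. All names (`w_snap`, `full_grad`, the problem setup, the step size) are illustrative only and not a proposed PyTorch API; a real implementation would live in a `torch.optim.Optimizer` subclass.

```python
import torch

torch.manual_seed(0)
n, d = 256, 10
X = torch.randn(n, d)
w_true = torch.randn(d)
y = X @ w_true + 0.01 * torch.randn(n)

def grad_i(w, i):
    # Gradient of the single-sample loss 0.5 * (x_i @ w - y_i)^2.
    return (X[i] @ w - y[i]) * X[i]

w = torch.zeros(d)
lr = 0.02
for epoch in range(20):
    w_snap = w.clone()
    full_grad = X.t() @ (X @ w_snap - y) / n      # full gradient at the snapshot
    for _ in range(n):
        i = int(torch.randint(n, (1,)))
        # Variance-reduced stochastic gradient: the core of SVRG.
        g = grad_i(w, i) - grad_i(w_snap, i) + full_grad
        w = w - lr * g

print((w - w_true).norm())  # should approach 0
```

AdaBound's update is similarly small: it is essentially Adam with the per-parameter step size clipped into a band that shrinks towards a final learning rate, which is why its code size is described above as close to Adam's.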
feature,module: optimizer,triaged
low
Critical
508,194,220
go
net/http: performance collapse when http/2 requests wait for connection
### Does this issue reproduce with the latest release? / What version of Go are you using (`go version`)? The code that causes this behavior is present in Go 1.13 and in tip, but #34941 shadows the bug in releases newer than Go 1.11β€”so for today, I'll demonstrate it with go1.11.13. <pre> $ go1.11 version go version go1.11.13 darwin/amd64 </pre> ### What operating system and processor architecture are you using (`go env`)? <details><summary><code>go env</code> Output</summary><br><pre> $ go1.11 env GOARCH="amd64" GOBIN="" GOCACHE="/Users/rhys/Library/Caches/go-build" GOEXE="" GOFLAGS="" GOHOSTARCH="amd64" GOHOSTOS="darwin" GOOS="darwin" GOPATH="/Users/rhys/go" GOPROXY="" GORACE="" GOROOT="/Users/rhys/go/version/go1.11" GOTMPDIR="" GOTOOLDIR="/Users/rhys/go/version/go1.11/pkg/tool/darwin_amd64" GCCGO="gccgo" CC="clang" CXX="clang++" CGO_ENABLED="1" GOMOD="" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/49/zmds5zsn75z1283vtzxyfr5hj7yjq4/T/go-build546431991=/tmp/go-build -gno-record-gcc-switches -fno-common" </pre></details> ### What did you do? In the test, I set up an http/2 server/client pair, limited the number of TCP connections that could carry HTTP requests and started a large number of HTTP requests. I then measured how long it took to create and cancel one additional request. ### What did you expect to see? I expected the speed of creating and canceling additional requests to not vary based on the number of other HTTP requests waiting on the TCP connection. ### What did you see instead? Creating and canceling an HTTP request gets slow as a function of how many requests are waiting for the same TCP connection when HTTP/2 is active. The code in `net/http/h2_bundle.go` that bridges between the channel for canceling a single request and the `sync.Cond` that guards the TCP connection ([`net/http.http2ClientConn.awaitOpenSlotForRequest`](https://github.com/golang/go/blob/go1.13.1/src/net/http/h2_bundle.go#L7648-L7661)) responds to request cancelation by waking up every goroutine that's waiting to use the TCP connection (with `sync.Cond.Broadcast`). Each of those goroutines in sequence will acquire the lock on the `*http2ClientConn` to check if there's room to send another request. On top of that, the contention on the `sync.Mutex` protecting a single connection results in a slowdown on the `sync.Mutex` protecting the Transport's HTTP/2 connection pool when `*http2clientConnPool.getClientConn` calls `cc.idleState` while holding the pool's lock. --- In the reproducer, the baseline speed of creating and canceling an HTTP/2 request is 1ms since the test waits that long before canceling to give the RoundTrip goroutine time to find the TCP connection in the pool and start waiting for an available slot. When there are a small number of outstanding requests (100 or 200), creating and canceling an additional request takes about 1.3ms: that baseline of 1ms plus 300Β΅s of actual time. As the number of outstanding requests grows past the capacity of the single TCP connection (the default Go http/2 server sets that to 250 HTTP requests), creating and canceling a request wakes up more and more goroutines. The cost of this is still small with 1600 idle requests (1.1ms over the 1ms baseline), but with 6400 idle requests it's grown to 5.9ms over the 1ms baseline. 
With 100k idle requests, the cost to cancel one is nearly one second of work. With N idle requests that all time out / are canceled, the total cost is O(N^2). The cost should be O(N). ``` name time/op QueuedRequests/idle=100-8 1.35ms Β± 4% QueuedRequests/idle=200-8 1.32ms Β± 6% QueuedRequests/idle=400-8 1.41ms Β± 5% QueuedRequests/idle=800-8 1.62ms Β±12% QueuedRequests/idle=1600-8 2.08ms Β±15% QueuedRequests/idle=3200-8 2.92ms Β±43% QueuedRequests/idle=6400-8 6.88ms Β±61% QueuedRequests/idle=12800-8 13.4ms Β±19% QueuedRequests/idle=25600-8 31.1ms Β±18% QueuedRequests/idle=51200-8 87.6ms Β±16% QueuedRequests/idle=102400-8 764ms Β±71% ``` ``` package repro import ( "context" "crypto/tls" "fmt" "io/ioutil" "log" "net/http" "net/http/httptest" "sync" "sync/atomic" "testing" "time" ) func withHTTP2Server(h http.Handler, do func(s *httptest.Server)) { s := httptest.NewUnstartedServer(h) s.TLS = &tls.Config{ NextProtos: []string{"h2"}, } s.Config.ErrorLog = log.New(ioutil.Discard, "", 0) // swallow the "bad certificate" log lines s.StartTLS() defer s.Close() transport := s.Client().Transport.(*http.Transport) clientConfig := transport.TLSClientConfig transport.TLSClientConfig = nil // make a request to trigger HTTP/2 autoconfiguration resp, err := s.Client().Get(s.URL) if err == nil { resp.Body.Close() } // now allow the client to connect to the ad-hoc test server transport.TLSClientConfig.RootCAs = clientConfig.RootCAs do(s) } func BenchmarkQueuedRequests(b *testing.B) { testcase := func(idleRequestCount int) func(b *testing.B) { return func(b *testing.B) { allow := make(chan struct{}) var starts int64 h := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { if !r.ProtoAtLeast(2, 0) { b.Errorf("Request is not http/2: %q", r.Proto) return } atomic.AddInt64(&starts, 1) <-allow }) withHTTP2Server(h, func(s *httptest.Server) { // We're looking at how http/2 requests get queued on a single // connection. Force use of a single connection so we can examine // that queuing. s.Client().Transport.(*http.Transport).MaxConnsPerHost = 1 ctx := context.Background() ctx, cancel := context.WithCancel(ctx) defer cancel() // Set up some (variable) number of idle/outstanding requests var wg sync.WaitGroup for i := 0; i < idleRequestCount; i++ { req, err := http.NewRequest("GET", s.URL, nil) if err != nil { b.Fatalf("NewRequest: %s", err) } wg.Add(1) go func() { defer wg.Done() ctx, cancel := context.WithCancel(ctx) defer cancel() req = req.WithContext(ctx) resp, err := s.Client().Do(req) if err != nil { return } resp.Body.Close() }() } // Allow requests to settle time.Sleep(time.Second) // Measure how quickly we can create and cancel a marginal request // on the contended http/2 connection. b.ResetTimer() for i := 0; i < b.N; i++ { req, err := http.NewRequest("GET", s.URL, nil) if err != nil { b.Fatalf("NewRequest: %s", err) } ctx, cancel := context.WithTimeout(ctx, time.Millisecond) req = req.WithContext(ctx) resp, err := s.Client().Do(req) if err == nil { resp.Body.Close() } cancel() } b.StopTimer() close(allow) cancel() wg.Wait() }) } } for idle := 100; idle < 120000; idle *= 2 { b.Run(fmt.Sprintf("idle=%d", idle), testcase(idle)) } } ```
Performance,NeedsInvestigation
low
Critical
508,197,297
TypeScript
Type discrimination in function calls for callable types doesn't work if discriminating property is optional
<!-- 🚨 STOP 🚨 𝗦𝗧𝗒𝗣 🚨 𝑺𝑻𝑢𝑷 🚨 Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Even if you think you've found a *bug*, please read the FAQ first, especially the Common "Bugs" That Aren't Bugs section! Please help us by doing the following steps before logging an issue: * Search: https://github.com/Microsoft/TypeScript/search?type=Issues * Read the FAQ: https://github.com/Microsoft/TypeScript/wiki/FAQ Please fill in the *entire* template below. --> <!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. --> **TypeScript Version:** 3.7.0-dev.20191015 <!-- Search terms you tried before logging this (so others can find this issue more easily) --> **Search Terms:** type discrimination, callable, signatures, implicit any **Code** ```ts function test( a: | { type?: 1; property: (a: number) => void } | { type: 2; property: (a: string) => void }, ) { return a.property } test({ property: (x) => { // complaining that x implicitly has type any console.log('hi') }, }) ``` **Expected behavior:** `x` should be inferred to be `number`. **Actual behavior:** `x` is `any`. If you change `type?: 1` to `type: 1`, it works as expected. **Playground Link:** [link](http://www.typescriptlang.org/play/index.html#code/GYVwdgxgLglg9mABFApgZygCgFCMQQwC5c9EAfRAb2QE8AHFAfkMQEYBuROgJzge6g0WmIojAgAtgCMU3AJSIAvAD5EANzgwAJogC+JPBWqCGLAEycefWYOGiM3GGADmCles07dAGmwLKJNwoUCDcSPgAdFb8gtj62NioGJgBeNE2QoiYAB5uqtQA9AWIEHASdAA2+E5OzsgAFvhQiNmIMOUVMBAwUBU0iI1otAwEYDQGJQhocBUoERVwzpgA5PUwy3IkPnFyQA) **Related Issues:** #7294 (closed with #29011 - see second half of the original post, I believe it describes why this issue occurs)
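Until the inference is fixed, one workaround (a sketch, reusing the `test` function from the repro) is to annotate the callback parameter explicitly so no contextual discrimination is needed:

```ts
function test(
  a:
    | { type?: 1; property: (a: number) => void }
    | { type: 2; property: (a: string) => void },
) {
  return a.property
}

// Explicitly annotating the parameter sidesteps the failed discrimination:
test({
  property: (x: number) => {
    console.log(x.toFixed(2))
  },
})
```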
Suggestion,Awaiting More Feedback
low
Critical
508,205,754
rust
Frequent use of line 0 in debug info reduces effectiveness of Cachegrind
I frequently use Cachegrind (and Callgrind) to profile rustc and other Rust programs. Cachegrind attributes instruction counts (and possibly cache misses and branch mispredictions) to specific lines of code, like this: ``` . /// Maps a string to its interned representation. 320,664 ( 0.27%) pub fn intern(string: &str) -> Self { 160,332 ( 0.14%) with_interner(|interner| interner.intern(string)) 267,220 ( 0.23%) } ``` But when profiling rustc with Cachegrind, lots of instruction counts don't get attribute to a particular line, so we end up with output like this: ``` 1,555,039 ( 1.33%) <counts for unidentified lines in /home/njn/.cargo/registry/ src/github.com-1ecc6299db9ec823/hashbrown-0.6.1/src/raw/mod.rs> ``` Often the fraction of executed instructions that don't get attributed to a particular line is 20%, 30%, or higher. That's a lot! The way Cachegrind works is that the first time an instruction, X, from the binary is executed, Cachegrind gets X's file name and line number from debug info, and X's function name from the symbol table, and creates a (address, filename, fn_name, line_num) cost centre. Every time X is executed the cost centre is incremented appropriately. The problem with the Rust code's debug info is that there are lots of instructions for which the debug info says the line number is 0. That means Cachegrind can't attribute a line, and all executions of such instructions count towards the "unidentified lines" entry. Notably, these instructions do have a valid filename. I grabbed some debugging output from Valgrind and matched it up with the binary code, as produced by `objdump -d`. What follows is for the function `syntax_pos::symbol::Interner::intern`. The machine code is on the left, and Valgrind's output for each instruction (its runtime address, filename, and line number) is on the right. 
``` 0000000001b742a0 <_ZN10syntax_pos6symbol8Interner6intern17h6c86044010aa5bfcE>: 1b742a0: push %rbp # 0x65CF2A0 libsyntax_pos/symbol.rs:976 1b742a1: push %r15 # 0x65CF2A1 libsyntax_pos/symbol.rs:976 1b742a3: push %r14 # 0x65CF2A3 libsyntax_pos/symbol.rs:976 1b742a5: push %r13 # 0x65CF2A5 libsyntax_pos/symbol.rs:976 1b742a7: push %r12 # 0x65CF2A7 libsyntax_pos/symbol.rs:976 1b742a9: push %rbx # 0x65CF2A9 libsyntax_pos/symbol.rs:976 1b742aa: sub $0x58,%rsp # 0x65CF2AA libsyntax_pos/symbol.rs:976 1b742ae: mov %rsi,%r13 # 0x65CF2AE libsyntax_pos/symbol.rs:976 1b742b1: movabs $0x517cc1b727220a95,%rax # 0x65CF2B1 libsyntax_pos/symbol.rs:976 1b742bb: cmp $0x8,%rdx # 0x65CF2BB rustc-hash-1.0.1/src/lib.rs:81 1b742bf: jb 1b742e7 <_ZN10syntax_pos6symbol8Interner6intern17h6c86044010aa5bfcE+0x47> # 0x65CF2BF rustc-hash-1.0.1/src/lib.rs:81 1b742c1: lea -0x8(%rdx),%r8 # 0x65CF2C1 rustc-hash-1.0.1/src/lib.rs:81 1b742c5: mov %r8,%rbp # 0x65CF2C5 rustc-hash-1.0.1/src/lib.rs:81 1b742c8: shr $0x3,%rbp # 0x65CF2C8 rustc-hash-1.0.1/src/lib.rs:81 1b742cc: add $0x1,%rbp # 0x65CF2CC rustc-hash-1.0.1/src/lib.rs:81 1b742d0: mov %ebp,%ecx # 0x65CF2D0 rustc-hash-1.0.1/src/lib.rs:81 1b742d2: and $0x3,%ecx # 0x65CF2D2 rustc-hash-1.0.1/src/lib.rs:81 1b742d5: cmp $0x18,%r8 # 0x65CF2D5 rustc-hash-1.0.1/src/lib.rs:81 1b742d9: jae 1b742fe <_ZN10syntax_pos6symbol8Interner6intern17h6c86044010aa5bfcE+0x5e> # 0x65CF2D9 rustc-hash-1.0.1/src/lib.rs:81 1b742db: xor %ebx,%ebx # 0x65CF2DB rustc-hash-1.0.1/src/lib.rs:0 1b742dd: mov %r13,%rsi # 0x65CF2DD rustc-hash-1.0.1/src/lib.rs:0 1b742e0: test %rcx,%rcx # 0x65CF2E0 rustc-hash-1.0.1/src/lib.rs:81 1b742e3: jne 1b7434e <_ZN10syntax_pos6symbol8Interner6intern17h6c86044010aa5bfcE+0xae>(0x65CF2E3) -> 1b7434e # 0x65CF2E3 rustc-hash-1.0.1/src/lib.rs:81 ... # ... 1b7434e: xor %ebp,%ebp # 0x65CF34E rustc-hash-1.0.1/src/lib.rs:0 1b74350: rol $0x5,%rbx # 0x65CF350 libcore/num/mod.rs:2461 1b74354: xor (%rsi,%rbp,8),%rbx # 0x65CF354 libcore/ops/bit.rs:306 1b74358: imul %rax,%rbx # 0x65CF358 libcore/num/mod.rs:3096 1b7435c: add $0x1,%rbp # 0x65CF35C rustc-hash-1.0.1/src/lib.rs:81 1b74360: cmp %rbp,%rcx # 0x65CF360 rustc-hash-1.0.1/src/lib.rs:81 1b74363: jne 1b74350 <_ZN10syntax_pos6symbol8Interner6intern17h6c86044010aa5bfcE+0xb0> # 0x65CF363 rustc-hash-1.0.1/src/lib.rs:81 1b74365: mov %r8,%rsi # 0x65CF365 rustc-hash-1.0.1/src/lib.rs:81 1b74368: and $0xfffffffffffffff8,%rsi # 0x65CF368 rustc-hash-1.0.1/src/lib.rs:81 1b7436c: lea (%rsi,%r13,1),%rcx # 0x65CF36C rustc-hash-1.0.1/src/lib.rs:81 1b74370: add $0x8,%rcx # 0x65CF370 rustc-hash-1.0.1/src/lib.rs:81 1b74374: sub %rsi,%r8 # 0x65CF374 rustc-hash-1.0.1/src/lib.rs:81 1b74377: cmp $0x3,%r8 # 0x65CF377 rustc-hash-1.0.1/src/lib.rs:85 1b7437b: jbe 1b74392 <_ZN10syntax_pos6symbol8Interner6intern17h6c86044010aa5bfcE+0xf2> # 0x65CF37B rustc-hash-1.0.1/src/lib.rs:85 ... # ... 
1b74392: cmp $0x2,%r8 # 0x65CF392 rustc-hash-1.0.1/src/lib.rs:89 1b74396: jae 1b7458b <_ZN10syntax_pos6symbol8Interner6intern17h6c86044010aa5bfcE+0x2eb> # 0x65CF396 rustc-hash-1.0.1/src/lib.rs:89 1b7439c: test %r8,%r8 # 0x65CF39C rustc-hash-1.0.1/src/lib.rs:93 1b7439f: je 1b743af <_ZN10syntax_pos6symbol8Interner6intern17h6c86044010aa5bfcE+0x10f> # 0x65CF39F rustc-hash-1.0.1/src/lib.rs:93 1b743a1: movzbl (%rcx),%ecx # 0x65CF3A1 rustc-hash-1.0.1/src/lib.rs:94 1b743a4: rol $0x5,%rbx # 0x65CF3A4 libcore/num/mod.rs:2461 1b743a8: xor %rcx,%rbx # 0x65CF3A8 libcore/ops/bit.rs:306 1b743ab: imul %rax,%rbx # 0x65CF3AB libcore/num/mod.rs:3096 1b743af: lea 0x30(%rdi),%rcx # 0x65CF3AF libsyntax_pos/symbol.rs:0 1b743b3: mov %rcx,0x20(%rsp) # 0x65CF3B3 libsyntax_pos/symbol.rs:0 1b743b8: rol $0x5,%rbx # 0x65CF3B8 libcore/num/mod.rs:2461 1b743bc: xor $0xff,%rbx # 0x65CF3BC libcore/ops/bit.rs:306 1b743c3: imul %rax,%rbx # 0x65CF3C3 libcore/num/mod.rs:3096 1b743c7: mov 0x30(%rdi),%rcx # 0x65CF3C7 hashbrown-0.6.1/src/raw/mod.rs:489 1b743cb: mov 0x38(%rdi),%rsi # 0x65CF3CB hashbrown-0.6.1/src/raw/mod.rs:0 1b743cf: mov %rbx,%rax # 0x65CF3CF hashbrown-0.6.1/src/raw/mod.rs:0 1b743d2: shr $0x39,%rax # 0x65CF3D2 hashbrown-0.6.1/src/raw/mod.rs:0 1b743d6: movd %eax,%xmm0 # 0x65CF3D6 hashbrown-0.6.1/src/raw/mod.rs:0 1b743da: punpcklbw %xmm0,%xmm0 # 0x65CF3DA hashbrown-0.6.1/src/raw/mod.rs:0 1b743de: pshuflw $0xe0,%xmm0,%xmm0 # 0x65CF3DE hashbrown-0.6.1/src/raw/mod.rs:0 1b743e3: pshufd $0x0,%xmm0,%xmm1 # 0x65CF3E3 hashbrown-0.6.1/src/raw/mod.rs:0 1b743e8: mov %rdi,0x28(%rsp) # 0x65CF3E8 hashbrown-0.6.1/src/raw/mod.rs:0 1b743ed: mov 0x40(%rdi),%r12 # 0x65CF3ED hashbrown-0.6.1/src/raw/mod.rs:0 1b743f1: xor %ebp,%ebp # 0x65CF3F1 hashbrown-0.6.1/src/raw/mod.rs:0 1b743f3: pcmpeqd %xmm2,%xmm2 # 0x65CF3F3 hashbrown-0.6.1/src/raw/mod.rs:0 1b743f7: mov 0x4a9b8a(%rip),%r14 # 201df88 <bcmp@GLIBC_2.2.5> # 0x65CF3F7 hashbrown-0.6.1/src/raw/mod.rs:0 1b743fe: and %rcx,%rbx # 0x65CF3FE hashbrown-0.6.1/src/raw/mod.rs:0 1b74401: movdqu (%rsi,%rbx,1),%xmm3 # 0x65CF401 libcore/intrinsics.rs:1462 ``` The thing to notice is that the majority of the lines have a valid filename and line number. (The filenames jump around a lot due to inlining. That's fine.) But quite a few of them have a line number of zero. There is no obvious pattern to the zeroes; sometimes there are one or two in a sequence, sometimes there are more. If we trace through these instructions once, here are the filename/linenum pairs that get instructions counted towards them: ``` libsyntax_pos/symbol.rs:976 x 9 rustc-hash-1.0.1/src/lib.rs:81 x 10 rustc-hash-1.0.1/src/lib.rs:0 (0!) rustc-hash-1.0.1/src/lib.rs:81 x 2 rustc-hash-1.0.1/src/lib.rs:0 (0!) libcore/num/mod.rs:2461 libcore/ops/bit.rs:306 libcore/num/mod.rs:3096 rustc-hash-1.0.1/src/lib.rs:81 x 8 rustc-hash-1.0.1/src/lib.rs:85 x 2 rustc-hash-1.0.1/src/lib.rs:89 x 2 rustc-hash-1.0.1/src/lib.rs:93 x 2 rustc-hash-1.0.1/src/lib.rs:94 libcore/num/mod.rs:2461 libcore/ops/bit.rs:306 libcore/num/mod.rs:3096 libsyntax_pos/symbol.rs:0 x 2 (0!) libcore/num/mod.rs:2461 libcore/ops/bit.rs:306 libcore/num/mod.rs:3096 hashbrown-0.6.1/src/raw/mod.rs:489 hashbrown-0.6.1/src/raw/mod.rs:0 x 13 (0!) libcore/intrinsics.rs:1462 ``` 47 instructions have a non-zero line number, and 17 have a zero line number. I haven't seen this problem occur with C and C++ code. I suspect the problem is that the production of debug info isn't rigorous enough in some fashion, and Cachegrind's requirements might be more onerous than other tools'. 
(Debug info is often not tested thoroughly.) Sometimes it can be non-obvious exactly which line of code an instruction should be attributed to. In that case, I'm not too fussed so long as it's attributed to something plausible and not zero. cc @julian-seward1
A-LLVM,A-debuginfo,T-compiler,C-bug,S-waiting-on-LLVM
medium
Critical
508,227,689
youtube-dl
NoodleMagazine URL Support Request
<!-- ###################################################################### WARNING! IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE ###################################################################### --> ## Checklist <!-- Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl: - First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2019.10.16. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED. - Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser. - Make sure that site you are requesting is not dedicated to copyright infringement, see https://yt-dl.org/copyright-infringement. youtube-dl does not support such sites. In order for site support request to be accepted all provided example URLs should not violate any copyrights. - Search the bugtracker for similar site support requests: http://yt-dl.org/search-issues. DO NOT post duplicates. - Finally, put x into all relevant boxes (like this [x]) --> - [x] I'm reporting a new site support request - [x] I've verified that I'm running youtube-dl version **2019.10.16** - [x] I've checked that all provided URLs are alive and playable in a browser - [x] I've checked that none of provided URLs violate any copyrights - [x] I've searched the bugtracker for similar site support requests including closed ones ## Example URLs <!-- Provide all kinds of example URLs support for which should be included. Replace following example URLs by yours. --> - Single video: https://noodlemagazine.com/watch/-97597836_456243773 ## Description <!-- Provide any additional information. If work on your issue requires account credentials please provide them or explain how one can obtain them. --> Phone:~ mobile$ youtube-dl -f best https://noodlemagazine.com/watch/-97597836_456243773 Β  [generic] -97597836_456243773: Requesting header Β  WARNING: Falling back on generic information extractor. Β  [generic] -97597836_456243773: Downloading webpageΒ  [generic] -97597836_456243773: Extracting information Β  NOT TEACHABLE Β  ERROR: Unsupported URL: https://noodlemagazine.com/watch/-97597836_456243773
site-support-request
low
Critical
508,241,758
pytorch
torch.utils.tensorboard.SummaryWriter.add_graph does not support non-tensor inputs
## πŸ› Bug <!-- A clear and concise description of what the bug is. --> ## To Reproduce Steps to reproduce the behavior: 1.Run my script below: ```python import torch import torch.nn as nn import torch.nn.functional as F from torch.utils.tensorboard import SummaryWriter # from tensorboardX import SummaryWriter # bug 1: bool type inputs class Net_1(nn.Module): def __init__(self, dropout=0.5): super(Net_1, self).__init__() self.fc1 = nn.Linear(120, 84) self.fc2 = nn.Linear(84, 10) self.dropout = nn.Dropout(dropout) def forward(self, x, use_dropout=False): x = F.relu(self.fc1(x)) if use_dropout: x = self.dropout(x) # or other operations .... x = F.relu(self.fc2(x)) return x with SummaryWriter("bugs") as w: net = Net_1() input_x = torch.randn((2,120)) w.add_graph(net, (input_x, True)) # bug 2: None type inputs (might be argument's default value) class Net_2(nn.Module): def __init__(self): super(Net_2, self).__init__() self.fc1 = nn.Linear(120, 84) self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(120, 84) self.fc4 = nn.Linear(84, 10) def forward(self, x, y=None, z=None): x = F.relu(self.fc1(x)) if y is not None: y = F.relu(self.fc2(y)) x = x + y if z is not None: z = F.relu(self.fc3(z)) x = x + z x = F.relu(self.fc4(x)) return x with SummaryWriter("bugs") as w: net = Net_2() input_x = torch.randn((2,120)) input_y = None input_z = torch.randn((2,120)) w.add_graph(net, (input_x, input_y, input_z)) # bug 3: List type inputs (dict, or other python build-in types like int,str,... may also meet this question) class Net_3(nn.Module): def __init__(self): super(Net_3, self).__init__() self.fc_list = [nn.Linear(120, 120) for _ in range(10)] self.fc_n = nn.Linear(120, 10) def forward(self, x, index:list=None): if index is not None: for i in index: x = F.relu(self.fc_list[i](x)) x = F.relu(self.fc_n(x)) return x with SummaryWriter("bugs") as w: net = Net_3() input_x = torch.randn((2, 120)) index = [1, 5, 1, 7, 0] w.add_graph(net, (input_x, index)) ``` and you can see the trace(take bug 3 as an example): ``` Error occurs, No graph saved Traceback (most recent call last): File "<input>", line 1, in <module> File "/Applications/PyCharm.app/Contents/helpers/pydev/_pydev_bundle/pydev_umd.py", line 197, in runfile pydev_imports.execfile(filename, global_vars, local_vars) # execute the script File "/Applications/PyCharm.app/Contents/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File "/Users/wangyuanzheng/Downloads/xxxxxxx/project/albert_pytorch/dev/add_graph_bug.py", line 25, in <module> w.add_graph(net, (input_x, True)) File "/Users/wangyuanzheng/anaconda3/envs/CCFBigData-torch/lib/python3.7/site-packages/torch/utils/tensorboard/writer.py", line 682, in add_graph self._get_file_writer().add_graph(graph(model, input_to_model, verbose)) File "/Users/wangyuanzheng/anaconda3/envs/CCFBigData-torch/lib/python3.7/site-packages/torch/utils/tensorboard/_pytorch_graph.py", line 239, in graph raise e File "/Users/wangyuanzheng/anaconda3/envs/CCFBigData-torch/lib/python3.7/site-packages/torch/utils/tensorboard/_pytorch_graph.py", line 234, in graph trace = torch.jit.trace(model, args) File "/Users/wangyuanzheng/anaconda3/envs/CCFBigData-torch/lib/python3.7/site-packages/torch/jit/__init__.py", line 858, in trace check_tolerance, _force_outplace, _module_class) File "/Users/wangyuanzheng/anaconda3/envs/CCFBigData-torch/lib/python3.7/site-packages/torch/jit/__init__.py", line 997, in trace_module module._c._create_method_from_trace(method_name, 
func, example_inputs, var_lookup_fn, _force_outplace) RuntimeError: Type 'Tuple[Tensor, bool]' cannot be traced. Only Tensors and (possibly nested) Lists, Dicts, and Tuples of Tensors can be traced (toTraceableIValue at ../torch/csrc/jit/pybind_utils.h:298) frame #0: c10::Error::Error(c10::SourceLocation, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) + 135 (0x110c479e7 in libc10.dylib) frame #1: torch::jit::toTraceableIValue(pybind11::handle) + 1280 (0x110246740 in libtorch_python.dylib) frame #2: torch::jit::toTypedStack(pybind11::tuple const&) + 31 (0x1102e7edf in libtorch_python.dylib) frame #3: void pybind11::cpp_function::initialize<torch::jit::script::initJitScriptBindings(_object*)::$_16, void, torch::jit::script::Module&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, pybind11::function, pybind11::tuple, pybind11::function, bool, pybind11::name, pybind11::is_method, pybind11::sibling>(torch::jit::script::initJitScriptBindings(_object*)::$_16&&, void (*)(torch::jit::script::Module&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, pybind11::function, pybind11::tuple, pybind11::function, bool), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&)::'lambda'(pybind11::detail::function_call&)::__invoke(pybind11::detail::function_call&) + 147 (0x11031e4e3 in libtorch_python.dylib) frame #4: pybind11::cpp_function::dispatcher(_object*, _object*, _object*) + 3372 (0x10fe57d3c in libtorch_python.dylib) <omitting python frames> ``` <!-- If you have a code sample, error messages, stack traces, please provide it here as well --> ## Expected behavior <!-- A clear and concise description of what you expected to happen. --> writer.add_graph should run normally. ## Environment Please copy and paste the output from our [environment collection script](https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py) (or fill out the checklist below manually). You can get the script and run it with: ``` wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py # For security purposes, please check the contents of collect_env.py before running it. python collect_env.py ``` Collecting environment information... PyTorch version: 1.3.0 Is debug build: No CUDA used to build PyTorch: None OS: Mac OSX 10.14.6 GCC version: Could not collect CMake version: Could not collect Python version: 3.7 Is CUDA available: No CUDA runtime version: No CUDA GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA Versions of relevant libraries: [pip] numpy==1.17.2 [pip] torch==1.3.0 [pip] torchvision==0.4.1 [conda] torch 1.3.0 pypi_0 pypi [conda] torchvision 0.4.1 pypi_0 pypi ## Additional context <!-- Add any other context about the problem here. --> 1.TensorboardX.SummaryWriter.add_graph has the same bug as torch.utils.tensorboard 2.Besides this bug, I hope add_graph could accept not only a tuple as positional arguments, but also a dict as keyword arguments for the model.forward()'s input
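In the meantime, a common workaround sketch for bug 1 (reusing `Net_1` from the script above; `TraceWrapper` is a made-up name): wrap the model in a small module whose `forward` takes only tensors and fixes the non-tensor arguments inside the wrapper, so the tracer behind `add_graph` never sees them. Bugs 2 and 3 can be handled the same way by binding `y`, `z` or `index` inside the wrapper.

```python
import torch
import torch.nn as nn
from torch.utils.tensorboard import SummaryWriter

class TraceWrapper(nn.Module):
    def __init__(self, net, use_dropout=True):
        super(TraceWrapper, self).__init__()
        self.net = net
        self.use_dropout = use_dropout  # captured as a Python constant, not traced

    def forward(self, x):
        return self.net(x, self.use_dropout)

with SummaryWriter("bugs_workaround") as w:
    net = Net_1()  # defined in the repro script above
    w.add_graph(TraceWrapper(net), torch.rand(2, 120))
```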
oncall: visualization
low
Critical
508,292,773
youtube-dl
FA Player
<!-- ###################################################################### WARNING! IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE ###################################################################### --> ## Checklist <!-- Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl: - First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2019.10.16. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED. - Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser. - Make sure that site you are requesting is not dedicated to copyright infringement, see https://yt-dl.org/copyright-infringement. youtube-dl does not support such sites. In order for site support request to be accepted all provided example URLs should not violate any copyrights. - Search the bugtracker for similar site support requests: http://yt-dl.org/search-issues. DO NOT post duplicates. - Finally, put x into all relevant boxes (like this [x]) --> - [x] I'm reporting a new site support request - [x] I've verified that I'm running youtube-dl version **2019.10.16** - [x] I've checked that all provided URLs are alive and playable in a browser - [x] I've checked that none of provided URLs violate any copyrights - [x] I've searched the bugtracker for similar site support requests including closed ones ## Example URLs <!-- Provide all kinds of example URLs support for which should be included. Replace following example URLs by yours. --> - Single video: https://faplayer.thefa.com/video/MF93eHg2Y3FsNyU3QyUyRmJveHNldCUzRnBhZ2UlM0RoaWdobGlnaHRzLXdzbCU3Q2hpZ2hsaWdodHMtYXJzZW5hbA ## Description <!-- Provide any additional information. If work on your issue requires account credentials please provide them or explain how one can obtain them. --> WRITE DESCRIPTION HERE Log in is required to watch the videos. Signing up is completely free.
site-support-request
low
Critical
508,372,725
flutter
Run `flutter run --verbose` should stream the output of `pod install --verbose` so useful information is displayed during long operations
1. when some pod version changed. 2. I run `flutter run --verbose` to see the detail progress. 3. And I found that it's run `pod install`, since these step is so long, it stuck there for a while. 4. I just stop there, and go into /ios/ folder and run `pod install --verbose` myself 5. it seems it will Expected result: Run `flutter run --verbose` should better run `pod install --verbose` <!-- Please tell us which target platform(s) the problem occurs (Android / iOS / Web / macOS / Linux / Windows) Which target OS version, for Web, browser, is the test system running? Does the problem occur on emulator/simulator as well as on physical devices? --> **Target Platform:iOS** **Target OS version/browser:iOS 13** **Devices:iPhone8** ## Logs <!-- Run your application with `flutter run --verbose` and attach all the log output below between the lines with the backticks. If there is an exception, please see if the error message includes enough information to explain how to solve the issue. --> The bellow log is when I run `pod install --verbose` success in `/ios/` folder by myself, and go back to root folder and run `flutter run --verbose`: * still I can see the `Running pod install` takes much time * and the `CocoaPods' output` looks like wait `pod install` finished first. it's better if it can show logs steps by steps just like what `pod install --verbose` do. ``` [ +112 ms] executing: pod --version [+1118 ms] 1.8.3 [ +5 ms] Running pod install... [+5138 ms] Running pod install... (completed in 5.1s) [ +1 ms] CocoaPods' output: ↳ [ +16 ms] Preparing Analyzing dependencies Inspecting targets to integrate Using `ARCHS` setting to build architectures of target `Pods-Runner`: (``) Finding Podfile changes .... ``` <!-- Run `flutter analyze` and attach any output of that command below. If there are any analysis errors, try resolving them before filing this issue. --> ``` ``` <!-- Finally, paste the output of running `flutter doctor -v` here. --> ``` [βœ“] Flutter (Channel beta, v1.9.1+hotfix.4, on Mac OS X 10.14.6 18G95, locale en-CN) β€’ Flutter version 1.9.1+hotfix.4 at /Users/jerryzhou/Documents/code/flutter/sdk/flutter β€’ Framework revision cc949a8e8b (3 weeks ago), 2019-09-27 15:04:59 -0700 β€’ Engine revision b863200c37 β€’ Dart version 2.5.0 [βœ“] Android toolchain - develop for Android devices (Android SDK version 28.0.3) β€’ Android SDK at /Users/jerryzhou/Library/Android/sdk β€’ Android NDK location not configured (optional; useful for native profiling support) β€’ Platform android-28, build-tools 28.0.3 β€’ Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java β€’ Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1343-b01) β€’ All Android licenses accepted. [βœ“] Xcode - develop for iOS and macOS (Xcode 11.1) β€’ Xcode at /Applications/Xcode.app/Contents/Developer β€’ Xcode 11.1, Build version 11A1027 β€’ CocoaPods version 1.8.3 [βœ“] Android Studio (version 3.4) β€’ Android Studio at /Applications/Android Studio.app/Contents β€’ Flutter plugin version 34.0.2 β€’ Dart plugin version 183.5901 β€’ Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1343-b01) [βœ—] Cannot determine if IntelliJ is installed βœ— Directory listing failed [βœ“] VS Code (version 1.39.1) β€’ VS Code at /Applications/Visual Studio Code.app/Contents β€’ Flutter extension version 3.5.1 [βœ“] Connected device (1 available) β€’ iPhonejerry β€’ 1daca728d75535c21ced151fc7829191064d0e00 β€’ ios β€’ iOS 13.1.3 ```
platform-ios,tool,platform-mac,c: proposal,P3,team-ios,triaged-ios
low
Critical
508,376,462
go
cmd/compile: consider flagging ops that should not be moved between blocks
Some intrinsics (such as population count, FMA and rounding) are emitted by the compiler but are not present in the base ISA of the CPU the compiler is targeting. In these cases the compiler emits code to guard the execution of the instruction and fall back to a slower implementation if the required CPU feature is not present. This technique is currently working fine but I am concerned it might not interact well with block fusing optimizations that we might add in the future (such as those mentioned in #30645) and this could lead to subtle bugs, especially since machines without some CPU features (e.g. SSE 4.1) are fairly rare these days. We already do some optimizations where we perform code movement and speculatively execute code in order to emit conditional select instructions. I think we should consider marking these ops somehow, perhaps simply with the 'has side effects' flag. This would represent the possibility that these instructions could cause an illegal instruction exception and prevent the compiler from moving them. For a similar reason, the conditional select optimizations special-case integer divide ops, since they panic if the divisor is 0: they should not be speculatively executed. Example: ```go
if cpu.HasPopCount {
	y = PopCount(x)
} else {
	y = GenericPopCount(x)
}
``` Could be transformed by the compiler into: ```go
y = select(cpu.HasPopCount, PopCount(x), GenericPopCount(x))
``` Currently there is ~zero risk of this transformation occurring because the fallback is generally an expensive function call with side effects. However I think that is the only reason this code transformation wouldn't be applied and that seems a bit fragile.
NeedsInvestigation,compiler/runtime
low
Critical
508,440,465
TypeScript
Add quick fix to export unexported members to fix unresolved symbol errors
## Search Terms import export unexported symbols auto import completion global globals intellisense ## Suggestion tsserver can auto-complete exported symbols from other modules, and add an import statement that imports the symbol you selected. It should also suggest **global symbols** from other modules that are ***not* exported**, and add the *export* keyword if the suggestion is selected. ## Use Cases When I write a module I can't predict every use case of it, and I only export the global symbols that I think other modules would need to use. But when I'm working on a different module and I realize I want to use a symbol which is not exported by the other module (or maybe I don't even remember if it is or isn't exported), I would like my editor to export and import it for me instead of me having to find that symbol manually (because even go-to definition won't work) and export it, and then go back and auto-complete to auto-import. This is also very very **very** useful when converting a web project to use imports. You can just go over all of the "undeclared symbol" errors and auto-complete to export and import the correct symbol. ## Examples *module.ts*: ```ts function foo(name = 'World') { console.log('Hello ' + name); } export function bar() { foo(); } ``` *app.ts*: ```ts import { bar } from './module.ts'; bar(); ``` I now want to customize the "hello" message, so I start typing `foo` and then wait for completion suggestions which include the function `foo` from `module.ts`. After selecting that, the symbol `foo` is automatically exported and imported for me, and I can use it immediately: *module.ts*: ```ts export function foo(name = 'World') { console.log('Hello ' + name); } export function bar() { foo(); } ``` *app.ts*: ```ts import { bar, foo } from './module.ts'; bar(); foo('TypeScript'); ``` ## Checklist My suggestion meets these guidelines: * [X] This wouldn't be a breaking change in existing TypeScript/JavaScript code * [X] This wouldn't change the runtime behavior of existing JavaScript code * [X] This could be implemented without emitting different JS based on the types of the expressions * [X] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.) * [X] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
Suggestion,Awaiting More Feedback,Domain: Quick Fixes
low
Critical
508,449,203
pytorch
reflective padding for 5D tensor
## 🚀 Feature <!-- A clear and concise description of the feature proposal --> Reflective padding can be implemented for 5D tensors. ## Motivation Reflective padding works great in image convolutions. However, it is not yet implemented for 3D data (5D tensors). <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too --> ## Pitch Reflective padding should be implemented for 5D tensors. <!-- A clear and concise description of what you want to happen. --> ## Alternatives <!-- A clear and concise description of any alternative solutions or features you've considered, if any. --> I've made a Python implementation of this in #28215 (see also the workaround sketch below). However, a fix implemented in [csrc](https://github.com/pytorch/pytorch/blob/master/torch/csrc) would be better. ## Additional context This padding is well defined for 5D tensors; it is simply not implemented. <!-- Add any other context or screenshots about the feature request here. -->
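In the meantime, a minimal workaround sketch (not the proposed csrc fix): build 3D reflection padding for an `(N, C, D, H, W)` tensor out of `index_select` and `cat`, one spatial dimension at a time, using the same `pad` ordering as `F.pad`. It assumes each padding amount is smaller than the corresponding spatial size, as reflection padding requires.

```python
import torch

def reflect_pad3d(x, pad):
    # pad follows the F.pad convention for 5D input:
    # (w_left, w_right, h_top, h_bottom, d_front, d_back)
    for dim, (before, after) in zip((4, 3, 2), (pad[0:2], pad[2:4], pad[4:6])):
        pieces = []
        if before > 0:
            # Reflected prefix: indices before, ..., 1 (excluding the edge itself).
            idx = torch.arange(before, 0, -1, device=x.device)
            pieces.append(x.index_select(dim, idx))
        pieces.append(x)
        if after > 0:
            # Reflected suffix: indices n-2, ..., n-1-after.
            n = x.size(dim)
            idx = torch.arange(n - 2, n - 2 - after, -1, device=x.device)
            pieces.append(x.index_select(dim, idx))
        x = torch.cat(pieces, dim=dim)
    return x

x = torch.arange(2 * 1 * 3 * 4 * 5, dtype=torch.float).view(2, 1, 3, 4, 5)
y = reflect_pad3d(x, (1, 1, 1, 1, 1, 1))
print(y.shape)  # torch.Size([2, 1, 5, 6, 7])
```

This should match what a native `ReflectionPad3d` would compute, but it materializes intermediate tensors along the way, which is why a csrc implementation would still be preferable.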
triaged,function request,module: padding
low
Minor
508,454,688
flutter
Internationalized words in text widgets are not centered vertically.
Having a problem regarding displaying Kannada language words using the Text widget. The text is not aligned vertically center. ```dart import 'package:flutter/material.dart'; void main() => runApp(MyApp()); class MyApp extends StatelessWidget { // This widget is the root of your application. @override Widget build(BuildContext context) { return MaterialApp( title: 'Flutter Demo', theme: ThemeData( primarySwatch: Colors.blue, ), home: Column( crossAxisAlignment: CrossAxisAlignment.center, mainAxisAlignment: MainAxisAlignment.center, children: [ FlatButton( child: Text( 'Follow', style: TextStyle(fontSize: 20, color: Colors.white), ), onPressed: () { print("Button pressed"); }, color: Colors.blue, ), FlatButton( child: Text( 'ಫಾಲೋ', style: TextStyle(fontSize: 20, color: Colors.white), ), onPressed: () { print("Button pressed"); }, color: Colors.blue, ), ], ), ); } } ``` <img width="367" alt="Screenshot 2019-10-17 at 6 04 42 PM" src="https://user-images.githubusercontent.com/17924983/67009156-9d252380-f108-11e9-921c-89d6e2ac9c24.png"> ``` flutter doctor -v [βœ“] Flutter (Channel stable, v1.9.1+hotfix.4, on Mac OS X 10.15 19A583, locale en-IN) β€’ Flutter version 1.9.1+hotfix.4 at /Users/vinothini/Documents/flutter β€’ Framework revision cc949a8e8b (3 weeks ago), 2019-09-27 15:04:59 -0700 β€’ Engine revision b863200c37 β€’ Dart version 2.5.0 [βœ“] Android toolchain - develop for Android devices (Android SDK version 29.0.2) β€’ Android SDK at /Users/vinothini/Library/Android/sdk β€’ Android NDK location not configured (optional; useful for native profiling support) β€’ Platform android-29, build-tools 29.0.2 β€’ Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java β€’ Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b49-5587405) β€’ All Android licenses accepted. [βœ“] Xcode - develop for iOS and macOS (Xcode 11.0) β€’ Xcode at /Applications/Xcode.app/Contents/Developer β€’ Xcode 11.0, Build version 11A420a β€’ CocoaPods version 1.8.3 [βœ“] Android Studio (version 3.5) β€’ Android Studio at /Applications/Android Studio.app/Contents β€’ Flutter plugin version 39.0.3 β€’ Dart plugin version 191.8423 β€’ Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b49-5587405) [βœ“] VS Code (version 1.39.2) β€’ VS Code at /Applications/Visual Studio Code.app/Contents β€’ Flutter extension version 3.5.1 [βœ“] Connected device (1 available) β€’ iPhone 11 β€’ 4BE1A3E5-2E98-4539-B98C-672848CA2098 β€’ ios β€’ com.apple.CoreSimulator.SimRuntime.iOS-13-0 (simulator) β€’ No issues found! ```
framework,engine,a: internationalization,a: typography,has reproducible steps,P2,found in release: 1.22,found in release: 2.0,team-engine,triaged-engine
low
Major
508,491,491
bitcoin
bench: Add BIP 143 sighash benchmark
I believe there is no benchmark for the BIP 143 signature hash. Just for completeness, it would be nice to have a few different sighash flags benchmarked.
Feature,Tests
low
Minor
508,502,022
pytorch
index_sub, index_mul and index_div
## πŸš€ Feature Provide `index_sub`, `index_sub_`, `index_mul` etc. functions that work like existing functions `index_add` and `index_add_` ## Motivation If we have a 2-dimensional tensor `T`, and want to multiply `T[0]`, `T[3]` and `T[15]` by 2, there seems to be no way to do this at the moment (in the case where the number of rows we want to multiply by 2 is variable). `index_mul_` would enable us to accomplish this by calling `T.index_mul_(0, torch.tensor([0, 3, 15]), torch.full((3, T.shape[1]), 2.0))`, the way `index_add_` allows adding 2 right now. ## Alternatives `index_sub`, in particular, can be worked around using `index_add` right now, but is less efficient since it involves negating a tensor first. I do not know of any alternatives to the other two, short of taking exponents and logarithms of the entire tensor.
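For the motivating example specifically, a hedged workaround exists today when the row indices are unique: advanced indexing supports in-place arithmetic on the selected rows. It does not accumulate over repeated indices the way `index_add_` does, which is part of why a dedicated `index_mul_` would still be useful.

```python
import torch

T = torch.arange(16 * 4, dtype=torch.float).view(16, 4)
idx = torch.tensor([0, 3, 15])

T[idx] *= 2  # multiplies rows 0, 3 and 15 by 2 in place
print(T[idx])
```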
triaged,enhancement,OSS contribution wanted
low
Minor
508,512,405
create-react-app
Coverage report not working
### Describe the bug The problem is that I can't get the coverage report for any project. I used a brand new cra project to test if something is wrong locally but I still have the same problem. This is what I am supposed to see: ![image](https://user-images.githubusercontent.com/18174459/67017097-16dbf380-f0ae-11e9-9fed-f9689bd67029.png) Taken from: https://create-react-app.dev/docs/running-tests/#coverage-reporting And this is what I see instead: ![image](https://user-images.githubusercontent.com/18174459/67017174-33782b80-f0ae-11e9-8f17-ede231fce6f9.png) ### Did you try recovering your dependencies? I did. yarn --version 1.19.1 ### Environment Environment Info: System: OS: macOS 10.15 CPU: (4) x64 Intel(R) Core(TM) i5-7267U CPU @ 3.10GHz Binaries: Node: 12.12.0 - /usr/local/bin/node Yarn: 1.19.1 - /usr/local/bin/yarn npm: 6.11.3 - /usr/local/bin/npm Browsers: Chrome: 77.0.3865.120 Firefox: 69.0 Safari: 13.0.2 npmPackages: react: ^16.10.2 => 16.10.2 react-dom: ^16.10.2 => 16.10.2 react-scripts: 3.2.0 => 3.2.0 npmGlobalPackages: create-react-app: 0.3.0 ### Steps to reproduce 1. Create a new project using CRA 2. You will see the 3.2.0 version of react-scripts 3. Run npm test -- --coverage or yarn test --coverage ### Expected behavior I should see the coverage report including the App.js file. ### Actual behavior I see no file included in the report: ![image](https://user-images.githubusercontent.com/18174459/67017602-ddf04e80-f0ae-11e9-9aa7-8957165d282d.png) ### Reproducible demo Just used the latest CRA version in an empty project.
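A hedged suggestion rather than a confirmed fix: check whether an explicit `collectCoverageFrom` override in `package.json` (one of the Jest keys create-react-app allows overriding) brings files back into the report, and whether the Node version shown above (12.12.0) is hitting a known Jest coverage regression. The globs below are only an illustration.

```json
{
  "jest": {
    "collectCoverageFrom": [
      "src/**/*.{js,jsx,ts,tsx}",
      "!src/**/*.d.ts"
    ]
  }
}
```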
issue: bug,issue: needs investigation
medium
Critical
508,536,733
go
runtime: throw should print more diagnostics
As of Go 1.14 tip, `runtime.throw` enables slightly more diagnostic information in the traceback. It would help with debugging runtime crashes if it printed more diagnostics, such as: 1. FP/SP/and PC for all frames in all goroutines, not just the throwing goroutine (which doesn't even work if the throw happens on a system stack) 2. Runtime frames on all goroutines. 3. The `*g` of each goroutine. When a crash involves the scheduler or GC, we often see g pointers in arguments, but generally can't match these up to goroutines in the traceback. 4. Whether or not the world is stopped, or GC is running. Most calls to `runtime.throw` are for internal runtime issues, but some of them are user errors that can't be caught and need to tear down the process (e.g., deadlocks, map races, out of memory). We probably want to distinguish these so the user errors don't include potentially confusing runtime-internal diagnostics. /cc @mknyszek
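For comparison, the knob that exists today is the traceback level: `GOTRACEBACK=system` (or `crash`) already addresses part of point 2 by including runtime frames, and it can also be raised programmatically. A minimal sketch (the nil-map write is just there to force a traceback):

```go
package main

import "runtime/debug"

func main() {
	// Equivalent to running with GOTRACEBACK=system: tracebacks, including
	// those printed on fatal runtime errors, will show runtime frames.
	debug.SetTraceback("system")

	var m map[string]int
	m["boom"] = 1 // panics, printed with the raised traceback level
}
```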
NeedsFix,compiler/runtime
low
Critical
508,539,659
pytorch
TorchScript custom ops like `cuda`, `byte`, etc. don't support the memory_format argument
cc @suo
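Since the issue body gives no repro, here is a hypothetical sketch of the kind of code involved: the conversion methods accept `memory_format` in eager mode, but scripting the same call is what fails (the exact error type and message may differ, and this overlaps with the next issue about the memory-format constants themselves).

```python
import torch

def f(x):
    # Works in eager mode (PyTorch 1.3+): conversion methods accept memory_format.
    return x.float(memory_format=torch.preserve_format)

print(f(torch.randn(2, 3, 4, 5)).dtype)  # torch.float32

try:
    torch.jit.script(f)  # the failure this issue is about
except Exception as e:
    print("TorchScript rejected it:", e)
```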
oncall: jit,triaged
low
Minor
508,540,365
pytorch
TorchScript doesn't support torch.channels_last or any other memory format constants
cc @suo @VitalyFedyunin
oncall: jit,triaged,module: memory format
low
Minor
508,543,553
pytorch
Convert manually bound `cuda` `cpu` `byte` `float` operators to native_functions
I am a bit skeptical that these should be manually bound. They seem like fairly simple functions that the code generator should be able to understand... _Originally posted by @ezyang in https://github.com/pytorch/pytorch/pull/27228#discussion_r335551596_
triaged,module: ux
low
Minor
508,569,371
pytorch
Allow to disable polling for CUDA synchronization
## πŸš€ Feature When using CUDA, by default, synchronization of the host and GPU works through a tight loop that polls a particular memory location, running a CPU core at 100%. CUDA provides two alternative modes: (1) a tight loop that yields to other threads in between, and (2) synchronization via an interrupt (translated into an event by the driver). PyTorch should provide some way to select either of the three modes. ## Motivation Burning a CPU core at 100% is not very green. It doesn't save a lot of time either, especially when the GPU has to process large workloads in between synchronizations. And it makes it harder to see how much the CPU is actually utilized. ## Pitch On the backend, the way to select the synchronization mode is to run either of: * [`cudaSetDeviceFlags(cudaDeviceScheduleAuto)`](https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__DEVICE.html#group__CUDART__DEVICE_1g69e73c7dda3fc05306ae7c811a690fac) * `cudaSetDeviceFlags(cudaDeviceScheduleSpin)` * `cudaSetDeviceFlags(cudaDeviceScheduleYield)` * `cudaSetDeviceFlags(cudaDeviceScheduleBlockingSync)` This can be done right after the first `cudaSetDevice()` call, but before the device is used (otherwise it's a cudaErrorSetOnActiveProcess error) -- this may be hard to guarantee. However, it can also be done *before* any `cudaSetDevice()` call, in that case it will set the default for any future activated devices. So I'd suggest to add a function such as `torch.cuda.set_sync_mode()` that takes a string `"auto"`, `"spin"`, `"yield"`, or "`block`" and directly issues the corresponding `cudaSetDeviceFlags()` call. It's up to the user to run this early enough then. Disclaimer: I haven't looked into what would happen with multiprocessing. ## Alternatives The main alternative for implementing this would be figuring out where to safely place a `cudaSetDeviceFlags()` call in the device managing code, and have it read some global variables at that time. We could have a `torch.cuda.sync_mode` variable then, or still guard it by a setter function. It's also possible to solve this in user code doing something like: ```python import ctypes ctypes.CDLL('libcudart.so').cudaSetDeviceFlags(4) # = cudaDeviceScheduleBlockingSync ``` cc @ezyang @gchanan @zou3519 @bdhirsh @jbschlosser @anjali411 @ngimel @VitalyFedyunin @mruberry
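A sketch of what the proposed setter could look like, implemented in user code via the same ctypes route as the alternative above. The flag values 0/1/2/4 are `cudaDeviceScheduleAuto`/`Spin`/`Yield`/`BlockingSync` from the CUDA runtime; `set_sync_mode` itself is the proposed, not an existing, API.

```python
import ctypes

_FLAGS = {"auto": 0, "spin": 1, "yield": 2, "block": 4}

def set_sync_mode(mode: str) -> None:
    """Select the CUDA host-sync mode. Must run before the device is first used."""
    err = ctypes.CDLL("libcudart.so").cudaSetDeviceFlags(_FLAGS[mode])
    if err != 0:
        raise RuntimeError("cudaSetDeviceFlags failed with CUDA error %d" % err)

set_sync_mode("block")  # then import/use torch and run CUDA work as usual
```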
high priority,module: performance,module: cuda,triaged,enhancement
medium
Critical
508,610,915
rust
Use structured suggestion for closure needed instead of inner function
We currently emit the following error ``` error[E0434]: can't capture dynamic environment in a fn item --> $DIR/bad-env-capture3.rs:4:31 | LL | fn bar() { log(debug, x); } | ^ | = help: use the `|| { ... }` closure form instead ``` It'd be nice if we provided a structured suggestion instead.
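For reference, a self-contained sketch of the before/after code the structured suggestion would ideally produce (`println!` stands in for the `log(debug, x)` call from the test case):

```rust
fn main() {
    let x = 5u32;

    // Today this is an error: a nested `fn` item cannot capture `x`.
    // fn bar() { println!("{}", x); }

    // What the suggestion should rewrite it into: a closure, which can.
    let bar = || println!("{}", x);
    bar();
}
```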
C-enhancement,A-diagnostics,T-compiler,A-suggestion-diagnostics,D-papercut
low
Critical
508,623,942
pytorch
torch native functions cannot be used with inspect.signature
## πŸš€ Feature <!-- A clear and concise description of the feature proposal --> It would be nice if native functions were annotated with the necessary metadata to allow runtime introspection via the `inspect` module. ## Motivation <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too --> Right now using e.g. `inspect.signature` with a function defined natively produces a `ValueError`: ``` >>> import inspect >>> import torch >>> inspect.signature(torch.mm) *** ValueError: no signature found for builtin <built-in method mm of type object at 0x7ff8961cc940> ``` ## Pitch <!-- A clear and concise description of what you want to happen. --> It would be nice if the native function generation facilities in the pytorch build could create the function objects with the necessary metadata. Cython is [able to do this](https://stackoverflow.com/questions/46033277/how-to-introspect-a-function-defined-in-a-cython-c-extension-module), so it's definitely possible: ``` >>> import cython >>> import inspect >>> scope = cython.inline("""def f(a,*args,b=False): pass """) >>> inspect.signature(scope['f']) <Signature (a, *args, b=False)> ``` Unfortunately I don't know enough about how cython does this or how python builtins can get annotated with the necessary metadata to point to further resources for doing this in pytorch. ## Alternatives <!-- A clear and concise description of any alternative solutions or features you've considered, if any. --> Another way to do this would be for pytorch to make a public API that provides this information, since it's available in `native_functions.yaml` at build time. I think this is less nice since it should be possible to integrate with python's native introspection facilities.
triaged,module: pybind,module: language binding
low
Critical
508,645,041
vscode
Provision trusted domains for enterprise setup
As referenced in https://github.com/microsoft/vscode/issues/80595#issuecomment-539794968, currently there's no way to pre-configure VS Code with a list of trusted domains. This is a common case in enterprise setup.
feature-request,workbench-link
low
Major
508,659,578
rust
Quadratic slowdown for lookup of bounds on associated types.
There are two factors here, which we've spotted combined in the wild: 1. we hoist all `type X: ...;` bounds to the trait-level, i.e.: ```rust trait Trait { type X: A + B; type Y: C + D; } ``` is sugar for: ```rust trait Trait where Self::X: A, Self::X: B, Self::Y: C, Self::Y: D, { type X; type Y; } ``` * I doubt we can do much about this, perhaps group/index the bounds? 2. lookup of `<_ as Foo>::X: Bar` in `Foo`'s `where` clauses appears to be quadratic * this wouldn't have a noticeable impact without 1. * probably easier to fix, I would've assumed it was already linear <hr/> And here's our stress test - apologies for the macro, but it takes *a lot* to push it into multiple seconds (the version in the wild had more sophisticated, and actually useful, macros): ```rust macro_rules! stress { (type $name:ident: $($i:expr,)*) => { type $name: $(From<[(); $i]> + Into<[(); $i]> +)*; }; (fn $($underscore:tt)*) => { const _: () = { $({fn _f<T: Trait>() { #[derive(Copy, Clone)] struct Foo<X>(X); let $underscore = Foo(T::X::default()).clone(); }})* }; }; } trait Trait { type X: Copy + Default; // The bounds would normally be on many separate // associated types, but that makes no difference. stress!(type _Unused: 00, 01, 02, 03, 04, 05, 06, 07, 08, 09, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, // Uncomment to (almost) quadruple compile time: //20, 21, 22, 23, 24, 25, 26, 27, 28, 29, //30, 31, 32, 33, 34, 35, 36, 37, 38, 39, // Add more to raise the total time to minutes. ); } stress!(fn _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ); ``` Using `-Zself-profile` and `summarize`, we get this (times are total "self" times): | Query | 20 (`00`-`19`) | 40 (`00`-`39`) | Slowdown | |---|---|---|---| | `type_op_prove_predicate` | 2.52s | 9.17s | 3.64x | | `typeck_tables_of` | 1.78s | 5.83s | 3.28x | | `check_item_well_formed` | 1.43s | 4.47s | 3.13x | cc @rust-lang/wg-traits
A-trait-system,I-compiletime,A-associated-items,T-compiler
low
Major
508,665,816
pytorch
support class annotations in __init__
```
class MyMod(torch.nn.Module):
    def __init__(self):
        self.foo : List[int] = []
```
should work.

cc @suo
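For contrast, a small sketch of the spellings that are commonly accepted today (as opposed to the annotated assignment in `__init__` requested above); the module name and `forward` body are only illustrative:

```python
from typing import List

import torch

class MyMod(torch.nn.Module):
    foo: List[int]  # class-level annotation: a commonly supported spelling

    def __init__(self):
        super().__init__()
        self.foo = []
        # Alternatively: self.foo = torch.jit.annotate(List[int], [])

    def forward(self, x: int) -> int:
        return x + len(self.foo)

m = torch.jit.script(MyMod())
```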
oncall: jit,triaged,jit-backlog
low
Minor
508,681,230
rust
Reduce the places where `stable` annotations are needed
Looking at the definition of `std::option::Option`, we have the following `stable` annotations: ```rust #[derive(Copy, PartialEq, PartialOrd, Eq, Ord, Debug, Hash)] #[stable(feature = "rust1", since = "1.0.0")] pub enum Option<T> { /// No value #[stable(feature = "rust1", since = "1.0.0")] None, /// Some value `T` #[stable(feature = "rust1", since = "1.0.0")] Some(#[stable(feature = "rust1", since = "1.0.0")] T), } ``` Is it strictly _necessary_ for the stability checker to look at the fields of enum variants (or of structs, for that matter)? It seems to me that `MissingStabilityAnnotations` could be modified to allow stability markings to flow down from at least a variant to its fields, as _changing_ them is not a backwards compatible change and shouldn't ever happen. Even extending an existing enum would be backwards incompatible, so I would even flow from ADT attribute downwards. This would let the definition above to be: ```rust #[derive(Copy, PartialEq, PartialOrd, Eq, Ord, Debug, Hash)] #[stable(feature = "rust1", since = "1.0.0")] pub enum Option<T> { /// No value None, /// Some value `T` Some(T), } ``` This is normally not an issue, but with [the new output in #65421](https://github.com/rust-lang/rust/pull/65421#issuecomment-542869234) we will start showing this code to end users in errors, and it'd be nice if we could eliminate unnecessary boilerplate.
C-enhancement,P-medium,A-stability,T-compiler,F-staged_api
medium
Critical
508,681,730
rust
[Edition vNext] Consider deprecating weird nesting of items
For example, do we really want `mod`s inside a random function? Similarly, IIRC https://youtu.be/LIYkT3p5gTs mentioned that it's unfortunate for incremental compilation that we allow `impl`s of random things inside functions. Though at the same time, allowing all sorts of nesting is convenient for how `?eval` bots work on Discord and IRC -- just put all the code inside a block that's passed to `println!` and it just works.
T-lang,finished-final-comment-period,disposition-postpone,A-maybe-future-edition
medium
Critical
508,683,063
rust
Direct users to crate-type=cdylib in crate-type=dylib documentation
The dylib crate type is well known to be misleading and rarely useful (to put it politely) among rustc developers, but if a user reads about crate types in `rustc --help`, [the reference](https://doc.rust-lang.org/reference/linkage.html), or [the rustc book](https://doc.rust-lang.org/rustc/command-line-arguments.html#--crate-type-a-list-of-types-of-crates-for-the-compiler-to-emit) they won't get much indication of this. This probably contributes to the constant stream of users picking it or recommending it to each other when they really need `cdylib`. Generally this is only prevented if someone with "insider knowledge" is in the same room when the decision is made, and otherwise only corrected when something breaks and the user seeks help or files a bug report. I still believe it would be best to deprecate the crate type entirely but since that probably involves more process and pushback, for now I propose to add a note to all the aforementioned documents that says something like: > As with `lib`, the file produced from a `dylib` crate can only be consumed by the same version of rustc and is not guaranteed to be in any particular format or contain any symbols that other programs could use. If you wish to produce a dynamic library to be loaded dynamically by Rust programs or linked against by other languages, you need to use the `cdylib` crate type.
A-frontend,C-enhancement,A-FFI,T-compiler
low
Critical
508,686,768
flutter
Fix invalid WeakPtr usage in Shell::Setup()
Shell::Setup() is called on the platform thread, but we deref the Engine WeakPtr to get the display refresh rate which requires us to deref it on the UI thread.
engine,P2,team-engine,triaged-engine
low
Minor
508,689,449
flutter
Fix invalid WeakPtr usage in Shell::CreateShellOnPlatformThread()
Shell::CreateShellOnPlatformThread() calls into PlatformView::CreateResourceContext which requires us to be on the IO thread (and so it does so in an IO task) and derefs the PlatformView's WeakPtr inside that lambda, which was created on the platform thread.
engine,P2,team-engine,triaged-engine
low
Minor
508,690,460
flutter
Remove WeakPtr::getUnsafe()
getUnsafe() is used as a stopgap measure to allow us to turn on thread safety checks again for most WeakPtr usages. These are the issues that need fixing to remove getUnsafe(): https://github.com/flutter/flutter/issues/42946 https://github.com/flutter/flutter/issues/42947 https://github.com/flutter/flutter/issues/42948
engine,P2,team-engine,triaged-engine
low
Minor
508,702,842
pytorch
ProcessGroupMPI reports incorrect world size
## πŸ› Bug ## To Reproduce If you add the test method below to `ProcessGroupGlooTest` in test_c10d.py it fails: ``` def test_new_group(self): print ('WORLD SIZE: {}\n'.format(self.world_size)) store = c10d.FileStore(self.file_name, self.world_size) c10d.distributed_c10d.init_process_group( backend='mpi', store=store, world_size=self.world_size, rank=self.rank, timeout=timedelta(seconds=1)) print ('PG WORLD SIZE: {}\n'.format(c10d.distributed_c10d.get_world_size())) self.assertEqual(self.world_size, c10d.distributed_c10d.get_world_size()) ``` `self.world_size` is 4, but `get_world_size` returns 1. ## Expected behavior ProcessGroupMPI should return the world_size that we pass into `init_process_group`. cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @xush6528
oncall: distributed,triaged
low
Critical
508,708,518
pytorch
PyTorch RPC should expose critical metrics to the application.
## πŸš€ Feature Context for Model Parallel: https://github.com/pytorch/pytorch/issues/23110 ## Motivation When applications are using complex distributed primitives like RPC, RRef and Distributed Autograd, debugging issues can be cumbersome. We should have a way of exposing metrics to applications. This could simply be a `def get_metrics() -> Dict[str, int]` API that returns information about various things. The full list of metrics needs to be decided, although a few examples could be number of owner rrefs, number of user rrefs, RPC latency, Distributed autograd latency etc. cc @ezyang @gchanan @zou3519 @jerryzh168 @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @rohan-varma @xush6528
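A sketch of how the proposed API might look from the application side; every name and metric key below is hypothetical and only illustrates the `Dict[str, int]` shape suggested above:

```python
from typing import Dict

def get_metrics() -> Dict[str, int]:
    """Hypothetical stand-in for the proposed RPC metrics API."""
    return {
        "rpc.num_owner_rrefs": 12,
        "rpc.num_user_rrefs": 3,
        "rpc.pending_requests": 0,
        "dist_autograd.num_backward_passes": 7,
    }

# An application could periodically export these to its own monitoring stack:
for name, value in get_metrics().items():
    print(f"{name}={value}")
```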
feature,triaged,module: rpc
low
Critical
508,714,962
pytorch
Add scopes to autograd profiler
## πŸš€ Feature Add scopes within a torchscript model so you can get profile information per scope for both the forward and backward pass. ## Motivation For distributed execution (model parallelism) we want to be able to measure the exact CPU time spent on different parts of a torchscript module so we can shard the model in a reasonable manner to different machines. We have high level "components" that represent shardable units of execution and want to use the autograd profiler to profile per component. ## Pitch We want to add two custom methods into torch so we can describe the scope. Making them in C++ allows us to use them in torchscript as well in python. TorchScript pseudocode ```py def forward(self, x): handle = torch._profiler_scope_enter("foo") x = torch.relu(x) ... torch._profiler_scope_exit(handle) return x ``` Python Sugar ```py def forward(self, x): with profiler.scope("foo"): x = torch.relu(x) ... return x ``` These methods would add scope information to RecordFunction and thus provide it to the autograd Profiler. https://github.com/pytorch/pytorch/blob/master/torch/csrc/autograd/record_function.h The current RECORD_FUNCTION implementation appears to track the lineage from backwards functions to the forward pass via sequence_nr so it should be easy to walk the RecordFunction tree and extract scope information. https://github.com/pytorch/pytorch/blob/master/torch/csrc/autograd/function.h#L104-L116 https://github.com/pytorch/pytorch/blob/master/tools/autograd/gen_variable_type.py#L238 One notable thing is that sequence_nr is currently thread_local. From what I've heard, the autograd/backward pass can be multithreaded and thus we will likely need to extend sequence_nr to include a thread ID as well as the current thread_local sequence_nr. ## Alternatives There's ways to do this when running in the python environment w/ register_backward_hook + a custom nn.Module however there's no equivalent way to do this with torchscript since it doesn't support register_backward_hook. The autograd backward pass can also run multithreaded from my understanding so that would break that approach anyways.
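For illustration, the Python sugar could be a thin `contextlib` wrapper over the two proposed primitives. Note that `torch._profiler_scope_enter/_exit` do not exist yet; they are the hooks this issue is asking for, so this is only a sketch of how the sugar would compose:

```python
import contextlib

import torch

@contextlib.contextmanager
def profiler_scope(name: str):
    # _profiler_scope_enter/_exit are the *proposed* hooks from this issue,
    # not an existing API.
    handle = torch._profiler_scope_enter(name)
    try:
        yield
    finally:
        torch._profiler_scope_exit(handle)

# Intended usage inside forward():
#     with profiler_scope("foo"):
#         x = torch.relu(x)
```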
triaged
low
Major
508,721,555
TypeScript
Expose TypeScript-specific globbing behavior
Today, TypeScript's globber uses a custom strategy that acts differently from other tools. This is an issue for 3rd party users who need to consume TypeScript's APIs, but need to validate that a file belongs to a compilation. #32564 dives into some of the discussion around this. This API doesn't have to be the same as `matchFiles` - but it needs to compose well with the rest of our API.
Suggestion,API,Awaiting More Feedback
low
Minor
508,751,943
flutter
"iproxy cannot be opened because the developer cannot be verified"
**If you're looking for a quick fix, this should work:** https://github.com/flutter/flutter/issues/42969#issuecomment-568303471 ## Steps to Reproduce 1. Going through the macOS install tutorial for Flutter (https://flutter.dev/docs/get-started/install/macos), step-by-step, on a brand new MacBook running 10.15. 2. Any time a `flutter` command is called (eg. `flutter doctor` or `flutter create`), the following warning appears: <img width="532" alt="Screenshot 2019-10-17 at 23 05 55" src="https://user-images.githubusercontent.com/24899791/67052129-25b9b900-f135-11e9-9fa4-39d5bceea97e.png"> ## Logs The process hangs until I acknowledge the window by clicking "Cancel", but the verbose logs don't actually show anything. For example, for flutter doctor logs: ``` ➜ Developer flutter doctor -v [βœ“] Flutter (Channel stable, v1.9.1+hotfix.5, on Mac OS X 10.15 19A582a, locale en-GB) β€’ Flutter version 1.9.1+hotfix.5 at /usr/local/flutter β€’ Framework revision 1aedbb1835 (7 hours ago), 2019-10-17 08:37:27 -0700 β€’ Engine revision b863200c37 β€’ Dart version 2.5.0 [βœ“] Android toolchain - develop for Android devices (Android SDK version 29.0.2) β€’ Android SDK at /Users/lucas/Library/Android/sdk β€’ Android NDK location not configured (optional; useful for native profiling support) β€’ Platform android-29, build-tools 29.0.2 β€’ Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java β€’ Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b49-5587405) β€’ All Android licenses accepted. [βœ“] Xcode - develop for iOS and macOS (Xcode 11.0) β€’ Xcode at /Applications/Xcode.app/Contents/Developer β€’ Xcode 11.0, Build version 11A420a β€’ CocoaPods version 1.8.4 [βœ“] Android Studio (version 3.5) β€’ Android Studio at /Applications/Android Studio.app/Contents β€’ Flutter plugin version 40.2.2 β€’ Dart plugin version 191.8580 β€’ Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b49-5587405) [βœ“] VS Code (version 1.38.1) β€’ VS Code at /Applications/Visual Studio Code.app/Contents β€’ Flutter extension version 3.5.1 [!] Connected device ! No devices available ! Doctor found issues in 1 category. ```
c: crash,tool,platform-mac,β€Žβ€platform-catalina,P2,team-tool,triaged-tool
low
Critical
508,755,801
pytorch
[jit] TorchScript classes don't work in notebooks
In IPython:
```python
@torch.jit.script
class Container(object):
    def __init__(self):
        pass
```
gives
```
TypeError: <module '__main__'> is a built-in class
```
since the `inspect` module can't get the source of a class inside of a notebook

cc @suo
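A common workaround, assuming the only obstacle is that `inspect.getsource` cannot see code typed into a notebook cell, is to write the class to a real module file and import it; the file name below is just for illustration:

```python
source = """
import torch

@torch.jit.script
class Container(object):
    def __init__(self):
        pass
"""

with open("container_mod.py", "w") as f:
    f.write(source)

from container_mod import Container  # inspect can now find the source
```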
oncall: jit,triaged
low
Critical
508,761,953
vscode
Minimap: Render overview ruler decorations (for extension support)
Originally mentioned in the comments here: https://github.com/microsoft/vscode/issues/20934 Rendering overview ruler decorations on the minimap would not only eventually allow the minimap to optionally replace the scrollbar by supporting the same decorations, but would also allow extensions to provide decorations on the minimap.
feature-request,editor-minimap
medium
Critical
508,787,836
flutter
parsing line table prologue at offset 0x6f697463 found unsupported version 0x00
**I can't build ios --release**. ## Steps to Reproduce Pretty similar to https://github.com/flutter/flutter/issues/20685, which is closed and instructed to open a new issue in case problem persists. 1. Happens with both New and Legacy system builds. 2. build apk --release OK 3. build ios --debug OK 4. build ios --release gives the error below ## Logs ``` flutter build ios --release Building br.com.gorges.edu.colegio for device (ios-release)... Automatically signing iOS for device deployment using specified development team in Xcode project: <MY_SIGNING_KEY> Running Xcode build... β”œβ”€Building Dart code... 87,6s β”œβ”€Generating dSYM file... 0,6s β”œβ”€Stripping debug symbols... 0,3s β”œβ”€Assembling Flutter resources... 3,0s └─Compiling, linking and signing... 4,1s Xcode build done. 179,4s Failed to build iOS app Error output from Xcode build: ↳ ** BUILD FAILED ** Xcode's output: ↳ In file included from /Users/egr/OneDrive/Games/flutter/.pub-cache/hosted/pub.dartlang.org/fluttertoast-3.1.3/ios/Classes/UIView+Toas t.m:26: /Users/egr/OneDrive/Games/flutter/.pub-cache/hosted/pub.dartlang.org/fluttertoast-3.1.3/ios/Classes/UIView+Toas t.h:328:19: warning: empty paragraph passed to '@param' command [-Wdocumentation] @param sharedStyle ~~~~~~~~~~~~~~~~~^ /Users/egr/OneDrive/Games/flutter/.pub-cache/hosted/pub.dartlang.org/fluttertoast-3.1.3/ios/Classes/UIView+Toas t.h:343:25: warning: empty paragraph passed to '@param' command [-Wdocumentation] @param allowTapToDismiss ~~~~~~~~~~~~~~~~~~~~~~~^ /Users/egr/OneDrive/Games/flutter/.pub-cache/hosted/pub.dartlang.org/fluttertoast-3.1.3/ios/Classes/UIView+Toas t.h:343:9: warning: parameter 'allowTapToDismiss' not found in the function declaration [-Wdocumentation] @param allowTapToDismiss ^~~~~~~~~~~~~~~~~ /Users/egr/OneDrive/Games/flutter/.pub-cache/hosted/pub.dartlang.org/fluttertoast-3.1.3/ios/Classes/UIView+Toas t.h:343:9: note: did you mean 'tapToDismissEnabled'? @param allowTapToDismiss ^~~~~~~~~~~~~~~~~ tapToDismissEnabled /Users/egr/OneDrive/Games/flutter/.pub-cache/hosted/pub.dartlang.org/fluttertoast-3.1.3/ios/Classes/UIView+Toas t.h:362:20: warning: empty paragraph passed to '@param' command [-Wdocumentation] @param queueEnabled ~~~~~~~~~~~~~~~~~~^ 4 warnings generated. In file included from /Users/egr/OneDrive/Games/flutter/.pub-cache/hosted/pub.dartlang.org/fluttertoast-3.1.3/ios/Classes/Fluttertoas tPlugin.m:2: /Users/egr/OneDrive/Games/flutter/.pub-cache/hosted/pub.dartlang.org/fluttertoast-3.1.3/ios/Classes/UIView+Toas t.h:328:19: warning: empty paragraph passed to '@param' command [-Wdocumentation] @param sharedStyle ~~~~~~~~~~~~~~~~~^ /Users/egr/OneDrive/Games/flutter/.pub-cache/hosted/pub.dartlang.org/fluttertoast-3.1.3/ios/Classes/UIView+Toas t.h:343:25: warning: empty paragraph passed to '@param' command [-Wdocumentation] @param allowTapToDismiss ~~~~~~~~~~~~~~~~~~~~~~~^ /Users/egr/OneDrive/Games/flutter/.pub-cache/hosted/pub.dartlang.org/fluttertoast-3.1.3/ios/Classes/UIView+Toas t.h:343:9: warning: parameter 'allowTapToDismiss' not found in the function declaration [-Wdocumentation] @param allowTapToDismiss ^~~~~~~~~~~~~~~~~ /Users/egr/OneDrive/Games/flutter/.pub-cache/hosted/pub.dartlang.org/fluttertoast-3.1.3/ios/Classes/UIView+Toas t.h:343:9: note: did you mean 'tapToDismissEnabled'? 
@param allowTapToDismiss ^~~~~~~~~~~~~~~~~ tapToDismissEnabled /Users/egr/OneDrive/Games/flutter/.pub-cache/hosted/pub.dartlang.org/fluttertoast-3.1.3/ios/Classes/UIView+Toas t.h:362:20: warning: empty paragraph passed to '@param' command [-Wdocumentation] @param queueEnabled ~~~~~~~~~~~~~~~~~~^ /Users/egr/OneDrive/Games/flutter/.pub-cache/hosted/pub.dartlang.org/fluttertoast-3.1.3/ios/Classes/Fluttertoas tPlugin.m:19:23: warning: unused variable 'viewController' [-Wunused-variable] UIViewController *viewController = ^ /Users/egr/OneDrive/Games/flutter/.pub-cache/hosted/pub.dartlang.org/fluttertoast-3.1.3/ios/Classes/Fluttertoas tPlugin.m:70:21: warning: unused variable 'topPadding' [-Wunused-variable] CGFloat topPadding = window.safeAreaInsets.top; ^ /Users/egr/OneDrive/Games/flutter/.pub-cache/hosted/pub.dartlang.org/fluttertoast-3.1.3/ios/Classes/Fluttertoas tPlugin.m:71:21: warning: unused variable 'bottomPadding' [-Wunused-variable] CGFloat bottomPadding = window.safeAreaInsets.bottom; ^ /Users/egr/OneDrive/Games/flutter/.pub-cache/hosted/pub.dartlang.org/fluttertoast-3.1.3/ios/Classes/Fluttertoas tPlugin.m:48:19: warning: unused variable 'size' [-Wunused-variable] NSNumber *size = call.arguments[@"size"]; ^ 8 warnings generated. In file included from /Users/egr/OneDrive/Games/flutter/.pub-cache/hosted/pub.dartlang.org/fluttertoast-3.1.3/ios/Classes/UIView+Toas t.m:26: /Users/egr/OneDrive/Games/flutter/.pub-cache/hosted/pub.dartlang.org/fluttertoast-3.1.3/ios/Classes/UIView+Toas t.h:328:19: warning: empty paragraph passed to '@param' command [-Wdocumentation] @param sharedStyle ~~~~~~~~~~~~~~~~~^ /Users/egr/OneDrive/Games/flutter/.pub-cache/hosted/pub.dartlang.org/fluttertoast-3.1.3/ios/Classes/UIView+Toas t.h:343:25: warning: empty paragraph passed to '@param' command [-Wdocumentation] @param allowTapToDismiss ~~~~~~~~~~~~~~~~~~~~~~~^ /Users/egr/OneDrive/Games/flutter/.pub-cache/hosted/pub.dartlang.org/fluttertoast-3.1.3/ios/Classes/UIView+Toas t.h:343:9: warning: parameter 'allowTapToDismiss' not found in the function declaration [-Wdocumentation] @param allowTapToDismiss ^~~~~~~~~~~~~~~~~ /Users/egr/OneDrive/Games/flutter/.pub-cache/hosted/pub.dartlang.org/fluttertoast-3.1.3/ios/Classes/UIView+Toas t.h:343:9: note: did you mean 'tapToDismissEnabled'? @param allowTapToDismiss ^~~~~~~~~~~~~~~~~ tapToDismissEnabled /Users/egr/OneDrive/Games/flutter/.pub-cache/hosted/pub.dartlang.org/fluttertoast-3.1.3/ios/Classes/UIView+Toas t.h:362:20: warning: empty paragraph passed to '@param' command [-Wdocumentation] @param queueEnabled ~~~~~~~~~~~~~~~~~~^ 4 warnings generated. 
In file included from /Users/egr/OneDrive/Games/flutter/.pub-cache/hosted/pub.dartlang.org/fluttertoast-3.1.3/ios/Classes/Fluttertoas tPlugin.m:2: /Users/egr/OneDrive/Games/flutter/.pub-cache/hosted/pub.dartlang.org/fluttertoast-3.1.3/ios/Classes/UIView+Toas t.h:328:19: warning: empty paragraph passed to '@param' command [-Wdocumentation] @param sharedStyle ~~~~~~~~~~~~~~~~~^ /Users/egr/OneDrive/Games/flutter/.pub-cache/hosted/pub.dartlang.org/fluttertoast-3.1.3/ios/Classes/UIView+Toas t.h:343:25: warning: empty paragraph passed to '@param' command [-Wdocumentation] @param allowTapToDismiss ~~~~~~~~~~~~~~~~~~~~~~~^ /Users/egr/OneDrive/Games/flutter/.pub-cache/hosted/pub.dartlang.org/fluttertoast-3.1.3/ios/Classes/UIView+Toas t.h:343:9: warning: parameter 'allowTapToDismiss' not found in the function declaration [-Wdocumentation] @param allowTapToDismiss ^~~~~~~~~~~~~~~~~ /Users/egr/OneDrive/Games/flutter/.pub-cache/hosted/pub.dartlang.org/fluttertoast-3.1.3/ios/Classes/UIView+Toas t.h:343:9: note: did you mean 'tapToDismissEnabled'? @param allowTapToDismiss ^~~~~~~~~~~~~~~~~ tapToDismissEnabled /Users/egr/OneDrive/Games/flutter/.pub-cache/hosted/pub.dartlang.org/fluttertoast-3.1.3/ios/Classes/UIView+Toas t.h:362:20: warning: empty paragraph passed to '@param' command [-Wdocumentation] @param queueEnabled ~~~~~~~~~~~~~~~~~~^ /Users/egr/OneDrive/Games/flutter/.pub-cache/hosted/pub.dartlang.org/fluttertoast-3.1.3/ios/Classes/Fluttertoas tPlugin.m:19:23: warning: unused variable 'viewController' [-Wunused-variable] UIViewController *viewController = ^ /Users/egr/OneDrive/Games/flutter/.pub-cache/hosted/pub.dartlang.org/fluttertoast-3.1.3/ios/Classes/Fluttertoas tPlugin.m:70:21: warning: unused variable 'topPadding' [-Wunused-variable] CGFloat topPadding = window.safeAreaInsets.top; ^ /Users/egr/OneDrive/Games/flutter/.pub-cache/hosted/pub.dartlang.org/fluttertoast-3.1.3/ios/Classes/Fluttertoas tPlugin.m:71:21: warning: unused variable 'bottomPadding' [-Wunused-variable] CGFloat bottomPadding = window.safeAreaInsets.bottom; ^ /Users/egr/OneDrive/Games/flutter/.pub-cache/hosted/pub.dartlang.org/fluttertoast-3.1.3/ios/Classes/Fluttertoas tPlugin.m:48:19: warning: unused variable 'size' [-Wunused-variable] NSNumber *size = call.arguments[@"size"]; ^ 8 warnings generated. 1 warning generated. 1 warning generated. 1 warning generated. 4 warnings generated. 1 warning generated. /Users/egr/OneDrive/Games/flutter/.pub-cache/hosted/pub.dartlang.org/firebase_analytics-3.0.3/ios/Classes/Fireb aseAnalyticsPlugin.m:65:50: warning: incompatible pointer to integer conversion sending 'id _Nullable' to parameter of type 'BOOL' (aka 'signed char') [-Wint-conversion] NSNumber *enabled = [NSNumber numberWithBool:call.arguments]; ^~~~~~~~~~~~~~ In module 'Foundation' imported from /Users/egr/AppsSource/colegio/ios/Pods/Headers/Private/FirebaseCore/FIRApp.h:17: /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS12.4.sdk/System/L ibrary/Frameworks/Foundation.framework/Headers/NSValue.h:100:36: note: passing argument to parameter 'value' here + (NSNumber *)numberWithBool:(BOOL)value; ^ 1 warning generated. Building AOT snapshot in release mode (ios-release)... Building App.framework for arm64... Building App.framework for armv7... **Building AOT snapshot in release mode (ios-release)...** 84,8s **Built to build/aot/.** warning: **_parsing line table prologue at offset 0x6f697463 found unsupported version 0x00_** warning: line table parameters mismatch. Cannot emit. 
note: while processing /Users/egr/AppsSource/colegio/build/aot/armv7/snapshot_assembly.o Project /Users/egr/AppsSource/colegio built and packaged successfully. xattr: No such file: /Users/egr/Library/Developer/Xcode/DerivedData/Runner-ckhfxkwmgjyhrsewwelpxeajspef/Build/Intermediates.noindex/ ArchiveIntermediates/Runner/BuildProductsPath/Release-iphoneos/Runner.app Clear ld: framework not found App clang: error: linker command failed with exit code 1 (use -v to see invocation) note: Using new build systemnote: Planning buildnote: Constructing build description Encountered error while building for device. ``` ## Podfile ``` # Using a CDN with CocoaPods 1.7.2 or later can save a lot of time on pod installation, but it's experimental rather than the default. # source 'https://cdn.cocoapods.org/' # Uncomment this line to define a global platform for your project platform :ios, '9.0' # CocoaPods analytics sends network stats synchronously affecting flutter build latency. ENV['COCOAPODS_DISABLE_STATS'] = 'true' project 'Runner', { 'Debug' => :debug, 'Profile' => :release, 'Release' => :release, } def parse_KV_file(file, separator='=') file_abs_path = File.expand_path(file) if !File.exists? file_abs_path return []; end pods_ary = [] skip_line_start_symbols = ["#", "/"] File.foreach(file_abs_path) { |line| next if skip_line_start_symbols.any? { |symbol| line =~ /^\s*#{symbol}/ } plugin = line.split(pattern=separator) if plugin.length == 2 podname = plugin[0].strip() path = plugin[1].strip() podpath = File.expand_path("#{path}", file_abs_path) pods_ary.push({:name => podname, :path => podpath}); else puts "Invalid plugin specification: #{line}" end } return pods_ary end target 'Runner' do # Prepare symlinks folder. We use symlinks to avoid having Podfile.lock # referring to absolute paths on developers' machines. system('rm -rf .symlinks') system('mkdir -p .symlinks/plugins') # Flutter Pods generated_xcode_build_settings = parse_KV_file('./Flutter/Generated.xcconfig') if generated_xcode_build_settings.empty? puts "Generated.xcconfig must exist. If you're running pod install manually, make sure flutter pub get is executed first." end generated_xcode_build_settings.map { |p| if p[:name] == 'FLUTTER_FRAMEWORK_DIR' symlink = File.join('.symlinks', 'flutter') File.symlink(File.dirname(p[:path]), symlink) pod 'Flutter', :path => File.join(symlink, File.basename(p[:path])) end } # Plugin Pods plugin_pods = parse_KV_file('../.flutter-plugins') plugin_pods.map { |p| symlink = File.join('.symlinks', 'plugins', p[:name]) File.symlink(p[:path], symlink) pod p[:name], :path => File.join(symlink, 'ios') } end # Prevent Cocoapods from embedding a second Flutter framework and causing an error with the new Xcode build system. install! 
'cocoapods', :disable_input_output_paths => true post_install do |installer| puts 'Determining pod project minimal deployment target' pods_project = installer.pods_project deployment_target_key = 'IPHONEOS_DEPLOYMENT_TARGET' deployment_targets = pods_project.build_configurations.map{ |config| config.build_settings[deployment_target_key] } minimal_deployment_target = deployment_targets.min_by{ |version| Gem::Version.new(version) } puts 'Minimal deployment target is ' + minimal_deployment_target puts 'Setting each pod deployment target to ' + minimal_deployment_target installer.pods_project.targets.each do |target| target.build_configurations.each do |config| config.build_settings[deployment_target_key] = minimal_deployment_target end end end ``` <!-- If possible, paste the output of running `flutter doctor -v` here. --> ## flutter doctor -v ``` [βœ“] Flutter (Channel unknown, v1.9.1+hotfix.4, on Mac OS X 10.14.6 18G103, locale pt-BR) β€’ Flutter version 1.9.1+hotfix.4 at /Users/egr/OneDrive/Games/flutter β€’ Framework revision cc949a8e8b (3 weeks ago), 2019-09-27 15:04:59 -0700 β€’ Engine revision b863200c37 β€’ Dart version 2.5.0 [βœ“] Android toolchain - develop for Android devices (Android SDK version 28.0.3) β€’ Android SDK at /Users/egr/Library/Android/sdk β€’ Android NDK location not configured (optional; useful for native profiling support) β€’ Platform android-28, build-tools 28.0.3 β€’ Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java β€’ Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b49-5587405) β€’ All Android licenses accepted. [βœ“] Xcode - develop for iOS and macOS (Xcode 10.3) β€’ Xcode at /Applications/Xcode.app/Contents/Developer β€’ Xcode 10.3, Build version 10G8 β€’ CocoaPods version 1.6.0 [βœ“] Android Studio (version 3.5) β€’ Android Studio at /Applications/Android Studio.app/Contents β€’ Flutter plugin version 39.0.3 β€’ Dart plugin version 191.8423 β€’ Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b49-5587405) [βœ“] VS Code (version 1.39.1) β€’ VS Code at /Applications/Visual Studio Code.app/Contents β€’ Flutter extension version 3.5.1 [!] Connected device ! No devices available ! Doctor found issues in 1 category. ```
c: crash,platform-ios,tool,customer: google,P2,team-ios,triaged-ios
low
Critical
508,790,649
pytorch
Problem with jit TorchScript while copying data between GRUs
## πŸ› Bug Not able to create TORCHSCRIPT when copying data between GRU to GRUCell using '.' operator. It not able to identify `.` operator. <!-- A clear and concise description of what the bug is. --> ## To Reproduce Steps to reproduce the behavior: 1. [colab](https://colab.research.google.com/drive/1A4IERK4j-veOuunrcO-sytuednfhwSlk) <!-- If you have a code sample, error messages, stack traces, please provide it here as well --> ```python import torch import torch.nn as nn class Gru(nn.Module): def __init__(self,input_size, hidden_size): super().__init__() self.gru_cell = nn.GRUCell(input_size, hidden_size) def forward(self, gru): self.gru_cell.weight_hh.data = gru.weight_hh_l0.data self.gru_cell.weight_ih.data = gru.weight_ih_l0.data self.gru_cell.bias_hh.data = gru.bias_hh_l0.data self.gru_cell.bias_ih.data = gru.bias_ih_l0.data return self.gru_cell gru_ = Gru(512,512) rnn = nn.GRU(512, 512) gru_cell = gru_(rnn) s = torch.jit.script(gru_) ``` ``` RuntimeError: unknown:0: expecting kind 'variable' but found '.' at <ipython-input-8-335cd824f51c>:10:7 def forward(self): self.gru_cell.weight_hh.data = self.gru.weight_hh_l0.data ~~~~~~~~~~~~~~~~~~~~ <--- HERE self.gru_cell.weight_ih.data = self.gru.weight_ih_l0.data ``` ## Expected behavior Run successfully <!-- A clear and concise description of what you expected to happen. --> ## Environment - PyTorch Version (e.g., 1.0): 1.2.0 - OS (e.g., Linux): Ubuntu 18.04.3 LTS - How you installed PyTorch (`conda`, `pip`, source): pip - Build command you used (if compiling from source): - Python version: 3.6 - CUDA/cuDNN version: 10.0.130 / 7.6.3 - GPU models and configuration: - Any other relevant information: [pip3] numpy==1.16.5 [pip3] torch==1.2.0 [pip3] torchsummary==1.5.1 [pip3] torchtext==0.3.1 [pip3] torchvision==0.4.0 ## Additional context <!-- Add any other context about the problem here. --> cc @suo
triage review,oncall: jit,triaged
low
Critical
508,806,530
youtube-dl
US Commerce Senate Hearing not downloading
## Checklist - [ x] I'm reporting a new site support request - [ x] I've verified that I'm running youtube-dl version **2019.10.16** - [ x] I've checked that all provided URLs are alive and playable in a browser - [ x] I've checked that none of provided URLs violate any copyrights - [ x] I've searched the bugtracker for similar site support requests including closed ones ## Example URLs https://www.commerce.senate.gov/2017/10/the-commercial-satellite-industry-what-s-up-and-what-s-on-the-horizon ## Description US Senate Hearing uses Akamai Media Player with a "blob:Https...." javascript local file, not sure what to do, just lots of errors
site-support-request
low
Critical
508,828,146
go
crypto/x509: select a certificate store for systemVerify on Windows
### What version of Go are you using (`go version`)? <pre> $ go version go version go1.13 linux/amd64 </pre> ### Does this issue reproduce with the latest release? Yes. ### What operating system and processor architecture are you using (`go env`)? Windows. ### What did you do? Attempted to build a set of certificate chains for a certificate: ```Go var cert *x509.Certificate ... intermediates := x509.NewCertPool() intermediates.AddCert(intermediate) vo := x509.VerifyOptions{ Roots: nil, // Use system roots Intermediates: intermediates, } chains, err := cert.Verify(vo) ``` ### What did you expect to see? Complete chains when running as a user or as the system user. ### What did you see instead? systemVerify() in crypto.x509.root_windows.go always passes HCCE_CURRENT_USER as the hChainEngine argument to syscall.CertGetCertificateChain() (syscall.Handle(0) in the first argument). This means that chain lookups as the system user will fail for us, because the required certificates are stored in HCCE_LOCAL_MACHINE (syscall.Handle(1)). It's possible to pass an additional store as argument 4 to the syscall. This argument is currently storeCtx.Store, which is nil but it might be possible for createStoreContext to populate it with a HCERTSTORE pointing to HCCE_LOCAL_MACHINE. Alternately, the store preference could be specified in VerifyOptions.
OS-Windows,NeedsInvestigation
low
Major
508,844,182
flutter
WebView JavaScriptChannelHandler didReceiveScriptMessage should JSON-encode collections
### Problem
Currently, in JavaScriptChannelHandler.m, when the `message.body` received by `- (void)userContentController:didReceiveScriptMessage:` is a Dictionary or an Array, the message forwarded to Dart is a string that is hard to deserialize, like the one below.
```json
{
    action = do_somethineg;
    data = {
        key_1 = some_info;
        key_2 = "some_info";
    };
    timestamp = 1570005270866;
    token = "jwt_token";
}
```
### Suggested modification
```objective-c
NSString *body;
if ([message.body isKindOfClass:[NSDictionary class]] || [message.body isKindOfClass:[NSArray class]]) {
    NSData *jsonData = [NSJSONSerialization dataWithJSONObject:message.body
                                                       options:NSJSONWritingPrettyPrinted
                                                         error:nil];
    body = [[NSString alloc] initWithData:jsonData encoding:NSUTF8StringEncoding];
}
if (body.length <= 0) {
    body = [NSString stringWithFormat:@"%@", message.body];
}
NSDictionary* arguments = @{
    @"channel" : _javaScriptChannelName,
    @"message" : body
};
[_methodChannel invokeMethod:@"javascriptChannelMessage" arguments:arguments];
```
Hope this gets updated soon πŸ‘πŸΌ
platform-ios,p: webview,package,c: proposal,P3,p: requires breaking change,team-ios,triaged-ios
low
Critical
508,855,105
pytorch
nn.parallel.replicate in v1.1+ is much slower than v1.0
[https://github.com/pytorch/pytorch/issues/28212#issue-508340762](https://github.com/pytorch/pytorch/issues/28212#issue-508340762) I ran nn.DataParallel with only the replicate operation and found that it is much slower in v1.3 than in v1.0. For v1.0, the GPU utilization is ![image](https://user-images.githubusercontent.com/28811637/67066706-41a97300-f1a6-11e9-86c0-a9c23b49a1ab.png) and for v1.3 it is ![image](https://user-images.githubusercontent.com/28811637/67066761-79b0b600-f1a6-11e9-82da-6b101571c3f3.png) cc @VitalyFedyunin @ngimel @mruberry
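A minimal timing sketch (not part of the original report, and assuming at least two visible GPUs) that isolates the `replicate` call, which may help compare releases:

```python
import time

import torch
import torchvision
from torch.nn.parallel import replicate

model = torchvision.models.resnet50().cuda(0)
devices = [0, 1]

torch.cuda.synchronize()
start = time.time()
for _ in range(10):
    replicate(model, devices)
torch.cuda.synchronize()
print("avg replicate() time: %.3fs" % ((time.time() - start) / 10))
```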
module: performance,triaged,module: data parallel
low
Major
508,867,774
flutter
Create end-to-end test for platform brightness on Android
Create end-to-end test for platform brightness on Android. The Android embedding is responsible for reporting Android's platform brightness to Flutter, either light or dark. This ticket is to setup an end-to-end test that verifies that real platform brightness values are correctly reported from the Android embedding to Flutter over the settings channel.
a: tests,team,engine,P2,team-engine,triaged-engine
low
Minor
508,891,601
godot
Tileset Editor does not support shape transforms
**Godot version:** 3.2 alpha 2 **OS/device including version:** Windows 10 64 bits **Issue description:** Convert to TileSet generates incorrect collision shapes when sprites have ```centered``` set to true. It seems to just take the ```polygon``` property inside the CollisionPolygon2D and use those values to generate the collider. ![image](https://user-images.githubusercontent.com/526829/67071349-a5a54a80-f158-11e9-81a8-0335ee4b1b7b.png) Disabling ```centered``` and manually redrawing the polygon will generate the correct collision shape when converting to tileset. ![image](https://user-images.githubusercontent.com/526829/67072450-47c63200-f15b-11e9-91db-3f567b1f0c21.png) **Steps to reproduce:** - Open TilesetEdit.tscn and convert to TileSet saving as TileSet.tres - Try to edit collision shapes using integrated tileset editor Expected: - Collision shapes should look as in TilesetEdit.tscn Actual: - Collision shapes are offset as if being centered in the top left of the tile **Minimal reproduction project:** https://github.com/godotengine/godot-demo-projects/tree/master/2d/platformer
bug,topic:editor,confirmed
low
Minor
508,893,848
opencv
opencv read model file size 2GB error
I trained a model whose saved file is larger than 2 GB, but reading the model back fails with the following error:
```
Traceback (most recent call last):
  File "try.py", line 86, in <module>
    face_recognizer.read("mingxing3000.xml")
cv2.error: OpenCV(4.1.1) C:\projects\opencv-python\opencv\modules\core\src\persistence.cpp:1456: error: (-215:Assertion failed) ofs == fs_data_blksz[blockIdx] in function 'cv::FileStorage::Impl::normalizeNodeOfs'
```
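A minimal repro sketch, assuming the model is an LBPH face recognizer saved to an XML file larger than 2 GB (the recognizer type is a guess; the report only shows the `read` call and the filename):

```python
import cv2

# Requires opencv-contrib-python for the cv2.face module.
face_recognizer = cv2.face.LBPHFaceRecognizer_create()
face_recognizer.read("mingxing3000.xml")  # fails with the persistence.cpp assertion above
```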
feature,priority: low,category: core
low
Critical
508,900,482
scrcpy
scrcpy white/blank screen (only on second windows desktop)
I have everything working: scrcpy is connected to two Windows desktops via wifi.

- on the main desktop I can see and control my Android device
- the second desktop is also connected and I'm able to control the Android device, but it shows a white screen (so I can't see anything)

I've tried a lower resolution with `scrcpy -m 800 -b 2M` (Nokia 7 Plus, Android 9).

How can I solve this? Thanks in advance :-)
display
low
Minor
508,935,560
kubernetes
OpenAPI definition for AdmissionReview
<!-- Please only use this template for submitting enhancement requests --> **What would you like to be added**: I would like an OpenAPI schema definition for admission controllers for the purposes of generating the API types (and potentially servers or server stubs) in multiple languages. **Why is this needed**: Now that AdmissionReview is promoted to v1, it would be welcome if we could get an OpenAPI schema on how such an API would look like. Currently we have a nice OpenAPI schema that is used to programmatically generate API server clients in a multitude of languages. This is great for writing components that interact directly with the API Server. We do however have the AdmissionReview API where one does not communicate with the component indirectly by watching API resources, but instead the component is called synchronously on API resource changes. For such use-cases it would be nice to have a separate OpenAPI schema as well, where the types can also be generated for a multitude of languages, and maybe even generate servers from them, where you only need to pass in a request handler for example. Right now it would be sufficient to simply generate the OpenAPI spec for the AdmissionReview types though.
sig/api-machinery,kind/feature,lifecycle/frozen
medium
Major
508,941,144
node
fs.chmod behaviour should change to be more expected across platforms
<!-- Thank you for suggesting an idea to make Node.js better. Please fill in as much of the template below as you're able. -->

**Is your feature request related to a problem? Please describe.**

fs.chmod currently behaves a bit oddly across platforms: on Linux and Unix-like systems it behaves as expected for modes <= 0o777, while on Windows it ignores the group and other permission bits and only changes the 'read-only' attribute. This is problematic because:

1. Making a file read-only doesn't really prevent anyone from changing its content; one can simply clear the read-only attribute. So chmod doesn't do what it is meant to do: a user can still change the file if write permission is granted elsewhere outside Node, and calling chmod doesn't restrict usage. This might also be a minor security issue.
2. Users might get the false impression that Node is compatible with Windows' ACL permission policy (regarding group and other permissions), which currently isn't the case because libuv doesn't support it. Because of that, bugs related to other users' permissions might happen.
3. Users who accidentally pass modes greater than 0o777 might experience random platform-dependent bugs. According to the docs, "any value larger than 0o777 may result in platform-specific behaviors that are not supported to work consistently".

**Describe the solution you'd like**

1. fs.chmod will throw a newly created `ERR_FEATURE_UNAVAILABLE_ON_PLATFORM` on Windows.
2. The current Windows logic of fs.chmod will be moved to a separate method, fs.makeReadonly. Related functionality on other platforms should be researched and used; if none is found, `ERR_FEATURE_UNAVAILABLE_ON_PLATFORM` should be thrown.
3. chmod will no longer accept modes greater than 0o777; an exception will be thrown instead. If a new mode is added or discovered in the future, this behavior can change.

**Describe alternatives you've considered**

Any combination of 1, 2, 3.
fs
low
Critical
508,954,991
TypeScript
Future-proof non-aliasing/always-expanding of mapped/intersection/union/etc. types
<!-- 🚨 STOP 🚨 𝗦𝗧𝗒𝗣 🚨 𝑺𝑻𝑢𝑷 🚨 Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Please help us by doing the following steps before logging an issue: * Search: https://github.com/Microsoft/TypeScript/search?type=Issues * Read the FAQ, especially the "Common Feature Requests" section: https://github.com/Microsoft/TypeScript/wiki/FAQ --> ## Search Terms <!-- List of keywords you searched for before creating this issue. Write them down here so that others can find this suggestion more easily --> non-aliasing types ## Suggestion <!-- A summary of what you'd like to see added or changed --> I would like an "officially blessed" way to tell TS if it should give me a type alias, or give me an "expanded" type. (See examples) Now that this issue has been fixed, https://github.com/microsoft/TypeScript/issues/32824 the technique I've been using to force TS to not alias a type will break when I migrate to TS 3.6/3.7 ## Use Cases <!-- What do you want to use this for? What shortcomings exist with current approaches? --> In some cases, seeing `Mapped1<T>/Intersected1<T>/Unioned1<T>` is better for a developer (see examples). In other cases, seeing the full "expanded" type is better (see examples for `Mapped2<T>/Intersected2<T>/Unioned2<T>`) ## Examples ```ts type Identity<T> = T; type Mapped1<T> = { [k in keyof T]: [T[k], "mapped"] }; type Mapped2<T> = Identity<{ [k in keyof T]: [T[k], "mapped"] }>; declare function foo1(): Mapped1<{ x: string, y: number }>; declare function foo2(): Mapped2<{ x: string, y: number }>; /* Aliased, bad const m1: Mapped1<{ x: string; y: number; }> */ const m1 = foo1(); /* Non-aliased, good const m2: { x: [string, "mapped"]; y: [number, "mapped"]; } */ const m2 = foo2(); //============================================================== type Intersected1<T> = T & { hi: string }; type Intersected2<T> = Identity<T & { hi: string }>; declare function bar1(): Intersected1<{ x: string, y: number }>; declare function bar2(): Intersected2<{ x: string, y: number }>; /* const i1: Intersected1<{ x: string; y: number; }> */ const i1 = bar1(); /* const i2: { x: string; y: number; } & { hi: string; } */ const i2 = bar2(); //============================================================== type Unioned1<T> = T | { hi: string }; type Unioned2<T> = Identity<T | { hi: string }>; declare function baz1(): Unioned1<{ x: string, y: number }>; declare function baz2(): Unioned2<{ x: string, y: number }>; /* const u1: Unioned1<{ x: string; y: number; }> */ const u1 = baz1(); /* const u2: { hi: string; } | { x: string; y: number; } */ const u2 = baz2(); ``` 
[Playground](http://www.typescriptlang.org/play/?ts=3.7-Beta&ssl=1&ssc=1&pln=74&pc=19#code/C4TwDgpgBAkgJhAdsAlqAPAFQHxQLxSYDcAUCaJFALICGYkcAjFrgQN5QDaA1lColG4QQAewBmhALoAuLph6SANFABEAWzoMVkqAF9SFaLXoQ4AJhb5YCZGhDoOPPgKGiJmGXIXL1m09r1sUhIEAGMAGxoAJ2gxAFdEUNQRATEREUYACgBKWWMGZg4AD1kAZ2Ao-gBzZRBZRDi1ACMIKMDSMMiYqHjE5NT0sxy8v3MHKBKocsrEGqg6qAbm1vayAHoAKhIAQXCUGlLTZSaaOBJQlPKoNUYRkyYHEihnibKK6tIX+frGlqjSXTYEgbNbnS7Aa6MKxpDI5UibEgAORSAFoaHsDkcoFV0mcLogrmozLI2E8XpNONNqj4NPdtJ8XgtOEs-jTRvSSLpgaD8YSzNDBnD1ms8KKxeKJZKpdKZbKpWRDLBkK1DklTMwcFZMFAAGRQDgACxQbxmVT0BnA0BgyqiquApgsmoI8CQqAw2r1huNU3es1WIQgEWisQSSRQKSgJyiWVySvttsD9oexRN1O+i1+K0BHUDXRDfXDAijQ1j1vjdod40mVNmtR+yza2fWW15EJQtzjKsT6seX2rvqqDOeCxZrQBQJBYIJbahBCjMfhLfBfGJ+rJz37pqH6dH-05urXXyNqdmAO5U6uKH5c+iJeCaxFcqfz5fcoVlqgAFVEIWe07CFAAA++pQMePqmua5Aft+v5jP+Lq2O6QEgWBNZmk2AZBt0vRhhGJwAF4xrIMEpD2KbgWmI6Zo2QSYXmPShv0kY0PhJbET+pFjORaF1hmDb+girZQHEHYkYgZHrq8FGnpJVENuO55CSJVgEQuJCCcucSrqSR7emhALITp5InoOsn1n8Z6Tkp17MaxcJAA) ## Checklist My suggestion meets these guidelines: * [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code * [x] This wouldn't change the runtime behavior of existing JavaScript code * [x] This could be implemented without emitting different JS based on the types of the expressions * [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.) * [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals). ## Related https://github.com/microsoft/TypeScript/issues/32824#issuecomment-521127740 @weswigham brought up a different way to force TS to not alias a type. I prefer my `Identity<T>` trick because it doesn't create a new "temporary" object type.
Suggestion,Needs Proposal
low
Critical
509,005,506
flutter
SIGSEGV during hot reload
## Steps to Reproduce 1. Random change of the running app source code 2. Save, triggers hot reload 3. Crash <!-- Please tell us which target platform(s) the problem occurs (Android / iOS / Web / macOS / Linux / Windows) Which target OS version, for Web, browser, is the test system running? Does the problem occur on emulator/simulator as well as on physical devices? --> **Target Platform: Android 9** **LG G6** ## Logs ``` Performing hot reload... Syncing files to device LGM G600L... F/libc (29643): Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x75ce9cce18 in tid 29664 (Thread-3), pid 29643 (ntsoft.pavement) *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** Build fingerprint: 'lge/lucye_lgu_kr/lucye:9/PKQ1.190522.001/192601604c1e3:user/release-keys' Revision: '12' ABI: 'arm64' pid: 29643, tid: 29664, name: Thread-3 >>> net.pavementsoft.pavement <<< signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x75ce9cce18 x0 00000076c6d80640 x1 0000000000000044 x2 000000009be096d0 x3 0000000000000045 x4 00000000ffffffff x5 00000076d097c360 x6 0000000000000002 x7 00000000000000e4 x8 00000076c6d80640 x9 ffffffff07c4c7d8 x10 000000003e0f4e00 x11 0000000000000004 x12 0000000000000005 x13 0000000000000001 x14 0000000000000bc0 x15 0000000000008886 x16 000000776b6e3bd0 x17 0000000000003800 x18 00000076d097c388 x19 00000076ce84be98 x20 00000076ce84bee8 x21 00000076debe28e0 x22 ffffffff07c4c7d8 x23 0000000000000001 x24 000000000001fec6 x25 00000076d024d250 x26 0000000000000000 x27 00000076c3960df0 x28 00000076e3411ad8 x29 00000076cc4badb0 sp 00000076ce84be50 lr 00000076cfe278d8 pc 00000076cfe279f8 backtrace: #00 pc 00000000015d99f8 /data/app/net.pavementsoft.pavement-GinPXwGjthqn3s_vdqVZvA==/lib/arm64/libflutter.so (offset 0x11c0000) #01 pc 00000000015d9e5c /data/app/net.pavementsoft.pavement-GinPXwGjthqn3s_vdqVZvA==/lib/arm64/libflutter.so (offset 0x11c0000) #02 pc 00000000015c8620 /data/app/net.pavementsoft.pavement-GinPXwGjthqn3s_vdqVZvA==/lib/arm64/libflutter.so (offset 0x11c0000) #03 pc 00000000015c0e0c /data/app/net.pavementsoft.pavement-GinPXwGjthqn3s_vdqVZvA==/lib/arm64/libflutter.so (offset 0x11c0000) #04 pc 00000000016de8e0 /data/app/net.pavementsoft.pavement-GinPXwGjthqn3s_vdqVZvA==/lib/arm64/libflutter.so (offset 0x11c0000) #05 pc 00000000016d5108 /data/app/net.pavementsoft.pavement-GinPXwGjthqn3s_vdqVZvA==/lib/arm64/libflutter.so (offset 0x11c0000) #06 pc 00000000016d56a8 /data/app/net.pavementsoft.pavement-GinPXwGjthqn3s_vdqVZvA==/lib/arm64/libflutter.so (offset 0x11c0000) #07 pc 00000000015be550 /data/app/net.pavementsoft.pavement-GinPXwGjthqn3s_vdqVZvA==/lib/arm64/libflutter.so (offset 0x11c0000) #08 pc 00000000015e8d80 /data/app/net.pavementsoft.pavement-GinPXwGjthqn3s_vdqVZvA==/lib/arm64/libflutter.so (offset 0x11c0000) #09 pc 00000000015e9130 /data/app/net.pavementsoft.pavement-GinPXwGjthqn3s_vdqVZvA==/lib/arm64/libflutter.so (offset 0x11c0000) #10 pc 00000000016fe0f4 /data/app/net.pavementsoft.pavement-GinPXwGjthqn3s_vdqVZvA==/lib/arm64/libflutter.so (offset 0x11c0000) #11 pc 00000000016cd10c /data/app/net.pavementsoft.pavement-GinPXwGjthqn3s_vdqVZvA==/lib/arm64/libflutter.so (offset 0x11c0000) #12 pc 0000000000001668 <anonymous:00000076cd180000> Lost connection to device. ``` <!-- Run `flutter analyze` and attach any output of that command below. If there are any analysis errors, try resolving them before filing this issue. --> ``` D:\dev\pavement>flutter analyze Analyzing pavement... No issues found! 
(ran in 5.0s) ``` <!-- Finally, paste the output of running `flutter doctor -v` here. --> ``` D:\dev\pavement>flutter doctor -v [√] Flutter (Channel stable, v1.9.1+hotfix.4, on Microsoft Windows [Version 10.0.17134.1006], locale en-GB) β€’ Flutter version 1.9.1+hotfix.4 at C:\tools\flutter β€’ Framework revision cc949a8e8b (3 weeks ago), 2019-09-27 15:04:59 -0700 β€’ Engine revision b863200c37 β€’ Dart version 2.5.0 [√] Android toolchain - develop for Android devices (Android SDK version 28.0.3) β€’ Android SDK at c:\tools\Android-SDK β€’ Android NDK location not configured (optional; useful for native profiling support) β€’ Platform android-29, build-tools 28.0.3 β€’ ANDROID_HOME = c:\tools\Android-SDK β€’ Java binary at: C:\tools\android-studio\jre\bin\java β€’ Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b03) β€’ All Android licenses accepted. [√] Android Studio (version 3.5) β€’ Android Studio at C:\tools\android-studio β€’ Flutter plugin version 40.2.2 β€’ Dart plugin version 191.8580 β€’ Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b03) [!] VS Code, 32-bit edition (version 1.37.1) β€’ VS Code at C:\Program Files (x86)\Microsoft VS Code X Flutter extension not installed; install from https://marketplace.visualstudio.com/items?itemName=Dart-Code.flutter [√] Connected device (1 available) β€’ LGM G600L β€’ LGMG600Lf8469764 β€’ android-arm64 β€’ Android 9 (API 28) ! Doctor found issues in 1 category. ```
c: crash,tool,dependency: dart,t: hot reload,customer: crowd,P2,team-tool,triaged-tool
low
Critical
509,129,757
go
syscall: memory corruption when forking on OpenBSD, NetBSD, AIX, and Solaris
``` #!watchflakes default <- `fatal error: (?:.*\n\s*)*syscall\.forkExec` && (goos == "aix" || goos == "netbsd" || goos == "openbsd" || goos == "solaris") ``` ### What version of Go are you using (`go version`)? <pre> $ go version go version go1.13.2 openbsd/amd64 </pre> ### Does this issue reproduce with the latest release? Yes ### What operating system and processor architecture are you using (`go env`)? <details><summary><code>go env</code> Output</summary><br><pre> $ go env GO111MODULE="" GOARCH="amd64" GOBIN="" GOCACHE="/home/jrick/.cache/go-build" GOENV="/home/jrick/.config/go/env" GOEXE="" GOFLAGS="-tags=netgo -ldflags=-extldflags=-static" GOHOSTARCH="amd64" GOHOSTOS="openbsd" GONOPROXY="" GONOSUMDB="" GOOS="openbsd" GOPATH="/home/jrick/go" GOPRIVATE="" GOPROXY="https://proxy.golang.org,direct" GOROOT="/home/jrick/src/go" GOSUMDB="sum.golang.org" GOTMPDIR="" GOTOOLDIR="/home/jrick/src/go/pkg/tool/openbsd_amd64" GCCGO="gccgo" AR="ar" CC="gcc" CXX="g++" CGO_ENABLED="1" GOMOD="" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0" </pre></details> ### What did you do? <!-- If possible, provide a recipe for reproducing the error. A complete runnable program is good. A link on play.golang.org is best. --> I observed these issues in one of my applications, and assumed it was a race or invalid unsafe.Pointer usage or some other fault of the application code. When the 1.13.2 release dropped yesterday I built it from source and observed a similar issue running the regression tests. The failed regression test does not look related to the memory corruption, but I can reproduce the problem by repeatedly running the test in a loop: ``` $ cd test # from go repo root $ while :; do go run run.go -- fixedbugs/issue27829.go || break; done >go.panic 2>&1 ``` It can take several minutes to observe the issue but here are some of the captured panics and fatal runtime errors: https://gist.githubusercontent.com/jrick/f8b21ecbfbe516e1282b757d1bfe4165/raw/6cf0efb9ba47ba869f98817ce945971f2dff47d6/gistfile1.txt https://gist.githubusercontent.com/jrick/9a54c085b918aa32910f4ece84e5aa21/raw/91ec29275c2eb1be49f62ad8a01a5317ad168c94/gistfile1.txt https://gist.githubusercontent.com/jrick/8faf088593331c104cc0da0adb3f24da/raw/7c92e7e7d60d426b2156fd1bdff42e0717b708f1/gistfile1.txt https://gist.githubusercontent.com/jrick/4645316444c12cd815fb71874f6bdfc4/raw/bffac2a448b07242a538b77a2823c9db34b6ef6f/gistfile1.txt https://gist.githubusercontent.com/jrick/3843b180670811069319e4122d32507a/raw/0d1f897aa25d91307b04ae951f1b260f33246b61/gistfile1.txt https://gist.githubusercontent.com/jrick/99b7171c5a49b4b069edf06884ad8e17/raw/740c7b9e8fa64d9ad149fd2669df94e89c466927/gistfile1.txt Additionally, I observed `go run` hanging (no runtime failure due to deadlock) and it had to be killed with SIGABRT to get a trace: https://gist.githubusercontent.com/jrick/d4ae1e4355a7ac42f1910b7bb10a1297/raw/54e408c51a01444abda76dc32ac55c2dd217822b/gistfile1.txt It may not matter which regression test is run as the errors also occur in run.go.
OS-NetBSD,OS-OpenBSD,OS-Solaris,NeedsInvestigation,OS-AIX,compiler/runtime
high
Critical
509,150,711
rust
Confusing error msg without indication of where the error occured
Although admittedly pretty new to Rust, the notes in this error message doesn't help me in anything other than general terms: ``` error[E0495]: cannot infer an appropriate lifetime for lifetime parameter `'a` due to conflicting requirements | note: first, the lifetime cannot outlive the lifetime 'a as defined on the impl at 10:19... --> src/msgmgr/mod.rs:10:19 | 10| pub struct RmbMsg<'a> { | ^^ = note: ...so that the types are compatible: expected msgmgr::RmbMsg<'a> found msgmgr::RmbMsg<'_> = note: but, the lifetime must be valid for the static lifetime... = note: ...so that the types are compatible: expected std::clone::Clone found std::clone::Clone ``` My primary issue is I don't know where the `found msgmgr::RmbMsg<'_>` line in the note: is occurring. The secondary issue revolves around the ``` = note: ...so that the types are compatible: expected std::clone::Clone found std::clone::Clone ``` which I am pretty sure those two types are compatible: :) ## Meta ``` $ rustc --version --verbose rustc 1.38.0 (625451e37 2019-09-23) binary: rustc commit-hash: 625451e376bb2e5283fc4741caa0a3e8a2ca4d54 commit-date: 2019-09-23 host: x86_64-apple-darwin release: 1.38.0 LLVM version: 9.0 ```
A-diagnostics,A-lifetimes,A-macros,T-compiler,D-confusing
low
Critical
509,161,626
go
proposal: x/text: provide an option for compact number formatting
The Unicode Technical Standard specifies a list of long and short [Compact Number Formats](http://www.unicode.org/reports/tr35/tr35-numbers.html#Compact_Number_Formats) for use in formatting truncated versions of numbers (e.g. 35K, 2M). For reference, the JSON representation of these formats in English can be viewed [here](https://github.com/unicode-cldr/cldr-numbers-modern/blob/master/main/en-US-POSIX/numbers.json#L35-L88). `x/text` implements a lot of the standard, including currency and number formatting, but there isn't currently a way to get compact formats as described. I suggest `x/text/number` be given a `Compact` [Option](https://godoc.org/golang.org/x/text/number#Option) which can be created with either a "long" or "short" typed argument. When this option is provided and the number formatted is suitable for compacting in that locale, the compact version should instead be returned.
Proposal
low
Major
509,167,271
terminal
Conditional settings - i.e. different colors when Administrator
<!-- 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨 I ACKNOWLEDGE THE FOLLOWING BEFORE PROCEEDING: 1. If I delete this entire template and go my own path, the core team may close my issue without further explanation or engagement. 2. If I list multiple bugs/concerns in this one issue, the core team may close my issue without further explanation or engagement. 3. If I write an issue that has many duplicates, the core team may close my issue without further explanation or engagement (and without necessarily spending time to find the exact duplicate ID number). 4. If I leave the title incomplete when filing the issue, the core team may close my issue without further explanation or engagement. 5. If I file something completely blank in the body, the core team may close my issue without further explanation or engagement. All good? Then proceed! --> # Description of the new feature/enhancement As a user, it would be nice to be able to specify different color schemes based on some conditions. <!-- A clear and concise description of what the problem is that the new feature would solve. Describe why and how a user would use this new functionality (if applicable). --> For decades (ever since an errant recursive remove command executed as root) I have used light cyan on dark blue for normal command windows and bright white on red when I am operating as root or Administrator just to remind myself to be extra careful. # Proposed technical implementation details (optional) This notion could be generalized as conditional profiles (or conditional attributes within a profile) to be applied when a specified expression evaluates to true. Conditional profiles (and/or attributes) would be guaranteed to be applied after all unconditional ones so they would override any default specifications, thus eliminating the need to add "else" or inverted duplicate conditionals. Order of evaluation/application of conditionals would be undefined (or, if you really want to get wild, they could be prioritized then undefined ordering among equal priorities). <!-- A clear and concise description of what you want to happen. --> When invoked as Administrator, any profile or attribute that was tagged as conditional based on having Administrator access would be applied; otherwise not. In my use case, any window I opened as me would be pleasantly colored while any window opened as Administrator would be "**remember you're _ROOT_, stupid!**" colored.
Issue-Feature,Area-Settings,Product-Terminal
low
Critical
509,204,006
rust
Investigate replacing most of the debugger pretty-printing scripts with traits.
In the case of `Vec<T>` and `String`, it would be possible to have e.g.: ```rust impl<'a, T> DebuggerView<'a> for Vec<T> { type View = &'a [T]; fn debugger_view(&'a self) -> Self::View { &self[..] } } ``` ```rust impl<'a> DebuggerView<'a> for String { type View = &'a str; fn debugger_view(&'a self) -> Self::View { &self[..] } } ``` We would then need: * support for pretty-printing `&[T]` and `&str` * already exists in our custom pretty-printing scripts * long-term DWARF should allow us to express these directly (see #37504) * a way to encode `<X as DebuggerView>::debugger_view` in `X`'s debuginfo * i.e. a symbol name, or maybe we can make symbol names based on DWARF type IDs? * support for going through `<X as DebuggerView>::debugger_view` for pretty-printing * the method signature above should be relatively simple to call I suspect there are better APIs that could accommodate more complex data structures, but I'm not sure what the limitations of the pretty-printer scripts are, around being able to call arbitrary functions. There's also the possibility that we could encode this sort of information as data, instead of compiling it into functions, but that's probably a lot more work. cc @michaelwoerister
A-debuginfo,E-hard,P-medium,T-compiler,WG-debugging,E-needs-design,E-needs-investigation
medium
Critical
509,204,820
vue
transition-group has stutter when component updated elsewhere
### Version 2.6.10 ### Reproduction link [https://jsfiddle.net/zncxud6q/](https://jsfiddle.net/zncxud6q/) ### Steps to reproduce Use the test1, test2, or test3 buttons to see the transition without stutter. Use the test4, test5, or test6 buttons to see the transition with stutter. ### What is expected? No stutter ### What is actually happening? While the DOM is updating, the transition restarts, even though the portion being updated isn't a child of anything transitioning
bug,transition
low
Minor
509,225,095
rust
Audit uses of pprint in suggestions
The preferred source of user code in suggestions should come from `span_to_snippet`, optionally with `pprint`ing an expression if desired. There are places in the codebase where older code might be using `pprint` directly, which can end up with us suggesting code that doesn't look like what the user already wrote. We should clean this up, ideally while also simplifying the API for "get snippet from span, make a new suggestion string, fall back if needed".
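A sketch of what that simplified helper could look like. It uses the real `SourceMap::span_to_snippet`, but the helper name `snippet_or` is invented, and this is compiler-internal pseudocode rather than a drop-in patch.

```rust
// Hypothetical helper: prefer the user's own text, fall back to pretty-printing.
fn snippet_or(sess: &Session, span: Span, fallback: impl FnOnce() -> String) -> String {
    sess.source_map()
        .span_to_snippet(span)          // real API: Result<String, SpanSnippetError>
        .unwrap_or_else(|_| fallback())
}

// Example call site when building a suggestion string:
// let sugg = format!("&{}", snippet_or(sess, expr.span, || pprust::expr_to_string(expr)));
```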
C-cleanup,A-diagnostics,T-compiler
low
Minor
509,255,334
terminal
Code health: consider splitting Pane class into leaf and parent classes
Idea: split the `Pane` class into `LeafPane` and `ParentPane`, both inheriting from an abstract base `Pane`. Currently the `Pane` class has many members that are used only in leaf mode or only in parent mode. IMHO the proposed split would make the code much cleaner and safer (e.g. against accidentally accessing a terminal when we're not a leaf). Panes still have a long road ahead of them, so this seems worthwhile.
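A rough sketch of the shape this split could take; the names and members below are illustrative stand-ins only and do not mirror the actual Terminal code.

```cpp
#include <memory>

// Illustrative stand-ins for the real Terminal types.
struct TermControl {};
enum class SplitState { Horizontal, Vertical };

// Abstract base shared by both kinds of pane.
struct Pane {
    virtual ~Pane() = default;
    virtual bool IsLeaf() const = 0;
};

// Only leaves own a terminal control.
struct LeafPane final : Pane {
    bool IsLeaf() const override { return true; }
    TermControl control;
};

// Only parents own children and a split orientation.
struct ParentPane final : Pane {
    bool IsLeaf() const override { return false; }
    std::shared_ptr<Pane> firstChild;
    std::shared_ptr<Pane> secondChild;
    SplitState splitState = SplitState::Vertical;
};
```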
Area-UserInterface,Product-Terminal,Issue-Task,Area-CodeHealth
low
Minor
509,297,519
pytorch
[jit] scripted module and user defined methods
Currently, for a scripted module, user-defined methods that return `self` will return an **unscripted** module, yet builtin ones return the scripted one: ```py In [14]: import torch ...: ...: class SimpleNet(torch.nn.Module): ...: def __init__(self, mult): ...: super().__init__() ...: self.register_buffer('mult', torch.as_tensor(mult)) ...: ...: def forward(self, x): ...: return x * self.mult ...: ...: def set_mult(self, mult): ...: self.mult.fill_(mult) ...: return self ...: ...: ...: m = SimpleNet(3) ...: m = torch.jit.script(m) In [17]: print(m) ScriptModule(original_name=SimpleNet) In [18]: print(m.set_mult(2342)) SimpleNet() ``` This could be intended. However, it is pretty unintuitive from the user's perspective, and it leads to weird cases like ```py In [19]: isinstance(m, SimpleNet) Out[19]: False In [20]: isinstance(m.set_mult(2), SimpleNet) Out[20]: True In [21]: isinstance(m.train(), SimpleNet) Out[21]: False In [22]: isinstance(m.set_mult(2).train(), SimpleNet) Out[22]: True In [15]: m.set_mult(44)(3) Out[15]: tensor(132) In [16]: m(3) --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-16-7d05f9f266c1> in <module> ----> 1 m(3) ~/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 539 result = self._slow_forward(*input, **kwargs) 540 else: --> 541 result = self.forward(*input, **kwargs) 542 for hook in self._forward_hooks.values(): 543 hook_result = hook(self, input, result) RuntimeError: forward() Expected a value of type 'Tensor' for argument 'x' but instead found type 'int'. Inferred 'x' to be of type 'Tensor' because it was not annotated with an explicit type. Position: 1 Value: 3 Declaration: forward(ClassType<SimpleNet> self, Tensor x) -> (Tensor) ``` ## General case The problem is more general. If the method returns, e.g., a child module, it will be a normal child rather than a scripted one, unlike what `getattr` returns. cc @suo
oncall: jit,triaged
low
Critical
509,307,804
react
SuspenseList in DevTools could cycle through the Suspense states
We currently have a way to force a fallback on a Suspense boundary in DevTools. It might be cool to have a "play" button or something on SuspenseList that cycles through the states. E.g. if it's "together" mode it shows all the fallbacks and then switches to showing all the content and then back again in a loop. If it's "forwards" it shows all the fallbacks then one at a time and then back to all fallbacks. If it's tail "hidden" or "collapsed" it hides all the ones that are not yet inserted and then inserts one at a time. Could be a nice way to demo/test the loading sequence experience.
Component: Developer Tools,React Core Team
medium
Minor
509,308,221
pytorch
jit script fails with `AttributeError: 'str' object has no attribute 'lineno'`
## πŸ› Bug <!-- A clear and concise description of what the bug is. --> ## To Reproduce minimum failing code: ```python import torch from torch import jit from torch import nn class Bar(nn.Module): def __init__(self, a): super(Bar, self).__init__() self.a = a @torch.no_grad() def forward(self, x): return x if __name__ == '__main__': a = Bar(1) jit.script(a) ``` Run the code above in **Python 2**. The error message is ```console File "script_str_error.py", line 20, in <module> jit.script(a) File "/Users/zaf/Git/pytorch2/torch/jit/__init__.py", line 1239, in script return torch.jit.torch.jit._recursive.recursive_script(obj) File "/Users/zaf/Git/pytorch2/torch/jit/_recursive.py", line 508, in recursive_script return create_script_module(nn_module, infer_methods_to_compile(nn_module)) File "/Users/zaf/Git/pytorch2/torch/jit/_recursive.py", line 491, in infer_methods_to_compile stubs.append(make_stub_from_method(nn_module, method)) File "/Users/zaf/Git/pytorch2/torch/jit/_recursive.py", line 41, in make_stub_from_method return make_stub(func) File "/Users/zaf/Git/pytorch2/torch/jit/_recursive.py", line 34, in make_stub ast = torch.jit.get_jit_def(func, self_name="RecursiveScriptModule") File "/Users/zaf/Git/pytorch2/torch/jit/frontend.py", line 169, in get_jit_def return build_def(ctx, py_ast.body[0], type_line, self_name) File "/Users/zaf/Git/pytorch2/torch/jit/frontend.py", line 199, in build_def param_list = build_param_list(ctx, py_def.args, self_name) File "/Users/zaf/Git/pytorch2/torch/jit/frontend.py", line 220, in build_param_list ctx_range = ctx.make_range(expr.lineno, expr.col_offset - 1, expr.col_offset + len(expr.arg)) AttributeError: 'str' object has no attribute 'lineno' ``` <!-- If you have a code sample, error messages, stack traces, please provide it here as well --> ## Expected behavior <!-- A clear and concise description of what you expected to happen. --> This code works in **Python 3**, and I was expecting it to behave similarly in 2. ## Environment Please copy and paste the output from our [environment collection script](https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py) (or fill out the checklist below manually). Collecting environment information... PyTorch version: 1.4.0a0+a5ac7f6 Is debug build: No CUDA used to build PyTorch: None OS: Mac OSX 10.15 GCC version: Could not collect CMake version: version 3.14.4 Python version: 2.7 Is CUDA available: No CUDA runtime version: No CUDA GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA Versions of relevant libraries: [pip] numpy==1.16.5 [pip] torch==1.4.0a0+a5ac7f6 [conda] Could not collect - PyTorch Version (e.g., 1.0): **master build** - OS (e.g., Linux): **OSx 10.15** - How you installed PyTorch (`conda`, `pip`, source): **source** - Build command you used (if compiling from source): **`python setup.py develop`** - Python version: **2.7** - CUDA/cuDNN version: **None** - GPU models and configuration: **None** - Any other relevant information: ## Additional context cc @suo
oncall: jit,triaged
low
Critical
509,321,994
flutter
Documentation for how to share assets from the host app to the flutter module
cc @gaaclarke, @matthew-carroll
framework,engine,d: api docs,a: existing-apps,P2,team-engine,triaged-engine
low
Minor
509,335,700
pytorch
`torch.nn.Module._load_state_dict` catch-all error message can be misleading
## πŸ› Bug `torch.nn.Module._load_state_dict` loads parameter values by an in-place `copy_` operation on the module parameters. Sometimes the parameter values may be initialized to an expanded tensor, in which case the `copy_` call fails with a catch-all error message, ``` RuntimeError: Error(s) in loading state_dict for Foo: While copying the parameter named "bar", whose dimensions in the model are torch.Size([2, 3]) and whose dimensions in the checkpoint are torch.Size([2, 3]). ``` ## To Reproduce ``` import torch class Foo(torch.nn.Module): def __init__(self): super().__init__() # initialize with placeholder data data = torch.randn(3).expand(2, -1) self.register_parameter("bar", torch.nn.Parameter(data)) # make a model checkpoint foo = Foo() foo.bar = torch.nn.Parameter(torch.randn(2, 3)) state_dict = foo.state_dict() # attempt to load the checkpoint new_foo = Foo() new_foo.load_state_dict(state_dict) ``` ## Expected behavior At the very least the correct error message should be displayed, ``` *** RuntimeError: unsupported operation: more than one element of the written-to tensor refers to a single memory location. Please clone() the tensor before performing the operation. ``` Ideally, since the old parameter data is being overwritten anyways, `Module._load_state_dict` should catch this exception and find another way to load the checkpointed data. ## Environment PyTorch version: 1.3.0 Is debug build: No CUDA used to build PyTorch: 10.1.243 OS: Ubuntu 16.04.6 LTS GCC version: (Ubuntu 7.4.0-1ubuntu1~16.04~ppa1) 7.4.0 CMake version: version 3.16.0-rc1 Python version: 3.7 Is CUDA available: Yes CUDA runtime version: 10.1.243 GPU models and configuration: GPU 0: GeForce GTX 1080 Ti Nvidia driver version: 418.87.01 cuDNN version: Could not collect Versions of relevant libraries: [pip] gpytorch==0.3.6 [pip] numpy==1.17.2 [pip] torch==1.3.0 [pip] torchvision==0.4.1a0+d94043a [conda] blas 1.0 mkl [conda] gpytorch 0.3.6 dev_0 <develop> [conda] mkl 2019.4 243 [conda] mkl-service 2.3.0 py37he904b0f_0 [conda] mkl_fft 1.0.14 py37ha843d7b_0 [conda] mkl_random 1.1.0 py37hd6b4f25_0 [conda] pytorch 1.3.0 py3.7_cuda10.1.243_cudnn7.6.3_0 pytorch [conda] torchvision 0.4.1 py37_cu101 pytorch ## Additional context This use case can occur when parameter values are initialized with placeholder values, or when an expanded view of the tensor is necessary for broadcasted operations.
triaged
low
Critical
509,339,721
pytorch
Seg-fault in LayerNormKernelImpl
## πŸ› Bug While implementing a recurrent neural network with Layer normalization on the GPU I get a segmentation fault in the kernel implementation of the normalization. The model runs fine on CPU. ## To Reproduce Steps to reproduce the behavior: 1. create a model with layerNormalization and try to run it on GPU 2. run forward pass Error message in debugger is: Thread 1 "python" received signal SIGSEGV, Segmentation fault. 0x00007fffbc6607e2 in at::native::(anonymous namespace)::LayerNormKernelImpl(at::Tensor const&, at::Tensor const&, at::Tensor const&, long, long, double, at::Tensor*, at::Tensor*, at::Tensor*) () from [...]/lib/python3.7/site-packages/torch/lib/libtorch.so ## Expected behavior not crash ## Environment PyTorch version: 1.2.0 Is debug build: No CUDA used to build PyTorch: 10.0.130 OS: Ubuntu 18.04.3 LTS GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0 CMake version: Could not collect Python version: 3.7 Is CUDA available: Yes CUDA runtime version: Could not collect GPU models and configuration: GPU 0: GeForce RTX 2080 Ti Nvidia driver version: 418.87.00 cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.4 Versions of relevant libraries: [pip] numpy==1.17.2 [pip] numpydoc==0.9.1 [pip] torch==1.2.0 [pip] torchvision==0.4.0a0 [conda] _pytorch_select 0.2 gpu_0 [conda] blas 1.0 mkl [conda] mkl 2019.4 243 [conda] mkl-service 2.3.0 py37he904b0f_0 [conda] mkl_fft 1.0.14 py37ha843d7b_0 [conda] mkl_random 1.1.0 py37hd6b4f25_0 [conda] pytorch 1.2.0 cuda100py37h938c94c_0 [conda] torchvision 0.4.0 cuda100py37hecfc37a_0 not sure why the script misses the cuda version: copied from nvidia-smi: NVIDIA-SMI 418.87.00 Driver Version: 418.87.00 CUDA Version: 10.1 cc @ngimel
needs reproduction,module: crash,module: cuda,triaged
low
Critical
509,343,806
pytorch
Unable to install pytorch with cuda 10.0 using conda
## πŸ› Bug Corrupted archive of pytorch 1.3.0 with cuda 10.0 ## To Reproduce Steps to reproduce the behavior: install pytorch with cuda 10.0 in conda ## Expected behavior installation of pytorch with cuda 10.0 ## Environment Collecting environment information... PyTorch version: N/A Is debug build: N/A CUDA used to build PyTorch: N/A OS: Debian GNU/Linux 9.11 (stretch) GCC version: (Debian 6.3.0-18+deb9u1) 6.3.0 20170516 CMake version: Could not collect Python version: 3.7 Is CUDA available: N/A CUDA runtime version: 10.0.130 GPU models and configuration: GPU 0: Tesla T4 Nvidia driver version: 410.104 cuDNN version: Could not collect Versions of relevant libraries: [pip3] intel-numpy==1.15.1 [pip3] numpy==1.15.1 [pip3] torch==1.2.0 [pip3] torchvision==0.4.0 [conda] Could not collect ## Additional context $ conda install pytorch torchvision cudatoolkit=10.0 -c pytorch Collecting package metadata (current_repodata.json): done Solving environment: done ## Package Plan ## environment location: /home/blah/.conda/envs/open-mmlab added / updated specs: - cudatoolkit=10.0 - pytorch - torchvision The following packages will be downloaded: package | build ---------------------------|----------------- pytorch-1.3.0 |py3.7_cuda10.0.130_cudnn7.6.3_0 454.1 MB pytorch ------------------------------------------------------------ Total: 454.1 MB The following NEW packages will be INSTALLED: blas pkgs/main/linux-64::blas-1.0-mkl cffi pkgs/main/linux-64::cffi-1.12.3-py37h2e261b9_0 cudatoolkit pkgs/main/linux-64::cudatoolkit-10.0.130-0 freetype pkgs/main/linux-64::freetype-2.9.1-h8a8886c_1 intel-openmp pkgs/main/linux-64::intel-openmp-2019.4-243 jpeg pkgs/main/linux-64::jpeg-9b-h024ee3a_2 libgfortran-ng pkgs/main/linux-64::libgfortran-ng-7.3.0-hdf63c60_0 libpng pkgs/main/linux-64::libpng-1.6.37-hbc83047_0 libtiff pkgs/main/linux-64::libtiff-4.0.10-h2733197_2 mkl pkgs/main/linux-64::mkl-2019.4-243 mkl-service pkgs/main/linux-64::mkl-service-2.3.0-py37he904b0f_0 mkl_fft pkgs/main/linux-64::mkl_fft-1.0.14-py37ha843d7b_0 mkl_random pkgs/main/linux-64::mkl_random-1.1.0-py37hd6b4f25_0 ninja pkgs/main/linux-64::ninja-1.9.0-py37hfd86e86_0 numpy pkgs/main/linux-64::numpy-1.17.2-py37haad9e8e_0 numpy-base pkgs/main/linux-64::numpy-base-1.17.2-py37hde5b4d6_0 olefile pkgs/main/linux-64::olefile-0.46-py37_0 pillow pkgs/main/linux-64::pillow-6.2.0-py37h34e0f95_0 pycparser pkgs/main/linux-64::pycparser-2.19-py37_0 pytorch pytorch/linux-64::pytorch-1.3.0-py3.7_cuda10.0.130_cudnn7.6.3_0 six pkgs/main/linux-64::six-1.12.0-py37_0 torchvision pytorch/linux-64::torchvision-0.4.1-py37_cu100 zstd pkgs/main/linux-64::zstd-1.3.7-h0b5b093_0 Proceed ([y]/n)? y Downloading and Extracting Packages pytorch-1.3.0 | 454.1 MB | ##################################################################################################################7 | 68% pytorch-1.3.0 | 454.1 MB | ################################################################################################################### | 69% pytorch-1.3.0 | 454.1 MB | ######################################################################################################################################################################## | 100% InvalidArchiveError("Error with archive /home/blah/.conda/pkgs/pytorch-1.3.0-py3.7_cuda10.0.130_cudnn7.6.3_0.tar.bz2. You probably need to delete and re-download or re-create this file. Message from libarchive was:\n\nFailed to create dir 'lib/python3.7/site-packages/torch/for_onnx' (errno=2, retcode=-25, archive_p=94813630959696)")
module: build,triaged
low
Critical
509,354,939
TypeScript
Creating too many types for DOM & D3
Commit 06fe1ed53721862b56c61ebb89b1daee2bc10710 introduced a regression in the number of types created when compiling a snippet of `@types/d3`. Specifically, it appears to be elaborating types for an error message that ends up getting dropped. Repro: ```json { "compileOnSave": false, "compilerOptions": { "alwaysStrict": true, "noImplicitAny": true, "noEmit": true, "lib": [ "es5", "dom" ], "noResolve": true, "types": [], "skipLibCheck": true }, "files": [ "test.ts" ] } ``` test.ts ```ts declare function select<GElement extends BaseType, OldDatum>(selector: string): Selection2<GElement, OldDatum, HTMLElement, any>; type BaseType = Element | Document | Window | null | number; type ValueFn<T extends BaseType, Datum, Result> = (this: T, datum: Datum, index: number, groups: T[] | ArrayLike<T>) => Result; interface Selection2<GElement extends BaseType, Datum, PElement extends BaseType, PDatum> { select<DescElement extends BaseType>(selector: ValueFn<GElement, Datum, DescElement>): Selection2<DescElement, Datum, PElement, PDatum>; } ``` For an apples-to-apples comparison, use `nolib` and specify the 3.1 lib.es5.d.ts and lib.dom.d.ts for both compilations. In round numbers, there are ~150 types without the change and ~2500 types with the change.
Domain: Performance
low
Critical