Columns: id (int64, range 393k–2.82B), repo (string, 68 classes), title (string, 1–936 chars), body (string, 0–256k chars, nullable), labels (string, 2–508 chars), priority (string, 3 classes), severity (string, 3 classes)
552,240,784
terminal
Default color indices set through SetConsoleScreenBufferInfoEx do not propagate to the propsheet
# Environment ```none Windows build number: 10.0.18363.0 ``` # Steps to reproduce 1. Create a new "Windows Console Application" project. 2. Compile and run the code. ```cpp #include <windows.h> #include <stdio.h> int main() { HANDLE hStdOut = GetStdHandle(STD_OUTPUT_HANDLE); if (hStdOut == INVALID_HANDLE_VALUE) { fprintf(stderr, "GetStdHandle() failed."); return 1; } CONSOLE_SCREEN_BUFFER_INFOEX info = { 0 }; info.cbSize = sizeof(info); if (!GetConsoleScreenBufferInfoEx(hStdOut, &info)) { fprintf(stderr, "GetConsoleScreenBufferInfoEx() failed."); return 1; } // Set color #6 as the foreground color and color #3 as the background color. info.wAttributes = FOREGROUND_GREEN | FOREGROUND_RED | BACKGROUND_GREEN | BACKGROUND_BLUE; // Set color #6 as the popup foreground color and color #3 as the popup background color. info.wPopupAttributes = FOREGROUND_GREEN | FOREGROUND_RED | BACKGROUND_GREEN | BACKGROUND_BLUE; if (!SetConsoleScreenBufferInfoEx(hStdOut, &info)) { fprintf(stderr, "SetConsoleScreenBufferInfoEx() failed."); return 1; } return 0; } ``` 3. Open the "Properties" page. 4. Select the "Colors" tab. # Expected behavior Screen colors and popup colors should be consistent with the colors set through the `SetConsoleScreenBufferInfoEx` function. This was the behavior on Windows 10.0.17134.0: ![colors-expected](https://user-images.githubusercontent.com/15797194/72721218-90432080-3b8c-11ea-8921-972bc2ad0fb3.png) # Actual behavior Screen colors and popup colors are set to the default state. ![colors-actual](https://user-images.githubusercontent.com/15797194/72716575-81a43b80-3b83-11ea-9b3d-f4f702575ec3.png) Note that setting the color table through the `SetConsoleScreenBufferInfoEx` function works as expected.
Product-Conhost,Issue-Bug,Area-Settings,Priority-2
low
Critical
552,288,752
pytorch
Segmentation fault in C++ API torch::from_blob(...).clone()
## πŸ› Bug Code crashes in C++ API when torch tensor is read from specific buffer ## To Reproduce build and run this code in release build ```c++ #include <iostream> #include <torch/extension.h> int main() { std::string buffer; buffer.resize((1 << 19) + 128); std::memset(buffer.data(), 0, buffer.size()); size_t start_offset; // This would work fine start_offset = 1 << 10; // This crashes start_offset = (1 << 10) - 1; size_t element_count = 319; std::cerr << buffer.size() << " vs " << torch::elementSize(torch::kInt32) * element_count + start_offset; auto tensor = torch::from_blob(buffer.data() + start_offset, {element_count}, torch::CPU(torch::kInt32)); tensor.clone(); } ``` ```cmake cmake_minimum_required(VERSION 3.12) project(from_blob_sigsegv) set(CMAKE_CXX_STANDARD 17) set(CMAKE_PREFIX_PATH /home/alxmopo3ov/libtorch) find_package(Torch REQUIRED) include_directories(${TORCH_INCLUDE_DIRS}) find_package(Python3 COMPONENTS Interpreter Development) include_directories(${Python3_INCLUDE_DIRS}) add_executable(from_blob_sigsegv main.cpp) target_link_libraries(from_blob_sigsegv ${Python3_LIBRARIES} ${TORCH_LIBRARIES}) ``` <!-- If you have a code sample, error messages, stack traces, please provide it here as well --> ## Expected behavior Don't crash ## Environment Collecting environment information... 
PyTorch version: 1.4.0 Is debug build: No CUDA used to build PyTorch: 10.1 OS: Ubuntu 14.04.6 LTS GCC version: (Ubuntu 7.4.0-1ubuntu1~14.04~ppa1) 7.4.0 CMake version: version 3.12.2 Python version: 3.7 Is CUDA available: No CUDA runtime version: No CUDA GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA Versions of relevant libraries: [pip3] numpy==1.15.1 [pip3] numpydoc==0.8.0 [pip3] torch==1.4.0 [pip3] torchvision==0.2.1 [conda] blas 1.0 mkl [conda] mkl 2019.0 118 [conda] mkl-service 1.1.2 py37h90e4bf4_5 [conda] mkl_fft 1.0.4 py37h4414c95_1 [conda] mkl_random 1.0.1 py37h4414c95_1 [conda] torch 1.4.0 <pip> [conda] torchvision 0.2.1 <pip> cc @yf225
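One plausible explanation (an assumption on the editor's part, not confirmed in the report) is data-pointer alignment: the working offset `1 << 10` is a multiple of the 4-byte `int32` element size, while `(1 << 10) - 1` is not. A minimal Python sketch of that alignment check:

```python
def is_aligned(offset, itemsize=4):
    # True if a byte offset is a multiple of the element size
    # (4 bytes for int32); misaligned pointers can crash vectorized code.
    return offset % itemsize == 0

print(is_aligned(1 << 10))        # True  -- the offset that works
print(is_aligned((1 << 10) - 1))  # False -- the offset that crashes
```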
module: cpp,triaged
low
Critical
552,308,792
godot
You can override the new method to return a different object
**Godot version:** 3.2 RC2 **OS/device including version:** Windows 10 **Issue description:** You can override the `new` constructor method to return a different class. I cannot find any documentation that makes it clear that you can do this, or whether you should. If it is intended to be possible, it should be documented; if not, it should raise an error. **Steps to reproduce:** Add the following script to a node and then run it as a scene. It should output "Item". ``` extends Node func _ready() -> void: print(Test.new()) class Test extends Node: static func new(): return Item.new() func _to_string() -> String: return "Test" class Item: func _to_string() -> String: return "Item" ```
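For comparison, here is a hypothetical Python analogue of the snippet above; the same surprise exists in any language where a constructor-like static factory can be overridden to return another class:

```python
class Item:
    def __str__(self):
        return "Item"

class Test:
    @staticmethod
    def new():
        # Overriding the constructor-like factory to return another class,
        # mirroring the GDScript snippet above.
        return Item()

    def __str__(self):
        return "Test"

print(Test.new())  # prints "Item", not "Test"
```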
discussion,topic:gdscript
low
Critical
552,364,001
flutter
please include scale control option in the google_maps_flutter package
c: new feature,p: maps,package,c: proposal,team-ecosystem,P3,triaged-ecosystem
low
Major
552,378,447
go
testing: testing Log functions do not capture timestamp
<!-- Please answer these questions before submitting your issue. Thanks! For questions please use one of our forums: https://github.com/golang/go/wiki/Questions --> ### What version of Go are you using (`go version`)? <pre> $ go version go version go1.13.6 linux/amd64 </pre> ### Does this issue reproduce with the latest release? Yes ### What operating system and processor architecture are you using (`go env`)? <details><summary><code>go env</code> Output</summary><br><pre> $ go env GO111MODULE="" GOARCH="amd64" GOBIN="" GOCACHE="/home/vimalkum/.cache/go-build" GOENV="/home/vimalkum/.config/go/env" GOEXE="" GOFLAGS="" GOHOSTARCH="amd64" GOHOSTOS="linux" GONOPROXY="" GONOSUMDB="" GOOS="linux" GOPATH="/home/vimalkum/go" GOPRIVATE="" GOPROXY="https://proxy.golang.org,direct" GOROOT="/usr/local/go" GOSUMDB="sum.golang.org" GOTMPDIR="" GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64" GCCGO="gccgo" AR="ar" CC="gcc" CXX="g++" CGO_ENABLED="1" GOMOD="" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build339334134=/tmp/go-build -gno-record-gcc-switches" </pre></details> ### What did you do? <!-- If possible, provide a recipe for reproducing the error. A complete runnable program is good. A link on play.golang.org is best. --> https://play.golang.org/p/525R1YDp8sB Write a test case which calls `t.Log` or `t.Logf`. The output of these logs is generated when a test fails. If I am using a CI for testing, it makes my job much easier if I can correlate test log timestamps with other logs captured by the CI. ### What did you expect to see? The `t.Log` statements should produce timestamps. ### What did you see instead? 
No timestamps ``` 2020/01/20 21:15:05 GetValue called --- FAIL: TestGetValue (0.00s) main_test.go:9: this is log statement main_test.go:11: this is error log statement main_test.go:12: expected 100 FAIL exit status 1 FAIL github.com/vimalk78/test-testing 0.001s ```
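A sketch of the requested output (plain Python, not the Go testing package), assuming the default timestamp format of Go's standard `log` package:

```python
from datetime import datetime

def tlog(msg, now=None):
    # Hypothetical helper: prepend a Go-log-style timestamp to a test log line.
    now = now or datetime.now()
    return now.strftime("%Y/%m/%d %H:%M:%S") + " " + msg

print(tlog("this is log statement", datetime(2020, 1, 20, 21, 15, 5)))
# 2020/01/20 21:15:05 this is log statement
```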
NeedsInvestigation
medium
Critical
552,383,504
opencv
Support audio in OpenCV
Hello! OpenCV and deep learning do not stand still. Systems that process audio and video together are increasingly common. This raises the question of creating an `audioio` module. Plan for integrating the `audioio` module, starting with ALSA (Advanced Linux Sound Architecture) support: * Read/write audio files (for example `wav`) * Support audio input/output streams (standard devices) Your comments are welcome!
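As a sketch of the proposed read/write support (using Python's stdlib `wave`, not any actual OpenCV API), a tiny in-memory round-trip for a mono 16-bit WAV:

```python
import io
import struct
import wave

# Write a 4-frame mono 16-bit WAV into memory...
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)        # 16-bit samples
    w.setframerate(8000)
    w.writeframes(struct.pack("<4h", 0, 1000, -1000, 0))

# ...then read it back and inspect the header.
buf.seek(0)
with wave.open(buf, "rb") as r:
    rate, frames = r.getframerate(), r.getnframes()
print(rate, frames)  # 8000 4
```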
feature
low
Minor
552,383,861
material-ui
choice chips and filter chips as input element
Hello everyone, I didn't find an issue for this so I thought it would be good to open one. In the material.io documentation they mention [choice chips](https://material.io/components/chips/#choice-chips) as a way of selecting a single value out of a set of values. They also mention filter chips as a way to select multiple values to filter content. If I look at these two components, I see a lot of similarities between them. For starters, they both output a value based on a selection, by displaying the options as chips. The actual difference between them is that choice chips are meant for single-value output, while filter chips can output an array. My suggestion is adding these two concepts as one Material-UI component that stores its value in an input. That way you could use it in a form to replace select elements with a short options list, but you could also run your own logic with the onChange on the input. By adding a prop like `multiple` it would support the filter chips concept as well as the choice chips concept. I was planning on trying to create a PR myself, but haven't found the time yet. I believe this is a fairly simple component, though, and a very useful one. What do you guys think?
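A sketch of the selection logic such a combined component might keep behind its input; `toggle` and its signature are hypothetical illustrations, not Material-UI API:

```python
def toggle(selection, value, multiple=False):
    # Filter chips (multiple=True) toggle membership in a list;
    # choice chips (multiple=False) keep at most a single value.
    if multiple:
        if value in selection:
            return [v for v in selection if v != value]
        return selection + [value]
    return [] if selection == [value] else [value]

print(toggle([], "red"))                       # ['red']
print(toggle(["red"], "red"))                  # []
print(toggle(["red"], "blue", multiple=True))  # ['red', 'blue']
```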
new feature
low
Major
552,412,265
go
cmd/go: mod why -m is not strict when parsing an import path
<!-- Please answer these questions before submitting your issue. Thanks! --> ### What version of Go are you using (`go version`)? <pre> $ go version go version go1.13.6 linux/amd64 </pre> ### Does this issue reproduce with the latest release? Yes ### What operating system and processor architecture are you using (`go env`)? <details><summary><code>go env</code> Output</summary><br><pre> $ go env GO111MODULE="on" GOARCH="amd64" GOBIN="/home/manlio/.local/bin" GOCACHE="/home/manlio/.cache/go-build" GOENV="/home/manlio/.config/go/env" GOEXE="" GOFLAGS="" GOHOSTARCH="amd64" GOHOSTOS="linux" GONOPROXY="" GONOSUMDB="github.com/perillo/*" GOOS="linux" GOPATH="/home/manlio/.local/lib/go:/home/manlio/src/go" GOPRIVATE="" GOPROXY="https://proxy.golang.org,direct" GOROOT="/usr/lib/go" GOSUMDB="sum.golang.org" GOTMPDIR="" GOTOOLDIR="/usr/lib/go/pkg/tool/linux_amd64" GCCGO="gccgo" AR="ar" CC="gcc" CXX="g++" CGO_ENABLED="1" GOMOD="/home/manlio/src/go/src/github.com/perillo/database/go.mod" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build853843883=/tmp/go-build -gno-record-gcc-switches" GOROOT/bin/go version: go version go1.13.6 linux/amd64 GOROOT/bin/go tool compile -V: compile version go1.13.6 uname -sr: Linux 5.4.12-arch1-1 /usr/lib/libc.so.6: GNU C Library (GNU libc) stable release version 2.30. gdb --version: GNU gdb (GDB) 8.3.1 </pre></details> ### What did you do? In a project that depends on `golang.org/x/tools/go/packages`, I executed ``` go mod why -m golang.org/x/tools/ ``` ### What did you expect to see? ``` go: malformed import path "golang.org/x/tools/": trailing slash ``` as it is done by `go mod init`. ### What did you see instead? ``` # golang.org/x/tools/ (main module does not need module golang.org/x/tools/) ```
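A sketch of the stricter validation the report asks for (plain Python, not the actual cmd/go source):

```python
def check_module_path(path):
    # Reject a trailing slash, as `go mod init` does; returns an error
    # message or None.
    if path.endswith("/"):
        return 'malformed import path "%s": trailing slash' % path
    return None

print(check_module_path("golang.org/x/tools/"))  # error message
print(check_module_path("golang.org/x/tools"))   # None
```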
help wanted,NeedsFix,modules
low
Critical
552,434,916
godot
Dictionary exporting not showing default value
**Godot version:** 3.1.1 **OS/device including version:** Windows 10 **Issue description:** When exporting a dictionary on a script the default assigned dictionary is not shown on the Inspector as expected. Instead, an empty dictionary is shown. I attached two screenshots, one with the declaration of the dictionary and the key/value pairs that it contains and another of the inspector with the empty dictionary that it shows. ![Screenshot 134 - 20 de enero 18 35](https://user-images.githubusercontent.com/24842781/72747218-49692100-3bb4-11ea-89d7-60a778787cf2.png) ![Screenshot 135 - 20 de enero 18 35](https://user-images.githubusercontent.com/24842781/72747219-49692100-3bb4-11ea-8836-716ca208f6f1.png) This may be related to #10018 but it is said to be fixed. **Steps to reproduce:** 1.- Export a Dictionary on a script. 2.- Assign it a non-empty dictionary. 3.- Save everything and check the exported dictionary. It is empty instead of having said key/values. **Minimal reproduction project:**
bug,topic:editor
low
Major
552,451,730
go
cmd/dist: cmd/internal/objabi/zbootstrap.go is not removed
I noted that on Linux amd64, with go1.14beta1-137-g71239b4f49, after calling `make.bash`, `clean.bash` does not remove the file `cmd/internal/objabi/zbootstrap.go`. I have confirmed the problem using `git status --ignored`.
help wanted,NeedsInvestigation
low
Major
552,464,059
pytorch
[C++] Don't use DeprecatedTypeProperties in torch::utils::reorder_tensors_like
This is a followup to https://github.com/pytorch/pytorch/issues/29161. As per https://github.com/pytorch/pytorch/issues/29161#issuecomment-558308534, DeprecatedTypeProperties needs to be removed from the very last use site before it can be completely gotten rid of. cc @ezyang @gchanan @ngoldbaum
module: internals,triaged
low
Minor
552,483,506
godot
[Bullet] Changing 3D gravity in BulletPhysics does not work as expected
**Godot version:** 3.2 RC 2 **OS/device including version:** Windows 10 / PC **Issue description:** I'm trying to make a game where the gravity changes during runtime. This does not work as expected. - Changing the gravity in _Ready() works fine - Changing the gravity in _Process(float delta) just slows the default gravity vector - Changing the gravity in _PhysicsProcess(float delta) freezes the gravity: ``` public override void _Ready() { // Works as expected Vector3 lNewGravityVec3 = new Vector3(0,1,0); PhysicsServer.AreaSetParam(GetWorld().Space, PhysicsServer.AreaParameter.GravityVector, lNewGravityVec3); } ``` ``` public override void _Process(float delta) { // Just 'slows' the gravity Vector3 lNewGravityVec3 = new Vector3(0,1,0); PhysicsServer.AreaSetParam(GetWorld().Space, PhysicsServer.AreaParameter.GravityVector, lNewGravityVec3); } ``` ``` public override void _PhysicsProcess(float delta) { // Freezes the gravity Vector3 lNewGravityVec3 = new Vector3(0,1,0); PhysicsServer.AreaSetParam(GetWorld().Space, PhysicsServer.AreaParameter.GravityVector, lNewGravityVec3); } ``` **Minimal reproduction project:** [Physics Test.zip](https://github.com/godotengine/godot/files/4087649/Physics.Test.zip)
bug,topic:physics,topic:dotnet
low
Major
552,592,137
youtube-dl
Screenopsis support request
<!-- ###################################################################### WARNING! IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE ###################################################################### --> ## Checklist <!-- Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl: - First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2020.01.15. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED. - Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser. - Make sure that site you are requesting is not dedicated to copyright infringement, see https://yt-dl.org/copyright-infringement. youtube-dl does not support such sites. In order for site support request to be accepted all provided example URLs should not violate any copyrights. - Search the bugtracker for similar site support requests: http://yt-dl.org/search-issues. DO NOT post duplicates. - Finally, put x into all relevant boxes (like this [x]) --> - [x] I'm reporting a new site support request - [x] I've verified that I'm running youtube-dl version **2020.01.15** - [x] I've checked that all provided URLs are alive and playable in a browser - [x] I've checked that none of provided URLs violate any copyrights - [x] I've searched the bugtracker for similar site support requests including closed ones ## Example URLs <!-- Provide all kinds of example URLs support for which should be included. Replace following example URLs by yours. 
--> - Single video: https://www.screenopsis.com/www/program/67aa1d05-61a6-41c3-8fd2-a6cd051010bb/home?videoId=1a8d1830-a588-426c-9a6c-caa3f75892cc&skipConsult=true - Single video: https://www.screenopsis.com/www/program/d6679c95-c646-463e-bf7f-6449705a04c6/home?videoId=7a6b485a-baed-4c09-8b30-1111fd389899&skipConsult=true - Single video: https://www.screenopsis.com/www/program/9c1e52c1-efaa-4480-b38d-94a2ead41818/home?videoId=fdb51260-4616-499c-9841-6d404f04d2b9&skipConsult=true ## Description <!-- Provide any additional information. If work on your issue requires account credentials please provide them or explain how one can obtain them. --> Database for French media like movies, TV shows and documentaries. Includes trailers and sample episodes which can't be downloaded, even with an account (which at least grants access to all available videos, only some videos can be seen without an account).
site-support-request
low
Critical
552,596,545
youtube-dl
Radiohead Library support request
<!-- ###################################################################### WARNING! IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE ###################################################################### --> ## Checklist <!-- Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl: - First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2020.01.15. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED. - Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser. - Make sure that site you are requesting is not dedicated to copyright infringement, see https://yt-dl.org/copyright-infringement. youtube-dl does not support such sites. In order for site support request to be accepted all provided example URLs should not violate any copyrights. - Search the bugtracker for similar site support requests: http://yt-dl.org/search-issues. DO NOT post duplicates. - Finally, put x into all relevant boxes (like this [x]) --> - [x] I'm reporting a new site support request - [x] I've verified that I'm running youtube-dl version **2020.01.15** - [x] I've checked that all provided URLs are alive and playable in a browser - [x] I've checked that none of provided URLs violate any copyrights - [x] I've searched the bugtracker for similar site support requests including closed ones ## Example URLs <!-- Provide all kinds of example URLs support for which should be included. Replace following example URLs by yours. --> - Single video: https://radiohead.com/library/#amsp/burn-the-witch - Single video: https://radiohead.com/library/#ir/house-of-cards - Single video: https://radiohead.com/library/#okc/karma-police ## Description <!-- Provide any additional information. 
If work on your issue requires account credentials please provide them or explain how one can obtain them. --> A recently opened archive of Radiohead's videos, music and merchandise. Includes their entire catalogue of music videos in the best available quality (most are in HD), and whose status bar/UI appear to be based on vimeo. Right now it isn't supported by youtube-dl or any vimeo downloader tools.
site-support-request
low
Critical
552,637,051
create-react-app
Allow default hostnames for fetch() in production
### Problem I'm always frustrated when I'm making a `create-react-app` app that's hosted in S3 (or other static hosting) and the backend API app is hosted elsewhere. Locally I can use ``` "proxy": "http://localhost:5000", ``` and ``` await fetch(`/someEndpoint`) ``` will proxy the request and display the results. But there's no way to provide a different default hostname for production. Unless I intervene, this code will always try to hit `/someEndpoint` on the static web server. ### Solution I'd like I'd like to similarly provide a hostname for production in `package.json`, perhaps: ``` "production-hostname": "https://fun-app.herokuapp.com", ``` or perhaps as a collection: ``` "hostnames": { "production": "https://fun-app.herokuapp.com" } ``` These hostnames could be inserted into `fetch()` calls during compilation. ### Alternatives considered - Hacking together something based on environment variables (a lot of cruft that we could hide from users) - Ejecting and making my own proxy middleware ### CORS note This approach would require that the production API (e.g. `fun-app.herokuapp.com`) respond to an `OPTIONS` request with the appropriate `Access-Control-Allow-Origin` HTTP header to allow the user's browser to process the request. This should be relatively straightforward and if we implement this feature it could be discussed in documentation. ### Additional context I see lots of roughly related issues here and on Stack Overflow. Hosting an API in one place and the React app on a static hosting service elsewhere seems to be a fairly common setup, but we don't have the configuration options to really handle it elegantly.
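A sketch of the proposed lookup, with the mapping and the `APP_ENV` variable as illustrative assumptions rather than an existing create-react-app feature:

```python
import os

# Hypothetical per-environment hostname table, mirroring the proposed
# "hostnames" entry in package.json.
HOSTNAMES = {"production": "https://fun-app.herokuapp.com"}

def api_url(endpoint, env=None):
    env = env or os.environ.get("APP_ENV", "development")
    # Development keeps relative URLs so the dev-server proxy handles them.
    base = HOSTNAMES.get(env, "")
    return base + endpoint

print(api_url("/someEndpoint", env="production"))
print(api_url("/someEndpoint", env="development"))
```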
issue: proposal
low
Minor
552,766,256
create-react-app
Specify a standard filename under terserOptions
### Is your proposal related to a problem? The current `npm run build` extracts the license comments and puts them in a file that ends in the non-standard extension `.LICENSE`. This causes servers like IIS to return 404 for requests to the file. It also causes the serviceWorker to break because the file cannot be loaded. ### Describe the solution you'd like Specify a standard name under `terserOptions` in `webpack.config.js`, so that it ends in `.js`. Not tested, but it should be something like: `filename: [file]`
issue: proposal,needs triage
low
Minor
552,806,727
create-react-app
Update dependencies webpack
├─┬ @storybook/[email protected] │ └─┬ @storybook/[email protected] │ ├─┬ [email protected] │ │ └── [email protected] deduped │ └── [email protected] deduped ├─┬ @storybook/[email protected] │ └── [email protected] └─┬ [email protected] └── [email protected] Hi, when I am using Storybook I have an issue with webpack: the dependency for create-react-app is an older 4.41.2. Could you please update it? Best regards, Jindrich Kuba
issue: proposal,needs triage
low
Minor
552,817,844
vue
v-bind with empty key
### Version 2.6.11 ### Reproduction link [https://jsfiddle.net/andrewharvey4/k6r3uzby/10/](https://jsfiddle.net/andrewharvey4/k6r3uzby/10/) ### Steps to reproduce v-bind an object with an empty-string key ### What is expected? Vue should not crash; the key should either be silently ignored or produce a warning, but not a fatal error ### What is actually happening? An error is thrown, crashing the app ``` vue.js:4483 Uncaught DOMException: Failed to execute 'setAttribute' on 'Element': '' is not a valid attribute name. at baseSetAttr (https://unpkg.com/[email protected]/dist/vue.js:6778:10) at setAttr (https://unpkg.com/[email protected]/dist/vue.js:6753:7) at Array.updateAttrs (https://unpkg.com/[email protected]/dist/vue.js:6708:9) at invokeCreateHooks (https://unpkg.com/[email protected]/dist/vue.js:6064:24) at initComponent (https://unpkg.com/[email protected]/dist/vue.js:5997:9) at createComponent (https://unpkg.com/[email protected]/dist/vue.js:5980:11) at createElm (https://unpkg.com/[email protected]/dist/vue.js:5920:11) at createChildren (https://unpkg.com/[email protected]/dist/vue.js:6048:11) at createElm (https://unpkg.com/[email protected]/dist/vue.js:5949:11) at Vue.patch [as __patch__] (https://unpkg.com/[email protected]/dist/vue.js:6509:11) ``` <!-- generated by vue-issues. DO NOT REMOVE -->
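A sketch of the requested behaviour (plain Python, not Vue's renderer): drop empty attribute names with a warning instead of letting `setAttribute('')` throw:

```python
def safe_attrs(attrs):
    # Keep only non-empty attribute names; warn about the rest rather
    # than passing them to setAttribute, which would throw a DOMException.
    dropped = [k for k in attrs if k == ""]
    kept = {k: v for k, v in attrs.items() if k != ""}
    for _ in dropped:
        print('warning: ignoring invalid attribute name ""')
    return kept

print(safe_attrs({"": "x", "title": "ok"}))  # {'title': 'ok'}
```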
improvement,warnings
low
Critical
552,827,929
create-react-app
Slow `ForkTsCheckerWebpackPlugin` compilation compared to running `tsc` directly
### Describe the bug Recently we've upgraded our CRA setup to the latest version of `react-scripts` (from 3.0.1 to 3.3.0 at the time of writing) and we noticed a _really significant_ increase in compilation times on our CI server. Instead of a couple of minutes (~5 mins) compiling the application, the total compilation time is now around 15-20 minutes. We have a project with a CLOC of about 67k lines of TS over 2000 files. ### Did you try recovering your dependencies? Yes, it had no significant effect. ### Which terms did you search for in User Guide? Not really relevant here, as there is not a lot I can customise in an out of the box CRA app's Webpack config. ### Environment ``` Environment Info: System: OS: macOS Mojave 10.14.6 CPU: (12) x64 Intel(R) Core(TM) i7-8850H CPU @ 2.60GHz Binaries: Node: 10.17.0 - ~/.nvm/versions/node/v10.17.0/bin/node Yarn: Not Found npm: 6.11.3 - ~/.nvm/versions/node/v10.17.0/bin/npm Browsers: Chrome: 79.0.3945.117 Firefox: Not Found // <- This is inaccurate, running FF 73.0b7 Developer Edition Safari: 13.0.4 npmPackages: react: 16.12.0 => 16.12.0 react-dom: 16.12.0 => 16.12.0 react-scripts: 3.3.0 => 3.3.0 npmGlobalPackages: create-react-app: Not Found ``` ### Steps to reproduce Run `time npm run build` and wait till it completes ### Expected behavior I'd expected compilation time to be similar to the total time of running `npm run build` without the `ForkTsCheckerWebpackPlugin` + a manual `npm run tsc`. ### Actual behavior The total compilation time is about 30% slower, meaning on my machine it's ~90s for the two separate commands vs ~120s running the default CRA Webpack config. Seeing the recent increase in compilation times makes me wonder if we can do something with the compilation/type-checking times for TS projects? ### Reproduction I've done a bit of investigation and comparison of various settings in the `webpack.config.js` of `react-scripts`. Here are my results: All tests run with `time npm run build`. 
Projects stats: #### Test 1 - Changing config options |useTypescriptIncrementalApi|checkSyntacticErrors|Result|Delta baseline| |---------------------------|--------------------|------|--------------| |true|true|119.56s|baseline| |false|true|93.48s|-26.08s| |false|false|91.95s|-27,61s| #### Test 2 - No `ForkTsCheckerWebpackPlugin`, manual commands `ForkTsCheckerWebpackPlugin` removed from `webpack.config.js`. |Command|Result| |-------|------| |`npm run build`|70.73s| |`npm run tsc`|19.65s| |Total|90.38s| |Command|Result| |-------|------| |`npm run tsc & npm run build`|93.95s| ### Conclusion Seeing the improvement in completion time, especially after disabling `useTypescriptIncrementalApi` gives me a couple of questions: - I wonder if there is something special in my project where that would actually slow down the build when this option is enabled? - Is this option really beneficial when running `react-scripts build`? - Would it make sense to only enable that option when you're running `react-scripts start` as that's watching changes? In that case I believe it makes sense to do incremental checks.
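The ~30% figure can be rechecked from the tables above:

```python
# Numbers taken from the timing tables above.
with_plugin = 119.56       # default config (useTypescriptIncrementalApi on)
separate = 70.73 + 19.65   # build without the plugin + manual tsc
overhead = (with_plugin - separate) / separate * 100
print(f"{overhead:.1f}% slower")  # 32.3% slower
```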
issue: needs investigation
low
Critical
552,857,001
godot
3.2 RC1 Transparent material appears opaque when opaque pre-pass used [GLES2]
Godot version: Godot 3.2 RC1 OS/device including version: Windows 10 Issue description: When a material is set as transparent with opaque pre-pass, the alpha channel behaves strangely. This was working perfectly on 3.1 and 3.2; the behaviour is shown in the attached gif: ![opaque_prepass_transparent](https://user-images.githubusercontent.com/28926813/72806143-749f4f00-3c4c-11ea-9943-c660bb36572e.gif) Steps to reproduce: Create a cube and a SpatialMaterial. Set the Transparent flag to True and enable Opaque Pre-Pass. In albedo, play with the alpha channel slider. Minimal reproduction project: N/A
bug,topic:rendering,confirmed
low
Major
552,892,423
scrcpy
audio recording via line-in
Hi, I love your tool and I am using it in several scenarios. There is a very specific need for the Windows version, but I am not able to do the coding myself. Is there anyone in the crew or the community able to help me out? Due to known limitations (issue #14) it is not possible to record audio on Windows. In our setup, we can deliver the Android device's audio via line-out to the Windows PC. There is also a dedicated Windows sound device (Soundblaster USB stick) available, so its line-in can forward Android's audio to the Windows PC. I am now looking for a way to record the display video via scrcpy, including the sound from the line-in of the Soundblaster USB stick. As this scenario covers the needs of special-purpose vehicles for a German special police unit, I am willing to spend some money on this. Any help is highly appreciated! Frank
audio
low
Major
552,934,150
go
proposal: net/http: add a ServeConn method to http.Server to handle net.Conn
``` // x/net/http func (s *Server) ServeConn(c net.Conn, opts *ServeConnOpts) ``` or ``` // github.com/valyala/fasthttp func (s *Server) ServeConn(c net.Conn) error ```
Proposal,NeedsInvestigation
medium
Critical
552,935,671
go
runtime: use standard prefix for GODEBUG logs
<!-- Please answer these questions before submitting your issue. Thanks! For questions please use one of our forums: https://github.com/golang/go/wiki/Questions --> ### What version of Go are you using (`go version`)? <pre> $ go version go version go1.13.1 linux/amd64 </pre> ### Does this issue reproduce with the latest release? Yes. ### What operating system and processor architecture are you using (`go env`)? <details><summary><code>go env</code> Output</summary><br><pre> $ go env not relevant </pre></details> ### What did you do? I've just enabled the gctrace and found all the debug logs are sent to stderr by the print function. src/runtime/mgc.go: if debug.gctrace > 0 { printlock() print("gc ", memstats.numgc, " @", string(itoaDiv(sbuf[:], uint64(work.tSweepTerm-runtimeInitTime)/1e6, 3)), "s ", util, "%: ") // write to goroutine-local buffer if diverting output, // or else standard error. func gwrite(b []byte) { if len(b) == 0 { return } recordForPanic(b) gp := getg() // Don't use the writebuf if gp.m is dying. We want anything // written through gwrite to appear in the terminal rather // than be written to in some buffer, if we're in a panicking state. // Note that we can't just clear writebuf in the gp.m.dying case // because a panic isn't allowed to have any write barriers. if gp == nil || gp.writebuf == nil || gp.m.dying > 0 { writeErr(b) return } n := copy(gp.writebuf[len(gp.writebuf):cap(gp.writebuf)], b) gp.writebuf = gp.writebuf[:len(gp.writebuf)+n] } ### What did you expect to see? I'd like to see the debug logs go to stdout, gated by some debug flag like: debug.out=1 ### What did you see instead? All debug logs were treated as errors by the log server.
NeedsDecision,compiler/runtime
medium
Critical
552,941,637
opencv
I am unable to run the command detail_ImageFeatures().descriptors in Python
### System Version(Information) - OpenCV => 4.1.2 (Python 3.7.3) - Operating System / Platform => Windows 10 64 Bit - Hardware => Dell Inspiron 7460 - Compiler/Interpreter => Jupyter Notebooks - GPU => NVIDIA 940MX **Detailed Description** I am unable to access the public data member cv::detail::ImageFeatures::descriptors. When I run this command: `import cv2 obj_Umat=cv2.detail_ImageFeatures().descriptors` The command `cv2.detail_ImageFeatures().descriptors` returns a UMat object. My interpreter then gives me the response: **The kernel appears to have died**
bug,category: python bindings,category: stitching
low
Major
553,005,208
scrcpy
Feature request: Add shortcut to swipe home screen left/right
I have a hard time finding the right spot on a home screen where I can use the mouse to swipe to get to the next/previous home screen. What usually happens is the action triggers Android to move the nearest icon or widget. Edit: Removed reference to long press.
feature request
low
Minor
553,040,974
pytorch
Segmentation fault in lazyInitCUDA -> CUDAHooks::initCUDA -> THCMagma_init -> magma_init
## πŸ› Bug I have two machines, on one the latest development version of PyTorch works as expected, on the other a very simple example program terminates with a segfault. I trained a neural network on the machine where everything works as expected, then reduced the code to a minimal program showing the problem. ## To Reproduce compile and run: ```#include <torch/torch.h> #include <iostream> auto main(int argc, char ** argv) -> int { torch::manual_seed(1); } ``` Backtrace: > > Thread 2 (Thread 0x7fff9fa91700 (LWP 1503742)): > #0 0x00007fffdb0eac08 in accept4 (fd=8, addr=..., addr_len=0x7fff9fa8eb38, flags=524288) at ../sysdeps/unix/sysv/linux/accept4.c:32 > resultvar = 18446744073709551104 > sc_cancel_oldtype = 0 > sc_ret = <optimized out> > #1 0x00007fff9fef991a in ?? () from /usr/lib/x86_64-linux-gnu/libcuda.so.1 > No symbol table info available. > #2 0x00007fff9feebbbd in ?? () from /usr/lib/x86_64-linux-gnu/libcuda.so.1 > No symbol table info available. > #3 0x00007fff9fefafe8 in ?? () from /usr/lib/x86_64-linux-gnu/libcuda.so.1 > No symbol table info available. > #4 0x00007fffdae4bfb7 in start_thread (arg=<optimized out>) at pthread_create.c:486 > ret = <optimized out> > pd = <optimized out> > now = <optimized out> > unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140735872046848, -5529860711636867593, 140737488347966, 140737488347967, 140735872046848, 140735872036096, 5530072531295200759, 5529937872893044215}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}} > not_first_call = <optimized out> > #5 0x00007fffdb0e92cf in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95 > No locals. > > Thread 1 (Thread 0x7fffa10a4a00 (LWP 1503707)): > #0 0x00007fffddfd2aff in magma_init () from /home/alexander/pytorch/torch/lib/libtorch_cuda.so > No symbol table info available. 
> #1 0x00007fffdbf0bc25 in THCMagma_init (state=<optimized out>) at /home/alexander/pytorch/aten/src/THC/THCTensorMathMagma.cu:24 > No locals. > #2 0x00007fffddbb25ed in at::cuda::detail::CUDAHooks::initCUDA (this=<optimized out>) at ../aten/src/ATen/cuda/detail/CUDAHooks.cpp:51 > logFlag1 = <optimized out> > thc_state = 0x5555567e9370 > #3 0x0000555555559ebf in at::Context::lazyInitCUDA()::{lambda()#1}::operator()() const (__closure=<error reading variable: Cannot access memory at address 0xffffffffffffffe8>) at /home/alexander/pytorch/torch/include/ATen/Context.h:74 > this = <error reading variable this (Cannot access memory at address 0xffffffffffffffe8)> > Backtrace stopped: previous frame inner to this frame (corrupt stack?) ## Expected behavior No segmentation fault. ## Environment Latest pytorch version from git: ecbf6f99e6a4e373105133b31534c9fb50f2acca Build type of PyTorch is "RelWithDebInfo", build type of the example is "Debug". PyTorch version: N/A Is debug build: N/A CUDA used to build PyTorch: N/A OS: Debian GNU/Linux bullseye/sid GCC version: (Debian 9.2.1-22) 9.2.1 20200104 CMake version: version 3.14.0 Python version: 3.6 Is CUDA available: N/A CUDA runtime version: 10.1.168 GPU models and configuration: GPU 0: GeForce GTX 970 Nvidia driver version: 430.64 cuDNN version: Could not collect Versions of relevant libraries: [pip3] numpy==1.17.4 [conda] _tflow_select 2.3.0 mkl [conda] blas 1.0 mkl [conda] magma-cuda92 2.4.0 1 pytorch [conda] mkl 2019.4 243 [conda] mkl-include 2019.4 243 [conda] mkl-service 2.3.0 py36he904b0f_0 [conda] mkl_fft 1.0.10 py36ha843d7b_0 [conda] mkl_random 1.0.2 py36hd81dba3_0 [conda] pytorch 1.0.1 cuda80py36ha8650f8_0 [conda] torchvision 0.2.1 py_2 pytorch ## Additional context <!-- Add any other context about the problem here. --> cc @ngimel
needs reproduction,module: cuda,triaged,module: third_party
low
Critical
553,042,235
pytorch
DataParallel does not work with sparse parameters
## πŸ› Bug DataParallel does not work with sparse parameters. The root issue is located in the model replication part of DataParallel. I have a fix proposal for this and can make a pull request : https://github.com/madlag/pytorch/commit/a64aacd4168ac73f50ee9e6ed16cfdca8e22af1d (this fixes completely DataParallel for version 1.4) ## To Reproduce Steps to reproduce the behavior: Run the following code on a machine with at least 2 gpus : ``` import torch import torch.nn import torch.sparse import torch.nn.parallel.replicate as replicate class MyModule(torch.nn.Module): def __init__(self): super().__init__() indices = torch.LongTensor([[0,1,2],[0,0,0]]) values = torch.tensor([0.0,0.0,0.0]) t = torch.sparse.FloatTensor(indices, values, [3,3]) self.p = torch.nn.Parameter(t) m = MyModule().cuda() replicate(m, [0,1]) ``` You will get this error message : ``` Traceback (most recent call last): File "test_data_parallel.py", line 15, in <module> replicate(m, [0,1]) File "/home/lagunas/ml/anaconda3/lib/python3.7/site-packages/torch/nn/parallel/replicate.py", line 92, in replicate param_copies = _broadcast_coalesced_reshape(params, devices, False) File "/home/lagunas/ml/anaconda3/lib/python3.7/site-packages/torch/nn/parallel/replicate.py", line 71, in _broadcast_coalesced_reshape tensor_copies = Broadcast.apply(devices, *tensors) RuntimeError: Could not run 'aten::view' with arguments from the 'SparseCUDATensorId' backend. 'aten::view' is only available for these backends: [CUDATensorId, QuantizedCPUTensorId, VariableTensorId, CPUTensorId, MkldnnCPUTensorId]. ``` ## Expected behavior The expected behaviour should be to replicate correctly the model containing sparse parameter. ## Environment PyTorch 1.4, with two GPUS. ```Collecting environment information... 
PyTorch version: 1.4.0 Is debug build: No CUDA used to build PyTorch: 10.0 OS: Ubuntu 18.04.3 LTS GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0 CMake version: version 3.10.2 Python version: 3.7 Is CUDA available: Yes CUDA runtime version: Could not collect GPU models and configuration: GPU 0: GeForce RTX 2080 Ti GPU 1: GeForce GTX 1080 Nvidia driver version: 430.26 cuDNN version: Could not collect Versions of relevant libraries: [pip] numpy==1.16.4 [pip] numpydoc==0.9.1 [pip] torch==1.4.0 [pip] torch-scatter==1.4.0 [pip] torch-sparse==0.4.3 [pip] torchfile==0.1.0 [pip] torchvision==0.5.0 [conda] blas 1.0 mkl [conda] mkl 2019.4 243 [conda] mkl-service 2.0.2 py37h7b6447c_0 [conda] mkl_fft 1.0.12 py37ha843d7b_0 [conda] mkl_random 1.0.2 py37hd81dba3_0 [conda] pytorch 1.4.0 py3.7_cuda10.0.130_cudnn7.6.3_0 pytorch [conda] torch-scatter 1.4.0 pypi_0 pypi [conda] torch-sparse 0.4.3 pypi_0 pypi [conda] torchfile 0.1.0 py_0 conda-forge [conda] torchvision 0.5.0 py37_cu100 pytorch ```
triaged,module: data parallel
low
Critical
553,114,609
pytorch
Numba Enhancement Proposal (NBEP) 7: External Memory Management Plugins
Raising here to solicit feedback on a proposal for External Memory Management support in [Numba]( https://numba.pydata.org ) (also called NBEP 7). Discussion is occurring in [this repo]( https://github.com/gmarkall/nbep-7 ). Please take a look and raise issues/PRs as you see fit. Thanks in advance for your feedback! cc @gmarkall (for awareness)
feature,triaged
low
Minor
553,121,238
TypeScript
Go to Definition for Angular (9.0.0-rc.9) project breaks
#### OS and VSCode version - VSCode Version: 1.41.1 - OS Version: Ubuntu Mate 18.04 - Typescript versions tested: 3.6.4 (version from node_modules) , 3.7.3 (current version used in VSCode), ms-vscode.vscode-typescript-next-3.8.20200119 (using VSCode plugin) - Angular version: 9.0.0-rc.9 #### Problem description (updated with findings in comments below) Go To Definition does not work properly when trying to go to definition of an Angular class that resides in node_modules. Simply mouse-hovering over this Class (or function) definition works though and the context popup is shown. Moreover, if navigating to a class that resides inside the project source files (not inside node_modules) also works. Go To Definition also works when going into definition of libraries that are not part of Angular - for example, rxjs. #### Steps to Reproduce: 1. Import or create Angular project (using RC version 9.0.0-rc.9) and install node_modules 2. Create a component like so: ```typescript import { Component, OnInit } from '@angular/core'; @Component({ selector: 'app-login', templateUrl: './login.component.html', styleUrls: ['./login.component.scss'] }) export class LoginComponent implements OnInit { constructor() {} ngOnInit() {} } ``` 3. Try `ctrl + mouse click` on `OnInit` class. 
Following error is thrown in the log file: ``` Info 260 [8:44:36.701] request: {"seq":37,"type":"request","command":"definitionAndBoundSpan","arguments":{"file":"/media/martin/DADOS/Programacao/projetos/bootstrap-code/ui/src/app/modules/login/login.component.ts","line":8,"offset":44}} Err 261 [8:44:36.883] Exception on executing command {"seq":37,"type":"request","command":"definitionAndBoundSpan","arguments":{"file":"/media/martin/DADOS/Programacao/projetos/bootstrap-code/ui/src/app/modules/login/login.component.ts","line":8,"offset":44}}: Maximum call stack size exceeded RangeError: Maximum call stack size exceeded at String.replace (<anonymous>) at Object.normalizeSlashes (/media/martin/DADOS/Programacao/projetos/bootstrap-code/ui/node_modules/typescript/lib/tsserver.js:15757:21) at Object.combinePaths (/media/martin/DADOS/Programacao/projetos/bootstrap-code/ui/node_modules/typescript/lib/tsserver.js:16094:31) at Object.getPathComponents (/media/martin/DADOS/Programacao/projetos/bootstrap-code/ui/node_modules/typescript/lib/tsserver.js:15943:19) at Object.resolvePath (/media/martin/DADOS/Programacao/projetos/bootstrap-code/ui/node_modules/typescript/lib/tsserver.js:16115:82) at Object.normalizePath (/media/martin/DADOS/Programacao/projetos/bootstrap-code/ui/node_modules/typescript/lib/tsserver.js:15865:19) at Object.toPath (/media/martin/DADOS/Programacao/projetos/bootstrap-code/ui/node_modules/typescript/lib/tsserver.js:8866:18) at toPath (/media/martin/DADOS/Programacao/projetos/bootstrap-code/ui/node_modules/typescript/lib/tsserver.js:110995:23) at getSourceFile (/media/martin/DADOS/Programacao/projetos/bootstrap-code/ui/node_modules/typescript/lib/tsserver.js:111043:24) at tryGetSourcePosition (/media/martin/DADOS/Programacao/projetos/bootstrap-code/ui/node_modules/typescript/lib/tsserver.js:111016:24) at tryGetSourcePosition (/media/martin/DADOS/Programacao/projetos/bootstrap-code/ui/node_modules/typescript/lib/tsserver.js:111020:61) at tryGetSourcePosition 
(/media/martin/DADOS/Programacao/projetos/bootstrap-code/ui/node_modules/typescript/lib/tsserver.js:111020:61) at tryGetSourcePosition (/media/martin/DADOS/Programacao/projetos/bootstrap-code/ui/node_modules/typescript/lib/tsserver.js:111020:61) at tryGetSourcePosition (/media/martin/DADOS/Programacao/projetos/bootstrap-code/ui/node_modules/typescript/lib/tsserver.js:111020:61) at tryGetSourcePosition (/media/martin/DADOS/Programacao/projetos/bootstrap-code/ui/node_modules/typescript/lib/tsserver.js:111020:61) [... repeated entries ommited for brevity] File text of /media/martin/DADOS/Programacao/projetos/bootstrap-code/ui/src/app/modules/login/login.component.ts: import { Component, OnInit } from '@angular/core'; @Component({ selector: 'app-login', templateUrl: './login.component.html', styleUrls: ['./login.component.scss'] }) export class LoginComponent implements OnInit { constructor() {} ngOnInit() {} } Info 176 [8:56:13.165] response: {"seq":0,"type":"response","command":"definitionAndBoundSpan","request_seq":14,"success":false,"message":"Error processing request. 
Maximum call stack size exceeded\nRangeError: Maximum call stack size exceeded\n at String.replace (<anonymous>)\n at Object.normalizeSlashes (/media/martin/DADOS/Programacao/projetos/bootstrap-code/ui/node_modules/typescript/lib/tsserver.js:15757:21)\n at Object.combinePaths (/media/martin/DADOS/Programacao/projetos/bootstrap-code/ui/node_modules/typescript/lib/tsserver.js:16094:31)\n at Object.getPathComponents (/media/martin/DADOS/Programacao/projetos/bootstrap-code/ui/node_modules/typescript/lib/tsserver.js:15943:19)\n at Object.resolvePath (/media/martin/DADOS/Programacao/projetos/bootstrap-code/ui/node_modules/typescript/lib/tsserver.js:16115:82)\n at Object.normalizePath (/media/martin/DADOS/Programacao/projetos/bootstrap-code/ui/node_modules/typescript/lib/tsserver.js:15865:19)\n at Object.toPath (/media/martin/DADOS/Programacao/projetos/bootstrap-code/ui/node_modules/typescript/lib/tsserver.js:8866:18)\n at toPath (/media/martin/DADOS/Programacao/projetos/bootstrap-code/ui/node_modules/typescript/lib/tsserver.js:110995:23)\n at getSourceFile (/media/martin/DADOS/Programacao/projetos/bootstrap-code/ui/node_modules/typescript/lib/tsserver.js:111043:24)\n at tryGetSourcePosition (/media/martin/DADOS/Programacao/projetos/bootstrap-code/ui/node_modules/typescript/lib/tsserver.js:111016:24)\n at tryGetSourcePosition (/media/martin/DADOS/Programacao/projetos/bootstrap-code/ui/node_modules/typescript/lib/tsserver.js:111020:61)\n at tryGetSourcePosition (/media/martin/DADOS/Programacao/projetos/bootstrap-code/ui/node_modules/typescript/lib/tsserver.js:111020:61)\n at tryGetSourcePosition (/media/martin/DADOS/Programacao/projetos/bootstrap-code/ui/node_modules/typescript/lib/tsserver.js:111016:24)\n at tryGetSourcePosition (/media/martin/DADOS/Programacao/projetos/bootstrap-code/ui/node_modules/typescript/lib/tsserver.js:111020:61)\n at tryGetSourcePosition 
(/media/martin/DADOS/Programacao/projetos/bootstrap-code/ui/node_modules/typescript/lib/tsserver.js:111020:61)\n at tryGetSourcePosition (/media/martin/DADOS/Programacao/projetos/bootstrap-code/ui/node_modules/typescript/lib/tsserver.js:111016:24)\n at tryGetSourcePosition (/media/martin/DADOS/Programacao/projetos/bootstrap-code/ui/node_modules/typescript/lib/tsserver.js:111020:61)\n at tryGetSourcePosition (/media/martin/DADOS/Programacao/projetos/bootstrap-code/ui/node_modules/typescript/lib/tsserver.js:111020:61)\n at tryGetSourcePosition (/media/martin/DADOS/Programacao/projetos/bootstrap-code/ui/node_modules/typescript/lib/tsserver.js:111020:61)\n at tryGetSourcePosition (/media/martin/DADOS/Programacao/projetos/bootstrap-code/ui/node_modules/typescript/lib/tsserver.js:111020:61)\n at tryGetSourcePosition (/media/martin/DADOS/Programacao/projetos/bootstrap-code/ui/node_modules/typescript/lib/tsserver.js:111020:61)\n [... repeated entries ommited for brevity] ``` ##### Does this issue occur when all extensions are disabled?: Yes/No Yes
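The repeated `tryGetSourcePosition` frames above indicate unbounded recursion while following mapped source positions. A generic sketch of the usual fix for this class of bug is to carry a visited set (this is not TypeScript's actual code; `source_map` is a made-up dict stand-in):

```python
def resolve_position(pos, source_map, _seen=None):
    """Follow a chain of mapped source positions.

    The visited set breaks cycles, avoiding the unbounded
    tryGetSourcePosition-style recursion shown in the log above.
    """
    if _seen is None:
        _seen = set()
    if pos in _seen:        # already visited: the mapping is cyclic, stop
        return pos
    _seen.add(pos)
    nxt = source_map.get(pos)
    if nxt is None:         # no further mapping: this is the original source
        return pos
    return resolve_position(nxt, source_map, _seen)

# A cyclic mapping now terminates instead of overflowing the stack:
assert resolve_position("a", {"a": "b", "b": "a"}) == "a"
```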
Needs Investigation
low
Critical
553,134,748
flutter
Blend modes for Widgets
## Use case Using blend modes on text, especially in relation to images, is very common, but it can also help with using fewer CustomPaints and better utilising existing Widgets, BoxDecorations and Shadows to create slick UI and Widgets. This will also help with handoff, since all of the popular design tools (Figma, Sketch, Photoshop) have these features. ## Proposal I think it would be best to have the user set a blend mode as a property in the constructor of any Widget. Alternatively, the Stack Widget could have an option to enable blend modes.
c: new feature,framework,P3,team-framework,triaged-framework
low
Critical
553,153,604
rust
typeck: diverging binding in pattern does not generate unreachable_code
The following should probably result in the lint being emitted: ```rust #![feature(never_type)] pub fn foo(maybe_never: Option<!>) { match maybe_never { Some(_never) => { println!("foo"); } None => {} } } ``` as `_never` is matched on, and it has a diverging type. Compare this with: ```rust #![feature(never_type)] pub fn foo(maybe_never: Option<!>) { match maybe_never { Some(never) => { let _ = never; println!("foo"); } None => {} } } ``` Currently, the pattern type checking code does not care about `diverges`. We should probably avoid fixing this in typeck and have this be fixed automatically (?) by moving `diverges` logic to MIR or some such. cc @eddyb https://github.com/rust-lang/rust/pull/68422#discussion_r369246043.
C-enhancement,A-lints,T-compiler,C-bug
low
Minor
553,179,769
rust
Add Default Lint to Ensure Match Arm Bindings do not Shadow Local Variables.
Given the following code: ```rust fn foo(x: u32) -> u32 { let a = 100; let b = 200; let c = 300; match x { c => 3, b => 2, a => 1, _ => 0 } } ``` we currently emit: ```rust warning: unreachable pattern --> <source>:8:7 | 7 | c => 3, | - matches any value 8 | b => 2, | ^ unreachable pattern | = note: `#[warn(unreachable_patterns)]` on by default ``` However, in this case, the developer was likely confused and thought they were matching on the value of `c`, not creating a new binding named `c`. We should emit something more like: ```rust warning: match arm binding shadows local binding --> <source>:8:7 | 7 | c => 3, | ^ = note: shadows local binding `c` declared at <source>:4:7 = note: `#[warn(arm_binding_shadows_local)]` on by default ```
C-enhancement,A-lints,A-diagnostics,T-compiler,D-newcomer-roadblock
low
Major
553,182,352
go
x/build/cmd/gopherbot: should not parse comments inside most markdown
In https://github.com/golang/go/issues/36508#issuecomment-576915859 @ianlancetaylor told somebody how to remove a label later and put the instructions in backticks but gopherbot still parsed it. It should try a bit to avoid finding directives in formatted text. /cc @andybons @dmitshur @toothrot
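A plausible shape for the fix (a sketch only — the directive regex below is illustrative, not gopherbot's real pattern): strip fenced blocks and inline code spans before scanning a comment for directives.

```python
import re

CODE_FENCE = re.compile(r"```.*?```", re.DOTALL)   # fenced code blocks
INLINE_CODE = re.compile(r"`[^`\n]*`")             # inline code spans
DIRECTIVE = re.compile(r"@gopherbot,? please remove", re.IGNORECASE)

def has_remove_directive(comment: str) -> bool:
    # Drop formatted (code) regions first, so quoted instructions like
    # "`@gopherbot, please remove label`" are not acted upon.
    text = CODE_FENCE.sub(" ", comment)
    text = INLINE_CODE.sub(" ", text)
    return DIRECTIVE.search(text) is not None

assert not has_remove_directive(
    "you can write `@gopherbot, please remove WaitingForInfo` later")
assert has_remove_directive("@gopherbot, please remove WaitingForInfo")
```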
Builders,NeedsFix
low
Minor
553,195,700
pytorch
[JIT] Make `torch.jit.script` work on all objects which we can represent as IValues
## πŸš€ `torch.jit.script` on all representable mutable PyObjects In our current implementation python objects are copied at the boundary between the JIT and python. This results in issues like https://github.com/pytorch/pytorch/issues/31129 and https://github.com/pytorch/pytorch/issues/30421, and we've had internal reports of users passing a python dictionary into a mutating method. Example ``` x = [1] def foo(x: List[int]): x.append(2) foo(x) print(x) # 1 ``` One solution here is to make `torch.jit.script` work on all objects which we can represent as IValues. `torch.jit.script([1])` would return a `c10::List` with all the python bindings you would expect for a list, and the same for `c10::Dict`. ``` x = torch.jit.script([1]) def foo(x: List[int]): x.append(2) foo(x) print(x) # 1, 2 ``` The exact API, warnings, etc. would still have to be figured out. cc @suo
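The copy-at-the-boundary semantics described above can be mimicked in plain Python (illustration only — this is not how the JIT boundary is implemented):

```python
import copy

def call_with_boundary_copy(fn, arg):
    """Call fn on a deep copy, as if arg crossed the python/JIT
    boundary today: mutations are invisible to the caller."""
    fn(copy.deepcopy(arg))

def call_shared(fn, arg):
    """Call fn on the original object, the behavior proposed for
    anything representable as an IValue: mutations are visible."""
    fn(arg)

x = [1]
call_with_boundary_copy(lambda l: l.append(2), x)
assert x == [1]        # the append was lost at the boundary
call_shared(lambda l: l.append(2), x)
assert x == [1, 2]     # the append is observed by the caller
```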
oncall: jit,triaged
low
Minor
553,210,732
create-react-app
Evaluate esModule options in webpack loaders
`css-loader` and `style-loader` now support an `esModule` option to emit ESM instead of CJS, which can improve tree shaking/module concatenation in webpack, but I haven't had a chance to see if it's compatible with our other CSS loaders. `file-loader` and `url-loader` now enable `esModule` by default, but have dropped support for node < `10.13.0`. Let's see if there's any benefit to enabling these in our webpack config.
issue: needs investigation
low
Minor
553,221,760
go
cmd/cover: (html output) UI accessibility issues, unfriendly to screen reader
### What version of Go are you using (`go version`)? go 1.13, go 1.14 Recently on golang-nuts, "Is it possible to get code coverage information in a way that does not assume you can see color?" https://groups.google.com/g/golang-nuts/c/DY4O9UXMr9M?pli=1 In this case, "see color" refers to a screen reader for a totally blind person. This is a bug, not an enhancement, because accessibility is important. It does need someone who knows something about UI accessibility to look at it.
ExpertNeeded,help wanted,NeedsInvestigation
medium
Critical
553,244,946
flutter
CupertinoPageRoute. Left to Right Animation Transition support ?
## Use case Before opening a PR, I would like to know if the Flutter team is interested in a flag we could pass to `CupertinoPageRoute` to choose which direction the animation should run: - Right to left (current behavior) - Left to right (not implemented) I hacked around with https://github.com/flutter/flutter/compare/master...kwent:cupertino_page_route_left_to_right and it's working fine. Thanks again for this amazing library! Regards
framework,f: cupertino,f: routes,c: proposal,P3,team-design,triaged-design
low
Major
553,319,689
storybook
Addon docs fails to render prop table correctly when type is imported from a .tsx file (.ts works as expected)
Works πŸ‘ (table shows both `other` and `foo` props) ```typescript // Bar.ts export type BarProps = { foo?: string } // Other.tsx import { BarProps } from './Bar' type OtherProps = BarProps & { other?: number } const Other = (props: OtherProps) => <span {...props}>Other</span> ``` --- Doesnt work πŸ‘ŽπŸΌ (table doesnt render at all) ```typescript // Bar.tsx export type BarProps = { foo?: string } // Other.tsx import { BarProps } from './Bar' type OtherProps = BarProps & { other?: number } const Other = (props: OtherProps) => <span {...props}>Other</span> ``` --- Doesnt work πŸ‘ŽπŸΌ (table renders, but shows `foo` as type `any`) ```typescript // Bar.tsx export type BarProps = { foo?: string } // Other.tsx import { BarProps } from './Bar' type OtherProps = Pick<BarProps, 'foo'> & { other?: number } const Other = (props: OtherProps) => <span {...props}>Other</span> ```
question / support,typescript,addon: docs,block: props
medium
Major
553,372,626
opencv
OpenCV Seamless Clone CUDA Support
`void cv::seamlessClone` works very well to blend images seamlessly. However, this process takes more time than other OpenCV operations. OpenCV image processing can be accelerated using OpenCV with GPU support, like `cv::cuda::resize`. All of the supported CUDA functions can be found here: https://docs.opencv.org/master/d1/d1a/namespacecv_1_1cuda.html. As we can see, there's no support for the seamlessClone function. Is there any CUDA support for the function? If not, is any development planned for it? Thank you
feature,priority: low,category: gpu/cuda (contrib)
low
Major
553,394,142
material-ui
Consistent disabled state for all components
At the moment, the disabled color of the slider is [`theme.palette.grey[400]`](https://github.com/mui-org/material-ui/blob/master/packages/material-ui/src/Slider/Slider.js#L145). Other inputs use [`theme.palette.action.disabled`](https://github.com/mui-org/material-ui/blob/master/packages/material-ui/src/OutlinedInput/OutlinedInput.js#L35) (also used e.g. [here](https://github.com/mui-org/material-ui/blob/master/packages/material-ui/src/Checkbox/Checkbox.js#L37)) and [`theme.palette.text.disabled`](https://github.com/mui-org/material-ui/blob/master/packages/material-ui/src/InputBase/InputBase.js#L44). The Slider should use `theme.palette.action.disabled` as well. - [ ] The issue is present in the latest release. - [ ] I have searched the [issues](https://github.com/mui-org/material-ui/issues) of this repository and believe that this is not a duplicate. ## Current Behavior 😯 The Slider uses `theme.palette.grey[400]`. ## Expected Behavior πŸ€” The Slider should use `theme.palette.action.disabled`. ## Steps to Reproduce πŸ•Ή Example showing that `theme.palette.grey[400]` is used: https://codesandbox.io/s/material-demo-qpzdk ![image](https://user-images.githubusercontent.com/10603631/72879169-a839b200-3cfc-11ea-95a6-5f9641ada70b.png) Note: I've forced the color with `!important` for the screenshot above. Actually, this demo shows another problem with the current style system: the custom disabled style is not applied, because the CSS specificity of the default disabled class is higher than that of the custom class. Is this intended? Should I create another issue for this problem? ![image](https://user-images.githubusercontent.com/10603631/72878822-f39f9080-3cfb-11ea-8045-3dcc80040c14.png) ![image](https://user-images.githubusercontent.com/10603631/72878610-8ab81880-3cfb-11ea-9082-4bb174627394.png) Steps: 1. Create a custom theme to see the difference between `grey[400]` and `action.disabled` 2. Set `disabled` to `true` ## Context πŸ”¦ We edit some values of the palette, which leads to a difference between `grey[400]` and `action.disabled`. All disabled inputs have the same color, except for the Slider component. ![image](https://user-images.githubusercontent.com/10603631/72879438-24cc9080-3cfd-11ea-82b5-e80aa3b88f3c.png) ## Your Environment 🌎 | Tech | Version | | ----------- | ------- | | Material-UI | v4.8.3 |
design: material,breaking change
low
Critical
553,410,230
TypeScript
Proposal: Allow a name bound to a class value to be used in a type position to reference the class's instance type
# Proposal: Allow a name bound to a class value to be used in a type position to reference the class's instance type ## What? In TypeScript, when a class is created and bound to a name via a class declaration, the name can be used in both value and type positions. In contrast, when a class is bound to a name via `const`, `let`, or `var`, the name can only be used in value positions. I propose that when a class is bound to a name via `const`, `let`, or `var`, the name should be usable in both value and type positions, and that when such a name is used in a type position, it be interpreted as the instance type of the bound class value. More formally: - A name should be deemed a _ClassValueName_ if: - It is declared via a `const`, `let`, or `var` statement, and - Its type is assignable to `new (...args: any[]) => any`, and - Its type is not `any` - A _ClassValueName_ should be usable in any type position that a class declaration name could be used in, and - A _ClassValueName_ used in a type position should be interpreted as the instance type of the class value, in the same way that a class declaration name used in a type position is interpreted as the instance type of the class declaration ### Examples of proposed behaviour ```typescript // `Foo` is of type `new () => {}` const Foo = class {}; // `Foo` can be used in a type position here, and `foo` is of type `{}` const foo: Foo = new Foo(); ``` ```typescript // `Foo` is of type `new <T>(value: T) => { value: T }` const Foo = class <T> { constructor(public value: T) {} }; // `Foo` can be used in a type position here (and is generic), and `foo` is of type `{ value: number }` const foo: Foo<number> = new Foo(42); ``` ```typescript function newFooClass() { return class <T> { constructor(public value: T) {} }; } // `Foo` is of type `new <T>(value: T) => { value: T }` const Foo = newFooClass(); const foo: Foo<number> = new Foo(42); ``` ```typescript const classes = { Foo: class <T> { constructor(public value: T) {} } }; // 
`Foo` is of type `new <T>(value: T) => { value: T }` const { Foo } = classes; const foo: Foo<number> = new Foo(42); ``` ```typescript const withBrand = <B extends string>(brand: B) => <C extends new (...args: any[]) => {}>(ctor: C) => class extends ctor { brand: B = brand; }; const Foo = class <T> { constructor(public value: T) {} }; // `FooWithBrand` is of type `new <T>(value: T) => ({ value: T } & { brand: 'Foo' })` const FooWithBrand = withBrand('Foo')(Foo); // `FooWithBrand` can be used in a type position here (and is generic), and `fooWithBrand` is of type `{ value: number } & { brand: 'Foo' }` const fooWithBrand: FooWithBrand<number> = new FooWithBrand(42); ``` ## Why? ### Unlike a class declaration, a class value requires a separate type declaration to expose the class's instance type For class declarations, we can simply use the name of the class in a type position to reference its instance type: ```typescript class Foo {} const foo: Foo = new Foo(); ``` But for class values, a separate type declaration is required: ```typescript const Foo = class {}; type Foo = InstanceType<typeof Foo>; const foo: Foo = new Foo(); ``` Requiring a separate type declaration has a few issues: - It's inconsistent with class declarations (which don't require a manual type declaration) - It doesn't work for generic classes (see next section) - It's boilerplate (which adds up, especially with multiple classes in the same file) With this proposal however, we wouldn't need a separate type declaration, and all of the following would just work: ```typescript const Foo = class {}; const foo: Foo = new Foo(); ``` ```typescript function newFooClass() { return class {}; } const Foo = newFooClass(); const foo: Foo = new Foo(); ``` ```typescript const classes = { Foo: class {} }; const { Foo } = classes; const foo: Foo = new Foo(); ``` ### There is currently no way to access the generic instance type of a generic class value None of the following work: ```typescript const Foo = class <T> { 
constructor(public value: T) {} }; const foo: Foo<number> = new Foo(42); // => Error: 'Foo' refers to a value, but is being used as a type here. ``` ```typescript const Foo = class <T> { constructor(public value: T) {} }; type Foo = InstanceType<typeof Foo>; const foo: Foo<number> = new Foo(42); // => Error: Type 'Foo' is not generic. ``` ```typescript const Foo = class <T> { constructor(public value: T) {} }; type Foo<T> = InstanceType<typeof Foo<T>>; // => Error: '>' expected. const foo: Foo<number> = new Foo(42); ``` ```typescript const Foo = class <T> { constructor(public value: T) {} }; type Foo<T> = InstanceType<typeof Foo><T>; // => Error: ';' expected. const foo: Foo<number> = new Foo(42); ``` With this proposal however, we could simply use the name of the class value in a type position to reference its generic instance type: ```typescript const Foo = class <T> { constructor(public value: T) {} }; const foo: Foo<number> = new Foo(42); // => No error ``` ### It enables a potential workaround for [#4881](https://github.com/microsoft/TypeScript/issues/4881) This doesn't work: ```typescript const withBrand = <B extends string>(brand: B) => <C extends new (...args: any[]) => {}>(ctor: C) => class extends ctor { brand: B = brand; }; @withBrand('Foo') class FooWithBrand<T> { constructor(readonly value: T) {} } type FooBrand<T> = FooWithBrand<T>['brand']; // => Error: Property 'brand' does not exist on type 'FooWithBrand<T>'. ``` But with this proposal, we could do this: ```typescript const withBrand = <B extends string>(brand: B) => <C extends new (...args: any[]) => {}>(ctor: C) => class extends ctor { brand: B = brand; }; const FooWithBrand = withBrand('Foo')( class <T> { constructor(public value: T) {} } ); type FooBrand<T> = FooWithBrand<T>['brand']; ``` While this wouldn't be type support for actual decorators, it would at least provide _a_ means of class decoration that's reflected at the type level. 
### Flow supports this (there is prior art) The following all work in Flow: ```typescript const Foo = class {}; const foo: Foo = new Foo(); ``` ```typescript const Foo = class <T> { value: T; constructor(value: T) { this.value = value; } }; const foo: Foo<number> = new Foo(12); ``` ```typescript function newFooClass() { return class {}; } const Foo = newFooClass(); const foo: Foo = new Foo(); ``` ```typescript function newFooClass() { return class <T> { value: T; constructor(value: T) { this.value = value; } }; } const Foo = newFooClass(); const foo: Foo<number> = new Foo(42); ``` ```typescript const classes = { Foo: class {} }; const { Foo } = classes; const foo: Foo = new Foo(); ``` ```typescript const classes = { Foo: class <T> { value: T; constructor(value: T) { this.value = value; } } }; const { Foo } = classes; const foo: Foo<number> = new Foo(42); ``` While feature parity with Flow is obviously not one of TypeScript's goals, Flow supporting this behaviour means that there's at least a precedent for it. ## Why not? ### It _may_ be a breaking change Depending on the implementation approach, this may be a breaking change. For example, if this proposal were to be implemented by having the compiler automagically generate a separate type name whenever it encountered a _ClassValueName_, the generated type name may clash with an existing type name and break that currently working code. On the other hand, if it were possible to implement this proposal using some kind of fallback mechanism (such as preferring any existing type name over using a _ClassValueName_ in a type position, for instance), then the change would be backwards compatible. It is currently unclear whether or not the proposed change can be implemented in a backwards compatible manner. ### It "muddies" the separation between the value and type namespaces TypeScript maintains separate namespaces for values and types. 
While this separation is fairly core to the language, there are already several exceptions in the form of class declarations, enums, and namespaces. This proposal would introduce a new exception to that separation. This exception is particularly notable, as it would mark the first time a `const`, `let`, or `var`-declared name would be permitted in a type position. This is in contrast to all current exceptions, which each have construct-specific declaration syntax that differentiates them from standard variable declarations. This proposal's "muddying" of the separation between the value and type namespaces may be confusing and/or surprising for both new and existing TypeScript users. ## Next steps - [ ] Get feedback from both the TypeScript team and the community - [ ] Investigate potential implementation strategies ## Checklist My suggestion meets these guidelines: * [ ] This wouldn't be a breaking change in existing TypeScript/JavaScript code (**unclear at this stage**) * [x] This wouldn't change the runtime behavior of existing JavaScript code * [x] This could be implemented without emitting different JS based on the types of the expressions * [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.) * [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
Suggestion,In Discussion
medium
Critical
553,428,664
flutter
GestureDetector's velocity is always 0.0 on iOS with VoiceOver.
Device: IPhone8 OS: IOS 1. Turn on VoiceOver. 2. Swipe with three fingers. **Expected results:** Velocity from GestureDetector. **Actual results:** Velocity is 0.0. ```dart GestureDetector( onHorizontalDragEnd: (details) { print('horizontal'); print(details.primaryVelocity); // 0.0 }, onVerticalDragEnd: (details) { print('vertical'); print(details.primaryVelocity); // 0.0 }, ); ``` <details> <summary>Logs</summary> **flutter run --verbose** ``` [ +5 ms] flutter: horizontal [ +1 ms] Notification from VM: {streamId: Stdout, event: {type: Event, kind: WriteEvent, isolate: {type: @Isolate, id: isolates/4114029510212103, name: main, number: 4114029510212103}, timestamp: 1579686758678, bytes: Cg==}} [ ] Notification from VM: {streamId: Stdout, event: {type: Event, kind: WriteEvent, isolate: {type: @Isolate, id: isolates/4114029510212103, name: main, number: 4114029510212103}, timestamp: 1579686758678, bytes: Zmx1dHRlcjogMC4w}} [ ] Notification from VM: {streamId: Stdout, event: {type: Event, kind: WriteEvent, isolate: {type: @Isolate, id: isolates/4114029510212103, name: main, number: 4114029510212103}, timestamp: 1579686758678, bytes: Cg==}} [ ] flutter: 0.0 ``` **flutter analyze** No analyze issues. 
**flutter doctor -v** ``` [βœ“] Flutter (Channel stable, v1.12.13+hotfix.5, on Mac OS X 10.15.2 19C57, locale ko-KR) β€’ Flutter version 1.12.13+hotfix.5 at /Users/maximilian/devkit/flutter β€’ Framework revision 27321ebbad (6 weeks ago), 2019-12-10 18:15:01 -0800 β€’ Engine revision 2994f7e1e6 β€’ Dart version 2.7.0 [βœ“] Android toolchain - develop for Android devices (Android SDK version 28.0.3) β€’ Android SDK at /Users/maximilian/Library/Android/sdk β€’ Android NDK location not configured (optional; useful for native profiling support) β€’ Platform android-28, build-tools 28.0.3 β€’ Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java β€’ Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1248-b01) β€’ All Android licenses accepted. [βœ“] Xcode - develop for iOS and macOS (Xcode 11.3.1) β€’ Xcode at /Applications/Xcode.app/Contents/Developer β€’ Xcode 11.3.1, Build version 11C504 β€’ CocoaPods version 1.8.4 [βœ“] Android Studio (version 3.3) β€’ Android Studio at /Applications/Android Studio.app/Contents β€’ Flutter plugin version 33.4.1 β€’ Dart plugin version 182.5215 β€’ Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1248-b01) [βœ“] Connected device (1 available) β€’ iPhone β€’ 5ac729d286d9125808cfeb83e9a358fd3722602e β€’ ios β€’ iOS 13.3 β€’ No issues found! ``` </details>
platform-ios,framework,a: accessibility,f: gestures,has reproducible steps,P2,found in release: 3.7,found in release: 3.8,team-ios,triaged-ios
low
Major
553,507,571
pytorch
TracerWarning When Using Tensor Size in Torchscript Trace
## πŸ› Bug When the size of a tensor during a Torchscript trace is used to compare the size of a dimension against a Python integer, a tracer warning is issued stating that converting a tensor to a Python boolean might cause the trace to be incorrect. A tensor is not being converted to a boolean, the output from size() or shape is a torch.Size object and not a tensor so this operation should be safe or the warning message is not correct. ## To Reproduce The following script will reproduce the behaviour, specifically the second assert where the size of dimension 0 is being compared to the Python literal 0: import torch class JITTest(torch.nn.Module): def forward(self,x): size=x.size() # or x.shape assert not isinstance(size,torch.Tensor) assert size[0]>0 # <- TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. return x net=JITTest() scripted=torch.jit.trace(net,torch.rand((10,10))) Output is: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! This appears to be stating that `size` is a tensor and that the operation `size[0] > 0` is implicitly converting the tensor to a boolean using the `>` operator. Either trace has misidentified `size` as a tensor or this operation is not properly traceable but the issued warning incorrectly states what the problem is. ## Expected behaviour I expected no warning to be issued and that any operation on the size of a tensor to be correct. 
I would expect code like the following to produce a correct trace: if x.shape[0] == 1: do_something_for_single_channel() else: do_something_for_multi_channel() ## Environment PyTorch version: 1.3.1 Is debug build: No CUDA used to build PyTorch: 10.1.243 OS: Ubuntu 16.04.6 LTS GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609 CMake version: version 3.12.1 Python version: 3.7 Is CUDA available: Yes CUDA runtime version: Could not collect GPU models and configuration: GPU 0: GeForce GTX 980 GPU 1: TITAN X (Pascal) Nvidia driver version: 418.67 cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.5.1 Versions of relevant libraries: [pip] Could not collect [conda] _pytorch_select 0.1 cpu_0 [conda] blas 1.0 mkl [conda] ignite 0.2.1 py37_0 pytorch [conda] mkl 2019.4 243 [conda] mkl-service 2.3.0 py37he904b0f_0 [conda] mkl_fft 1.0.15 py37ha843d7b_0 [conda] mkl_random 1.1.0 py37hd6b4f25_0 [conda] pytorch 1.3.1 py3.7_cuda10.1.243_cudnn7.6.3_0 pytorch [conda] torchvision 0.4.2 cpu_py37h9ec355b_0 cc @suo
oncall: jit,triaged
low
Critical
553,522,707
flutter
frameBuilder or loadingProgressBuilder error handling support
## Use case The spec for image loading for our product specifies that there is a different placeholder image shown before an image is loaded and when the image loading fails. Let's call it an errorPlaceholder. Implementing this with the current Image widget is not possible because there's no way to access the onError callback from the underlying ImageStream. Currently, this requires re-implementing the Image widget. ## Proposal Add a way in frameBuilder or the loadingProgressBuilder to access the onError callback of the underlying stream.
framework,customer: dream (g3),c: proposal,P3,team-framework,triaged-framework
low
Critical
553,546,171
rust
SIMD-enabled utf-8 validation
## Introduction The ["Parsing Gigabytes of JSON per second"](https://branchfree.org/2019/02/25/paper-parsing-gigabytes-of-json-per-second/) post ([ArXiv - langdale, lemire](https://t.co/MgQINMJlNB?amp=1)) proposes a novel approach for parsing JSON that is fast enough that on many systems it moves the bottleneck to the disk and network instead of the parser. This is done through the clever use of SIMD instructions. Something that stood out to me from the post is that JSON is required to be valid utf-8, and they had come up with new algorithms to validate utf-8 using SIMD instructions that function *much* faster than conventional approaches. Since rustc does a *lot* of utf-8 validation (each `.rs` source file needs to be valid utf-8), it got me curious about what rustc currently does. Validation seems to be done by the following routine: https://github.com/rust-lang/rust/blob/2f688ac602d50129388bb2a5519942049096cbff/src/libcore/str/mod.rs#L1500-L1618 This doesn't appear to use SIMD anywhere, not even conditionally. But it's run a *lot*, so I figured it might be interesting to use a more efficient algorithm for. ## Performance improvements The post ["Validating UTF-8 strings using as little as 0.7 cycles per byte"](https://lemire.me/blog/2018/05/16/validating-utf-8-strings-using-as-little-as-0-7-cycles-per-byte/) shows about an order of magnitude performance improvement on validating utf-8, going from `8` cycles per byte parsed to `0.7` cycles per byte parsed. When passing Rust's validation code through the godbolt decompiler, `from_utf8_unchecked` outputs 7 instructions, and `from_utf8` outputs 57 instructions. In the case of `from_utf8` most instructions seem to occur inside a loop. Which makes it likely we'll be able to observe a performance improvement by using a SIMD-enabled utf-8 parsing algorithm. Especially since economies of scale would apply here -- it's not uncommon for the compiler to parse several million bytes of input in a run. 
Any improvements here would quickly add up. - [assembly for str::from_utf8_unchecked (godbolt) - 7 lines](https://godbolt.org/z/Y9mwfd) - [assembly for str::from_utf8 (godbolt) - 57 lines](https://godbolt.org/z/ZJk8mL) - [assembly for run_utf8_validation routine (godbolt) - 183 lines](https://godbolt.org/z/sQteLm) _All examples linked have been compiled with `-O -C target-cpu=native`._ Also ecosystem libraries such as `serde_json` perform utf-8 [validation in several locations](https://github.com/serde-rs/json/search?q=utf8&unscoped_q=utf8), so would likely also benefit from performance improvements to Rust's utf-8 validation routines. ## Implementation There are two known Rust implementations of lemire's algorithm available in Rust today: - [simd-lite/simdjson-rs](https://github.com/simd-lite/simdjson-rs) - [argnidagur/rust-isutf8](https://github.com/ArniDagur/rust-isutf8) The latter even includes benchmarks against the compiler's algorithm (which makes it probable I'm not be the first person to think of this). But I haven't been able to successfully compile the benches, so I don't know how they stack up against the current implementation. I'm not overly familiar with rustc's internals. But it seems we would likely want to keep the current algorithm, and through feature detection enable SIMD algorithms. The `simdjson` library has different algorithms for different architectures, but we could probably start with instructions that are widely available and supported on tier-1 targets (such as `AVX2`). These changes wouldn't require an RFC because no APIs would change. The only outcome would be a performance improvement. ## Future work [Lemire's post](https://lemire.me/blog/2018/05/16/validating-utf-8-strings-using-as-little-as-0-7-cycles-per-byte/) also covers parsing ASCII in as little as 0.1 cycles per byte parsed. Rust's current ASCII validation algorithm validates bytes one at the time, and could likely benefit from similar optimizations. 
https://github.com/rust-lang/rust/blob/2f688ac602d50129388bb2a5519942049096cbff/src/libcore/str/mod.rs#L4136-L4141 Speeding this up would likely have ecosystem implications as well. For example HTTP headers must be valid ASCII, and are often performance sensitive. If the stdlib sped up ASCII validation, it would likely benefit the wider ecosystem as well. ## Conclusion In this issue I propose to use a SIMD-enabled algorithm for utf-8 validation in rustc. This seems like an interesting avenue to explore since there's a reasonable chance it might yield a performance improvement for many rust programs. I'm somewhat excited to have stumbled upon this, but was also surprised no issue had been filed for this yet. I'm a bit self-aware posting this since I'm not a rustc compiler engineer; but I hope this proves to be useful! cc/ @jonas-schievink @nnethercote ## References - [Parsing Gigabytes of JSON per second](https://branchfree.org/2019/02/25/paper-parsing-gigabytes-of-json-per-second/) - [simd-lite/simdjson-rs](https://github.com/simd-lite/simdjson-rs) - [argnidagur/rust-isutf8](https://github.com/ArniDagur/rust-isutf8) - [lemire/simdjson](https://github.com/lemire/simdjson) - [Validating UTF-8 strings using as little as 0.7 cycles per byte](https://lemire.me/blog/2018/05/16/validating-utf-8-strings-using-as-little-as-0-7-cycles-per-byte/) - [assembly for str::from_utf8_unchecked (godbolt) - 7 lines](https://godbolt.org/z/Y9mwfd) - [assembly for str::from_utf8 (godbolt) - 57 lines](https://godbolt.org/z/ZJk8mL) - [assembly for run_utf8_validation routine (godbolt) - 183 lines](https://godbolt.org/z/sQteLm)
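To make the "wider checks" idea concrete, here is a minimal, portable SWAR (SIMD-within-a-register) sketch of an ASCII fast path using only stable `std`. This is *not* Lemire's algorithm and uses no SIMD intrinsics — it only illustrates how testing eight bytes per iteration amortizes the per-byte work that a byte-at-a-time loop pays:

```rust
use std::convert::TryInto; // in the 2021-edition prelude; explicit for older editions

/// Returns true if `bytes` is entirely ASCII.
/// Checks eight bytes at a time by loading them as a u64 and testing the
/// high bit of every lane with a single mask, then handles the tail
/// (fewer than 8 trailing bytes) one byte at a time.
fn is_ascii_swar(bytes: &[u8]) -> bool {
    let mut chunks = bytes.chunks_exact(8);
    for chunk in &mut chunks {
        // `chunks_exact` guarantees exactly 8 bytes, so try_into cannot fail.
        let word = u64::from_ne_bytes(chunk.try_into().unwrap());
        // Any byte >= 0x80 sets its high bit, which this mask detects.
        if word & 0x8080_8080_8080_8080 != 0 {
            return false;
        }
    }
    chunks.remainder().iter().all(|&b| b < 0x80)
}
```

A production version would additionally dispatch to AVX2/SSE variants via runtime feature detection and fall back to something like this on other targets.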
C-enhancement,A-Unicode,T-libs-api,A-SIMD,T-libs,A-target-feature
high
Major
553,560,303
vscode
Editor title being read out once suggestion is accepted
found by @pawelurbanski 1. Open the suggest widget 2. Accept a suggestion 3. Editor aria label is being read out :bug: The issue, I believe, is that focus moves back to the editor, which makes screen readers read out the editor aria label. I am not sure how to tackle this and am open to suggestions @pawelurbanski Assigning to January so I look into fixing it this week. Can reproduce both with NVDA and with VoiceOver on macOS.
bug,upstream,macos,accessibility,windows
medium
Critical
553,565,326
pytorch
RuntimeError: has_marked_unused_parameters_ INTERNAL ASSERT FAILED at /pytorch/torch/csrc/distributed/c10d/reducer.cpp:290, please report a bug to PyTorch.
## πŸ› Bug ``` Phase=train CNN_AE_supervised( (encoder): CNN_encoder( (conv1): Conv2d(1, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (conv2): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (conv3): Conv2d(16, 3, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) ) (decoder): CNN_decoder( (conv4): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (conv5): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (conv6): Conv2d(16, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) ) (readout): BN_readout( (flatten): Flatten() (fcn1): Linear(in_features=504, out_features=24, bias=True) (relu1): ReLU() (dropout1): Dropout(p=0.3, inplace=False) (fcn2): Linear(in_features=24, out_features=1, bias=True) ) ) Epoch 0/2 ---------- Phase=train 2020-01-22 13:14:16,093 sagemaker-containers ERROR ExecuteUserScriptError: Command "/opt/conda/bin/python -m train_ptsrep --backend gloo --epochs 3 --lr 0.01 --seed 42" Traceback (most recent call last): File "/opt/conda/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/opt/conda/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/opt/ml/code/train_ptsrep.py", line 264, in <module> train(parser.parse_args()) File "/opt/ml/code/train_ptsrep.py", line 154, in train best_ae_model = dl_procedures.create_and_train_AE_supervised(data_loader, device, dataset_sizes, args.epochs, args.lr, is_distributed, use_cuda) File "/opt/ml/code/dl_procedures.py", line 320, in create_and_train_AE_supervised num_epochs=num_epochs,model_name="supervised_ae", is_distributed=is_distributed, use_cuda=use_cuda) File "/opt/ml/code/dl_procedures.py", line 111, in train_AE_model bce_loss.backward() File "/opt/conda/lib/python3.6/site-packages/torch/tensor.py", line 118, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph) File 
"/opt/conda/lib/python3.6/site-packages/torch/autograd/__init__.py", line 93, in backward allow_unreachable=True) # allow_unreachable flag RuntimeError: has_marked_unused_parameters_ INTERNAL ASSERT FAILED at /pytorch/torch/csrc/distributed/c10d/reducer.cpp:290, please report a bug to PyTorch. 2020-01-22 13:14:16,003 sagemaker-containers ERROR ExecuteUserScriptError: Command "/opt/conda/bin/python -m train_ptsrep --backend gloo --epochs 3 --lr 0.01 --seed 42" Traceback (most recent call last): File "/opt/conda/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/opt/conda/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/opt/ml/code/train_ptsrep.py", line 264, in <module> train(parser.parse_args()) File "/opt/ml/code/train_ptsrep.py", line 154, in train best_ae_model = dl_procedures.create_and_train_AE_supervised(data_loader, device, dataset_sizes, args.epochs, args.lr, is_distributed, use_cuda) File "/opt/ml/code/dl_procedures.py", line 320, in create_and_train_AE_supervised num_epochs=num_epochs,model_name="supervised_ae", is_distributed=is_distributed, use_cuda=use_cuda) File "/opt/ml/code/dl_procedures.py", line 111, in train_AE_model bce_loss.backward() File "/opt/conda/lib/python3.6/site-packages/torch/tensor.py", line 118, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph) File "/opt/conda/lib/python3.6/site-packages/torch/autograd/__init__.py", line 93, in backward allow_unreachable=True) # allow_unreachable flag RuntimeError: has_marked_unused_parameters_ INTERNAL ASSERT FAILED at /pytorch/torch/csrc/distributed/c10d/reducer.cpp:290, please report a bug to PyTorch. 
2020-01-22 13:14:18 Uploading - Uploading generated training model 2020-01-22 13:14:48 Failed - Training job failed ``` ## To Reproduce ``` def train_AE_model(model, device, dataloaders,dataset_sizes,optimizer, supervised_criterion=None, scheduler=None, num_epochs=25,model_name="default_model", is_distributed=False, use_cuda=False): since = time.time() best_model_wts = copy.deepcopy(model.state_dict()) best_acc = 0.0 best_loss=1000000000000.0 finished_batches = 0 for epoch in range(num_epochs): # Each epoch has a training and validation phase print('Epoch {}/{}'.format(epoch, num_epochs - 1)) print('-' * 10) all_labels = {} for phase in ['train', 'dev']: print('Phase={}'.format(phase)) all_labels[phase]=[] if phase == 'train': model.train() # Set model to training mode else: model.eval() # Set model to evaluate mode. eval mode fixes dropout, batchnorm etc running_loss = 0.0 running_corrects = 0 # Iterate over data. max_batches = 3 finished_batches = 0 for (inputs,labels) in dataloaders[phase]: # tqdm(dataloaders[phase]) # if finished_batches > max_batches: # break all_labels[phase].extend(copy.deepcopy(labels)) inputs = inputs.float().to(device) labels = labels.to(device) # zero the parameter gradients optimizer.zero_grad() # forward # track history if only in train with torch.set_grad_enabled(phase == 'train'): if supervised_criterion is not None: # supervised AE outputs,readout = model(inputs) preds = readout.clone().detach().squeeze() > 0.0 preds = preds.float() loss = binary_MSE_plus_loss(outputs,inputs,debug=False) bce_loss = supervised_criterion(torch.squeeze(readout), labels.float()) else: # plain AE outputs = model(inputs) loss = binary_MSE_plus_loss(outputs,inputs,debug=False) # backward + optimize only if in training phase if phase == 'train': if supervised_criterion is not None: loss.backward(retain_graph=True) bce_loss.backward() else: loss.backward() """ if is_distributed and not use_cuda: # average gradients manually for multi-machine cpu case only 
_average_gradients(model) """ optimizer.step() # statistics running_loss += loss.item() * inputs.size(0) finished_batches = finished_batches+1 if supervised_criterion is not None: running_corrects += torch.sum(preds.long() == labels.data) # Recommended for performance: # Flush the pytorch cache # https://discuss.pytorch.org/t/why-the-training-slow-down-with-time-if-training-continuously-and-gpu-utilization-begins-to-jitter-dramatically/11444 if finished_batches % 50 == 0: torch.cuda.empty_cache() del(labels) del(inputs) finished_batches = finished_batches + 1 epoch_loss = running_loss #/ dataset_sizes[phase] if supervised_criterion is not None: epoch_acc = running_corrects.double() / dataset_sizes[phase] print('{} Loss: {:.4f}, acc={:.4f}'.format(phase, epoch_loss,epoch_acc)) else: print('{} Loss: {:.4f}'.format(phase, epoch_loss)) # save the best model. Base "best-ness" on dev set loss if phase=="dev" and epoch_loss < best_loss: best_running_loss = running_loss best_model_wts = copy.deepcopy(model.state_dict()) # save the best model (overwritten each time a better one is found) torch.save(best_model_wts,"{}.best_loss.state".format(model_name)) if phase=="dev" and (supervised_criterion is not None) and epoch_acc > best_acc: best_acc = epoch_acc best_acc_model_wts = copy.deepcopy(model.state_dict()) # save the best model (overwritten each time a better one is found) torch.save(best_model_wts,"{}.best_acc.state".format(model_name)) print() # write the model of each epoch as well # only for debugging (as these can be large for large networks) # torch.save(model.state_dict(), "{}_E{}.state".format(model_name,epoch)) time_elapsed = time.time() - since print('Training complete in {:.0f}m {:.0f}s'.format( time_elapsed // 60, time_elapsed % 60)) # load best model weights model.load_state_dict(best_model_wts) return model ``` ``` def create_and_train_AE(data_loader,device,dataset_sizes,num_epochs=5,model_name="plain_ae"): # get network sizes inputs,labels = 
iter(data_loader["train"]).next() inputs = inputs.float() one_data = torch.unsqueeze(inputs[0,:,:],0).float() encoder = dl_models.CNN_encoder() encoded,encoder_sizes = encoder(one_data,dry_run=True) bottleneck_size=np.prod(np.array(encoded.size())[1:]) # build model proper ae = dl_models.CNN_AE(encoder_sizes=encoder_sizes) supervised_criterion=torch.nn.BCEWithLogitsLoss() optimizer = torch.optim.SGD(ae.parameters(), lr=0.01, momentum=0.9) ae.to(device) # train model best_ae_model = train_AE_model(ae,device,data_loader,dataset_sizes,optimizer, supervised_criterion=None,scheduler=None, num_epochs=num_epochs,model_name=model_name) `` ``` I am trying to train on Sagemaker Pytorch container a supervised convolutional autoenconder (i.e. there is a loss function for the convolutional autoencoder and one for the ANN that comes after it). I get the error above and I cannot figure out what is wrong with my code. It was working before trying to do distributed training. I found this bug report https://github.com/pytorch/pytorch/issues/31035 but in my case .backward() is only called once so it must be something else. Maybe @pietern or someone else can this time also help. cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @xush6528 @osalpekar
oncall: distributed,triaged
low
Critical
553,594,695
go
libgo: SEGV in runtime test TestChan on ppc64le
<!-- Please answer these questions before submitting your issue. Thanks! For questions please use one of our forums: https://github.com/golang/go/wiki/Questions --> ### What version of Go are you using (`go version`)? <pre> $ go version go version go1.14beta1 gccgo (GCC) 10.0.1 20200122 (experimental) linux/ppc64le </pre> ### Does this issue reproduce with the latest release? This started happening in Go 1.13 and was reported in https://gcc.gnu.org/bugzilla/show_bug.cgi?id=92564. Continues to happen in Go 1.14beta1 ### What operating system and processor architecture are you using (`go env`)? <details><summary><code>go env</code> Output</summary><br><pre> $ go env linux/ppc64le </pre></details> ### What did you do? Run the libgo tests <!-- If possible, provide a recipe for reproducing the error. A complete runnable program is good. A link on play.golang.org is best. --> ### What did you expect to see? No failures ### What did you see instead? === RUN TestChan fatal error: unexpected signal during runtime execution [signal SIGSEGV: segmentation violation code=0x2 addr=0x7214a61c0000 pc=0x1008eb24] More details, stacks, and gdb information can be found in https://gcc.gnu.org/bugzilla/show_bug.cgi?id=92564
NeedsInvestigation
medium
Critical
553,596,630
rust
Error message [E0423] "expected value, found built-in attribute `start`" is confusing
[Playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=d80302b4633e9c6b07744fd31e1a2330): ```rust fn one_char_range(s: &str, offset: usize) -> &str { &s[start..start + 1] } ``` ``` error[E0423]: expected value, found built-in attribute `start` --> src/lib.rs:2:8 | 2 | &s[start..start + 1] | ^^^^^ not a value ``` rustc thinks maybe I mean `#[start]`. But that is silly. And it was confusing to me as a user. I know there's an attribute `#[start]`, but presented like that without the `#[]` punctuation, I didn't recognize it, even though the message says exactly what it means! `rustc --explain E0423` didn't help. I think E0423 is confusing here, and rustc should fall back on E0425 ("cannot find the value `start` in scope").
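For reference, the snippet presumably meant to use its `offset` parameter; with the identifier fixed there is no `start` in scope to collide with the built-in attribute, and the code compiles without E0423:

```rust
fn one_char_range(s: &str, offset: usize) -> &str {
    // Note: this slices by byte index, so it panics if `offset` or
    // `offset + 1` doesn't fall on a char boundary (fine for ASCII input).
    &s[offset..offset + 1]
}
```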
C-enhancement,A-diagnostics,A-resolve,T-compiler,D-confusing
low
Critical
553,606,245
TypeScript
Declaration includes incorrect 'any' types for inferred nested property members
**TypeScript Version:** 3.5.3 **Search Terms:** declaration any inferred type **Code** I'm using some helper functions to construct type guards where the type predicate is inferred: ```ts const isObject = (value: unknown): value is { [key: string]: unknown } => typeof value === 'object' && value !== null const isObjectWith = <S>( predicates: { readonly [P in keyof Required<S>]: (value: unknown) => value is S[P] }, ) => (value: unknown): value is S => isObject(value) && entries(predicates).every(([k, p]) => p(value[k])) const isNumber = (x: unknown): x is number => typeof x === 'number' const entries = <V extends object>(object: V): ReadonlyArray<readonly [string & keyof V, V[string & keyof V]]> => Object.entries(object) as Array<[string & keyof V, V[string & keyof V]]> export const isFoo = isObjectWith({ a: isNumber, }) export const isBar = isObjectWith({ b: isFoo, }) ``` **Expected behavior:** The `.d.ts` file should contain all the inferred property types: ``` export declare const isFoo: (value: unknown) => value is { a: number; }; export declare const isBar: (value: unknown) => value is { b: { a: number; }; }; //# sourceMappingURL=repro.d.ts.map ``` **Actual behavior:** The `.d.ts` file contains `any` for the nested property type: ``` export declare const isFoo: (value: unknown) => value is { a: number; }; export declare const isBar: (value: unknown) => value is { b: { a: any; }; }; //# sourceMappingURL=repro.d.ts.map ``` Interestingly, if I force TypeScript to "evaluate" the inferred type using a mapped type, I get the correct result: ``` const _isFoo = isObjectWith({ a: isNumber, }) type GuardedType<T> = T extends (value: any) => value is infer S ? { [K in keyof S]: S[K] } : never export const isFoo: (value: unknown) => value is GuardedType<typeof _isFoo> = _isFoo ``` **Related Issues:** https://github.com/microsoft/TypeScript/issues/19565
Bug
low
Minor
553,713,991
pytorch
Don't unnecessarily send cleanup dist autograd context RPCs to other nodes
## Enhancement: Don't unnecessarily send cleanup dist autograd context RPCs to other nodes Currently, if node A sends an RPC to B to clean up a given dist autograd context, B will propagate this information and send out its own messages too. B may then pointlessly send a message back to A, creating an unneeded extra RPC. We can avoid this by sending over the workerId we're getting this message from. This would save the overhead of one additional RPC per cleanup request. cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @rohan-varma @xush6528 @jjlilley @osalpekar
module: bootcamp,triaged,better-engineering,module: rpc
low
Minor
553,732,942
TypeScript
JSDoc doesn't show param info for parameter properties using @param
*TS Template added by @mjbvz* **TypeScript Version**: 3.8.0-beta **Search Terms** - quickInfo - jsdoc --- <!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ --> <!-- Please search existing issues to avoid creating duplicates. --> <!-- Also please test using the latest insiders build to make sure your issue has not already been fixed: https://code.visualstudio.com/insiders/ --> <!-- Use Help > Report Issue to prefill these. --> - VSCode Version: 1.41.1 (Commit: 26076a4de974ead31f97692a0d32f90d735645c0) - OS Version: MacOS Catalina 10.15 (19A602) Steps to Reproduce: 1. Create a function with a single parameter 2. Add documentation to it with object parameter properties contained. ```javascript /** * Shows a dialog to the user with the given title, content, and performs the onAcceptClick * on the accept button press * * @param {Object} payload - The dialog information to show the user * @param {string} payload.title - Title of dialog * @param {string} payload.content - Content of the dialog * @param {string} [payload.acceptText=null] - Text shown on the accept button * @param {Function} [payload.onAcceptClick=null] - Action performed when accept is clicked * @param {string} [payload.cancelText=null] - Text shown on the cancel button * @param {Function} [payload.onCancelClick=null] - Action performed when cancel is clicked */ export const showDialog = (payload) => ({ type: constants.UPDATE_DIALOG, payload, }) ``` All the parameters show for the method signatures but none of the inner parameter descriptions show. e.g. payload.cancelText does not show the text "Text shown on the cancel button" in the hover event dialog. ![Sample](https://user-images.githubusercontent.com/9369196/72921291-04aaca80-3d19-11ea-9b2d-fa34fc932d34.png) Note as well. Following the sytax defined [here](https://jsdoc.app/tags-param.html#optional-parameters-and-default-values) the default values do not show anywhere within the pop up. 
(I can make a separate issue if needed)
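A workaround worth trying (hedged: this is a sketch of a commonly suggested pattern, not a confirmed fix for this hover bug) is to destructure the object in the function signature so each property gets its own named binding; the constant is inlined here to keep the sketch self-contained:

```javascript
/**
 * Variant of showDialog that destructures its options, so each property is a
 * named binding the editor can document individually.
 *
 * @param {Object} payload - The dialog information to show the user
 * @param {string} payload.title - Title of dialog
 * @param {string} payload.content - Content of the dialog
 * @param {string} [payload.acceptText] - Text shown on the accept button
 */
const showDialog = ({ title, content, acceptText = null }) => ({
  type: 'UPDATE_DIALOG', // stands in for constants.UPDATE_DIALOG
  payload: { title, content, acceptText },
});

const action = showDialog({ title: 'Hi', content: 'Hello there' });
console.log(action.type); // UPDATE_DIALOG
```

Hovering the destructured bindings at least shows their inferred types; whether the per-property descriptions surface may still depend on the TypeScript version.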
Suggestion,Domain: Quick Info,Experience Enhancement
low
Major
553,741,608
godot
Fails to capture microphone input on macOS - AudioUnitRender error -10863
**Godot version:** 3.1.1.stable.custom_build **OS/device including version:** MacMini - macOS Catalina 10.15.2 **Issue description:** When trying to record microphone input on macOs it throws an AudioUnitRender error -10863. ![](https://i.imgur.com/g3jVF58.png) FYI, I already patched the microphone permission on #34338. **Steps to reproduce:** Download official demo project Mic Record Demo, plugin a microphone and run the scene. If you disable Autoplay from AudioStreamRecord it doesn't throw this error. **Minimal reproduction project:** https://github.com/godotengine/godot-demo-projects/tree/master/audio/mic_record
bug,platform:macos,topic:audio
low
Critical
553,744,282
rust
Code generation quality for a recursive function
After reading [this blog post](https://thomashartmann.dev/blog/feature(slice_patterns)/) recently, I was very pleasantly surprised by the quality of the code generation for the following function: ```rust fn middle(xs: &[u32]) -> Option<&u32> { match xs { [_, inner @ .., _] => middle(inner), [x] => Some(x), [] => None, } } ``` ```asm example::middle: cmp rsi, 2 jb .LBB0_2 add rsi, -2 mov rax, rsi and rax, -2 lea rdi, [rdi + 2*rax] add rdi, 4 and esi, 1 .LBB0_2: xor eax, eax cmp rsi, 1 cmove rax, rdi ret ``` It is amazing what the compiler is able to achieve! But then, I tried this very slight variation: ```rust pub fn middle(xs: &[u32]) -> Option<u32> { match xs { [_, inner @ .., _] => middle(inner), [x] => Some(*x), [] => None, } } ``` (the only difference is that we are returning an `u32` instead of an `&u32`) And the generated code changes dramatically: ```asm example::middle: push rax cmp rsi, 1 jbe .LBB0_1 add rdi, 4 add rsi, -2 call qword ptr [rip + example::middle@GOTPCREL] pop rcx ret .LBB0_1: cmp rsi, 1 jne .LBB0_2 mov edx, dword ptr [rdi] mov eax, 1 pop rcx ret .LBB0_2: xor eax, eax pop rcx ret ``` Is there something making the optimization harder to apply in the second case, or is it a bug somewhere in the compiler?
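One way to probe where the regression comes from (a sketch; `middle_iter` is a name I made up) is to hand-write the iterative equivalent of the by-value version and compare its codegen; if it produces the same tight code as the first snippet, the problem is specifically that the recursive call is not being turned into a loop:

```rust
// Iterative equivalent of the by-value `middle`: strip one element from each
// end until at most one remains, then return that element by value.
pub fn middle_iter(mut xs: &[u32]) -> Option<u32> {
    while xs.len() >= 2 {
        xs = &xs[1..xs.len() - 1];
    }
    xs.first().copied()
}
```

An even-length slice ends up empty (`None`), an odd-length one ends at its middle element, matching the match arms above.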
I-slow,C-enhancement,A-codegen,T-compiler,A-slice-patterns,WG-llvm,C-optimization
low
Critical
553,744,774
ant-design
Affix should be stackable
- [x] I have searched the [issues](https://github.com/ant-design/ant-design/issues) of this repository and believe that this is not a duplicate. ### What problem does this feature solve? If there are multiple Affix components on the page, they should be stackable and not overlap each other. ### What does the proposed API look like? add a property: stackable={true | false} <!-- generated by ant-design-issue-helper. DO NOT REMOVE -->
πŸ’‘ Feature Request,Inactive
low
Minor
553,772,967
pytorch
LibTorch operates very slowly on data blobs from GPU
## πŸ› Bug <!-- A clear and concise description of what the bug is. --> If `dArray` is a pointer to GPU global memory, you can create a tensor using that poiner as follows: ```C++ 1. const size_t N = 10000000; 2. cudaMalloc(&dArray, N * sizeof(float)); 3. auto options = torch::TensorOptions().dtype(torch::kFloat32).device(torch::kCUDA); 4. auto torchTensor = torch::from_blob(dArray, torch::IntList(kNumElems), options); 5. torchTensor += 1; ``` Creating the tensor in line 4 takes 1808 us and increasing its value in line 5 takes **3168169** us. The latter value is very large. This probably indicates that there is a considerable overhead involved when we use `from_blob` in the aforementioned fashion. I downloaded binary version of LibTorch from the website and used that to make the C++/CUDA project. The relevant flags in my `CMakeLists.txt` are: ```cmake set(CMAKE_CXX_STANDARD 14) set(CMAKE_CXX_STANDARD_REQUIRED ON) set(CMAKE_CUDA_STANDARD 14) set(CMAKE_CUDA_STANDARD_REQUIRED ON) set(DEFAULT_BUILD_TYPE "Release") set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${TORCH_CXX_FLAGS}") ``` ## Environment ``` - LibTorch Version: 1.4.0 - OS (e.g., Linux): Ubuntu 18.04 - How you installed PyTorch (`conda`, `pip`, source): Downloaded the binary from pytorch.org - Build command you used (if compiling from source): N/A - Python version: N/A - 3.7.4 - CUDA/cuDNN version: CUDA: 10.0, cuDNN: 7.6.5 - GPU models and configuration: TITAN V - Any other relevant information: cc @ngimel @VitalyFedyunin @mruberry
module: performance,module: cuda,triaged
low
Critical
553,803,722
pytorch
Tensor.random_ is not implemented for bool on CUDA(but implemented on CPU)
``` In [1]: import torch In [2]: a = torch.empty((3, 3), dtype=torch.bool, device='cpu') In [3]: a.random_() Out[3]: tensor([[False, True, False], [False, True, True], [False, True, False]]) In [4]: a = torch.empty((3, 3), dtype=torch.bool, device='cuda') In [5]: a.random_() --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-5-7b7864c4b98d> in <module> ----> 1 a.random_() RuntimeError: "random_cuda_range_calc" not implemented for 'Bool' ``` cc @ngimel
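Until a CUDA bool kernel lands, one workaround (a sketch, assuming a uniform 50/50 fill is what's wanted; `randint` does have CUDA kernels for integer dtypes) is to draw {0, 1} integers and cast:

```python
import torch

# Fall back to CPU when no GPU is present so the sketch runs anywhere.
device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Draw uniform {0, 1} integers, then cast to bool.
a = torch.randint(0, 2, (3, 3), device=device).bool()
print(a.dtype)  # torch.bool
```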
module: cuda,triaged,module: random
low
Critical
553,810,287
pytorch
TorchScript C++ API Tracking Issue
Things we need to clean up in the TorchScript C++ API ## πŸš€ Feature - [ ] torch::save and torch::load should work on both C++ serialized models and data created from torch.save/torch.load - [ ] Simple constructors for Tuple, List, and Dict in the torch namespace: `torch::Tuple(<vector of IValues>)`, `torch::List(std::vector<T>)`, etc. should work as expected - [ ] Header file reorganization to make it clearer what is part of the public API (e.g. torch.h should contain declarations for things implemented in the torch:: namespace). @suo @driazati cc @suo
oncall: jit,triaged
low
Minor
553,813,233
flutter
attempting to encode an image causes widget test to hang
## Steps to Reproduce 1. Run `flutter create bug`. 2. Update the files as follows: replace the contents of `test/widget_test.dart` with: ``` import 'dart:ui'; import 'package:flutter/material.dart'; import 'package:flutter/rendering.dart'; import 'package:flutter_test/flutter_test.dart'; void main() { testWidgets('test image encoding', (WidgetTester tester) async { final GlobalKey _globalKey = GlobalKey(); await tester.pumpWidget( Directionality( child: RepaintBoundary( key: _globalKey, child: Text('Test'), ), textDirection: TextDirection.ltr, ), ); expect(find.byType(RepaintBoundary), findsOneWidget); final b = _globalKey.currentContext.findRenderObject() as RenderRepaintBoundary; final image = await b.toImage(); await image.toByteData(format: ImageByteFormat.png); }); } ``` 3. Run `flutter test` **Expected results:** test completes successfully **Actual results:** test hangs, never times out. I can see it's hanging on the last line, as removing that line allows the test to complete successfully. <details> <summary>Logs</summary> ``` flutter doctor Doctor summary (to see all details, run flutter doctor -v): [βœ“] Flutter (Channel master, v1.14.4-pre.25, on Linux, locale en_AU.UTF-8) [βœ“] Android toolchain - develop for Android devices (Android SDK version 29.0.2) [βœ“] Android Studio (version 3.5) [βœ“] VS Code (version 1.41.1) [βœ“] Connected device (1 available) ``` NOTE: I also see the same results on stable v1.12.13+hotfix.5, which is where I first noticed this issue. </details>
a: tests,engine,a: images,has reproducible steps,P2,found in release: 3.3,found in release: 3.6,team-engine,triaged-engine
low
Critical
553,837,227
vscode
Soft-wrap JSON following a newline \n in a string
Given JSON like the following I would like to be able to opt-into soft-wrapping following any `\n` within a JSON string. Such that this: ![image](https://user-images.githubusercontent.com/452414/72941037-3fb6f900-3d2d-11ea-9012-556ada28dacc.png) Would render in the text editor similar to this when the feature is enabled: ![image](https://user-images.githubusercontent.com/452414/72941000-257d1b00-3d2d-11ea-9e18-b81bfceb0eef.png) This feature could be enabled or disabled independent of the existing word wrap feature.
feature-request,editor-wrapping
medium
Critical
553,871,397
TypeScript
Typescript error highlighting not working with project references
*TS Template added by @mjbvz* **TypeScript Version**: 3.8.0-beta **Search Terms** - composite - references - error reporting - diagnostics --- - VSCode Version: 1.41.1 - OS Version: macOS Mojave 1.14.6 Steps to Reproduce: 1. Clone this repo: https://github.com/johnfn/example 2. Open the root folder in vscode as a project 3. Open foo.ts ``` let x: number; x = "blahsdflhjh" ``` This should be underlined as an error, but it's not. Weirdly, if you add an empty .ts file in the topmost directory, everything starts working. Additionally, `tsc --watch` outputs it as an error, as expected. Does this issue occur when all extensions are disabled?: Yes
Bug
low
Critical
553,881,197
flutter
Discussion: Alternative way for TextSpan.recognizer?
Currently the _painting_ package depends on the _gestures_ package only by the last stronghold, `TextSpan`. More specifically, - `TextSpan.recognizer`. - `InlineSpanSemanticsInformation.recognizer`, which is the semantics information collected from `TextSpan.recognizer`. - `InlineSpan.recognizer`, which is deprecated and always returns null. So basically `TextSpan.recognizer`. If we can instead implement it in the _rendering_ package, then _painting_ will no longer depend _gestures_, allowing _gestures_ to import _services_ and _rendering_. This will at least make the API for the following issues a lot simpler: - Mouse cursor https://github.com/flutter/flutter/issues/31952, because currently when to change mouse cursor is defined in _gestures_, while how to change mouse cursor (method channel) is defined in _services_. - Correcting `localPosition` of mouse events https://github.com/flutter/flutter/issues/33675, because the hit test is defined in the _rendering_ package. Also, I've noticed a few issues with the current design, as some additional incentives: - `_LinkTextSpan` (which is the only reference of `TextSpan.recognizer` within Flutter) seems not to be very happy with this API - `TextSpan.describeSemantics` hard-codes the recognizable gesture recognizers, which is not friendly to custom gestures.
framework,f: gestures,a: typography,c: proposal,P2,team-framework,triaged-framework
low
Minor
553,893,007
rust
Diagnostics for mismatched generic types for constructors accessed via Self could show where the mismatch occurs
In this code, I used `Self::VariantName` to construct a new enum. However, since `Self` is treated as `Wrapper<T>`, I get an error when I try to populate `Self::There` / `Wrapper<T>::There` with a value of type `U`: ```rust enum Wrapper<T> { There(T), NotThere, } impl<T> Wrapper<T> { fn map<U>(self, f: impl FnOnce(T) -> U) -> Wrapper<U> { match self { Self::There(v) => Self::There(f(v)), Self::NotThere => Self::NotThere, } } } ``` ([Playground](https://play.integer32.com/?version=stable&mode=debug&edition=2018&gist=d2020ec808cbb3a65df21fb04012ac22)) Errors: ``` error[E0308]: mismatched types --> src/lib.rs:9:43 | 9 | Self::There(v) => Self::There(f(v)), | ^^^^ expected type parameter, found a different type parameter | = note: expected type `T` found type `U` = note: a type parameter was expected, but a different one was found; you might be missing a type parameter or trait bound = note: for more information, visit https://doc.rust-lang.org/book/ch10-02-traits.html#traits-as-parameters error[E0308]: mismatched types --> src/lib.rs:9:31 | 7 | fn map<U>(self, f: impl FnOnce(T) -> U) -> Wrapper<U> { | ---------- expected `Wrapper<U>` because of return type 8 | match self { 9 | Self::There(v) => Self::There(f(v)), | ^^^^^^^^^^^^^^^^^ expected type parameter, found a different type parameter | = note: expected type `Wrapper<U>` found type `Wrapper<T>` = note: a type parameter was expected, but a different one was found; you might be missing a type parameter or trait bound = note: for more information, visit https://doc.rust-lang.org/book/ch10-02-traits.html#traits-as-parameters ``` This threw me for a fair number of minutes, mostly spent saying "no, that function `f` takes a `T` and returns a `U`, not the other way around". Technically, the compiler *is* pointing to the entire call of `f` which should have tipped me off to realizing that it's the constructor call that was an issue. 
Interestingly, the same problem doesn't occur for structs: ```rust struct Wrapper<T>(T); impl<T> Wrapper<T> { fn map<U>(self, f: impl FnOnce(T) -> U) -> Wrapper<U> { Self(f(self.0)) } } ```
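For reference, the usual workaround for the enum version is to name the type instead of `Self`, so the constructor is free to pick up the new type parameter; a sketch:

```rust
enum Wrapper<T> {
    There(T),
    NotThere,
}

impl<T> Wrapper<T> {
    fn map<U>(self, f: impl FnOnce(T) -> U) -> Wrapper<U> {
        match self {
            // `Wrapper::There` (not `Self::There`) lets inference build a
            // `Wrapper<U>`, since `Self` is pinned to `Wrapper<T>`.
            Wrapper::There(v) => Wrapper::There(f(v)),
            Wrapper::NotThere => Wrapper::NotThere,
        }
    }
}
```

A diagnostic pointing at `Self` being aliased to `Wrapper<T>` here would likely have saved those minutes.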
C-enhancement,A-diagnostics,A-associated-items,T-compiler,D-confusing,D-papercut
low
Critical
553,904,917
go
net: document potential values for the "Op" in net.OpError
The current documentation reads: ```go // OpError is the error type usually returned by functions in the // net package. It describes the operation, network type, and address of an error. type OpError struct { // Op is the operation which caused the error, such as // "read" or "write". Op string ``` As far as I am aware the only way to determine which "Op"'s exist is by grepping the source code of the `net` package. In particular I am interested in determining whether an error with an `Op` value of `"dial"` could have ever been returned after a client sent bytes of a request to a remote HTTP server.
Documentation,NeedsInvestigation
low
Critical
553,919,874
flutter
Fuchsia Message Loop implementation is susceptible to idle wakes.
This is being temporarily reverted in https://github.com/flutter/engine/pull/15903. Once the underlying issue (described below) is patched, this patch can be re-landed. The original regression was introduced in https://github.com/flutter/engine/pull/14007. The message loop implementations may be asked to wake up multiple times at various points in the future. When the implementation is asked to wake up the thread at the new time-point, the previous request must be disregarded. Once the time-point is reached, the implementation must call RunExpiredTasksNow. In the reverted patch, the Fuchsia implementation was scheduling a task to be run in the future for each call to [`MessageLoopImpl::WakeUp`](https://github.com/flutter/engine/pull/14007/files#diff-f2adf5aa8dfbf051d4691e57e091ace9R33). This did not take into account disregarding the previous requests. [Other platforms use timer file descriptors](https://github.com/flutter/engine/blob/973cfbc6cb030526269acc8dc03d524abb199e94/fml/platform/linux/message_loop_linux.cc#L83) that are continuously re-armed to implement this functionality. In the absence of this mechanism on Fuchsia, for each task posted to the target message loop (potentially many thousands), the message loop would wake up and end up doing no work. This would eventually cause CPU usage to spike and cause actual work to be deferred.
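The required "supersede, don't queue" semantics can be sketched in isolation (illustrative only; this is not the actual FML implementation): at most one pending deadline exists, and each `WakeUp` overwrites it, matching how the timer-FD platforms re-arm a single timer.

```cpp
#include <cassert>
#include <chrono>
#include <optional>

using Clock = std::chrono::steady_clock;

// Minimal model of a re-armed wake-up: one pending deadline at most, and
// every WakeUp() replaces the previous request instead of queueing a task.
class TimerArm {
 public:
  void WakeUp(Clock::time_point when) { deadline_ = when; }  // supersedes prior
  bool HasPending() const { return deadline_.has_value(); }
  Clock::time_point Deadline() const { return *deadline_; }
  void Fire() { deadline_.reset(); }  // the loop would RunExpiredTasksNow() here
 private:
  std::optional<Clock::time_point> deadline_;
};
```

With per-call task scheduling, thousands of posted tasks mean thousands of wakes; with this shape, only the latest deadline ever fires.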
customer: fuchsia,engine,P2,team-engine,triaged-engine
low
Minor
553,936,963
pytorch
Pytorch 1.4 does not detect gpu
## πŸ› Bug Installing pytorch 1.4 does not detect GPU, but pytorch-1.2 does work fine. ## To Reproduce Steps to reproduce the behavior: 1. Create a new environment using conda: `conda create -n py14 python=3.7` 1. Activate the conda environment `conda activate py14` 1. Install pytorch using the command `conda install pytorch -c pytorch` 1. `python -c "import torch; print(torch.cuda.is_available())"` Repeat The same sequence of steps but use pytorch 1.2 and a different environment say py12 1. Create a new environment using conda: `conda create -n py12 python=3.7` 1. Activate the conda environment `conda activate py12` 1. Install pytorch using the command `conda install pytorch=1.2 -c pytorch` 1. `python -c "import torch; print(torch.cuda.is_available())"` <!-- If you have a code sample, error messages, stack traces, please provide it here as well --> The key difference seems to be in the toolkit, cudnn version: pytorch-1.4 pulls: `pytorch/linux-64::pytorch-1.4.0-py3.7_cuda10.1.243_cudnn7.6.3_0` pytorch-1.2 pulls: `pytorch/linux-64::pytorch-1.2.0-py3.7_cuda10.0.130_cudnn7.6.2_0` ## Expected behavior <!-- A clear and concise description of what you expected to happen. --> In both cases step 4 should print True. But pytorch-1.2 prints True and pytorch-1.4 prints False ## Environment For environment py14: ``` Collecting environment information... 
PyTorch version: 1.4.0 Is debug build: No CUDA used to build PyTorch: 10.1 OS: Ubuntu 18.04.1 LTS GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0 CMake version: version 3.10.2 Python version: 3.7 Is CUDA available: No CUDA runtime version: Could not collect GPU models and configuration: GPU 0: GeForce RTX 2080 Ti GPU 1: GeForce RTX 2080 Ti Nvidia driver version: 410.78 cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.4.2 Versions of relevant libraries: [pip] numpy==1.18.1 [pip] torch==1.4.0 [conda] blas 1.0 mkl [conda] mkl 2019.4 243 [conda] mkl-service 2.3.0 py37he904b0f_0 [conda] mkl_fft 1.0.15 py37ha843d7b_0 [conda] mkl_random 1.1.0 py37hd6b4f25_0 [conda] pytorch 1.4.0 py3.7_cuda10.1.243_cudnn7.6.3_0 pytorch ``` For environment py12: ``` Collecting environment information... PyTorch version: 1.2.0 Is debug build: No CUDA used to build PyTorch: 10.0.130 OS: Ubuntu 18.04.1 LTS GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0 CMake version: version 3.10.2 Python version: 3.7 Is CUDA available: Yes CUDA runtime version: Could not collect GPU models and configuration: GPU 0: GeForce RTX 2080 Ti GPU 1: GeForce RTX 2080 Ti Nvidia driver version: 410.78 cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.4.2 Versions of relevant libraries: [pip] numpy==1.18.1 [pip] torch==1.2.0 [conda] blas 1.0 mkl [conda] mkl 2019.4 243 [conda] mkl-service 2.3.0 py37he904b0f_0 [conda] mkl_fft 1.0.15 py37ha843d7b_0 [conda] mkl_random 1.1.0 py37hd6b4f25_0 [conda] pytorch 1.2.0 py3.7_cuda10.0.130_cudnn7.6.2_0 pytorch ``` cc @ngimel
module: cuda,triaged
low
Critical
553,938,875
opencv
Please make official OpenCV bindings for R
To whomsoever it may concern, As an avid R user, I would humbly request that OpenCV.org create official OpenCV bindings for R. R users like me currently have to migrate to Python to use OpenCV to its full potential. It is a very humble request; please consider making this possible. Thanks Best Regards
feature,category: build/install
low
Minor
553,954,609
rust
Add sanity checking for query keys
We frequently see issues where a query parameter ends up containing something that it's not supposed to (an inference variable, a placeholder region, etc): https://github.com/rust-lang/rust/issues/68477 and https://github.com/rust-lang/rust/issues/64964 are recent examples. Currently, incremental compilation must be enabled to see these crashes, since they only occur when we try to hash the 'bad' type. This presents a number of issues: 1. The playground can't be used to reproduce them, since it (rightly) disables incremental compilation. 2. We may miss these kinds of issues when invoking `rustc` directly (e.g. the `ui` test suite), since `-C incremental=1` is usually not passed. 3. The panic message isn't very helpful - in particular, it doesn't show the original type being hashed. I think it would be useful to add a `sanity_check` method to `Key`, which would verify that the value is sane (e.g. no inference variables or placeholder regions) regardless of whether or not incremental compilation is enabled.
C-cleanup,T-compiler
low
Critical
553,977,664
godot
ImageTexture.create_from_image RGB8 black on Android
**Godot version:** 3.2 RC 2 3.2 RC 2 Mono 3.2.3 GLES2, GLES3 **OS/device including version:** Linux Mint 19.3 Exported -> Working as expected Google Pixel 3a -> Texture is broken and black **Issue description:** Any image, loaded or created dynamically with the format RGB8 (Possibly others) will show up black on Android. **Android** ![image](https://user-images.githubusercontent.com/14253836/72963623-6ac93900-3d7d-11ea-8a4d-4797e61c7fba.png) **Desktop** ![image](https://user-images.githubusercontent.com/14253836/72963684-7d437280-3d7d-11ea-8c60-c7f85f102690.png) **Steps to reproduce:** Display an image created like so: ``` var img = Image.new() img.create(64, 64, false, Image.FORMAT_RGB8) img.fill(Color.red) img.lock() print(img.get_pixel(5, 5)) # This prints correctly on all platforms texture = ImageTexture.new() texture.create_from_image(img) ``` **Minimal reproduction project:** [Test2d.zip](https://github.com/godotengine/godot/files/4101743/Test2d.zip)
bug,platform:android,topic:rendering,confirmed
medium
Critical
554,002,781
pytorch
Bug in add_hparams functionality for tensorboard (scatter plot matrix view)
## πŸ› Bug The add_hparams() functionality in torch.utils.tensorboard doesn't keep the type of the hyperparameters when writing them to the tensorboard file. As a result, the hparams are ordered in an alphabetic order instead of a numeric one (if the hparam type is float, for instance). Another resulting behavior is that tensorboard produces a single tick for each hparam value on the x-axis. ## To Reproduce Steps to reproduce the behavior: 1. Just use the example provided in the docs but change the range to 15, i.e.: from torch.utils.tensorboard import SummaryWriter with SummaryWriter() as w: for i in range(15): w.add_hparams({'lr': 0.1*i, 'bsize': i}, {'hparam/accuracy': 10*i, 'hparam/loss': 10*i}) 2. Start tensorboard and go to the hparams section. Then go to the scatter plot matrix view and take a look at the x-axis of bsize. ## Environment Collecting environment information... PyTorch version: 1.4.0 Is debug build: No CUDA used to build PyTorch: 10.1 OS: Microsoft Windows 10 Pro GCC version: (x86_64-posix-seh-rev0, Built by MinGW-W64 project) 8.1.0 CMake version: Could not collect Python version: 3.6 Is CUDA available: No CUDA runtime version: No CUDA GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA Versions of relevant libraries: [pip] numpy==1.18.1 [pip] numpydoc==0.9.2 [pip] torch==1.4.0 [pip] torchvision==0.5.0 [conda] blas 1.0 mkl [conda] mkl 2019.4 245 [conda] mkl-service 2.3.0 py36hb782905_0 [conda] mkl_fft 1.0.15 py36h14836fe_0 [conda] mkl_random 1.1.0 py36h675688f_0 [conda] pytorch 1.4.0 py3.6_cuda101_cudnn7_0 pytorch [conda] torch 1.0.1 pypi_0 pypi [conda] torchvision 0.2.2.post3 pypi_0 pypi ## Additional context ![image](https://user-images.githubusercontent.com/60213215/72966813-15624b80-3dc0-11ea-81a1-0d8c6712921b.png)
module: tensorboard,oncall: visualization
low
Critical
554,010,806
next.js
TypeError: handler is not a function only when deployed to firebase but not when serve
# Examples bug report ## with-firebase-hosting ## Error: > handler is not a function - Downloaded the sample using the instructions in the description and ensured everything works - Integrated a custom `express` server to handle redirection, see https://github.com/jojonarte/with-express-firebasehosting-next - Tested the application with `npm run serve`; everything in the application works - Tested with `npm run deploy`; deployment was successful, however opening the hosting URL results in `Error: could not handle the request` and in the firebase functions log `TypeError: handler is not a function at cloudFunction (/srv/node_modules/firebase-functions/lib/providers/https.js:57:9) at /worker/worker.js:783:7 at /worker/worker.js:766:11 at _combinedTickCallback (internal/process/next_tick.js:132:7) at process._tickDomainCallback (internal/process/next_tick.js:219:9)` ## To Reproduce Steps to reproduce the behavior, please provide code snippets or a repository: 1. Go to https://github.com/jojonarte/with-express-firebasehosting-next 2. Follow the manual download instructions 3. Follow the firebase setup instructions 4. `npm install` 5. `npm run serve` should make the app work 6. `npm run deploy` 7. View the URL ## Expected Behavior It should build and deploy the cloud function and hosting properly to firebase, and the error shouldn't appear, just like when using `npm run serve` ## System information - OS: macOS - Version of Next.js: "^9.1.7"
good first issue,examples
low
Critical
554,015,783
vscode
[css] text-underline-position value completion is wrong
https://developer.mozilla.org/en-US/docs/Web/CSS/text-underline-position ![image](https://user-images.githubusercontent.com/4033249/72968813-90c5fc00-3dc4-11ea-8958-0481f94e1681.png) `under` should be there.
bug,css-less-scss
low
Minor
554,029,248
pytorch
addition of attention based techniques to pytorch
## πŸš€ Feature Stand-alone self-attention and attention-augmented convolution networks perform better than standard convolution networks in image classification experiments; I think these two should be added to PyTorch. ## Motivation https://arxiv.org/abs/1904.09925 https://arxiv.org/abs/1906.05909 ## Pitch Something like: nn.AugmentedConv() nn.StandAloneAttention() cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @zhangguanheng66
module: nn,module: convolution,triaged,function request
low
Minor
554,060,312
TypeScript
Support `@extends` tag for ES5-style classes
**TypeScript Version:** 3.7.5 and 3.8.0-dev.20200123 **Search Terms:** extends not attached to a class I'm currently evaluating whether TypeScript can be used to type-check old Closure Compiler projects which use JSDoc annotations for typing, and to generate *.d.ts files for them so the old projects can be easily used within new TypeScript projects. But unfortunately the `@extends` annotation is not working as expected. I would understand if TypeScript simply didn't support it (then this would be a feature request), but the error message **error TS8022: JSDoc '@extends' is not attached to a class** thrown by the compiler suggests that there is some support for it, yet it doesn't work as expected. Please note that the real class inheritance code which actually extends class `Sub` from `Base` is omitted in the code example below because it is irrelevant to the compiler. 
**Code** ```ts // Compile with: tsc --checkJS --allowJS --outDir out test.js /** * @constructor * @class */ function Base() {} Base.prototype.foo = function() {} /** * @constructor * @class * @extends {Base} */ function Sub() {} Sub.prototype.bar = function() {} ``` **Expected behavior:** TypeScript should recognize type `Sub` to be a class which extends class `Base`. **Actual behavior:** Compilation fails with this error: ``` $ tsc --checkJS --allowJS --outDir out test.js test.js:14:10 - error TS8022: JSDoc '@extends' is not attached to a class. 14 function Sub() {} ~~~ Found 1 error. ```
Suggestion,Awaiting More Feedback
low
Critical
554,126,929
godot
Memory leak(?) when using "Create CollisionPolygon2D Sibling"
**Godot version:** 3.1.2 stable win64 **OS/device including version:** Windows 10 **Issue description:** I was playing around with the "Create CollisionPolygon" sprite tool. I generated one with lots of points, so Godot and my game got very laggy. (I did some tests before with more simplification and my program worked fine.) Then I went back to the more simplified version of that CollisionPolygon, but when running my game it was very slow. The game was using around 500 MB of memory. I was super lost, because it had worked fine and now it was laggy. I restarted my PC and the game went back to normal (around 25-30 MB of memory usage), so I'm pretty sure this is a memory leak. **Steps to reproduce:** From a sprite, use "Create CollisionPolygon" (I have the Spanish version; it says something like "create sibling of (or for) CollisionPolygon2D", I'm not sure about the exact name in English); then for simplification use the default value (2). Attach some movement code to that sprite so you can see if it lags. (Spoiler alert: it doesn't.) Now generate that CollisionPolygon again, but for simplification use some ridiculous value like 0.3; run the game, and it's going to be laggy (if not, use a lower value). Then generate it again with the first value (2), run the game, and you will see it lags a lot. **Minimal reproduction project:** I really don't have time to make a demo file right now, sorry!
bug,topic:core,confirmed
low
Major
554,128,084
go
net/mail: add helpers for msg-id
[RFC 5322 section 3.6.4](https://tools.ietf.org/html/rfc5322#section-3.6.4) defines message identification fields which require parsing (e.g. they can contain CFWS). `net/mail` already provides helpers for `Date` and address lists. Would some helpers for `Message-ID` and message identifier lists be a welcome addition? Ref https://github.com/emersion/go-message/pull/70
NeedsInvestigation,FeatureRequest
low
Minor
554,146,412
rust
Severe slowdown when wrapping a [MaybeUninit<T>; N] in a struct
Consider the following function: ```rust #![feature(maybe_uninit_extra)] use std::{mem::MaybeUninit, ops::Range}; const N: usize = 10_000; const RANGE: Range<u32> = 2500..7500; fn foo() -> u32 { unsafe { let mut array = MaybeUninit::<[MaybeUninit<u32>; N]>::uninit().assume_init(); let mut len = 0; for value in RANGE { array.get_unchecked_mut(len).write(value); len += 1; } (0..len).map(|i| array.get_unchecked(i).read()).sum() } } ``` This runs as fast as I would expect. But if I put `array` and `len` in a struct, like this: ```rust struct S { array: [MaybeUninit<u32>; N], len: usize, } pub fn bar() -> u32 { unsafe { let mut s = S { array: MaybeUninit::uninit().assume_init(), len: 0, }; for value in RANGE { s.array.get_unchecked_mut(s.len).write(value); s.len += 1; } (0..s.len).map(|i| s.array.get_unchecked(i).read()).sum() } } ``` This runs about 15x as slowly with the 2020-01-22 nightly toolchain, even with ``` [profile.bench] lto = true codegen-units = 1 ``` (although these settings didn't change anything). This difference can be observed with much smaller values of `N`, too, and blackboxing values didn't make a difference. [Playground with benchmarks](https://play.rust-lang.org/?version=nightly&mode=release&edition=2018&gist=ece1d1e8eea46b962166a27325fb60f8)
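For anyone reproducing without nightly (the snippets above need `maybe_uninit_extra`), here is the first function rewritten against the since-stabilized `MaybeUninit::write` and `assume_init_read`; a sketch that keeps the same shape but uses plain indexing instead of `get_unchecked`:

```rust
use std::mem::MaybeUninit;
use std::ops::Range;

const N: usize = 10_000;
const RANGE: Range<u32> = 2500..7500;

pub fn foo_stable() -> u32 {
    // SAFETY: an array of `MaybeUninit` is allowed to be uninitialized.
    let mut array: [MaybeUninit<u32>; N] =
        unsafe { MaybeUninit::uninit().assume_init() };
    let mut len = 0;
    for value in RANGE {
        array[len].write(value);
        len += 1;
    }
    // SAFETY: the first `len` elements were just initialized above.
    (0..len).map(|i| unsafe { array[i].assume_init_read() }).sum()
}
```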
I-slow,C-enhancement,A-codegen,T-compiler
low
Major
554,170,999
flutter
Inconsistency in the documentation about MainActivity content for v2 embedding enabled
The documentation seems inconsistent about the content of the `MainActivity` class of an app created with android embedding v2 enabled (default). 1. https://github.com/flutter/flutter/wiki/Upgrading-pre-1.12-Android-projects says to replace the `onCreate` method with a `configureFlutterEngine` method 2. https://flutter.dev/docs/development/packages-and-plugins/plugin-api-migration says to keep the class empty because plugins are registered automatically The default template when creating an app with v2 enabled (the default) has the `configureFlutterEngine` method present. That seems to generate this kind of issue: https://github.com/FirebaseExtended/flutterfire/issues/1669 because plugins are registered twice. Keeping the class empty seems to fix that. Is `configureFlutterEngine` useful?
platform-android,engine,d: wiki,P2,team-android,triaged-android
low
Minor
554,184,271
pytorch
Batched torch.eig() and gradient of torch.eig() for real eigenvalues
## πŸš€ Feature I propose to implement batched evaluation and gradient calculation (backward) for `torch.eig` when the eigenvalues are all real. ## Motivation The case where the matrix is real, non-symmetric, and has real eigenvalues has quite a lot of applications; for example, for a real invertible matrix `P` and a symmetric matrix `A`, the matrix `B=PAP^(-1)` still has real eigenvalues and eigenvectors. Also, it seems that `torch.eig` is not as developed as its cousin, `torch.symeig`, in terms of features. ## Pitch `torch.eig()` should work as it does now, and it should be able to propagate the gradient if all the eigenvalues are real. If there is a complex eigenvalue, it should raise a `RuntimeError`, just like `.backward()` does at the moment. ## Additional context I have a rough implementation of the differentiable `eig` function for real eigenvalues here: https://gist.github.com/mfkasim1/60e1c1a7b599f163c594dd183c3d4507 cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @vincentqb @vishwakftw @jianyuh @mruberry @heitorschueroff @SsnL
module: autograd,triaged,module: batching,module: linear algebra
low
Critical
554,187,760
rust
Compile error in main code when doc-testing a crate that has the same name as a module
In a crate that has the same name as a top-level module, exporting a name from this module causes rustdoc (and only rustdoc) to error when it is run. ```rust // Crate name: `this_crate`. mod this_crate { pub struct Item; } pub use this_crate::Item; ``` <details> <summary>error[E0659]: `this_crate` is ambiguous</summary> <pre> > cargo test --doc Finished test [unoptimized + debuginfo] target(s) in 0.00s Doc-tests this_crate error[E0659]: `this_crate` is ambiguous (name vs any other name during import resolution) --> /tmp/test_dir/no-alloc/src/lib.rs:7:9 | 7 | pub use this_crate::Item; | ^^^^^^^^^^ ambiguous name | = note: `this_crate` could refer to a crate passed with `--extern` = help: use `::this_crate` to refer to this crate unambiguously note: `this_crate` could also refer to the module defined here --> /tmp/test_dir/no-alloc/src/lib.rs:1:1 | 1 | mod this_crate { pub struct Item; } | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ = help: use `crate::this_crate` to refer to this module unambiguously error: aborting due to previous error For more information about this error, try `rustc --explain E0659`. running 0 tests test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out &gt; echo $? 0 </pre> </details>
T-rustdoc,C-bug,A-doctests
low
Critical
554,247,999
react
Bug: styles object using css variables and both a shorthand and a specific property renders incorrectly
React does not produce the correct css inline styles when using css variables for both the shorthand property and another specific one (like `padding` and `paddingRight`). The styles object: ```js { padding: "calc(var(--spacing) * 1)", paddingRight: "calc(var(--spacing) * 3)", paddingBottom: "calc(var(--spacing) * 4)" }; ``` produces the following styles: ![image](https://user-images.githubusercontent.com/23476208/72995030-1d39e400-3df0-11ea-9235-0e6ad00718b2.png) and the following html: ```html <span style="padding-top: ; padding-right: calc(var(--spacing) * 3); padding-bottom: calc(var(--spacing) * 4); padding-left: ;">App</span> ``` even though the computed properties tab of the dev-tools appear to be correct and the padding is properly rendered in the screen: ![image](https://user-images.githubusercontent.com/23476208/72995225-6e49d800-3df0-11ea-9770-98f062008ca3.png) If I remove the css-variable, everything works as expected. **React version**: From v15.0.0 to 16.12.0 _Note_: Below v15.0.0 the styles are correctly produced: ```html <span style="padding:calc(var(--spacing) * 1);padding-right:calc(var(--spacing) * 3);padding-bottom:calc(var(--spacing) * 4);">App</span> ``` ## Steps To Reproduce 1. Add a style object to a component that has both a property shorthand and a specific one (like `padding` and `paddingRight`) and uses a css variable (like `var(--spacing)`. 2. Render that component and inspect using dev-tools. 
Link to code example: https://codesandbox.io/s/heuristic-wood-bjr1y styles object: ```js { padding: "calc(var(--spacing) * 1)", paddingRight: "calc(var(--spacing) * 3)", paddingBottom: "calc(var(--spacing) * 4)" }; ``` ## The current behavior React does not produce the correct css inline styles when using css variables for both the shorthand property and another specific one: ```html <span style="padding-top: ; padding-right: calc(var(--spacing) * 3); padding-bottom: calc(var(--spacing) * 4); padding-left: ;">App</span> ``` ## The expected behavior Inline styles using css variables for both a shorthand and a specific property should produce the correct styles. ```html <span style="padding: calc(var(--spacing) * 1); padding-right: calc(var(--spacing) * 3); padding-bottom: calc(var(--spacing) * 4);">App</span> ```
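Until the serializer is fixed, one workaround is to never hand React a style object that mixes a shorthand with its longhands. A hypothetical helper (not part of React) that expands the shorthand first:

```javascript
// Expand the `padding` shorthand into its four longhands, letting any
// explicit longhands (paddingRight, ...) win. The resulting object contains
// only longhand properties, so React never has to reconcile a shorthand
// holding a CSS variable with a more specific property.
function expandPadding(style) {
  const { padding, ...rest } = style;
  if (padding === undefined) return style;
  return {
    paddingTop: padding,
    paddingRight: padding,
    paddingBottom: padding,
    paddingLeft: padding,
    ...rest, // specific longhands override the expanded shorthand
  };
}

const expanded = expandPadding({
  padding: "calc(var(--spacing) * 1)",
  paddingRight: "calc(var(--spacing) * 3)",
  paddingBottom: "calc(var(--spacing) * 4)",
});
console.log(expanded.paddingLeft); // calc(var(--spacing) * 1)
```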
Component: DOM,Type: Discussion
low
Critical
554,260,347
godot
viewport texture transparency glitch if use panorama_sky from this viewport.
**Godot version:** 3.2 rc2 (3.1.2 works correctly) **OS/device including version:** Windows 10 64-bit **Issue description:** If I use a viewport for `panorama_sky` in a WorldEnvironment node, and also use this viewport as a sprite texture, then the sprite's transparency changes as its size changes. It should not be like that. **Steps to reproduce:** See the video: https://youtu.be/Ff2rcxoZLZ8 You can comment/uncomment the first line of the script and change the sprite size in the editor. **Minimal reproduction project:** attached. [error.zip](https://github.com/godotengine/godot/files/4104037/error.zip)
bug,topic:rendering,confirmed
low
Critical
554,265,387
create-react-app
Option to override Chrome when starting dev server
### Is your proposal related to a problem? From the docs: > By default, Create React App will open the default system browser, favoring Chrome on macOS. This doesn't _really_ make sense; it either opens the default browser or it doesn't. For example, if: 1. I have Firefox and Chrome open 2. Firefox is my default browser 3. I run `npm start` 4. CRA will open the app in Chrome ### Describe the solution you'd like I think what `openChrome.applescript` does is amazing for people who use Chrome. It's logical that this should be the default. What I propose is allowing the user to bypass `openChrome.applescript` with an environment variable, similar to what y'all are doing with `BROWSER=none`. I already have working changes that I could submit as a PR. It: 1. Allows users to specify `BROWSER=default` in `.env` 2. Sets `shouldTryOpenChromeWithAppleScript` to false if `BROWSER` is `default` 3. Resets `BROWSER` to `undefined` so the rest of `openBrowser.js` can do its thing (opening in the default browser) ### Describe alternatives you've considered I thought about changing `openChrome.applescript` to make it do the same thing with Firefox as it does with Chrome; unfortunately Firefox's AppleScript support isn't as good as Chrome's. The only other thought I had was to modify `shouldTryOpenChromeWithAppleScript` to check what the OS default browser is, but I'm not sure if that's possible. I figure my suggestion would be the least invasive, allowing people to opt in to this behavior. ### Additional context Not really sure if there's anything else. Let me know if you'd like me to submit this as a PR.
issue: proposal,needs triage
low
Minor
554,300,287
flutter
[camera] give computeBestCaptureSize option
I have been trying to take photos at maximum resolution with the Flutter camera plugin; however, the resolution of the captured images never matches the best capture size available through the Camera2 API, regardless of the ResolutionPreset parameter. Looking at the code, I found that the capture size is always derived from the best CamcorderProfile for the ResolutionPreset [ packages/camera/android/src/main/java/io/flutter/plugins/camera/Camera.java](https://github.com/flutter/plugins/blob/master/packages/camera/android/src/main/java/io/flutter/plugins/camera/Camera.java) ```java recordingProfile = CameraUtils.getBestAvailableCamcorderProfileForResolutionPreset(cameraName, preset); captureSize = new Size(recordingProfile.videoFrameWidth, recordingProfile.videoFrameHeight); previewSize = computeBestPreviewSize(cameraName, preset); ``` The capture size should be the best available resolution for capturing images, like this: ```java captureSize = CameraUtils.computeBestCaptureSize(streamConfigurationMap); ``` Or give the plugin an option to select a resolution for the capture. In fact, `computeBestCaptureSize` is implemented but never used in the current version of the camera plugin.
c: new feature,p: camera,package,c: proposal,team-ecosystem,P3,triaged-ecosystem
low
Minor
554,300,524
flutter
Flutter dev server breaks seekTo() in video_player when playing from assets.
When testing an audio-only (MP3) file, VideoPlayerController's seekTo(Duration position) always resets the current position to zero when targeting web and using a Chrome client. It works correctly on an AVD.
c: new feature,tool,a: video,platform-web,p: video_player,package,P3,team-web,triaged-web
medium
Major
554,340,429
TypeScript
Jsdoc @this show incorrect type in inherited method
**TypeScript Version:** 3.7.3 **Search Terms:** @this Jsdoc factory **Code** ```ts // ts class Base { constructor(obj = {}) {} static create<T extends typeof Base> (this: T, obj = {}): InstanceType<T> { return Reflect.construct(this, [obj]) } } class Extended extends Base { constructor(obj) { super(obj) } } const b = Base.create() const e = Extended.create() // js with jsdoc class Base { constructor(obj = {}) {} /** * @template {typeof Base} T * @this T * @param {any} obj * @returns {InstanceType<T>} */ static create(obj) { return Reflect.construct(this, [obj]) } } class Extended extends Base { constructor(obj) { super(obj) } } const b = Base.create() const e = Extended.create() ``` **Expected behavior:** The consts `b` and `e` should have types `Base` and `Extended` respectively. **Actual behavior:** When the JSDoc `@this` is used, in the JavaScript version both `b` and `e` are of type `Base`. The TS version shows the correct types. **Playground Link:** **Related Issues:**
Bug
low
Critical
554,345,155
pytorch
[jit] Use `typing.get_type_hints` instead of parsing types manually
This is a weird case, but it illustrates the problem (copied from #16492): ```python import torch import typing from typing import List, Tuple Tuple = {torch.Tensor : int} def g() -> Tuple[torch.Tensor]: return 2 print(typing.get_type_hints(g)) g = torch.jit.script(g) print(g.graph) ``` The problem is that we're parsing types manually instead of using Python to resolve them. #29623 kind of fixes this in that it calls out to Python to resolve types, but we shouldn't even be parsing types in the first place, since it leads to many paths to resolve types that can be complicated to reason about (e.g. special cases in `script_type_parser.cpp` and `annotations.py`) This has been brought up before in #29094 cc @suo
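For reference, this is the behavior the issue wants to lean on: `typing.get_type_hints` asks Python itself to resolve the annotations, so shadowed names resolve exactly as they would at runtime. A minimal sketch:

```python
# typing.get_type_hints evaluates annotations in the function's own globals,
# so whatever a name means at that point is what the hint resolves to --
# there is no separate string-parsing pass that can disagree with Python.
import typing
from typing import List

def g(xs: List[int]) -> int:
    return sum(xs)

hints = typing.get_type_hints(g)
print(hints)
```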
oncall: jit,triaged
low
Minor
554,360,167
godot
Editing values inside a sub-inspector cannot be undone if the sub-inspector gets hidden
Godot 3.2 rc1 I was editing a Theme, then unfolded a StyleBox inside it, from which I modified a few `expand_margin_*` properties. Then I wanted to undo what I'd done, but Ctrl+Z had no effect. This might be reproducible with any nested inspector. 1) Open this `theme.tres` in the inspector: [theme.zip](https://github.com/godotengine/godot/files/4105047/theme.zip) 2) Open the `panel` style, set Expand Margin Left to 4 3) Edit another resource 4) Click on the Previous button of the inspector (or double-click again on the theme) 5) Use Ctrl+Z: it does nothing. Step variants: 3') Close the `panel` style by clicking on it, which folds it. 4') Click again on the `panel` style to unfold it It used to be a more immediate issue in https://github.com/godotengine/godot/issues/23231, which was fixed, but as soon as you edit something else or even just fold the nested inspector, history is lost again. That said, I understand that using Ctrl+Z to undo something you can't see might be confusing, so maybe it could re-show the resource when doing so. Note: my original use case was to revert the theme (or parts of it) to its *saved values* (not the default values that the circle-arrow buttons restore), because I was just fiddling with it to test a few things. However, I could not find a way to do this without restarting the whole editor.
bug,topic:editor,confirmed
low
Minor
554,366,899
rust
Tracking issue for RFC 2700: numeric constants as associated consts
This is a tracking issue for [the RFC 2700](https://github.com/rust-lang/rfcs/blob/master/text/2700-associated-constants-on-ints.md) (rust-lang/rfcs#2700): "Deprecate stdlib modules dedicated to numeric constants and move those constants to associated consts". **Steps:** - [x] Add new constants (see #68325) - [x] Stabilize new constants ([see instructions on rustc-guide][stabilization-guide]) - [x] Update test suite to use new constants (#78380) - [x] Fix error messages using old symbols (#78382) - [x] Support for indeterminate deprecation dates (#78381) - [x] Deprecate-in-future the old items - [ ] Fully deprecate the old items - [x] Adjust documentation ([see instructions on rustc-guide][doc-guide]) [stabilization-guide]: https://rust-lang.github.io/rustc-guide/stabilization_guide.html#stabilization-pr [doc-guide]: https://rust-lang.github.io/rustc-guide/stabilization_guide.html#documentation-prs **Unresolved questions:** - [x] [Resolved: [Yes](https://github.com/rust-lang/rust/issues/68490#issuecomment-747022696)] Should the old items be deprecated? See the RFC thread as well as ["unresolved questions"](https://github.com/rust-lang/rfcs/blob/master/text/2700-associated-constants-on-ints.md#unresolved-questions): > How long should we go before issuing a deprecation warning? At the extreme end of the scale we could wait until the next edition of Rust is released, and have the legacy items only issue deprecation warnings when opting in to the new edition; this would limit disruption only to people opting in to a new edition (and, being merely an trivially-addressed deprecation, would constitute far less of a disruption than any ordinary edition-related change; any impact of the deprecation would be mere noise in light of the broader edition-related impacts). 
However long it takes, it is the opinion of the author that deprecation should happen eventually, as we should not give the impression that it is the ideal state of things that there should exist three ways of finding the maximum value of an integer type; we expect experienced users to intuitively reach for the new way proposed in this RFC as the "natural" way these constants ought to be implemented, but for the sake of new users it would be a pedagogical wart to allow all three to exist without explicitly calling out the preferred one. - [x] [Resolved: [No](https://github.com/rust-lang/rust/issues/68490#issuecomment-747022696)] Should constants from `std::{f32, f64}::consts` also be made associated consts? From [the alternative question of the RFC](https://github.com/rust-lang/rfcs/blob/master/text/2700-associated-constants-on-ints.md#alternatives): > Unlike the twelve integral modules, the two floating-point modules would not themselves be entirely deprecated by the changes proposed here. This is because the `std::f32` and `std::f64` modules each contain a `consts` submodule, in which reside constants of a more mathematical bent (the sort of things other languages might put in a `std::math` module). > > It is the author's opinion that special treatment for such "math-oriented constants" (as opposed to the "machine-oriented constants" addressed by this RFC) is not particularly precedented; e.g. this separation is not consistent with the existing set of associated functions implemented on `f32` and `f64`, which consist of a mix of both functions concerned with mathematical operations (e.g. `f32::atanh`) and functions concerned with machine representation (e.g. `f32::is_sign_negative`). 
However, although earlier versions of this RFC proposed deprecating `std::{f32, f64}::consts` (and thereby `std::{f32, f64}` as well), the current version does not do so, as this was met with mild resistance (and, in any case, the greatest gains from this RFC will be its impact on the integral modules). > > Ultimately, there is no reason that such a change could not be left to a future RFC if desired. However, one alternative design would be to turn all the constants in `{f32, f64}` into associated consts as well, which would leave no more modules in the standard library that shadow primitive types. A different alternative would be to restrict this RFC only to the integral modules, leaving f32 and f64 for a future RFC, since the integral modules are the most important aspect of this RFC and it would be a shame for them to get bogged down by the unrelated concerns of the floating-point modules.
B-RFC-approved,T-libs-api,C-tracking-issue,Libs-Tracked
medium
Critical
554,375,386
pytorch
"Tried to register multiple operators with the same name and the same overload name" error is confusing
I wrote this registration: ``` auto registry2 = torch::RegisterOperators() // Some operations need to be transformed to their batched versions .op("aten::mv", torch::RegisterOperators::options() .kernel(VMapModeKey, [] (const Tensor& a, const Tensor& b) -> Tensor { return at::matmul(a, b.unsqueeze(2)).squeeze(2); })) ; ``` and it errored with: ``` Tried to register multiple operators with the same name and the same overload name but different schemas: aten::mv(Tensor _0, Tensor _1) -> (Tensor _0) vs aten::mv(Tensor self, Tensor vec) -> (Tensor) (findOrRegisterSchema_ at ../aten/src/ATen/core/dispatch/Dispatcher.cpp:64) ``` I am pretty familiar with this code, but I have no idea what to do. The schemas look the same to me!
triaged,module: dispatch,better-engineering
low
Critical
554,386,395
go
net/http: wrap more errors?
While trying to understand some net/http errors, I found that many net/http errors don't wrap their underlying errors yet (run `grep -R "fmt.Errorf.*%v" net/http` to see some). @bradfitz are you open to making the default be to wrap errors in net/http? That is, can someone do a somewhat indiscriminate pass through net/http, wrapping errors everywhere they see an opportunity to do so?
NeedsInvestigation
low
Critical
554,405,263
pytorch
MultivariateNormal.rsample: use eigen-decomposition when Cholesky fails
## πŸš€ Feature Similar to [what BoTorch did here](https://github.com/pytorch/botorch/blob/3d8976f51d3bba0f730557ed61668d343df3ff1f/botorch/sampling/qmc.py#L136-L143), for `MultivariateNormal.rsample`, shall we support [in this line](https://github.com/pytorch/pytorch/blob/master/torch/distributions/multivariate_normal.py#L149) to use the eigen-decomposition when Cholesky decomposition fails, especially when the covariance matrix is near-singular or ill-conditioned? cc @vincentqb @fritzo @neerajprad @alicanb @vishwakftw @SsnL @jianyuh
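A NumPy sketch of the proposed fallback (illustrative only, not the PyTorch implementation): try Cholesky first, and when it fails on a singular or ill-conditioned covariance, fall back to the symmetric eigendecomposition:

```python
import numpy as np

def psd_factor(cov):
    """Return L such that L @ L.T reconstructs cov (up to numerical error)."""
    try:
        # Fast path: works whenever cov is strictly positive definite.
        return np.linalg.cholesky(cov)
    except np.linalg.LinAlgError:
        # Fallback: eigendecomposition handles singular/near-singular PSD
        # matrices; negative eigenvalues from round-off are clipped to zero.
        w, v = np.linalg.eigh(cov)
        return v * np.sqrt(np.clip(w, 0.0, None))

singular = np.array([[1.0, 1.0], [1.0, 1.0]])  # rank-1: Cholesky fails
L = psd_factor(singular)
print(np.allclose(L @ L.T, singular))  # True
```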
module: distributions,triaged,enhancement,module: linear algebra
low
Major
554,439,047
flutter
Allow users to pass in Xcode build settings as env variables to "flutter build ios-framework" FLUTTER_XCODE_
This was done for all other commands via env variables with the `FLUTTER_XCODE_` prefix (like `FLUTTER_XCODE_CODE_SIGN_STYLE`) in https://github.com/flutter/flutter/pull/43553 but was missed when creating the `build ios-framework` command. Example that should work, but doesn't: ``` $ FLUTTER_XCODE_IPHONEOS_DEPLOYMENT_TARGET="9.0" flutter build ios-framework --output=. ```
platform-ios,tool,a: existing-apps,P3,team-ios,triaged-ios
low
Major
554,458,804
rust
Lint missing Copy impl/derive on fully public types.
Any public `enum`/`struct` which has only `pub` fields and isn't `#[non_exhaustive]`, can be "manually copied", i.e. a new value can always be produced by copying out the data fields (of the active variant, in the `enum` case) and passing the values back to the constructor. Therefore, we might want to lint cases in which such a type doesn't implement `Copy`, because the most likely reasons it doesn't are (assuming no `Drop` impl): * it would be fine but was forgotten * the fields weren't actually meant to be publicly accessible * this could be hinted at by phrasing the lint to express the fact that anyone can copy all the fields out and make a new value out of them **EDIT**: @dtolnay (in https://github.com/rust-lang/rust/issues/68497#issuecomment-577955798) also pointed out compile-times as a big reason to *deliberately* skip, or make opt-in, some derives. A workaround, in the opt-in case, could be, at the crate-level: ```rust #![cfg_attr(not(feature = "clone-impls"), allow(this_lint))] ``` <hr/> A good (and bad, at the same time) example is `std::ops::Range<T>`, which can be copied by simply doing `r.start..r.end` (or `Range { start: r.start, end: r.end }`, like any `struct`). It's a bad example because the current (suboptimal) convention avoids `Copy` on types implementing `Iterator` (and I suppose the lint could take that into account), but if anyone else wrote a similar struct (and it wasn't an iterator), they would probably want it to be `Copy`. cc @Manishearth @oli-obk
A-lints,T-lang,C-feature-request
low
Major
554,469,246
flutter
Auto-submit bot should set the commit message of the squashed commit
This came up in the Flutter Engine weekly about the behavior of the commit queue opted into via the "waiting for the tree to go green" label. @dnfield clarified that only the headline of the primary commit message is kept while the descriptions are discarded. Some commit authors put a [significant](https://github.com/flutter/engine/commit/ad582b5089256e56727b2d7306fb097078a3dcdc) [amount](https://github.com/flutter/engine/commit/c7d0fb787922cb60531c415ed7bf652afbaea552) of [effort](https://github.com/flutter/engine/commit/b454251f03f27ff422b2c197761493a9e3133909) into adding a descriptive commit message and keeping it updated across amendments during the review. These messages are also useful when locally reviewing patches and blames. While linking to the PR certainly works, it is one more step to follow from the commit, and it would be preferable to keep things in the Git repo instead of needing GitHub for basic code navigation tasks. As it stands, this behavior is a hindrance to the adoption of the commit queue.
team-infra,P2,triaged-infra
low
Critical
554,478,747
pytorch
Reduce RPC branches for Python/Built-inOp/TorchScript
## πŸš€ Improvement The Built-inOp and TorchScript code paths could be packed together after #32466 and #30630: consolidate `pyRpcBuiltin` with `rpcTorchscript`, and `pyRemoteBuiltin` with `remoteTorchscript`. Also, in `rpc/api.py`, `def _invoke_rpc` should have 3 branches and perhaps a better name. Overall, the goal is to refactor python_functions.h/cpp to remove the Python dependencies and extract a C++ API (probably moving the Python logic into rpc/init.cpp). rpc_sync, rpc_async, and rpc_remote should have a pure C++ API for the JIT to call when binding to the JIT. cc @suo @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @rohan-varma @xush6528 @jjlilley @osalpekar
oncall: jit,triaged,module: rpc
low
Minor
554,493,242
TypeScript
Support a file blacklist in tsserver configuration options
My scenario is this: suppose I'm working in my editor and I open a file which, by chance, crashes the language service. Repeatedly. (Usually because of its size or shape.) Today, after the service crashes 5 times in short order, it is disabled by the editor (which is what will occur if it crashes on load). What I propose is this: if the service crashes after a file open request is issued, and we would shut down the service, instead we automatically add the file to a workspace-local tsserver blacklist config (probably in `.vscode/settings.json`) and reload the language service with the new option. The language service will then refuse to actually load the blacklisted file contents, instead reporting them as empty files/empty modules/empty json documents, and potentially issue a warning-type error message in the diagnostics that certain files are blacklisted, which may affect the compilation. cc @mjbvz do you think this'd be a good idea for a slightly more progressive degradation of experience when the language service has trouble with a file?
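A purely hypothetical illustration (no such option exists today; the setting name is invented): the proposed workspace-local blacklist might end up looking something like this in `.vscode/settings.json`:

```jsonc
{
  // Hypothetical setting name -- written automatically after a crash on open.
  "typescript.tsserver.blacklistedFiles": [
    "src/generated/huge-schema.ts"
  ]
}
```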
Suggestion,In Discussion,Domain: TSServer
low
Critical
554,498,001
TypeScript
Nullish coalescing should always include the type of the right operand
``` $ ./node_modules/.bin/tsc --version Version 3.7.2 ``` **Search Terms:** **Code** A toy example would be something like this. ```ts let foo: string = ""; foo = "bar" ?? 123; ``` This one is obviously fine since `"bar"` is never nullish. However, this becomes a little bit problematic when you consider the idiom of having `Record` objects and checking their truthiness before using them. ```ts const elts = ["foo", "bar", "spam", "spam", "foo", "eggs"]; const counts: Record<string, number> = {}; for (const elt of elts) { // This really **should** raise an error. counts[elt] = counts[elt] ?? "zero"; counts[elt] += 1; } ``` **Expected behavior:** An error should be raised. **Actual behavior:** Curiously, an error is **not** raised in strict mode but **is** raised in un-strict mode. ```sh $ ./node_modules/.bin/tsc ./foo.ts foo.ts:5:3 - error TS2322: Type 'number | "zero"' is not assignable to type 'number'. Type '"zero"' is not assignable to type 'number'. 5 counts[elt] = counts[elt] ?? "zero"; ~~~~~~~~~~~ Found 1 error. ``` ```sh $ ./node_modules/.bin/tsc --strict ./foo.ts # No error, exits 0 and emits JS. 
``` **Playground Link:** [Playground Link](http://www.typescriptlang.org/play/?strictNullChecks=false&ts=3.8.0-dev.20200122#) Toggling the `strictNullChecks` config option will show the issue. **Related Issues:**
Suggestion,In Discussion
low
Critical
554,498,807
go
cmd/link: trailing padding after "Go" in ELF note name
https://www.sco.com/developers/gabi/latest/ch5.pheader.html#note_section says [emphasis added]: > namesz and name > The first namesz bytes in name contain a null-terminated character representation of the entry's owner or originator. There is no formal mechanism for avoiding name conflicts. By convention, vendors use their own name, such as XYZ Computer Company, as the identifier. If no name is present, namesz contains 0. Padding is present, if necessary, to ensure 8 or 4-byte alignment for the descriptor (depending on whether the file is a 64-bit or 32-bit object). **Such padding is not included in namesz.** It looks like we get this right for the "NetBSD" tag, where we include a single nul-terminator character within the name (as measured by namesz), but then include an extra padding zero-byte for alignment. However, for the "Go" tag, we include an extra nul-terminator within the name itself. The second nul-terminator should actually be padding. Pointed out by Mark Kettenis from OpenBSD. /cc @4a6f656c
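The rule from the gABI quote, as a small sketch: the nul terminator is counted in `namesz`, while alignment padding is emitted but not counted. Under that rule the "Go" name should serialize with `namesz == 3` plus one pad byte (for 4-byte alignment), not `namesz == 4`:

```python
def note_name_bytes(name, align=4):
    """Encode an ELF note name per the gABI: namesz counts the trailing
    NUL; any alignment padding is emitted but NOT included in namesz."""
    raw = name.encode() + b"\x00"   # null-terminated name, counted by namesz
    namesz = len(raw)
    pad = (-namesz) % align          # padding to align the descriptor
    return namesz, raw + b"\x00" * pad

print(note_name_bytes("Go"))      # (3, b'Go\x00\x00')
print(note_name_bytes("NetBSD"))  # (7, b'NetBSD\x00\x00')
```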
help wanted,NeedsFix,compiler/runtime
low
Major
554,521,619
pytorch
[feature request] Out-variant and dtype argument for torch.argmax / torch.argmin / torch.argsort (and friends)
It could save a lot of memory when the relevant axis is very thin (e.g. a 2- or 3-element axis) and the result is known to fit in a byte (an 8x memory saving). cc @heitorschueroff
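An illustrative sketch in NumPy (not the proposed PyTorch API) of the memory argument: with a 3-element reduction axis every index fits in one byte, but the default index dtype costs eight:

```python
import numpy as np

x = np.random.rand(100_000, 3)
idx_default = np.argmax(x, axis=1)        # 8-byte indices by default (64-bit)
idx_small = idx_default.astype(np.uint8)  # same values, 1 byte each

print(idx_default.nbytes // idx_small.nbytes)  # 8
```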
module: memory usage,triaged,module: sorting and selection,function request,module: reductions
low
Major