Dataset schema — each record below lists, in order: id (int64, 393k–2.82B), repo (string, 68 values), title (string, 1–936 chars), body (string, 0–256k chars), labels (string, 2–508 chars), priority (string, 3 values), severity (string, 3 values).
426,894,448
go
runtime: stop single goroutine
During debugging it would be useful to resume all goroutines except a single one, as is the case in other programming languages. In #25578 we have the request to resume a single goroutine while all others are stopped, so this would be the similar/opposite request. I'm sorry, I don't know what else I could put in this request to make it more useful. Feel free to edit it with more details. Thank you.
NeedsInvestigation,Debugging,compiler/runtime
low
Critical
426,899,836
go
cmd/compile: keep variable declaration order in debug_info
Rather than adding `DeclColumn` info to `debug_info`, it would be great to write the variables in the declaration order. This would make it easier to display the variables in a predictable and consistent order while debugging and thus better align with the source code presentation.
NeedsInvestigation,Debugging,compiler/runtime
low
Critical
426,908,858
rust
unreachability warning inconsistently provided with thread spawn
Hi, `charmander` on `#rust` discussed an interesting issue where unreachability of code was inconsistently reported when threads return `!`. We reduced it to the following example: ```rust #![feature(never_type)] fn run() -> ! { loop {} } #[allow(dead_code)] fn spawny<F>(closure: F) -> std::thread::JoinHandle<!> where F: Send + 'static + FnOnce() -> !, { std::thread::spawn(closure) } fn main() { // This one results in the println!() being marked unreachable let handle = spawny(move || run()); // This one does not // let handle = std::thread::spawn(move || run()); let _res = handle.join().unwrap(); println!("hello, world"); } ``` If the thread is spawned via the `spawny()` function, then `println!()` is marked unreachable, otherwise if spawned via an in-line `std::thread::spawn()` call, it is not. In both cases `_res` has the type `!`.
A-lints
low
Minor
426,952,925
material-ui
Implement DrawerMenuItem
<!--- Provide a general summary of the feature in the Title above --> Drawer MenuItem: https://material-components.github.io/material-components-web-catalog/#/component/drawer Demo: https://material-components.github.io/material-components-web-catalog/#/component/drawer 1. The Drawer MenuItem could capture the keyboard event. 2. The Drawer MenuItem shows the primary color when selected. 3. The Drawer MenuItem has a rounded shape. 4. The Drawer MenuItem Text is highlighted with a darker primary color. Shape: https://material.io/design/shape/about-shape.html#shape-customization-tool 1. Shape could apply to small, medium and large components. 2. This could be easily done by providing an HOC or something else to change the root class's radius, but this is not easy for TextField, especially Outlined TextField and Filled TextField. <!-- Thank you very much for contributing to Material-UI by creating an issue! ❤️ To avoid duplicate issues we ask you to check off the following list. --> <!-- Checked checkbox should look like this: [x] --> - [x] This is not a v0.x issue. <!-- (v0.x is no longer maintained) --> - [x] I have searched the [issues](https://github.com/mui-org/material-ui/issues) of this repository and believe that this is not a duplicate. ## Benchmark - https://vuetifyjs.com/en/components/lists/ - https://ant.design/components/menu/#header
new feature,design: material
low
Major
426,978,175
godot
AnimationPlayer not emitting animation_finished for queued animations
**Godot version:** 3.1.stable.official **OS/device including version:** Windows 10 **Issue description:** When several animations are queued in an AnimationPlayer, the signal `animation_finished` is not emitted when changing between animations, but `animation_started` is. ![image](https://user-images.githubusercontent.com/778778/55233045-9942ba00-5227-11e9-9ca0-c1f092726c47.png) Both signals (`animation_started` and `animation_finished`) provide the animation name, but behave differently: the first one is called for each individual animation, but the second isn't. I think they both should be called for each animation, and maybe include two new signals to indicate the global/queue start and end. Note: the Tween class has `tween_started` and `tween_completed` signals, and emits them for each individual "animation". **Minimal reproduction project:** [Test.zip](https://github.com/godotengine/godot/files/3022638/Test.zip)
discussion,topic:core
low
Major
427,099,499
react
Profiler marks
Can we expose a lighter weight set of Performance "marks" for people consuming browser Performance tracing? e.g. when a particular `Profiler` commits. This should be a lot less heavyweight than the full mark-and-measure stuff.
Type: Enhancement,Component: Developer Tools,React Core Team
low
Major
427,110,053
go
x/tools/gopls: implement rangeFormatting LSP request
<!-- Please answer these questions before submitting your issue. Thanks! --> ### What version of Go are you using (`go version`)? <pre> $ go version go version go1.12.4 linux/amd64 $ go list -m golang.org/x/tools golang.org/x/tools v0.0.0-20190428024724-550556f78a90 </pre> ### Does this issue reproduce with the latest release? Yes ### What operating system and processor architecture are you using (`go env`)? <details><summary><code>go env</code> Output</summary><br><pre> $ go env GOARCH="amd64" GOBIN="/home/myitcv/gostuff/src/github.com/myitcv/govim/.bin" GOCACHE="/home/myitcv/.cache/go-build" GOEXE="" GOFLAGS="" GOHOSTARCH="amd64" GOHOSTOS="linux" GOOS="linux" GOPATH="/home/myitcv/gostuff" GOPROXY="" GORACE="" GOROOT="/home/myitcv/gos" GOTMPDIR="" GOTOOLDIR="/home/myitcv/gos/pkg/tool/linux_amd64" GCCGO="gccgo" CC="gcc" CXX="g++" CGO_ENABLED="1" GOMOD="/home/myitcv/gostuff/src/github.com/myitcv/govim/go.mod" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build120717318=/tmp/go-build -gno-record-gcc-switches" </pre></details> ### What did you do? Considering the following [`govim`](https://github.com/myitcv/govim/tree/master/cmd/govim) [`testscript`](https://godoc.org/github.com/rogpeppe/go-internal/testscript) in which I attempt to range-format line numbers 3-5 (inc) in `main.go`: ``` vim ex 'e main.go' vim ex '3,5GOVIMGoFmt' vim ex 'noautocmd w' cmp main.go main.go.golden -- go.mod -- module mod -- main.go -- package main func main(){ fmt.Println("Hello, world") } -- main.go.golden -- package main func main() { fmt.Println("Hello, world") } ``` I get the error: ``` ToUTF16Column: point is missing offset ``` The params sent were: ``` &protocol.DocumentRangeFormattingParams{ TextDocument: protocol.TextDocumentIdentifier{URI:"file://$WORK/main.go"}, Range: protocol.Range{ Start: protocol.Position{Line:2, Character:0}, End: protocol.Position{Line:5, Character:0}, }, Options: protocol.FormattingOptions{}, } ``` ### What did you expect to see? No error. ### What did you see instead? The above error. cc @stamblerre @ianthehat
help wanted,FeatureRequest,gopls
medium
Critical
427,119,179
vscode
Task input parameters from showOpenDialog
It sure would be helpful if the task and launch input pickers implemented by #4758 were expanded to include one that wrapped the [`window.showOpenDialog` API](https://code.visualstudio.com/api/references/vscode-api#window) with its options for file vs. folder and filtering. The [`window.showOpenDialog` API](https://code.visualstudio.com/api/references/vscode-api#window) offers to _extension authors_ the ability to offer the user a dialog to select a file (optionally with an extension filter) or a folder. The feature available to _users_ is to be able to define [input variables](https://code.visualstudio.com/docs/editor/variables-reference#_input-variables) for usage as task parameters. This user-facing feature only offers the ability to either select between preset strings (`pickString`) or accept a manually-typed input string (`promptString`), but offers no ability to present the user with a file or folder selection dialog. This feature could be very helpful for users as well, not just extension authors. For example, I want to debug a program that supports command-line parameters for input files or directories, and I'd rather prompt the user to provide them differently for each session, than to have them hard-coded in a shared project tasks file.
feature-request,api,tasks
medium
Critical
427,134,415
godot
Exported variable named "script" breaks the actual script
Godot v3.1.stable.official Linux Mint 19.1 In visual script, if a variable is called "script" (all lowercase), the script breaks. If the project is restarted, the script is cleared from the inherited node. **Steps to reproduce:** Attach a visual script to a node, add a variable, name that variable "script" (all lowercase) and run the project. **Minimal reproduction project:** [var called script.zip](https://github.com/godotengine/godot/files/3023997/var.called.script.zip) in Node2D visual script rename the var "new_variable" to "script" and run the project to break it. Workaround after error (don't panic!): Just rename the variable called "script" After reloading the project, the script must be reattached to the node and the variable renamed. PS: I don't know if this issue is present in GDscript or nativeScript as well.
bug,confirmed,topic:visualscript
low
Critical
427,141,502
go
gccgo: dereference of nil pointer to zero-width type does not panic
The Go spec says "For an operand x of pointer type *T, the pointer indirection *x denotes the variable of type T pointed to by x. If x is nil, an attempt to evaluate *x will cause a run-time panic." This program correctly panics with cmd/compile, but does not with gccgo 8.0.0: ```go package main func main() { var p *struct{}; *p = *p } ``` /cc @ianlancetaylor
NeedsInvestigation
low
Major
427,144,400
godot
High RAM usage of visual scripts
Godot v3.1.stable.official Linux Mint 19.1 In my project, not the one linked here, literally 60% of the RAM usage is caused by visual scripts attached to instanced scenes. (GDscript only 6,7%) Is it possible to share scripts (or parts of scripts) attached to instanced scenes by all instances of that scene and avoid duplicated data in memory? Just like textures (hopefully) are handled? **Minimal reproduction project:** [ScriptSizeRep.zip](https://github.com/godotengine/godot/files/3026938/ScriptSizeRep.zip) The output of the reproduction project shows the ram usage. Just run it. Then open the sprite.tscn scene, detach the script from the sprite, run it again and compare the outputs. The difference of RAM usage is significant. (270MB compared to 90MB) This issue report was edited a lot, sorry about that, but it should be fine now.
bug,discussion,confirmed,topic:visualscript,performance
medium
Major
427,145,334
node
Supported asymmetric key types
This is a meta issue to keep track of asymmetric key types supported by OpenSSL and node. The following list includes all key types supported by OpenSSL 1.1.1b. Checked items are fully supported by node's `KeyObject` API: - [x] `EVP_PKEY_RSA`: https://github.com/nodejs/node/pull/24234 - [ ] `EVP_PKEY_RSA2`: appears to be unusable? - [x] `EVP_PKEY_RSA_PSS`: https://github.com/nodejs/node/pull/26960 - [x] `EVP_PKEY_DSA`: https://github.com/nodejs/node/pull/24234 - [ ] `EVP_PKEY_DSA1`: alias for `NID_dsa_2`, but treated like `EVP_PKEY_DSA` by OpenSSL - [ ] `EVP_PKEY_DSA2`: alias for `NID_dsaWithSHA`, but treated like `EVP_PKEY_DSA` by OpenSSL - [ ] `EVP_PKEY_DSA3`: alias for `NID_dsaWithSHA1`, but treated like `EVP_PKEY_DSA` by OpenSSL - [ ] `EVP_PKEY_DSA4`: alias for `NID_dsaWithSHA1_2`, but treated like `EVP_PKEY_DSA` by OpenSSL - [x] `EVP_PKEY_DH`: https://github.com/nodejs/node/pull/31178 - [ ] `EVP_PKEY_DHX` - [x] `EVP_PKEY_EC`: https://github.com/nodejs/node/pull/24234 - [ ] `EVP_PKEY_SM2`: https://github.com/nodejs/node/pull/37066 - [x] `EVP_PKEY_X25519`: https://github.com/nodejs/node/pull/26774 - [x] `EVP_PKEY_X448`: https://github.com/nodejs/node/pull/26774 - [x] `EVP_PKEY_ED25519`: https://github.com/nodejs/node/pull/26319 and https://github.com/nodejs/node/pull/26554 - [x] `EVP_PKEY_ED448`: https://github.com/nodejs/node/pull/26319 and https://github.com/nodejs/node/pull/26554 The next step is to determine which of the above key types need to be dealt with in which way. Some of these types do not represent actual asymmetric keys (e.g., `EVP_PKEY_SCRYPT`) and thus don't need to be dealt with in the `KeyObject` API: - `EVP_PKEY_SCRYPT`: KDF - `EVP_PKEY_HMAC`: MAC - `EVP_PKEY_CMAC`: MAC - `EVP_PKEY_HKDF`: KDF - `EVP_PKEY_POLY1305`: MAC - `EVP_PKEY_SIPHASH`: MAC / PRF - `EVP_PKEY_TLS1_PRF`: PRF
crypto,openssl
low
Minor
427,155,667
flutter
Clearly document relationships between WidgetsBindingObserver/WidgetsBinding/RouteObserver/RouteAware/NavigatorObserver/Navigator
Extracting from #29596. It's very easy to get them all mixed up: `WidgetsBindingObserver.didPopRoute`, `RouteObserver.didPop`, `RouteAware.didPop`, and `NavigatorObserver.didPop` all sound like the same thing, while `WidgetsBindingObserver.didPopRoute` and `NavigatorObserver.didPop`, for instance, are completely unrelated. Explicitly cross-reference their relationships. cc @goderbauer to consider in your routing refactor perhaps.
framework,d: api docs,f: routes,P2,team-framework,triaged-framework
low
Minor
427,159,939
go
x/build/cmd/gopherbot: too aggressive for issue titles that contain the word "document"
`gopherbot` is too aggressive when it comes to [identifying issues](https://github.com/golang/build/blob/c72a0eda0790357f78aaa0ea71fd3cf88015facc/cmd/gopherbot/gopherbot.go#L1915-L1916) to be labelled "documentation": ```go func isDocumentationTitle(t string) bool { if !strings.Contains(t, "doc") && !strings.Contains(t, "Doc") { return false } t = strings.ToLower(t) if strings.HasPrefix(t, "doc:") { return true } if strings.HasPrefix(t, "docs:") { return true } if strings.HasPrefix(t, "cmd/doc:") { return false } if strings.HasPrefix(t, "go/doc:") { return false } if strings.Contains(t, "godoc:") { // in x/tools, or the dozen places people file it as return false } return strings.Contains(t, "document") || strings.Contains(t, "docs ") } ``` This came to light with https://github.com/golang/go/issues/31150 which has the title: ``` x/tools/cmd/gopls: textDocument/rangeFormatting gives error "ToUTF16Column: point is missing offset" ```
Documentation,Builders,NeedsFix
low
Critical
427,161,406
pytorch
documentation for C++ / libtorch autograd profiler
## 📚 Documentation <!-- A clear and concise description of what content in https://pytorch.org/docs is an issue. If this has to do with the general https://pytorch.org website, please file an issue at https://github.com/pytorch/pytorch.github.io/issues/new/choose instead. If this has to do with https://pytorch.org/tutorials, please file an issue at https://github.com/pytorch/tutorials/issues/new --> I have a Pytorch C++ frontend (LibTorch) based deployment codebase. I am trying to add profiling support to it. I am thinking of using autograd profiler for it, which seems to be the best option as far as getting layer-by-layer timings is concerned. I recently saw a change getting pushed in which enables access to the autograd profiler from the C++ frontend (https://github.com/pytorch/pytorch/pull/16580). Is there any documentation for its usage ? Or some sort of a primer...?
module: docs,module: cpp,triaged
low
Minor
427,166,006
pytorch
FP32 depthwise convolution is slow in GPU
Just tested it in IPython ``` import torch as t conv2d = t.nn.Conv2d(32,32,3,1,1).cuda() conv2d_depthwise = t.nn.Conv2d(32,32,3,1,1,groups=32).cuda() inp = t.randn(2,32,512,512).cuda() # warm up o = conv2d(inp) o = conv2d_depthwise(inp) %timeit conv2d(inp) %timeit conv2d_depthwise(inp) ``` get ``` 1000 loops, best of 3: 1.7 ms per loop 1000 loops, best of 3: 2.99 ms per loop ``` Group convolution is much slower than normal convolution, which is supposed to be the opposite. I'm using `1.1.0a0+65d6f10_2_ged1fa68` ,cuda10, driver:410.78,Titan Xp btw, there is an issue of depthwise convolution being slow in CPU #13716
high priority,module: dependency bug,module: performance,module: cudnn,module: cuda,module: convolution,triaged
high
Major
427,175,144
pytorch
Speed-up torch.cat on CPU
## 🚀 Feature Please speed-up torch.cat on CPU. It should generally be about as fast as a `clone()` call on the output, since both read and write the same number of elements. It should have good single-threaded and multi-threaded performance. On large tensors, it seems to be ~4x slower than I would expect. ### Example that does not fit in cache ```python x = [torch.randn(1, 282240) for _ in range(64)] %timeit torch.cat(x, dim=0) # 42 ms -- expected ~9-11 ms # note that clone is reasonable speed at this size y = torch.cat(x, dim=0) %timeit y.clone() # 11.4 ms ```
module: performance,module: cpu,triaged
medium
Major
427,177,295
TypeScript
Conjunction of two disjunctions causes incorrect errors
**TypeScript Version:** 3.3 **Search Terms:** * Conjunction **Code** ```ts interface Small { small: true, callbackWhenSmallIsFalsy?: undefined; } interface NotSmall { small?: false; callbackWhenSmallIsFalsy: () => void; } interface Green { green?: false numberWhenGreenTrue?: undefined } interface NotGreen { green: true numberWhenGreenTrue: number; } type SmallProps = Small | NotSmall; type GreenProps = Green | NotGreen; type Props = SmallProps & GreenProps; const try1: Props = { green: true, numberWhenGreenTrue: 5, small: true } // OK const try2: Props = { green: false, numberWhenGreenTrue: 5, small: true } // Unexpected Error: 'callbackWhenSmallIsFalsy' is missing... // Expected Error: 5 is not assignable to undefined for 'numberWhenGreenTrue' const try3: Props = { green: false, small: true } // OK const try4: Props = { green: false, small: true, callbackWhenSmallIsFalsy: () => 5 } // Acceptable Error: Types of property 'small' are incompatible. // TS can't know which of the conditions to use, so this is okay. NTH: It lists multiple possible errors ``` **Expected behavior:** in `try3` I don't expect the error to be about `callbackWhenSmallIsFalsy`. I would expect an error about `green` or `numberWhenGreenTrue` **Actual behavior:** When the conjunction is evaluated as false, it appears that the compiler tries to force an error into the first part of the conjunction, when the falsehood is from the second part of the conjunction. **Playground Link:** <!-- A link to a TypeScript Playground "Share" link which demonstrates this behavior --> http://www.typescriptlang.org/play/#src=interface%20Small%20%7B%0A%20%20%20%20small%3A%20true%2C%0A%20%20%20%20callbackWhenSmallIsFalsy%3F%3A%20undefined%3B%0A%7D%0A%0Ainterface%20NotSmall%20%7B%0A%20%20%20%20small%3F%3A%20false%3B%0A%20%20%20%20callbackWhenSmallIsFalsy%3A%20()%20%3D%3E%20void%3B%0A%7D%0A%0Ainterface%20Green%20%7B%0A%20%20%20%20green%3F%3A%20false%0A%20%20%20%20numberWhenGreenTrue%3F%3A%20undefined%0A%7D%0A%0Ainterface%20NotGreen%20%7B%0A%20%20%20%20green%3A%20true%0A%20%20%20%20numberWhenGreenTrue%3A%20number%3B%0A%7D%0A%0Atype%20SmallProps%20%3D%20Small%20%7C%20NotSmall%3B%0Atype%20GreenProps%20%3D%20Green%20%7C%20NotGreen%3B%0A%0Atype%20Props%20%3D%20SmallProps%20%26%20GreenProps%3B%0A%0Aconst%20try1%3A%20Props%20%3D%20%7B%0A%20%20%20%20green%3A%20true%2C%0A%20%20%20%20numberWhenGreenTrue%3A%205%2C%0A%20%20%20%20small%3A%20true%0A%7D%20%2F%2F%20OK%0A%0Aconst%20try2%3A%20Props%20%3D%20%7B%0A%20%20%20%20green%3A%20false%2C%0A%20%20%20%20numberWhenGreenTrue%3A%205%2C%0A%20%20%20%20small%3A%20true%0A%7D%20%2F%2F%20Unexpected%20Error%3A%20'callbackWhenSmallIsFalsy'%20is%20missing...%0A%2F%2F%20Expected%20Error%3A%205%20is%20not%20assignable%20to%20undefined%20for%20'numberWhenGreenTrue'%0A%0A%20const%20try3%3A%20Props%20%3D%20%7B%0A%20%20%20%20green%3A%20false%2C%0A%20%20%20%20small%3A%20true%0A%7D%20%2F%2F%20OK%0A%0A%20const%20try4%3A%20Props%20%3D%20%7B%0A%20%20%20%20green%3A%20false%2C%0A%20%20%20%20small%3A%20true%2C%0A%20%20%20%20callbackWhenSmallIsFalsy%3A%20()%20%3D%3E%205%0A%7D%20%2F%2F%20Acceptable%20Error%3A%20Types%20of%20property%20'small'%20are%20incompatible.%0A%2F%2F%20TS%20can't%20know%20which%20of%20the%20conditions%20to%20use%2C%20so%20this%20is%20okay.%20NTH%3A%20It%20lists%20multiple%20possible%20errors **Related Issues:** <!-- Did you find other bugs that looked similar? -->
Suggestion,Experience Enhancement
low
Critical
427,223,291
pytorch
Memory not being deallocated in backward()
## 🐛 Bug I've recently discovered an issue with memory not being freed after the first iteration of training. It's not a leak, as memory usage stays consistent after the second pass through the loop. It appears on both CPU and GPU, however it is much more significant when running on CPU. The issue seems to come from the either backward or optimizer.step(), as removing their calls provides stable memory usage. I ran into this while attempting to train a rather large model that uses pretty much all of my available GPU memory. It will complete the first iteration successfully, then OOM during the second. ## To Reproduce Steps to reproduce the behavior: I have compiled a minimal CPU and GPU gist that should reproduce this issue: [CPU](https://gist.github.com/mdlockyer/3ff43f00ad7b7e2c2a3a7f33469658da) [GPU](https://gist.github.com/mdlockyer/1b728751113067c47ef104a5ecf1691d) The CPU gist uses the memory-profile package, so that will need to be installed with pip ## Expected behavior The memory usage should be relatively the same in the first pass through the training loop, and all following loops. ## Environment PyTorch version: 1.0.1.post2 Is debug build: No CUDA used to build PyTorch: None OS: Mac OSX 10.13.6 GCC version: Could not collect CMake version: version 3.9.4 Python version: 3.6 Is CUDA available: No CUDA runtime version: No CUDA GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA Versions of relevant libraries: [pip3] numpy==1.16.2 [pip3] torch==1.0.1.post2 [pip3] torchvision==0.2.2.post3 [conda] torch 1.0.1.post2 <pip> [conda] torchsummary 1.5.1 <pip> [conda] torchvision 0.2.1 <pip> ## Additional context I ran some profiles on the CPU memory usage that highlight the issue: ### With backward pass and update: ![with_backward_4](https://user-images.githubusercontent.com/30844227/55268463-cc9d3d00-5257-11e9-9ff0-cc1e82c08081.png) #### Iteration 1 Line # Mem usage Increment Line Contents ================================================ 23 360.0 MiB 360.0 MiB @profile 24 def train(model, criterion, optim): 25 360.1 MiB 0.0 MiB x = torch.rand(1, 3, 8, 8) 26 360.1 MiB 0.0 MiB y = torch.ones(1, 1, 8, 8) 27 28 402.7 MiB 42.6 MiB out = model(x) 29 402.7 MiB 0.1 MiB loss = criterion(out, y) 30 31 402.7 MiB 0.0 MiB optim.zero_grad() 32 663.8 MiB 261.1 MiB loss.backward() 33 664.0 MiB 0.1 MiB optim.step() 34 664.0 MiB 0.0 MiB optim.zero_grad() 35 664.0 MiB 0.0 MiB del x, y, out, loss 36 664.0 MiB 0.0 MiB gc.collect() #### Iteration 2 Line # Mem usage Increment Line Contents ================================================ 23 664.0 MiB 664.0 MiB @profile 24 def train(model, criterion, optim): 25 664.0 MiB 0.0 MiB x = torch.rand(1, 3, 8, 8) 26 664.0 MiB 0.0 MiB y = torch.ones(1, 1, 8, 8) 27 28 701.7 MiB 37.7 MiB out = model(x) 29 701.7 MiB 0.0 MiB loss = criterion(out, y) 30 31 701.7 MiB 0.0 MiB optim.zero_grad() 32 671.7 MiB 0.0 MiB loss.backward() 33 671.7 MiB 0.0 MiB optim.step() 34 671.7 MiB 0.0 MiB optim.zero_grad() 35 671.7 MiB 0.0 MiB del x, y, out, loss 36 671.7 MiB 0.0 MiB gc.collect() ### Without backward pass and update: ![without_backward_2](https://user-images.githubusercontent.com/30844227/55268515-42090d80-5258-11e9-8a54-81e2afeadf85.png) #### Iteration 1 Line # Mem usage Increment Line Contents ================================================ 23 351.2 MiB 351.2 MiB @profile 24 def train(model, criterion, optim): 25 351.2 MiB 0.0 MiB x = torch.rand(1, 3, 8, 8) 26 351.3 MiB 0.0 MiB y = torch.ones(1, 1, 8, 8) 27 28 392.4 MiB 41.1 MiB out = 
model(x) 29 392.5 MiB 0.1 MiB loss = criterion(out, y) 30 31 392.5 MiB 0.0 MiB optim.zero_grad() 32 #loss.backward() 33 #optim.step() 34 392.5 MiB 0.0 MiB optim.zero_grad() 35 361.7 MiB 0.0 MiB del x, y, out, loss 36 361.7 MiB 0.0 MiB gc.collect() #### Iteration 2 Line # Mem usage Increment Line Contents ================================================ 23 361.7 MiB 361.7 MiB @profile 24 def train(model, criterion, optim): 25 361.7 MiB 0.0 MiB x = torch.rand(1, 3, 8, 8) 26 361.7 MiB 0.0 MiB y = torch.ones(1, 1, 8, 8) 27 28 392.0 MiB 30.3 MiB out = model(x) 29 392.0 MiB 0.0 MiB loss = criterion(out, y) 30 31 392.0 MiB 0.0 MiB optim.zero_grad() 32 #loss.backward() 33 #optim.step() 34 392.0 MiB 0.0 MiB optim.zero_grad() 35 361.7 MiB 0.0 MiB del x, y, out, loss 36 361.7 MiB 0.0 MiB gc.collect() cc @ezyang @gchanan @zou3519 @SsnL @albanD @gqchen
module: autograd,module: memory usage,triaged,quansight-nack
medium
Critical
427,231,957
TypeScript
intellisense typeroots for jsconfig.json
## Suggestion Add the `typeRoots` property to jsconfig.json, like tsconfig.json. ## Use Cases In an ASP.NET Core project, there is no IntelliSense for a `view.cshtml` file's `<script>` when my type definition files are put in `wwwroot/@typings`. ## Examples ``` /aspnetcoreproject | wwwroot |@typings |tsconfig.json |Views |Home |index.cshtml (can not find the type definition files) |jsconfig.json ``` ## Checklist My suggestion meets these guidelines: * [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code * [x] This wouldn't change the runtime behavior of existing JavaScript code * [x] This could be implemented without emitting different JS based on the types of the expressions * [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.) * [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
Suggestion,Experience Enhancement
low
Major
427,243,609
TypeScript
Better support for global registration patterns
## Suggestion It seems likely the compiler logic used for globalThis could be reused to support automated global types for custom elements, which are especially annoying and difficult to deal with in JSX. ## Use Cases This is the current dev workflow for creating and using custom elements. The custom element loading-spinner.ts ```ts class LoadingSpinner extends HTMLElement { constructor() { super(); this.innerHTML = `<div class="loading-spinner">` } //omitting code for brevity } customElements.define('loading-spinner', LoadingSpinner); //<-- could this be added to a global namespace using similar logic as globalThis ? ``` The type definition loading-spinner.d.ts ```ts //this is coupled to a specific JSX namespace's types (preact in this case) //maybe adding a JSX.CustomElementsNamespace to lib.d.ts could be an option as well? interface LoadingSpinner extends JSX.HTMLAttributes {} declare namespace JSX { interface IntrinsicElements { 'loading-spinner': LoadingSpinner; } } ``` Using the custom component in a JSX component ```ts import h from 'Reactish' let MyComponent = () => { return ( <loading-spinner/> //<--so difficult not to get errors here ) } ```
Suggestion,Needs Proposal
low
Critical
427,256,298
neovim
Remove insert delay for read-only files
https://vi.stackexchange.com/questions/3001/remove-the-insert-delay-after-entering-insert-in-a-read-only-file This is standard vim behavior hence skipping neovim specific details. ------- When I edit a file marked read-only, there is a jarring delay upon editing it. Please make it configurable if possible. Thank you. The attached SO question has info on where the edit would go...
enhancement,ux,core
low
Major
427,278,683
rust
Missing line information for calls to diverging functions
Under some circumstances, `rustc` emits debug information with missing line numbers for calls to diverging functions. ```rust fn main() { if True == False { // unreachable diverge(); } diverge(); } #[derive(PartialEq)] pub enum MyBool { True, False, } use MyBool::*; fn diverge() -> ! { panic!(); } ``` Compiling with `rustc main.rs -g --emit=llvm-ir -Clto` and then inspecting the debug info for `main` in the `.ll` file, the line information is 0: ```llvm call void @_ZN4main7diverge17h595c254fa559ce50E(), !dbg !3943 !3943 = !DILocation(line: 0, scope: !3942) ``` If the unreachable call is removed, the line info matches the source file as expected: ```llvm !4340 = !DILocation(line: 16, column: 4, scope: !4338) ``` I get the same behavior with the latest stable ``` rustc 1.33.0 (2aa4c46cf 2019-02-28) binary: rustc commit-hash: 2aa4c46cfdd726e97360c2734835aa3515e8c858 commit-date: 2019-02-28 host: x86_64-unknown-linux-gnu release: 1.33.0 LLVM version: 8.0 ``` and with the latest nightly ``` rustc 1.35.0-nightly (237bf3244 2019-03-28) binary: rustc commit-hash: 237bf3244fffef501cf37d4bda00e1fce3fcfb46 commit-date: 2019-03-28 host: x86_64-unknown-linux-gnu release: 1.35.0-nightly LLVM version: 8.0 ``` cc @japaric
A-debuginfo,E-needs-test,T-compiler,C-bug
low
Critical
427,289,818
godot
Inconsistent Behavior For String/Bool Conversion
When converting a bool to a String, the String is set to "True" and "False" for the appropriate boolean values. However, when converting a String to a bool, the bool is set to true if the string is non-empty, regardless of the contents. This inconsistent behavior means that if a bool set to false is converted to a string, and then is converted back, it will become true. Furthermore, the current string-to-bool conversion is not very useful for many (or even most) of the use cases for such conversions, e.g. custom text data parsing, where usually one wants to convert a string such as "true" to a bool. I propose the following: 1. The bool to String conversion remains as it is. 2. For the String to bool conversion, the result is true if the String is "True", "TRUE", "true" or "1", and false if the String is "False", "FALSE", "false" or "0". If the string is set to anything else, then it is true if non-empty, and false otherwise. If this proposal is accepted, then I will gladly work on the pull request to implement it.
discussion,topic:core
low
Major
427,296,349
neovim
folding by column
Could neovim support folding by column? For example ```c if (a > 1) { printf("xxx") } else { printf("yyy") } ``` would be folded to ```c if (a > 1) {...} else {...} ```
enhancement,needs:discussion
low
Major
427,321,619
rust
Implement a custom allocator to provide detailed memory usage info during compilation
Idea from @eddyb: It would be great if we could measure the memory used by the compiler during compilation in a very fine-grained way. One idea to do this would be to implement a custom allocator which could record statistics for memory allocated and deallocated. This allocator could also be integrated with rustc's arenas so that we could see exactly how much memory is used by each arena at any given instant.
T-compiler,C-feature-request,I-compilemem,A-self-profile
low
Minor
427,323,973
pytorch
[FR] add CPU information in collect_env.py
module: collect_env.py,triaged,enhancement
low
Minor
427,329,977
TypeScript
Document --preserveWatchOutput in tsc --help
**TypeScript Version:** 3.4.1 I had to look up https://github.com/Microsoft/TypeScript/issues/26873 to find what the flag was in order to use it in a `--build` setup, as it's not documented in `--help`.
Docs
low
Minor
427,335,488
pytorch
[FR] Warn in cuda init if cuda < 10 is used with RTX cards
There are a lot of bug reports from users trying to use CUDA 9 builds with RTX cards. We should warn about the incompatibility.
module: cuda,module: molly-guard,triaged
low
Critical
427,339,643
TypeScript
bug: Decorator Method Name Type Restriction By Enum
**TypeScript Version:** 3.3.3 <!-- Search terms you tried before logging this (so others can find this issue more easily) --> **Search Terms:** is:issue decorator type method name propertyKey **Code:** This works as expected: ```ts interface SomeTypeMap { fieldOne: string; fieldTwo: number; } function MethodDecorator<Key extends keyof SomeTypeMap>( target: SomeClass, methodName: Key, descriptor: TypedPropertyDescriptor<(...args: any[]) => SomeTypeMap[Key]>) { /* some implementation */ } class SomeClass { // works fine, as expected @MethodDecorator public fieldOne() { return ""; } // compiler error, as expected // since it returns string instead of number @MethodDecorator public fieldTwo() { return ""; } // compiler error, as expected // since method name is not a key in SomeTypeMap @MethodDecorator public fieldThree() { return ""; } } ``` This does not work as expected: ```ts enum Field { One = "fieldOne" } interface SomeTypeMap { [Field.One]: string; } function MethodDecorator<Key extends keyof SomeTypeMap>( target: SomeClass, methodName: Key, descriptor: TypedPropertyDescriptor<(...args: any[]) => SomeTypeMap[Key]>) { /* some implementation */ } class SomeClass { // compiler error, unexpected @MethodDecorator public [Field.One]() { return ""; } } ``` **Expected behavior:** In the second code example, I would expect there to be no compiler errors. **Actual behavior:** Error message: `Argument of type 'string' is not assignable to parameter of type Field` The method name seems to be of type `string` here, which is true, but it should also be of type `Field`. **Playground Link:** can't enable experimental decorators in the playground. **Related Issues:** #17795 less related: #30102 **Comments:** Thanks for all your hard work TypeScript team.
Needs Investigation
low
Critical
427,357,439
neovim
dot-repeat fold operation
- `nvim --version`: v0.3.4 - Vim (version: ) behaves differently? no - Operating system/version: macOS - Terminal name/version: iTerm - `$TERM`: xterm-256color ### Steps to reproduce using `nvim -u NORC` ``` nvim -u NORC move to one code block type zfa{ (this will fold the code block) move to the next code block and type the dot ``` ### Actual behaviour Repeats the last modification operation, not the fold operation. ### Expected behaviour Repeats the fold action and folds the current code block.
enhancement,folds,normal-mode
low
Minor
427,369,621
TypeScript
event argument has no target.result property on IDBRequest: success event
<!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. --> **TypeScript Version:** 3.4.0-dev.201 updated Mar 1 & still no fix https://github.com/Microsoft/TypeScript/blob/00bf32ca3967b07e8663d0cd2b3e2bbf572da88b/lib/lib.webworker.d.ts go to line 1860 **Search Terms:** onsuccess target result eventtarget idbrequest **Code** please note this code is a cut & paste from https://developer.mozilla.org/en-US/docs/Web/API/IndexedDB_API/Using_IndexedDB#Generating_handlers ```ts var request = indexedDB.open("MyTestDatabase"); request.onsuccess = function(event) { db = event.target.result; }; ``` **Expected behavior:** no error **Actual behavior:** error TS2339: Property 'result' does not exist on type 'EventTarget' **Playground Link:** N/A **Related Issues:** <!-- Did you find other bugs that looked similar? --> FileReader.onLoad/onLoadEnd event argument has no target.result property #4163 (Fixed) IDBOpenDBRequest.onupgradeneeded also needs the same augmentation IDBTransaction.onerror event.target.error will most likely also need type definitions updated
Bug,Domain: lib.d.ts
low
Critical
427,372,406
rust
Check that non-overwrite accesses to downcast projections are dominated by variant checks.
Pattern-matching in Rust, e.g.: ```rust fn f<T>(_: T) {} fn g<T>(_: T) {} // T and E have Copy bounds to reduce MIR verbosity. pub fn foo<T: Copy, E: Copy>(r: Result<T, E>) { match r { Ok(x) => f(x), Err(e) => g(e), } } ``` turns into this MIR (slightly cleaned up): ```rust fn foo(_1: std::result::Result<T, E>) -> () { let mut _0: (); // return place let mut _2: isize; let mut _3: T; let mut _4: E; bb0: { _2 = discriminant(_1); switchInt(move _2) -> [0isize: bb2, 1isize: bb3, otherwise: bb1]; } bb1: { unreachable; } bb2: { _3 = ((_1 as Ok).0: T); _0 = const f(move _3) -> bb4; } bb3: { _4 = ((_1 as Err).0: E); _0 = const g(move _4) -> bb4; } bb4: { return; } } ``` We already have a dominator tree for MIR, so we can build on top of that and compute the known variants for places (in this case, `Ok` and `Err` for `_1`). Then we can just check that any read/borrow/etc. access (any access with does not fully overwrite the previous value, really) within a downcast (e.g. `(_1 as Ok).0`) is dominated by a variant check for that variant (i.e. `_1` being `Ok`, via `_2` being `discriminant(_1)`). That said, the kind of dataflow borrowck already needs to do might easily include this too (e.g. treating `(_1 as Ok)` as initialized iff `_1` is initialized and `discriminant(_1) == 0` was checked). (Also tempting: moving `Discriminant` into `Operand` to be able to get rid of the `_2` and have `switchInt(discriminant(_1))` directly) cc @rust-lang/wg-compiler-nll @oli
C-enhancement,A-codegen,T-compiler,A-MIR
low
Minor
427,385,727
go
x/net/http2: sendWindowUpdate may send invalid window size increment
Disclaimer: I didn't run into an issue here, but I'm working on an HTTP/2 implementation and saw something that could possibly not be according to the spec. The code linked below contains a loop that sends multiple `WINDOW_UPDATE` frames if the window size increment is higher than the allowed by the HTTP/2 spec. It then sends a last `WINDOW_UPDATE` frame after the loop has exited. https://github.com/golang/net/blob/74de082e2cca95839e88aa0aeee5aadf6ce7710f/http2/server.go#L2189-L2193 However, because the check reads: `for n >= maxUint31`, the last frame sent could contain a window size increment of 0 if the initial `n` is a multiple of 2^31 - 1. That is not allowed by the HTTP/2 spec: From: https://httpwg.org/specs/rfc7540.html#rfc.section.6.9 > A receiver MUST treat the receipt of a WINDOW_UPDATE frame with an flow-control window increment of 0 as a stream error (Section 5.4.2) of type PROTOCOL_ERROR; errors on the connection flow-control window MUST be treated as a connection error (Section 5.4.1). I think the fix here would be just to change the `>=` check to `>`. I suspect that the reason nobody ran into this before is that it's already extremely unlikely that all the conditions for this bug to manifest are actually met in the wild. P.S.: The questions in the issue template didn't really apply to this issue. Apologies in advance if this is not very helpful.
NeedsInvestigation
low
Critical
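The off-by-one described in the x/net/http2 record above can be illustrated with a small, self-contained Go sketch. This is not the actual library code; the constant name and helper function are invented for illustration.

```go
package main

import "fmt"

const maxUint31 = 1<<31 - 1

// splitIncrement is a hypothetical stand-in for the splitting loop described
// above: it breaks a large flow-control increment n into chunks no larger
// than 2^31-1. With useGTE (the ">=" comparison), an n that is an exact
// multiple of 2^31-1 produces a trailing chunk of 0, which RFC 7540 §6.9
// forbids as a WINDOW_UPDATE increment; with ">" the zero chunk never appears.
func splitIncrement(n int64, useGTE bool) []int64 {
	var chunks []int64
	for (useGTE && n >= maxUint31) || (!useGTE && n > maxUint31) {
		chunks = append(chunks, maxUint31)
		n -= maxUint31
	}
	return append(chunks, n)
}

func main() {
	fmt.Println(splitIncrement(maxUint31, true))  // [2147483647 0] <- invalid final frame
	fmt.Println(splitIncrement(maxUint31, false)) // [2147483647]
}
```

As the sketch suggests, the zero-sized final frame only appears when the total increment is an exact multiple of 2^31-1, which is consistent with the report that the bug would be extremely rare in practice.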
427,392,193
vscode
Smart Backspace feature
There is a good example of how PHPStorm handles it: https://blog.jetbrains.com/phpstorm/2014/09/smart-backspace-in-phpstorm-8/ VSCode really lacks a feature like this, as it requires you to delete all the spaces/tabs before the cursor when you want to go back to the end of the previous line using backspace.
feature-request,editor-commands
high
Critical
427,402,680
TypeScript
Synchronize unsaved config files with the TS Server
**Request** TS Server currently only reads `tsconfig` and `jsconfig` files from disk. When editing a config file in an editor like vscode, you have to save it to the disk for the project configuration changes to take effect. We'd like to be able to synchronize unsaved config files with the tsserver as well, the same way we synchronize unsaved js/ts files. This would mainly be helpful for error reporting in the config files
Suggestion
low
Critical
427,402,897
flutter
FlutterPlatformViewFactory always passes a CGRectZero
It seems like we always receive a CGRectZero when creating a platform view. Why? Shouldn't it receive the size from flutter? https://github.com/flutter/engine/blob/f3ec767458f12bb3099248fdc57d6c0d1051f042/shell/platform/darwin/ios/framework/Source/FlutterPlatformViews.mm#L78
framework,engine,a: platform-views,P2,found in release: 1.21,team-engine,triaged-engine
low
Major
427,408,691
opencv
Request to change the test code (or interface?)
Hello! I just submitted a [pr](https://github.com/opencv/opencv/pull/14189), and the robot reports errors on *modules/python/test/test_facedetect.py**, as > Traceback (most recent call last): File "/build/precommit_linux64/opencv/modules/python/test/test_facedetect.py", line 55, in test_facedetect rects = detect(gray, cascade) File "/build/precommit_linux64/opencv/modules/python/test/test_facedetect.py", line 18, in detect rects[:,2:] += rects[:,:2] TypeError: list indices must be integers or slices, not tuple and the source file is https://github.com/opencv/opencv/blob/360758e8ae5ebaf94c62f739d37892eae4222df4/modules/python/test/test_facedetect.py#L13-L19 Since the C++ version of this code provides vector<Rect>, the python version should return LIST OF TUPLES instead of NUMPY 2D ARRAY as above, which satisfy the consistency of vector convertion between Python and C as well. https://github.com/opencv/opencv/blob/360758e8ae5ebaf94c62f739d37892eae4222df4/modules/objdetect/src/cascadedetect.cpp#L1351-L1356 Which can be fixed by convert it to numpy array `rects = np.array(rects)` <!-- If you have a question rather than reporting a bug please go to http://answers.opencv.org where you get much faster responses. If you need further assistance please read [How To Contribute](https://github.com/opencv/opencv/wiki/How_to_contribute). Please: * Read the documentation to test with the latest developer build. * Check if other person has already created the same issue to avoid duplicates. You can comment on it if there already is an issue. * Try to be as detailed as possible in your report. * Report only one problem per created issue. This is a template helping you to create an issue which can be processed as quickly as possible. This is the bug reporting section for the OpenCV library. --> ##### System information (version) <!-- Example - OpenCV => 3.1 - Operating System / Platform => Windows 64 Bit - Compiler => Visual Studio 2015 --> - OpenCV => 4.1.0 - Operating System / Platform => Ubuntu 16.04 - Compiler => gcc 5.4.0 ##### Detailed description <!-- your description --> ##### Steps to reproduce <!-- to add code example fence it with triple backticks and optional file extension ```.cpp // C++ code example ``` or attach as .txt or .zip file -->
category: python bindings,RFC
low
Critical
427,409,597
godot
Material assigned to OBJ mesh doesn't display in the running project
**Godot version:** v3.1.stable.official **OS/device including version:** Windows 10 **Issue description:** The imported model has textures in the editor, but when I press F5 (in-game) it appears white, as if without textures. **Steps to reproduce:** Not sure; I imported an obj file, then double-clicked it, tried to change materials, and they looked like they were saved, so I placed the model in the main game scene. In the editor there are textures, but once I started the game, there were none. There were 2 materials expected from that .obj model, because it had UVs and 2 materials inside. Here is a screenshot of the texture being in the editor, but not in the game ![image](https://user-images.githubusercontent.com/46406204/55291117-b15f3880-53e3-11e9-994b-7e1ab8dd2864.png) And here's how I "fixed" the issue ![image](https://user-images.githubusercontent.com/46406204/55291122-bde39100-53e3-11e9-9dfd-ed6ebdbc8cba.png)
bug,topic:rendering,topic:editor,confirmed,topic:3d
low
Minor
427,453,460
react
useMemo / useCallback cache busting opt out
According to the `React` docs, `useMemo` and `useCallback` are subject to cache purging: > You may rely on useMemo as a performance optimization, not as a semantic guarantee. In the future, React may choose to “forget” some previously memoized values and recalculate them on next render, e.g. to free memory for offscreen components. Write your code so that it still works without useMemo — and then add it to optimize performance. [source](https://reactjs.org/docs/hooks-reference.html#usememo) I am working on moving `react-beautiful-dnd` over to using hooks https://github.com/atlassian/react-beautiful-dnd/issues/871. I have the whole thing working and tested 👍 It leans quite heavily on `useMemo` and `useCallback` right now. If the memoization cache is cleared for a dragging item, the result will be a cancelled drag. This is not good. My understanding is that `useMemo` and `useCallback` are currently *not* subject to cache purging based on this language: > In the **future**, React may choose to “forget” **Request 1**: Is it possible to opt out of this cache purging? Perhaps a third `options` argument to `useMemo` and `useCallback`: ```js const value = useMemo(() => ({ hello: 'world' }), [], { usePersistantCache: true }); ``` (Naming up for grabs, but this is just the big idea) A workaround is to use a custom memoization toolset such as a `useMemoOne` which reimplements `useMemo` and `useCallback` just using `ref`s [see example](https://twitter.com/alexandereardon/status/1108488559881641986) I am keen to avoid the workaround if possible. **Request 2**: While *request 1* is favourable, it would be good to know the exact conditions in which the memoization caches are purged
Type: Discussion
high
Major
427,454,006
go
cmd/go: support easy way to install a remote binary while respecting remote 'replace' and 'exclude' directives
### What version of Go are you using (`go version`)? <pre> $ go version go version go1.12.1 linux/amd64 </pre> ### Does this issue reproduce with the latest release? Yes, including tip. ### What did you do? ``` cd /tmp go get foo ``` where: * `foo` is a module with `replace` directives chosen by the module author, and * `foo` is a binary command, and * I want `foo` to build as specified by the module author ### What did you expect to see? The `replace` directives in the `go.mod` for `foo` respected in my resulting `foo` binary. ### What did you see instead? The `replace` directives in the `go.mod` for `foo` are currently ignored. ### Summary The suggestion in this issue is to provide some easy way to install a remote binary while respecting `replace` and `exclude` directives in the remote module. This seems necessary based on observed usage, as well as because this seems part of the promise from the proposal and elsewhere that _"a module author is in complete control of that module's build when it is the main program being built"_. That promise of control by the module author seems especially important when the author is publishing a binary for use by others. ### Background As far as I was able to follow the design discussions: * By design, modules are a less expressive system than a more traditional approach, and * It seems the complete control given to the top-level build via `replace` and `exclude` directives was an important aspect of balancing out that reduction in expressiveness. For example, the ["Build Control"](https://go.googlesource.com/proposal/+/master/design/24301-versioned-go.md#build-control) section in the official proposal has a discussion about this, including (with emphasis added): > Minimal version selection gives the top-level module in the build additional control, allowing it to exclude specific module versions or replace others with different code, but those exclusions and replacements only apply when found in the top-level module, not when the module is a dependency in a larger build. > > _A module author is therefore in complete control of that module's build when it is the main program being built_, but not in complete control of other users' builds that depend on the module. _I believe this distinction will make this proposal scale to much larger, more distributed code bases than the Bundler/Cargo/Dep approach_. There are similar sentiments expressed in the initial vgo blog series, [such as](https://research.swtch.com/vgo-mvs): > Minimal version selection is very simple. It achieves simplicity by > eliminating all flexibility about what the answer must be: the build > list is exactly the versions specified in the requirements. _A real system > needs more flexibility, for example the ability to exclude certain module > versions or replace others_. Those arguments seem reasonably compelling. It seems, however, the current Go modules system does not quite deliver on that promise when, for example: * The author of module `foo` decided to use a `replace` or `exclude`, and * The module `foo` is _"the main program being built"_ via a `go get foo` executed outside another module. In that scenario, the current modules system ignores the remote `replace` or `exclude` in `foo`. In other words, that scenario seems to illustrate not delivering on the promise of "_A module author is therefore in complete control of that module's build when it is the main program being built_".
### "Don't use replace" as an alternative solution When the concern in this issue has been raised in the past, sometimes the response has been something like "People shouldn't really use `replace` when releasing a module". However, I think that falls short as a solution. @rogpeppe has stated for example that `juju` cannot currently be built without `replace` directives. In general, one could imagine that the need for `replace` directives going down over time as the ecosystem adapts modules and semver more faithfully, but it is hard to imagine the need for need for `replace` directives going so low that the need for `replace` could be approximated by zero. For example: * Use of semver is never perfect, especially in the face of human error. * Changes can [break](http://www.hyrumslaw.com/) a consumer without changing the statically checkable API. * `v0` is a "compatibility free zone", yet people in practice and by necessity still use `v0` dependencies. MVS can deliver incompatible results in the face of multiple `require` directives for a `v0` dependency. * Etc. I think the on-going need for `replace` is especially true given the purposeful reduction in expressivity elsewhere in the modules system. ### "git clone" as an alternative solution In discussions on this topic, a response is sometimes made along the lines of "If an author of a binary needs to use `replace` directives, they can always just update their readme to ask users to not do `go get` or `go install` and instead do a `git clone` followed by `go install`". A readme in theory could also include `git clone --branch`. However, I think a `git clone` solution falls short given: * The future benefits of GOPROXY mirrors and notaries, etc. * The benefits of the `go` command automatically picking a good semver tag for you, with a default of `@latest` (e.g., the semver-aware logic described in ["Module aware go get"](https://golang.org/cmd/go/#hdr-Module_aware_go_get)). * The need to update the readme over time (e.g., changing the recommended semver tag if the readme supplies a specific recommended semver tag to use with `git`, or changing the readme if it normally specifies a more standard `go get` variation but then is only temporarily updated to specify using `git clone` in order to respect a `replace` directive for a few months while waiting for resolution of an upstream problem, etc.). * `go get some/cool/cmd` has proven to be popular within the Go community, including as a concrete "gateway" to Go for people who are not yet developing Go themselves. * `go get` nicely hides most VCS differences. Hugo is an example of an early modules adopter that currently has a `replace` directive in its `go.mod`. It has the following [installation instructions](https://github.com/gohugoio/hugo): > Since Hugo 0.48, Hugo uses the Go Modules support built into Go 1.11 to build. The easiest is to clone Hugo in a directory outside of GOPATH, as in the following example: > > ``` > mkdir $HOME/src > cd $HOME/src > git clone https://github.com/gohugoio/hugo.git > cd hugo > go install > ``` That might be the right choice for Hugo based on its needs and the current state of modules in Go 1.12, but those instructions for example ignore the benefit of semver tags. Personally, I would view it as a step backwards if something along those lines became the recommended solution if you have a `replace` directive. ### Other possible solutions There are likely many possible solutions, but here are three sample solutions to help start conversation here. 
Under all three of these options, it could be reasonable to reject any filesystem-based replace directives in a remote go.mod that reach outside the module (as suggested by Bryan in https://github.com/golang/go/issues/24250#issuecomment-419098182). To help avoid placing a testing burden on authors to check for that, `go release` could warn if that condition is found, which is likely a good thing to do anyway as suggested in the `go release` issue in https://github.com/golang/go/issues/26420#issuecomment-471701258. #### Option 1 If #30515 is adopted with a new `-b` flag (for binary or bare), and `go get -b foo` ends up meaning "install foo while ignoring any current module's `go.mod` and without altering the current module's `go.mod`", it could be natural for `go get -b foo` to also respect any `replace` or `exclude` directives in `foo`. The rationale could be that there is no other top-level module being considered aside from `foo` when `-b` is specified. The same could apply if a different spelling than `-b` is selected for #30515 (e.g., perhaps #30515 is resolved via a `go get -global`, `go get -clone`, `go install`, etc.). #### Option 2 If #30515 is not adopted, then `go get foo` when run outside of a module could be redefined in 1.13 to respect any `replace` or `exclude` directives in `foo`. The rationale could be that there is no other top-level module being considered aside from `foo` when doing `go get foo` outside of a module. This behavior was suggested by multiple people for Go 1.12 (including by Bryan in https://github.com/golang/go/issues/24250#issuecomment-419098182), but the ultimate Go 1.12 behavior was different, including that Bryan commented in [CL 148517](https://go-review.googlesource.com/c/go/+/148517) that his 1.12 change was "as minimal a change as I could comfortably make to enable 'go get' outside of a module for 1.12". (Part of the context for Bryan's comment in the CL is I think that the minimal change might have been implemented after the 1.12 freeze). #### Option 3 The behavior in Option 1 and Option 2 could both be the 1.13 behavior. In other words: * `go get foo` when run outside of a module in 1.13 would now respect any `replace` or `exclude` directives in `foo` (in addition to respecting the remote module's `require` directives and not changing any local `go.mod`). * `go get -b foo` in general means "run as if in a directory outside any module". (Under this definition, it would imply `replace` or `exclude` directives in a remote `foo` are respected). ### Relationship to other issues This has been discussed in multiple issues such as #27643, #24250, #30515 and others, but usually as a side topic. Perhaps this issue here can be closed as a duplicate of #30515, but @mvdan, the author of #30515, asked that this aspect be discussed in a different issue than #30515, hence this issue is being opened now.
NeedsDecision,early-in-cycle,modules
medium
Critical
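To make the scenario in the cmd/go record above concrete, here is a hypothetical `go.mod` for a module like `foo` (the module path, dependency, and versions are invented for illustration). The described behaviour is that the `replace` line takes effect when `foo` is the main module being built, but is ignored when another user installs it with `go get foo`.

```
module example.com/foo

go 1.12

require github.com/some/dep v1.2.0

// The author pins a patched fork; this directive only applies when
// example.com/foo is the top-level (main) module of the build.
replace github.com/some/dep => github.com/author/dep-fork v1.2.1
```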
427,474,639
go
cmd/compile: internal compiler error: Type.Elem UNSAFEPTR
<!-- Please answer these questions before submitting your issue. Thanks! --> ### What version of Go are you using (`go version`)? <pre> $ go version go version devel +4091cf972a Sun Mar 31 23:35:35 2019 +0000 linux/amd64 </pre> ### Does this issue reproduce with the latest release? reproducible only on tip ### What operating system and processor architecture are you using (`go env`)? <details><summary><code>go env</code> Output</summary><br><pre> $ go env GOARCH="amd64" GOBIN="" GOCACHE="/home/travis/.cache/go-build" GOEXE="" GOFLAGS="" GOHOSTARCH="amd64" GOHOSTOS="linux" GOOS="linux" GOPATH="/home/travis/gopath" GOPROXY="" GORACE="" GOROOT="/home/travis/.gimme/versions/go" GOTMPDIR="" GOTOOLDIR="/home/travis/.gimme/versions/go/pkg/tool/linux_amd64" GCCGO="gccgo" CC="gcc" CXX="g++" CGO_ENABLED="1" GOMOD="" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build848806544=/tmp/go-build -gno-record-gcc-switches" </pre></details> ### What did you do? <!-- If possible, provide a recipe for reproducing the error. A complete runnable program is good. A link on play.golang.org is best. --> I have a pretty old package with some hacks for optimization. I use some of its features in my web framework. In travis, I check if the framework passes the tests on tip. One of the steps in the travis script is `go get -v`. When go get tries to precompile that old package, it crashes. So, one of the files looks like this: ``` package runtimer import ( "unsafe" // #nosec ) func PtrToString(ptr unsafe.Pointer) string { return *(*string)(ptr) } func PtrToStringPtr(ptr unsafe.Pointer) *string { return (*string)(ptr) } func PtrPtrToStringPtr(ptr *unsafe.Pointer) *string { return (*string)(*ptr) // <- compiler complains about this line } ``` Then I've commented out the last func. Then the file looks like this: ``` package runtimer import ( "unsafe" // #nosec ) func PtrToString(ptr unsafe.Pointer) string { return *(*string)(ptr) } func PtrToStringPtr(ptr unsafe.Pointer) *string { return (*string)(ptr) // <- and then it complains about this line } /* func PtrPtrToStringPtr(ptr *unsafe.Pointer) *string { return (*string)(*ptr) } */ ``` ### What did you expect to see? `unsafe.Pointer` works as it worked two years ago ### What did you see instead?
``` # github.com/gramework/runtimer ../runtimer/utils.go:16:2: internal compiler error: Type.Elem UNSAFEPTR goroutine 34 [running]: runtime/debug.Stack(0x1039ae0, 0xc00000e018, 0x0) /home/travis/.gimme/versions/go/src/runtime/debug/stack.go:24 +0x9d cmd/compile/internal/gc.Fatalf(0xe8ef7d, 0xc, 0xc0007c7550, 0x1, 0x1) /home/travis/.gimme/versions/go/src/cmd/compile/internal/gc/subr.go:190 +0x292 cmd/compile/internal/types.(*Type).Elem(0xc000061e60, 0xc000538460) /home/travis/.gimme/versions/go/src/cmd/compile/internal/types/type.go:801 +0xff cmd/compile/internal/ssa.(*Func).computeZeroMap(0xc000804000, 0xe12f01) /home/travis/.gimme/versions/go/src/cmd/compile/internal/ssa/writebarrier.go:391 +0x101 cmd/compile/internal/ssa.writebarrier(0xc000804000) /home/travis/.gimme/versions/go/src/cmd/compile/internal/ssa/writebarrier.go:80 +0x6b cmd/compile/internal/ssa.Compile(0xc000804000) /home/travis/.gimme/versions/go/src/cmd/compile/internal/ssa/compile.go:90 +0x476 cmd/compile/internal/gc.buildssa(0xc0003c9b80, 0x0, 0x0) /home/travis/.gimme/versions/go/src/cmd/compile/internal/gc/ssa.go:288 +0xbf3 cmd/compile/internal/gc.compileSSA(0xc0003c9b80, 0x0) /home/travis/.gimme/versions/go/src/cmd/compile/internal/gc/pgen.go:297 +0x4d cmd/compile/internal/gc.compileFunctions.func2(0xc000404de0, 0xc000407200, 0x0) /home/travis/.gimme/versions/go/src/cmd/compile/internal/gc/pgen.go:362 +0x49 created by cmd/compile/internal/gc.compileFunctions /home/travis/.gimme/versions/go/src/cmd/compile/internal/gc/pgen.go:360 +0x128 ```
NeedsInvestigation,compiler/runtime
medium
Critical
427,505,079
flutter
Dead code in scaffold.dart
There's some code that can be cleaned out of scaffold.dart to make it faster and smaller: https://github.com/flutter/flutter/pull/21535/files
team,framework,f: material design,P2,team-design,triaged-design
low
Minor
427,539,538
neovim
node-client: autocmd error not raised when command invoked from RPC client
<!-- Before reporting: search existing issues and check the FAQ. --> - `nvim --version`: NVIM v0.4.0-442-g0920c6ca8 - Vim (version: ) behaves differently? Yes - Operating system/version: MacOS - Terminal name/version: iTerm2 - `$TERM`: xterm-256color ### Steps to reproduce using `nvim -u NORC` Create a node remote plugin like: ``` js module.exports = (plugin) => { let { nvim } = plugin plugin.registerCommand('OpenList', async () => { await nvim.command('botright 5sp list://abc') this.window = await nvim.window await this.window.request(`nvim_win_set_height`, [this.window, 5]) }) } ``` in `~/.config/nvim/rplugin/node/` and run `:UpdateRemotePlugins` ``` nvim -u NORC :autocmd BufEnter * lcd %:p:h :OpenList ``` ### Actual behaviour <img width="389" alt="Screen Shot 2019-04-01 at 2 50 28 PM" src="https://user-images.githubusercontent.com/251450/55308496-86312380-548d-11e9-8947-a4762540d9c6.png"> ### Expected behaviour Neovim should send the error to the client or ignore the error, as Vim does.
bug,api,provider,remote-plugin
low
Critical
427,540,512
pytorch
How to compile/install caffe2 with cuda 9.0?
I'm building caffe2 on Ubuntu 18.04 with CUDA 9.0. But when I run the "python setup.py install" command, I run into an issue with the CUDA version: it wants CUDA 9.2 instead of 9.0, but I only want to build with 9.0. How can I get past this? Thank you!
caffe2
low
Minor
427,555,623
go
x/crypto/ssh/terminal: ReadPassword keeps echo disabled when stopped with Ctrl+C
### What version of Go are you using (`go version`)? <pre> $ go version go version go1.12.1 darwin/amd64 </pre> ### Does this issue reproduce with the latest release? Yes ### What operating system and processor architecture are you using (`go env`)? Linux and macOS ### What did you do? Tried to use the following code to read passwords (from https://qiita.com/moutend/items/12d53750363edbbc3d6b): ``` package main import ( "fmt" "log" "syscall" "golang.org/x/crypto/ssh/terminal" ) func main() { fmt.Print("Password: ") password, err := terminal.ReadPassword(int(syscall.Stdin)) if err != nil { log.Fatal(err) } else { fmt.Printf("\nYour password is %v\n", string(password)) } } ``` ### What did you expect to see? If I press Ctrl+C at the point where I'm supposed to enter the password, I expect to get my terminal back working as it did before running the code. ### What did you see instead? I'm unable to see what I'm typing anymore, as echo is kept disabled.
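A minimal workaround sketch for the snippet above, assuming the caller saves the terminal state with the same package's `GetState` and restores it from a SIGINT handler (the handler and its exit code are illustrative, not something the package prescribes):

```go
package main

import (
	"fmt"
	"log"
	"os"
	"os/signal"
	"syscall"

	"golang.org/x/crypto/ssh/terminal"
)

func main() {
	fd := int(syscall.Stdin)

	// Save the terminal state so it can be restored even if the program
	// is interrupted while ReadPassword has echo disabled.
	oldState, err := terminal.GetState(fd)
	if err != nil {
		log.Fatal(err)
	}

	c := make(chan os.Signal, 1)
	signal.Notify(c, os.Interrupt)
	go func() {
		<-c
		// Best-effort restore of the terminal before exiting on Ctrl+C.
		terminal.Restore(fd, oldState)
		os.Exit(1)
	}()

	fmt.Print("Password: ")
	password, err := terminal.ReadPassword(fd)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("\nYour password is %v\n", string(password))
}
```

This only works around the symptom in the caller; arguably ReadPassword itself should restore the state on interruption, which is what this report is asking about.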
NeedsInvestigation
low
Major
427,573,473
kubernetes
Reduce default CPU requests of kube system services
By default some system services request too much CPU: * fluentd-gcp (100mCPU) * kube-dns (260mCPU) * heapster (138mCPU) However, in real life those services consume at most about 10mCPU. Such requests don't allow creating small clusters. Also, the services mentioned above run on all nodes in the cluster pool.
sig/scalability,kind/feature,lifecycle/frozen
medium
Major
427,623,677
vue-element-admin
How to change to tabs-like multi-page to support iframe caching
How can the router-view be changed into a tabs-like multi-page layout when clicking an item in the left routing menu? I need to embed an external iframe, and I need caching so that the iframe is not reloaded when I switch tabs.
feature
low
Minor
427,659,866
kubernetes
Cannot drain node with pod with more than one Pod Disruption Budget
**What would you like to be added**: When attempting to drain a node that has a pod scheduled on it with more than one PodDisruptionBudget, the drain fails with the following message: "Err: This pod has more than one PodDisruptionBudget, which the eviction subresource does not support." It would be great if this were actually supported. **Why is this needed**: It is not possible to upgrade clusters that have any pods with more than one PodDisruptionBudget since they fail on drain.
kind/bug,priority/important-soon,area/usability,kind/feature,sig/apps,lifecycle/frozen,needs-triage
medium
Critical
427,713,480
TypeScript
Reflect.has fails to act as type guard (should act same as "in" operator)
**TypeScript Version:** ^3.4.0-dev.20190330 **Search Terms:** Reflect.has in operator **Code** ```ts const test1 = (a: { field: number } | {}) => (("field" in a) ? a.field : 0); const test2 = (a: { field: number } | {}) => Reflect.has(a, "field") ? a.field : 0; ``` **Expected behavior:** Both compile successfully. "in" operator acts as a type guard per https://github.com/Microsoft/TypeScript/issues/10485. Reflect.has should act [the same as "in" operator here](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Reflect/has). We'd like to use Reflect.has a lot more and this case holds us back. Thank you! **Actual behavior:** "in" operator line compiles; `Reflect.has` line does not compile. Error: > allPasos.ts:445:31 - error TS2339: Property 'field' does not exist on type '{} | { field: number; }'. > Property 'field' does not exist on type '{}'. 445 Reflect.has(a, "field") ? a.field : 0; **Playground Link:** https://goo.gl/2R4dkS **Related Issues:** https://github.com/Microsoft/TypeScript/issues/10485
Suggestion,Domain: lib.d.ts,Experience Enhancement
low
Critical
427,733,904
go
runtime: TestLldbPython failing with 'no intvar'
During all.bash, I 100% reproducibly get: ``` --- FAIL: TestLldbPython (7.82s) runtime-lldb_test.go:187: Unexpected lldb output: Created target Created breakpoint Process launched Hit breakpoint Stopped at main.go:10 no intvar FAIL FAIL runtime 35.143s ``` On macOS. ``` $ lldb --version lldb-1001.0.12.1 Swift-5.0 ``` The builders aren't failing because of https://github.com/golang/go/issues/31123. cc @heschik @aarzilli @thanm @dr2chase
Testing,NeedsFix
medium
Major
427,733,947
opencv
stitching detailed
##### System information (version) opencv version = 3.4.5 Linux Mint 19 Kernel version = 4.15.0-46-generic gcc compiler 7.3.0 Qt Creator 4.5.2 Based on Qt 5.9.5 (GCC 7.3.0, 64 bit) CPU: Intel Core i5-7600K GPU: GeForce GTX 1060, Driver Version: 418.39, CUDA Version: 10.1 ##### Detailed description When I try to use stitching detailed with GPU support its throwing exception that there is no CUDA devices. Look like very strange because another GPU support algorithms work well. Here is an example of running rezults: -- for stitcher: ``` ./example_cpp_stitching_detailed --try_cuda yes ../../../../../qtProjetcs/OPENCV/data/cam* --output ../../../../../output.jpg Finding features... Features in image #1: 906 Features in image #2: 1130 Features in image #3: 922 Features in image #4: 815 Features in image #5: 1185 Features in image #6: 1349 Features in image #7: 1155 Features in image #8: 846 Finding features, time: 2.73863 sec Pairwise matchingPairwise matching, time: 0.416088 sec Initial camera intrinsics #1: K: [402.6621092208531, 0, 516.5; 0, 402.6621092208531, 290.5; 0, 0, 1] R: [-0.0012106675, 0.025132069, 1.2375336; -0.00024010938, 0.99085152, 0.0013792274; -0.79925025, 0.0042853854, -0.01345437] Initial camera intrinsics #2: K: [402.6621092208531, 0, 516.5; 0, 402.6621092208531, 290.5; 0, 0, 1] R: [-0.49395916, 0.015130629, 0.62282497; -0.00049254886, 0.70598614, 0.00056736136; -0.39896688, 0.0027366213, -0.509947] Initial camera intrinsics #3: K: [402.6621092208531, 0, 516.5; 0, 402.6621092208531, 290.5; 0, 0, 1] R: [-0.49653655, 0.0098318448, 0.0021413246; -0.0001232428, 0.49794412, 9.1621878e-05; 0.0061601428, 0.0028555968, -0.50224859] Initial camera intrinsics #4: K: [402.6621092208531, 0, 516.5; 0, 402.6621092208531, 290.5; 0, 0, 1] R: [-1.9485691, 0.002625125, -2.4911025; 0.003152034, 2.7831903, -0.0018687831; 1.5857258, 0.047782131, -1.960164] Initial camera intrinsics #5: K: [402.6621092208531, 0, 516.5; 0, 402.6621092208531, 290.5; 0, 0, 1] R: [0.002758041, -0.026730334, -2.4876845; 0.00086431153, 1.9743909, 5.3480748e-05; 1.5844687, 0.026593724, 0.010055152] Initial camera intrinsics #6: K: [402.6621092208531, 0, 516.5; 0, 402.6621092208531, 290.5; 0, 0, 1] R: [0.99693823, -0.009097049, -1.246878; -0.00098260806, 1.4026989, 2.9962863e-05; 0.795367, 0.013093419, 1] Initial camera intrinsics #7: K: [402.6621092208531, 0, 516.5; 0, 402.6621092208531, 290.5; 0, 0, 1] R: [1, 0, 0; 0, 1, 0; 0, 0, 1] Initial camera intrinsics #8: K: [402.6621092208531, 0, 516.5; 0, 402.6621092208531, 290.5; 0, 0, 1] R: [0.49476224, 0.0006063492, 0.61949396; 0.0008142544, 0.70039827, 0.00048807176; -0.40517563, 0.00061461265, 0.49267703] Camera #1: K: [503.1842986733872, 0, 516.5; 0, 503.1842986733872, 290.5; 0, 0, 1] R: [-0.00056268368, -0.00087417668, 0.99999946; 0.00011095451, 0.99999952, 0.00087423931; -0.99999988, 0.00011144718, -0.00056258682] Camera #2: K: [503.5501239876086, 0, 516.5; 0, 503.5501239876086, 290.5; 0, 0, 1] R: [-0.7074759, -0.00098034146, 0.70673674; -0.00063396804, 0.99999946, 0.00075250654; -0.7067371, 8.4332656e-05, -0.70747626] Camera #3: K: [503.0556313293633, 0, 516.5; 0, 503.0556313293633, 290.5; 0, 0, 1] R: [-0.99999964, -0.00072452176, -0.0004785601; -0.00072449475, 0.9999997, -5.6286342e-05; 0.00047860108, -5.5938959e-05, -0.99999988] Camera #4: K: [502.9573581509302, 0, 516.5; 0, 502.9573581509302, 290.5; 0, 0, 1] R: [-0.70678526, 0.00042905656, -0.70742804; 0.00067780865, 0.99999976, -7.069041e-05; 0.70742786, -0.00052946294, -0.70678538] Camera #5: K: 
[503.792948601518, 0, 516.5; 0, 503.792948601518, 290.5; 0, 0, 1] R: [0.00049857609, -5.4464817e-06, -0.99999982; -6.1808154e-05, 0.99999994, -5.4774337e-06; 0.99999994, 6.1810948e-05, 0.00049857656] Camera #6: K: [503.343950424277, 0, 516.5; 0, 503.343950424277, 290.5; 0, 0, 1] R: [0.70705438, 0.00012996799, -0.70715916; -0.00015053386, 1, 3.327732e-05; 0.70715922, 8.2923099e-05, 0.70705438] Camera #7: K: [502.5416704969377, 0, 516.5; 0, 502.5416704969377, 290.5; 0, 0, 1] R: [0.99999994, -1.1823431e-10, 0; 3.4742698e-10, 1, 4.6566129e-10; -4.6566129e-10, 0, 1] Camera #8: K: [502.8878212319973, 0, 516.5; 0, 502.8878212319973, 290.5; 0, 0, 1] R: [0.70661741, -0.00082619523, 0.70759523; 0.00071325153, 0.99999958, 0.0004553434; -0.70759541, 0.00018294016, 0.70661789] Warping images (auxiliary)... Warping images, time: 0.363623 sec terminate called after throwing an instance of 'cv::Exception' what(): OpenCV(3.4.5) /home/serj/filestorage/GITS/opencv/modules/core/include/opencv2/core/private.cuda.hpp:113: error: (-213:The function/feature is not implemented) The called functionality is disabled for current build or platform in function 'throw_no_cuda' Aborted (core dumped) ``` ----------------------------------------------------------------------------------------------------------------------- -- for test stitcher: ``` ./opencv_test_stitching CTEST_FULL_OUTPUT OpenCV version: 3.4.5 OpenCV VCS version: 3.4.5 Build type: Release Compiler: /usr/bin/c++ (ver 7.3.0) Parallel framework: pthreads CPU features: SSE SSE2 SSE3 *SSE4.1 *SSE4.2 *FP16 *AVX *AVX2 *AVX512-SKX? Intel(R) IPP version: ippIP AVX2 (l9) 2019.0.0 Gold (-) Jul 24 2018 OpenCL Platforms: NVIDIA CUDA dGPU: GeForce GTX 1060 6GB (OpenCL 1.2 CUDA) Current OpenCL device: Type = dGPU Name = GeForce GTX 1060 6GB Version = OpenCL 1.2 CUDA Driver version = 418.39 Address bits = 64 Compute units = 10 Max work group size = 1024 Local memory size = 48 KB Max memory allocation size = 1 GB 494 MB 816 KB Double support = Yes Host unified memory = No Device extensions: cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_fp64 cl_khr_byte_addressable_store cl_khr_icd cl_khr_gl_sharing cl_nv_compiler_options cl_nv_device_attribute_query cl_nv_pragma_unroll cl_nv_copy_opts cl_nv_create_buffer Has AMD Blas = No Has AMD Fft = No Preferred vector width char = 1 Preferred vector width short = 1 Preferred vector width int = 1 Preferred vector width long = 1 Preferred vector width float = 1 Preferred vector width double = 1 [==========] Running 8 tests from 8 test cases. [----------] Global test environment set-up. [----------] 1 test from OCL_SphericalWarperTest [ RUN ] OCL_SphericalWarperTest.Mat [ OK ] OCL_SphericalWarperTest.Mat (291 ms) [----------] 1 test from OCL_SphericalWarperTest (291 ms total) [----------] 1 test from OCL_CylindricalWarperTest [ RUN ] OCL_CylindricalWarperTest.Mat [ OK ] OCL_CylindricalWarperTest.Mat (1 ms) [----------] 1 test from OCL_CylindricalWarperTest (1 ms total) [----------] 1 test from OCL_PlaneWarperTest [ RUN ] OCL_PlaneWarperTest.Mat [ OK ] OCL_PlaneWarperTest.Mat (1 ms) [----------] 1 test from OCL_PlaneWarperTest (1 ms total) [----------] 1 test from OCL_AffineWarperTest [ RUN ] OCL_AffineWarperTest.Mat [ OK ] OCL_AffineWarperTest.Mat (2 ms) [----------] 1 test from OCL_AffineWarperTest (2 ms total) [----------] 1 test from MultiBandBlender [ RUN ] MultiBandBlender.CanBlendTwoImages Segmentation fault (core dumped) ```
priority: low,category: gpu/cuda (contrib),category: stitching
low
Critical
427,784,285
flutter
Flutter App in Profile mode cannot install on Samsung J7 Prime
I create this issue again because this two issues are already closed. #19751 #20062 I also facing that issue when I run flutter app in profile mode. I cannot install flutter app exported with profile mode in Samsung J7 Prime. Here is log file. Flutter crash report; please file at https://github.com/flutter/flutter/issues. ## command flutter build aot --suppress-analytics --quiet --target /home/kyawswaraung/Projects/Project/lib/main.dart --target-platform android-arm --output-dir /home/kyawswaraung/Projects/Project/build/app/intermediates/flutter/profile --profile ## exception ProcessException: ProcessException: No such file or directory Command: /home/kyawswaraung/flutter/bin/cache/artifacts/engine/android-arm-profile/linux-x64/gen_snapshot --causal_async_stacks --packages=.packages --deterministic --snapshot_kind=app-aot-blobs --vm_snapshot_data=/home/kyawswaraung/Projects/Project/build/app/intermediates/flutter/profile/vm_snapshot_data --isolate_snapshot_data=/home/kyawswaraung/Projects/Project/build/app/intermediates/flutter/profile/isolate_snapshot_data --vm_snapshot_instructions=/home/kyawswaraung/Projects/Project/build/app/intermediates/flutter/profile/vm_snapshot_instr --isolate_snapshot_instructions=/home/kyawswaraung/Projects/Project/build/app/intermediates/flutter/profile/isolate_snapshot_instr --no-sim-use-hardfp --no-use-integer-division /home/kyawswaraung/Projects/Project/build/app/intermediates/flutter/profile/app.dill ``` #0 runCommandAndStreamOutput (package:flutter_tools/src/base/process.dart:140:27) <asynchronous suspension> #1 GenSnapshot.run (package:flutter_tools/src/base/build.dart:68:12) #2 AOTSnapshotter.build (package:flutter_tools/src/base/build.dart:194:55) <asynchronous suspension> #3 BuildAotCommand.runCommand (package:flutter_tools/src/commands/build_aot.dart:137:56) <asynchronous suspension> #4 FlutterCommand.verifyThenRunCommand (package:flutter_tools/src/runner/flutter_command.dart:545:18) <asynchronous suspension> #5 FlutterCommand.run.<anonymous closure> (package:flutter_tools/src/runner/flutter_command.dart:482:33) <asynchronous suspension> #6 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:142:29) <asynchronous suspension> #7 _rootRun (dart:async/zone.dart:1124:13) #8 _CustomZone.run (dart:async/zone.dart:1021:19) #9 _runZoned (dart:async/zone.dart:1516:10) #10 runZoned (dart:async/zone.dart:1463:12) #11 AppContext.run (package:flutter_tools/src/base/context.dart:141:18) <asynchronous suspension> #12 FlutterCommand.run (package:flutter_tools/src/runner/flutter_command.dart:473:20) #13 CommandRunner.runCommand (package:args/command_runner.dart:196:27) <asynchronous suspension> #14 FlutterCommandRunner.runCommand.<anonymous closure> (package:flutter_tools/src/runner/flutter_command_runner.dart:396:21) <asynchronous suspension> #15 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:142:29) <asynchronous suspension> #16 _rootRun (dart:async/zone.dart:1124:13) #17 _CustomZone.run (dart:async/zone.dart:1021:19) #18 _runZoned (dart:async/zone.dart:1516:10) #19 runZoned (dart:async/zone.dart:1463:12) #20 AppContext.run (package:flutter_tools/src/base/context.dart:141:18) <asynchronous suspension> #21 FlutterCommandRunner.runCommand (package:flutter_tools/src/runner/flutter_command_runner.dart:356:19) <asynchronous suspension> #22 CommandRunner.run.<anonymous closure> (package:args/command_runner.dart:111:29) #23 new Future.sync (dart:async/future.dart:224:31) #24 CommandRunner.run 
(package:args/command_runner.dart:111:11) #25 FlutterCommandRunner.run (package:flutter_tools/src/runner/flutter_command_runner.dart:242:18) #26 run.<anonymous closure> (package:flutter_tools/runner.dart:60:20) <asynchronous suspension> #27 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:142:29) <asynchronous suspension> #28 _rootRun (dart:async/zone.dart:1124:13) #29 _CustomZone.run (dart:async/zone.dart:1021:19) #30 _runZoned (dart:async/zone.dart:1516:10) #31 runZoned (dart:async/zone.dart:1463:12) #32 AppContext.run (package:flutter_tools/src/base/context.dart:141:18) <asynchronous suspension> #33 runInContext (package:flutter_tools/src/context_runner.dart:48:24) <asynchronous suspension> #34 run (package:flutter_tools/runner.dart:51:10) #35 main (package:flutter_tools/executable.dart:52:9) <asynchronous suspension> #36 main (file:///home/kyawswaraung/flutter/packages/flutter_tools/bin/flutter_tools.dart:8:3) #37 _startIsolate.<anonymous closure> (dart:isolate/runtime/libisolate_patch.dart:298:32) #38 _RawReceivePortImpl._handleMessage (dart:isolate/runtime/libisolate_patch.dart:171:12) ``` ## flutter doctor ``` [!] Flutter (Channel beta, v1.2.1, on Linux, locale en_US.UTF-8) • Flutter version 1.2.1 at /home/kyawswaraung/flutter • Framework revision 8661d8aecd (6 weeks ago), 2019-02-14 19:19:53 -0800 • Engine revision 3757390fa4 • Dart version 2.1.2 (build 2.1.2-dev.0.0 0a7dcf17eb) ✗ Downloaded executables cannot execute on host. See https://github.com/flutter/flutter/issues/6207 for more information On Debian/Ubuntu/Mint: sudo apt-get install lib32stdc++6 On Fedora: dnf install libstdc++.i686 On Arch: pacman -S lib32-libstdc++5 (you need to enable multilib: https://wiki.archlinux.org/index.php/Official_repositories#multilib) [✓] Android toolchain - develop for Android devices (Android SDK version 28.0.3) • Android SDK at /home/kyawswaraung/Android/Sdk • Android NDK location not configured (optional; useful for native profiling support) • Platform android-28, build-tools 28.0.3 • Java binary at: /snap/android-studio/73/android-studio/jre/bin/java • Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1248-b01) • All Android licenses accepted. [✓] Android Studio (version 3.3) • Android Studio at /snap/android-studio/73/android-studio • Flutter plugin version 33.3.1 • Dart plugin version 182.5215 • Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1248-b01) [!] IntelliJ IDEA Community Edition (version 2018.3) • IntelliJ at /snap/intellij-idea-community/126 ✗ Flutter plugin not installed; this adds Flutter specific functionality. ✗ Dart plugin not installed; this adds Dart specific functionality. • For information about installing plugins, see https://flutter.io/intellij-setup/#installing-the-plugins [!] IntelliJ IDEA Ultimate Edition (version 2018.3) • IntelliJ at /home/kyawswaraung/software/idea-IU-183.4588.61 ✗ Flutter plugin not installed; this adds Flutter specific functionality. ✗ Dart plugin not installed; this adds Dart specific functionality. • For information about installing plugins, see https://flutter.io/intellij-setup/#installing-the-plugins [✓] Connected device (1 available) • SM J700H • 3300a7bd1e49725f • android-arm • Android 6.0.1 (API 23) ! Doctor found issues in 3 categories. ```
platform-android,tool,P3,e: samsung,team-android,triaged-android
low
Critical
427,787,929
TypeScript
[RFC] Improved UX via --noExplicitErrors
Leading up to TypeScript 3.0, we actively sought out users across different companies to discuss their biggest pain-points with TypeScript. One common theme truly stood out: error messages and UX. It turns out that at these organizations, TypeScript is often the first typed language that their engineers encounter. In some cases, JavaScript is the *only* language many of their engineers know. This took us by surprise, since most programmers we speak to about JavaScript tend to at least know one other language such as English or Latvian. Nevertheless, we took this seriously, and strived to do better here. For a couple of versions, [we even put together meta-issues to improve the UX](https://github.com/Microsoft/TypeScript/issues?q=label%3AMeta-Issue+label%3A%22Domain%3A+Error+Messages%22+is%3Aclosed). The reception of improvements from 2.9 to 3.2 was phenomenal. Despite being such a basic need, [TypeScript became significantly more approachable](https://twitter.com/DavidGranado/status/1087836016931753985) thanks to this focus. But we're not close to done. ![](https://pbs.twimg.com/media/Db79LbVX4AApUbQ.jpg:large) ![A bad error message](https://pbs.twimg.com/media/DPunBJTWsAADtno.jpg) ![A user showing off a bad error message on Twitter](https://user-images.githubusercontent.com/972891/55309786-c6ac8500-542c-11e9-870b-2e1dfc78e55e.png) We can do better. That's why today, I'm proposing a revolutionary new strategy for error messages everywhere. # A new reporting mode There's an old saying: it's not what you say, it's how you say it. Now, I've gotten in trouble a few dozen times thanks to that saying, but I have a feeling that it might go over better when applied to a type-checker for optional static types like TypeScript. Telling people about errors *differently* goes a long way. Inspired by TypeScript's `--pretty` flag, I'm proposing a new diagnostic reporting mode. # The `--noExplicitErrors` flag Today, JavaScript users experience *no* type-checking errors before running their code. Instead, they deal with much cleaner runtime errors like ``` Uncaught TypeError: undefined is not a function at <anonymous>:1017:19 at ezekiel:25:17 at <anonymous>:1017:19 at cage:4:33 ``` What's nice about these error messages? * The first line makes the problem glaringly obvious. * You know exactly where the problem came from (`<anonymous>:1017:19`). * The word "type" only comes up once, which means most users won't be scared. Let's contrast that with TypeScript. Consider the following code: ```ts let myPromise = Promise.resolve([ Math.random() ? { success: true, result: [1, 2, 3, "hello"]} : { success: false, error: "hello" } ] as const); function foo(param: PromiseLike<{ success: true, error: string }>) { // ... } foo(myPromise); ``` This appears to be every-day very extremely good readable good code, so one would hope for little trouble in compiling/running it. What's TypeScript have to say about it? ``` Argument of type 'Promise<readonly [{ success: boolean; result: (string | number)[]; error?: undefined; } | { success: boolean; error: string; result?: undefined; }]>' is not assignable to parameter of type 'PromiseLike<{ success: true; error: string; }>'. Types of property 'then' are incompatible. 
Type '<TResult1 = readonly [{ success: boolean; result: (string | number)[]; error?: undefined; } | { success: boolean; error: string; result?: undefined; }], TResult2 = never>(onfulfilled?: (value: readonly [{ success: boolean; result: (string | number)[]; error?: undefined; } | { ...; }]) => TResult1 | PromiseLike<...>,...' is not assignable to type '<TResult1 = { success: true; error: string; }, TResult2 = never>(onfulfilled?: (value: { success: true; error: string; }) => TResult1 | PromiseLike<TResult1>, onrejected?: (reason: any) => TResult2 | PromiseLike<TResult2>) => PromiseLike<...>'. Types of parameters 'onfulfilled' and 'onfulfilled' are incompatible. Types of parameters 'value' and 'value' are incompatible. Type 'readonly [{ success: boolean; result: (string | number)[]; error?: undefined; } | { success: boolean; error: string; result?: undefined; }]' is missing the following properties from type '{ success: true; error: string; }': success, error ``` O͠H M̵Y͝ ̸G͢O͜D ̴ͪ́̃̌ͬ̿M̸̋̅͆ͪͭ̄̀̒͠Ȃ̅ͨ̿͑ͤͯͫK̈́̓ͩͪȆ̽ͨ̾ ̷ͯ͌́ͩͬ̚Iͮ͋͗ͩ͂҉͡Tͪ̄̾ͬͮ͆ ̴ͯ̏́̓̈́͠͠Sͦ͗́͘T͐ͤ̆̉̚҉͘͝Ó̢ͭ͘͞P- uh, I mean, look, TypeScript has saved us! The newly proposed `--noExplicitErrors` flag entirely eliminates the problem of long, difficult-to-parse error messages. We took a look at *every single error message we provided*, and thought long and hard about whether each message was fully applicable to every user and would unambiguously improve their lives. If a single person might not benefit from the message, it likely wasn't worth its weight, and was entirely omitted under `--noExplicitErrors`. So what's an error message for this snippet look like in `--noExplicitErrors`? Well, let's find out. First we need to run the compiler ``` tsc --strict --noExplicitErrors sample.ts ``` That's it! Now let's take a look at those beautiful new error messages! ``` ``` It's **gorgeous** 😍. `--noExplicitErrors` has removed every single confusing part from that original error message. This flag will truly help TypeScript meet the JavaScript community where it is - an error-checking experience without any errors! Existing TypeScript users - just imagine how much less anxious you'll be the next time you hover over a red squiggle in your editor! ![Before/after applying this flag](https://user-images.githubusercontent.com/972891/55311851-045fdc80-5432-11e9-959d-75b1f1f544be.png) While the feature isn't quite ready, we'll be looking for feedback and beta-testers over the next few months. The TypeScript team has tried running with this new flag recently, and while we've had a harder time in some cases of figuring out what was going wrong, we felt less like we were "fighting" the type-checker and more like we were just writing code. The type-checker has become our friend, and real friends will *never* bring up your problems. ## FAQ ### Why is it called "noExplicitErrors"? All of our compiler option names are ***very*** carefully thought out, and we believe it shows. Here's a few examples of well-named compiler options: * `isolatedModules` * `noImplicitThis` * `alwaysStrict` * `noImplicitUseStrict` * `types` * `typeRoots` * `rootDir` * `rootDirs` * `baseUrl` * `allowSyntheticDefaultImports` What's so good about these compiler flag names? They communicate **exactly** what they do. Which is how we arrived at the current name of this new mode. `--noExplicitErrors` is going to be the mode for unambiguously clean errors. ### Can I contribute? 
If the error messages are still hard to read under `--noExplicitErrors`, consider filing an issue on our issue tracker, or try re-reading the error message a few times. ### What about haiku error messages? Last year, [we received a wave of interest in haiku-formatted error messages](https://twitter.com/codingchaos/status/973637013915164673). While it's been in the "Future" bucket on [our rolling feature roadmap](https://github.com/Microsoft/TypeScript/wiki/Roadmap), it unfortunately will have to take a back seat to `--noExplicitErrors`. We think that `--noExplicitErrors` will enable the next big wave of TypeScript users, and we simply can't wait on that. Besides, we'd probably do limericks first anyway. *A haiku doesn't even rhyme*.
Suggestion,In Discussion,Domain: Error Messages,Add a Flag
high
Critical
427,795,080
pytorch
Add build tests for feature environment vars
We have quite a few feature-toggle env vars. Sometimes, a build error only emerges when specific feature vars are set (#18691, #18582). Shall we have build CI tests to cover all of them?
todo,module: ci,triaged
low
Critical
427,835,517
create-react-app
Different Hash Names in AWS CodePipeline
<!-- PLEASE READ THE FIRST SECTION :-) --> ### Is this a bug report? Yes <!-- If you answered "Yes": Please note that your issue will be fixed much faster if you spend about half an hour preparing it, including the exact reproduction steps and a demo. If you're in a hurry or don't feel confident, it's fine to report bugs with less details, but this makes it less likely they'll get fixed soon. In either case, please fill as many fields below as you can. If you answered "No": If this is a question or a discussion, you may delete this template and write in a free form. Note that we don't provide help for webpack questions after ejecting. You can find webpack docs at https://webpack.js.org/. --> ### Did you try recovering your dependencies? <!-- Your module tree might be corrupted, and that might be causing the issues. Let's try to recover it. First, delete these files and folders in your project: * node_modules * package-lock.json * yarn.lock Then you need to decide which package manager you prefer to use. We support both npm (https://npmjs.com) and yarn (http://yarnpkg.com/). However, **they can't be used together in one project** so you need to pick one. If you decided to use npm, run this in your project directory: npm install -g npm@latest npm install This should fix your project. If you decided to use yarn, update it first (https://yarnpkg.com/en/docs/install). Then run in your project directory: yarn This should fix your project. Importantly, **if you decided to use yarn, you should never run `npm install` in the project**. For example, yarn users should run `yarn add <library>` instead of `npm install <library>`. Otherwise your project will break again. Have you done all these steps and still see the issue? Please paste the output of `npm --version` and/or `yarn --version` to confirm. --> Yes ### Which terms did you search for in User Guide? <!-- There are a few common documented problems, such as watcher not detecting changes, or build failing. They are described in the Troubleshooting section of the User Guide: https://facebook.github.io/create-react-app/docs/troubleshooting Please scan these few sections for common problems. Additionally, you can search the User Guide itself for something you're having issues with: https://facebook.github.io/create-react-app/ If you didn't find the solution, please share which words you searched for. This helps us improve documentation for future readers who might encounter the same problem. --> Deployment, static files, hosting ### Environment <!-- To help identify if a problem is specific to a platform, browser, or module version, information about your environment is required. This enables the maintainers quickly reproduce the issue and give feedback. Run the following command in your React app's folder in terminal. Note: The result is copied to your clipboard directly. `npx create-react-app --info` Paste the output of the command in the section below. --> ```Environment Info: System: OS: Linux 4.14 Ubuntu 14.04.5 LTS, Trusty Tahr CPU: x64 Intel(R) Xeon(R) CPU E5-2666 v3 @ 2.90GHz Binaries: Node: 10.14.0 - /usr/bin/node npm: 6.4.1 - /usr/bin/npm npmPackages: react: 16.8.4 => 16.8.4 react-dom: 16.8.4 => 16.8.4 react-scripts: 2.1.8 => 2.1.8 npmGlobalPackages: create-react-app: Not Found ``` ### Steps to Reproduce <!-- How would you describe your issue to someone who doesn’t know you or your project? Try to write a sequence of steps that anybody can repeat to see the issue. --> 1. Make changes to my CRA App 2. Commit those changes to github 3. 
Codepipeline sees changes and kicks off the build process in 2 different regions (us-east-1 and us-east-2) ### Expected Behavior <!-- How did you expect the tool to behave? It’s fine if you’re not sure your understanding is correct. Just write down what you thought would happen. --> Based on the docs, I would expect the "contenthash" values on both of my builds to be the same for my main.js, main.css and chunk.js files. ### Actual Behavior <!-- Did something go wrong? Is something broken, or not behaving as you expected? Please attach screenshots if possible! They are extremely helpful for diagnosing issues. --> On the 2 different boxes, my app builds successfully, but the hashnames are totally different. I know this is happening because CodePipeline is bringing my code into a directory that they are creating dynamically. The "hash" is based on the directory that the code is built in. ### Reproducible Demo <!-- If you can, please share a project that reproduces the issue. This is the single most effective way to get an issue fixed soon. There are two ways to do it: * Create a new app and try to reproduce the issue in it. This is useful if you roughly know where the problem is, or can’t share the real code. * Or, copy your app and remove things until you’re left with the minimal reproducible demo. This is useful for finding the root cause. You may then optionally create a new project. This is a good guide to creating bug demos: https://stackoverflow.com/help/mcve Once you’re done, push the project to GitHub and paste the link to it below: --> Clone the CRA base into a local working directory (./cra1 as an example). Clone it again into another local working directory (./cra2 as an example). Run `npm run build` on both folders. Note the "compiled" filenames are different but their content is the same. <!-- What happens if you skip this step? We will try to help you, but in many cases it is impossible because crucial information is missing. In that case we'll tag an issue as having a low priority, and eventually close it if there is no clear direction. We still appreciate the report though, as eventually somebody else might create a reproducible example for it. Thanks for helping us help you! -->
issue: bug
low
Critical
427,857,395
rust
Synchronization primitives not robust against unwinding
Various synchronization primitives in `std` are not robust against unwinding triggered inside the `std` implementation. This may result in * double panics * deadlocks * Mutexes being poisoned that shouldn't be Such unwinding may be triggered in many different ways, for example: * pthread_cancel on certain pthread implementations * assertion/unwrap failure due to a variety of causes: * The kernel may return an unexpected error value from a system call * The libc implementation may return an unexpected error from a call * An unexpected return value due to the use of Linux [seccomp](https://lwn.net/Articles/656307/) * [Iago attacks](https://hovav.net/ucsd/dist/iago.pdf) I've identified at least the following cases. Fixes for some of these have been proposed and rejected in #58042 and #58461. | Primitive | Unwind during blocking in | Failure occurs in | Test case | | ------------ | ------------------------- | --------------------- | --------- | | MPSC oneshot | recv/recv_timeout | Receiver drop | https://github.com/jethrogb/rust/blob/95e8613d4e92f5f4b5487f2e7b4b936ea17d96d2/src/test/run-fail/mpsc-recv-unwind/oneshot.rs | | MPSC sync | recv/recv_timeout | Receiver drop | https://github.com/jethrogb/rust/blob/95e8613d4e92f5f4b5487f2e7b4b936ea17d96d2/src/test/run-fail/mpsc-recv-unwind/sync.rs | | MPSC shared | recv/recv_timeout | Receiver drop | https://github.com/jethrogb/rust/blob/95e8613d4e92f5f4b5487f2e7b4b936ea17d96d2/src/test/run-fail/mpsc-recv-unwind/shared.rs | | MPSC stream | recv/recv_timeout | Receiver drop | https://github.com/jethrogb/rust/blob/95e8613d4e92f5f4b5487f2e7b4b936ea17d96d2/src/test/run-fail/mpsc-recv-unwind/stream.rs | | Condvar | wait/wait_timeout | wait | https://github.com/jethrogb/rust/blob/dd20f165ded66619ee040f5d819a4490fad3bd5c/src/test/run-pass/condvar-wait-panic-poison.rs | | thread | park/park_timeout | park/ThreadInfo::with | https://github.com/rust-lang/rust/pull/58461#issuecomment-471787169 |
A-concurrency,C-bug,T-libs
low
Critical
427,860,856
TypeScript
If not all sources are under rootDir, you only get an error message when combined with outDir, not with outFile
I realize this may be the currently intended behavior — somewhere, I think I saw it explained that rootDir only affects the layout of files under the output directory. However, I argue that its current behavior, when combined with outFile, can be improved. The issue is this: Suppose you specify a rootDir, and at least one file is NOT under that directory. Then: * If you are compiling with `--outDir`, you get an error message, as expected: error TS6059: File '...' is not under 'rootDir' '...'. 'rootDir' is expected to contain all source files. * But if you compile with `--outFile`, you get no error message, *and* the compiler completely ignores the specified rootDir. This means, for example, that if you have a bug in your tsconfig.json, generated AMD files will have unexpected module names. **TypeScript Version:** 3.4.1 **Search Terms:** rootdir, amd, outfile **Code** A project with two tsconfig files — one targeting commonjs with `--outDir`, and one targeting amd with `--outFile`. tsconfig.commonjs.json: ```json { "compilerOptions": { "rootDir": "src", "module": "commonjs", "outDir": "out" }, "includes": [ "src/**/*.ts", "other-src/**/*.ts" ] } ``` tsconfig.amd.json: ```json { "compilerOptions": { "rootDir": "src", "module": "amd", "outFile": "out/bundle.js" }, "includes": [ "src/**/*.ts", "other-src/**/*.ts" ] } ``` src/foo.ts: ```ts export const name = "foo"; ``` other-src/bar.ts: ```ts export const name = "bar"; ``` **Expected behavior:** Compile with `tsc -p tsconfig.commonjs.json`. As expected, you get a TS6059 error, since `other-src/bar.ts` is not under the rootDir `src`. Compile with `tsc -p tsconfig.amd.json`. I would expect that to produce the same error, since `other-src/bar.ts` is still not under the rootDir `src`. **Actual behavior:** When compiling with `tsc -p tsconfig.amd.json`, it compiles without complaint. But if you look at the generated `out/bundle.js`, you see this — notice the module names! They are `"other-src/mylib"` and `"src/foo"`, completely ignoring the (incorrect) rootDir that I specified. ```js define("other-src/mylib", ["require", "exports"], function (require, exports) { "use strict"; exports.__esModule = true; exports.name = "mylib"; }); define("src/foo", ["require", "exports"], function (require, exports) { "use strict"; exports.__esModule = true; exports.name = "foo"; }); ``` **Related Issues:** <!-- Did you find other bugs that looked similar? --> In https://github.com/Microsoft/TypeScript/issues/9858#issuecomment-234379302, @mhegazy explained: > the rootDir is used to build the output folder structure. all files have to be under rootDir for the compiler to know where to write the output. without this constraint, if there are files that are not under rootDir, the compiler has two options 1. either they will be emitted in a totally unexpected place, or 2. not emitted at all. i would say an error message is much better than a silent failure. I agree with this reasoning; but I think it should apply to outFile scenarios as well as to outDir ones.
Bug
medium
Critical
427,878,214
flutter
Add ability for text field TextInputAction to be dynamic
The textInputAction for a text field is hardcoded at creation of the field. I would like the ability for the text input action to change dynamically based on the content of the field. E.g. if the text field has valid text in it, make the action show send, but if the field is blank, switch to done.
a: text input,c: new feature,framework,f: material design,P2,team-framework,triaged-framework
low
Major
427,886,499
kubernetes
APIserver logs status code 200 instead of 500 for serialization errors
<!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks! If the matter is security related, please disclose it privately via https://kubernetes.io/security/ --> **What happened**: I submitted a request with a serialization error [(error location)](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go#L103) The APIServer logged a 200 for the request even though the request had a serialization error and is a 500 elsewhere in the apiserver code: ``` E0401 12:41:54.216095 172135 writers.go:176] apiserver was unable to write a JSON response: aaron-prindle-injected-1: tcp broken E0401 12:41:54.216128 172135 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"aaron-prindle-injected-1: tcp broken"} I0401 12:41:54.217262 172135 wrap.go:47] GET /api/v1/pods?timeout=100ms: (2.162959ms) 200 [podlist-error/v0.0.0 (linux/amd64) kubernetes/$Format [::1]:44860] ``` full logs here: https://gist.github.com/aaron-prindle/4eb47176263fb449a350343f44ed885d **What you expected to happen**: I expected the APIServer to log this request as a 500, possibly with some stack information. **How to reproduce it (as minimally and precisely as possible)**: Submit a request with a serialization error [(error location)](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go#L103). Then view the apiserver logs. I was able to reproducibly hit this error by using this code snippet in [writers.go:103-105](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go#L103-L105) along with with a specific timeout on my requests for identification: ``` err := encoder.Encode(object, w) if req.URL.String() == "/api/v1/pods?timeout=100ms" { err = fmt.Errorf("aaron-prindle-injected-1: tcp broken") } if err != nil { errSerializationFatal(err, encoder, w) } ``` **Anything else we need to know?**: I was able to track down this issue to [timeout.go:200-202](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/server/filters/timeout.go#L200-L202). It appears that timeout.go is masking the 500 here, 500 is being [passed in](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/server/filters/timeout.go#L196) but for some reason (still having trouble understanding where it is set) tw.wroteHeader is true which is causing the timeout handler to not update the header with a 500 and log the 200. If I remove either of the wroteHeader= true lines, [timeout.go:179](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/server/filters/timeout.go#L179) or [timeout.go:204 ](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/server/filters/timeout.go#L204) I still see the error. Only when both of those sets are removed does the apiserver correctly log a 500. 
A simple change that appears to work without modifying the wroteHeader logic is to change [timeout.go:200](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/server/filters/timeout.go#L200) to ``` if tw.timedOut || (tw.wroteHeader && code != http.StatusInternalServerError) || tw.hijacked) { ``` Logs with this change: ``` E0401 13:00:54.347310 180356 writers.go:176] apiserver was unable to write a JSON response: aaron-prindle-injected-1: tcp broken E0401 13:00:54.347337 180356 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"aaron-prindle-injected-1: tcp broken"} I0401 13:00:54.348675 180356 wrap.go:47] GET /api/v1/pods?timeout=100ms: (2.953008ms) 500 goroutine 10036 [running]: k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0002156c0, 0x1f4) /usr/local/google/home/aprindle/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog/httplog.go:208 +0xc8 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0002156c0, 0x1f4) /usr/local/google/home/aprindle/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog/httplog.go:187 +0x35 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc004105460, 0x1f4) /usr/local/google/home/aprindle/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:227 +0x9b k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics.(*ResponseWriterDelegator).WriteHeader(0xc005996570, 0x1f4) /usr/local/google/home/aprindle/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go:365 +0x45 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.httpResponseWriterWithInit.Write(0x0, 0x5ac9bda, 0x10, 0x1f4, 0x8f2d3c0, 0xc00224f668, 0xc004b327e0, 0x81, 0x81, 0xc00074aa80, ...) /usr/local/google/home/aprindle/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go:51 +0x1ba k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.errSerializationFatal(0x8e9c420, 0xc0059e0cb0, 0x7f678ceef5e8, 0xc007fd7290, 0x0, 0x5ac9bda, 0x10, 0x1f4, 0x8f2d3c0, 0xc00224f668) /usr/local/google/home/aprindle/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go:192 +0x1ce k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.SerializeObject(0x5ac9bda, 0x10, 0x7f678ceef5e8, 0xc007fd7290, 0x8f2d3c0, 0xc00224f668, 0xc008e4d800, 0xc8, 0x8eb4740, 0xc000215a40) /usr/local/google/home/aprindle/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go:125 +0x1d4 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.WriteObjectNegotiated(0x8f318c0, 0xc00128d860, 0x0, 0x0, 0x5aa1355, 0x2, 0x8f2d3c0, 0xc00224f668, 0xc008e4d800, 0xc8, ...) 
/usr/local/google/home/aprindle/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go:150 +0x3d7 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.WriteObject(0xc8, 0x0, 0x0, 0x5aa1355, 0x2, 0x8f318c0, 0xc00128d860, 0x8eb4740, 0xc000215a40, 0x8f2d3c0, ...) /usr/local/google/home/aprindle/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go:72 +0x2b5 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers.transformResponseObject(0x8f415c0, 0xc0059965a0, 0x8f94400, 0xc001e0bc40, 0x8f318c0, 0xc00128d860, 0x8ebe7c0, 0xc000316300, 0x8ea3ba0, 0xc00039c070, ...) /usr/local/google/home/aprindle/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/response.go:57 +0x169a k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers.ListResource.func1(0x8f2d3c0, 0xc00224f668, 0xc008e4d800) /usr/local/google/home/aprindle/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/get.go:276 +0xe17 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints.restfulListResource.func1(0xc0059964e0, 0xc007da2060) /usr/local/google/home/aprindle/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/installer.go:1074 +0x101 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics.InstrumentRouteFunc.func1(0xc0059964e0, 0xc007da2060) /usr/local/google/home/aprindle/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go:271 +0x254 k8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).dispatch(0xc000e3e630, 0x7f678cdfdfa0, 0xc00224f650, 0xc008e4d800) /usr/local/google/home/aprindle/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:277 +0x985 k8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).Dispatch(...) 
/usr/local/google/home/aprindle/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:199 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x5ac0ffb, 0xe, 0xc000e3e630, 0xc000674af0, 0x7f678cdfdfa0, 0xc00224f650, 0xc008e4d800) /usr/local/google/home/aprindle/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:146 +0x4e4 k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/apiserver.(*proxyHandler).ServeHTTP(0xc001afe310, 0x7f678cdfdfa0, 0xc00224f650, 0xc008e4d800) /usr/local/google/home/aprindle/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/apiserver/handler_proxy.go:108 +0x162 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00936a540, 0x7f678cdfdfa0, 0xc00224f650, 0xc008e4d800) /usr/local/google/home/aprindle/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:248 +0x38d k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00020ae70, 0x7f678cdfdfa0, 0xc00224f650, 0xc008e4d800) /usr/local/google/home/aprindle/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:234 +0x85 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x5ac59b2, 0xf, 0xc001a5f8c0, 0xc00020ae70, 0x7f678cdfdfa0, 0xc00224f650, 0xc008e4d800) /usr/local/google/home/aprindle/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:154 +0x6c3 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f678cdfdfa0, 0xc00224f650, 0xc008e4d800) /usr/local/google/home/aprindle/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authorization.go:64 +0x4fa net/http.HandlerFunc.ServeHTTP(0xc00403d200, 0x7f678cdfdfa0, 0xc00224f650, 0xc008e4d800) /usr/local/google/home/aprindle/Downloads/go1.12.1.linux-amd64/go/src/net/http/server.go:1995 +0x44 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f678cdfdfa0, 0xc00224f650, 0xc008e4d800) /usr/local/google/home/aprindle/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/maxinflight.go:160 +0x5c7 net/http.HandlerFunc.ServeHTTP(0xc00442a120, 0x7f678cdfdfa0, 0xc00224f650, 0xc008e4d800) /usr/local/google/home/aprindle/Downloads/go1.12.1.linux-amd64/go/src/net/http/server.go:1995 +0x44 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f678cdfdfa0, 0xc00224f650, 0xc008e4d800) /usr/local/google/home/aprindle/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/impersonation.go:50 +0x1ec3 net/http.HandlerFunc.ServeHTTP(0xc00403d240, 0x7f678cdfdfa0, 0xc00224f650, 0xc008e4d800) /usr/local/google/home/aprindle/Downloads/go1.12.1.linux-amd64/go/src/net/http/server.go:1995 +0x44 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f678cdfdfa0, 0xc00224f650, 0xc008e4d700) /usr/local/google/home/aprindle/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authentication.go:81 +0x527 net/http.HandlerFunc.ServeHTTP(0xc000f62f50, 0x7f678cdfdfa0, 0xc00224f650, 
0xc008e4d700) /usr/local/google/home/aprindle/Downloads/go1.12.1.linux-amd64/go/src/net/http/server.go:1995 +0x44 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithCORS.func1(0x7f678cdfdfa0, 0xc00224f650, 0xc008e4d700) /usr/local/google/home/aprindle/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/cors.go:75 +0x1d7 net/http.HandlerFunc.ServeHTTP(0xc004067ec0, 0x7f678cdfdfa0, 0xc00224f650, 0xc008e4d700) /usr/local/google/home/aprindle/Downloads/go1.12.1.linux-amd64/go/src/net/http/server.go:1995 +0x44 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc004519260, 0xc003058020, 0x8f43c80, 0xc00224f650, 0xc008e4d700) /usr/local/google/home/aprindle/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:118 +0xb3 created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP /usr/local/google/home/aprindle/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:107 +0x1b1 logging error output: "{\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"message\":\"aaron-prindle-injected-1: tcp broken\",\"code\":500}\n" [podlist-error/v0.0.0 (linux/amd64) kubernetes/$Format [::1]:46486] ``` full logs: https://gist.github.com/aaron-prindle/a9bc329d770987f40ef549b58e03f89d This bug might affect more than the serialization error outlined. **Environment**: - Kubernetes version (use `kubectl version`): 1.14 - Cloud provider or hardware configuration: Debian GNU/Linux - OS (e.g: `cat /etc/os-release`): PRETTY_NAME="Debian GNU/Linux buster/sid" NAME="Debian GNU/Linux" ID=debian HOME_URL="https://www.debian.org/" SUPPORT_URL="https://www.debian.org/support" BUG_REPORT_URL="https://bugs.debian.org/" - Kernel (e.g. `uname -a`): Linux "" 4.19.20-1""1-amd64 #1 SMP Debian 4.19.20-1""1 (2019-02-12 > 2018) x86_64 GNU/Linux - Install tools: - Others:
kind/bug,sig/api-machinery,priority/important-longterm,lifecycle/frozen
low
Critical
427,894,419
go
cmd/compile, runtime: reduce function prologue overhead
As of Go 1.12, the stack bound check in every function prologue looks like ``` MOVQ FS:0xfffffff8, CX CMPQ 0x10(CX), SP JBE 0x4018e4 ``` (or some variation thereof). This involves a chain of three dependent instructions, the first two of which are memory loads and the last of which is a conditional branch. I don't have hard data for this, but I suspect this is really bad for the CPU pipeline. The two loads are absolutely going to be in cache, but the CPU still has to wait for the first load before issuing the second, and has to wait for the second before resolving the branch. The branch is highly predictable and can probably be speculated over, but since almost every single function has such a branch, it's probably somewhat likely the branch predictor cache will fail us here. Function prologue overhead was also reported to be high in [The benefits and costs of writing a POSIX kernel in a high-level language](https://www.usenix.org/conference/osdi18/presentation/cutler) by Cutler et al. One way we could address this is by putting the stack bound in a dedicated register (leveraging our new ability to change the internal ABI, #27539). This would make the prologue a single register/register compare and a branch. The branch would still probably have poor prediction cache locality, but the register/register comparison would happen so quickly that we would lose very little to a speculation failure. We're already moving toward implementing goroutine preemption using signals, which would make it easy to poison this stack bound register when requesting a preemption. /cc @dr2chase @randall77 @josharian
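For illustration only, under the dedicated-register idea above the prologue could shrink to something like the following (the register choice and branch target are hypothetical, mirroring the snippet at the top rather than describing any implemented ABI):

```
CMPQ R12, SP   // hypothetical: stack bound kept in a fixed register
JBE  0x4018e4
```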
Performance,NeedsInvestigation,compiler/runtime
low
Critical
427,895,564
TypeScript
3.4 Regression on Type inference with union types
<!-- 🚨 STOP 🚨 𝗦𝗧𝗢𝗣 🚨 𝑺𝑻𝑶𝑷 🚨 Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Even if you think you've found a *bug*, please read the FAQ first, especially the Common "Bugs" That Aren't Bugs section! Please help us by doing the following steps before logging an issue: * Search: https://github.com/Microsoft/TypeScript/search?type=Issues * Read the FAQ: https://github.com/Microsoft/TypeScript/wiki/FAQ Please fill in the *entire* template below. --> <!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. --> **TypeScript Version:** 3.4.0-dev.201xxxxx <!-- Search terms you tried before logging this (so others can find this issue more easily) --> **Search Terms:** Type inference, discriminated union types **Code** ```ts type AllBoxes = ABox | BBox; export interface ICanFindBoxes { findBoxes<T extends AllBoxes, K extends string>(boxName: K): T extends { type: K } ? T[] : never; } class Box<T extends string = string> implements ICanFindBoxes{ type: T; public findBoxes<T extends AllBoxes, K extends string>(boxName: K): T extends { type: K } ? T[] : never { throw 'argg'; } } export class ABox extends Box<'a'>{ } export class BBox extends Box<'b'>{} ``` **Expected behavior:** It compiles **Actual behavior:** Compile time error: ```sh Property 'findBoxes' in type 'Box<T>' is not assignable to the same property in base type 'ICanFindBoxes'. Type '<T extends AllBoxes, K extends string>(boxName: K) => T extends { type: K; } ? T[] : never' is not assignable to type '<T extends AllBoxes, K extends string>(boxName: K) => T extends { type: K; } ? T[] : never'. Two different types with this name exist, but they are unrelated. Type '(T extends { type: K; } ? T[] : never) | ({ type: K; } & ABox)[] | ({ type: K; } & BBox)[]' is not assignable to type 'T extends { type: K; } ? T[] : never'. Type '({ type: K; } & ABox)[]' is not assignable to type 'T extends { type: K; } ? T[] : never'. 11 public findBoxes<T extends AllBoxes, K extends string>(boxName: K): T extends { type: K } ? T[] : never { ``` The compiler is incorrectly identifying that the signatures are different when they are not. If I simplify the union type to only one it works (but this defeats the purpose of the signature). This was working in < 3.4.1
Needs Investigation
low
Critical
427,899,601
rust
Tracking issue for HashMap::extract_if and HashSet::extract_if
The feature gate for the issue is `#![feature(hash_extract_if)]` (previously `hash_drain_filter`) Currently only Vec and LinkedList have a drain_filter method, while other collections such as HashMap and HashSet do not. This means that currently, removing items from a collection, and getting ownership of those items is fairly...unidiomatic and cumbersome. For references, see https://github.com/rust-lang/rfcs/issues/2140 ### Implementation History - #76458 - removed drain-on-drop behavior, renamed to extract_if - #104455 - https://github.com/rust-lang/libs-team/issues/136
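As a rough illustration (my sketch, not code from the issue) of the ergonomics gap: removing entries you also want to own currently takes a collect-then-remove dance, whereas the tracked API collapses it into one call. The commented line assumes the nightly `hash_extract_if` feature.
```rust
use std::collections::HashMap;

fn main() {
    let mut scores: HashMap<String, u32> =
        vec![("a".to_string(), 1), ("b".to_string(), 10)].into_iter().collect();

    // Today: gather the matching keys first, then remove them one by one to take ownership.
    let stale: Vec<String> = scores
        .iter()
        .filter(|(_, v)| **v < 5)
        .map(|(k, _)| k.clone())
        .collect();
    let removed: Vec<(String, u32)> = stale
        .into_iter()
        .filter_map(|k| scores.remove_entry(&k))
        .collect();
    println!("{:?}", removed);

    // With the proposed API (nightly, #![feature(hash_extract_if)]):
    // let removed: Vec<(String, u32)> = scores.extract_if(|_, v| *v < 5).collect();
}
```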
A-collections,T-libs-api,proposed-final-comment-period,C-tracking-issue,disposition-merge
high
Critical
427,919,004
go
strconv: document exact grammar of Parse{Float,Int,Uint}
`strconv.Parse{Float,Int,Uint}` is frequently used in the implementation of other grammars (e.g., [JSON](https://json.org/)), which may be a subset or superset of what `strconv.ParseX` currently does today. However, what `strconv.Parse` does today is not well-specified. What exactly is the input grammar that is accepted? For example, the documentation for `ParseFloat` only says "If s is well-formed", but does not specify what "well-formed" means. I suspect that the answer to this is whether the input is well-formed according to the [grammar in the Go specification](https://golang.org/ref/spec#Floating-point_literals). If so, this needs to be documented. Furthermore, this also opens the question for whether the input grammar is stable. Apparently, this is not the case since `strconv.ParseX` was recently augmented to support the new number literal grammars ([CL/160241](https://go-review.googlesource.com/c/160241) and [CL/160244](https://go-review.googlesource.com/c/160244)). This change is going to silently break a number of use-cases that were assuming that the input of `ParseX` was constrained to the grammar prior to Go1.13. If so, the instability of the input grammar should also be documented. (BTW, I think the change in semantics is entirely reasonable and I support it; but I see a lot of code silently broken by this change. Documenting this better would help dissuade future abuse.) \cc @cybrcodr, @griesemer, @rsc
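For illustration (my sketch, not from the report): a caller implementing a stricter grammar such as JSON has to validate the text itself and only then hand it to `ParseFloat`, precisely because the `strconv` grammar is neither documented nor guaranteed stable.
```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// jsonNumber is the JSON number grammar; anything ParseFloat accepts beyond this
// (e.g. "0x1p-2" or "1_000" once the Go 1.13 literal syntax lands) must be
// rejected by the caller, not by strconv.
var jsonNumber = regexp.MustCompile(`^-?(0|[1-9][0-9]*)(\.[0-9]+)?([eE][+-]?[0-9]+)?$`)

func parseJSONNumber(s string) (float64, error) {
	if !jsonNumber.MatchString(s) {
		return 0, fmt.Errorf("not a JSON number: %q", s)
	}
	return strconv.ParseFloat(s, 64)
}

func main() {
	fmt.Println(parseJSONNumber("1e9"))    // accepted
	fmt.Println(parseJSONNumber("0x1p-2")) // rejected by the wrapper, regardless of strconv's grammar
}
```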
Documentation,NeedsFix
low
Critical
427,930,765
TypeScript
Support custom typeof functions
## Search Terms - custom typeof - custom typeof function - custom type guard ## Suggestion When using a&nbsp;custom `typeof`&#x2011;like function, I’d like TypeScript compiler to be able to infer the correct type in the scope guarded by the custom `typeof`&#x2011;like function. ## Use Cases I need this to provide proper type information for the [`type` function from Blissfuljs](https://blissfuljs.com/docs.html#fn-type) and similar projects. The&nbsp;current approach requires defining the&nbsp;`type(…)`&nbsp;function as&nbsp;<code>type(obj:&nbsp;any):&nbsp;string</code> and&nbsp;then doing a&nbsp;type&nbsp;cast every time `something` is&nbsp;accessed within the `if`&nbsp;block: ```ts /// <reference types="blissfuljs"/> declare var something: any; if ($.type(something) === "array") { (something as any[]).forEach(v => {/* stuff */}) } ``` ## Examples <details><summary><code>type-func.d.ts</code></summary> <br/> ```ts /** * @param obj The variable to check the type of. * @return The result of `typeof obj` or the class name in lowercase for objects. * In the case of numbers, if the value is `NaN`, then the result is `nan`. */ declare function type(obj: null): "null"; declare function type(obj: undefined): "undefined"; // This is to ensure that the type system short-circuits when it encounters primitive types. declare function type(obj: number): "number" | "nan"; declare function type(obj: string): "string"; declare function type(obj: symbol): "symbol"; declare function type(obj: boolean): "boolean"; // Needed to ensure proper return values when wrapper objects are used. /* tslint:disable:ban-types */ declare function type(obj: number | Number): "number" | "nan"; declare function type(obj: string | String): "string"; declare function type(obj: symbol | Symbol): "symbol"; declare function type(obj: boolean | Boolean): "boolean"; declare function type(obj: Function): "function"; /* tslint:enable:ban-types */ declare function type(obj: any[]): "array"; declare function type(obj: RegExp): "regexp"; declare function type(obj: any): string; export = type; ``` </details> `example.ts` ```ts import type = require("./type-func"); declare var something: any; if (type(something) === "array") { // $ExpectType any[] something; } else if (type(somthing) === "number") { // $ExpectType number something; } ``` ## Checklist My suggestion meets these guidelines: * [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code * [x] This wouldn't change the runtime behavior of existing JavaScript code * [x] This could be implemented without emitting different JS based on the types of the expressions * [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.) * [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
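For comparison, the closest thing available today (a sketch of mine, not part of the suggestion) is a user-defined type predicate, which forces a boolean wrapper per narrowing case instead of letting `type(x) === "array"` narrow directly:
```ts
declare function type(obj: any): string;

// One wrapper per narrowing case; the suggestion would make these unnecessary.
function isArrayValue(obj: unknown): obj is unknown[] {
    return type(obj) === "array";
}

declare var something: any;
if (isArrayValue(something)) {
    something.forEach(v => {/* stuff */});
}
```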
Suggestion,Awaiting More Feedback
low
Major
427,941,519
go
cmd/compile: unexpected difference in compiled code for returned array
### What version of Go are you using (`go version`)? <pre> $ go version go version go1.12 linux/amd64 </pre> ### Does this issue reproduce with the latest release? Yes. ### What did you do? I compiled this code: ```go package test func test() [16]byte { x := [16]byte{0} return x } ``` ### What did you expect to see? I expected the generated assembly code to be similar to this program's code: ```go package test func test() [16]byte { x := [16]byte{1} return x } ``` ### What did you see instead? Instead, the generated code is much more bloated than I anticipated, even involving stack movement. ```asm subq $24, SP movq BP, 16(SP) leaq 16(SP), BP xorps X0, X0 movups X0, "".~r0+32(SP) movups X0, "".x(SP) movups "".x(SP), X0 movups X0, "".~r0+32(SP) movq 16(SP), BP addq $24, SP ret ``` As pointed out by @vcabbage, this issue is probably related to the assignment to "x", given that the compiler produces better code for the following program: ```go package test func test() [16]byte { return [16]byte{0} } ``` Another interesting case is the one below, where the "good" case above becomes as bad as the "bad" one: ```go package test func test() [16]byte { x := [16]byte{1} y := x return y } ```
NeedsInvestigation,binary-size,compiler/runtime
low
Major
427,990,728
go
cmd/compile: add line numbers for function exit paths?
Noticed while looking at https://github.com/golang/go/issues/31193. ```go func count(x uint) { if x != 0 { count(x - 1) } } ``` compiles to: ``` "".count STEXT size=70 args=0x8 locals=0x10 0x0000 00000 (count_test.go:8) TEXT "".count(SB), ABIInternal, $16-8 0x0000 00000 (count_test.go:8) MOVQ (TLS), CX 0x0009 00009 (count_test.go:8) CMPQ SP, 16(CX) 0x000d 00013 (count_test.go:8) JLS 63 0x000f 00015 (count_test.go:8) SUBQ $16, SP 0x0013 00019 (count_test.go:8) MOVQ BP, 8(SP) 0x0018 00024 (count_test.go:8) LEAQ 8(SP), BP 0x001d 00029 (count_test.go:9) MOVQ "".x+24(SP), AX 0x0022 00034 (count_test.go:9) TESTQ AX, AX 0x0025 00037 (count_test.go:9) JNE 49 0x0027 00039 (<unknown line number>) MOVQ 8(SP), BP 0x002c 00044 (<unknown line number>) ADDQ $16, SP 0x0030 00048 (<unknown line number>) RET 0x0031 00049 (count_test.go:10) DECQ AX 0x0034 00052 (count_test.go:10) MOVQ AX, (SP) 0x0038 00056 (count_test.go:10) CALL "".count(SB) 0x003d 00061 (count_test.go:10) JMP 39 0x003f 00063 (count_test.go:8) CALL runtime.morestack_noctxt(SB) 0x0044 00068 (count_test.go:8) JMP 0 ``` Note the `<unknown line number>` entries. It seems to me that those could be attributed to the closing brace of the function: It is code executed as the function exits. I don't know whether adding line numbers here would be good or bad for debugging, or good or bad for binary size. I just thought I'd mention it in case @dr2chase sees an opportunity here. If not, we can close.
NeedsInvestigation,Debugging,compiler/runtime
low
Critical
428,015,084
create-react-app
create react app not working
<!-- PLEASE READ THE FIRST SECTION :-) --> ### Is this a bug report? (write your answer here) <!-- If you answered "Yes": Please note that your issue will be fixed much faster if you spend about half an hour preparing it, including the exact reproduction steps and a demo. If you're in a hurry or don't feel confident, it's fine to report bugs with less details, but this makes it less likely they'll get fixed soon. In either case, please fill as many fields below as you can. If you answered "No": If this is a question or a discussion, you may delete this template and write in a free form. Note that we don't provide help for webpack questions after ejecting. You can find webpack docs at https://webpack.js.org/. --> ### Did you try recovering your dependencies? <!-- Your module tree might be corrupted, and that might be causing the issues. Let's try to recover it. First, delete these files and folders in your project: * node_modules * package-lock.json * yarn.lock Then you need to decide which package manager you prefer to use. We support both npm (https://npmjs.com) and yarn (http://yarnpkg.com/). However, **they can't be used together in one project** so you need to pick one. If you decided to use npm, run this in your project directory: npm install -g npm@latest npm install This should fix your project. If you decided to use yarn, update it first (https://yarnpkg.com/en/docs/install). Then run in your project directory: yarn This should fix your project. Importantly, **if you decided to use yarn, you should never run `npm install` in the project**. For example, yarn users should run `yarn add <library>` instead of `npm install <library>`. Otherwise your project will break again. Have you done all these steps and still see the issue? Please paste the output of `npm --version` and/or `yarn --version` to confirm. --> (Write your answer here.) ### Which terms did you search for in User Guide? <!-- There are a few common documented problems, such as watcher not detecting changes, or build failing. They are described in the Troubleshooting section of the User Guide: https://facebook.github.io/create-react-app/docs/troubleshooting Please scan these few sections for common problems. Additionally, you can search the User Guide itself for something you're having issues with: https://facebook.github.io/create-react-app/ If you didn't find the solution, please share which words you searched for. This helps us improve documentation for future readers who might encounter the same problem. --> (Write your answer here if relevant.) ### Environment <!-- To help identify if a problem is specific to a platform, browser, or module version, information about your environment is required. This enables the maintainers quickly reproduce the issue and give feedback. Run the following command in your React app's folder in terminal. Note: The result is copied to your clipboard directly. `npx create-react-app --info` Paste the output of the command in the section below. --> (paste the output of the command here) ### Steps to Reproduce <!-- How would you describe your issue to someone who doesn’t know you or your project? Try to write a sequence of steps that anybody can repeat to see the issue. --> (Write your steps here:) 1. i created a folder in my D drive named react 2. wrote the comand npm init react-app my-app 3. but it shows the folllowing error. ### Expected Behavior <!-- How did you expect the tool to behave? It’s fine if you’re not sure your understanding is correct. 
Just write down what you thought would happen. --> i expected to create my project folder (Write what you thought would happen.) ### Actual Behavior <!-- Did something go wrong? Is something broken, or not behaving as you expected? Please attach screenshots if possible! They are extremely helpful for diagnosing issues. --> ![Screenshot (59)](https://user-images.githubusercontent.com/48636726/55375702-c85c7280-552a-11e9-826d-4c80ec2ad92b.png) (Write what happened. Please add screenshots!) ![Screenshot (59)](https://user-images.githubusercontent.com/48636726/55375740-e629d780-552a-11e9-8098-19aa52f6d903.png) ### Reproducible Demo i just started react js using thus command <!-- If you can, please share a project that reproduces the issue. This is the single most effective way to get an issue fixed soon. There are two ways to do it: * Create a new app and try to reproduce the issue in it. This is useful if you roughly know where the problem is, or can’t share the real code. * Or, copy your app and remove things until you’re left with the minimal reproducible demo. This is useful for finding the root cause. You may then optionally create a new project. This is a good guide to creating bug demos: https://stackoverflow.com/help/mcve Once you’re done, push the project to GitHub and paste the link to it below: --> (Paste the link to an example project and exact instructions to reproduce the issue.) <!-- What happens if you skip this step? We will try to help you, but in many cases it is impossible because crucial information is missing. In that case we'll tag an issue as having a low priority, and eventually close it if there is no clear direction. We still appreciate the report though, as eventually somebody else might create a reproducible example for it. Thanks for helping us help you! -->
issue: question
low
Critical
428,016,977
godot
OpenGL ES 2.0 bug: BackBufferCopy child node changing ZIndex buffers to the screen cache, so the BackBufferCopy cannot be sampled correctly
**Godot version:** 3.2 dev **OS/device including version:** Windows, 1080 Ti, OpenGL ES 2.0 **Issue description:** When a child node of a BackBufferCopy changes its ZIndex, the content is buffered to the screen cache instead, and the BackBufferCopy cannot be sampled correctly.
bug,topic:rendering,confirmed
low
Critical
428,032,141
react
Is it recommended to fetch in effect or should it be imperative
In our team we encountered an explosive discussion on how we should handle the relationship between a fetch and its parameters. After searching the community I still found various solutions to this, so I'd like to raise this discussion to find a best practice. ## Background Suppose we have a simple list view like: <img width="687" alt="Jietu20190402-130206@2x" src="https://user-images.githubusercontent.com/639549/55377435-87268b80-5547-11e9-97a8-313a5713ced3.png"> Whenever the user types a keyword in the textbox and clicks the "Search" button, or changes the page number, we should fetch a new list from the remote server and render it in the table. We use redux to manage the global state of this simple app; the store is structured as: ```js { filter: '', pageIndex: 0, results: [] } ``` We developed a total of 3 solutions to demonstrate how a change of `filter` or `pageIndex` should cause a fetch of `results`. ## Use effect and separation of view and logic This is the first demo: https://codesandbox.io/s/20x1m39w00 In this implementation we tried to: 1. Use `useEffect` to trigger a fetch when any parameter changes. 2. Not pass any parameter as a prop to the `components/List` component. From my point of view, I like this solution best because: 1. It has a very clear separation of view and logic; `components/List` does not receive any redundant props such as `filter` or `pageIndex`. 2. It theoretically treats a callback prop as a normal one, making it a dependency of `useEffect`. 3. It works in a **reactive** way, which means "we trigger a fetch not because of the action taken by the user, but only because of the change of state". Still we have concerns about it: 1. It obviously triggers more renders and updates, because a change of `filter` or `pageIndex` does not dispatch `FETCH_RESULTS` immediately; this causes a sync dispatch in an effect, which we previously avoided via the `no-set-state-did-update` rule. 2. We create a state update from another state update; this "chaining" is not clear enough for developers and may cause an unwanted infinite loop. ## Use effect and params together The second demo is much like the first one: https://codesandbox.io/s/54o1rjvyv4 The only change is that we pass `filter` and `pageIndex` to `components/List`; in this case we believe **an effect is part of the component**, so every dependency used to form an effect should be passed as a prop. This solution gives a clearer view of what is used to fetch data in `components/List`, and it is a widely adopted solution in the community; however, we're not sure it is officially recommended. ## Imperative action to fetch data As opposed to the previous two, this is our third demo: https://codesandbox.io/s/p5yv48x97x In this solution we changed our approach and implemented the app in a more "redux way": 1. We trigger the fetch on user interactions, either a click on the "Search" button or a change of the page number; however, each interaction only provides its own parameter, so we don't provide `pageIndex` when the "Search" button is clicked. 2. We have a thunk which computes a new parameter object based on the current state using the `getState()` function, and a `FETCH_RESULTS` action is dispatched. 3. We have several reducers that observe the `FETCH_RESULTS` action and update the corresponding parameters in the global state. 4. The fetched list is connected to the `components/List` component; this component is now a pure presentational component, and no lifecycle effect is involved. 5. To handle the first fetch when the application is mounted, we create a `containers/App` container component.
By doing this we eliminated the "chaining state update" issue; however, it introduces several concerns: 1. If we add more user interactions in the future, the `loadResults` thunk could become more and more complex. 2. The use of `getState` in `redux-thunk` is not highly recommended in the community; we found some articles stating that developers should avoid using it in most cases. 3. We can't justify the existence of the `containers/App` container, which exists only to trigger a fetch on mount; its `useEffect` takes no dependencies and the `exhaustive-deps` rule complains about it, and not pairing mount and update is also a big point of discomfort for us. 4. Triggering the fetch from user interactions is what we call "imperative"; we're confused about whether a reactive framework like react recommends imperative programming. ------ Since we have not been able to reach a conclusion for a very long time, we decided to raise this issue for more discussion, hoping to find a better solution to these very common use cases.
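For readers who don't want to open the sandboxes, a minimal sketch of the second approach (module names and the `fetchResults` action creator are placeholders, not the actual demo code):
```jsx
import React, { useEffect } from 'react';
import { fetchResults } from './actions'; // hypothetical thunk action creator
import Table from './Table';              // hypothetical presentational table

function List({ filter, pageIndex, results, dispatch }) {
  useEffect(() => {
    // Re-fetch whenever either parameter changes; satisfies exhaustive-deps.
    dispatch(fetchResults(filter, pageIndex));
  }, [dispatch, filter, pageIndex]);

  return <Table rows={results} />;
}

export default List;
```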
Type: Discussion,Component: Hooks
low
Minor
428,051,691
rust
Thread locals keep Rust shared library from unloading `dlclose`.
OS: Ubuntu 18.04 Doesn't seem to happen on Windows, untested on MacOS. I created [`cr-sys`](https://github.com/Neopallium/cr-sys) crate to wrap [cr.h: A Simple C Hot Reload Header-only Library](https://github.com/fungos/cr). It works, except that each version of the plugin isn't unloaded correctly, if the plugin uses TLS destructors registered with `__cxa_thread_atexit_impl`. C++ plugins can be compiled with `-fno-use-cxa-atexit` to disable the use of `__cxa_thread_atexit`. But I can't find away to do this with Rust plugins. Right now the only work around is to not unload old plugin and make sure to use a different name for each plugin version. Is there any way to run the TLS destructors registered by the plugin or to disable TLS destructors for a Rust shared library? Related issue on `cr.h`: https://github.com/fungos/cr/issues/35
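For context, a minimal sketch of the kind of plugin code that triggers the behavior: any thread-local whose type has a `Drop` impl makes the Rust runtime register a per-thread destructor (via `__cxa_thread_atexit_impl` on glibc), and that registration references the shared object, so `dlclose` cannot actually unload it.
```rust
struct PluginState {
    name: String,
}

impl Drop for PluginState {
    fn drop(&mut self) {
        eprintln!("tearing down {}", self.name); // destructor registered per thread
    }
}

thread_local! {
    static STATE: PluginState = PluginState { name: "plugin".to_string() };
}

#[no_mangle]
pub extern "C" fn plugin_tick() {
    STATE.with(|s| eprintln!("tick from {}", s.name));
}
```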
O-linux,A-thread-locals,C-bug,T-libs
low
Minor
428,176,679
scrcpy
flatpak package
Hi, I have packaged scrcpy with flatpak. You can find it at https://github.com/12wk34/scrcpy-flatpak Could you add it to flathub.org, because they require the developer to do it? More information about app submission: https://github.com/flathub/flathub/wiki/App-Submission
distrib
medium
Critical
428,221,838
kubernetes
single-use controller config defaults should auto-apply
The functions added in https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/apis/config/v1alpha1/defaults.go#L68-L105 should be normal defaults. The "RecommendedDefaults" helper function approach was only needed for config structs used in multiple contexts (like client configuration, rate limiting, leader election, etc). The controller-specific config structs should use normal defaulting functions so we don't have to remember to wire them up in every use. /cc @stewart-yu
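A hedged sketch of the target shape (type and field names below are made up, not the actual structs in that file): a plain `SetDefaults_*` function that defaulter-gen wires into the generated `RegisterDefaults`, so the scheme applies the defaults automatically instead of every caller having to invoke a `RecommendedDefault*` helper.
```go
package v1alpha1

import "k8s.io/apimachinery/pkg/runtime"

// FooControllerConfiguration stands in for a real controller-specific config struct.
type FooControllerConfiguration struct {
	ConcurrentFooSyncs int32
}

// SetDefaults_FooControllerConfiguration is an ordinary defaulting function;
// defaulter-gen picks it up, so decoding the config applies the defaults
// without any per-caller plumbing.
func SetDefaults_FooControllerConfiguration(obj *FooControllerConfiguration) {
	if obj.ConcurrentFooSyncs == 0 {
		obj.ConcurrentFooSyncs = 5
	}
}

func addDefaultingFuncs(scheme *runtime.Scheme) error {
	return RegisterDefaults(scheme) // RegisterDefaults is generated by defaulter-gen
}
```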
priority/backlog,kind/cleanup,lifecycle/frozen,wg/component-standard
low
Minor
428,266,989
go
x/tools/cmd/goimports: do not prefix packages from GOROOT if it is inside a module
### What version of Go are you using (`go version`)? <pre> go version go1.12beta2 darwin/amd64 </pre> ### Does this issue reproduce with the latest release? Yes. ### What operating system and processor architecture are you using (`go env`)? <details> <pre> GOARCH="amd64" GOBIN="" GOCACHE="/Users/dottedmag/Library/Caches/go-build" GOEXE="" GOFLAGS="" GOHOSTARCH="amd64" GOHOSTOS="darwin" GOOS="darwin" GOPATH="/Users/dottedmag/go" GOPROXY="" GORACE="" GOROOT="/Users/dottedmag/tectonic/_deps/go-1.12beta2" GOTMPDIR="" GOTOOLDIR="/Users/dottedmag/tectonic/_deps/go-1.12beta2/pkg/tool/darwin_amd64" GCCGO="gccgo" CC="clang" CXX="clang++" CGO_ENABLED="1" GOMOD="" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/bw/6hyp_vyj68v973qbhlbcvlcr0000gn/T/go-build721712428=/tmp/go-build -gno-record-gcc-switches -fno-common" </pre> </details> ### What did you do? I have a `GOROOT` inside a module (a per-repository Go installation for a hermetic build). I run `goimports`, it tries to resolve a package (say, `reflect`), it loads packages from `GOROOT`, and they end up being prefixed with a path inside the current module. ### What did you expect to see? ``` package mypkg import "reflect" type x reflect.Type ``` ### What did you see instead? ``` package mypkg import "my.module/_deps/go-1.12beta2/src/reflect" type x reflect.Type ``` ### How to fix [Patch redacted by @bcmills. Please send a PR or CL so that we can verify CLA compliance.]
help wanted,NeedsFix,Tools
low
Critical
428,267,939
pytorch
MaxPool with n-dimensional tensors
## 🚀 Feature: Currently `MaxPoolxd(...)` accepts only up to `x+2`-dimensional input tensors. The idea is to make `MaxPool` to work with any `n`-dimensional tensor, such that the maxpooling operation is applied only on the last `x` dimensions. ## Motivation It can be useful to avoid verbose reshaping from/to the required shape of `MaxPool`. ## Alternatives For `MaxPool2d`, this is what I'm currently doing. It's pretty verbose and it can be better rewritten but it does work with any `n`-dimensional input tensor. ```python import torch from torch.nn.modules.pooling import _MaxPoolNd class GenericMaxPool2d(_MaxPoolNd): def forward(self, input): input_shape = input.shape c = input.shape[-3] h_in = input.shape[-2] w_in = input.shape[-1] input = input.view(-1, c, h_in, w_in) output = torch.nn.functional.max_pool2d(input, self.kernel_size, self.stride, self.padding, self.dilation, self.ceil_mode, self.return_indices) h_out = output.shape[-2] w_out = output.shape[-1] return output.view(*input_shape[:-3], c, h_out, w_out) ``` cc @albanD @mruberry
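A quick usage check for the wrapper above (the shapes are just an example):
```python
pool = GenericMaxPool2d(kernel_size=2)
x = torch.randn(6, 5, 4, 3, 16, 16)  # e.g. (batch, time, views, C, H, W)
y = pool(x)
print(y.shape)  # torch.Size([6, 5, 4, 3, 8, 8]) -- pooling applied to the last two dims only
```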
module: nn,triaged,enhancement,module: pooling
low
Minor
428,282,339
pytorch
Numerical instability KL divergence RelaxedOneHotCategorical
## 🐛 Bug It seems that computing the KL divergence between RelaxedOneHotCategorical distributions (estimated from a sample, as in the code below) leads to numerical instabilities. ## To Reproduce ```python import torch from torch.distributions import RelaxedOneHotCategorical p_m = RelaxedOneHotCategorical(torch.tensor([2.2]), probs=torch.tensor([0.1, 0.2, 0.3, 0.4])) batch_param_obtained_from_a_nn = torch.rand(2, 4) q_m = RelaxedOneHotCategorical(torch.tensor([5.4]), logits=batch_param_obtained_from_a_nn) z = q_m.rsample() kl = - torch.mean(q_m.log_prob(z).exp() * (q_m.log_prob(z) - p_m.log_prob(z))) z tensor([[0.2671, 0.2973, 0.2144, 0.2212], [0.2431, 0.2550, 0.3064, 0.1954]]) kl tensor(-766.7020) ``` ## Expected behavior The KL divergence estimate should stay bounded; instead it grows far too large. ## Environment - PyTorch Version (e.g., 1.0): 1.1 - OS (e.g., Linux): Mac - How you installed PyTorch (`conda`, `pip`, source): pip - Python version: 3.7
module: numerical-stability,module: distributions,triaged
low
Critical
428,318,364
TypeScript
Nested conditional type with generic tuple argument always expands to false branch.
<!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. --> **TypeScript Version:** next (`3.4.0-dev.20190330`), latest (`3.4.1`) <!-- Search terms you tried before logging this (so others can find this issue more easily) --> **Search Terms:** * tuple conditional type * tuple compare * tuple comparison **Code** ```ts type Not<T extends boolean> = [T] extends [true] ? false : true; type NoOp<T extends boolean> = Not<Not<T>>; // always expands true (see intellisense) type t0 = Not<Not<false>>; // false type t1 = NoOp<false>; // true (but should be false) ``` **Expected behavior:** Type `t1` must be `false` and `NoOp` must be of the nested conditional type that depends on `T`, but not of simple `true` unit type. **Actual behavior:** As you see, something is broken when you use generic argument for tuple comparison in conditional type, because when you expand your types manually (`t0` is an expanded representation of `t1`) it works as expected. **Playground Link:** <!-- A link to a TypeScript Playground "Share" link which demonstrates this behavior --> [Klick me](https://www.typescriptlang.org/play/index.html#src=type%20Not%3CT%20extends%20boolean%3E%20%3D%20%5BT%5D%20extends%20%5Btrue%5D%20%3F%20false%20%3A%20true%3B%0D%0Atype%20NoOp%3CT%20extends%20boolean%3E%20%3D%20Not%3CNot%3CT%3E%3E%3B%0D%0A%0D%0Atype%20t10%20%3D%20Not%3CNot%3Cfalse%3E%3E%3B%20%2F%2F%20false%0D%0Atype%20t11%20%3D%20NoOp%3Cfalse%3E%3B%20%20%20%20%20%2F%2F%20true%0D%0A) **Related Issues:** <!-- Did you find other bugs that looked similar? --> This bug report originated from my [stackoverflow question](https://stackoverflow.com/questions/55465053/typescript-tuples-conditional-comparison-always-evaluates-to-false). Additional credits to @dragomirtitian and @fictitious for helping and localizing the bug to a minimal representation.
Bug
low
Critical
428,347,793
TypeScript
TS 3.4: Error when passing dynamically imported generic type as a function argument
<!-- 🚨 STOP 🚨 𝗦𝗧𝗢𝗣 🚨 𝑺𝑻𝑶𝑷 🚨 Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Even if you think you've found a *bug*, please read the FAQ first, especially the Common "Bugs" That Aren't Bugs section! Please help us by doing the following steps before logging an issue: * Search: https://github.com/Microsoft/TypeScript/search?type=Issues * Read the FAQ: https://github.com/Microsoft/TypeScript/wiki/FAQ Please fill in the *entire* template below. --> <!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. --> **TypeScript Version:** 3.4.0-dev.201xxxxx <!-- Search terms you tried before logging this (so others can find this issue more easily) --> **Search Terms:** TS2322, dynamic import, dynamic import never, dynamic import generic **Code** .\node_modules\.bin\tsc --jsx react --lib es2017 --strict index.tsx ```ts // ===== page.tsx ====== import * as React from 'react' export interface PageProps { title: string } export class Page extends React.Component<PageProps> { render() { return <div>{this.props.title}</div> } } // ===== index.tsx ===== import * as React from 'react' import{ PageProps, Page } from './page' export function myFunction<TProps>(loader: () => Promise<React.ComponentType<TProps>>) { } // No error myFunction(() => Promise.resolve(Page)) // No error const loader: () => Promise<React.ComponentType<PageProps>> = () => import('./page').then(m => m.Page) // Error myFunction(() => import('./page').then(m => m.Page)) ``` **Expected behavior:** No compile error. This was the behavior in TS 3.3 and earlier. **Actual behavior:** There is an error after upgrading to TS 3.4: ``` index.tsx:14:18 - error TS2322: Type 'Promise<typeof Page | ComponentClass<never, any> | FunctionComponen t<never>>' is not assignable to type 'Promise<ComponentType<PageProps>>'. Type 'typeof Page | ComponentClass<never, any> | FunctionComponent<never>' is not assignable to type 'C omponentType<PageProps>'. Type 'ComponentClass<never, any>' is not assignable to type 'ComponentType<PageProps>'. Type 'ComponentClass<never, any>' is not assignable to type 'ComponentClass<PageProps, any>'. Type 'PageProps' is not assignable to type 'never'. 14 myFunction(() => import('./page').then(m => m.Page)) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ index.tsx:4:44 4 export function myFunction<TProps>(loader: () => Promise<React.ComponentType<TProps>>) { ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The expected type comes from the return type of this signature. ``` Repro here: https://github.com/srmagura/ts-import-repro
Bug
medium
Critical
428,366,060
rust
Re-enable LLVM and debug assertions for slow builders
Assertions are currently disabled on a number of builders: * test-various - debug asserts, intentional - needed to properly test code size of some tests * dist-x86_64-apple - llvm, debug * dist-x86_64-apple-alt - llvm, debug * x86_64-apple - llvm, debug * dist-aarch64-apple - llvm, debug
O-macos,P-low,T-infra
low
Critical
428,418,595
go
x/fuzzdata: new repository for fuzzing corpus data
### Summary In #30719 and #30979, `dvyukov/go-fuzz` compatible fuzz functions were landed in: * the standard library in [src/image/png/fuzz.go](https://github.com/golang/go/blob/master/src/image/png/fuzz.go) * `golang.org/x` in [golang.org/x/image/tiff/fuzz.go](https://github.com/golang/image/blob/master/tiff/fuzz.go) The follow-up item in this issue here is to add a new `golang.org/x/corpus` repository to hold an example fuzzing corpus for portions of the standard library and x subrepos. This is to help with the exploration requested by the core Go team in discussion of the #19109 proposal to "make fuzzing a first class citizen". The name for the new corpus repo alternatively could be `golang.org/x/fuzz` or something else; some additional naming discussion below. Note that this issue is at least currently intended to be solely about creating the repository itself, and this issue does not cover checking in any corresponding fuzzing corpus (which is likely to be a follow-up issue). ### Background See the "Background" section of #30719 or https://github.com/golang/go/issues/19109#issuecomment-441442080. ### Additional Details As part of the #19109 proposal discussion, there were multiple comments/requests from the core Go team asking to develop a better understanding of how a fuzzing corpus looks and behaves when it resides in a repository. For example, this March 2017 request in https://github.com/golang/go/issues/19109#issuecomment-289573239 from Russ: > add fuzz tests to at least the x subrepos and maybe the standard library, so that we can understand the implications of having them in the source repos (including how much space we're going to spend on testdata/fuzz corpus directories In the March 2017 proposal document in https://github.com/golang/go/issues/19109#issuecomment-285456008, @dvyukov proposed: > For the standard library it is proposed to check in corpus into `golang.org/x/fuzz` repo. `golang.org/x/fuzz` seems to be a very reasonable name. Two alternative possible names: * `golang.org/x/corpus` * `golang.org/x/fuzzcorpus` Personally, I think `x/corpus` or `x/fuzzcorpus` might be more evocative than `x/fuzz` (e.g., someone might incorrectly think `x/fuzz` is where the fuzzing implementation or fuzzing functions live), but I do not have a strong opinion on the name and suspect others will have stronger opinions. Until there is additional feedback on the name, the rest of this comment here will use the term `golang.org/x/corpus`. ### Populating the corpus The initial seeding of the corpus can likely come from https://github.com/dvyukov/go-fuzz-corpus for a given Fuzz function. If that happens, then right now, there would be two corpus directories populated: [go-fuzz-corpus/png/corpus](https://github.com/dvyukov/go-fuzz-corpus/tree/master/png/corpus) and [go-fuzz-corpus/tiff/corpus](https://github.com/dvyukov/go-fuzz-corpus/tree/master/tiff/corpus). In parallel, multiple people are making progress on integrating `dvyukov/go-fuzz` into oss-fuzz in google/oss-fuzz#36, google/oss-fuzz#2188, dvyukov/go-fuzz#213 and elsewhere. Most likely, it will make sense to periodically update the `golang.org/x/corpus` repo with the output from oss-fuzz (which otherwise by default is stored in a Google Cloud Storage bucket). However, the exact mechanism of populating and updating `golang.org/x/corpus` repo I think can be discussed outside of this particular issue. 
(Reason: In general, it seems more tractable to make progress if things are broken down into more manageable discrete chunks of work, especially to break out into separate steps the things that must be done by someone on the core Go team, vs. could be done by someone from the broader community. This issue is focused on the creation of the repo, which presumably must be done by someone on the core Go team, whereas populating the corpus can be done by a greater range of people from the broader community in consultation with the core Go team). I am of course happy to discuss anything here, and happy to be corrected if any of the above is different than how people would like to proceed. CC @dvyukov @josharian @FiloSottile @bradfitz @acln0
NeedsDecision
low
Critical
428,466,635
flutter
Engine often does not correctly handle errors from the Dart embedding API
The DART_CHECK_VALID macro had incorrect behavior in the face of Isolate.kill and Dart_Cleanup. It was removed upstream. The engine is still, in many places, calling Dart_PropagateError in a context where there is no Dart frame to which the error can be propagated. One example is the initialization of the core libraries that happens in the isolate creation callback.
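A hedged sketch of the safer pattern (include paths, the logging macro, and the surrounding setup are illustrative): check the returned handle with `Dart_IsError` and report it, rather than calling `Dart_PropagateError` where there is no Dart frame to unwind.
```cpp
#include "flutter/fml/logging.h"                        // path illustrative
#include "third_party/dart/runtime/include/dart_api.h"  // path illustrative

void InvokeMain() {
  Dart_Handle library = Dart_RootLibrary();
  Dart_Handle result =
      Dart_Invoke(library, Dart_NewStringFromCString("main"), 0, nullptr);
  if (Dart_IsError(result)) {
    // No Dart frame here, so don't propagate; log and bail out instead.
    FML_LOG(ERROR) << "Dart error: " << Dart_GetError(result);
    return;
  }
}
```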
engine,dependency: dart,P2,team-engine,triaged-engine
low
Critical
428,479,730
pytorch
Value of torch.backends.cudnn.benchmark Baked into JIT-Traced Modules ( 150x slowdown on ConvTranspose2d() ) [jit] [libtorch] [cudnn]
## 🐛 Bug If you trace a module with `torch.jit.trace(...)` and load that script module in C++ via LibTorch, the resulting behavior in C++ depends on whether or not the `torch.backends.cudnn.benchmark` flag was set. Calls to `at::globalContext().setBenchmarkCuDNN(true/false)` from the C++ API at runtime appear to have no effect. ## To Reproduce **NOTE**: I was not able to verify this issue still exists on the latest nightly (20190402) because it appears the latest nightly (at least on Windows) cannot run JIT-traced models. Even the simplest model gives the following error: ``` INVALID_ARGUMENT:: Cannot find field. (deserialize at ..\torch\csrc\jit\import.cpp:108) (no backtrace available) ``` 1) Run the python script below: `python test.py 0` or `python test.py 1` 2) Compile + run the C++ code below. 3) Observe: a) Average time per call. I see ~0.8ms in the python script and either ~0.8 or ~120ms in C++ depending on the flag used in python. In either case, C++ sets benchmarking ON. (GTX 1080) b) Kernel run by CuDNN. w/either setting of the flag, the python code runs `cudnn::detail::dgrad_engine<...>`. With the flag ON, it runs `cudnn::detail::dgrad2d_alg1_1<...>` once (taking ~120ms) and then chooses the faster `dgrad_engine`. If the flag was ON in python, C++ also chooses `dgrad_engine` but if the flag was OFF in python, it always chooses `dgrad2d_alg1_1` regardless of the flag setting in C++. I observed the choice of kernel using `nvprof python test.py 0/1`. Python Script (`test.py`): ```python import sys import time import torch as th th.backends.cudnn.benchmark = bool(int(sys.argv[1])) mod = th.nn.ConvTranspose2d(8, 3, 4, 2, 1).cuda() inp = th.zeros(1, 8, 512, 512).cuda() mod(inp); mod(inp); mod(inp) smod = th.jit.trace(mod, (inp,), check_trace=False) smod.save("smod.ptj") N = 1000 th.cuda.synchronize() start = time.time() for _ in range(N): mod(inp) th.cuda.synchronize() end = time.time() print("Time (ms):", 1000*(end-start)/N) ``` C++ Code: ```c++ #include <chrono> #include <iostream> #include <c10/cuda/CUDAGuard.h> #include <torch/script.h> #include <torch/torch.h> #include <cuda_runtime_api.h> int main() { at::globalContext().setBenchmarkCuDNN(true); auto nograd = torch::NoGradGuard(); try { auto mod = torch::jit::load("smod.ptj"); mod->to(torch::kCUDA); torch::jit::Stack input_stack = {torch::zeros({1, 8, 512, 512}, torch::kCUDA)}; mod->forward(input_stack); mod->forward(input_stack); mod->forward(input_stack); const int N = 100; cudaDeviceSynchronize(); const auto start = std::chrono::high_resolution_clock::now(); for (int i = 0; i < N; ++i) { mod->forward(input_stack); cudaDeviceSynchronize(); } const auto end = std::chrono::high_resolution_clock::now(); const float elapsed = std::chrono::duration<float, std::milli>(end - start).count() / N; std::cout << "Time (ms): " << elapsed << std::endl; } catch (c10::Error e) { std::cerr << e.what() << std::endl; return 1; } return 0; } ``` ## Expected Behavior I would expect that either: 1) the C++ setting of `at::globalContext().setBenchmarkCuDNN(true)` should be respected (choosing the correct algorithm) or 2) at least print a warning that it is being overridden by the value of the flag at trace time. 
## Additional Info I printed the JIT graphs generated with benchmarking ON/OFF and got the following with the flag OFF: ``` graph(%input : Float(1, 8, 512, 512), %1 : Float(8, 3, 4, 4), %2 : Float(3)): %3 : int = prim::Constant[value=2](), scope: ConvTranspose2d %4 : int = prim::Constant[value=2](), scope: ConvTranspose2d %5 : int[] = prim::ListConstruct(%3, %4), scope: ConvTranspose2d %6 : int = prim::Constant[value=1](), scope: ConvTranspose2d %7 : int = prim::Constant[value=1](), scope: ConvTranspose2d %8 : int[] = prim::ListConstruct(%6, %7), scope: ConvTranspose2d %9 : int = prim::Constant[value=1](), scope: ConvTranspose2d %10 : int = prim::Constant[value=1](), scope: ConvTranspose2d %11 : int[] = prim::ListConstruct(%9, %10), scope: ConvTranspose2d %12 : bool = prim::Constant[value=1](), scope: ConvTranspose2d %13 : int = prim::Constant[value=0](), scope: ConvTranspose2d %14 : int = prim::Constant[value=0](), scope: ConvTranspose2d %15 : int[] = prim::ListConstruct(%13, %14), scope: ConvTranspose2d %16 : int = prim::Constant[value=1](), scope: ConvTranspose2d %17 : bool = prim::Constant[value=0](), scope: ConvTranspose2d %18 : bool = prim::Constant[value=0](), scope: ConvTranspose2d %19 : bool = prim::Constant[value=1](), scope: ConvTranspose2d %20 : Float(1, 3, 1024, 1024) = aten::_convolution(%input, %1, %2, %5, %8, %11, %12, %15, %16, %17, %18, %19), scope: ConvTranspose2d return (%20) ``` The only change when the flag is ON is that register %17 is 1 instead of 0. I suppose this is where the "hardcoding" of the flag might be happening? ## Environment Python code was run on Linux, C++ code was run on Windows - PyTorch Version (e.g., 1.0): 1.0.0.dev20190311 on linux, 2336f0ba0 on Windows - OS (e.g., Linux): Fedora 29, Windows 10 1809 - How you installed PyTorch (`conda`, `pip`, source): conda (pytorch-nightly) - Python version: 3.7 - CUDA/cuDNN version: CUDA 10, cuDNN 7.4.2 - GPU models and configuration: Titan RTX (linux), GTX 1080 (windows) cc @suo
oncall: jit,triaged
low
Critical
428,492,147
TypeScript
Suggestion: support compile time annotations alongside runtime decorators
## Search Terms annotation compile time decorator ## Suggestion The way that decorators are currently implemented in TypeScript mirrors their functionality in JS in that they exist at run time and work on run time objects. Because of TypeScript's transpiled nature, a tremendous amount of important metadata is lost by the time decorators are run. Even with the `reflect-metadata` package, important type information is lost. This represents a nearly insurmountable hurdle in developing high quality software in some extremely important areas. Consider the case of dependency injection. Any of the TS containers require manually wiring the container together and, even worse, a healthy amount of invocation-site syntactical noise. I come from a Java background, and I think about the Spring DI container, where there is no manual configuration and there is no invocation-site noise, because there's no invocation site. The container is constructed at compile time based on annotations and classpath discovery, which leads to a much smoother developer experience. Building a DI container in TS with that level of developer ergonomics would be a pretty herculean task. Consider the case of automatically generating webservice code from static model definitions. I'm not even sure if this is currently possible, but it's a feature that developers coming from mature web frameworks expect. I believe that these two things, as well as a myriad of other cases where AOP is well applied, are important pieces that should be available in the TS ecosystem. TS is a compiled language and constraining the metaprogramming (decorator) facilities to what is available in JS code prevents these tools from being developed, or at least greatly slows their progress. To this aim, I would like to suggest the addition of compiler annotations: functions that run on compiler nodes at compile time, have the ability to output additional code and otherwise interact with the compiler, but do not have the ability to mutate the compiler nodes. These suggested annotations would only ever be compile-time metadata, whereas decorators are run-time mutators. Adding or removing decorators will change run time behavior, but annotated code should always compile to the same code whether annotations are added or removed. Annotated code may inform a framework that other code is to be generated, but are otherwise completely erased in the JS code and exist only as a compiler facility. I believe it's fully possible to do all of this now with a custom compiler step that outputs generated code, and it may even be possible to integrate with the language server so that code doesn't have to constantly be generated. I don't believe this violates any of the design goals/non-goals of TS, as the annotations don't change the behavior (or emitted JS) of the annotated code, merely provide an easier facility for library developers to interact with TS and take advantage of the compile step. I don't want to presume to recommend a syntax or an API to this feature. To summarize, annotations would: * Only exist at compile time and be entirely erased by the compiler * Interact with the compiler directly, having access to the compiler node and API * Be able to output additional code to be compiled as part of the currently-running compile step * Be attachable to (almost?) any node, but have to declare what they can accept (e.g. an annotation for `DisableLint` would be allowed anywhere, but a `Injectable` annotation would likely want to narrow its scope)
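To make the DI ergonomics point concrete, here is roughly what runtime-decorator wiring looks like today with an InversifyJS-style container (identifiers below are illustrative); the suggested compile-time annotations would let a framework generate this wiring instead of requiring it at every call site.
```ts
import "reflect-metadata";
import { Container, injectable, inject } from "inversify";

const TYPES = { Engine: Symbol.for("Engine") };

interface Engine { start(): void; }

@injectable()
class V8Engine implements Engine {
    start() { console.log("vroom"); }
}

@injectable()
class Car {
    constructor(@inject(TYPES.Engine) private engine: Engine) {}
    drive() { this.engine.start(); }
}

// The container wiring and Symbol tokens are exactly the invocation-site noise
// the suggestion wants to move to compile time.
const container = new Container();
container.bind<Engine>(TYPES.Engine).to(V8Engine);
container.bind(Car).toSelf();
container.get(Car).drive();
```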
Suggestion,Awaiting More Feedback
medium
Major
428,525,508
rust
Why are 'maybe' bounds not permitted in trait objects?
For example, ```rust type Foo = dyn Send + ?Sized; ``` gives the error `` `?Trait` is not permitted in trait object types ``. Is there any good reason for this rule, either from a language-theoretic or an implementation viewpoint? I ask mainly because I'm trying to enforce this for trait objects that use trait aliases in the compiler, but this error is currently raised during AST validation and would be very difficult to raise during typeck, since information about 'maybe' bounds is only included in the HIR and not in the type system. CC @Centril
C-enhancement,A-trait-system,T-lang,A-trait-objects
low
Critical
428,550,192
rust
Run-pass checks for warnings, notes, etc.
When writing https://github.com/rust-lang/rust/pull/59658, I found it necessary to check for warnings in a `run-pass` test because `deny(warnings)` would observably change the warning pattern I was testing for in the first place. It would be nice if `run-pass` would check for `WARN`, `NOTE`, `HELP`, etc. and cause a test failure if those are not found when expected. Bonus points for also failing on unexpected `WARN`s etc.
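A sketch of what this could look like using compiletest's existing `//~` markers, which today are not enforced for `run-pass` tests:
```rust
// run-pass
fn main() {
    let unused = 42; //~ WARN unused variable: `unused`
}
```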
A-testsuite,A-lints,T-compiler
low
Critical
428,640,454
nvm
Node installed via nvm is not available in tools which weren't run from a terminal
- Operating system and version: Ubuntu 18.04 (Version doesn't seem to be relevant, I have seen this behavior on various versions) - `nvm debug` output: <details> ```sh nvm --version: v0.33.11 $SHELL: /bin/bash $SHLVL: 1 $HOME: /home/norbert-sule $NVM_DIR: '$HOME/.nvm' $PATH: $NVM_DIR/versions/node/v10.14.2/bin:$HOME/.local/bin:$HOME/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin $PREFIX: '' $NPM_CONFIG_PREFIX: '' $NVM_NODEJS_ORG_MIRROR: '' $NVM_IOJS_ORG_MIRROR: '' shell version: 'GNU bash, version 4.4.19(1)-release (x86_64-pc-linux-gnu)' uname -a: 'Linux 4.15.0-46-generic #49-Ubuntu SMP Wed Feb 6 09:33:07 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux' OS version: Ubuntu 18.04.2 LTS curl: /usr/bin/curl, curl 7.58.0 (x86_64-pc-linux-gnu) libcurl/7.58.0 OpenSSL/1.1.0g zlib/1.2.11 libidn2/2.0.4 libpsl/0.19.1 (+libidn2/2.0.4) nghttp2/1.30.0 librtmp/2.3 wget: /usr/bin/wget, GNU Wget 1.19.4 built on linux-gnu. git: /usr/bin/git, git version 2.17.1 grep: /bin/grep (grep --color=auto), grep (GNU grep) 3.1 awk: /usr/bin/awk, GNU Awk 4.1.4, API: 1.1 (GNU MPFR 4.0.1, GNU MP 6.1.2) sed: /bin/sed, sed (GNU sed) 4.4 cut: /usr/bin/cut, cut (GNU coreutils) 8.28 basename: /usr/bin/basename, basename (GNU coreutils) 8.28 rm: /bin/rm, rm (GNU coreutils) 8.28 mkdir: /bin/mkdir, mkdir (GNU coreutils) 8.28 xargs: /usr/bin/xargs, xargs (GNU findutils) 4.7.0-git nvm current: v10.14.2 which node: $NVM_DIR/versions/node/v10.14.2/bin/node which iojs: which npm: $NVM_DIR/versions/node/v10.14.2/bin/npm npm config get prefix: $NVM_DIR/versions/node/v10.14.2 npm root -g: $NVM_DIR/versions/node/v10.14.2/lib/node_modules ``` </details> - `nvm ls` output: <details> <!-- do not delete the following blank line --> ```sh v8.14.0 -> v10.14.2 default -> 10 (-> v10.14.2) node -> stable (-> v10.14.2) (default) stable -> 10.14 (-> v10.14.2) (default) iojs -> N/A (default) lts/* -> lts/dubnium (-> v10.14.2) lts/argon -> v4.9.1 (-> N/A) lts/boron -> v6.15.1 (-> N/A) lts/carbon -> v8.14.0 lts/dubnium -> v10.14.2 ``` </details> - How did you install `nvm`? (e.g. install script in readme, Homebrew): Installed via the snippet from the README. - What steps did you perform? * Installed NVM * Expected WebStorm to be able to detect the installed / default node version. - What happened? In the settings of WebStorm a warning tells me that it couldn't find the node binary in PATH and thus I have to specify an absolute path which is not portable. The cause is that nvm installs itself to `.bashrc`. It seems that .bashrc never gets evaluated if you start your desktop and run an app from the desktop environment directly (ie: clicking on an icon, autostart, etc). This results in the app having an environment with a PATH variable which lacks the appropriate node path. - What did you expect to happen? I expect WebStorm to be able to find the node binary. More broadly: I expect the PATH environment variable inherited from the context of my desktop environment to contain the path of the default node installation. - Is there anything in any of your profile files (`.bashrc`, `.bash_profile`, `.zshrc`, etc) that modifies the `PATH`? No, standard Ubuntu install. Note: I have worked around the issue by copying nvm related code to .profile. It is executed right when the user logs in and thus the gui and other tools can inherit the changes to PATH. According to the bash man page: > .bash_profile is executed for login shells, while .bashrc is executed for interactive non-login shells. 
.bash_profile was not available in my home folder and I don't really have a great understanding of these files, so I opted to simply use .profile instead. Still, based on this fact and the fact that .bashrc is only evaluated when you start a bash shell in the terminal, I wonder why it was chosen as the target for nvm's scripts.
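For reference, the workaround amounts to putting the standard nvm bootstrap lines into `~/.profile` (or `~/.bash_profile`) so that login sessions, and therefore anything launched from the desktop environment, inherit the PATH change:
```sh
# In ~/.profile (evaluated for login sessions, unlike ~/.bashrc):
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"
```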
installing nvm: profile detection
low
Critical
428,666,288
TypeScript
`pipe` loses generics
<!-- 🚨 STOP 🚨 𝗦𝗧𝗢𝗣 🚨 𝑺𝑻𝑶𝑷 🚨 Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Even if you think you've found a *bug*, please read the FAQ first, especially the Common "Bugs" That Aren't Bugs section! Please help us by doing the following steps before logging an issue: * Search: https://github.com/Microsoft/TypeScript/search?type=Issues * Read the FAQ: https://github.com/Microsoft/TypeScript/wiki/FAQ Please fill in the *entire* template below. --> <!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. --> **TypeScript Version:** 3.4.1 <!-- Search terms you tried before logging this (so others can find this issue more easily) --> **Search Terms:** generic rest parameters pipe compose **Code** I'm defining `pipe` using generic rest parameters as recommended here: https://github.com/Microsoft/TypeScript/issues/29904#issuecomment-471334674. ```ts // Copied from https://github.com/Microsoft/TypeScript/issues/29904#issuecomment-471334674 declare function pipe<A extends any[], B>(ab: (...args: A) => B): (...args: A) => B; declare function pipe<A extends any[], B, C>( ab: (...args: A) => B, bc: (b: B) => C, ): (...args: A) => C; declare function pipe<A extends any[], B, C, D>( ab: (...args: A) => B, bc: (b: B) => C, cd: (c: C) => D, ): (...args: A) => D; declare const myGenericFn: <T>(t: T) => string[]; declare const join: (strings: string[]) => string; // Expected type: `<T>(t: T) => string` // Actual type: `(t: any) => string` const fn1 = pipe( myGenericFn, join, ); // Workaround: // Expected and actual type: `<T>(t: T) => string` const fn2 = pipe( myGenericFn, strings => join(strings), ); ``` **Playground Link:** https://www.typescriptlang.org/play/index.html#src=declare%20function%20pipe%3CA%20extends%20any%5B%5D%2C%20B%3E(ab%3A%20(...args%3A%20A)%20%3D%3E%20B)%3A%20(...args%3A%20A)%20%3D%3E%20B%3B%0D%0Adeclare%20function%20pipe%3CA%20extends%20any%5B%5D%2C%20B%2C%20C%3E(%0D%0A%20%20%20%20ab%3A%20(...args%3A%20A)%20%3D%3E%20B%2C%0D%0A%20%20%20%20bc%3A%20(b%3A%20B)%20%3D%3E%20C%2C%0D%0A)%3A%20(...args%3A%20A)%20%3D%3E%20C%3B%0D%0Adeclare%20function%20pipe%3CA%20extends%20any%5B%5D%2C%20B%2C%20C%2C%20D%3E(%0D%0A%20%20%20%20ab%3A%20(...args%3A%20A)%20%3D%3E%20B%2C%0D%0A%20%20%20%20bc%3A%20(b%3A%20B)%20%3D%3E%20C%2C%0D%0A%20%20%20%20cd%3A%20(c%3A%20C)%20%3D%3E%20D%2C%0D%0A)%3A%20(...args%3A%20A)%20%3D%3E%20D%3B%0D%0A%0D%0Adeclare%20const%20myGenericFn%3A%20%3CT%3E(t%3A%20T)%20%3D%3E%20string%5B%5D%3B%0D%0Adeclare%20const%20join%3A%20(strings%3A%20string%5B%5D)%20%3D%3E%20string%3B%0D%0A%0D%0A%2F%2F%20Expected%20type%3A%20%60%3CT%3E(t%3A%20T)%20%3D%3E%20string%60%0D%0A%2F%2F%20Actual%20type%3A%20%60(t%3A%20any)%20%3D%3E%20string%60%0D%0Aconst%20fn1%20%3D%20pipe(%0D%0A%20%20%20%20myGenericFn%2C%0D%0A%20%20%20%20join%2C%0D%0A)%3B%0D%0A%0D%0A%2F%2F%20Workaround%3A%0D%0A%2F%2F%20Expected%20and%20actual%20type%3A%20%60%3CT%3E(t%3A%20T)%20%3D%3E%20string%60%0D%0Aconst%20fn2%20%3D%20pipe(%0D%0A%20%20%20%20myGenericFn%2C%0D%0A%20%20%20%20strings%20%3D%3E%20join(strings)%2C%0D%0A)%3B%0D%0A **Related Issues:** https://github.com/Microsoft/TypeScript/issues/29904 /cc @ahejlsberg
Needs Investigation
low
Critical
428,722,604
pytorch
[Feature Request] Flattened indices option for max pooling
## 🚀 Feature Option to return flattened indices for max pooling layers. ## Motivation Some operations require to have access to the flattened indices that ONNX returns initially. For instance, let's consider this vision example: I am working on a layer that counts how many times the element (pixel) at position `i` was the winner (selected as maximum) in a max pooling 2d operation. As things are right now, this is the solution: ```python def forward(self, input): _, indices = self.maxpool2d(input) N, C, H, W = input.size() minlength = H * W Nout, Cout, Hout, Wout = indices.size() indices_count = [t.bincount(minlength=minlength) for t in indices.view(-1, Hout * Wout).unbind()] return torch.stack(indices_count).view(input.size()) ``` This implementation is very slow. The overhead introduced by running this for-loop could be avoided if flattened indices were returned. This is how the code would look like then: ```python def forward(self, input): input_flattened_length, input_shape = input.nelement(), input.size() _, indices = self.maxpool2d(input) return indices.view(-1).bincount(minlength=input_flattened_length).view(input_shape) ``` The indices for max pooling 2d are currently referencing local frames, non-flattened. See [this issue](https://github.com/pytorch/pytorch/pull/16455#issuecomment-460776407) for a clearer picture of what this means. ## Pitch This feature would allow to return flattened indices, in the same way as [tf.nn.max_pool_with_argmax](https://www.tensorflow.org/api_docs/python/tf/nn/max_pool_with_argmax) does. ## Alternatives 1. To implement `apply_along_axis`. This is a draft showing how it would solve my problem: ```python def forward(self, input): _, indices = self.maccpool2d(-input) N, C, H, W = input.size() minlength = H * W Nout, Cout, Hout, Wout = indices.size() r = np.apply_along_axis(lambda x: np.bincount(x, minlength=minlength), axis=-1, arr=indices.view(N, C, -1).numpy()) return torch.from_numpy(r).view(input.size()) ``` 2. To add a frequency counter that takes the indices returned by max pooling 2d (for instance) and sets in the location of the tensor corresponding to each index, the amount of times it was encountered in the returned indices. ## Additional context In short, what I am trying to implement is the following: ![CodeCogsEqn (2)](https://user-images.githubusercontent.com/7946422/55476426-d1c30900-5616-11e9-85ba-4ae3f2a91677.gif) where C contains the unfolded set of kernel windows. In words: for each kernel window `c`, it adds `1` in the output at the location of the winner, `1[argmax c]`. cc @albanD @mruberry @houseroad @spandantiwari @lara-hdr @BowenBao @neginraoof
module: nn,triaged,enhancement,module: pooling
low
Major
428,751,207
godot
Create Outline Mesh unexpected result for PlaneMesh and flat CubeMesh
**Godot version:** Godot v3.1.stable.official **OS/device including version:** Windows 10 64bit **Issue description:** The "Create Outline Mesh" doesn't work properly for PlaneMesh (or CubeMesh with x xor y xor z set to 0), creating an outline that is just a copy of the original mesh offset by the specified outline width. **Left:** result of the automatically generated outline, **Right:** expected result Top: ![Top](https://i.imgur.com/CVrR1F3.png) Bottom: ![Bottom](https://i.imgur.com/vlYV6n7.png) Side: ![Side](https://i.imgur.com/5ICPA53.png) **Steps to reproduce:** 1. Create a MeshInstance 2. Set the MeshInstances Mesh to a PlaneMesh (or CubeMesh with x xor y xor z set to 0) 3. Generate the outline by clicking on Mesh -> Create Outline Mesh **Minimal reproduction project:** [OutlineIssue.zip](https://github.com/godotengine/godot/files/3038749/OutlineIssue.zip)
topic:rendering,topic:editor,documentation
low
Minor
428,870,399
pytorch
Running custom operator tests manually is too difficult
To compile/run custom operator tests one needs to: 1. build the repo normally 2. find the folder where the custom operator tests live 3. make a new cmake build folder for them 4. run cmake with CMAKE_PREFIX_PATH set correctly, otherwise the FindTorch.cmake file is not found and it doesn't work 5. figure out how to invoke the C++ tests 6. figure out how to invoke the python tests. The precise details for osx and linux are spread across at least 4 separate CI scripts hidden in dot folders. Instead, there should be a python test/test_custom_ops.py that simply runs all those steps for you. cc @ezyang @gchanan
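A rough sketch of what such a driver could look like. Every path, target, and binary name below is an assumption about the layout described above, not the actual file set:

```python
# Hypothetical test/test_custom_ops.py driver (paths and names are assumed).
import os
import subprocess
import tempfile
import unittest

import torch

REPO_ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
SRC_DIR = os.path.join(REPO_ROOT, "test", "custom_operator")  # assumed location


class TestCustomOperators(unittest.TestCase):
    def test_build_and_run(self):
        build_dir = tempfile.mkdtemp()
        # Point CMake at the installed torch package so FindTorch.cmake resolves.
        prefix = os.path.dirname(torch.__file__)
        subprocess.check_call(
            ["cmake", "-DCMAKE_PREFIX_PATH=" + prefix, SRC_DIR], cwd=build_dir)
        subprocess.check_call(["cmake", "--build", "."], cwd=build_dir)
        # Assumed names for the C++ test binary and the Python test script.
        subprocess.check_call([os.path.join(build_dir, "test_custom_ops")])
        subprocess.check_call(
            ["python", os.path.join(SRC_DIR, "test_custom_ops.py")], cwd=build_dir)


if __name__ == "__main__":
    unittest.main()
```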
module: cpp-extensions,module: tests,triaged,enhancement
low
Minor
428,906,438
flutter
Touching the screen should be represented as 0x01 in buttons
## Reasoning Currently, "touching" is not considered a button. More specifically, | Action | Dispatched `PointerDownEvent` | | -- | -- | | Touch screen tap start | `buttons = 0x0` | | Stylus tap start | `buttons = <0_or_other_buttons>` | | Inv-stylus tap start | `buttons = <0_or_other_buttons>` | However, in reality, touch and stylus touch are almost always considered the same as the left mouse button. Currently, a gesture that wants to recognize "touch tap, stylus tap or LMB press" has to use: ``` (event.device == PointerDeviceKind.mouse && event.buttons == kPrimaryMouseButton) || (event.buttons == 0) ``` If we define `0x01` as the "primary button" that is equivalent to LMB on all devices, then a gesture that wants to recognize the aforementioned buttons can simply use: ``` event.buttons == kPrimaryButton ``` where `kPrimaryButton` is a new constant with value `1`. Another benefit of this change is to fix the misalignment between `buttons` and `down`. Currently, the relationship between `(buttons != 0)` and `(down != false)` is different across devices: - The two booleans are the same for mouse. - The two booleans are independent for stylus. - And for touch, `buttons` doesn't make sense. After the change, the two booleans will always be the same. Last but not least, this change also aligns with the design where stylus buttons start from `0x02` instead of `0x01`. Some discussion and demand can be found at [[1]](https://docs.google.com/a/google.com/document/d/1rlHXrnhic2Wz0fXI8PX2HCghZLrG1xJIcQ6Draf3l-4/edit?disco=AAAACqf2puI) [[2]](https://docs.google.com/a/google.com/document/d/1rlHXrnhic2Wz0fXI8PX2HCghZLrG1xJIcQ6Draf3l-4/edit?disco=AAAACqf2puA) [[3]](https://github.com/flutter/flutter/pull/30339#discussion_r271087162) ## Changes We propose to add `0x01` to `buttons` whenever the pointer is in contact with the screen. More specifically, | Action | Dispatched `PointerDownEvent` | | -- | -- | | Touch screen tap start | `buttons = 0x1` | | Stylus tap start | `buttons = 0x1 \| <other_buttons>` | | Inv-stylus tap start | `buttons = 0x1 \| <other_buttons>` | No changes are made to mouse events. ## Breaking: No Since Flutter does not support gestures that recognize buttons, this change will not break existing code, except for code that implements its own gesture recognizers that distinguish buttons. ## Development plan The correct place for this feature is the engine (embedder). Unfortunately, this change requires tracking the state of `down`, which is done by `PointerEventConverter`. `PointerEventConverter` will be moved to the embedder by https://github.com/flutter/flutter/pull/28972 https://github.com/flutter/engine/pull/8064 and https://github.com/flutter/engine/pull/8088, but before that happens we'll have to patch it in the framework. ## Related PR https://github.com/flutter/flutter/pull/30457
framework,f: gestures,c: proposal,P3,team-framework,triaged-framework
low
Minor
428,950,128
rust
Move the compiler flags in the unstable book to the rustc book
# Description Currently the unstable book contains three sections: - Unstable compiler flags - Unstable language features - Unstable library features. As has been mentioned in #41142 it might be better to move the unstable compiler flags to the rustc book. I see the following reasons: - Compiler flags are already mentioned [in the rustc book](https://doc.rust-lang.org/rustc/command-line-arguments.html); to keep documentation close together, I believe it would make sense to keep all compiler flags in one place. In order to avoid documentation duplication (which is hard to maintain), I believe placing them in the rustc book is more logical than not mentioning them in the rustc book and only having them in the unstable book. - It seems that the unstable compiler flags in the unstable book are not maintained, unless there are only two unstable compiler flags (which I believe is not true since -Z lists many options). I'd like to hear your opinion on this and, if you agree, under which section we should place this documentation. Could we also aim to automate the generation of the documentation for these nightly compiler flags?
A-frontend,C-enhancement,T-compiler,A-docs
low
Minor
428,978,599
storybook
Javascript heap out of memory when doing build-storybook
**Describe the bug** When trying to do build-storybook it gets to 92% and then fails with the following out of memory error: ``` 92% chunk asset optimization TerserPlugin FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory 1: 00007FF7CF93EEE5 2: 00007FF7CF918CD6 3: 00007FF7CF9196E0 4: 00007FF7CFD80D3E 5: 00007FF7CFD80C6F 6: 00007FF7CFCCC594 7: 00007FF7CFCC2B67 8: 00007FF7CFCC10DC 9: 00007FF7CFCCA0B7 10: 00007FF7CFCCA136 11: 00007FF7CFDEF7B7 12: 00007FF7CFEC87FA 13: 000000D1798DC6C1 error Command failed with exit code 134. ``` The entirety of my source code is less than 1 MB so not sure why I would be OOM. **To Reproduce** Steps to reproduce the behavior: Run `yarn build-storybook` **Expected behavior** The build shouldn't fail. **Screenshots** If applicable, add screenshots to help explain your problem. **Code snippets** My current webpack.config.js ``` const TerserPlugin = require('terser-webpack-plugin'); module.exports = { optimization: { // This line added as an earlier solution to this OOM error minimizer: [new TerserPlugin({ parallel: false })], }, resolve: { extensions: ['.ts', '.tsx'] }, module: { rules: [{ test: /\.js$/, use: 'source-map-loader', exclude: /node_modules/, enforce: 'pre' }, { test: /\.tsx?$/, use: 'ts-loader', exclude: /node_modules/ }, { test: /\.scss$/, use: [{ loader: "style-loader" }, { loader: "css-loader", options: { sourceMap: true } }, { loader: "sass-loader", options: { sourceMap: true } }] }, { test: /\.stories\.tsx?$/, loaders: [{ loader: require.resolve('@storybook/addon-storysource/loader'), options: { parser: 'typescript' } }], enforce: 'pre', } ] } } ``` My current .storybook config.js ``` import { addDecorator, configure } from '@storybook/react'; import { withOptions } from '@storybook/addon-options'; import { withKnobs } from '@storybook/addon-knobs'; // add withKnobs addDecorator(withKnobs); // Option defaults: // Full settings here: https://github.com/storybooks/storybook/tree/master/addons/options addDecorator( withOptions({ name: 'some-package Storybook', showAddonPanel: false, addonPanelInRight: true, showStoriesPanel: false, sortStoriesByKind: true }) ); function loadStories() { const req = require.context('../src/', true, /\.stories\.tsx?$/) req.keys().forEach((filename) => req(filename)) } configure(loadStories, module); ``` I also tried changing the build-storybook command in my package.json to force increase the memory, but that doesn't seem to work either. 
` "build-storybook": "npx --max_old_space_size=16384 build-storybook"` My full package.json ``` { "name": "some-package", "version": "0.0.1", "description": "PackageDescription", "registry": "https://myurl.com/", "main": "dist/charts.js", "types": "dist/charts.d.ts", "license": "ISC", "engines": { "node": ">=8.*", "yarn": ">=1.*" }, "scripts": { "auth": "npx vsts-npm-auth -config .npmrc", "dev": "webpack --mode=development", "dev:watch": "webpack --mode=development --watch", "build": "webpack --mode=production", "lint": "tslint ./src/**/*.{ts,tsx}", "fix_coverage": "node ./fix_coverage.js", "test": "yarn lint && jest --coverage", "test:watch": "yarn lint && jest --watch", "test:watchAll": "yarn lint && jest --watchAll", "storybook": "start-storybook -p 6006", "build-storybook": "npx --max_old_space_size=16384 build-storybook" }, "dependencies": { "json5": "^2.1.0", "lodash": "4.17.11", "moment": "^2.24.0", "react-localization": "^1.0.13", "react-select": "^2.3.0", "react-split-pane": "0.1.77", "sinon": "^7.2.3", "ts-mock-imports": "^1.2.2" }, "peerDependencies": { "office-ui-fabric-react": "6.*", "plotly.js": "1.*", "react": "16.*", "react-dom": "16.*", "react-localization": "^1.0.13", "react-plotly.js": "2.*", "uuid": "^3.3.2" }, "devDependencies": { "@babel/core": "7.1.6", "@storybook/addon-knobs": "4.1.16", "@storybook/addon-options": "4.1.16", "@storybook/addon-storysource": "4.1.16", "@storybook/react": "4.1.16", "@types/enzyme": "^3.1.15", "@types/jest": "^23.3.10", "@types/lodash": "4.14.118", "@types/plotly.js": "1.41.0", "@types/react": "16.7.9", "@types/react-dom": "16.0.11", "@types/react-plotly.js": "2.2.2", "@types/react-select": "^2.0.11", "@types/storybook__addon-knobs": "4.0.4", "@types/storybook__react": "4.0.1", "@types/uuid": "^3.4.4", "babel-loader": "8.0.4", "clean-webpack-plugin": "1.0.0", "css-loader": "^2.1.0", "dts-bundle-webpack": "1.0.1", "enzyme": "^3.8.0", "enzyme-adapter-react-16": "^1.7.1", "enzyme-to-json": "^3.3.5", "fork-ts-checker-webpack-plugin": "0.5.0", "fs-jetpack": "2.2.0", "gulp-jest": "^4.0.2", "identity-obj-proxy": "^3.0.0", "inline-css": "2.4.1", "jest": "^23.6.0", "jest-canvas-mock": "^1.1.0", "jest-cli": "^24.1.0", "jest-junit": "5.2.0", "net": "^1.0.2", "node-sass": "^4.11.0", "office-ui-fabric-react": "6.158.0", "plotly.js": "^1.44.4", "react": "16.6.3", "react-dom": "16.6.3", "react-plotly.js": "2.2.0", "sass-loader": "^7.1.0", "source-map-loader": "0.2.4", "style-loader": "^0.23.1", "terser-webpack-plugin": "^1.2.3", "tls": "^0.0.1", "ts-jest": "23.10.5", "ts-loader": "5.3.1", "tslint": "5.11.0", "tslint-config-prettier": "1.17.0", "tslint-react": "3.6.0", "typescript": "3.2.1", "webpack": "4.29.6", "webpack-cli": "3.3.0" } } ``` **System:** - OS: Windows 10 - Device: HP Desktop - Browser: Chrome - Framework: React - Addons: @storybook/addon-knobs, @storybook/addon-options, @storybook/addon-storysource - Version: 4.1.16
question / support,has workaround,performance issue
high
Critical
428,989,944
node
Documented way to add certificates to existing SecureContext
In some cases, I've found that I've wanted to add a single CA to the list of trusted CAs that Node.js uses by default. There seems to be no documented way to do this. As it stands, officially, if you want to use non standard CAs, you can, but must also specify all CAs that might have otherwise been loaded automatically. From the `createSecureContext` documentation: > `ca <string> | <string[]> | <Buffer> | <Buffer[]>` Optionally override the trusted CA certificates. Default is to trust the well-known CAs curated by Mozilla. **Mozilla's CAs are completely replaced when CAs are explicitly specified using this option.** [...] In trying to accomplish this, I've come across a seemingly stable but undocumented API https://github.com/nodejs/node/issues/20432#issuecomment-441514919 > ```js > const tls = require('tls'); > // Create context with default CAs from Mozilla > const secureContext = tls.createSecureContext(); > // Add a CA certificate from Let's Encrypt > // https://letsencrypt.org/certs/lets-encrypt-x3-cross-signed.pem.txt > secureContext.context.addCACert(`-----BEGIN CERTIFICATE----- > MIIEkjCCA3qgAwIBAgIQCgFBQgAAAVOFc2oLheynCDANBgkqhkiG9w0BAQsFADA/ > MSQwIgYDVQQKExtEaWdpdGFsIFNpZ25hdHVyZSBUcnVzdCBDby4xFzAVBgNVBAMT > DkRTVCBSb290IENBIFgzMB4XDTE2MDMxNzE2NDA0NloXDTIxMDMxNzE2NDA0Nlow > SjELMAkGA1UEBhMCVVMxFjAUBgNVBAoTDUxldCdzIEVuY3J5cHQxIzAhBgNVBAMT > GkxldCdzIEVuY3J5cHQgQXV0aG9yaXR5IFgzMIIBIjANBgkqhkiG9w0BAQEFAAOC > AQ8AMIIBCgKCAQEAnNMM8FrlLke3cl03g7NoYzDq1zUmGSXhvb418XCSL7e4S0EF > q6meNQhY7LEqxGiHC6PjdeTm86dicbp5gWAf15Gan/PQeGdxyGkOlZHP/uaZ6WA8 > SMx+yk13EiSdRxta67nsHjcAHJyse6cF6s5K671B5TaYucv9bTyWaN8jKkKQDIZ0 > Z8h/pZq4UmEUEz9l6YKHy9v6Dlb2honzhT+Xhq+w3Brvaw2VFn3EK6BlspkENnWA > a6xK8xuQSXgvopZPKiAlKQTGdMDQMc2PMTiVFrqoM7hD8bEfwzB/onkxEz0tNvjj > /PIzark5McWvxI0NHWQWM6r6hCm21AvA2H3DkwIDAQABo4IBfTCCAXkwEgYDVR0T > AQH/BAgwBgEB/wIBADAOBgNVHQ8BAf8EBAMCAYYwfwYIKwYBBQUHAQEEczBxMDIG > CCsGAQUFBzABhiZodHRwOi8vaXNyZy50cnVzdGlkLm9jc3AuaWRlbnRydXN0LmNv > bTA7BggrBgEFBQcwAoYvaHR0cDovL2FwcHMuaWRlbnRydXN0LmNvbS9yb290cy9k > c3Ryb290Y2F4My5wN2MwHwYDVR0jBBgwFoAUxKexpHsscfrb4UuQdf/EFWCFiRAw > VAYDVR0gBE0wSzAIBgZngQwBAgEwPwYLKwYBBAGC3xMBAQEwMDAuBggrBgEFBQcC > ARYiaHR0cDovL2Nwcy5yb290LXgxLmxldHNlbmNyeXB0Lm9yZzA8BgNVHR8ENTAz > MDGgL6AthitodHRwOi8vY3JsLmlkZW50cnVzdC5jb20vRFNUUk9PVENBWDNDUkwu > Y3JsMB0GA1UdDgQWBBSoSmpjBH3duubRObemRWXv86jsoTANBgkqhkiG9w0BAQsF > AAOCAQEA3TPXEfNjWDjdGBX7CVW+dla5cEilaUcne8IkCJLxWh9KEik3JHRRHGJo > uM2VcGfl96S8TihRzZvoroed6ti6WqEBmtzw3Wodatg+VyOeph4EYpr/1wXKtx8/ > wApIvJSwtmVi4MFU5aMqrSDE6ea73Mj2tcMyo5jMd6jmeWUHK8so/joWUoHOUgwu > X4Po1QYz+3dszkDqMp4fklxBwXRsW10KXzPMTZ+sOPAveyxindmjkW8lGy+QsRlG > PfZ+G6Z6h7mjem0Y+iWlkYcV4PIWL1iwBi8saCbGS5jN2p8M+X+Q7UNKEkROb3N6 > KOqkqm57TH2H3eDJAkSnh6/DNFu0Qg== > -----END CERTIFICATE-----`); > // Use it > const sock = tls.connect(443, 'host', {secureContext}); > ``` It looks to be a real API for a number of reasons: - No underscores (`_`) - It is named in a [change log](https://github.com/nodejs/node/blob/3516052bee118dce767dd330fa857f6615c5b28a/doc/changelogs/CHANGELOG_V7.md) - Tests [use](https://github.com/nodejs/node/pull/10389/files#diff-8959a6c6e723d70e9903a7d3549f02b1R48) [this](https://github.com/nodejs/node/blob/8b4af64f50c5e41ce0155716f294c24ccdecad03/test/parallel/test-tls-addca.js#L15) [function](https://github.com/nodejs/node/blob/8b4af64f50c5e41ce0155716f294c24ccdecad03/test/internet/test-tls-add-ca-cert.js#L47-L49). 
- Internal code [uses this API](https://github.com/nodejs/node/blob/f512f5ea138fe86e47c0179d5733044daf6f4fe6/lib/_tls_common.js#L100-L114) Would it make sense to document this feature? ### Alternatives If the default list of CAs were accessible in node, we could do this ourselves without adding extra APIs. I have not actually looked into whether this is possible or unintentionally exposed by the SecureContext API. ### Related Issues https://github.com/nodejs/node/issues/4464 https://github.com/nodejs/node/issues/20432 https://github.com/nodejs/node/pull/26908#issuecomment-479147423
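A related, documented alternative worth noting as an aside (only useful when process-wide trust is acceptable): the `NODE_EXTRA_CA_CERTS` environment variable extends the bundled well-known CAs with the certificates from a file instead of replacing them, e.g.:

```
NODE_EXTRA_CA_CERTS=/path/to/extra-ca.pem node app.js
```

(the path is illustrative). It cannot scope the extra CA to a single `SecureContext`, which is what the `addCACert` approach above provides.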
tls,crypto,feature request,never-stale
medium
Critical
428,990,855
flutter
Make IgnorePointer more customizable
We wrap the entire scrollable's child in an IgnorePointer, which is the right behavior on Android. But for iOS, even though the large title nav bar moves with the scrollable, it's not 'part of the scrollable': vertical drag gestures on the nav bar don't apply to the scrollable, and the contents of the nav bar should be tappable even while scrolling. Create a new PointerFilter widget that always traverses its children for hit testing when tapped. The descendant tree may include a FilteredPointer widget which itself would both hit test its own children and also run a predicate function. If both pass, it re-wraps its descendants' hit test entry with a filtered hit test entry. The parent PointerFilter's render object, as it processes the children's hit test result, only unpacks the filtered hit test entries and sends those back up to its parents.
platform-ios,framework,a: fidelity,c: proposal,P2,team-ios,triaged-ios
low
Minor
428,992,483
three.js
Non POT ImageBitmap texture can be upside down
##### Description of the problem A non-POT `ImageBitmap` texture can be upside down. Demo: https://jsfiddle.net/7do6um2f/ (Use Chrome) A non-POT image can be resized to a POT `canvas` in `WebGLRenderer(WebGLTexture)`. If the original image is an `ImageBitmap` and `texture.flipY` is `true`, the `texture` will be upside down. This comes from how the `ImageBitmap` API differs from the others: `ImageBitmap` requires `flipY` at bitmap creation, while the others require it when uploading data to the GPU (= `texture.flipY`). `texture.flipY` used when uploading data to the GPU is ignored for `ImageBitmap`. But if a non-POT `ImageBitmap` is converted to a `canvas`, `texture.flipY` will have an effect, so the texture will be upside down. `texture.flipY` has no effect for `ImageBitmap`, so we should force it to `false` if the texture's image is an `ImageBitmap`? The API difference has another issue. (Maybe it isn't so serious, though.) `flipY` and `premultiplyAlpha` are required at bitmap creation. `texture.flipY`/`premultiplyAlpha` used when uploading data to the GPU are ignored for `ImageBitmap`. So the following code has no effect for `ImageBitmap`, but it does have an effect if the `ImageBitmap` is resized and converted to a `canvas`. ``` texture.premultiplyAlpha = ! texture.premultiplyAlpha; texture.flipY = ! texture.flipY; texture.needsUpdate = true; ``` ##### Three.js version - [x] Dev - [ ] r103 - [ ] ... ##### Browser - [x] All of them - [ ] Chrome - [ ] Firefox - [ ] Internet Explorer ##### OS - [x] All of them - [ ] Windows - [ ] macOS - [ ] Linux - [ ] Android - [ ] iOS ##### Hardware Requirements (graphics card, VR Device, ...)
Bug
low
Major
429,004,896
flutter
Write gesture test for GoogleMap#onTap once we have end to end testing
We don't support testing gestures on platform views yet.
team,platform-android,platform-ios,a: platform-views,p: maps,package,team-ecosystem,P2,c: tech-debt,triaged-ecosystem
low
Minor
429,059,722
TypeScript
Add JSDOC @module support for intellisense.
## Search Terms Intellisense jsdoc support for modules ## Suggestion When adding `/** @module moduleName Module Description. */` to a module and then importing it like this: `import * as myName from "./moduleName";` I think it makes sense, when importing the whole namespace, to include the jsdoc `@module` comment. I also think that pressing Ctrl+Space for code completion on `"./"` (to get a list of modules) should show the module documentation as well. ## Use Cases Better support for module documentation. I'm developing some module libraries and it would be great to give end users a good experience with better intellisense documentation. ## Examples _aModule.ts_ `/** @module aModule Does something awesome. */` _app.ts_ `import * as aModule from "./aModule";` // (moduleName should both have the doc details) // ("./{*.*}" - all files in code completion should show the documentation as well) ## Checklist My suggestion meets these guidelines: * [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code * [x] This wouldn't change the runtime behavior of existing JavaScript code * [x] This could be implemented without emitting different JS based on the types of the expressions * [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.) * [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
Suggestion,Awaiting More Feedback
low
Major
429,062,162
flutter
_scaffoldKey.currentState.showBottomSheet with grey background overlay
I'm using _scaffoldKey.currentState.showBottomSheet instead of showBottomSheet because it handles the keyboard better than showBottomSheet. But the problem is that when I'm using _scaffoldKey.currentState.showBottomSheet, it does not show the grey background overlay. Is it possible to add the background overlay?
framework,f: material design,a: quality,P2,workaround available,team-design,triaged-design
low
Major
429,093,686
frp
proposal - reload config file on save
## abstract I'd like to contribute to this project by adding a new feature. I wish to support reloading the config files automatically on save. ## details Users can enable this feature in `frpc.ini` ``` reload_on_save = true ``` and whenever the config file is written, a `reload` is automatically triggered. ### implementation details I would use [fsnotify](https://github.com/fsnotify/fsnotify) (cross-platform file system notifications for Go) to watch the config files and use Go channels to pass the information asynchronously. ## motivation It's more natural for users (including me) to `ssh` to the client and change the config file. AdminUI is great but `ssh` is better. A reload on save would be helpful.
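A minimal sketch of the watcher side, assuming a `reloadFunc` callback that triggers frpc's existing reload path; the wiring into frpc (function name, config path) is hypothetical, and only the fsnotify usage reflects the library's actual API:

```go
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

// watchConfig calls reloadFunc every time the config file is written.
func watchConfig(path string, reloadFunc func() error) error {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		return err
	}
	go func() {
		defer watcher.Close()
		for {
			select {
			case event, ok := <-watcher.Events:
				if !ok {
					return
				}
				if event.Op&fsnotify.Write == fsnotify.Write {
					if err := reloadFunc(); err != nil {
						log.Printf("reload after config change failed: %v", err)
					}
				}
			case err, ok := <-watcher.Errors:
				if !ok {
					return
				}
				log.Printf("config watcher error: %v", err)
			}
		}
	}()
	return watcher.Add(path)
}

func main() {
	// Stand-in for frpc's real reload; here it only logs.
	if err := watchConfig("./frpc.ini", func() error {
		log.Println("frpc.ini written, triggering reload (stub)")
		return nil
	}); err != nil {
		log.Fatal(err)
	}
	select {} // block; the real client would keep running its own loop
}
```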
proposal
low
Major