Dataset schema (one record per issue: id, repo, title, body, labels, priority, severity):
- id: int64 (values range from about 393k to 2.82B)
- repo: string (68 distinct values)
- title: string (length 1 to 936)
- body: string (length 0 to 256k)
- labels: string (length 2 to 508)
- priority: string (3 distinct values)
- severity: string (3 distinct values)
339,587,638
go
cmd/go: list: incomplete error message "no Go files in"
``` $ cat a.go package p import "missing" $ command go version go version devel +4da84adc0c Mon Jul 9 19:35:21 2018 +0000 linux/amd64 $ command go list -export -e a.go go build missing: no Go files in command-line-arguments ``` I expect the fix is something like this: ``` --- a/src/cmd/go/internal/load/pkg.go +++ b/src/cmd/go/internal/load/pkg.go @@ -185,6 +185,9 @@ func (e *NoGoError) Error() string { // to appear at the end of error message. return "no non-test Go files in " + e.Package.Dir } + if e.Package.Dir == "" { + return "no directory for package " + e.Package.ImportPath + } return "no Go files in " + e.Package.Dir } ```
NeedsFix,GoCommand
low
Critical
339,592,032
react-native
TextInput becomes slow after lots of typing
## Environment ``` $ react-native info Environment: OS: Linux 4.9 Node: 8.11.3 Yarn: 1.7.0 npm: 5.6.0 Watchman: Not Found Xcode: N/A Android Studio: Not Found Packages: (wanted => installed) react: 16.3.1 => 16.3.1 react-native: 0.56.0 => 0.56.0 (*) ``` (*) In my test app, actually v0.55.4 plus the patch from #19645 . Others on #19126 report the same symptoms using v0.56. ## Description After typing a lot of text into a controlled TextInput, like ~500 char on a recent fast phone, we start dropping frames. It gets worse the more you type; at ~1000 char, we very frequently drop frames and the app looks noticeably laggy. ## Reproducible Demo The original repro is by @s-nel: https://github.com/s-nel/rn19126 Originally reported at https://github.com/facebook/react-native/issues/19126#issuecomment-402904164; more discussion in subsequent comments in that thread. In particular, here's the video from that repro: ![video](https://github.com/s-nel/rn19126/raw/master/frameloss.gif) Here's a video by @s-nel on [my test app](https://github.com/gnprice/rn-textinput-test), with this description: > Yes I can reproduce from [that repo] after typing for a couple minutes straight. I don't have to clear anything as the original bug description [in #19126] indicates. Here you can see my JS framerate drop to single digits from adding a few characters ![image](https://user-images.githubusercontent.com/504326/42430009-adf31f14-8390-11e8-9c2e-527533484d35.gif) Here's my description of replicating that repro, again in my test app: > I just tried again with my test app, on RN 0.55.4 + patch. If I type continuously for about 60 seconds -- maybe 400-500 characters, at a rough estimate (just gibberish, both thumbs constantly typing letters) -- then the perf overlay gets to about "10 dropped so far", and zero stutters. If I continue and get up to about 90 seconds, the "dropped" figure climbs faster, and the framerate (in "JS: __ fps") noticeably drops. 
At about 150 seconds (~1000 characters?), the "dropped" figure rapidly climbs past 200, and the framerate hangs out around 30. > > Even at that point, it's still "0 stutters". And if I ignore the perf overlay and try typing normal text and watching it like I were using an app for real: it's a bit laggy, but doesn't make things feel unusable or outright broken, and I think most of the time I wouldn't even notice. That was on a Pixel 2 XL; the numbers will probably vary with hardware. ## Background from previous bug This bug seems closely related to #19126; the repro steps and symptoms seem to be exactly the same, except quantitatively much less severe. (The original reports of #19126 describe clearing the input as part of the repro; I now think that was probably a red herring.) The extreme symptoms of #19126 were absent in v0.54, introduced in v0.55, and fixed in v0.56 by #19645 . As I wrote when I figured out the cause of that (leading to the one-line fix in #19645): > [This buggy NaN comparison] means every text shadow node will create a `CustomLetterSpacingSpan` around its contents, even when the `letterSpacing` prop was never set. It must be that we're somehow ending up with large numbers of these shadow nodes -- that sounds like a bug in itself, but I guess it's normally low-impact -- and this condition is failing to prune them from causing a bunch of work here. I think we're now looking at exactly that underlying bug. It may well have been present in v0.54 and earlier releases; I don't think we have any data on that. [**EDIT:** @piotrpazola finds [below](https://github.com/facebook/react-native/issues/20119#issuecomment-489081902) that the bug is present in v0.50, and absent in v0.49.] I don't know anything more about what causes it; even the hypothesis that we're ending up with large numbers of text shadow nodes is just an inference from how it interacts with the line fixed in #19645. So, more debugging is required. 
I don't expect to do that debugging myself, because these remaining symptoms are no longer one of the top issues in our own app. But I hope somebody else will!
Ran Commands,Impact: Regression,Component: TextInput,Platform: Android,Bug
high
Critical
339,602,090
kubernetes
NamespaceSpec and ObjectMeta both contain a "finalizers" field
**Is this a BUG REPORT or FEATURE REQUEST?**: > Uncomment only one, leave it on its own line: > /kind bug > /kind feature **What happened**: `Namespace` has two finalizers fields: `Namespace.ObjectMeta.finalizers` and `Namespace.Spec.finalizers`. `NamespacedResourcesDeleter` uses `NamespaceSpec.finalizers` and `ObjectMeta.finalizers` appears to be unused/ineffective (please correct me if I'm wrong). **What you expected to happen**: `NamespacedResourcesDeleter` should use `ObjectMeta.finalizers`, or if there is a technical reason for not doing so, it should be documented how and why the fields are different. Also, if the `ObjectMeta.finalizers` field is completely unused for namespaces (can't quite tell if it is), maybe the API should fail namespace creation requests that set it. TL;DR: I mistakenly put a finalizer in the namespace metadata, instead of the namespace spec. That should either be supported, obviously wrong via documentation/naming, or not be a valid request.
kind/documentation,sig/api-machinery,lifecycle/frozen
low
Critical
339,608,188
pytorch
Accumulate into accreal instead of real for CPU loss functions
All the loss fns except for MSELoss (which I am patching right now) accumulate into real instead of accreal on CPU. cc @albanD @mruberry @jbschlosser @ezyang @zou3519 @gqchen @pearu @nikitaved @soulitzer
module: nn,module: loss,triaged
low
Minor
339,611,549
go
cmd/compile: teach prove about more type conversion operations
#26292 is about teaching the prove pass that, if `x byte` is <N, `int(x)` is also <N. In particular, it helps when using byte variables as indices, since that seems to mean an implicit conversion to int. That was causing an unnecessary bounds check in an `encoding/json` hot path. Here are some other cases which might be worthwhile to do. I don't know how useful they would be for software out there, like removing bounds checks in the standard library. * If `x uint32` is `<10`, `uint(x) < 10` (and same for `int32/int`) * If `x uint64` is `<10` and we're on 64-bit, `uint(x) < 10` * If `x uint` is `<10`, `byte(x) < 10` * If `x int` is non-negative and `<10`, `uint(x) < 10` /cc @aclements @rasky
Performance,NeedsInvestigation,compiler/runtime
low
Minor
339,643,383
go
x/net/http2: verify that net/http.Server.SetKeepAlivesEnabled(false) shuts down HTTP/2 server
Per https://github.com/golang/go/issues/20239#issuecomment-402199944 and its immediate reply, it seems that https://go-review.googlesource.com/c/net/+/43230 might've broken `net/http.Server.SetKeepAlivesEnabled(false)` shutting down HTTP/2 server connections. I guess there's no test for it. /cc @tombergan @pam4
help wanted,NeedsFix
low
Critical
339,659,783
rust
Consider using pair mode to return scalar pair bools as i1
Starting in #51583, we're representing scalar pair `bool`s as `i8` in LLVM aggregates to match their memory storage, whereas they are `i1` as immediate values. When a pair is the argument to a function, we use `PassMode::Pair` and pass each part like independent immediate values. We don't use that mode for return values though, so a paired `bool` will be extended to `i8` for return, then truncated back to `i1` when the caller unpacks it. Quoting @eddyb in https://github.com/rust-lang/rust/pull/51583#discussion_r195895958: > I wonder if they should be using the pair "passing mode" that arguments do, and create a LLVM aggregate type on the fly, using the immediate types for the pair components. That way we'd get `{i1, i1}` for returns, but everything else would see `{i8, i8}`. > > Not sure it's worth the complexity though. When inlining, LLVM should collapse the zext followed by trunc, just like it gets rid of packing into a pair and unpacking it.
A-codegen,T-compiler,A-cranelift
low
Minor
339,663,677
rust
`trivial_casts` warning misfires when casting from reference -> pointer -> void pointer
When casting from a reference to a pointer to a void pointer (necessary sometimes when doing C FFI), the `trivial_casts` warning fires, despite this being a necessary cast. ```rust #![warn(trivial_casts)] extern crate libc; fn main() { let x : libc::c_int = 4; let y : *const libc::c_void = &x as *const libc::c_int as *const libc::c_void; } ``` This produces the warning: > ``` > warning: trivial cast: `&i32` as `*const i32`. Cast can be replaced by coercion, this might require type ascription or a temporary variable > ``` You can work around this by using coercion to assign the intermediate type to a variable, but this shouldn't be necessary.
A-lints,C-bug
low
Major
339,678,431
react-native
Image Orientation on Android
## Environment React Native Environment Info: System: OS: macOS High Sierra 10.13.5 CPU: x64 Intel(R) Core(TM) i5-4570 CPU @ 3.20GHz Memory: 23.63 MB / 16.00 GB Shell: 3.2.57 - /bin/bash Binaries: Node: 10.6.0 - /usr/local/bin/node npm: 6.1.0 - /usr/local/bin/npm SDKs: iOS SDK: Platforms: iOS 11.4, macOS 10.13, tvOS 11.4, watchOS 4.3 IDEs: Android Studio: 2.3 AI-162.3871768 Xcode: 9.4.1/9F2000 - /usr/bin/xcodebuild npmPackages: react: 16.4.1 => 16.4.1 react-native: 0.56.0 => 0.56.0 npmGlobalPackages: react-native-cli: 2.0.1 react-native-git-upgrade: 0.2.7 ## Description There is an issue with how images are being rotated, specifically portrait images on Android. Portrait images are somewhat ambiguously rotated when they render. I found similar issues in the past explaining this problem that were fixed, but I am still experiencing it in 0.56. The Exif "orientation" value is seemingly not always respected. In screenshot below, both of these photos have orientation of "6", which from my research means rotate 90 degrees. As you can see, only one of the two were rotated (both photos of the cup were taken vertically). ![41812015-344c35d4-76f1-11e8-9e84-5a29a74672b1](https://user-images.githubusercontent.com/10585687/42486892-a9da41ca-83d4-11e8-8bc1-3a9c0203e681.jpg) This is another screenshot using the sample app linked below in the reproducible demo section. I took 6 photos vertically in a row, then displayed them from CameraRoll. The first one in the grid is not rotated correctly. ![42421293-807d1a04-82a9-11e8-890e-73c56aeca746](https://user-images.githubusercontent.com/10585687/42486926-e7dd2b7c-83d4-11e8-87f5-101f69891ab8.jpg) ## Reproducible Demo I could not create a snack with the most recent version of react native (0.56) as it appears that CRNA still uses 0.55 by default. However, I have a repo setup for a simple app that displays a photo grid from CameraRoll. I have only been able to reproduce this issue on Samsung Galaxy S8 and Samsung Galaxy S8+. 
To reproduce: - Take a few vertical photos with a physical device - clone https://github.com/phillmill/rn-android-image-orientation-test - run npm install - react-native run-android ## A workaround: Currently my workaround is to pass a false argument to setAutoRotateEnabled() in ./ReactAndroid/src/main/java/com/facebook/react/views/image/ReactImageView.java and compile react native from source. From there, I then rotate the image manually by reading the Exif value & using transforms accordingly. There may be a better work around, and I'd love to hear one, but that's all I have for now!
Platform: Android
low
Major
339,750,688
opencv
Add 'ptrT' methods to 'cv::Mat_'
`cv::Mat_` already provides the means to get a row/element pointer via `uchar* ptr(...)` and `template<T> T* ptr(...)` methods which are inconvenient since the element type is known at compile time, and there's an `operator[]` monstrosity (in my opinion) which makes source code unintelligible and somewhat unreadable: ``` cv::Mat_<float> mat; ... auto data1 = mat(42); // works the same for all mats (cv::Mat, cv::Matx, cv::Mat_) auto data2 = mat[13]; // additional bracket operator confuses the API semantics ``` I would propose adding a set of `_Tp* ptrT(...)` overloads with the same arguments as `template<T> T* ptr(...)`.
category: core,RFC
low
Minor
339,811,967
rust
2018 path clarity: When path lookup fails because of shadowed crate, provide hint
In Rust 2018, the new path system allows a local name to shadow a crate name: ```rust use std::num; struct Newtype(num::BigInt); // ERROR ``` The error is because `num` refers to the local name of the `num` module imported on the first line, so `num::BigInt` fails to look inside the crate. This can be avoided by using `extern::num::BigInt`, which explicitly refers to the crate (note that @aturon proposed [here](https://internals.rust-lang.org/t/relative-paths-in-rust-2018) to make `::num::BigInt` be the disambiguation syntax; this doesn't affect the feature request). The compiler provides no hint of this, however. It could, upon path failure, provide a hint along the lines of "`num` refers to the name declared here; the `num` crate contains `BigInt`, use `extern::num::BigInt` to refer to it" (except written more clearly). EDIT: The same could apply to any other case where shadowing hides a name that's used as a prefix in a path, but crates seem like the most important.
C-enhancement,A-diagnostics,A-resolve,T-compiler,D-terse
low
Critical
339,829,624
vscode
Counter-intuitive "editor.suggestSelection" behavior when completion is "kept open"
- VSCode Version: 1.25 - OS Version: Windows 10 In the Haxe extension, we want to support the following two use cases: - `identifier.|` - request completion for the fields within `identifier`. - `some.package.|` - filter packages / types by the dot path that's currently being typed. In this case, the completion items don't change when typing a dot, so the suggest widget should simply be kept open (but filtered accordingly). To support the first use case, we need to register `"."` as a trigger character, which means that `provideCompletionItems()` is invoked for the other case as well, even though it wouldn't be necessary there. But so far so good, we can simply return the same completion items again to keep the suggest widget open, and adjust the `range` of the items to match against the entire dot path typed so far. Here's a simple extension that simulates that: ```ts 'use strict'; import * as vscode from 'vscode'; export function activate(context: vscode.ExtensionContext) { vscode.languages.registerCompletionItemProvider('plaintext', { provideCompletionItems(doc, pos, token, context) { var range = new vscode.Range(new vscode.Position(0, 0), pos); if (doc.getText(range) == "Json.") { return [ {label: "parse"}, {label: "stringify"} ]; } return [ {label: "haxe", range: range}, {label: "haxe.ds", range: range}, {label: "haxe.ds.Map", range: range}, {label: "haxe.ds.BalancedTree", range: range}, {label: "Json - haxe.Json", insertText: "Json", range: range} ]; } }, "."); } ``` This approach actually works quite well, but it causes an annoying interaction with the default behavior of `"editor.suggestSelection"` (default is `recentlyUsed`). If the `Json - haxe.Json` entry has been selected before, it is auto-selected when typing the dot in `haxe.`: ![](https://i.imgur.com/jZSTbky.gif) This might not look like a big issue there, but with many completion items between the packages and `haxe.Json` it's more problematic. 
The packages you were just filtering against are now scrolled totally out of view: ![](https://i.imgur.com/tJS3NE0.gif) Of course, this can be worked around with `"editor.suggestSelection": "first"`, but that's not a great experience for our users. `preselect: true` also doesn't seem to help. Even if that's set for the `"haxe.ds"` item in the sample code above, the `Json` item is still auto-selected on `haxe.|`. I think the underlying issue here is that there isn't really a concept of "keeping the completion open / leaving the results unchanged", so VSCode thinks "ah, there were results, so this must be a _new_ completion and the `"editor.suggestSelection"` logic should be applied". Could the intention of keeping completion open be detected implicitly somehow, perhaps by comparing the previous to the current `provideCompletionItems()` result? If so, the `"editor.suggestSelection": "recentlyUsed"` step could be skipped in those cases.
suggest,under-discussion
low
Minor
339,848,521
go
cmd/compile: internal prefix paths leaking into generated DWARF
Please answer these questions before submitting your issue. Thanks! ### What version of Go are you using (`go version`)? go version devel +c78b7693ab Tue Jul 10 05:08:40 2018 +0000 linux/amd64 however this same problem appears to be present in older releases (1.10, 1.9, etc.) ### Does this issue reproduce with the latest release? Yes ### What operating system and processor architecture are you using (`go env`)? linux/amd64 ### What did you do? Within my GOPATH, I have a subdir "b.v1" containing b.go: ``` $ cat b.v1/b.go package b var q int func Top(x int) int { q += 1 if q != x { return 3 } return 4 } func OOO(y int) int { defer func() { q += Top(y) }() t := q q = y * y return t } ``` then a main program that imports the package b: ``` package main import ( "fmt" b "b.v1" ) var z int func main() { z = b.OOO(z) if b.Top(1) != 2 { fmt.Printf("Beware the Jabberwock, my son!\n") } } ``` When I compile this program and look at the generated DWARF, it appears that the package path for things in "b" has been mangled (for functions and variables -- compile unit still shows the correct path): ``` <1><7324d>: Abbrev Number: 3 (DW_TAG_subprogram) <7324e> DW_AT_name: issue26237/b%2ev1.Top <73264> DW_AT_inline: 1 (inlined) <73265> DW_AT_external: 1 <2><73266>: Abbrev Number: 16 (DW_TAG_formal_parameter) <73267> DW_AT_name: x <73269> DW_AT_variable_parameter: 0 <7326a> DW_AT_type: <0x128a> <2><7326e>: Abbrev Number: 0 ... <1><14ec2>: Abbrev Number: 7 (DW_TAG_variable) <14ec3> DW_AT_name: issue26237/b%2ev1.q <14ed7> DW_AT_location: 9 byte block: 3 80 8f 56 0 0 0 0 0 (DW_OP_addr: 568f80) <14ee1> DW_AT_type: <0x128a> <14ee5> DW_AT_external: 1 ``` Note the "%2ev1". It looks like this mangling is being done in PathToPrefix() in cmd/internal/objabi, no doubt to hide/mangle the "." within the name. 
While it seems fine to do mangling within the compiler, it doesn't seem as though the mangling should be leaking into the generated DWARF -- names there should reflect the original package path. If I build this program and run GDB on it, I can't print out the value of the variable 'q' without knowing the special mangled name, which seems unfriendly. ### What did you expect to see? (gdb) p b.v1.q 0 (gdb) ### What did you see instead? (gdb) p 'b.v1.q' No symbol "b.v1.q" in current context.
NeedsFix,Debugging,compiler/runtime
low
Major
339,853,491
go
x/text/cmd/gotext: extract command crash if no messages
### What version of Go are you using (`go version`)? go1.9.2 ### Does this issue reproduce with the latest release? Yes ### What operating system and processor architecture are you using (`go env`)? windows/amd64 ### What did you do? Run `gotext extract` command on a project directory without any messages to extract and no `golang.org/x/text/message` import. ### What did you expect to see? A blank or error response. ### What did you see instead? A runtime error : ``` >gotext extract panic: runtime error: invalid memory address or nil pointer dereference [signal 0xc0000005 code=0x0 addr=0x0 pc=0x866423] goroutine 1 [running]: golang.org/x/text/message/pipeline.(*extracter).seedEndpoints(0xc0421c6000) C:/GO-projects/src/golang.org/x/text/message/pipeline/extract.go:100 +0x63 golang.org/x/text/message/pipeline.Extract(0xc0420dc750, 0x1, 0x1, 0x0) C:/GO-projects/src/golang.org/x/text/message/pipeline/extract.go:45 +0x5c main.runExtract(0xd6c4c0, 0xc0420dc750, 0xc042046390, 0x0, 0x0, 0x0, 0x0) C:/GO-projects/src/golang.org/x/text/cmd/gotext/extract.go:29 +0x63 main.main() C:/GO-projects/src/golang.org/x/text/cmd/gotext/main.go:150 +0x34e ```
NeedsInvestigation
low
Critical
339,925,192
rust
2018 lint migrating to `use crate::...` doesn't consider mixed imports
Originally reported as https://github.com/rust-lang-nursery/rustfix/issues/124 Text inlined below... --- Given the code: ```toml # Cargo.toml [package] name = "test-badfix" version = "0.1.0" [dependencies] futures = "*" ``` ```rust // src/lib.rs #![feature(rust_2018_preview)] extern crate futures; mod a; pub struct Useful; ``` ```rust // src/a.rs pub use { Useful, futures::Future, }; ``` A call to `cargo fix --prepare-for 2018` causes `a.rs` to be changed to: ```rust pub use crate::{ Useful, futures::Future, }; ``` Which doesn't really seem right, it seems ```rust pub use { crate::Useful, futures::Future, }; ``` would be more correct. Interestingly enough, the code generated by rustfix does compile...not sure if that's intended behavior, but is unrelated as the code ```rust #![feature(rust_2018_preview)] extern crate futures; use crate::futures::Future; ``` compiles fine.
A-lints,C-bug,A-suggestion-diagnostics,A-edition-2018
low
Minor
339,930,369
angular
Proposal for an `ngFormChange` event
(Note that I'd be willing to work on a PR for this, but I don't want to start working on it unless I knew there was a chance it'd get accepted, so I wanted to create this issue first to test the water.) ## I'm submitting a... <!-- Check one of the following options with "x" --> <pre><code>[x] Feature request</code></pre> ## Current behavior Currently, due to the changes in Angular 6, when you subscribe to a `ngForm`'s `valueChanges`, the "plain JS model" values have not yet been updated: ```ts @Component({ selector: 'my-form-component', template: ` <ngForm> <input type="text" name="First Name" [(ngModel)]="user.firstName" /> </ngForm> `, }) class MyFormComponent { user: User = { firstName: '' }; @ViewChild(NgForm) ngForm: NgForm; ngAfterViewInit() { this.form.valueChanges.subscribe(() => { console.log(this.user.firstName); }); } } ``` When you type `C` in the textbox in the above example, the value that gets logged will still be an empty string `''`. `this.user.firstName` isn't updated until after the `valueChanges` emits. This is intended behavior due to other issues. ## Desired behavior Because of this, there is currently _no way_ to hook into a change in an `ngForm` and have the updated model value. To be consistent with template-driven forms, it'd be great if there was an `ngFormChange` event on an `ngForm` that would emit _after_ the model is updated: ```ts @Component({ selector: 'my-form-component', template: ` <ngForm (ngFormChange)="onFormChange()"> <input type="text" name="First Name" [(ngModel)]="user.firstName" /> </ngForm> `, }) class MyFormComponent { user: User = { firstName: '' }; onFormChange() { console.log(this.user.firstName); } } ``` This is similar to the original code, but the proposal would be that this event is fired after `this.user.firstName` is updated, so now it'd `console.log` `C` when you first type `C`, etc. I don't think there is a sane `$event` value to pass in in this case... 
we could send in the form value based on `name` values (because we can't get the form model value in the `ngForm`). Could maybe just not send anything. The real issue this is aiming to solve is the _timing_. ## Minimal reproduction of the problem with instructions Here's an example of `valueChanges` being updated "late" in Angular 6: https://stackblitz.com/edit/angular-6-value-changes-v2?file=src%2Fapp%2Fapp.component.ts (Compared to how it worked in Angular 5: https://stackblitz.com/edit/angular-5-value-changes-v2?file=src%2Fapp%2Fapp.component.ts) ## What is the motivation / use case for changing the behavior? We find it _very_ useful in our code to have a way to know when a template driven form is updated (we've found it 100x easier to use template forms over reactive forms, and we get better type hinting), and we relied on `valueChanges` in Angular 5, which broke when we updated to Angular 6. I understand why the change would be made, but that doesn't mean an `ngFormChange` type event wouldn't still be very useful. See: https://github.com/angular/angular/issues/24312 ## Environment <pre><code> Angular version: 6+ Browser: - [x] Chrome (desktop) version latest - [x] Firefox version latest For Tooling issues: - Node version: 10, 8 <!-- run `node --version` --> - Platform: Mac, Windows Others: <!-- Anything else relevant? Operating system version, IDE, package manager, HTTP server, ... --> </code></pre>
feature,state: Needs Design,freq2: medium,area: forms,feature: under consideration,feature: votes required
medium
Critical
339,936,686
pytorch
/usr/bin/ld: cannot find -lpthreads
On Debian Buster, `cmake .` leads to an error: ``` Determining if the function pthread_create exists in the pthreads failed with the following output: Change Dir: /home/zfsdt/temp/pytorch/CMakeFiles/CMakeTmp Run Build Command:"/usr/bin/make" "cmTC_d9266/fast" /usr/bin/make -f CMakeFiles/cmTC_d9266.dir/build.make CMakeFiles/cmTC_d9266.dir/build make[1]: Entering directory '/home/zfsdt/temp/pytorch/CMakeFiles/CMakeTmp' Building C object CMakeFiles/cmTC_d9266.dir/CheckFunctionExists.c.o /usr/bin/cc -DCHECK_FUNCTION_EXISTS=pthread_create -fPIE -o CMakeFiles/cmTC_d9266.dir/CheckFunctionExists.c.o -c /usr/share/cmake-3.11/Modules/CheckFunctionExists.c Linking C executable cmTC_d9266 /usr/bin/cmake -E cmake_link_script CMakeFiles/cmTC_d9266.dir/link.txt --verbose=1 /usr/bin/cc -DCHECK_FUNCTION_EXISTS=pthread_create CMakeFiles/cmTC_d9266.dir/CheckFunctionExists.c.o -o cmTC_d9266 -lpthreads /usr/bin/ld: cannot find -lpthreads collect2: error: ld returned 1 exit status CMakeFiles/cmTC_d9266.dir/build.make:86: recipe for target 'cmTC_d9266' failed make[1]: *** [cmTC_d9266] Error 1 make[1]: Leaving directory '/home/zfsdt/temp/pytorch/CMakeFiles/CMakeTmp' Makefile:126: recipe for target 'cmTC_d9266/fast' failed make: *** [cmTC_d9266/fast] Error 2 ``` For compilation with pthreads, `-pthread` should be used, not `-lpthreads`.
module: build,triaged,has workaround
low
Critical
339,968,905
go
runtime: Caller returns wrong file and line if called in a deferred function during panic
### What version of Go are you using (`go version`)? ### Does this issue reproduce with the latest release? Tested on b001ffb. ### What did you do? https://play.golang.org/p/mqVHDikFu8D ### What did you expect to see? ``` return: main.go:13 panic: main.go:18 ``` ### What did you see instead? ``` return: main.go:13 panic: asm_amd64p32.s:459 ``` This bug was found in the discussion of #26275.
NeedsInvestigation,compiler/runtime
low
Critical
339,996,826
vscode
Outline view + Markdown: Do not display the "#"s
<!-- Please search existing issues to avoid creating duplicates. --> <!-- Describe the feature you'd like. --> The Outline view for Markdown already displays headers and sub-headers in a structured way; there is no need to pollute the view with "#"s to show the header levels. ![image](https://user-images.githubusercontent.com/103355/42536060-08a6d31e-845f-11e8-8846-4bb7a7fad80a.png)
feature-request,markdown,under-discussion
high
Critical
340,001,618
TypeScript
Terse mode output from `tsc --build`
## Search Terms tsc --build mode console logging output verbose ## Suggestion Have a middle-ground between `verbose` and "nothing" when running `tsc -b` ## Use Cases Initial feedback was that "`tsc -b` should print nothing by default (if there are no errors)", which is a reasonable first principle to work from. However, it's nice to have some rough idea of what's going on, especially because a `tsc -b` invocation might take quite a while, depending on what's up to date. Users today can get more output by running with `--verbose`, which prints some fairly detailed spew: ``` message TS6350: Project 'tsconfig.json' is out of date because oldest output 'lib/fetch.js' is older than newest input 'src/fetch.ts' ``` The `verbose` output was written primarily as an "Explain *why* something is happening"; there is no in-between setting that says simply "Explain *what* is happening". This is compounded under `--watch` where `tsc -b` prints errors, but doesn't print "No errors", so you can't really know if your compilation succeeded or if tsc simply missed a file change. ## Proposal `--terse` strikes a middle ground between the default (silent) and `verbose`. Its short name would be `-t` which doesn't conflict with anything I can think of adding in the near term. ## Examples ``` > tsc -b -t src message TS6350: 'src/compiler' is up-to-date message TS6350: 'src/services' is up-to-date message TS6351: Building 'src/server'... message TS6351: Building 'src/tests'... message TS6352: Build completed ``` ``` > tsc -b -t -w src message TS6350: All projects are up to date message TS6351: 'src/compiler' was modified, building... src/compiler/parser.ts:21:1 - error TS1128: Declaration or statement expected. 21 ? ~ message TS6351: 'src/compiler' was modified, building... message TS6351: Building 'src/server'... message TS6351: Building 'src/tests'... 
message TS6352: All projects are up to date ``` ## Checklist My suggestion meets these guidelines: * [x] This wouldn't be a breaking change in existing TypeScript / JavaScript code * [x] This wouldn't change the runtime behavior of existing JavaScript code * [x] This could be implemented without emitting different JS based on the types of the expressions * [x] This isn't a runtime feature (e.g. new expression-level syntax)
Suggestion,In Discussion
medium
Critical
340,051,807
flutter
Support running individual tests on a device
Currently Flutter can run all widget tests in a file on a device by invoking "flutter run..." OR Flutter can run an individual widget test in a headless environment by invoking "flutter test..." However, there is not currently an execution option to run a specific widget test on a device. The current workaround is to comment out all the tests you don't want to run and then use "flutter run...". Developers should not have to do this. Running a single test on a device is what a developer will tend to do when he/she is curious about existing or intended behavior. It is also what a developer will do when correcting or updating a single test. A developer should not have to alter any source code to accomplish this.
a: tests,tool,P3,team-tool,triaged-tool
low
Minor
340,066,988
go
x/net/http2: h2spec violation 8.1.2.2
``` $ ./h2spec -S -k -t -h 127.0.0.1 -p 8080 http2/8.1.2.2 Hypertext Transfer Protocol Version 2 (HTTP/2) 8. HTTP Message Exchanges 8.1. HTTP Request/Response Exchange 8.1.2. HTTP Header Fields 8.1.2.2. Connection-Specific Header Fields × 1: Sends a HEADERS frame that contains the connection-specific header field -> The endpoint MUST respond with a stream error of type PROTOCOL_ERROR. Expected: GOAWAY Frame (Error Code: PROTOCOL_ERROR) RST_STREAM Frame (Error Code: PROTOCOL_ERROR) Connection closed Actual: DATA Frame (length:51, flags:0x01, stream_id:1) × 2: Sends a HEADERS frame that contains the TE header field with any value other than "trailers" -> The endpoint MUST respond with a stream error of type PROTOCOL_ERROR. Expected: GOAWAY Frame (Error Code: PROTOCOL_ERROR) RST_STREAM Frame (Error Code: PROTOCOL_ERROR) Connection closed Actual: DATA Frame (length:53, flags:0x01, stream_id:1) Failures: Hypertext Transfer Protocol Version 2 (HTTP/2) 8. HTTP Message Exchanges 8.1. HTTP Request/Response Exchange 8.1.2. HTTP Header Fields 8.1.2.2. Connection-Specific Header Fields × 1: Sends a HEADERS frame that contains the connection-specific header field -> The endpoint MUST respond with a stream error of type PROTOCOL_ERROR. Expected: GOAWAY Frame (Error Code: PROTOCOL_ERROR) RST_STREAM Frame (Error Code: PROTOCOL_ERROR) Connection closed Actual: DATA Frame (length:51, flags:0x01, stream_id:1) × 2: Sends a HEADERS frame that contains the TE header field with any value other than "trailers" -> The endpoint MUST respond with a stream error of type PROTOCOL_ERROR. Expected: GOAWAY Frame (Error Code: PROTOCOL_ERROR) RST_STREAM Frame (Error Code: PROTOCOL_ERROR) Connection closed Actual: DATA Frame (length:53, flags:0x01, stream_id:1) ``` There is a CL which I will rebase and associate to this issue.
NeedsInvestigation
low
Critical
340,067,832
go
x/net/http2: h2spec violation 5.1
``` $ ./h2spec -S -k -t -h 127.0.0.1 -p 8080 http2/5.1 Failures: Hypertext Transfer Protocol Version 2 (HTTP/2) 5. Streams and Multiplexing 5.1. Stream States × 5: half closed (remote): Sends a DATA frame -> The endpoint MUST respond with a stream error of type STREAM_CLOSED. Expected: GOAWAY Frame (Error Code: STREAM_CLOSED) RST_STREAM Frame (Error Code: STREAM_CLOSED) Connection closed Actual: DATA Frame (length:42, flags:0x01, stream_id:1) × 6: half closed (remote): Sends a HEADERS frame -> The endpoint MUST respond with a stream error of type STREAM_CLOSED. Expected: GOAWAY Frame (Error Code: STREAM_CLOSED) RST_STREAM Frame (Error Code: STREAM_CLOSED) Connection closed Actual: DATA Frame (length:42, flags:0x01, stream_id:1) ``` These tests don't always fail so it is most likely an issue with the h2spec test.
NeedsInvestigation
low
Critical
340,068,319
go
x/net/http2: h2spec violation 6.1.2
``` $ ./h2spec -S -k -t -h 127.0.0.1 -p 8080 http2/6.1/2 Hypertext Transfer Protocol Version 2 (HTTP/2) 6. Frame Definitions 6.1. DATA × 2: Sends a DATA frame on the stream that is not in "open" or "half-closed (local)" state -> The endpoint MUST respond with a stream error of type STREAM_CLOSED. Expected: GOAWAY Frame (Error Code: STREAM_CLOSED) RST_STREAM Frame (Error Code: STREAM_CLOSED) Connection closed Actual: DATA Frame (length:42, flags:0x01, stream_id:1) ``` This test also does not always fail, so I believe it's an h2spec issue. It fails more often if you run the 6.1 series, i.e. `./h2spec -S -k -t -h 127.0.0.1 -p 8080 http2/6.1`
NeedsInvestigation
low
Critical
340,134,406
rust
Most core trait impls for empty arrays are over-constrained
All of the following have `where T: Trait` on their impls for `[T; 0]`, but don't need it: - Debug - Copy - Clone - PartialOrd - Ord - PartialEq - Eq - Hash A demo of those not in rustdoc: ```rust struct Nothing; fn must_be_copy<T: Copy>() {} fn must_be_clone<T: Clone>() {} pub fn empty_array_is_copy_and_clone() { must_be_copy::<[Nothing; 0]>(); //^ error: the trait bound `Nothing: std::marker::Copy` is not satisfied in `[Nothing; 0]` must_be_clone::<[Nothing; 0]>(); //^ error: the trait bound `Nothing: std::clone::Clone` is not satisfied in `[Nothing; 0]` } ``` Kudos to whoever did the `Default` impls for getting this right: https://doc.rust-lang.org/std/primitive.array.html#impl-Default-29
T-libs-api,A-array
low
Critical
340,303,123
TypeScript
Support @typedef as member of namespace
**TypeScript Version:** 3.0.0-dev.20180711 **Code** ```js class C { /** @typedef {number} N */ /** @type {N} */ x = 0; // works } /** * @memberof C * @typedef {number} M */ /** @type {C.N} */ const x = 0; // broke /** @type {C.M} */ const y = 0; // broke ``` **Expected behavior:** No error. **Actual behavior:** `C.N` and `C.M` don't exist.
Suggestion,In Discussion,Domain: JSDoc,checkJs,Domain: JavaScript
low
Critical
340,326,752
pytorch
[Caffe2] Can't find Caffe2<->ONNX conversion tools
Hi, I have some problems finding the `convert-caffe2-to-onnx` and `convert-onnx-to-caffe2` tools. In the past, I installed those tools from the [onnx-caffe2 repository](https://github.com/onnx/onnx-caffe2) without any problems. Now, this repository has been deprecated and merged into Caffe2. I expected to find both tools along with the Caffe2 installation but (at least for me) this is not the case. At first, I tried to install Caffe2 from anaconda (I tried both the CPU and GPU versions) but I was not able to find those tools as binaries, and if I try to directly execute the python script from which the executables are generated (`/caffe2/python/onnx/bin/conversion.py`) it gives me a segmentation fault. I also tried the docker images, but there I also didn't find any conversion tool inside the container. In the end, the only way to make it work was to build Caffe2 from source, using the setup_caffe2.py script. When building from source I used the system described below. Is this the only possible way to have access to such converters? Is it possible to use them also from the pre-built binaries (such as those from anaconda)? ## System Info PyTorch version: 0.4.0 Is debug build: No CUDA used to build PyTorch: 9.0.176 OS: Ubuntu 16.04.3 LTS GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609 CMake version: version 3.5.1 Python version: 3.5 Is CUDA available: Yes CUDA runtime version: 9.0.176 GPU models and configuration: GPU 0: GeForce GTX 1080 Ti Nvidia driver version: 390.48 cuDNN version: Probably one of the following: /usr/lib/x86_64-linux-gnu/libcudnn.so.6.0.21 /usr/lib/x86_64-linux-gnu/libcudnn.so.7.0.4 /usr/lib/x86_64-linux-gnu/libcudnn_static_v7.a /usr/local/MATLAB/R2017a/bin/glnxa64/libcudnn.so.5.1.5 /usr/local/lib/python2.7/dist-packages/torch/lib/libcudnn.so.7 Versions of relevant libraries: [pip3] numpy (1.14.2) [pip3] torch (0.4.0) [pip3] torchvision (0.2.1) [conda] Could not collect
caffe2
low
Critical
340,359,211
angular
Upgraded AngularJS component which requires another upgraded component can't find the parent controller when using Angular content projection
## I'm submitting a... <pre><code> [ ] Regression (a behavior that used to work and stopped working in a new release) [x] Bug report <!-- Please search GitHub for a similar issue or PR before submitting --> [ ] Performance issue [x] Feature request [ ] Documentation issue or request [ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question [ ] Other... Please describe: </code></pre> ## Current behavior I have ngUpgrade setup with `downgradeModule`, for performance reasons. I have a downgraded Angular component `ngComponentA`, which references upgraded AJS component `ajsComponentB` in its template. `ajsComponentA` supports content projection and `ajsComponentB` supports transclusion -- The `ng-content` projection is placed inside of the former's transclude block, like so: ``` <ajs-component-b> <span>Some other content</span> <ng-content></ng-content> </ajs-component-b> ``` Then, `ngComponentA` is used with another upgraded component `ajsComponentC` which requires `ajsComponentA`, like: ``` <ajs-component-a> <ajs-component-c></ajs-component-c> </ajs-component-a> ``` This produces the error: `Unhandled Promise rejection: [$compile:ctreq] Controller 'ajsComponentA', required by directive 'ajsComponentC', can't be found!` When all of the above are AngularJS components everything works properly. ## Expected behavior Transclusion from an upgraded component and Content Projection from a downgraded component should be able to be used together with an upgraded AngularJS component that uses `require`. 
## Minimal reproduction of the problem with instructions The scenario is covered in the "Current behavior" section, and is reproduced at: https://stackblitz.com/edit/ngupgradestatic-playground-uvr14t - `common-filter` corresponds to `ngComponentA` in the above description - `ajs-filter` corresponds to `ajsComponentB` - `ajs-button` corresponds to `ajsComponentC` ## What is the motivation / use case for changing the behavior? Gives a user more flexibility in which components can be upgraded, as sometimes it isn't feasible to upgrade an entire dependency chain of components. ## Environment <pre><code> Angular version: 6.0.4 AngularJS version: 1.7.2 Browser: - [ ] Chrome (desktop) version XX - [ ] Chrome (Android) version XX - [x] Chrome (iOS) version XX - [ ] Firefox version XX - [ ] Safari (desktop) version XX - [ ] Safari (iOS) version XX - [ ] IE version XX - [ ] Edge version XX </code></pre>
type: bug/fix,freq1: low,workaround2: non-obvious,area: upgrade,state: confirmed,P3
low
Critical
340,366,493
react-native
[InputAccessoryView] Can't specify keyboard-conditional bottom padding
## Environment ``` React Native Environment Info: System: OS: macOS High Sierra 10.13.5 CPU: x64 Intel(R) Core(TM) i7-4770HQ CPU @ 2.20GHz Memory: 25.07 MB / 16.00 GB Shell: 3.2.57 - /bin/bash Binaries: Node: 9.3.0 - ~/.nvm/versions/node/v9.3.0/bin/node Yarn: 1.6.0 - /usr/local/bin/yarn npm: 6.1.0 - ~/.nvm/versions/node/v9.3.0/bin/npm Watchman: 4.9.0 - /usr/local/bin/watchman SDKs: iOS SDK: Platforms: iOS 11.4, macOS 10.13, tvOS 11.4, watchOS 4.3 Android SDK: Build Tools: 23.0.1, 25.0.0, 25.0.1, 25.0.2, 25.0.3, 26.0.1, 26.0.2, 26.0.3, 27.0.3 API Levels: 23, 25, 26 IDEs: Android Studio: 3.1 AI-173.4819257 Xcode: 9.4.1/9F2000 - /usr/bin/xcodebuild npmPackages: react: 16.4.1 => 16.4.1 react-native: ^0.56.0 => 0.56.0 ``` ## Description If you have a bottom tab bar, you want the `InputAccessoryView` to appear above it when the keyboard is collapsed. However, when the keyboard is expanded, you want the `InputAccessoryView` to appear directly above the keyboard. This sort of "conditional" bottom padding is currently impossible with `InputAccessoryView`. It seems that when the keyboard is expanded, the distance between the bottom of the screen and the `InputAccessoryView` is fixed. Attempting to change this distance with `height`, `padding`, or `margin` style properties does not seem possible. ## Reproducible Demo I've created a minimal repro [here](https://github.com/Ashoat/TabsWithInputAccessoryView).
Issue: Author Provided Repro,📮Known Issues,Bug
medium
Critical
340,374,678
go
x/review/git-codereview: when working on the Go repo, use its bin/gofmt if present
gofmt changed in Go 1.11, and git-codereview uses the gofmt from $PATH, so when that's the Go 1.10 version, it complains as `git-codereview: gofmt needs to format these files (run 'git gofmt')`. It should instead use the gofmt in GOROOT.
help wanted,NeedsFix
low
Major
340,383,909
godot
Collisions of RigidBody2D are offset when moving.
<!-- Please search existing issues for potential duplicates before filing yours: https://github.com/godotengine/godot/issues?q=is%3Aissue --> **Godot version:** <!-- Specify commit hash if non-official. --> 9f82368d40f1948de708804645374ea02ca6e7db **OS/device including version:** <!-- Specify GPU model and drivers if graphics-related. --> `Linux eb 4.17.2-1-ARCH #1 SMP PREEMPT Sat Jun 16 11:08:59 UTC 2018 x86_64 GNU/Linux` **Issue description:** <!-- What happened, and what was expected. --> Collisions of RigidBody2D are offset when moving. In our current project we swap the mode back and forth between MODE_CHARACTER and MODE_KINEMATIC. KinematicBody2D seems unaffected. **Steps to reproduce:** See the attached demo scenes, videos and gif. **Minimal reproduction project:** (use arrow keys to move) [collisionOffsetIssue.zip](https://github.com/godotengine/godot/files/2185978/collisionOffsetIssue.zip) [video1](http://games.code0.xyz/collision1.mp4) [video2](http://games.code0.xyz/0000-1602.mkv) <!-- Recommended as it greatly speeds up debugging. Drag and drop a zip archive to upload it. --> ![collisionissue](https://user-images.githubusercontent.com/13794470/42595812-b582404c-8553-11e8-8986-5128d298a6ab.gif)
bug,confirmed,topic:physics
low
Critical
340,428,374
opencv
capture.read() fails in a process but not in a thread.
- OpenCV => 3.3.1 - Operating System / Platform => OSX High Sierra - Compiler => /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ ##### Detailed description A video can be read using OpenCV when run in a thread but not in a process (capture.read returns false). A working video is attached as well to demonstrate that this is possible, but there is something about the demo video that breaks it specifically in a process. ##### Steps to reproduce Run test.py in the attached zip; you will see that the demo video returns false when opened from a process. [videosAndTestScript.zip](https://github.com/opencv/opencv/files/2186404/videosAndTestScript.zip)
category: videoio,platform: ios/osx,incomplete
low
Minor
340,434,284
go
mime/multipart: Allow limiting maximum amount of part's header data
This is a feature request to allow users of the `mime/multipart` package's `Reader` to specify a limit on the header data loaded into memory when calling the `NewPart()` function. `Reader.NewPart()` calls the `net/textproto` package's `Reader.ReadMIMEHeader()` function, which loads all header data into memory without any limit. `mime/multipart`'s `Reader` is also used in the `net/http` package to parse `multipart/form-data` POST requests, which means that functions like `Request.ParseMultipartForm`, `Request.FormValue`, `Request.FormFile`, and `Request.MultipartReader` are affected by this. I believe this can be used as a remote denial-of-service attack vector. A possible mitigation would be limiting the HTTP request body size (done by default in some servers like nginx), but this is unsuitable for cases where the server needs to accept POST requests with big files (for example, a file hosting service). A similar case is the `net/http` package's `Server`'s `MaxHeaderBytes` field, which limits the amount of header data, but not the body.
NeedsInvestigation
low
Minor
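The cap requested in the `mime/multipart` issue above is language-agnostic: stop buffering a part's headers once a byte budget is exceeded, instead of reading them all into memory. A minimal, hypothetical Python sketch of the idea — the names `read_limited_headers` and `MAX_HEADER_BYTES` are invented for illustration, and this is not the Go API being proposed:

```python
# Illustrative sketch only: enforce a total byte limit while parsing a
# MIME-style "Key: Value" header block terminated by a blank line.
import io

MAX_HEADER_BYTES = 1 << 10  # 1 KiB budget, chosen arbitrarily for the demo

def read_limited_headers(stream, limit=MAX_HEADER_BYTES):
    """Read header lines until a blank line, enforcing a total byte
    limit. Raises ValueError when the limit is exceeded."""
    headers = {}
    consumed = 0
    for raw in stream:
        consumed += len(raw)
        if consumed > limit:
            raise ValueError("header block exceeds %d bytes" % limit)
        line = raw.decode("latin-1").rstrip("\r\n")
        if not line:  # blank line terminates the header block
            return headers
        key, _, value = line.partition(":")
        headers[key.strip()] = value.strip()
    return headers

small = io.BytesIO(b"Content-Type: text/plain\r\n\r\nbody")
print(read_limited_headers(small))  # {'Content-Type': 'text/plain'}

huge = io.BytesIO(b"X-Junk: " + b"a" * 4096 + b"\r\n\r\n")
try:
    read_limited_headers(huge)
except ValueError as e:
    print("rejected:", e)
```

In Go, this would presumably surface as a configurable limit on the `Reader`, analogous to `http.Server`'s existing `MaxHeaderBytes` field mentioned in the report.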
340,455,625
TypeScript
Folding ranges not returned for highly nested functions
_From @ryaa on July 9, 2018 5:38_ <!-- Please search existing issues to avoid creating duplicates. --> <!-- Also please test using the latest insiders build to make sure your issue has not already been fixed: https://code.visualstudio.com/insiders/ --> <!-- Use Help > Report Issue to prefill these. --> - VSCode Version: Version 1.25.0 (1.25.0) - OS Version: MacOS High Sierra 10.13.5 Steps to Reproduce: 1. Create .ts file and add multiple enclosed blocks of code 2. Note that starting some level (> 7?) the folding icon is not displayed and folding/unfolding does not work - see attached (note that + icon is not displayed) Please note that //#region seems to be working <img width="1366" alt="screen shot 2018-07-09 at 08 31 53" src="https://user-images.githubusercontent.com/3608222/42432482-6af91b9c-8353-11e8-946f-72e87746095f.png"> <!-- Launch with `code --disable-extensions` to check. --> Does this issue occur when all extensions are disabled?: Yes _Copied from original issue: Microsoft/vscode#53842_
Suggestion,Needs Proposal,VS Code Tracked,Domain: Outlining
low
Major
340,482,936
pytorch
[Caffe2] Not handle attribute 'spatial' of ONNX operator 'BatchNormalization'
## Issue description If I run an ONNX model that contains a 'BatchNormalization' operator with a 'spatial' attribute, Caffe2 gives me this error message: ``` terminate called after throwing an instance of 'caffe2::EnforceNotMet' what(): [enforce fail at backend.cc:1057] . Don't know how to map unexpected argument spatial (from operator SpatialBN) ``` I found that Caffe2 does not handle the 'spatial' attribute in Caffe2Backend::CreateBatchNormalization: ```C++ Caffe2Ops Caffe2Backend::CreateBatchNormalization( OnnxNode* onnx_node, int opset_version) { if (opset_version < 6) { auto& attributes = onnx_node->attributes; attributes.remove("consumed_inputs"); } if (opset_version >= 7) { auto& attributes = onnx_node->attributes; auto* attr = attributes.AddRewrittenAttribute("is_test"); attr->set_i(1); } return CommonOnnxNodeToCaffe2Ops(onnx_node, opset_version); } ```
caffe2
low
Critical
340,501,173
three.js
ColladaLoader: Add support for primitive type: polygons
The ColladaLoader fails to load this model. https://raw.githubusercontent.com/0ad/0ad/master/binaries/data/mods/public/art/meshes/skeletal/elephant_african_baby.dae It results in these warnings (in Firefox), and the mesh is not visible: Unsupported primitive type: polygons ColladaLoader.js:1980:7 Undefined sampler. Access image directly (see #12530). ColladaLoader.js:1493:6 Invalid opaque type "null" of transparent tag. ColladaLoader.js:1619:8 Here is the fiddle: https://jsfiddle.net/k4qt0om8/ I can provide more Collada files that reproduce this issue if requested. +1
Enhancement,Loaders
low
Minor
340,530,514
TypeScript
tsc --build / Project References Feedback & Discussion
Continuation of #3469 "Medium-sized Projects" which has grown too large for GitHub to display comfortably Please use this thread for feedback, general discussion, questions, or requests for help / "is this a bug" discussions. Active bugs / PRs in this area: * https://github.com/Microsoft/TypeScript-Handbook/pull/788 Handbook page Possible next steps: * [ ] (none yet; @andy-ms) Find unloaded downstream projects during rename * [ ] #25376 Infer project references from monorepo tooling package.json files * [ ] #25562 Terse mode output for `tsc -b` * [ ] (none yet; @weswigham ?) Intermediate build outputs to temp folders * [ ] (none; stretch) Parallelization / worker processes under `tsc -b` Other interesting links: * https://github.com/RyanCavanaugh/learn-a - sample project references repo using lerna * https://github.com/RyanCavanaugh/project-references-outfile - sample project references repo using `--outFile` * https://github.com/Microsoft/TypeScript/issues/3469#issuecomment-341317069 Ryan's outline of what we're doing/did and why * https://github.com/Microsoft/TypeScript/issues/3469#issuecomment-400439520 pre-draft of handbook page * https://github.com/RyanCavanaugh/learn-a/pull/3/files `learn-a` repo using `yarn` workspaces!
Discussion,Scenario: Monorepos & Cross-Project References
high
Critical
340,545,295
vue
Component scope attribute is lost when conditionally rendering root node
### Version 2.5.16 ### Reproduction link [https://codesandbox.io/s/5vj19q8yk](https://codesandbox.io/s/5vj19q8yk) ### Steps to reproduce 1. observe that text is green 2. click two times on checkbox to trigger slot's hide and show cycle ### What is expected? 3. text should still be green In other words, the toggled `div` should have `GreenSlot`'s `data-v` attribute applied ### What is actually happening? 3. observe that text is red In other words, the toggled `div` doesn't have correct `data-v` applied. --- Also tested on beta, issue persists. In my use-case, I cannot trivially replace it with external `v-if`, as the component that does the toggling contains important internal logic that decides if it should be shown or not. It's not just a simple prop. <!-- generated by vue-issues. DO NOT REMOVE -->
has workaround
medium
Minor
340,550,157
vscode
Provide an API to track a position in a document across edits
I've come across a few places where this'd be handy lately. Most recently, I run some tests for my user and I get back positions in the document of where each test is - this allows them to click on the test in the runner/results to jump directly to it. Unfortunately, if the user modifies their test file then this location information becomes out of date and now jumps the user to the wrong location when they click it. This can be really common - you run your tests; 5 of them fail; you start working through them - working on the first test shifts the position of the tests below such that it's now difficult to jump to them from the test list. VS Code is presumably already tracking things like this for things like decorations so it'd be nice if we could use it too. For example, imagine an API like this: ```ts // Get some token that represents a live position in the document const loc = document.trackLocation(position); // User makes some edits // Request the new position console.log(document.positionOf(loc)); ```
feature-request,api,editor-api
medium
Major
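The bookkeeping implied by the position-tracking request above is small; as a rough, editor-agnostic sketch (in Python, with invented names — this is not a proposed VS Code signature), each edit can be modeled as `(start, deleted_len, inserted_len)` over a flat character offset, and tracked positions shift accordingly:

```python
# Illustrative model of tracking a document position across edits.
def adjust(position, edit):
    """Return the tracked position after applying one edit."""
    start, deleted, inserted = edit
    if position < start:
        return position            # edit is entirely after the position
    if position < start + deleted:
        return start               # position was inside the deleted text
    return position + inserted - deleted  # edit was before the position

# A test location at offset 100, then 10 chars inserted at offset 20:
pos = adjust(100, (20, 0, 10))
print(pos)  # 110 — the location still points at the same test

# Deleting 5 chars spanning the position snaps it to the edit start:
print(adjust(100, (98, 5, 0)))  # 98
```

An editor already applies this kind of adjustment internally for decorations and diagnostics, which is exactly why the issue asks for the mechanism to be exposed to extensions.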
340,603,576
pytorch
[doc] many losses still mention size_average in formula
e.g., see equation of https://pytorch.org/docs/master/nn.html#torch.nn.KLDivLoss @li-roy cc @jlin27 @mruberry @albanD @jbschlosser
module: docs,module: nn,module: loss,triaged,module: deprecation
low
Minor
340,651,238
godot
Animated mesh strange behaviour
**Godot version:** v3.1.dev.custom_build 0b7df80eb1b1f6415b397f0ac115da5014b7b8a8 **OS/device including version:** Windows 10 x64 Intel HD 4000 & GT 820M **Issue description:** I'm currently developing an fps game with a huge world, and when I move to a far-away location, the mesh gets a strange result. Mesh at position 0, 0, 0: ![test6](https://user-images.githubusercontent.com/10296472/42639389-16461bbc-861a-11e8-813a-4b7714fda17a.png) And when its transform is translated to 2048, 0, 0: ![test4](https://user-images.githubusercontent.com/10296472/42639228-b492863a-8619-11e8-93b5-bf274026e51f.png) This does not happen when the mesh doesn't belong to an armature node or is exported without animation. ![test5](https://user-images.githubusercontent.com/10296472/42639526-6556aad2-861a-11e8-999b-e1958a03d62f.png) **Steps to reproduce:** 1. Import an animated mesh, then instance it into the scene. 2. Translate the mesh x origin to 2048 or more. 3. See the magic happen. **Minimal reproduction project:** [sample.zip](https://github.com/godotengine/godot/files/2188870/sample.zip)
bug,topic:animation,topic:3d
low
Minor
340,680,834
react-native
VirtualizedList- inefficient function passing for CellRenderer
<!-- Requirements: please go through this checklist before opening a new issue --> - [x] Review the documentation - [x] Search for existing issues - [ ] Use the latest React Native release - *using version 0.55.2 did not see any changes to the FlatListComponent since that version.* ## Environment Run `react-native info` in your terminal and paste its contents here. ``` Environment: OS: Windows 10 Node: 8.11.1 Yarn: 0.21.3 npm: 5.8.0 Watchman: Not Found Xcode: N/A Android Studio: Version 3.0.0.0 AI-171.4408382 Packages: (wanted => installed) react: 16.3.1 => 16.3.1 react-native: ^0.55.2 => 0.55.4 ``` ## Description I used the npm package why-did-you-update to help optimize my components and prevent re-renders. However, there is a re-render that I cannot control. This happens for every item in the list. My guess: ``` CellRenderer.props: Changes are in functions only. Possibly avoidable re-render? Functions before: {onLayout: ƒ} Functions after: {onLayout: ƒ} ``` It seems this is caused by an anonymous function at https://github.com/facebook/react-native/blob/d756d94b3a7e2812f17f549c57767ac63734b49c/Libraries/Lists/VirtualizedList.js#L691 I propose that this function be defined on the class to prevent unnecessary re-renders.
JavaScript,Help Wanted :octocat:,Resolution: PR Submitted,Component: VirtualizedList,Bug,Newer Patch Available
medium
Major
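The mechanism behind the VirtualizedList re-render described above is not React-specific: any memoization keyed on shallow equality is defeated by allocating a fresh closure on every call, because two separately allocated functions never compare equal. A tiny Python illustration of that general point (not React code):

```python
# Each call to render() allocates a new closure, analogous to an
# onLayout callback created inline inside a render function.
def render(data):
    on_layout = lambda event: data  # fresh function object every call
    return on_layout

a = render([1, 2])
b = render([1, 2])
print(a is b)  # False — identity differs even though behavior is the same
print(a == b)  # False — so a shallow prop comparison sees a "changed" prop

# Hoisting the function to a stable location (a class method, in the
# proposal above) keeps its identity constant across renders.
```

This is why the proposed fix — defining the function on the class — prevents the avoidable re-renders that why-did-you-update flags.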
340,702,179
angular
[Feature] @ContentChildren() option to traverse just ng-container and ng-template
## I'm submitting a... <pre><code> [ ] Regression (a behavior that used to work and stopped working in a new release) [ ] Bug report <!-- Please search GitHub for a similar issue or PR before submitting --> [ ] Performance issue [x] Feature request [ ] Documentation issue or request [ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question [ ] Other... Please describe: </code></pre> ## Current behavior `@ContentChild()` matches only direct descendants, or all descendants with the `{descendants: true}` option ## Expected behavior Another option which allows querying direct descendants together with **all and only** descendants inside ng-template or ng-container tags. Or some kind of reference/option which, put on nested elements, would make them discoverable by `@ContentChildren` even if nested. The second could be preferable because the decision is left to whoever is writing the template, not to whoever wrote the component. ## What is the motivation / use case for changing the behavior? See [issue](https://github.com/angular/material2/issues/12060) with `matPrefix` for mat-form-field in angular/material2. In some cases, being able to manually flag ng-container and ng-template as "traversable" could easily prevent the creation of a new component or the duplication of all common code, which can be a lot. In the provided use case, only the input element had to change while all other mat-form-field pieces are always the same; here's an abstract of the use case with more context than the stackblitz provided in the linked issue.
```html <mat-form-field *ngFor="let fields of fields"> <mat-label>{{ field.name }}</mat-label> <input *ngIf="field.status == 'disabled'; else switchConditions" matInput disabled> <ng-template #switchConditions> <ng-container [ngSwitch]="field.type"> <ng-container *ngSwitchCase="'date'"> <!-- matPrefix here won't work because @ContentChildren do not traverse ng-template and ng.container --> <mat-datepicker-toggle matPrefix [for]="datePicker"></mat-datepicker-toggle> <input matInput [(ngModel)]="field.value" [matDatepicker]="datePicker"> <mat-datepicker #datePicker></mat-datepicker> </ng-container> <ng-container *ngSwitchCase="'select'"> <mat-select [(ngModel)]="field.value"> <mat-option *ngFor="let option of field.options" [value]="option"> {{ option }} </mat-option> </mat-select> </ng-container> <!-- type text or number --> <input *ngSwitchDefault matInput [(ngModel)]="field.value"> </ng-container> </ng-template> <button mat-icon-button matSuffix (click)="/* remove field */"><mat-icon>cancel</mat-icon></button> </mat-form-field> ``` ## Environment <pre><code> Angular version: 6.0.9 </code></pre> ## Related #20810 and many others
feature,area: core,core: queries,feature: insufficient votes,feature: votes required
medium
Critical
340,723,564
TypeScript
Services for non-homomorphic mapped type
**TypeScript Version:** 3.0.0-dev.20180712 **Code** ```ts interface I { a: number; b: number; } type J = { [K in "b"]: I[K] }; declare const j: J; j.b; // Go-to-definition does not work ``` **Expected behavior:** Since the type we're getting is `I[K]`, the mapped property symbol should have a reference back to the corresponding property in `I`. **Actual behavior:** The mapped type creates an unrelated property symbol and services don't work.
Suggestion,Help Wanted,Domain: Mapped Types,Experience Enhancement,Domain: Symbol Navigation
low
Minor
340,756,825
TypeScript
Allow regular flags to mix with --build
<!-- 🚨 STOP 🚨 𝗦𝗧𝗢𝗣 🚨 𝑺𝑻𝑶𝑷 🚨 Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Even if you think you've found a *bug*, please read the FAQ first, especially the Common "Bugs" That Aren't Bugs section! Please help us by doing the following steps before logging an issue: * Search: https://github.com/Microsoft/TypeScript/search?type=Issues * Read the FAQ: https://github.com/Microsoft/TypeScript/wiki/FAQ Please fill in the *entire* template below. --> <!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. --> **TypeScript Version:** `3.0.0-rc`, `3.0.0-dev.20180712` <!-- Search terms you tried before logging this (so others can find this issue more easily) --> **Search Terms:** build mode, --build **Code** I have a project with the following `tsconfig.json`: ```json { "compilerOptions": { "module": "commonjs", "moduleResolution": "node", "target": "es2017", "lib": ["es2017"], "strict": true, "sourceMap": true, "noEmitOnError": true, "types": ["node"] }, "include": [ "src/**/*.ts" ] } ``` and compile it with the command: ``` node_modules/.bin/tsc --build tsconfig.json --outDir out --rootDir /home/kostya/proj/foo ``` Note that the `outDir` and `rootDir` options come from command-line arguments, because I want to share the output directory with another command (`clean`). **Expected behavior:** The `outDir` and `rootDir` options will be taken from the command line. **Actual behavior:** The compiler ignores these options: ``` node_modules/.bin/tsc --build tsconfig.json --outDir out --rootDir /home/kostya/proj/foo message TS6096: File '/home/kostya/proj/foo/--outDir' does not exist. message TS6096: File '/home/kostya/proj/foo/out' does not exist. message TS6096: File '/home/kostya/proj/foo/--rootDir' does not exist. ``` **Playground Link:** <!-- A link to a TypeScript Playground "Share" link which demonstrates this behavior --> **Related Issues:** <!-- Did you find other bugs that looked similar?
--> https://github.com/Microsoft/TypeScript/issues/25600
Suggestion,In Discussion
medium
Critical
340,823,357
go
net/http: test WebAssembly Fetch implementation of Transport
While taking a look at CL https://go-review.googlesource.com/c/go/+/123537 for https://github.com/golang/go/issues/26349, a recommendation was that we should add tests to avoid regressions such as in that issue raised. @bradfitz added a reminder in https://github.com/golang/go/issues/26051#issuecomment-404644951
Testing,help wanted,NeedsFix,arch-wasm
low
Minor
340,842,714
TypeScript
In JS, `object` is treated as 'any'
**TypeScript Version:** 3.0.0-dev.20180711 **Code** ```ts /** @typedef {object} T */ /** @type {T} */ const x = 0; ``` **Expected behavior:** Error at `x`: `0` is not an object. **Actual behavior:** No error.
Suggestion,Domain: lib.d.ts,Awaiting More Feedback,Domain: JSDoc,checkJs,Domain: JavaScript
low
Critical
340,843,610
pytorch
[feature request] Make `torch.gather` broadcastable.
Currently, `torch.gather` does not broadcast. For example: ```python t = torch.tensor([[1,2],[1,2]]) torch.gather(t, 1, torch.tensor([[0,1,0],[0,1,0]])) ``` gives ``` tensor([[ 1, 2, 1], [ 1, 2, 1]]) ``` But ```python t = torch.tensor([[1,2],[1,2]]) torch.gather(t, 1, torch.tensor([[0,1,0]])) ``` and ```python t = torch.tensor([[1,2]]) torch.gather(t, 1, torch.tensor([[0,1,0],[0,1,0]])) ``` gives runtime errors. It would be more convenient to allow broadcasting on every dimension except the dim specified as the second argument in gather, so that the above two codes can give the same result as the first code. Does this proposal sound good? If yes, I can work on it.
triaged,function request
medium
Critical
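The broadcasting proposed in the `torch.gather` request above can be pinned down with a small pure-Python model (no torch required; `gather_dim1` is a made-up helper) for the 2-D, dim=1 case: an argument with a single row is reused across the other argument's rows, which makes all three snippets in the report agree:

```python
# Illustrative model of gather along dim=1 with row broadcasting,
# for plain nested lists standing in for 2-D tensors.
def gather_dim1(src, index):
    rows = max(len(src), len(index))
    # broadcasting is only allowed from a single row, as in NumPy rules
    assert len(src) in (1, rows) and len(index) in (1, rows)
    out = []
    for r in range(rows):
        s = src[r % len(src)]      # reuse the single src row if needed
        i = index[r % len(index)]  # reuse the single index row if needed
        out.append([s[j] for j in i])
    return out

t = [[1, 2], [1, 2]]
print(gather_dim1(t, [[0, 1, 0], [0, 1, 0]]))  # [[1, 2, 1], [1, 2, 1]]
print(gather_dim1(t, [[0, 1, 0]]))             # same result via broadcast
print(gather_dim1([[1, 2]], [[0, 1, 0], [0, 1, 0]]))  # same again
```

The general proposal extends this to every dimension except the one passed as `dim`, matching the behavior the issue describes.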
340,844,593
go
x/image/tiff: compressed tiffs are invalid (at least on Mac OS X)
Please answer these questions before submitting your issue. Thanks! ### What version of Go are you using (`go version`)? go version go1.10.3 darwin/amd64 ### Does this issue reproduce with the latest release? I'm on go1.10. ### What operating system and processor architecture are you using (`go env`)? ``` GOARCH="amd64" GOBIN="" GOCACHE="/Users/xxx/Library/Caches/go-build" GOEXE="" GOHOSTARCH="amd64" GOHOSTOS="darwin" GOOS="darwin" GOPATH="/Users/xxx/projects/go" GORACE="" GOROOT="/usr/local/opt/go/libexec" GOTMPDIR="" GOTOOLDIR="/usr/local/opt/go/libexec/pkg/tool/darwin_amd64" GCCGO="gccgo" CC="clang" CXX="clang++" CGO_ENABLED="1" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/kc/_mt5mw0j5ksfg846hgllc5lr0000gn/T/go-build015281372=/tmp/go-build -gno-record-gcc-switches -fno-common" ``` ### What did you do? [Here is a reproduction case.](https://github.com/cantino/tiff_bug) ### What did you expect to see? A valid white tiff. ### What did you see instead? A system error.
NeedsInvestigation
low
Critical
340,853,284
rust
Correcting Path::components on non-"Unix" platforms
`Path::components` is incorrect on Redox. I would like to develop the solution here: https://github.com/rust-lang/rust/pull/51537. The following is a description of the problem. Suppose you have the following path: `file:/home/user/foo/bar.txt` You split the path into components using `Path::components` ```rust Path::new("file:/home/user/foo/bar.txt").components().collect::<Vec<()>>() ``` In Linux, this would be equivalent to the following: ```rust vec![ Component::Normal("file:"), Component::Normal("home"), Component::Normal("user"), Component::Normal("foo"), Component::Normal("bar.txt"), ] ``` Joining the components with the current directory would give you a path such as this: `./file:/home/user/foo/bar.txt` In Redox, we want to be able to get from the original path to components back to the original path without any modifications. Here are examples of this usage of `Path::components`: https://github.com/uutils/coreutils/search?q=components In Redox, we have the following options for the `file:` component: 1. `Component::Normal("file:")` 2. `Component::Normal("file:")` followed by `Component::RootDir` 3. `Component::Prefix(Prefix::Verbatim("file:"))` 4. `Component::Prefix(Prefix::Scheme("file:"))` **Option 1** `Component::Normal("file:")` The path mentioned above would end up being the following after being rebuilt from its components: `./file:/home/user/foo/bar.txt` This is the old way of doing things. It not only makes `Path::components` useless on Redox. Canonicalizing a path will always add a scheme like `file:` to the beginning, so it is likely that path handling will be incorrect. Absolute paths would always be interpreted as relative. :x: This option is unusable for Redox. 
**Option 2** `Component::Normal("file:")` followed by `Component::RootDir` This would preserve the original meaning of the path, such that it could be rebuilt from its components as follows: `file:/home/user/foo/bar.txt` However, this may require a large amount of work to handle, as it seems likely that code exists that only checks the first component for being `Component::RootDir` or `Component::Prefix` in order to identify an absolute path. The documentation for `Prefix` provides one such example, which has likely inspired similar usage: https://doc.rust-lang.org/std/path/enum.Prefix.html#examples :x: This option would likely break the expectations of many consumers of the `Prefix` enum. **Option 3** `Component::Prefix(Prefix::Verbatim("file:"))` This option preserves the original meaning of the path, a rebuilt path would be this: `file:/home/user/foo/bar.txt` This, however, is overloading a variant meant to be used on Windows, for a path looking like this: `\\?\home\user\foo\bar.txt` This means different code will be needed when running on Redox to correctly parse paths to components and turn components into paths. :heavy_check_mark: This does leave the enum untouched, while allowing for the correct parsing of paths on Redox. The only downside is a possible lack of clarity due to overloading the meaning of the `Prefix::Verbatim` variant. **Option 4** `Component::Prefix(Prefix::Scheme("file:"))` This option also preserves the original meaning of the path, a rebuilt path would be this: `file:/home/user/foo/bar.txt` This is the most clear option, having separate code to handle specifically the Redox scheme abstraction. This has the downside of changing a stable enum, `Prefix`. There is, however, the possibility of having the extra enum variant be `#[cfg(target_os = "redox")]`, so as to preserve the `Prefix` enum on other platforms. 
:heavy_check_mark: This option could be used to have correct path parsing without affecting the stability of the `Prefix` enum on non-Redox platforms, if a `cfg` attribute is used. **Conclusion** Potentially the `Prefix` enum would be done differently after a major version bump, perhaps using extension traits that are platform-specific. I would think that there would be an opaque `Prefix` struct, and something like `.into_os_prefix()` would provide you with an os-specific enum to match against. For the time being, options 3 and 4 seem to be possible, with some caveats, and would allow code using stable Rust to quickly do the right thing on Redox.
O-windows,T-libs-api,C-bug,T-libs,A-io,O-UEFI,O-nintendo3ds
medium
Critical
340,941,935
pytorch
[feature request] allow forward_pre_hook to "preprocess" input
Basically, allow a `forward_pre_hook` to return new input tensor(s); if it does, the returned value would replace the inputs passed on to `forward`.
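A minimal pure-Python sketch of the requested semantics (an illustration only, not torch's actual implementation; the `Module`/`Doubler` classes and hook signature here are hypothetical): if a pre-hook returns something other than `None`, the returned value replaces the module's input.

```python
# Sketch: a pre-hook that returns a value "preprocesses" the input.
class Module:
    def __init__(self):
        self._forward_pre_hooks = []

    def register_forward_pre_hook(self, hook):
        self._forward_pre_hooks.append(hook)

    def __call__(self, *inputs):
        for hook in self._forward_pre_hooks:
            result = hook(self, inputs)
            if result is not None:
                # The hook replaced the inputs (wrap a lone value in a tuple).
                inputs = result if isinstance(result, tuple) else (result,)
        return self.forward(*inputs)


class Doubler(Module):
    def forward(self, x):
        return x * 2


m = Doubler()
m.register_forward_pre_hook(lambda module, inputs: (inputs[0] + 1,))
print(m(10))  # (10 + 1) * 2 == 22
```

With tensors, such a hook could e.g. cast or normalize the input before `forward` sees it; hooks that return `None` would keep the current behavior unchanged.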
todo,feature,triaged
low
Minor
340,943,760
opencv
opencv_createsamples consistently failing on Ubuntu 16.04.4
- OpenCV => 3.4.2 - Operating System / Platform => Ubuntu 16.04.4 - Compiler => GCC The following command: ` opencv_createsamples -bgcolor 0 -bgthresh 0 -maxxangle 1.1 -maxyangle 1.1 -maxzangle 0.5 -maxidev 40 -w 25 -h 25 -num 100 -img filename.tif -bg filelist.txt -vec filename.vec ` consistently fails with `Segmentation fault`. Checked all files and command-line options. Can be reproduced with any typical setup.
category: apps,incomplete
low
Major
340,952,274
vscode
[json] package.json: complete package versions from scopes or private registries
While there are numerous issues explaining that package _names_ cannot be autocompleted for scopes on the official registry or for private registries, that doesn't mean you shouldn't be able to autocomplete package _versions_. And I'd argue that, at least for private registries, autocompleting the version is generally more important than autocompleting the name. The name is usually well known for internal dependencies. But the version range available? That, not so much... Anyway, for scoped packages, and even for scopes that are redirected to other registries using `registry` entries in `.npmrc`, the versions can be obtained from a simple `npm view` command, which returns a JSON structure that also holds all of a package's versions. No reason the functionality for that command couldn't be tapped for autocompletion as well.
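As a sketch of the idea (the sample registry data and the caret-range check below are simplified assumptions, not npm's real semver logic), the JSON printed by `npm view <pkg> versions --json` could be filtered against the range being typed:

```python
import json

# Hypothetical output of `npm view <pkg> versions --json` (sample data).
npm_view_output = '["1.0.0", "1.1.0", "1.2.0", "2.0.0"]'


def parse(version):
    return tuple(int(part) for part in version.split("."))


def satisfies_caret(version, floor):
    """Very rough check for a caret range like ^1.1.0: same major, >= floor."""
    v, f = parse(version), parse(floor)
    return v[0] == f[0] and v >= f


versions = json.loads(npm_view_output)
completions = [v for v in versions if satisfies_caret(v, "1.1.0")]
print(completions)  # ['1.1.0', '1.2.0']
```

An editor would then offer `completions` (probably newest-first) as the version suggestions, regardless of which registry the scope resolves to.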
feature-request,json
medium
Major
341,116,005
vscode
Allow diagnostics messages to have markdown (or formatted text) content
**Feature request** Allow diagnostics to display formatted content. The specific request was to show part of a diagnostic message in bold. A few potential options: - Allow diagnostics to use markdown content - Allow diagnostics to use formatted text - Add an "important span" to diagnostic messages that lets us control the styling of the diagnostic. (This request came out of the FB meeting)
feature-request,api,languages-diagnostics
high
Critical
341,117,355
flutter
Add API for asynchronously loading a font
Please see the following timeline trace (produced by our client in https://github.com/flutter/flutter/issues/813#issuecomment-404830195): [Archiv.zip](https://github.com/flutter/flutter/files/2193775/Archiv.zip) There's a 100ms jank caused by font loading: ![image](https://user-images.githubusercontent.com/22987568/42707667-df19e300-868f-11e8-8887-5c1d760e3b9d.png)
c: new feature,framework,engine,c: performance,P2,team-engine,triaged-engine
low
Minor
341,142,665
react
[Umbrella] Releasing Suspense
Let's use this issue to track the remaining tasks for releasing Suspense to open source. **Last updated: March 24, 2022** **Blog post: [The Plan for React 18](https://reactjs.org/blog/2021/06/08/the-plan-for-react-18.html)** ## Completed: React 16 - [x] Release `<Suspense>` with `React.lazy` for client-side lazy loading ## Completed: [React 18 Alpha](https://reactjs.org/blog/2021/06/08/the-plan-for-react-18.html) - [x] Implement concurrent rendering, which is a prerequisite to everything else. - [x] Fix [fundamental flaws](https://github.com/facebook/react/pull/18796) in the concurrency model that made the behavior difficult to understand and caused many bugs. - [x] Rewrite [how React traverses the tree](https://github.com/facebook/react/pull/19261) to unblock fixing Suspense quirks. - [x] Redesign how [React integrates with the scheduler](https://github.com/facebook/react/pull/19121) to simplify the model, fix bugs, and prepare for native browser scheduling. - [x] [Fix `<Suspense>` quirks](https://github.com/reactwg/react-18/discussions/7): Previously, effects would fire inside a suspended tree too early. For example, you would see an effect from a component that's still hidden behind a placeholder. Now effects will run only _after_ the content has been revealed. We expect this to fix existing application code bugs. - [x] [Hiding and showing existing content should re-fire layout effects](https://github.com/reactwg/react-18/discussions/31): If a component that's already visible suspends, we show a placeholder, and later show it again. However, there was no way for the component to know that it was hidden or shown. For example, a tooltip component measuring its screen position would get incorrect measurements while it's hidden. Now we fire `useLayoutEffect` cleanup (same as `componentWillUnmount`) on "hide", and `useLayoutEffect` setup (same as `componentDidMount`) on "show". We expect this to fix existing application and library code bugs. 
- [x] [`<Suspense>` on the server no longer throws](https://github.com/reactwg/react-18/discussions/22): It used to be a hard error to render `<Suspense>` in a tree on the server. Now, **for the old server renderer**, it silently emits the fallback (and lets the client try to render the content instead). This shouldn't affect existing apps because previously it was not possible to render `<Suspense>` on the server at all. - [x] [`startTransition`](https://github.com/reactwg/react-18/discussions/41) lets you avoid hiding existing content even if it suspends again. This is useful to implement the "show old data while refetching" pattern with minimal code. - [x] Built-in throttling of Suspense reveals: To avoid updating the screen too often and causing visual jank, React "waits" a little bit before revealing the next level of spinners — in case _even more_ content is available by that time. In other words, revealing nested Suspense fallbacks is automatically throttled by React. - [x] [New Streaming Suspense Server Renderer](https://github.com/reactwg/react-18/discussions/37): - [x] Initial streaming renderer implementation. - [x] `React.lazy` works with SSR out of the box. - [x] **Streaming HTML:** React uses your `<Suspense>` boundaries to stream the page HTML in visual chunks. - [x] **Selective Hydration:** React uses your `<Suspense>` boundaries to hydrate the page in chunks, improving responsiveness. - [x] React prioritizes hydrating the part of the page you are interacting with. - [x] React keeps the browser responsive during hydration of `<Suspense>` boundaries. - [x] React captures and replays missed events after hydration. - [x] [Technical preview of Server Components:](https://reactjs.org/blog/2020/12/21/data-fetching-with-react-server-components.html) - [x] Implement the server with support for suspending. - [x] Prototype a caching layer. - [x] Prototype React I/O libraries like `react-fetch` and `react-pg`. 
- [x] Support lazy-loaded elements for server trees. ## Completed: [React 18](https://reactjs.org/blog/2021/06/08/the-plan-for-react-18.html#projected-react-18-release-timeline) - [x] Finalize [New Streaming Suspense Server Renderer](https://github.com/reactwg/react-18/discussions/37): - [x] Make it pass all of our existing tests. - [x] Prove it out in production (currently we use a hack in its place). - [x] Add the missing "static markup" APIs for things like emails. - [x] Fix known bugs with hydrating Suspense. - [x] Move the new server renderer from `react-dom/unstable-fizz` to `react-dom/server`. - [x] Fall back to client rendering from closest `<Suspense>` on mismatches instead of patching up the tree. - [x] Add `onRecoverableError` to gather production reports about SSR mismatches. ### Features that may or may not appear in 18.x - [ ] `<SuspenseList>` lets you declaratively coordinate the order in which `<Suspense>` nodes inside will reveal. - [x] Implementation. - [ ] Server support - [ ] Finalize and document the API. - [ ] "Backup" `<Suspense>` boundaries (not final naming): A way to specify that you'd like React to ignore this boundary during initial render (as if it's not there), unless React is forced to hide existing content. We sometimes call these "ugly spinners" or "last resort spinners". This use case might seem a bit exotic but we've needed it quite a few times. - [x] Initial implementation as `unstable_avoidThisFallback` - [x] Server support - [ ] Pick a good name - [ ] `<Suspense>` for CPU-bound trees (not final naming): A way to tell React to immediately show a placeholder _without even trying_ to render the content. This is useful if you have an expensive tree inside. This use case is unrelated to network — it's about showing a spinner for some tree that takes a while to render. See https://github.com/facebook/react/pull/19936. 
- [x] Initial implementation as `unstable_expectedLoadTime` - [ ] Adjust the heuristics - [x] Server support - [ ] Pick a good name - [ ] An API to prioritize hydrating a particular DOM element's parent tree. - [x] Implement as `ReactDOM.unstable_scheduleHydration` - [ ] Pick a name - [ ] Reducing jank: Take another look at adjusting the small details to reduce any visual jank to the minimum. For example, throttle reveal of Suspense boundaries between siblings as well. ## React 18.x (post-18.0): Suspense for Data Fetching All of the above changes are **foundational architectural improvements** to `<Suspense>`. They fill the gaps in the mechanism and make it deeply integrated with all parts of React (client and server). However, they don't prescribe a particular data fetching strategy. That will likely come after the 18.0 release, and we're hoping to have something during the next 18.x minor releases. This work will include: - [ ] [React I/O libraries like `react-fetch`](https://codesandbox.io/s/sad-banach-tcnim), which are a lightweight and easy way to fetch data with Suspense. - [x] Initial implementation - [ ] Finalize the API - [ ] [Built-in Suspense `<Cache>`](https://github.com/reactwg/react-18/discussions/25) which will likely be the primary recommended way for third-party data fetching libraries to integrate with Suspense. (For example, `react-fetch` uses it internally.) - [x] Initial implementation - [ ] Try it in production - [ ] Investigate what's missing - [ ] Figure out the recommended strategy for normalized caches - [x] [Server Components](https://reactjs.org/blog/2020/12/21/data-fetching-with-react-server-components.html), which will be the recommended way to fetch data with Suspense in a way that scales great and integrates with React Fetch as well as third-party libraries. 
- [x] Initial implementation - [x] Basic Server Context implementation - [x] Server Context features for refetching - [x] Figure out the layering between Server Components and New SSR - [ ] (This section has many follow-up questions, so it's incomplete) - [ ] Clear documentation and recommendations for data fetching library authors on how to integrate with Suspense
Type: Umbrella,React Core Team
high
Critical
341,171,716
pytorch
Build error: cannot find -lonnxifi_loader
On master. It also seems `python3 setup.py install` still tries to build Caffe2: ``` [ 61%] Linking CXX shared library ../lib/libcaffe2.so /usr/bin/ld: cannot find -lonnxifi_loader collect2: error: ld returned 1 exit status caffe2/CMakeFiles/caffe2.dir/build.make:2382: recipe for target 'lib/libcaffe2.so' failed make[2]: *** [lib/libcaffe2.so] Error 1 CMakeFiles/Makefile2:1282: recipe for target 'caffe2/CMakeFiles/caffe2.dir/all' failed make[1]: *** [caffe2/CMakeFiles/caffe2.dir/all] Error 2 Makefile:140: recipe for target 'all' failed make: *** [all] Error 2 Failed to run 'bash tools/build_pytorch_libs.sh --use-cuda --use-nnpack --use-mkldnn nccl caffe2 nanopb libshm gloo THD c10d' ``` cc @malfet @seemethere @walterddr
module: build,triaged
low
Critical
341,174,372
flutter
Unmount everything and dispose States when host activity dies
Looking at #19159 and the API doc, it seems reasonable to expect State.dispose to be called when the FlutterActivity is finishing/being destroyed. Yet it never gets called, and there's an imbalance between things registered and/or created during initState() and things unregistered and/or released during dispose(). Should we: 1. Have FlutterView also handle onDestroy() (and the equivalent on iOS) 2. Have a new AppLifecycleState for finishing 3. Have the widget framework unmount and dispose everything on onDestroy, or at least have the State.dispose() API doc reference AppLifecycleState if we don't want to do this automatically? cc @Hixie @jason-simmons also cc @matthew-carroll since this scenario will surface more frequently with add-to-app
c: new feature,framework,engine,P2,team-engine,triaged-engine
low
Major
341,174,531
go
cmd/compile: consider using DWARF 5
DWARF 5 has several advantages over previous versions of DWARF. Notably, 1. It supports position-independent representations, which significantly reduces the number of relocations in object files and hence the size of object files and the load on the linker. In the `go` binary, 49% of the 503,361 total relocations are in the DWARF. 2. It supports much more compact location and range list formats. The location and range list sections are 6% of the 12MiB of the `go` binary, even when zlib compressed. 3. It has an official language code for Go. :) DWARF 5 is quite new, and I don't think the rest of the ecosystem is ready yet, but I wanted to get the idea floating. It is supported by the GNU and LLVM toolchains and some debuggers. Support was added in [GCC 7.1](https://gcc.gnu.org/gcc-7/changes.html) (May 2017) and [GDB 8.0](https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;a=blob_plain;f=gdb/NEWS;hb=gdb-8.0-release) (June 2017). It appears to be in the latest LLVM, which covers most of the Xcode tools, though I can't find when it was added. It is currently *not* supported by LLDB or the macOS linker. We could potentially get around the macOS linker by leaving out the Go DWARF from the objects we pass to the system linker and then merging it in to the final binary (we already do a merge step). This is more feasible with DWARF5 because it's mostly position-independent, so we wouldn't need dsymutil to relocate it for us. /cc @cherrymui @heschik @dr2chase @randall77 @ianlancetaylor
NeedsDecision,Debugging
medium
Critical
341,185,768
TypeScript
Ideas for faster cold compiler start-up
# Background For some users, cold compile times are getting to be a bit long - so much so that it's impacting people's non-watch-mode experience, and [giving people a negative perception of the compiler](https://twitter.com/garybernhardt/status/1007690864909529088). Compilation is already a hard sell for JavaScript users. If we can get some speed wins, I think it'd ease a lot of the pain of starting out with TypeScript. # Automatic `skipDefaultLibCheck` `lib.d.ts` is a pretty big file, and it's only going to grow. Realistically, most people don't ever declare symbols that conflict with the global scope, so we made the `skipDefaultLibCheck` (and also the `skipLibCheck` flags) for faster compilations. [We can suggest this flag to users](https://twitter.com/drosenwasser/status/1007721634193477632), but the truth is that it's not discoverable. It's also often misused, so I want to stop recommending it to people. 😄 It'd be interesting to see if we can get the same results of `skipDefaultLibCheck` based on the code users have written. Any program that doesn't contribute a global augmentation, or a declaration in the global scope, doesn't really need to have `lib.d.ts` checked over again. @mhegazy and I have discussed this, and it sounds like we have the necessary information after the type-checker undergoes symbol-merging. If no symbols ever get merged outside of lib files, we can make the assumption that `lib` files never need to get checked. But this requires knowing that all lib files have already had symbols merged up front before any other files the compiler is given. ## Pros * Running with `skipDefaultLibCheck` removes anywhere between 400-700ms on my machine from a "Hello world" file, so we could expect the same here. ## Cons * We'd have to be careful about our assumptions with this flag. * Users who edit `lib.d.ts` wouldn't see erroneous changes in a compiler (so we'd likely need a `forceDefaultLibCheck`). 
* Ideally, in the absence of edited `lib.d.ts` files, only our team ever needs to run `forceDefaultLibCheck`, reducing the cost for all other TypeScript users. # V8 Snapshots ~3 years ago, the V8 team introduced [custom startup snapshots](https://v8project.blogspot.com/2015/09/custom-startup-snapshots.html). In that post > Limitations aside, startup snapshots remain a great way to save time on initialization. We can shave off 100ms from the startup spent on loading the TypeScript compiler in our example above (on a regular desktop computer). We're looking forward to seeing how you might put custom snapshots to use! Obviously my machine's not the same as that aforementioned desktop, but I'm getting just a bit over 200ms for running `tsc -v`, so we could possibly minimize a decent chunk of our raw startup cost. Maybe @hashseed or @bmeurer would be able to lend some insight for how difficult this would be. # Minification @RyanCavanaugh and I tried some offhand loading benchmarks with Uglify and managed 1. to reduce `typescript.js`'s size by about half 2. to reduce import time by about 30ms I don't know how impactful 30ms is, but the size reduction sounds appealing.
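While the automatic detection discussed above is explored, the `skipDefaultLibCheck` flag can of course be enabled by hand; a minimal `tsconfig.json` fragment:

```json
{
  "compilerOptions": {
    "skipDefaultLibCheck": true
  }
}
```

The related `skipLibCheck` flag additionally skips checking all `.d.ts` files, which is the broader (and more often misused) variant.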
Suggestion,Needs Proposal
medium
Critical
341,215,249
rust
Dropping unused and unreferenced external crate should not trigger recompile
After compiling a project that has a reference to an external crate in `Cargo.toml` but that is not imported (no `extern crate foo;`), a subsequent `cargo build` after dropping the unused dependency from `Cargo.toml` triggers a full rebuild. Unless the removal triggers a change in the version of a different dependency in the lock file (which I don't think can currently happen in this scenario), a recompile should be skipped. (I'd even go further and say that a remote crate imported with `extern crate foo` or even `#[macro_use] extern crate foo;` that is not actually used and does not introduce build scripts, etc. should also not trigger a rebuild, but that is at least understandable as it can affect type negotiations, etc. in which case it should be more of an "early abort" once the AST has been formed and [it becomes possible to determine that no types have changed](https://github.com/rust-lang/rust/issues/52122).)
C-enhancement,I-compiletime,T-compiler,A-incr-comp,WG-incr-comp
low
Minor
341,231,219
rust
Warn about #[macro_export] mod m?
I haven't used macros that often, but had to make changes in a 3rd party project. During refactoring I tried to import macros from another module. I knew I had to add `macro_use`, `macro_export` somehow, but couldn't remember details. While `rustc` was somewhat helpful in warning me on my first attempt: ``` error: cannot find macro `test!` in this scope --> src/x/u/u.rs:5:1 | 5 | test! { | ^^^^^^^^^^ | = help: have you added the `#[macro_use]` on the module/import? ``` It did not warn me about doing this: ``` #[macro_export] mod m; ``` where in reality I should have done that: ``` #[macro_use] mod m; ``` This was particularly confusing since the used macro was a few modules down in the hierarchy, and I should have done `#[macro_use]` on each level. Since I wasn't warned, I assumed `#[macro_export] mod m;` was correct. Instead, I tried a few permutations adding `#[macro_use]` to other places, which all didn't work out. Two things that would have worked for me: - Warn about `#[macro_export] mod m;` - Make the help text above more explicit, e.g., by suggesting that `macro_use` needs to be added to all parent modules (maybe with a link to the [error index](https://doc.rust-lang.org/error-index.html)) Update: using `rustc 1.29.0-nightly (9fd3d7899 2018-07-07)`
C-enhancement,A-attributes,A-lints,T-compiler,D-papercut
low
Critical
341,236,473
opencv
float numbers are not supported by CV_CheckEQ
##### System information (version) - OpenCV => latest ##### Detailed description See the code https://github.com/opencv/opencv/blob/fa66c6b797416292e2c6befb60b34b6ba0420499/modules/core/include/opencv2/core/check.hpp#L110-L111 The comment says `CV_CheckTypeEQ` supports `float` and `double`. However, the implementation is https://github.com/opencv/opencv/blob/fa66c6b797416292e2c6befb60b34b6ba0420499/modules/core/include/opencv2/core/check.hpp#L85 which compares the two values using `==`; that is not correct for floating-point numbers. ##### Steps to reproduce The following code ```.cpp CV_CheckEQ(1.2, 0.2*6, "your message"); ``` should be valid according to the above comment, but it throws. ##### Possible solutions Provide a macro like ```.cpp CV_CheckDoubleEQ(value1, value2, eps, msg) ``` `glog` provides `CHECK_EQ`, `CHECK_DOUBLE_EQ` and `CHECK_NEAR`. https://github.com/google/glog/blob/367518f65049da0e53123d98eb24e56674a5c57e/src/glog/logging.h.in#L836-L846
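The failure is ordinary floating-point behavior, illustrated here in Python for brevity (`check_double_eq` is a hypothetical stand-in for the proposed `CV_CheckDoubleEQ`, not an existing OpenCV macro):

```python
import math

a, b = 1.2, 0.2 * 6
print(a == b)  # False: 0.2 has no exact binary representation


def check_double_eq(value1, value2, eps=1e-9):
    # Epsilon comparison, in the spirit of glog's CHECK_NEAR.
    return abs(value1 - value2) <= eps


print(check_double_eq(a, b))  # True
print(math.isclose(a, b))     # True: Python's built-in equivalent
```

Any epsilon-based variant (absolute, relative, or both, as in `math.isclose`) would make the reproduction above pass without weakening the integer checks.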
feature,category: core
low
Critical
341,244,512
rust
include_str fails with large files
I just tried to `include_str!()` a 14g file. It failed with: <details> <summary>failure with 16g file</summary> ```text % cargo build Compiling foobar v0.1.0 (file:///home/benaryorg/.local/tmp/tmp.nlZAcHIVlh) fatal runtime error: memory allocation failed error: Could not compile `foobar`. ``` </details> --- Same error with everything down to five gigabytes (which would fit in my ram thrice). Then I thought, hey, 32-bit and ran the following: <details> <summary>failure with 2^32+1 file</summary> ```text % truncate -s 4294967297 file # 2^32 + 1 % cargo build Compiling foobar v0.1.0 (file:///home/benaryorg/.local/tmp/tmp.nlZAcHIVlh) fatal runtime error: memory allocation failed error: Could not compile `foobar`. To learn more, run the command again with --verbose. ``` </details> <details> <summary>failure with 2^32 file</summary> ``` % truncate -s 4294967296 file # 2 ^ 32 % cargo build Compiling foobar v0.1.0 (file:///home/benaryorg/.local/tmp/tmp.nlZAcHIVlh) fatal runtime error: memory allocation failed error: Could not compile `foobar`. To learn more, run the command again with --verbose. ``` </details> <details> <summary>failure with 2^32-1 file</summary> ``` % truncate -s 4294967295 file # 2^32 - 1 % cargo build Compiling foobar v0.1.0 (file:///home/benaryorg/.local/tmp/tmp.nlZAcHIVlh) thread 'main' panicked at 'assertion failed: line_len == 0 || ((*lines)[line_len - 1] < pos)', libsyntax_pos/lib.rs:969:9 note: Run with `RUST_BACKTRACE=1` for a backtrace. error: internal compiler error: unexpected panic note: the compiler unexpectedly panicked. this is a bug. note: we would appreciate a bug report: https://github.com/rust-lang/rust/blob/master/CONTRIBUTING.md#bug-reports note: rustc 1.27.1 (5f2b325f6 2018-07-07) running on x86_64-unknown-linux-gnu note: compiler flags: -C debuginfo=2 -C incremental --crate-type bin note: some of the compiler flags provided by cargo are hidden error: Could not compile `foobar`. To learn more, run the command again with --verbose. 
``` </details> <details> <summary>failure with 2^32-1 file & RUST_BACKTRACE</summary> ``` % RUST_BACKTRACE=1 cargo build Compiling foobar v0.1.0 (file:///home/benaryorg/.local/tmp/tmp.nlZAcHIVlh) thread 'main' panicked at 'assertion failed: line_len == 0 || ((*lines)[line_len - 1] < pos)', libsyntax_pos/lib.rs:969:9 stack backtrace: 0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace at libstd/sys/unix/backtrace/tracing/gcc_s.rs:49 1: std::sys_common::backtrace::print at libstd/sys_common/backtrace.rs:71 at libstd/sys_common/backtrace.rs:59 2: std::panicking::default_hook::{{closure}} at libstd/panicking.rs:211 3: std::panicking::default_hook at libstd/panicking.rs:227 4: rustc::util::common::panic_hook 5: std::panicking::rust_panic_with_hook at libstd/panicking.rs:467 6: std::panicking::begin_panic 7: syntax_pos::FileMap::next_line 8: syntax::codemap::CodeMap::new_filemap_and_lines 9: syntax::ext::source_util::expand_include_str 10: <F as syntax::ext::base::TTMacroExpander>::expand 11: syntax::ext::expand::MacroExpander::expand_invoc 12: syntax::ext::expand::MacroExpander::expand 13: syntax::ext::expand::MacroExpander::expand_crate 14: rustc_driver::driver::phase_2_configure_and_expand_inner::{{closure}} 15: rustc::util::common::time 16: rustc_driver::driver::phase_2_configure_and_expand 17: rustc_driver::driver::compile_input 18: rustc_driver::run_compiler_impl 19: <scoped_tls::ScopedKey<T>>::set 20: syntax::with_globals 21: <std::panic::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once 22: __rust_maybe_catch_panic at libpanic_unwind/lib.rs:105 23: rustc_driver::run 24: rustc_driver::main 25: std::rt::lang_start::{{closure}} 26: std::panicking::try::do_call at libstd/rt.rs:59 at libstd/panicking.rs:310 27: __rust_maybe_catch_panic at libpanic_unwind/lib.rs:105 28: std::rt::lang_start_internal at libstd/panicking.rs:289 at libstd/panic.rs:374 at libstd/rt.rs:58 29: main 30: __libc_start_main 31: <unknown> query stack during panic: end of 
query stack error: internal compiler error: unexpected panic note: the compiler unexpectedly panicked. this is a bug. note: we would appreciate a bug report: https://github.com/rust-lang/rust/blob/master/CONTRIBUTING.md#bug-reports note: rustc 1.27.1 (5f2b325f6 2018-07-07) running on x86_64-unknown-linux-gnu note: compiler flags: -C debuginfo=2 -C incremental --crate-type bin note: some of the compiler flags provided by cargo are hidden error: Could not compile `foobar`. To learn more, run the command again with --verbose. ``` </details> --- Content of the file was generated by `base64 /dev/urandom | pv -Ss 16g > file` (and subsequently `truncate`d). A file created using `truncate` yields only the error of ≥2³² as described above though, so the error might have to do with `include_str!`. --- This is happening on: <details> <summary>tested rustc stable</summary> ``` % rustc --version --verbose rustc 1.27.1 (5f2b325f6 2018-07-07) binary: rustc commit-hash: 5f2b325f64ed6caa7179f3e04913db437656ec7e commit-date: 2018-07-07 host: x86_64-unknown-linux-gnu release: 1.27.1 LLVM version: 6.0 ``` </details> <details> <summary>tested rustc nightly</summary> ``` % rustup run nightly rustc --version --verbose rustc 1.29.0-nightly (254f8796b 2018-07-13) binary: rustc commit-hash: 254f8796b729810846e2b97620032ecaf103db33 commit-date: 2018-07-13 host: x86_64-unknown-linux-gnu release: 1.29.0-nightly LLVM version: 7.0 ``` </details> --- Used code: ```rust fn main() { let _res = include_str!("../file"); } ``` --- Nightly yields a litte more info: <details> <summary>nightly with backtrace and empty (truncated) file</summary> ``` % RUST_BACKTRACE=1 rustup run nightly cargo build --verbose Compiling foobar v0.1.0 (file:///home/benaryorg/.local/tmp/tmp.nlZAcHIVlh) Running `rustc --crate-name foobar src/main.rs --crate-type bin --emit=dep-info,link -C debuginfo=2 -C metadata=0edabc38d37db739 -C extra-filename=-0edabc38d37db739 --out-dir 
/home/benaryorg/.local/tmp/tmp.nlZAcHIVlh/target/debug/deps -C incremental=/home/benaryorg/.local/tmp/tmp.nlZAcHIVlh/target/debug/incremental -L dependency=/home/benaryorg/.local/tmp/tmp.nlZAcHIVlh/target/debug/deps` memory allocation of 8589934592 bytes failed error: Could not compile `foobar`. Caused by: process didn't exit successfully: `rustc --crate-name foobar src/main.rs --crate-type bin --emit=dep-info,link -C debuginfo=2 -C metadata=0edabc38d37db739 -C extra-filename=-0edabc38d37db739 --out-dir /home/benaryorg/.local/tmp/tmp.nlZAcHIVlh/target/debug/deps -C incremental=/home/benaryorg/.local/tmp/tmp.nlZAcHIVlh/target/debug/incremental -L dependency=/home/benaryorg/.local/tmp/tmp.nlZAcHIVlh/target/debug/deps` (signal: 6, SIGABRT: process abort signal) ``` </details> --- **Edit:** seems as if the problem is happening due to running out of RAM because of doubling allocation sizes. Reported `memory allocation of ${n} bytes failed` messages on nightly are always powers of two and htop shows those allocations adding up pretty well. However this also happens with a 1g file. Running out of 16g of RAM when `include_str!()`ing a 1g file shouldn't happen, yet I can't tell where the problematic allocation(s) happen(s).
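The reported sizes fit a capacity-doubling buffer: reading a file just over 2^32 bytes into a buffer that doubles on each growth step peaks at 2^33 = 8589934592 bytes, exactly the failing allocation above. A small sketch (the starting capacity and growth loop are assumptions for illustration, not rustc's actual code):

```python
def peak_capacity(file_size, initial=8):
    """Capacity reached by a buffer that doubles until the file fits."""
    cap = initial
    while cap < file_size:
        cap *= 2  # each growth step doubles, as the error messages suggest
    return cap


GIB = 1 << 30
print(peak_capacity(2**32))      # 4294967296: 4 GiB is itself a power of two
print(peak_capacity(2**32 + 1))  # 8589934592: the failing 8 GiB allocation
```

It would also explain why even much smaller files can transiently use several times their size, if intermediate copies at doubled capacities are alive at the same time.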
T-compiler,I-compilemem,C-bug
low
Critical
341,244,771
pytorch
Build error: flat_hash_map.h(226) C2133
## Issue description When building caffe2 on Windows 10 using pytorch\scripts\build_windows.bat, 5 errors occurred: ``` "C:\VS\Setup\pytorch\build\ALL_BUILD.vcxproj" (default target) (1) -> "C:\VS\Setup\pytorch\build\caffe2\caffe2.vcxproj" (default target) (5) -> "C:\VS\Setup\pytorch\build\caffe2\core\dispatch\dispatch.vcxproj" (default target) (9) -> (ClCompile target) -> C:\VS\Setup\pytorch\caffe2/utils/flat_hash_map/flat_hash_map.h(226): error C2133: 'public: static std::array<ska::detailv3::sherwood_v3_entry_constexpr<`template-type-parameter-1'> const ,4> const ska::detailv3::EntryDefaultTable<`template-type-parameter-1'>::table': unknown size (compiling source file C:\VS\Setup\pytorch\caffe2\core\dispatch\Dispatcher.cpp) [C:\VS\Setup\pytorch\build\caffe2\core\dispatch\dispatch.vcxproj] C:\VS\Setup\pytorch\caffe2/utils/flat_hash_map/flat_hash_map.h(226): error C2133: 'public: static std::array<ska::detailv3::sherwood_v3_entry_constexpr<`template-type-parameter-1'> const ,4> const ska::detailv3::EntryDefaultTable<`template-type-parameter-1'>::table': unknown size (compiling source file C:\VS\Setup\pytorch\caffe2\core\dispatch\OpSchemaRegistration.cpp) [C:\VS\Setup\pytorch\build\caffe2\core\dispatch\dispatch.vcxproj] C:\VS\Setup\pytorch\caffe2/utils/flat_hash_map/flat_hash_map.h(226): error C2133: 'public: static std::array<ska::detailv3::sherwood_v3_entry_constexpr<`template-type-parameter-1'> const ,4> const ska::detailv3::EntryDefaultTable<`template-type-parameter-1'>::table': unknown size (compiling source file C:\VS\Setup\pytorch\caffe2\core\dispatch\DispatchTable.cpp) [C:\VS\Setup\pytorch\build\caffe2\core\dispatch\dispatch.vcxproj] C:\VS\Setup\pytorch\caffe2/utils/flat_hash_map/flat_hash_map.h(226): error C2133: 'public: static std::array<ska::detailv3::sherwood_v3_entry_constexpr<`template-type-parameter-1'> const ,4> const ska::detailv3::EntryDefaultTable<`template-type-parameter-1'>::table': unknown size (compiling source file 
C:\VS\Setup\pytorch\caffe2\cor e\dispatch\KernelRegistration.cpp) [C:\VS\Setup\pytorch\build\caffe2\core\dispatch\dispatch.vcxproj] C:\VS\Setup\pytorch\caffe2/utils/flat_hash_map/flat_hash_map.h(226): error C2133: 'public: static std::array<s ka::detailv3::sherwood_v3_entry_constexpr<`template-type-parameter-1'> const ,4> const ska::detailv3::EntryDefau ltTable<`template-type-parameter-1'>::table': unknown size (compiling source file C:\VS\Setup\pytorch\caffe2\cor e\dispatch\TensorTypeIdRegistration.cpp) [C:\VS\Setup\pytorch\build\caffe2\core\dispatch\dispatch.vcxproj] ``` ## Code example Simply run the build_windows.bat as instructed on the website https://caffe2.ai/docs/getting-started.html?platform=windows&configuration=compile. ## System Info - PyTorch or Caffe2: Caffe2 - How you installed PyTorch (conda, pip, source): source - Build command you used (if compiling from source): build_windows.bat - OS: Microsoft Windows 10 Pro - Python version: 3.6 - CUDA/cuDNN version: 9.1.85 - GPU models and configuration: GPU 0: GeForce GPU - GCC version (if compiling from source): - CMake version: version 3.12.0-rc3 - Versions of any other relevant libraries:
caffe2
low
Critical
341,246,097
rust
rustc_target, rustc_codegen_llvm: use () for the metadata of pointers to extern { type }s.
Right now we check in a bunch of places (with, e.g. `type_has_metadata`) whether an unsized type has no metadata, i.e. its pointer is like a regular pointer to sized types, and this is only the case for the new `extern { type Foo; }` opaque FFI types. What we could do instead, at the same time to simplify some of the code handling these, and to make progress on custom DSTs, is to treat pointers to such types as having metadata of `()`, i.e. they'd be `(*data, ())` pairs, which are still represented as just a pointer, but backends can handle the second field uniformly across all the unsized types (`extern { type }`, slices, `dyn Trait`). The main reason we can reuse a lot of the existing code is that `()` can be represented as an immediate value in LLVM already. This trick would not work for custom DSTs with non-immediate metadata types, e.g. `(usize, usize, usize)` for a 2D matrix "slice" would require some indirection. But it is useful to start challenging the assumption that the metadata is pointer-sized. cc @rust-lang/compiler @plietar @mikeyhew
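For context, the pointer layouts this proposal would unify can already be observed on stable Rust; a small illustration of my own (not part of the proposed change), using `size_of`:

```rust
use std::mem::size_of;

/// Width of a reference to T, measured in pointer-sized words
/// (1 = thin pointer, 2 = a (*data, metadata) pair).
fn ref_words<T: ?Sized>() -> usize {
    size_of::<&T>() / size_of::<usize>()
}

fn main() {
    // Sized type: thin pointer, no metadata.
    println!("&u8:        {} word(s)", ref_words::<u8>());
    // Slice: (*data, usize length) pair.
    println!("&[u8]:      {} word(s)", ref_words::<[u8]>());
    // Trait object: (*data, *vtable) pair.
    println!("&dyn Debug: {} word(s)", ref_words::<dyn std::fmt::Debug>());
    // A (*data, ()) pair would still be pointer-sized, since () is zero-sized:
    println!("(&u8, ()):  {} byte(s)", size_of::<(&u8, ())>());
}
```

This is why treating `extern { type }` pointers as `(*data, ())` pairs costs nothing at the representation level: the zero-sized metadata field disappears from the layout.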
C-enhancement,A-codegen,E-mentor,T-compiler,A-layout
low
Major
341,258,634
pytorch
PredictorTest.SimpleBatchSizedMapInput intermittently hangs
Example: https://ci.pytorch.org/jenkins/job/caffe2-builds/job/py2-cuda9.0-cudnn7-ubuntu16.04-test/8089/console ``` 13:19:14 [ RUN ] PredictorTest.SimpleBatchSizedMapInput 13:19:14 I0714 13:19:14.288452 208 net_dag_utils.cc:102] Operator graph pruning prior to chain compute took: 2.162e-06 secs 13:19:14 I0714 13:19:14.288487 208 net_async_base.cc:430] Using estimated CPU pool size: 32; NUMA node id: -1 13:19:14 I0714 13:19:14.288497 208 net_async_base.cc:440] Created new CPU pool, size: 32; NUMA node id: -1 14:03:46 Build timed out (after 45 minutes). Marking the build as failed. ```
caffe2
low
Critical
341,265,456
godot
Particles still emit at old location when emission is turned off, moved and turned back on
___ ***Bugsquad note:** This issue has been confirmed several times already. No need to confirm it further.* ___ Godot 3.0.5 Windows 10 64 bits GeForce GTX 1060 6GB/PCIe/SSE2 Repro: 1) Place world-space particles at position A and enable emission. Wait some time. 2) Disable emission 3) Move the particles to position B 4) Enable emission: a few particles are still wrongly emitted from A 5) Subsequent particles are correctly emitted from B Test project: [ParticlesEmissionBug.zip](https://github.com/godotengine/godot/files/2195239/ParticlesEmissionBug.zip) Use the left and right arrow keys to emit from the left or right. I use this in my game to emit sparks when scraping walls, and this bug sometimes makes them appear at the wrong place instead of the contact point.
bug,topic:rendering,confirmed,topic:particles
medium
Critical
341,274,846
rust
Macros 2.0: macro defined and used in same function-like scope can't resolve its own items
Maybe this is just a misunderstanding of hygiene, but if a macro 2.0 is defined and used in the same function-like scope (function/const) then it can't resolve its own items: ```Rust #![feature(decl_macro)] trait Tr { fn foo(self); } const C: () = { macro implit { () => { fn doit() { println!("Hi"); } impl ::Tr for () { fn foo(self) { doit(); } } } } #[cfg(works)] const T: () = { implit!(); () }; // this works with no error #[cfg(not(works))] implit!(); //~ ERROR cannot find function `doit` in this scope }; fn main() { ().foo(); } ``` ## Expected Result `doit()` should resolve to the `fn doit` in the scope of the macro, printing `Hi!` ## Actual Result Resolution error. Note that 1. Having the macro defined *and used* in a module (i.e., if `const C: ()` was a `mod _xyz`) does work. 2. Having the macro defined in 1 function-like scope, and used in a child scope of it (see `cfg(works)`) also works. This error only occurs if the macro is defined and used in the *same* scope. cc @petrochenkov @nrc
A-resolve,T-compiler,A-macros-2.0,F-decl_macro
low
Critical
341,282,328
rust
Custom derive: can't use const as attribute value
I'm using serde to implement deserialization for a custom protocol. To keep the code dry, I've pulled all relevant strings into constants, but that creates a problem when trying to use the constants as attributes values in serde's derive macro. Here's a playground with a minimal example to reproduce the problem: http://play.rust-lang.org/?gist=47230ae9671b24b312bef9540e520ddf&version=stable&mode=debug&edition=2015 The check that seems to produce the error is part of `libsyntax`: https://github.com/rust-lang/rust/blob/0db03e635a5e38ebc7635637b870b8fbcc8a7e46/src/libsyntax/parse/attr.rs#L261-L263 With my very basic knowledge of the rust compiler, I can see two potential ways this could be made possible: 1) Run derive macros after the constants have been expanded. This would be transparent to the macros (they could treat the expanded constants as literals), however this might be a no-go due to the way that the passes are ordered within rustc? 2) Relax the check in `libsyntax` (and possibly other places) to allow for identifiers to be valid attribute values on the syntax level, so that custom derive macros could use the identifier as part of their code gen (this wouldn't work for all macros, but at least for serde it looks like it'd be good enough). If possible, could you give an indication as to whether this is something that you'd like in the compiler (in which case I'd be interested in working on a PR or RFC for it), or whether you think this is either impossible or undesirable for any reason?
A-attributes,A-macros,T-compiler,C-bug,A-proc-macros
medium
Critical
341,295,626
rust
Default impls cannot take into account associated types
Suppose I have some trait with a default method: ```rust trait Foo { fn bar(&self, isize) { .. } } ``` and I want to add an associated type: ```rust trait Foo { type Baz = isize; fn bar(&self, Self::Baz); } ``` This is a breaking change unless I move the old default method to a new `default impl`. But there is no way to condition the `default impl` on the choice of associated type. For example: ```rust default impl<A: Foo<Baz = isize>> Foo for A { fn bar(&self, isize) { .. } } ``` is prohibited, and both ```rust default impl<A> Foo for A { type Baz = isize; fn bar(&self, Self::Baz); } ``` and ```rust default impl<A> Foo for A { default type Baz = isize; fn bar(&self, Self::Baz); } ``` do the wrong thing.
T-lang,A-specialization,C-bug,F-specialization
low
Minor
341,300,888
TypeScript
Duplicated synthetic comment emit on module declarations
**TypeScript Version:** master as of 2018-07-12 **Search Terms:** module enum declaration synthetic comment **Code** As a patch for `unittests/transform.ts`: ```ts diff --git a/src/testRunner/unittests/transform.ts b/src/testRunner/unittests/transform.ts index 6c74c558b1..a39522b204 100644 --- a/src/testRunner/unittests/transform.ts +++ b/src/testRunner/unittests/transform.ts @@ -296,6 +296,36 @@ namespace ts { } } }); +^M + testBaseline("transformAddCommentToModule", () => {^M + return transpileModule(`// Comment on module^M +module TestModule {^M + export const x = 1;^M +}^M +`, {^M + transformers: {^M + before: [addSyntheticComment],^M + },^M + compilerOptions: {^M + target: ScriptTarget.ES5,^M + newLine: NewLineKind.CarriageReturnLineFeed,^M + }^M + }).outputText;^M +^M + function addSyntheticComment(context: TransformationContext) {^M + return (sourceFile: SourceFile): SourceFile => {^M + return visitNode(sourceFile, rootTransform, isSourceFile);^M + };^M + function rootTransform<T extends Node>(node: T): VisitResult<T> {^M + if (isModuleDeclaration(node)) {^M + setEmitFlags(node, EmitFlags.NoLeadingComments);^M + setSyntheticLeadingComments(node, [{ kind: SyntaxKind.SingleLineCommentTrivia, text: " comment!", pos: -1, end: -1, hasTrailingNewLine: true }]);^M + return node;^M + }^M + return visitEachChild(node, rootTransform, context);^M + }^M + }^M + });^M }); } diff --git a/tests/baselines/reference/transformApi/transformsCorrectly.transformAddCommentToModule.js b/tests/baselines/reference/transformApi/transformsCorrectly.transformAddCommentToModule.js new file mode 100644 index 0000000000..11e988f96e --- /dev/null +++ b/tests/baselines/reference/transformApi/transformsCorrectly.transformAddCommentToModule.js @@ -0,0 +1,6 @@ +// Comment on module^M +var TestModule;^M +// comment!^M +(function (TestModule) {^M + TestModule.x = 1;^M +})(TestModule || (TestModule = {}));^M ``` **Expected behavior:** See above, `// comment!` should be emitted once, either on the 
`var ...` line, or on the immediately invoked function expression. **Actual behavior:** `// comment!` gets emitted on both: ``` // Comment on module^M // comment!^M var TestModule;^M // comment!^M (function (TestModule) {^M TestModule.x = 1;^M })(TestModule || (TestModule = {}));^M ``` From the code, I suspect the same happens for `enum`s. **Related Issues:** This is somewhat similar to #17594, but effectively the opposite: because two nodes as marked as the original node for the module declaration, synthetic comments get emitted twice.
Bug
low
Minor
341,354,341
react
Investigate IE/Edge select rendering bug
This is a follow up from an issue related to change events on selects in IE/Edge (https://github.com/facebook/react/issues/4672). It looks like this is no longer an issue, but there's a visual regression on IE/Edge that might be avoidable. **Reproduction** https://codepen.io/nhunzaker/pen/qybxmz **Observation** From @jasonwilliams (https://github.com/facebook/react/issues/4672#issuecomment-404534681): > change and MouseUp both fire for me in Microsoft Edge 42.17134.1.0 @nhunzaker Although, the rendering of the select box is weird, it doesn't appear to expand when i click on it **We need to:** - [ ] Capture a GIF of the behavior for documentation purposes (this can just live in this thread) - [ ] Reproduce the test case outside of React, so that we can isolate the mechanics involved - [ ] Fix it :)
Browser: IE,Component: DOM,Type: Needs Investigation
low
Critical
341,364,679
rust
Weird span for `#[bench]` method error
Given ``` #[bench] fn bar(x: isize) { } ``` The current output is suboptimal: ``` error[E0308]: mismatched types --> $DIR/issue-12997-2.rs:16:1 | LL | fn bar(x: isize) { } | ^^^^^^^^^^^^^^^^^^^^ expected isize, found mutable reference | = note: expected type `isize` found type `&mut __test::test::Bencher` ```
C-enhancement,A-diagnostics,T-compiler
low
Critical
341,368,233
pytorch
Windows CPU version much slower than Unix versions
Actually, this was first detected when we tried to build 0.4 packages and do some benchmarking. At first, I thought the reason might be that `OpenMP` is not well supported. But recently, when I managed to build the PyTorch CPU version with the Intel C++ Compiler, it was still much slower on some of the more complex networks like SqueezeNet and MobileNet. So I tried to use `Intel Advisor` to locate the problems. And here is the screenshot: (I tried to get the log but it keeps crashing...) ![image](https://user-images.githubusercontent.com/9998726/42740291-19d4b278-88d8-11e8-8af8-047519399392.png) It actually detected the potential problems for us. But I just don't know whether these loops should be vectorized or remain in their scalar version. @cpuhrsch , maybe you can shed some light for me?
module: windows,triaged
medium
Critical
341,383,860
vue
When a getter is defined that does not define a setter, no recursive reactive is made.
### Version 2.5.17-beta.0 ### Reproduction link [https://jsfiddle.net/ts0307/pd8zr3sk/](https://jsfiddle.net/ts0307/pd8zr3sk/) ### Steps to reproduce Run the JSFiddle snippet ### What is expected? I expect the result to be shown as {"bar": "b"} instead of {"bar": "a"} ### What is actually happening? Vue's observer contains the check: if ((!getter || setter) && arguments.length === 2) { val = obj[key] } let childOb = !shallow && observe(val) In my example the data object defines a getter without a setter, so this condition fails: `val` is never read, and the value is not made recursively reactive.
has workaround
low
Minor
341,416,818
kubernetes
Prevent mass livenessProbe failures from taking down all pods in a Deployment
> /kind feature **What happened**: A livenessProbe failed across all pods in a deployment, which took the application down. Given it affected all pods, it would have been better in this instance for the application to continue in a degraded state versus being hard down. However, if this type of failure happened only on a subset of pods, then having the pods restart would have been the right behavior. Because of this behavior, it is tricky to strike the right balance between an aggressive livenessProbe that detects and (effectively) remediates unhealthy pods and one that is safe enough to not take a whole deployment down. **What you expected to happen**: Thinking out loud, one way Kubernetes could stop this type of cascading failure is by honoring the "maxUnavailable" of the PodDisruptionBudget and stopping further livenessProbe actions once the constraint is violated (and we could alert when a deployment gets into this state). At the very least, the "Configure Liveness and Readiness Probes" documentation should be updated to highlight this edge case since it can cause a catastrophic failure. **How to reproduce it (as minimally and precisely as possible)**: Create a scenario that causes the liveness check to fail on all pods of a deployment at once. **Environment**: - Kubernetes version (use `kubectl version`): 1.9.4 - Cloud provider or hardware configuration: on-prem - OS (e.g. from /etc/os-release): RHEL 7
sig/scheduling,sig/node,kind/feature,sig/apps,lifecycle/frozen
low
Critical
341,475,163
rust
better error message for wrong lifetime annotations?
This was my code ````rust pub(crate) fn parse_rev<'a>(repo: &<'a>Repository, rev_raw: &str) -> Revspec<'a> { ... } ```` Obviously this should have been ````rust pub(crate) fn parse_rev<'a>(repo: &'a Repository, rev_raw: &str) -> Revspec<'a> { ... } ```` (````<'a>```` => ````'a````) but the error message had me thinking that something was wrong with the argument itself, which was confusing ```` error: expected type, found `'a` --> src/git.rs:26:37 | 26 | pub(crate) fn parse_rev<'a>(repo: &<'a>Repository, rev_raw: &str) -> Revspec<'a> { | ^^ error[E0425]: cannot find value `repo` in this scope --> src/git.rs:29:21 | 29 | let obj = match repo.revparse(rev_raw) { | ^^^^ not found in this scope ```` It would be awesome if rustc could suggest ````'a```` instead of ````<'a>```` here, rather than claiming that `repo` is missing from the function arguments. It's already pointing to the right place in the first message but maybe there's a way to improve this even further =) rustc 1.29.0-nightly (31f1bc7b4 2018-07-15)
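For reference, a self-contained version of the corrected signature that compiles — with `git2`'s `Repository`/`Revspec` replaced by stand-in types, since the originals aren't shown in the report:

```rust
// Hypothetical stand-ins for the git2 types mentioned in the report.
struct Repository {
    head: String,
}

struct Revspec<'a> {
    from: &'a str,
}

// `&'a Repository`, not `&<'a>Repository`: the lifetime goes right after `&`.
fn parse_rev<'a>(repo: &'a Repository, _rev_raw: &str) -> Revspec<'a> {
    Revspec { from: &repo.head }
}

fn main() {
    let repo = Repository { head: "HEAD".to_string() };
    let rev = parse_rev(&repo, "HEAD~1");
    println!("{}", rev.from);
}
```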
C-enhancement,A-diagnostics,A-parser,T-compiler,A-suggestion-diagnostics
low
Critical
341,535,338
opencv
Opencv system crash with Java program in Mat.n_delete() w/ libc
[hs_err_pid20872.log](https://github.com/opencv/opencv/files/2197931/hs_err_pid20872.log) ##### System information (version) OpenCV => 3.4 Operating System/Platform => Ubuntu 16.04 64-bit Compiler => OpenCV built with GNU 5.4.0 and Java 1.8.0.171 ##### Detailed description A fatal error has been detected by the Java Runtime Environment: SIGSEGV (0xb) at pc=0x00007f540e0eaef2, pid=15373, tid=0x00007f53d584d700 JRE version: Java(TM) SE Runtime Environment (8.0_171-b11) (build 1.8.0_171-b11) Java VM: Java HotSpot(TM) 64-Bit Server VM (25.171-b11 mixed mode linux-amd64 compressed oops) Problematic frame: C [libc.so.6+0x7fef2] Core dump written. Default location: /media/projects/obstruction/core or core.15373 An error report file with more information is saved as: media/projects/obstruction/hs_err_pid15373.log Compiled method (nm) 2719603 1397 n 0 org.opencv.core.Mat::n_delete (native) total in heap [0x00007f53f942bcd0,0x00007f53f942c018] = 840 relocation [0x00007f53f942bdf8,0x00007f53f942be40] = 72 main code [0x00007f53f942be40,0x00007f53f942c010] = 464 oops [0x00007f53f942c010,0x00007f53f942c018] = 8 If you would like to submit a bug report, please visit: http://bugreport.java.com/bugreport/crash.jsp The crash happened outside the Java Virtual Machine in native code. See problematic frame for where to report the bug. A Java program performing a number of different imaging operations with Mat data structures core dumps in the Finalizer at org.opencv.core.Mat.n_delete(). It crashes at different spots in the Java code; it seems to be timing dependent. ##### Steps to reproduce You would need to set up the code at https://github.com/mrobbeloth-wright-state/obstruction . I'll attach the core dump file, it may be easier to debug using that.
priority: low,category: java bindings,incomplete,needs investigation
low
Critical
341,549,015
opencv
Rect_<Tp> operator & return wrong result
##### System information (version) - OpenCV => 3.2 - Operating System / Platform => Ubuntu 16.04.10 - Compiler => g++ 5.4.0 ##### Detailed description Rect_<_Tp> operator & (const Rect_<_Tp>& a, const Rect_<_Tp>& b) returns a wrong result for `Rect_<unsigned>` and `Rect_<unsigned long>`. For example, the area of the intersection of x=0,y=0,w=1,h=1 and 2,2,1,1 should be zero, but I get 1. The intersection variable gets a very large width and height: 4294967295 for both. ##### Steps to reproduce cv::Rect_<unsigned> r1(0,0,1,1); cv::Rect_<unsigned> r2(2,2,1,1); auto intersection = r1 & r2; auto area = intersection.area();
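The reported 4294967295 is exactly what unsigned wraparound of -1 produces. The underlying arithmetic can be sketched outside OpenCV (an illustration of the mechanism only, not the actual `operator &` source):

```rust
fn main() {
    // The intersection width is conceptually min(x1 + w1, x2 + w2) - max(x1, x2).
    // For the disjoint rects (0,0,1,1) and (2,2,1,1) that is 1 - 2 = -1,
    // which wraps around in unsigned arithmetic instead of clamping to 0.
    let right_edge: u32 = (0 + 1).min(2 + 1); // 1
    let left_edge: u32 = 0u32.max(2);         // 2
    let width = right_edge.wrapping_sub(left_edge);
    println!("{}", width); // 4294967295, the width seen in the report
}
```

With signed coordinate types the negative width is detected and the rect is cleared, which is why `Rect_<int>` behaves correctly while the unsigned instantiations do not.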
bug,priority: low,category: core
low
Critical
341,618,511
go
runtime/race: complete race detector on netbsd/amd64
The new netbsd/amd64 race detector support is not 100% done yet, per https://github.com/golang/go/issues/19273#issuecomment-349474852 We also don't have a builder for it. This is a tracking bug to see it completed. /cc @krytarowski
help wanted,OS-NetBSD,NeedsFix,compiler/runtime
low
Critical
341,636,866
godot
AudioEffectPitchShift deteriorates sample quality "without being used"
Godot 3.0.2 stable + Windows 10 Continuously playing the same sound through the "Effect" bus in this configuration: ![capture](https://user-images.githubusercontent.com/13801432/42777429-f9b534c2-8931-11e8-82af-afe4ae0d8348.PNG) ...results in the sample changing sound over time, without ever touching `pitch_scale`, i.e. it's always `= 1.0`.
bug,confirmed,topic:audio
low
Major
341,679,240
go
gollvm: build fail on slackware 14.2 in tools/gollvm/libgo/runtime_sysinfo.go
Please answer these questions before submitting your issue. Thanks! ### What operating system and processor architecture are you using (`go env`)? x86_64, slackware 14.2 ### What did you do? In my build directory: `cmake -DCMAKE_BUILD_TYPE=Release -DLLVM_TARGETS_TO_BUILD="X86" -G "Unix Makefiles" ../llvm && make all -j4` but as far as I can tell, it just happens on my system regardless of other options when I build x86 (use gold linker, build with other targets, etc.) ### What did you expect to see? Successful build. ### What did you see instead? ``` /[my-build-path]/tools/gollvm/libgo/runtime_sysinfo.go:5966:32: error: expected ')' /[my-build-path]/tools/gollvm/libgo/runtime_sysinfo.go:5966:32: error: expected '=' /[my-build-path]/tools/gollvm/libgo/runtime_sysinfo.go:5966:32: error: expected ';' or newline after top level declaration /[my-build-path]/tools/gollvm/libgo/runtime_sysinfo.go:5965:36: use of undefined type 'ext' make[2]: *** [tools/gollvm/libgo/CMakeFiles/libgotool.dir/build.make:1737: tools/gollvm/libgo/.pic/runtime.o] Error 3 make[1]: *** [CMakeFiles/Makefile2:15851: tools/gollvm/libgo/CMakeFiles/libgotool.dir/all] Error 2 make: *** [Makefile:152: all] Error 2 ``` Those lines in `runtime_sysinfo.go` contain: ```go const ___glibc_clang_has_extension(ext) = 0 const ___glibc_clang_prereq(maj,min) = 0 ``` A little investigation seemed to indicate these slipped through libgo/godumpspec/m{parser,tokenizer} because their bodies don't reference their parameters. If I comment these lines out, and analogous lines in `/[my-build-path]/tools/gollvm/libgo/sysinfo.go` when building syscall, the build errors out while building libgo_shared because ld.gold can't find `__morestack`, `-lgcc`, `crt1.o`, etc. This part superficially has more in common with building gccgo. My binutils use gold, so even when I don't tell CMake to use gold, eventually it gets invoked. I am still investigating this latter part, but the above const/macro stuff seemed reportable.
NeedsInvestigation
medium
Critical
341,730,930
rust
Rust docs should at least have the option of using sans-serif fonts
__Edit__: When I say "docs" I mean the standard library docs. The font in places like "The Book" is absolutely fine. I'm sure I've seen this brought up in the issues before, but I don't seem to be able to find those issues anymore because I can't remember what I searched, so I hope you don't mind me creating a new issue for this. Currently the rust docs only really display all text in a serif style font, which is extremely difficult to read, especially when there are large sections of bold text. I understand that this may be a deliberate design decision, but unfortunately it makes it hard for people like me (no disabilities, just bad eyesight) to read anything. In my case I don't care about the design, I just want to be able to read the text without straining my eyes. If the docs can't be changed to a sans-serif font by default, then could an option at least be provided to change the font to a sans-serif one instead? Even the default `font-family: sans-serif;` is fine. Anything but serif will do for me. Currently when I open the docs I have to open the inspector in Chrome and apply `font-family: sans-serif;` to the entire site just to get anything done. This works but I shouldn't really have to do this in the first place. A serif font is fine for short pieces of text, like titles for example, but for the main body of text I hope a sans-serif font option can be seriously considered. I hope I'm not missing anything. If there's already an option to do this then please let me know.
T-rustdoc,A-docs,C-feature-request
low
Major
341,776,136
pytorch
WARNING:root:This caffe2 python run does not have GPU support. Will run in CPU only mode. WARNING:root:Debug message: libcurand.so.9.0: cannot open shared object file: No such file or directory Segmentation fault (core dumped)
HI, I have GPU and CUDA, CuDNN, and NCCL. My OS is Ubuntu 16.04, I followed this tutorial to install Coffe2 with GPU support ( conda install -c caffe2 caffe2-cuda9.0-cudnn7 ) and the installation finished successfully but this command: python2 -c 'from caffe2.python import workspace; print(workspace.NumCudaDevices())' returns: WARNING:root:This caffe2 python run does not have GPU support. Will run in CPU only mode. WARNING:root:Debug message: libcurand.so.9.0: cannot open shared object file: No such file or directory Segmentation fault (core dumped) Any Idea how can I solve it?? TNX $nvcc --version nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2018 NVIDIA Corporation Built on Tue_Jun_12_23:07:04_CDT_2018 Cuda compilation tools, release 9.2, V9.2.148 $ conda -V conda 4.5.8 nvidia-smi Tue Jul 17 01:58:01 2018 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 384.130 Driver Version: 384.130 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 Quadro M520 Off | 00000000:02:00.0 Off | N/A | | N/A 41C P0 N/A / N/A | 298MiB / 2002MiB | 2% Default | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | 0 21578 G /usr/lib/xorg/Xorg 234MiB | | 0 22157 G compiz 62MiB | +-----------------------------------------------------------------------------+
caffe2
low
Critical
341,877,709
rust
Blanket impl of `Into::into` for `From` should be default
Because of the blanket impl of `Into` for all types that implement `From`, in which `Into::into` is not marked `default`, one cannot override the default implementation of `Into::into` using `specialization`.
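The shape of the problem can be reproduced on stable with local stand-in traits (the `My*` names are my own; std's actual blanket impl lives in `core::convert`):

```rust
// Local mirror of std's From/Into pair (simplified sketch).
trait MyFrom<T> {
    fn my_from(t: T) -> Self;
}

trait MyInto<U> {
    fn my_into(self) -> U;
}

// Like the std blanket impl: `my_into` is a plain item here, not a
// `default fn`, so a specializing impl could never override it.
impl<T, U: MyFrom<T>> MyInto<U> for T {
    fn my_into(self) -> U {
        U::my_from(self)
    }
}

struct Celsius(f64);
struct Fahrenheit(f64);

impl MyFrom<Celsius> for Fahrenheit {
    fn my_from(c: Celsius) -> Fahrenheit {
        Fahrenheit(c.0 * 9.0 / 5.0 + 32.0)
    }
}

fn main() {
    // Conversion flows through the blanket impl, exactly as with std's Into.
    let f: Fahrenheit = Celsius(100.0).my_into();
    println!("{}", f.0);
}
```

Marking the blanket impl's method `default fn` would be the change requested here, so that a more specific `MyInto` (or `Into`) impl could take over.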
C-enhancement,T-libs-api,A-specialization
low
Minor
341,936,639
TypeScript
Allow inferring rest element types in conditional types involving tuples
## Search Terms rest element infer tuple ## Suggestion Currently, inferring single elements of tuple types is possible using the `infer` keyword: ```ts type FirstArg<T extends any[]> = T extends [infer R, ...any[]] ? R : T extends [] ? undefined : never; type T1 = FirstArg<[number, 2, 3]>; // number type T2 = FirstArg<[1, 2, 3]>; // 1 type T3 = FirstArg<["", 2]>; // "" type T4 = FirstArg<[]>; // undefined ``` However it is not possible to infer the type of the remaining arguments in one go, except by resorting to functions: ```ts type RestArgs<T extends any[]> = T extends [any, infer R[]] ? R : // this does not work - no way to specify that R should be an array! T extends [any] ? [] : never; // this does type RestArgs<T extends any[]> = ((...args: T) => void) extends ((first: any, ...rest: infer S1) => void) ? S1 : T extends [infer S2] ? [] : T extends [] ? [] : never; type T1 = RestArgs<[1,2,3]>; // [2, 3] type T2 = RestArgs<[1,2]>; // [2] type T3 = RestArgs<[1]>; // [] type T4 = RestArgs<[]>; // [] ``` I would like to see the possibility to infer rest types in tuples, e.g. like this (square brackets): ```ts type RestArgs<T extends any[]> = T extends [any, infer R[]] ? R : never; ``` or like this (3 dots) ```ts type RestArgs<T extends any[]> = T extends [any, infer ...R] ? R : never; ``` ## Checklist My suggestion meets these guidelines: * [x] This wouldn't be a breaking change in existing TypeScript / JavaScript code * [x] This wouldn't change the runtime behavior of existing JavaScript code * [x] This could be implemented without emitting different JS based on the types of the expressions * [x] This isn't a runtime feature (e.g. new expression-level syntax)
Suggestion,In Discussion
medium
Critical
341,946,141
TypeScript
Allow more constructs to work as type guards for `unknown`
## Search Terms unknown type guard Related: https://github.com/Microsoft/TypeScript/pull/24439#issuecomment-394185089, https://github.com/Microsoft/TypeScript/issues/25172 ## Suggestion Currently, only a very limited set of type guards are able to narrow the new `unknown` type: * Array.isArray (because it is a typeguard for `arg is any[]`) and probably some more in the lib files * instanceof * self-written typeguards However to make working with unknown types less awkward, I'd like to see a couple of other constructs being able to narrow the `unknown` type: ```ts let x: unknown; // Direct equality should narrow to the type we compare to x === "1"; // should narrow x to string or the literal "1" type, similar for other types aswell // All these should narrow x to {prop: any} "prop" in x; x.prop != null; x.prop !== undefined; typeof x.prop !== "undefined"; // typeof should work on properties of the unknown variable typeof x.prop === "string"; // should narrow x to {prop: string} ``` ## Use Cases Make `unknown` easier to work with! ```ts // before, very verbose! const x: unknown = undefined!; function hasProp1(x: any): x is {prop1: any} { return "prop1" in x; } function hasProp2(x: any): x is {prop2: any} { return "prop2" in x; } // imagine doing this for many more properties if (hasProp1(x)) { x.prop1; if (hasProp2(x)) { x.prop2; } } // =========== // after, much more concise and less overhead const x: unknown = undefined!; if ("prop1" in x) { x.prop1; if ("prop2" in x) { x.prop2; } } ``` ## Checklist My suggestion meets these guidelines: * [x] This wouldn't be a breaking change in existing TypeScript / JavaScript code * [x] This wouldn't change the runtime behavior of existing JavaScript code * [x] This could be implemented without emitting different JS based on the types of the expressions * [x] This isn't a runtime feature (e.g. new expression-level syntax)
Suggestion,Awaiting More Feedback
high
Critical
341,980,888
node
cluster: forked children eagerly ignore/replace inspect port from cluster settings
* **Version**: 8.11.3 * **Platform**: Linux 4.13.0-46-generic #<span></span>51-Ubuntu SMP Tue Jun 12 12:36:29 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux * **Subsystem**: cluster ## The Setup Here's a small example that can be used to show this behavior: ```js const cluster = require('cluster'); const args = [ process.argv[0], ...process.execArgv, ...process.argv.slice(1) ], prefix = cluster.isMaster ? '[MASTER]' : '[WORKER]'; console.log(prefix, ...args); if (cluster.isMaster) { const options = { execArgv: [], args: ['worker'] }; if (process.execArgv.some(arg => arg.startsWith('--inspect'))) { options.execArgv.push('--inspect=20002') } cluster.setupMaster(options); cluster.fork(); } ``` In this example, when the master process has been started with any of the `--inspect` arguments, we want to start the worker with the inspector active on a _specific_ port. Here's some example output: ``` $ node --inspect=10001 cluster.js master Debugger listening on ws://127.0.0.1:10001/cb5774d6-7c8b-4a91-93fd-4c700a5349f4 For help see https://nodejs.org/en/docs/inspector [MASTER] /usr/bin/node --inspect=10001 cluster.js master Debugger listening on ws://127.0.0.1:10002/ffb4f474-3e4d-42c7-a8d7-45f0b19d9924 For help see https://nodejs.org/en/docs/inspector [WORKER] /usr/bin/node --inspect=20002 --inspect-port=10002 cluster.js worker ``` What can be seen, here, is that the worker is forked with the configured argument from the cluster options' `execArgv`; but, the `--inspect-port=10002` argument is _also_ present, though it wasn't configured. ## The References ### The `cluster` Module Code The code that does this appears to be the `createWorkerProcess(id, env)` function at [/lib/internal/cluster/master.js:101](https://github.com/nodejs/node/blob/v8.x/lib/internal/cluster/master.js#L101). From this, we can see that this behavior appears to be intentional, at least as far as what the code says. Here's what I read from there: 1. 
1. If any of the cluster's configured `execArgv` arguments look like `--inspect`, `--inspect-brk`, `--inspect-port` or `--debug-port`...
   1. If an `inspectPort` is defined in the cluster settings...
      1. Get its value or run it as a getter function, saving the value as `inspectPort`.
   2. ...else...
      1. Increment the master's debug port and save that value as `inspectPort`.
   3. Push `` `--inspect-port=${inspectPort}` `` into the worker's `execArgv` array.

### The Documentation

In [the CLI docs](https://nodejs.org/dist/latest-v8.x/docs/api/cli.html#cli_inspect_host_port), all three of `--inspect`, `--inspect-brk` and `--inspect-port` are indicated to support the `[=[host:]port]` value. However, _conceptually_, based on what they each do, it seems like they would **never be used together**.

In [the `cluster` docs](https://nodejs.org/dist/latest-v8.x/docs/api/cluster.html#cluster_cluster_settings), `settings.inspectPort` is indicated to set the inspect port of the worker.

## The Problems

So, here are the problems, as I see them:

* Initially, without reading the code, it would seem that setting `settings.inspectPort` would allow one to configure the inspect port; **however**, that setting _is completely ignored_ unless `execArgv` already has one of the inspect arguments present (which _could be_ `--inspect-port`, resulting in two of the same argument).
* When there _is_ an inspect argument present in `execArgv`, the selected `inspectPort` value ultimately results in the addition of the `--inspect-port` option; **however**, in the documentation, that argument is indicated to only configure what the inspect port _will_ be when/if inspection is latently activated.
* While it seems, conceptually, that each of the inspect arguments would be mutually exclusive in practice, understandably there must be an order of precedence when they are combined; **however**, the `cluster` code _explicitly combines them_ by intentionally adding `--inspect-port` when it _knows_ there's already an inspect argument present.

Ultimately, the only option one has for configuring the inspect port of a worker is to **both** (1) add one of the inspect arguments _and_ (2) set `settings.inspectPort`. This will result in `execArgv` having something like `--inspect --inspect-port=####`. In that case, one might as well leave off the port from the original argument, as it will be overridden by the added one.

All of this is fairly confusing to me. Why would the `cluster` code intentionally combine inspect arguments? If there is a precedence, it's opaque (i.e. it's not indicated in the documentation). Is this operating as expected? Am I just missing something and thinking about this "wrong"?

------

P.S. The code for this [in **10.x**](https://github.com/nodejs/node/blob/v10.x/lib/internal/cluster/master.js#L102), while it has received some changes, does still appear to exhibit this behavior -- though I have not tested it myself.
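The port-selection behavior described above can be condensed into a small stand-alone sketch. To be clear, this is only my reading of `createWorkerProcess()` re-implemented for illustration -- it is not the actual Node.js source:

```javascript
// Illustrative re-implementation of the port-selection behavior described
// above (my reading of lib/internal/cluster/master.js); not the real code.
function workerExecArgv(settings, masterDebugPort) {
  const execArgv = [...(settings.execArgv || [])];
  const hasInspectArg = execArgv.some(
    arg => /^(--inspect(-brk|-port)?|--debug-port)(=|$)/.test(arg));
  if (hasInspectArg) {
    let inspectPort;
    if (settings.inspectPort != null) {
      inspectPort = typeof settings.inspectPort === 'function'
        ? settings.inspectPort()
        : settings.inspectPort;
    } else {
      // No inspectPort in the settings: derive one from the master's port.
      inspectPort = masterDebugPort + 1;
    }
    // Appended even though an inspect argument is already present.
    execArgv.push(`--inspect-port=${inspectPort}`);
  }
  return execArgv;
}

// Matches the output above: --inspect=20002 is kept *and* an --inspect-port
// derived from the master's port (10001) is appended.
console.log(workerExecArgv({ execArgv: ['--inspect=20002'] }, 10001));
// settings.inspectPort alone is ignored, since no inspect argument is present.
console.log(workerExecArgv({ inspectPort: 30000 }, 10001));
```

With both pieces in place (`--inspect` in `execArgv` *and* `settings.inspectPort`), the pushed `--inspect-port` wins, which appears to be the only reliable way to pick the worker's port today.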
cluster
low
Critical
342,004,068
deno
Capability-based security model
I don't know if anyone here has considered using a capability-based security model in Deno, but when I read this paragraph in the Readme I thought that such a model would be perfect for this sort of thing:

> File system and network access can be controlled in order to run sandboxed code. Defaults to read-only file system access and no network access. Access between V8 (unprivileged) and Golang (privileged) is only done via serialized messages defined in this protobuf.

It is already based on message passing, and a possibility to drop privileges may be needed in the future (e.g. a program runs with network access and wants to run a module but without giving it network access).

There have been some interesting operating systems based on capabilities (search for GNOSIS, KeyKOS, EROS, CapROS and Coyotos for good info) where some complexities were needed to model entire file system and user privileges with capabilities, which didn't map well to the traditional Unix permission model. But here, as I understand it, all we need is to control which privileged code an unprivileged piece of code can run, and then let the OS do the rest of the permission checking, so most of the complexities that are inherent to designing a security model for an operating system are not relevant here.

We can start a program with certain capabilities (like the read-only file system access by default, plus some additional ones turned on by command-line switches if needed), and those capabilities could be passed (or not passed, which is really most important here) to certain modules used by the main program. Those modules, running as unprivileged code (V8), would pass the capabilities along the serialized messages to the privileged code (previously Golang, now Rust), and the privileged code would either allow or reject such a request for a system call or other functionality that requires a certain set of capabilities.
Since Deno is already using message passing between unprivileged and privileged code, and since the idea to disallow certain permissions from the running code seems to be widely supported, maybe a capability-based security model would work well here.

Some resources:

- Wikipedia:
  - https://en.wikipedia.org/wiki/Capability-based_security
  - https://en.wikipedia.org/wiki/Object-capability_model
- Articles on Capability Theory by Norman Hardy:
  - http://www.cap-lore.com/CapTheory/
- Capsicum: practical capabilities for UNIX:
  - https://www.usenix.org/legacy/event/sec10/tech/full_papers/Watson.pdf
- The KeyKOS Architecture:
  - http://webarchive.loc.gov/all/20011116155616/http://www.cis.upenn.edu/~keykos/osrpaper.html
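To make the idea concrete, here is a purely hypothetical sketch -- none of these names exist in Deno; they only model the capability attenuation described above:

```typescript
// Hypothetical capability model: unprivileged code can only *drop*
// capabilities before handing them to a module; the privileged side
// checks the set carried by each serialized message.
type Capability = "netAccess" | "fsRead" | "fsWrite";

interface Message {
  op: string;
  caps: ReadonlySet<Capability>;
}

// Attenuation: return a copy of the capability set minus the dropped ones.
// There is no operation that *adds* a capability to a set.
function attenuate(
  caps: ReadonlySet<Capability>,
  drop: Capability[],
): ReadonlySet<Capability> {
  const next = new Set(caps);
  for (const c of drop) next.delete(c);
  return next;
}

// Privileged side: allow the operation only if the required capability
// was passed along with the message.
function allowed(msg: Message): boolean {
  switch (msg.op) {
    case "fetch":
      return msg.caps.has("netAccess");
    case "readFile":
      return msg.caps.has("fsRead");
    default:
      return false;
  }
}

// A program started with network access runs a module without it:
const programCaps = new Set<Capability>(["fsRead", "netAccess"]);
const moduleCaps = attenuate(programCaps, ["netAccess"]);
console.log(allowed({ op: "readFile", caps: moduleCaps }));
console.log(allowed({ op: "fetch", caps: moduleCaps }));
```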
permissions,suggestion
medium
Major
342,021,743
youtube-dl
could not send head request and http error 403
## Please follow the guide below

- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)
- Use the *Preview* tab to see what your issue will actually look like

---

### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.07.10*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2018.07.10**

### Before submitting an *issue* make sure you have:
- [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
- [x] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser

### What is the purpose of your *issue*?
- [x] Bug report (encountered problems with youtube-dl)
- [x] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [x] Question
- [ ] Other

---

### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your *issue*

---

### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows:

Add the `-v` flag to **your command line** you run youtube-dl with (`youtube-dl -v <your command line>`), copy the **whole** output and insert it here.
It should look similar to one below (replace it with **your** log inserted between triple ```):

```
Microsoft Windows [Version 6.3.9600]
(c) 2013 Microsoft Corporation. All rights reserved.

C:\Users\censored\Desktop\youtube-dl>youtube-dl -v https://hls2.videos.sproutvideo.com/31b0a8c7e2b3dd5a707caefdd17562dd/fc01da51d04c219899737f860bffcf8e/video/1080.m3u8
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['-v', 'https://hls2.videos.sproutvideo.com/31b0a8c7e2b3dd5a707caefdd17562dd/fc01da51d04c219899737f860bffcf8e/video/1080.m3u8']
[debug] Encodings: locale cp1252, fs mbcs, out cp437, pref cp1252
[debug] youtube-dl version 2018.07.10
[debug] Python version 3.4.4 (CPython) - Windows-8.1-6.3.9600
[debug] exe versions: none
[debug] Proxy map: {}
[generic] 1080: Requesting header
WARNING: Could not send HEAD request to https://hls2.videos.sproutvideo.com/31b0a8c7e2b3dd5a707caefdd17562dd/fc01da51d04c219899737f860bffcf8e/video/1080.m3u8: HTTP Error 403: Forbidden
[generic] 1080: Downloading webpage
ERROR: Unable to download webpage: HTTP Error 403: Forbidden (caused by HTTPError()); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
```
```
  File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp4etivjk5\build\youtube_dl\extractor\common.py", line 599, in _request_webpage
  File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp4etivjk5\build\youtube_dl\YoutubeDL.py", line 2211, in urlopen
  File "C:\Python\Python34\lib\urllib\request.py", line 470, in open
  File "C:\Python\Python34\lib\urllib\request.py", line 580, in http_response
  File "C:\Python\Python34\lib\urllib\request.py", line 508, in error
  File "C:\Python\Python34\lib\urllib\request.py", line 442, in _call_chain
  File "C:\Python\Python34\lib\urllib\request.py", line 588, in http_error_default
```

---

### If the purpose of this *issue* is a *site support request* please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**):
- Single Video: http://indemandcareer.com/blueprint/5-digital-marketing-the-website/
- Playlist: Same link as above. Not sure if that is considered a playlist on the right side of the page, but they provide links like a table of contents. Each section leads to a web page that provides the table of contents. And each link/section has at least one video.

---

### Description of your *issue*, suggested solution and other information

Remade issue since it was 'closed' before I could reply. Someone closed it right after they replied. How is that enough time to reply?

Source URL: http://indemandcareer.com/blueprint

Here is the log file:

```
[06/17/18 03:45:22] WARNING: Could not send HEAD request to https://hls2.videos.sproutvideo.com/31b0a8c7e2b3dd5a707caefdd17562dd/fc01da51d04c219899737f860bffcf8e/video/1080.m3u8: HTTP Error 403: Forbidden
[06/17/18 03:45:22] ERROR: Unable to download webpage: HTTP Error 403: Forbidden (caused by HTTPError()); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
```
I tried using cookies or login information and it didn't work. Please give me some ideas to try. Who can I PM to give them my login for testing?
account-needed
medium
Critical
342,031,136
go
cmd/go: add 'go release'
Collecting things we would want a 'go release' command to do:

- API-level compatibility checks
- warning about dependence on prerelease or pseudo-versions
- checking that root/go.mod and root/v2/go.mod don't both say `module foo/v2`
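For the last item, a rough sketch of the check might look like the following (hypothetical helper names, not the actual cmd/go implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// modulePath extracts the path from the `module` directive of go.mod content.
func modulePath(gomod string) string {
	for _, line := range strings.Split(gomod, "\n") {
		line = strings.TrimSpace(line)
		if strings.HasPrefix(line, "module ") {
			return strings.Trim(strings.TrimSpace(strings.TrimPrefix(line, "module ")), `"`)
		}
	}
	return ""
}

// conflict reports whether the root and root/v2 go.mod files both declare
// the same module path (e.g. both saying `module foo/v2`).
func conflict(rootGomod, v2Gomod string) bool {
	root, v2 := modulePath(rootGomod), modulePath(v2Gomod)
	return root != "" && root == v2
}

func main() {
	fmt.Println(conflict("module foo/v2\n\ngo 1.11\n", "module foo/v2\n")) // both claim foo/v2
	fmt.Println(conflict("module foo\n", "module foo/v2\n"))               // distinct paths
}
```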
NeedsInvestigation,modules
high
Critical
342,032,072
flutter
MediaQuery.viewInsets.bottom changes instantly when keyboard opens
## Steps to Reproduce

```dart
import 'package:flutter/material.dart';

void main() => runApp(MaterialApp(
      home: MyApp(),
    ));

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      body: Column(
        children: <Widget>[
          Expanded(child: Container(color: Colors.yellow.withOpacity(0.5))),
          Container(
            color: Colors.grey,
            child: Padding(
              padding: const EdgeInsets.symmetric(vertical: 12.0),
              child: TextField(
                decoration: InputDecoration.collapsed(hintText: ''),
              ),
            ),
          ),
        ],
      ),
    );
  }
}
```

1. Start this simple app on an iOS device
2. Tap the text field at the bottom
3. See the text field snap into place

## Description

The text field goes to its final position instantly because MediaQuery.viewInsets.bottom changes instantly to its final value when the keyboard opens. It would be nice if MediaQuery.viewInsets.bottom animated at the speed of the keyboard. The snap of the text field makes the UI feel very unnatural.

An example from a production app (it is a Flutter app): https://drive.google.com/file/d/1CA8VXp14-_JihOT-xwafg16ahHQRbSZb/view?usp=sharing
c: new feature,framework,a: animation,f: material design,a: fidelity,P2,team-design,triaged-design
low
Minor
342,036,111
TypeScript
`@callback` is only generic after `@template` tag
**TypeScript Version:** master

<!-- Search terms you tried before logging this (so others can find this issue more easily) -->

**Search Terms:**

**Code**

Our declaration:

```ts
export interface JSDocSignature extends JSDocType, Declaration {
  kind: SyntaxKind.JSDocSignature;
  typeParameters?: ReadonlyArray<JSDocTemplateTag>;
  ...
}
```

Attempt to use:

```js
/**
 * @callback Cb
 * @template T
 * @param {T} p
 * @return {T}
 */

/** @type {Cb} */
const x = p => p;
```

**Expected behavior:** `typeParameters` is set somewhere, and `type Cb = <T>(p: T) => T` (or `type Cb<T> = (p: T) => T`)

**Actual behavior:**

1. `typeParameters` is not set.
2. `Cb` says that it requires a generic argument, but when provided, there's an error on `x`: 'any => any' isn't assignable to 'Cb<number>'.
3. Requesting quickinfo on `Cb/**/<number>` causes an infinite recursion in `getTypePredicateOfSignature`.
Bug,Domain: JSDoc,checkJs,Domain: JavaScript
low
Critical
342,037,252
vue
Possible memory leak when v-for in development mode
### Version
2.5.16

### Reproduction link
[https://codepen.io/anon/pen/KBMaOY](https://codepen.io/anon/pen/KBMaOY)

### Steps to reproduce
- Open the codepen https://codepen.io/anon/pen/gjMgzG
- Click a couple of times to list 0 items, then 1000
- See the memory increasing
- Try forcing GC (trash icon) before measuring
- Take heap snapshots in Chrome's "Memory" tab
- Watch the memory usage in the Chrome task manager (Shift+Esc)
- Watch the memory usage in the OS task manager

### What is expected?
Same memory usage after the garbage collector runs

### What is actually happening?
Although the "Performance" tab displays the same memory usage, the "Memory" heap snapshot shows a memory increase. Also, both the Chrome and OS task managers show the memory only increasing.

---

It gets worse when:
- Using VueI18n (even without translating)
- Having big children components

Tested on:
- Windows 7 64 bits
- Chrome 67.0.3396.99 64 bits

Please consider that:
- I may be doing something wrong
- It can be a Chrome issue
- It can be an OS issue

<!-- generated by vue-issues. DO NOT REMOVE -->
improvement
low
Major
342,039,444
TypeScript
Unreachable code defaults
Apparently, since 2.9.1, unreachable code is allowed by default; you have to explicitly set `allowUnreachableCode` to `false` in `tsconfig.json` to get the old behavior. This doesn't match what's described [in the docs](https://www.typescriptlang.org/docs/handbook/compiler-options.html).

![image](https://user-images.githubusercontent.com/3883992/42838636-538d9dde-89cf-11e8-9df0-e077da8d5927.png)

Quick repro:

package.json
```json
{
  "name": "tmp",
  "version": "1.0.0",
  "main": "index.js",
  "license": "MIT",
  "dependencies": {
    "typescript": "2.9.1"
  }
}
```

tsconfig.json (default from tsc --init)
```
{
  "compilerOptions": {
    "target": "es5",
    "module": "commonjs",
    "strict": true,
    "esModuleInterop": true
  }
}
```

index.ts
```
export const a = (): boolean => {
  return true;
  return false;
};
```

`yarn tsc` outputs no warning as of 2.9.1
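For reference, adding the flag explicitly brings the error back. As far as I can tell, leaving `allowUnreachableCode` unset yields editor-only suggestions, while an explicit `false` raises compiler errors:

```json
{
  "compilerOptions": {
    "target": "es5",
    "module": "commonjs",
    "strict": true,
    "esModuleInterop": true,
    "allowUnreachableCode": false
  }
}
```

With this tsconfig, `yarn tsc` reports the unreachable `return false;` in index.ts as an error.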
Docs
low
Major
342,040,000
go
proposal: os/exec/v2: follow Context guidelines
# Proposal: Make `exec` follow guidelines for using `Context` objects

Author(s): Casey Barker

Last updated: 2018-07-17

## Abstract

This proposal is to add `cmd.StartCtx()` and `cmd.RunCtx()` methods to the `exec` package, eventually deprecating `exec.CommandContext()`, and bringing it into compliance with the guidelines for use of `Context` objects.

## Background

The documentation for the `context` package says:

> Do not store Contexts inside a struct type; instead, pass a Context explicitly to each function that needs it. The Context should be the first parameter, typically named ctx.

The `exec.CommandContext()` function breaks this rule; it stores the `ctx` parameter inside the `Cmd` object until later referenced by the `cmd.Start()` method (which is also called by `cmd.Run()`). A caller of `cmd.Start()` or `cmd.Run()` has no control over which context gets used in the execution.

Issue originally raised here: https://groups.google.com/forum/#!topic/golang-nuts/uvJIogNTD2c

Looking at the development history for the `exec` package's use of `context`, I suspect this inconsistency arose accidentally, as the `Cmd.ctx` field was used differently early in development.

## Proposal

1) Add the following two new methods to the `exec` package:

```
func (c *Cmd) StartCtx(ctx context.Context) error
func (c *Cmd) RunCtx(ctx context.Context) error
```

These two methods honor the passed-in context, rather than any context that might be attached to the `Cmd`.

2) Deprecate the following function in the `exec` package:

```
func CommandContext(ctx context.Context, name string, arg ...string) *Cmd
```

Once the deprecation is complete, the private `ctx` field could be removed from the `Cmd` object.

## Rationale

The recommendations provided by the `context` package are solid; the context object should be passed at the time and place where a long-running operation is started.
This makes it clear which calls are long-running, and it allows the source object (in this case, the `Cmd`) to be created without needing to know about the `ctx` that might eventually be used to control the execution.

## Compatibility

Part 1 (adding two new methods) does not break compatibility, although it introduces some possible confusion in that the new methods would ignore the context provided if `exec.CommandContext()` were initially used to create the `Cmd` object.

Part 2 (deprecating the `exec.CommandContext()` function) is a hard break in compatibility.

## Implementation

TBD. This is my first proposal and I'm not yet familiar with the Go release process, but I'm willing to provide a patch if this proposal is acceptable.
v2,Proposal
low
Critical
342,047,670
vscode
Improve readability of diagnotic hovers
The number one annoyance I have and hear from people about VS Code + TypeScript is that type errors are hard to read. TypeScript will display nested type errors with each level indented, but because of the line wrapping, that indentation is basically lost. This makes the error hard to follow and, in consequence, hard to solve.

![image](https://user-images.githubusercontent.com/10532611/42839855-dbc732a2-89b9-11e8-841d-f0b66f3a62fb.png)

Here are things that I think would improve the readability:

- Add an option to enable horizontal scrolling in hovers. This would mean the indentation would actually have the intended visual effect.
- Make the width of the hover configurable. I have way more space on my screen that it could utilize.
- Add smart wrapping that preserves indentation.
- Add markdown support to diagnostics and make TypeScript output markdown. This would mean not the whole diagnostic has to be rendered in a monospace font, therefore saving horizontal space.
feature-request,languages-diagnostics
low
Critical
342,112,060
kubernetes
TokenReview audience support for nodes
Once https://github.com/kubernetes/kubernetes/pull/62692 merges, we should add support for audiences in the kubelet's token review. This is important as part of the node isolation efforts, as the node could otherwise use tokens sent with requests to the node's endpoints (e.g. stats requests) to send requests to other nodes.

I think the node should accept 3 types of audiences:

- legacy (no audience) - required for backwards compatibility
- `system:nodes` - token used for all nodes
- `system:node:<nodeName>` - per-node token, preferred

/kind feature
/sig auth
/sig node
/priority important-longterm
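For illustration, a kubelet-side review of a per-node token might then look like the following TokenReview. The exact shape depends on how the audiences field lands in the linked PR, so treat the field names and the example node name as assumptions:

```yaml
apiVersion: authentication.k8s.io/v1
kind: TokenReview
spec:
  token: "<bearer token presented with the stats request>"
  # Audiences this kubelet is willing to accept; a token minted for another
  # node's audience would fail validation here and could not be replayed.
  audiences:
  - "system:node:node-1"
  - "system:nodes"
```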
sig/node,kind/feature,sig/auth,priority/important-longterm,lifecycle/frozen
medium
Major
342,187,953
flutter
Feature Request for `SliverBottomAppBar`
**Problem:** When using SlideTransition to slide the BottomAppBar off screen on SliverList scroll (the SliverList is inside a NestedScrollView with a SliverAppBar), the animation works, but the SliverList does not occupy the remaining space where the BottomAppBar once was.

**Intended behaviour:** https://material.io/design/components/app-bars-bottom.html#behavior ("Scrolling" paragraph)

**Examples:** How it looks when the SlideTransition finishes and the BottomAppBar is no longer visible:

![screenshot_20180717-143740](https://user-images.githubusercontent.com/11775678/42862423-17f71ea0-8a68-11e8-8601-df8e32c4f833.jpg)

**"Hack":** To achieve the intended effect I use SizeTransition instead of SlideTransition with _axis: Axis.vertical_ and _axisAlignment: -1.0_, but that for some reason removes the BottomAppBar elevation, and the FAB moves down when the BottomAppBar changes size.

**Feature request:** There could be a SliverBottomAppBar widget with _floating/snap/pinned_ properties, and it could react to scrolling of a CustomScrollView the same way SliverAppBar does.
c: new feature,framework,f: material design,f: scrolling,customer: crowd,P3,team-design,triaged-design
medium
Critical