id | repo | title | body | labels | priority | severity
---|---|---|---|---|---|---
437,353,486 | godot | Export var ruins Instantiated objects export variables | **Godot version:**
Godot_v3.1-stable_win64
**Issue description:**
In Main.tscn, an instance of object.tscn has a variable named `example` set to 30.
Here's a line from object.gd,
export var example = 42
I found that removing "export" from the above line and then saving the project wipes the `example` variable override on every instanced object.tscn.
Adding "export" back and saving again then reverts every `example` variable to 42 instead of 30.
This is a very serious issue for two reasons: there is no undo for it, and it destroys real work. I had built levels with export variables set, made the mistake of removing "export" in code, saved the project, and it messed up my levels.
**Steps to reproduce:**
Open bug_showcase.zip
Look at Node2D in Main.tscn first to see the number 30
Remove "export" from line 3 in object.gd.
Save.
Then put back "export".
Save.
You will see, by inspecting Node2D in Main.tscn, that 30 has been replaced by 42.
**Minimal reproduction project:**
[bug_showcase.zip](https://github.com/godotengine/godot/files/3118411/bug_showcase.zip)
| enhancement,topic:gdscript | low | Critical |
437,375,005 | create-react-app | Using `BigInt` primitives | ### Is this a bug report?
No
### Did you try recovering your dependencies?
n/a
### Which terms did you search for in User Guide?
`bigint`, `babel plugin`,`babel syntax macro`, `"Identifier directly after number"`
### Environment
n/a
### Steps to Reproduce
1. Use a BigInt primitive, e.g., `42n`
2. Define supported browsers in package.json to be `last 1 chrome version`. I.e., ignore targets that haven't implemented BigInts yet.
### ~~Expected~~ Desired Behavior
A way to use BigInt primitives in CRA without ejecting.
`BigInt` ([Spec](https://tc39.es/proposal-bigint/)) is currently at [Stage 4](https://github.com/tc39/proposal-bigint), with support in Node (since 10.4) and all major desktop and mobile browsers [Caniuse details](https://caniuse.com/#feat=bigint).
*Side note: You can get partial support by declaring the global (`/* global BigInt */`) if each `BigInt` is wrapped (`BigInt(42)` or `BigInt('42')`). However this still means that a code rewrite will be needed once full support lands to remove the clunky syntax.*
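For concreteness, that partial-support workaround looks something like this (the value here is arbitrary):
```js
/* global BigInt */
// Works in CRA today, but only through the wrapper function,
// not the native 42n literal syntax.
const big = BigInt('9007199254740993') * BigInt(2);
console.log(big.toString());
```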
### Actual Behavior
Syntax error:
`Identifier directly after number`
### Reproducible Demo
```js
const foo = 42n;
```
(Edited to reflect this proposal has moved to Stage 4 and has additional browser support and removed reference to the Babel plugin which is no longer necessary.) | issue: proposal | low | Critical |
437,387,953 | go | cmd/cgo: C-Shared program crashes with "fatal: morestack g0" |
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.11.2 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/joshua/Library/Caches/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/joshua/go:/Users/joshua/src/repo004/GOPATH"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/cross-rev005/go-1.11.2"
GOTMPDIR=""
GOTOOLDIR="/usr/local/cross-rev005/go-1.11.2/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/vx/tlkd5xy56h711dyw9xr3dzch000136/T/go-build534324521=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>
### What did you do?
It's a convoluted situation...
I've created a `c-shared` dylib that under the hood uses cgo to call into another dylib, which in turn makes calls back into Go. At the top level of this, I have written a C program that calls the Go library:
```c
#include "stuff.h"
#include <stdio.h>

int main() {
    libbbtevi_error_t err;
    libbbtevi_add_archive_to_filesystem("<a path to a file on disk>", 3, 6, &err);
    printf("%s\n", err.msg);
    return 0;
}
```
`libbbtevi_add_archive_to_filesystem` is just a wrapper around a Go function `AddISYSArchiveToFilesystem`
```go
//export libbbtevi_add_archive_to_filesystem
func libbbtevi_add_archive_to_filesystem(
bldb_path *C.char,
fs_id C.int64_t,
archive_id C.int64_t,
e *C.libbbtevi_error_t,
) C.libbbtevi_return_t {
// Set up error handling
var err error
defer func() {
writeEVIError(e, err)
}()
// Validate parameters
if bldb_path == nil {
return C.LIBBBTEVI_RETURN_INVALID_PARAMETER
}
err = fs.AddISYSArchiveToFilesystem(C.GoString(bldb_path), int64(fs_id), int64(archive_id))
if err != nil {
return C.LIBBBTEVI_RETURN_FAILURE
}
return C.LIBBBTEVI_RETURN_SUCCESS
}
```
I also wrote a Go program that just calls `AddISYSArchiveToFilesystem` directly:
```go
package main
import (
"bbgo_utils2/fs"
"log"
)
func main() {
path := "<a path to a file on disk>"
err := fs.AddISYSArchiveToFilesystem(path, 3, 6)
if err != nil {
log.Fatal(err)
}
log.Println("Done")
}
```
### What did you expect to see?
Both programs run without error
### What did you see instead?
The Go program runs to completion, without issue
The C program crashes with:
```
fatal: morestack on g0
Trace/BPT trap: 5
```
if run in `lldb` the following backtrace is available:
```
fatal: morestack on g0
Process 93693 stopped
* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BREAKPOINT (code=EXC_I386_BPT, subcode=0x0)
frame #0: 0x0000000100129bf2 libbbtgo.dylib`runtime.abort at asm_amd64.s:840
837 TEXT runtime·abort(SB),NOSPLIT,$0-0
838 INT $3
839 loop:
-> 840 JMP loop
841
842 // check that SP is in range [g->stack.lo, g->stack.hi)
843 TEXT runtime·stackcheck(SB), NOSPLIT, $0-0
Target 0: (isystester) stopped.
(lldb) bt
* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BREAKPOINT (code=EXC_I386_BPT, subcode=0x0)
* frame #0: 0x0000000100129bf2 libbbtgo.dylib`runtime.abort at asm_amd64.s:840
frame #1: 0x00000001001282a5 libbbtgo.dylib`runtime.morestack at asm_amd64.s:397
frame #2: 0x0000000100127892 libbbtgo.dylib`runtime.exitsyscallfast.func1 at proc.go:3102
frame #3: 0x0000000100128236 libbbtgo.dylib`runtime.systemstack at asm_amd64.s:351
frame #4: 0x0000000100100a50 libbbtgo.dylib`runtime.startTheWorldWithSema + 608
frame #5: 0x0000000100104e15 libbbtgo.dylib`runtime.exitsyscall at proc.go:3015
frame #6: 0x00000001000d3878 libbbtgo.dylib`runtime.cgocallbackg at cgocall.go:191
frame #7: 0x0000000100129b5b libbbtgo.dylib`runtime.cgocallback_gofunc at asm_amd64.s:775
frame #8: 0x00000001001299f2 libbbtgo.dylib`runtime.asmcgocall at asm_amd64.s:622
frame #9: 0x00000001000d3712 libbbtgo.dylib`runtime.cgocall at cgocall.go:131
frame #10: 0x000000010056046b libbbtgo.dylib`isys._Cfunc_IGR_Extract_Subfile_Stream at _cgo_gotypes.go:198
frame #11: 0x00000001005673f4 libbbtgo.dylib`isys.(*SubfileEntry).Reader.func2 at subfile.go:85
frame #12: 0x0000000100565396 libbbtgo.dylib`isys.(*SubfileEntry).Reader at subfile.go:85
frame #13: 0x0000000100568eb0 libbbtgo.dylib`go4n6/logical/isysarchive.(*Collection).Entry at isysarchive.go:53
frame #14: 0x00000001005d3b91 libbbtgo.dylib`bbgo_utils2/fs.(*rdrRunBased).OpenID at reader.go:998
frame #15: 0x00000001005c9655 libbbtgo.dylib`bbgo_utils2/fs.AddISYSArchiveToFilesystem at fs.go:456
frame #16: 0x00000001005ede13 libbbtgo.dylib`main.libbbtevi_add_archive_to_filesystem at libbbtevi.go:785
frame #17: 0x00000001005e6151 libbbtgo.dylib`main._cgoexpwrap_8a09112867c2_libbbtevi_add_archive_to_filesystem at _cgo_gotypes.go:691
frame #18: 0x000000010012860b libbbtgo.dylib`runtime.call64 at asm_amd64.s:523
frame #19: 0x00000001000d3adf libbbtgo.dylib`runtime.cgocallbackg1 at cgocall.go:316
frame #20: 0x00000001000d3896 libbbtgo.dylib`runtime.cgocallbackg at cgocall.go:194
frame #21: 0x0000000100129b5b libbbtgo.dylib`runtime.cgocallback_gofunc at asm_amd64.s:775
frame #22: 0x000000010012a281 libbbtgo.dylib`runtime.goexit at asm_amd64.s:1333
``` | NeedsInvestigation,compiler/runtime | low | Critical |
437,394,989 | kubernetes | modifying namespace spec.finalizers via PATCH should return a validation error | **What happened**:
PATCH `{"spec":{"finalizers":null}}` doesn't clear the finalizers and also doesn't return an error.
**What you expected to happen**: It should return an error (use the /finalize subresource).
**How to reproduce it (as minimally and precisely as possible)**: see #77086
| kind/documentation,sig/api-machinery,lifecycle/frozen | low | Critical |
437,422,915 | flutter | Help understanding the profiling summary | I was following this documentation page to understand how to generate profile reports of my application: [Performance profiling](https://flutter.dev/docs/cookbook/testing/integration/profiling)
After that, I have been able to obtain a report of a test in my app that looks like this (the summary version):
```json
{
"average_frame_build_time_millis": 11.888272727272728,
"90th_percentile_frame_build_time_millis": 11.209,
"99th_percentile_frame_build_time_millis": 37.459,
"worst_frame_build_time_millis": 37.459,
"missed_frame_build_budget_count": 10,
"average_frame_rasterizer_time_millis": 4.998090909090909,
"90th_percentile_frame_rasterizer_time_millis": 6.575,
"99th_percentile_frame_rasterizer_time_millis": 6.728,
"worst_frame_rasterizer_time_millis": 6.728,
"missed_frame_rasterizer_budget_count": 0,
"frame_count": 11,
"frame_build_times": [
11209,
37459,
8001,
10131,
7925,
9552,
9601,
10334,
8129,
9054,
9376
],
"frame_rasterizer_times": [
1947,
4032,
3472,
5550,
4255,
6256,
6728,
6016,
6575,
4607,
5541
]
}
```
However, I don't understand what to do with that information, or how I could use it in my pipeline to fail the build when we degrade quality. Could you please give me some more details?
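To make the question concrete, this is the kind of CI gate I imagine adding (a sketch only; the summary file path and the thresholds are placeholders I made up):
```dart
// check_perf.dart (sketch; path and thresholds are placeholders)
import 'dart:convert';
import 'dart:io';

void main() {
  final file = File('build/scrolling_summary.timeline_summary.json');
  final summary = jsonDecode(file.readAsStringSync()) as Map<String, dynamic>;

  final failures = <String>[];
  if ((summary['average_frame_build_time_millis'] as num) > 12.0) {
    failures.add('average frame build time above 12 ms');
  }
  if ((summary['worst_frame_build_time_millis'] as num) > 40.0) {
    failures.add('worst frame build time above 40 ms');
  }
  if ((summary['missed_frame_build_budget_count'] as num) > 10) {
    failures.add('too many frames missed the build budget');
  }

  if (failures.isNotEmpty) {
    stderr.writeln('Performance regression: ${failures.join(', ')}');
    exit(1);
  }
  print('Performance within budget');
}
```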
Thank you so much! | c: new feature,tool,t: flutter driver,P3,team-tool,triaged-tool | low | Major |
437,443,238 | TypeScript | 3.4.1 regression in distributed types | **TypeScript Version:** 3.5.0-dev.20190425
The example below was taken from a real world use-case and simplified.
The `U` type is distributed while the `T` type remains the original union.
The `T` type is wrapped with `Box<T>` when `U` is not an object.
```ts
type Box<T> = { value: T }
type Test<T> = [T] extends [infer U]
? U extends object
? U
: U | Box<T>
: never
type A = number | { foo: number }
type B = Test<A>
```
In theory, the `B` type should be identical to:
```ts
type B = number | { foo: number } | Box<number | { foo: number }>
```
But in the latest version, the `B` type is instead identical to:
```ts
type B = number | { foo: number } | Box<number | (number & { foo: number })>
```
This works correctly in 3.3.x but not 3.4.x and later.
**Playground Link:** [click here](https://typescript-play.js.org/#code/C4TwDgpgBAQg9gDwDwBUB8UC8UDeUBuAhgDYCuEAXFClAL4BQok1EAzsKhtgNooC6UCAmAQAdgBNWUbgEtRAMwgAnKAFU+9KFAD8awcLGSocAEYArCAGNgmrTrW2tVVVAA+sRJ1tVREfMvpGcGgAQSwoUVIAWxNlN1woeTg4H2jYlQYmaBhwlDYOELQgA) | Needs Investigation,Domain: Conditional Types | low | Minor |
437,451,249 | TypeScript | [Feature request] allow use `showConfig` in tsconfig.json | <!-- 🚨 STOP 🚨 𝗦𝗧𝗢𝗣 🚨 𝑺𝑻𝑶𝑷 🚨
Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Please read the FAQ first, especially the "Common Feature Requests" section.
-->
## Search Terms
## Suggestion
Allow the `showConfig` option to be set in tsconfig.json, and have it print the resolved config to the console, along with the path of the tsconfig.json it came from.
## Use Cases
So I can make sure I know what the final config is for the current process, without having to run `tsc --showConfig`.
## Examples
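A sketch of how I imagine this could look (the option name mirrors the existing `tsc --showConfig` flag; it is not currently supported):
```jsonc
// tsconfig.json (proposed usage, not an existing option)
{
  "showConfig": true, // print the resolved config and the tsconfig.json path on every run
  "compilerOptions": {
    "strict": true,
    "outDir": "./dist"
  }
}
```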
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [ ] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [ ] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback | low | Critical |
437,467,780 | create-react-app | Why not use HashedModuleIdsPlugin | <!--
PLEASE READ THE FIRST SECTION :-)
-->
### Is this a bug report?
No
I was curious about how create-react-app implements long-term caching: how do you keep the file hashes stable when adding or removing files? I didn't see HashedModuleIdsPlugin added to the webpack configuration.
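For reference, this is the kind of configuration I would have expected to see somewhere (a plain webpack 4 sketch of my own, not taken from react-scripts):
```js
// webpack.config.js (illustrative only)
const webpack = require('webpack');

module.exports = {
  mode: 'production',
  output: {
    filename: '[name].[contenthash:8].js',
  },
  optimization: {
    // split the runtime and vendor code so app-only changes
    // don't invalidate the vendor chunk's hash
    runtimeChunk: 'single',
    splitChunks: { chunks: 'all' },
  },
  plugins: [
    // keep module ids stable across builds when files are added or removed
    new webpack.HashedModuleIdsPlugin(),
  ],
};
```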
| issue: needs investigation | low | Critical |
437,492,983 | storybook | Addon-docs: Source editing support | Add editing support to #6641
Details TBD | feature request,addon: storysource,addon: docs | low | Major |
437,494,180 | storybook | Addon-docs: styleguide support | Add doc blocks for:
- [x] Type
- [x] Colors
- [x] Icons
- [ ] Documentation
Relies on #6644 | feature request,addon: docs | low | Major |
437,545,458 | angular | Router scrolling does not work properly when dealing with content that is not immediately visible |
# 🚀 feature request
### Relevant Package
This feature request is for @angular/router
### Description
When using a routerLink fragment to scroll to an anchor, it might not work properly unless the element is already loaded (for example, a component that is only rendered once data from an API has been loaded).
Another problem is that the anchor might be pushed down in the viewport when data is loaded.
A simple example: https://angular-7fpprl.stackblitz.io/qwe#qwe
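For context, the navigation is set up roughly like this (a simplified sketch; `QweComponent`, the route name and the link are placeholders, and the component only renders its `#qwe` anchor after the API data arrives):
```ts
// app-routing.module.ts (simplified sketch)
import { NgModule } from '@angular/core';
import { RouterModule, Routes } from '@angular/router';
import { QweComponent } from './qwe.component'; // placeholder component

const routes: Routes = [{ path: 'qwe', component: QweComponent }];

@NgModule({
  imports: [
    RouterModule.forRoot(routes, {
      anchorScrolling: 'enabled',           // scroll to the URL #fragment
      scrollPositionRestoration: 'enabled',
    }),
  ],
  exports: [RouterModule],
})
export class AppRoutingModule {}
```
with a link such as `<a routerLink="/qwe" fragment="qwe">jump to anchor</a>` in a template.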
### Describe the solution you'd like
Perhaps there should be some wait mechanism in place that waits for the element to be rendered? I assume it would also need to have a configurable timeout setting.
Also if the anchor is pushed down by dynamic content then perhaps position should be updated?
### Describe alternatives you've considered
One option might be to create a custom directive that checks whether the element exists and then calls scrollIntoView(), but it feels like monkey-patching a feature that should be handled by the router. | type: bug/fix,freq3: high,area: router,state: confirmed,P3 | medium | Critical |
437,554,138 | go | archive/zip: cannot parse file header with compressed size or local file header offset of 0xffffffff | archive/zip misinterprets (I believe) APPNOTE.TXT 4.5.3, such that it wrongly requires a Zip64 Extended Information extra field to be present whenever the compressed size or local file header offset of a central directory header is exactly 0xffffffff. #14185 fixed the problem for the *un*compressed size as a special case, but really there is nothing special about the uncompressed size and all three fields should be treated equally.
APPNOTE.TXT 4.5.3 says:
> The order of the fields in the zip64 extended information record is fixed, but the fields MUST only appear if the corresponding Local or Central directory record field is set to 0xFFFF or 0xFFFFFFFF.
archive/zip [interprets](https://github.com/golang/go/blob/go1.12.4/src/archive/zip/reader.go#L305-L307) the statement as:
```
if a field is 0xffffffff:
require zip64 extended information to be present
```
But that logic is backwards—it's an "only if", not an "if". I think the interpretation should rather be
```
if zip64 extended information is present:
replace only those fields that are 0xffffffff
```
In other words, 0xffffffff, by itself, is not a magic value that indicates special handling is required. It is the presence of a Zip64 Extended Information extra field that indicates special handling, and only then does the value 0xffffffff become significant. 0xffffffff is a perfectly valid field value to have in a non-Zip64 file.
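To make the intended reading concrete, here is a small self-contained sketch of that logic (the parameter names and the `[]uint64` stand-in for the extra field's values are mine, not the real archive/zip internals):
```go
package main

import "fmt"

// resolveZip64 applies the "only if" reading described above: the 64-bit
// values from a Zip64 extra field, if one is present, are consumed in order,
// but only for those 32-bit fields that are set to 0xffffffff. A field equal
// to 0xffffffff with no Zip64 extra data simply keeps that literal value.
func resolveZip64(uncompressed, compressed, offset uint32, zip64 []uint64) (u, c, o uint64) {
	u, c, o = uint64(uncompressed), uint64(compressed), uint64(offset)
	next := func() (uint64, bool) {
		if len(zip64) == 0 {
			return 0, false
		}
		v := zip64[0]
		zip64 = zip64[1:]
		return v, true
	}
	if uncompressed == 0xffffffff {
		if v, ok := next(); ok {
			u = v
		}
	}
	if compressed == 0xffffffff {
		if v, ok := next(); ok {
			c = v
		}
	}
	if offset == 0xffffffff {
		if v, ok := next(); ok {
			o = v
		}
	}
	return u, c, o
}

func main() {
	// No Zip64 extra field: an offset of 0xffffffff is taken literally.
	fmt.Println(resolveZip64(0xffffffde, 0xffffffde, 0xffffffff, nil))
}
```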
I'm attaching a zip file that demonstrates the problem, [ffffffff.zip.gz.gz](https://github.com/golang/go/files/3120168/ffffffff.zip.gz.gz). (It is gzipped twice to reduce the size of the attachment, but the gzip layers have nothing to do with the issue and you should remove them before testing.) The zip file was produced by Info-ZIP Zip 3.0 and contains 2 files, with a maximum compressed/uncompressed size of 0xffffffde and a maximum local file header offset of 0xffffffff. Zip has decided to write a non-Zip64 zip file, as none of the values exceeds 0xffffffff. Info-ZIP UnZip 6.00 can parse the file, but archive/zip cannot. The sample file was created as follows:
```
# 216186 * 19867 = 0xffffffff - len("pad") - 30
dd if=/dev/zero bs=216186 count=19867 of=pad
echo test > test.txt
rm -f ffffffff.zip
zip -0 -X ffffffff.zip pad test.txt
gzip -9 < ffffffff.zip | gzip -9 > ffffffff.zip.gz.gz
```
archive/zip doesn't have a problem if the local file header appears one byte earlier or later—the easiest way to test that is to use a 2- or 4-byte filename instead of "pad" in the recipe above. In the former case it's because the value is 0xfffffffe and in the latter case it's because the value is 0xffffffff but Zip64 information is present.
For corroboration, see the function `getZip64Data` in process.c of [UnZip 6.00](https://sourceforge.net/projects/infozip/files/UnZip%206.x%20%28latest%29/UnZip%206.0/). It puts the Zip64 check *outside* the field value checks:
```
if (eb_id == EF_PKSZ64) {
if (G.crec.ucsize == 0xffffffff || G.lrec.ucsize == 0xffffffff){
```
Fixing this issue will allow removing the special case introduced in #14185 because it will be handled by the general case: a value of 0xffffffff means what it says, in the absence of a Zip64 extra field.
This issue is only a problem when reading a zip file, not when writing. archive/zip [currently](https://github.com/golang/go/blob/go1.12.4/src/archive/zip/zip_test.go#L253-L266) writes Zip64 information whenever a field is exactly 0xffffffff—that's probably a good idea for interoperability, even if it's not required. Compare [Zip 3.0](https://sourceforge.net/projects/infozip/files/Zip%203.x%20%28latest%29/3.0/)'s strict inequality (function `putend` in zipfile.c):
```
if( n > ZIP_UWORD16_MAX || s > ZIP_UWORD32_MAX || c > ZIP_UWORD32_MAX ||
```
with archive/zip's [non-strict inequality](https://github.com/golang/go/blob/go1.12.4/src/archive/zip/writer.go#L155):
```
if records >= uint16max || size >= uint32max || offset >= uint32max {
```
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.11.5 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes, I tried `go version devel +a62887aade Fri Apr 26 05:16:33 2019 +0000 linux/amd64`.
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
</pre></details>
### What did you do?
Put in ziplist.go:
```
package main
import (
"archive/zip"
"fmt"
"os"
)
func main() {
z, err := zip.OpenReader(os.Args[1])
if err != nil {
panic(err)
}
defer z.Close()
for _, f := range z.File {
fmt.Printf("0x%09x 0x%09x %+q\n", f.CompressedSize64, f.UncompressedSize64, f.Name)
}
}
```
Now run:
```
$ gzip -dc ffffffff.zip.gz.gz | gzip -dc > ffffffff.zip
$ go run ziplist.go ffffffff.zip
```
### What did you expect to see?
```
0x0ffffffde 0x0ffffffde "pad"
0x000000005 0x000000005 "test.txt"
```
### What did you see instead?
```
panic: zip: not a valid zip file
goroutine 1 [running]:
main.main()
ziplist.go:12 +0x202
exit status 2
``` | NeedsInvestigation | low | Minor |
437,557,370 | material-ui | Cleanup package scripts | We currently leverage the `scripts` entry in our package.json. As of right now this includes around 34 scripts that aren't really documented. Any addition, change etc. has to be followed by contributors to be understood.
It would be nice if we could consolidate those into a single place that also adds documentation. This should put emphasis on common tasks as well as guiding contributors: What script can I use to perform X task?
https://github.com/kentcdodds/nps looks very close to what I have in mind (task + description + watch mode in a single place).
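For illustration, an nps `package-scripts.js` along these lines is what I have in mind (the script names and commands are placeholders, not our actual tasks):
```js
// package-scripts.js (illustrative placeholders)
module.exports = {
  scripts: {
    build: {
      script: 'babel ./src --out-dir ./build',
      description: 'Transpile the source into ./build',
    },
    test: {
      default: {
        script: 'jest',
        description: 'Run the unit tests once',
      },
      watch: {
        script: 'jest --watch',
        description: 'Run the unit tests in watch mode',
      },
    },
  },
};
```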
Maybe there are other solutions for large monorepos. In any case it would be nice if we could improve the first contributor experience. | core | low | Major |
437,574,254 | rust | Split up files with `// ignore-tidy-filelength` | These files are over 3,000 lines, which is not ideal for navigating or comprehending. If we can, it would be good to split these up. See also https://github.com/rust-lang/rust/issues/60015, which is a specific issue of this problem.
T-compiler:
- [ ] rustc_parse/src/parser/expr.rs
- [ ] rustc_borrowck/src/diagnostics/conflict_errors.rs
- [ ] rustc_trait_selection/src/error_reporting/traits/suggestions.rs
- [ ] rustc_hir_typeck/src/method/suggest.rs
- [ ] rustc_resolve/src/late.rs
- [ ] rustc_resolve/src/late/diagnostics.rs
T-rustdoc:
- [ ] src/librustdoc/html/static/js/search.js | C-cleanup,T-rustdoc,T-compiler,E-medium,C-tracking-issue | high | Critical |
437,639,954 | rust | Display original source location for doc test failures | I have the following file:
```rust
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this
// file, You can obtain one at http://mozilla.org/MPL/2.0/.
use oraide_parser_miniyaml::{
ParserCtxExt,
ParserCtxStorage,
};
/// Entrypoint into MiniYaml parsing
///
/// Contains inputs and memoized computation results
///
/// # Example
/// ```rust
/// use oraide_parser_miniyaml::{Database,ParserCtx,ParserCtxExt,Tree};
/// let mut db = Database::default();
/// let file_id = db.add_file("example.yaml", "Hello:\n");
/// let tree: Tree = db.file_tree(file_id);
/// ```
#[salsa::database(ParserCtxStorage)]
pub struct Database {
rt: salsa::Runtime<Self>,
}
```
When I run `cargo test` for this package I get the following, valid, failure:
```
failures:
---- src/lib.rs - Database (line 6) stdout ----
error[E0432]: unresolved import `oraide_parser_miniyaml::Database`
--> src/lib.rs:7:30
|
3 | use oraide_parser_miniyaml::{Database,ParserCtx,ParserCtxExt,Tree};
| ^^^^^^^^ no `Database` in the root
thread 'src/lib.rs - Database (line 6)' panicked at 'couldn't compile the test', src/librustdoc/test.rs:354:13
note: Run with `RUST_BACKTRACE=1` environment variable to display a backtrace.
```
Unfortunately the reported error location is from the generated (I assume) code that is actually tested and not the correct location in the original source.
It is indeed the 7th line in the doc but that isn't as helpful as it could be.
Ideally I would see `src/lib.rs:16:30`.
Mapping these locations back to original source location would, IMO, be more useful because we could then see exactly where doc tests are failing (especially in a file with many doc tests) and, in some environments, can ctrl/cmd-click the `src/lib.rs:X:Y` output to take our editor directly to the failure. | T-rustdoc,C-enhancement,A-doctests,E-needs-mcve | low | Critical |
437,661,315 | go | cmd/vet: add unkeyed field literal check to 'go test' set | #2794 proposes requiring keyed (tagged) literals for any imported struct.
`go vet` has a test for this already.
But the check is only on in `go vet`, not in the automatic vet run during `go test`.
I suggest we enable it during `go test` at the start of the Go 1.14 cycle
as an experiment. If it works well there then maybe it would make sense
to promote the restriction to the language itself. And if not, then not.
Or maybe vet alone will be enough. Time will tell.
/cc @robpike @ianlancetaylor @griesemer | NeedsDecision,early-in-cycle,Analysis | low | Minor |
437,680,934 | create-react-app | Non module css doesn't work with :local() in CRA 3.0 | ### Is this a bug report?
Yes
### Did you try recovering your dependencies?
Yes
### Environment
```
Environment Info:
System:
OS: Linux 4.14 Manjaro Linux
CPU: (12) x64 Intel(R) Core(TM) i9-8950HK CPU @ 2.90GHz
Binaries:
Node: 10.15.1 - ~/.nvm/versions/node/v10.15.1/bin/node
Yarn: 1.15.2 - /usr/bin/yarn
npm: 6.4.1 - ~/.nvm/versions/node/v10.15.1/bin/npm
Browsers:
Chrome: Not Found
Firefox: Not Found
npmPackages:
react: ^16.8.6 => 16.8.6
react-dom: ^16.8.6 => 16.8.6
react-scripts: 3.0.0 => 3.0.0
npmGlobalPackages:
create-react-app: Not Found
```
### Steps to Reproduce
1. Have a non module css-file, e.g. MyStyle.css
2. Have a class in said file as a `:local` scoped class, e.g. `:local(.myClass) { ... }`
3. Import CSS file in a React-component, e.g. `import css from './MyStyle.css'`
4. Observe that the classname is not exported
### Expected Behavior
That the css class has been given a random name and is available as a property on the css object, e.g. `{ myClass: "_2uhJkdnDbAceZ0gQC-r3vz" }`
### Actual Behavior
Object was empty: `{}`
### Reproducible Demo
Source:
https://github.com/almyy/local-classname-repro
Deployed example:
https://almyy.github.io/local-classname-repro/
| issue: needs investigation | medium | Critical |
437,715,829 | go | internal/singleflight: delete, use golang.org/x/sync/singleflight | The singleflight package started life in internal/singleflight and then moved to golang.org/x/sync/singleflight, and the two have had slightly divergent development histories since. It's getting kinda messy. Their APIs have even somewhat diverged.
I just tried to re-unify them and delete internal/singleflight from std, but we can't use mod/vendor packages during bootstrap.
/cc @ianlancetaylor | help wanted,NeedsFix | low | Major |
437,737,705 | rust | Compiler error when I remove unreachable code | Consider the following minimal example ([playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=5fd9665e0b90ff589f07af468da21d98)):
```rust
use futures::{Future, future::ok};
fn main() {
ok(()).then(|_a: Result<(), ()>| {
std::process::exit(1);
Ok(()) as Result<(), ()>
}).wait().unwrap();
}
```
It compiles successfully but warns me that line 6 (`Ok(()) as Result<(), ()>`) is unreachable. That is correct, because `std::process::exit` is a diverging function. But when I remove that line, compilation fails with the following error:
```
error[E0277]: the trait bound `(): futures::future::Future` is not satisfied
```
So I am forced to include unreachable code to work around the error. | C-enhancement,A-lints,A-diagnostics,T-lang,T-compiler,D-confusing | low | Critical |
437,738,774 | go | net: DNS load balancing during bursts of traffic | Right now inflight DNS lookups are [grouped together](https://github.com/golang/go/blob/master/src/net/lookup.go#L267).
Since DNS resolution picks the [first available IP](https://github.com/golang/go/blob/master/src/net/dial.go#L417), this has the side effect of resolving the same IP when there are bursts in traffic.
This was surprising to me since I expected load to be distributed amongst the resolved hosts, especially during these bursts (assuming equal RFC 6724 weighting).
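Today the only way I see to get that distribution is to shuffle by hand in the dialer, something like this sketch (the host and port are placeholders):
```go
package main

import (
	"context"
	"math/rand"
	"net"
)

// dialRandom resolves all addresses for host and dials one at random,
// instead of always taking the first address in sorted order.
func dialRandom(ctx context.Context, host, port string) (net.Conn, error) {
	addrs, err := net.DefaultResolver.LookupIPAddr(ctx, host)
	if err != nil {
		return nil, err
	}
	ip := addrs[rand.Intn(len(addrs))].IP
	var d net.Dialer
	return d.DialContext(ctx, "tcp", net.JoinHostPort(ip.String(), port))
}

func main() {
	conn, err := dialRandom(context.Background(), "example.com", "80")
	if err != nil {
		panic(err)
	}
	conn.Close()
}
```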
Load distribution via DNS seems like a common use case/expectation - it would be nice to be able to enable randomized DNS resolution. Being able to disable DNS query grouping or customize the way the group key is generated would also work. | NeedsInvestigation,FeatureRequest | low | Minor |
437,773,924 | kubernetes | Finalizing objects not reflected in CLI | Assume I have an object, e.g. a Service, which has a finalizer. When I delete that object, it correctly waits for the finalizer to be removed. In that intervening period, I can not see any indication that the object is being deleted. It does not show up in `kubectl describe`. It is not an event. If it gets stuck, I have NO indication that it is being deleted or if I know that, why it is not complete.
**What would you like to be added**:
Some consistent way to see that things are "currently being deleted, waiting for finalizers X, Y, Z" | kind/feature,sig/cli,lifecycle/frozen | low | Major |
437,787,578 | TypeScript | noUnusedLocals throws errors for "never" type. |
**Search Terms:**
nounusedlocals never
nounusedlocals
never
never variable is not used error
**Code**
```ts
//NOTE: nounusedlocals is set to true in tsconfig.
enum Color {
A,
B,
}
function f(color: Color) {
switch(color) {
case Color.A:
console.log("White");
break;
case Color.B:
console.log("Black");
break;
default:
const _:never = color;
}
}
```
**Expected behavior:**
Compiles fine, prints nothing
**Actual behavior:**
An error is reported that the variable `_` is unused. `_` is of type `never`, which means it is expected never to be used; it is declared purely as a compile-time safety check to ensure that the switch covers all the Color enum values.
| Suggestion,Awaiting More Feedback | low | Critical |
437,819,519 | TypeScript | TS with .js files and JSDoc: type is found, displayed and used - despite "TS2304: Cannot find name XYZ" | TS 3.4.5, Webstorm 2019.1.1, node v11.14.0, Linux
Project: .ts files in src/, transpiled using Babel to lib/, *but files under test/ are .js files*
tsconfig.json has `allowJs` and `checkJs` set to `true` in order to check the .js files under test/ — which (CommonJS) `require(...)` transpiled .js files from lib/ (there also are .d.ts and .d.ts.map files for all files).
Every single TS type declared in src/ has an accompanying JSDoc `@typedef` (don't know if that is important or if TS does not even use it since there is the original TS type too).
The problem I see is a weird one, because what TypeScript says and what it actually does are complete opposites!
In the IDE I see every single type declared in src/ files marked red, and on mouseover I get `TS2304 Cannot find name "XYZ".`:

**However**, and this is the weird part (1 of 2): **It all works!**
Showing the type (in WebStorm, CTRL plus mouse-hover over a symbol) shows the correct information. Even on those red types, I get the type name and the file name of where it is declared. Variables show the correct autocompletion suggestions and type information, and jumping to the type definition (the TS one) works too.
The other weird part (2 of 2) is that when I follow the suggestions to change e.g.
* @param {SHA256Hash} hash
to
* @param {import('../lib/core-types.js').SHA256Hash} hash
the symbols are no longer red — but now nothing works. No more type information or anything. (*Tried different import paths, also directly to the src/ files and without extension, for example.*)
This is the reason why I file this as a bug. It actually works, so it is not a feature request.
So all it needs is for TS to stop showing the "cannot find name" errors; I don't need a change in behavior, since it *does* find the names just fine.
<br>
PS: I realize this is probably deeper, the plugin does not specifically remove the comments, I think it is the `path.remove()` command from the underlying AST. That doesn't make it any better or even right though. [Others had similar issues in the past.](https://github.com/Microsoft/TypeScript/issues/17606)
| Domain: JSDoc,Needs Investigation,checkJs | low | Critical |
437,835,708 | kubernetes | API proxy strips URL query params when proxying websocket connections | Tested on Kubernetes 1.14 (minikube 1.0). To reproduce:
1. deploy pod with a websocket server running:
```bash
kubectl run wsserver --generator=run-pod/v1 --rm -i --tty --image ubuntu:disco -- bash -c \
"apt-get update && apt-get install -y wget && \
wget https://github.com/vi/websocat/releases/download/v1.4.0/websocat_1.4.0_ssl1.1_amd64.deb && \
dpkg -i webso*.deb && \
websocat -vv -s 0.0.0.0:8000"
```
2. in host shell outside minikube run the following websocat command to connect to the websocket server in your wsserver pod:
```bash
websocat --binary --ws-c-uri=wss://192.168.99.100:8443/api/v1/namespaces/default/pods/wsserver:8000/proxy/test?foo=what - ws-c:cmd:'socat - ssl:192.168.99.100:8443,verify=0,cafile=$HOME/.minikube/ca.crt,cert=$HOME/.minikube/client.crt,key=$HOME/.minikube/client.key'
```
3. check the output of the server-side websocat and notice that **the query params are missing**: `[DEBUG websocat::ws_server_peer] Incoming { version: Http11, subject: (Get, AbsolutePath("/test")), headers: ...`
4. exec into your pod with the websocket server, and issue a local websocket connect.
```bash
kubectl exec wsserver websocat ws://localhost:8000/test?foo=what
```
5. check output of the service-side websocat, now **the query params are present as to be expected**: `[DEBUG websocat::ws_server_peer] Incoming { version: Http11, subject: (Get, AbsolutePath("/test?foo=what")), headers: ...`
(note: updated instructions to be much simpler now) | sig/api-machinery | high | Critical |
437,879,733 | flutter | "flutter upgrade" on a clean install is way too noisy and doing sketchy things | I downloaded the beta / v1.4.9-hotfix.1 archive.
```
ianh@ianh:~/dev/test-install/flutter$ bin/flutter --version
Flutter 1.4.9-hotfix.1 • channel beta • https://github.com/flutter/flutter.git
Framework • revision 88fa7ea403 (2 weeks ago) • 2019-04-11 14:01:46 -0700
Engine • revision 4737fc5cd8
Tools • Dart 2.2.1 (build 2.2.1-dev.4.0 None)
```
At this time, it's the latest beta. I ran `flutter upgrade`:
```
ianh@ianh:~/dev/test-install/flutter$ bin/flutter upgrade
Upgrading Flutter from /usr/local/google/home/ianh/dev/test-install/flutter...
From https://github.com/flutter/flutter
2427163d5..d1b146355 Hixie-patch-2 -> origin/Hixie-patch-2
6c7b6833c..0ba67226e dev -> origin/dev
6c7b6833c..c15d48e29 master -> origin/master
e9dd13d1a..5ae6952c0 revert-30873-revert-30414-remove-hover-pressure -> origin/revert-30873-revert-30414-remove-hover-pressure
* [new branch] revert-30951-roll_branch -> origin/revert-30951-roll_branch
* [new branch] revert-30991-caretheight -> origin/revert-30991-caretheight
* [new branch] revert-30995-revert_engine -> origin/revert-30995-revert_engine
* [new tag] v1.5.8 -> v1.5.8
* [new tag] v1.5.0 -> v1.5.0
* [new tag] v1.5.1 -> v1.5.1
* [new tag] v1.5.2 -> v1.5.2
* [new tag] v1.5.3 -> v1.5.3
* [new tag] v1.5.4 -> v1.5.4
* [new tag] v1.5.5 -> v1.5.5
* [new tag] v1.5.6 -> v1.5.6
* [new tag] v1.5.7 -> v1.5.7
Updating 16a16e659..88fa7ea40
1 file changed, 1 insertion(+), 1 deletion(-)
Upgrading engine...
Building flutter tool...
Flutter 1.4.9-hotfix.1 • channel beta • https://github.com/flutter/flutter.git
Framework • revision 88fa7ea403 (2 weeks ago) • 2019-04-11 14:01:46 -0700
Engine • revision 4737fc5cd8
Tools • Dart 2.2.1 (build 2.2.1-dev.4.0 None)
Running flutter doctor...
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel beta, v1.4.9-hotfix.1, on Linux, locale en_US.UTF-8)
[!] Android toolchain - develop for Android devices (Android SDK version 28.0.3)
! Some Android licenses not accepted. To resolve this, run: flutter doctor --android-licenses
[!] Android Studio (version 2.3)
✗ Flutter plugin not installed; this adds Flutter specific functionality.
✗ Dart plugin not installed; this adds Dart specific functionality.
[✓] IntelliJ IDEA Community Edition (version 2017.3)
[!] Connected device
! No devices available
! Doctor found issues in 3 categories.
```
There's all manner of problems here.
- [ ] Why did the tool rebuild?
- [ ] What does `Updating 16a16e659..88fa7ea40` mean? (`git stash` output, I expect)
- [ ] What does `1 file changed, 1 insertion(+), 1 deletion(-)` mean? (`git stash` output, I expect, but this should be a clean install with no changes)
- [ ] We really should consider silencing the `git fetch` call.
- [ ] It tells me it's `Upgrading engine` but actually it didn't do anything. | tool,a: first hour,P2,team-tool,triaged-tool | low | Minor |
437,880,705 | flutter | "flutter upgrade" has weird output when upgrading | I ran `flutter upgrade` to update from 1.4.9-hotfix.1 to 1.5.4 on the beta channel. Here is the output.
```
ianh@ianh:~/dev/test-install/flutter$ bin/flutter upgrade
Upgrading Flutter from /usr/local/google/home/ianh/dev/test-install/flutter...
From https://github.com/flutter/flutter
+ 88fa7ea40...b593f5167 beta -> origin/beta (forced update)
Updating 16a16e659..b593f5167
packages/flutter/res/values/strings_en.arb | 0
321 files changed, 9788 insertions(+), 1534 deletions(-)
Upgrading engine...
Downloading Dart SDK from Flutter engine ca31a7c57bada458fa7f5c0d3f36bc1af4ccbc79...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
3 120M 3 4112k 0 0 5298k 0 0:00:23 --:--:-- 0:00:23 5292k
33 120M 33 40.0M 0 0 23.4M 0 0:00:05 0:00:01 0:00:04 23.4M
92 120M 92 111M 0 0 41.5M 0 0:00:02 0:00:02 --:--:-- 41.5M
100 120M 100 120M 0 0 43.4M 0 0:00:02 0:00:02 --:--:-- 43.4M
Building flutter tool...
Downloading android-arm-profile/linux-x64 tools... 0.6s
Downloading android-arm-release/linux-x64 tools... 0.6s
Downloading android-arm64-profile/linux-x64 tools... 0.5s
Downloading android-arm64-release/linux-x64 tools... 0.4s
Downloading android-arm-dynamic-profile/linux-x64 tools... 0.4s
Downloading android-arm-dynamic-release/linux-x64 tools... 0.4s
Downloading android-arm64-dynamic-profile/linux-x64 tools... 0.5s
Downloading android-arm64-dynamic-release/linux-x64 tools... 0.4s
Downloading android-x86 tools... 1.3s
Downloading android-x64 tools... 1.6s
Downloading android-arm tools... 0.8s
Downloading android-arm-profile tools... 0.7s
Downloading android-arm-release tools... 0.5s
Downloading android-arm64 tools... 0.8s
Downloading android-arm64-profile tools... 0.7s
Downloading android-arm64-release tools... 0.6s
Downloading android-arm-dynamic-profile tools... 0.7s
Downloading android-arm-dynamic-release tools... 0.5s
Downloading android-arm64-dynamic-profile tools... 0.9s
Downloading android-arm64-dynamic-release tools... 0.7s
Downloading package sky_engine... 0.3s
Downloading common tools... 2.2s
Downloading common tools... 0.9s
Downloading linux-x64 tools... 1.6s
Flutter 1.4.9-hotfix.1 • channel beta • https://github.com/flutter/flutter.git
Framework • revision 88fa7ea403 (4 days ago) • 2019-04-22 07:51:33 -0700
Engine • revision ca31a7c57b
Tools • Dart 2.2.1 (build 2.2.1-dev.4.0 None)
Running flutter doctor...
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel beta, v1.5.4, on Linux, locale en_US.UTF-8)
[!] Android toolchain - develop for Android devices (Android SDK version 28.0.3)
! Some Android licenses not accepted. To resolve this, run: flutter doctor --android-licenses
[!] Android Studio (version 2.3)
✗ Flutter plugin not installed; this adds Flutter specific functionality.
✗ Dart plugin not installed; this adds Dart specific functionality.
[✓] IntelliJ IDEA Community Edition (version 2017.3)
[!] Connected device
! No devices available
! Doctor found issues in 3 categories.
```
This part in particular is confusing:
```
Updating 16a16e659..b593f5167
packages/flutter/res/values/strings_en.arb | 0
321 files changed, 9788 insertions(+), 1534 deletions(-)
```
(The other confusing thing is https://github.com/flutter/flutter/issues/22254.) | tool,P2,team-tool,triaged-tool | low | Minor |
437,882,836 | flutter | BackdropFilter doesn't work as child of Opacity or other Filter widgets (was iOS dialogs fade in without blur) | I'm on Android, using the Gallery, Cupertino / Alerts demo. When I tap a button, the dialogs fade in, but they don't blur the background until the animation is over, at which point the blur effect pops in.
Testing with v1.5.4.
cc @xster | framework,engine,a: animation,dependency: skia,a: fidelity,f: cupertino,has reproducible steps,P2,team-design,triaged-design,found in release: 3.19,found in release: 3.22 | low | Critical |
437,886,151 | go | database/sql: connection pool was originally FIFO, is now random, but should be LIFO | The previous SQL connection pool behavior used a FIFO queue implemented as a slice, but in https://github.com/golang/go/commit/4f6d4bb3f4461e7e25eff24254115b689495e834 was changed as a side effect to read the first entry in map recurse order (effectively random among requests not yet timed out when execution started).
I believe we can do much better than this -- we should go back to dequeuing the pending requests in an ordered fashion, while continuing to not leak cancelled requests. And we should be able to instrument to see whether people are getting better performance out of LIFO, FIFO, or random.
See also: https://github.com/golang/go/issues/22697 talks about greater pool observability and configurability
See also: https://github.com/golang/go/issues/18080 talks about observability as well (Honeycomb has implemented our version of passing contexts into wrapped SQL calls, but loses visibility once they reach the SQL layer)
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.11.6 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes.
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/lizf/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/lizf/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/lib/go-1.11"
GOTMPDIR=""
GOTOOLDIR="/usr/lib/go-1.11/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD="/home/lizf/go/src/github.com/honeycombio/hound/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build473464966=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
Saturate our connection pool with a higher rate of incoming requests than it could temporarily handle after our database got slow. Almost no requests succeeded because we picked random requests from the pool to work on, many of which didn't have enough remaining time to complete without expiring after being pulled off the pending map.
### What did you expect to see?
If we have requests coming in 10% faster than they can be fulfilled, 90% of them should succeed and only 10% should time out. And we should have a separate waitDuration counter for successful requests that were dequeued and started running, vs. cancelled requests that timed out before even starting to be serviced.
### What did you see instead?
~100% of connections timed out because most of the connections in the pool became dead while being serviced, and random selection wasn't trying first the requests most likely to succeed. and we couldn't tell why, because the waitDuration of the successful requests wasn't separated from the failed requests. | NeedsDecision | low | Critical |
437,889,504 | flutter | Support embedding native views in macOS shell | We'll need support for embedding `NSView`s in Flutter views (e.g., for embedding a web view, as on mobile). | c: new feature,engine,platform-mac,a: desktop,P2,team-macos,triaged-macos | low | Major |
437,889,730 | flutter | Implement PlatformView support on Windows | We'll need support for embedding native views in Flutter views (e.g., for embedding a web view, as on mobile).
~Obviously this depends on having a view-based Windows shell first (#30726).~ | c: new feature,engine,platform-windows,customer: crowd,a: platform-views,a: desktop,P2,team-windows,triaged-windows | low | Critical |
437,894,163 | three.js | Geometries are not de-serialized in all cases | ##### Description of the problem
Currently, there are geometries in THREE JS src that are not de-serializable.
- `EdgesGeometry`
- ~~`ParametricGeometry`~~
- ~~`TextGeometry`~~
- ~~`InstancedBufferGeometry`~~
- `WireframeGeometry`
There are 2 main reasons for this issue.
- `ObjectLoader` doesn't handle these types. Every geometry has a `type` property which is the name of the geometry. `ObjectLoader` uses this to determine how to de-serialize the JSON.
- Geometries are serialized differently if they contain a `parameters` property.
Based on @Mugen87's [comment](https://github.com/mrdoob/three.js/issues/16026#issuecomment-475947408), it sounds like increasing the complexity of `ObjectLoader` is undesirable. However, these geometries are currently serialized in a way where they can't be de-serialized.
My suggestion would be one of the following:
1. Make these geometries serialize as normal `BufferGeometry` and send a warning saying such.
```javascript
EdgesBufferGeometry.prototype.toJSON = function () {
console.warn("EdgesBufferGeometry is not serializable. It will be serialized as BufferGeometry.");
var parameters = this.parameters;
this.parameters = undefined;
var data = BufferGeometry.prototype.toJSON.call( this );
this.parameters = parameters;
return data;
};
```
2. Make these geometries not serializable and send an error saying such.
#### Example
```javascript
EdgesBufferGeometry.prototype.toJSON = function () {
console.error("EdgesBufferGeometry is not serializable. Convert this geometry to BufferGeometry and call .toJSON() on that");
};
```
3. Make `ObjectLoader` handle these geometries
Related #16026, #16087, #14357
##### Three.js version
- [x] Dev
- [x] r104
- [ ] ...
##### Browser
- [x] All of them
- [ ] Chrome
- [ ] Firefox
- [ ] Internet Explorer
##### OS
- [x] All of them
- [ ] Windows
- [ ] macOS
- [ ] Linux
- [ ] Android
- [ ] iOS | Suggestion | low | Critical |
437,895,886 | vscode | [themes] Support for hue, saturation, lightness (HSL) color codes for theming | Support for hue, saturation, lightness (HSL) color codes for theme colors would allow for easier iteration in theme design by providing a more human readable format for color codes.
I find it much easier to modify color values using HSL than it is with hex codes because the values being modified more closely model how humans think about color rather than how the computer renders color.
Example:
```jsonc
{
"colors": {
"editor.background": "hsla(301, 46%, 9%, 1)", // #220c21,
}
}
``` | feature-request,themes | low | Major |
437,900,105 | go | cmd/go: build: add -buildmode=nolink flag | My understanding of the "go build" process is it does something like this:
- Figure out all of the dependencies of the package.
- For each dependency, determine whether it has changed since the last time it was built, by checking the `.a` file or equivalent in `$GOPATH/pkg`.
- Compile them, in parallel if possible
- Generate the new artifact (a `.a` file or similar) that we determined in step 2 was out of date.
- Link them all into a single binary.
I use `vim` to edit Go files. The most popular plugin, `vim-go`, recommends checking compilation by running `:GoBuild`. I do this frequently to ensure I have syntax correct, imports correct, a working Go program.
`:GoBuild` changes directory to the directory containing the file you are editing, then runs
```
go build -tags '' . errors
```
It builds the "errors" package because vim-go desperately does _not_ want to build any final artifacts, and attempting to build multiple packages turns off the binary building behavior.
However, I notice that this never takes advantage of build caching. That is, if I run `:GoBuild` and then run `:GoBuild` again immediately without making changes, it takes 4 seconds on the package I am compiling. If I run `go build .` to compile an artifact, it is much faster, about 600ms on the second run.
Is there a way to make "go build" take advantage of whatever intermediate build steps exist - the `.a` files from above, or their equivalents - even if it does not produce a final binary?
Alternatively, can you work with the `vim-go` maintainers to recommend a different tool for checking a package (and/or test package) contains valid Go syntax and identifiers? | Proposal,NeedsInvestigation,FeatureRequest,GoCommand | low | Critical |
437,943,465 | youtube-dl | [Site Support Request] TNT GO |
## Checklist
- [x] I'm reporting a new site support request
- [x] I've verified that I'm running youtube-dl version **2019.04.24**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that none of provided URLs violate any copyrights
- [x] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
- Single video: http://www.tntgo.tv.br/video/astro-uma-fabula-urbana-em-um-rio-de-janeiro-magico
## Description
Please, add support for this website
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['--cookies', 'Documents\\cookies.txt', 'http://www.tntgo.tv.br/video/astro-uma-fabula-urbana-em-um-rio-de-janeiro-magico', '-F', '-v']
[debug] Encodings: locale cp1252, fs mbcs, out cp850, pref cp1252
[debug] youtube-dl version 2019.04.24
[debug] Python version 3.4.4 (CPython) - Windows-10-10.0.17134
[debug] exe versions: ffmpeg N-93022-g260f1960e7, ffprobe N-93022-g260f1960e7
[debug] Proxy map: {}
[generic] astro-uma-fabula-urbana-em-um-rio-de-janeiro-magico: Requesting header
WARNING: Falling back on generic information extractor.
[generic] astro-uma-fabula-urbana-em-um-rio-de-janeiro-magico: Downloading webpage
[generic] astro-uma-fabula-urbana-em-um-rio-de-janeiro-magico: Extracting information
ERROR: Unsupported URL: http://www.tntgo.tv.br/video/astro-uma-fabula-urbana-em-um-rio-de-janeiro-magico
Traceback (most recent call last):
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmp1l3rbf78\build\youtube_dl\YoutubeDL.py", line 796, in extract_info
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmp1l3rbf78\build\youtube_dl\extractor\common.py", line 529, in extract
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmp1l3rbf78\build\youtube_dl\extractor\generic.py", line 3320, in _real_extract
youtube_dl.utils.UnsupportedError: Unsupported URL: http://www.tntgo.tv.br/video/astro-uma-fabula-urbana-em-um-rio-de-janeiro-magico
| site-support-request,account-needed | low | Critical |
437,953,839 | flutter | Expanded hitTest area | It would be useful to be able to expand the hit test area of a widget. Often, the size of a widget can't be increased without ugly hacks or at all, while the current size is too small to make it easily tappable.
The code below contains two examples where this would be useful (and a failed attempt to achieve this). In the first, the red container might be the thumb of a slider at the bottom of an app bar or a handle at the top of a panel for resizing it. In the second case, the button with "<" is inside several widgets to align it with "Title" below it. (In our code, there are several more layout widgets between the padding and "<" to handle animations.) In both cases, increasing the size of the widget inside the gesture detector, while keeping the layout visually the same, would lead to an awkward layout.
The `ExpandedHitTestArea` widget is an attempt to achieve such an expansion without affecting the size. However, it doesn't work, because parent widgets don't hit test their children when the hit point is outside the parent's own bounds.
```dart
import 'package:flutter/cupertino.dart';
import 'package:flutter/material.dart';
import 'package:flutter/widgets.dart';
import 'package:flutter/rendering.dart';
void main() => runApp(MyApp());
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) => MaterialApp(home: HitTest());
}
class HitTest extends StatelessWidget {
@override
Widget build(BuildContext context) {
return Material(
child: Column(
crossAxisAlignment: CrossAxisAlignment.stretch,
mainAxisAlignment: MainAxisAlignment.spaceAround,
children: [
SizedBox(
width: 100,
height: 100,
child: Container(
alignment: Alignment.bottomCenter,
color: Colors.yellow,
height: 100,
width: 100,
child: GestureDetector(
onTap: () => print("I'm hit! I'm hit!"),
child: ExpandedHitTestArea(
child: Container(width: 20, height: 10, color: Colors.red),
),
),
),
),
Container(
height: 100,
color: Colors.black12,
child: Padding(
padding: const EdgeInsets.all(8.0),
child: Column(
crossAxisAlignment: CrossAxisAlignment.start,
children: <Widget>[
GestureDetector(
onTap: () => print("Tapped"),
child: ExpandedHitTestArea(child: Text("<")),
),
SizedBox(height: 20),
Text("Title"),
],
),
),
),
],
),
);
}
}
class ExpandedHitTestArea extends SingleChildRenderObjectWidget {
const ExpandedHitTestArea({
Key key,
Widget child,
}) : super(key: key, child: child);
@override
RenderObject createRenderObject(BuildContext context) => RenderExpandedHitTestArea();
}
class RenderExpandedHitTestArea extends RenderBox with RenderObjectWithChildMixin<RenderBox> {
// trivial implementations left out to save space: computeMinIntrinsicWidth, computeMaxIntrinsicWidth, computeMinIntrinsicHeight, computeMaxIntrinsicHeight
@override
void performLayout() {
child.layout(constraints, parentUsesSize: true);
size = child.size;
}
@override
void paint(PaintingContext context, Offset offset) {
if (child != null) {
final BoxParentData childParentData = child.parentData;
context.paintChild(child, childParentData.offset + offset);
}
}
@override
bool hitTest(HitTestResult result, {Offset position}) {
const minimalSize = 44;
final deltaX = (minimalSize - size.width).clamp(0, double.infinity) / 2;
final deltaY = (minimalSize - size.height).clamp(0, double.infinity) / 2;
if (Rect.fromLTRB(-deltaX, -deltaY, size.width + deltaX, size.height + deltaY).contains(position)) {
result.add(BoxHitTestEntry(this, position));
return true;
}
return false;
}
}
```
<img width="365" alt="Screenshot 2019-04-27 at 17 19 00" src="https://user-images.githubusercontent.com/4456832/56851540-9adbcc80-6910-11e9-9aff-a5b476107c11.png">
```
[✓] Flutter (Channel unknown, v1.4.18, on Mac OS X 10.14.4 18E226, locale en-NL)
• Flutter version 1.4.18 at /Users/spkersten/Development/.../flutter
• Framework revision 8bea3fb2eb (2 weeks ago), 2019-04-11 13:11:22 -0700
• Engine revision 72986c39ea
• Dart version 2.2.1 (build 2.2.1-dev.3.1 None)
``` | c: new feature,framework,f: material design,customer: crowd,c: proposal,P3,team-design,triaged-design | high | Critical |
437,955,453 | rust | .tar.gz dist files require running `install.sh` before they can be used | If I download Rust using:
~~~
x86_64-pc-windows-gnu.tar.gz
~~~
from here:
https://forge.rust-lang.org/other-installation-methods#standalone-installers
it seems the compiler can't actually compile anything:
~~~
$ cat aaaaa.rs
fn main() {
println!("bbbbb ccccc");
}
$ rustc aaaaa.rs
error[E0463]: can't find crate for `std`
~~~
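For reference, a hedged sketch of the step the title refers to, i.e. unpacking the standalone installer and running its `install.sh` (archive name and prefix are illustrative; `--prefix` is optional):
~~~
$ tar xzf rust-<version>-x86_64-pc-windows-gnu.tar.gz   # exact archive name depends on the release
$ cd rust-<version>-x86_64-pc-windows-gnu
$ ./install.sh --prefix="$HOME/rust"
$ "$HOME/rust/bin/rustc" aaaaa.rs
~~~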
| T-bootstrap,T-infra | low | Critical |
437,961,997 | TypeScript | KeyboardEvent interface implements key as string | **TypeScript Version:** 3.3.1
**Search Terms:**
**Actual Output**

The `key` property is defined as a string
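For illustration, a hedged sketch of the kind of narrowing requested in the next section; the union members below are a tiny illustrative subset of the UI Events `key` values, not a real proposal for lib.d.ts:
```ts
// Illustrative subset only; the real list of `key` values is much longer.
type KnownKey = "Enter" | "Escape" | "Tab" | "ArrowUp" | "ArrowDown" | " ";

// Keeps autocompletion for known keys while still accepting arbitrary strings.
type Key = KnownKey | (string & {});

function onKeyDown(event: KeyboardEvent): void {
    const key = event.key as Key;
    if (key === "Escape") {
        console.log("close the dialog");
    }
}
```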
**Expected Output**
The `key` property should be an enum of keys. | Suggestion,Needs Proposal,Domain: lib.d.ts | low | Critical |
437,962,059 | opencv | Core.NATIVE_LIBRARY_NAME and JNI .so file names should be identical | ##### System information (version)
- OpenCV => 4.1.0
- Operating System / Platform => Android
- Compiler => Android Studio
##### Detailed description
Core.NATIVE_LIBRARY_NAME and the JNI .so file name should be identical.
##### Steps to reproduce
1. compile opencv-sdk with jni libs (libopencv_java4.so files)
2. `static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); }` at begin of class
3. Run opencv program.
An exception occurs:
a long exception description, ending with: `couldn't find "libopencv_java410.so"`
Core.NATIVE_LIBRARY_NAME == "libopencv_java410"
jni.files = "libopencv_java4.so"
If I change to `static { System.loadLibrary("libopencv_java4"); }`
Everything is working.
Attached is the build.gradle I created to compile the SDK.
[build.gradle.txt](https://github.com/opencv/opencv/files/3124106/build.gradle.txt)
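As a stop-gap, a hedged sketch of a loader that falls back when the constant does not match the packaged library (the fallback name assumes the usual `lib<name>.so` convention and may need adjusting):
```java
import org.opencv.core.Core;

public final class OpenCvLoader {
    static {
        try {
            // Try the generated constant first.
            System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        } catch (UnsatisfiedLinkError e) {
            // Fall back to the base name of the .so that is actually packaged
            // (libopencv_java4.so in this report); adjust if your packaging differs.
            System.loadLibrary("opencv_java4");
        }
    }

    private OpenCvLoader() {}
}
```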
| bug,category: build/install,category: java bindings | low | Critical |
437,962,287 | go | errors: examples need improvement | There is a total of 4 examples in the package. Two are for `New`, one doesn't use the package at all, and one incorrectly uses `As` by dropping the error on the floor if it's not a `os.PathError`.
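For contrast with the package example quoted just below, a hedged sketch of a fuller example that does not drop the error when it is not a `*os.PathError` (illustrative only, not proposed doc text):
```go
package main

import (
	"errors"
	"fmt"
	"log"
	"os"
)

func main() {
	_, err := os.Open("non-existing")
	if err != nil {
		var pathError *os.PathError
		if errors.As(err, &pathError) {
			fmt.Println("Failed at path:", pathError.Path)
		} else {
			log.Fatal(err) // handle errors that are not *os.PathError instead of dropping them
		}
	}
}
```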
```
_, err := os.Open("non-existing")
if err != nil {
var pathError *os.PathError
if errors.As(err, &pathError) {
fmt.Println("Failed at path:", pathError.Path)
}
}
``` | Documentation,NeedsFix | low | Critical |
437,978,852 | pytorch | Performance issue master (a25b79531) | ## 🐛 Bug
Running the mnist example from https://github.com/pytorch/examples takes more than twice as long as usual.
## To Reproduce
I have built the pytorch package from the repo ( BUILD_TORCH=ON python setup.py bdist_wheel).
I also have modified the num_workers option of the mnist example script from 1 to 8.
A lot of time is spent on iomp functions (see figure):

Setting the environment variable OMP_NUM_THREADS=1 brings the time back to the usual values on this machine.
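For anyone else hitting this, a sketch of that workaround in command form (assuming the mnist example's `main.py`):
```
# Workaround, not a fix: pin OpenMP to a single thread for this run.
OMP_NUM_THREADS=1 python main.py
```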
## Expected behavior
To run at the usual speed without needing to change the OMP_NUM_THREADS environment variable.
## Environment
PyTorch version: 1.1.0a0+a25b795
Is debug build: No
CUDA used to build PyTorch: 10.1.105
OS: Fedora release 29 (Twenty Nine)
GCC version: (GCC) 8.3.1 20190223 (Red Hat 8.3.1-2)
CMake version: version 3.14.2
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.1.105
GPU models and configuration: GPU 0: GeForce GTX 1080
Nvidia driver version: 418.56
cuDNN version: /usr/local/cuda-10.1/targets/x86_64-linux/lib/libcudnn.so.7.5.0
Versions of relevant libraries:
[pip3] numpy==1.16.2
[pip3] numpydoc==0.8.0
[pip3] torch==1.1.0a0+a25b795
[pip3] torch-nightly==1.1.0.dev20190423
[pip3] torchtext==0.3.1
[pip3] torchvision==0.2.3a0+0c36735
[pip3] torchvision-nightly==0.2.3
[conda] blas 1.0 mkl
[conda] mkl 2019.3 199
[conda] mkl-service 1.1.2 py37he904b0f_5
[conda] mkl_fft 1.0.10 py37ha843d7b_0
[conda] mkl_random 1.0.2 py37hd81dba3_0
[conda] torch-nightly 1.1.0.dev20190423 pypi_0 pypi
[conda] torchtext 0.3.1 pypi_0 pypi
[conda] torchvision-nightly 0.2.3 pypi_0 pypi
## Additional context
Issue #18398 is still occurring. I have copied the relevant libraries to a folder in the LD_LIBRARY_PATH. | module: performance,module: cpu,triaged,module: multithreading | low | Critical |
437,980,175 | pytorch | vectorized convert_to_int_of_same_size <int64_t> can't handle nan | The relevant code is at
https://github.com/pytorch/pytorch/blob/1071e92335d8a69107f4694f56bebbb9655371aa/aten/src/ATen/cpu/vec256/vec256.h#L115-L125
This can't handle nan, inf, or other large (small) values. We should error (e.g., trigger fpe?) in here. Triggering FPE is what `convert_to_int_of_same_size<int32_t>` does, according to https://scc.ustc.edu.cn/zlsc/sugon/intel/compiler_c/main_cls/intref_cls/common/intref_avx_cvttps_epi32.htm .
This manifests in the following script with `grid_sample`.
Original issue description:
## 🐛 Bug
The CPU version of grid_sample gets a segmentation fault under some circumstances.
## To Reproduce
Run this:
```
import torch
import torch.nn.functional as F
img = torch.empty([1, 3, 123, 456], dtype=torch.float32)
offset = torch.zeros([1, 123, 456, 2], dtype=torch.float32, requires_grad=True)
optimizer = torch.optim.SGD([offset], lr=0.0005, momentum=0.9)
for i in range(200):
optimizer.zero_grad()
warped = F.grid_sample(img, offset, mode='bilinear',padding_mode="border")
loss = torch.mean(torch.sqrt(torch.abs(offset)))
loss.backward()
optimizer.step()
print('{:04d} {:.6f}, '.format(i, loss.item()))
```
## Environment
PyTorch version: 1.0.0
Is debug build: No
CUDA used to build PyTorch: 10.0.130
OS: Ubuntu 18.04.2 LTS
GCC version: (Ubuntu 7.3.0-27ubuntu1~18.04) 7.3.0
CMake version: version 3.10.2
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: GeForce GTX 1080 Ti
Nvidia driver version: 410.93
cuDNN version: Could not collect
Versions of relevant libraries:
[pip3] numpy==1.16.1
[pip3] torch==1.0.0
[pip3] torchfile==0.1.0
[pip3] torchvision==0.2.1
[pip3] torchviz==0.0.1
[conda] Could not collect
## Additional context
GDB backtrace:
```
(gdb) run
Starting program: /usr/bin/python3.6 bug.py
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[New Thread 0x7ffff39dc700 (LWP 21382)]
[New Thread 0x7ffff31db700 (LWP 21383)]
[New Thread 0x7ffff09da700 (LWP 21384)]
[New Thread 0x7fffec1d9700 (LWP 21385)]
[New Thread 0x7fffe99d8700 (LWP 21386)]
[New Thread 0x7fffe71d7700 (LWP 21387)]
[New Thread 0x7fffe49d6700 (LWP 21388)]
[Thread 0x7fffe49d6700 (LWP 21388) exited]
[Thread 0x7fffe71d7700 (LWP 21387) exited]
[Thread 0x7fffe99d8700 (LWP 21386) exited]
[Thread 0x7fffec1d9700 (LWP 21385) exited]
[Thread 0x7ffff09da700 (LWP 21384) exited]
[Thread 0x7ffff31db700 (LWP 21383) exited]
[Thread 0x7ffff39dc700 (LWP 21382) exited]
(gdb) [New Thread 0x7fffe49d6700 (LWP 21392)]
[New Thread 0x7fffe71d7700 (LWP 21393)]
[New Thread 0x7fffe99d8700 (LWP 21394)]
0000 0.000000,
Thread 1 "python3.6" received signal SIGSEGV, Segmentation fault.
0x00007fff9823141f in std::enable_if<((((4l)==(1))||((4l)==(2)))||((4l)==(4)))||((4l)==(8)), at::vec256::(anonymous namespace)::Vec256<float> >::type at::vec256::(anonymous namespace)::mask_gather<4l, float>(at::vec256::(anonymous namespace)::Vec256<float> const&, float const*, at::vec256::(anonymous namespace)::Vec256<at::vec256::(anonymous namespace)::int_of_size<sizeof (float)>::type> const&, at::vec256::(anonymous namespace)::Vec256<float>&) [clone .constprop.166] () from /usr/local/lib/python3.6/dist-packages/torch/lib/libcaffe2.so
(gdb) backtrace
#0 0x00007fff9823141f in std::enable_if<((((4l)==(1))||((4l)==(2)))||((4l)==(4)))||((4l)==(8)), at::vec256::(anonymous namespace)::Vec256<float> >::type at::vec256::(anonymous namespace)::mask_gather<4l, float>(at::vec256::(anonymous namespace)::Vec256<float> const&, float const*, at::vec256::(anonymous namespace)::Vec256<at::vec256::(anonymous namespace)::int_of_size<sizeof (float)>::type> const&, at::vec256::(anonymous namespace)::Vec256<float>&) [clone .constprop.166] () from /usr/local/lib/python3.6/dist-packages/torch/lib/libcaffe2.so
#1 0x00007fff9824d7a1 in at::native::(anonymous namespace)::grid_sampler_2d_cpu_kernel_impl(at::Tensor const&, at::Tensor const&, long, long)::{lambda()#1}::operator()() const::{lambda()#2}::operator()() const ()
from /usr/local/lib/python3.6/dist-packages/torch/lib/libcaffe2.so
#2 0x00007fff9824ee10 in at::native::(anonymous namespace)::grid_sampler_2d_cpu_kernel_impl(at::Tensor const&, at::Tensor const&, long, long) () from /usr/local/lib/python3.6/dist-packages/torch/lib/libcaffe2.so
#3 0x00007fff979d5c3d in at::native::grid_sampler_2d_cpu(at::Tensor const&, at::Tensor const&, long, long) () from /usr/local/lib/python3.6/dist-packages/torch/lib/libcaffe2.so
#4 0x00007fff97b97954 in at::CPUFloatType::grid_sampler_2d(at::Tensor const&, at::Tensor const&, long, long) const () from /usr/local/lib/python3.6/dist-packages/torch/lib/libcaffe2.so
#5 0x00007fff95b3b8cb in torch::autograd::VariableType::grid_sampler_2d(at::Tensor const&, at::Tensor const&, long, long) const () from /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch.so.1
#6 0x00007fff979d6780 in at::native::grid_sampler(at::Tensor const&, at::Tensor const&, long, long) () from /usr/local/lib/python3.6/dist-packages/torch/lib/libcaffe2.so
#7 0x00007fff97c4f344 in at::TypeDefault::grid_sampler(at::Tensor const&, at::Tensor const&, long, long) const () from /usr/local/lib/python3.6/dist-packages/torch/lib/libcaffe2.so
#8 0x00007fff95b550e3 in torch::autograd::VariableType::grid_sampler(at::Tensor const&, at::Tensor const&, long, long) const () from /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch.so.1
#9 0x00007ffff2d51e78 in torch::autograd::THPVariable_grid_sampler(_object*, _object*, _object*) () from /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so
#10 0x00000000005030d5 in ?? ()
#11 0x0000000000506859 in _PyEval_EvalFrameDefault ()
#12 0x0000000000504c28 in ?? ()
#13 0x0000000000502540 in ?? ()
#14 0x0000000000502f3d in ?? ()
#15 0x0000000000507641 in _PyEval_EvalFrameDefault ()
#16 0x0000000000504c28 in ?? ()
#17 0x0000000000506393 in PyEval_EvalCode ()
#18 0x0000000000634d52 in ?? ()
#19 0x0000000000634e0a in PyRun_FileExFlags ()
#20 0x00000000006385c8 in PyRun_SimpleFileExFlags ()
#21 0x000000000063915a in Py_Main ()
#22 0x00000000004a6f10 in main ()
``` | module: cpu,module: error checking,triaged,module: NaNs and Infs | low | Critical |
437,987,605 | puppeteer | waitForSelector with visible:true not returning the first visible element, causes timeout. | ### Steps to reproduce
**Tell us about your environment:**
* Puppeteer version: 0.14.0
* Platform / OS version: MacOS (Dockerized in node:10)
* URLs (if applicable): https://j4q389wzv3.codesandbox.io/
* Node.js version: v10.15.3
**What steps will reproduce the problem?**
Open Website OR use this file:
```
<html>
<head>
<meta charset="UTF-8" />
</head>
<body>
<input style="display: none" type="number" class="my-input-class" />
<input type="number" class="my-input-class" />
</body>
</html>
```
```
const browser = await puppeteer.launch();
const page = await browser.newPage();
await page.goto('https://j4q389wzv3.codesandbox.io/');
try {
const visibleInput = await page.waitForSelector('.my-input-class', {visible: true, timeout: 1000 })
console.log('Found visible Element')
} catch (e) {
console.log('Could NOT find a visible element ', e.message)
}
const inputs = await page.$$('.my-input-class')
console.log(`Found ${inputs.length} inputs`)
await browser.close()
```
**What is the expected result?**
There are 2 elements matching the CSS selector on the page. The first one is hidden, the second one is visible. `page.waitForSelector` with `{visible: true}` should have found and returned the visible element on the page.
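In the meantime, a hedged workaround sketch: explicitly wait until a visible match exists with `waitForFunction` (the `offsetParent` check is a simplistic visibility test that only covers display:none-style hiding):
```
// Selector and timeout taken from the repro above.
await page.waitForFunction(
  selector =>
    Array.from(document.querySelectorAll(selector)).some(
      el => el.offsetParent !== null
    ),
  { timeout: 1000 },
  '.my-input-class'
);
console.log('Found a visible element');
```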
**What happens instead?**
It times out: another HIDDEN element matching the CSS selector exists higher up in the DOM, so the wait runs until the timeout even though a visible element exists on the page. | feature,confirmed | medium | Critical |
437,994,471 | vscode | Source Control: "Accept All Incoming" for "deleted by them" should delete the file from working copy and stage the change | When:

Actual:
- warning message: No merge conflicts found in this file

Expected:
- delete file from working copy
- stage the change (`git add`)
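For reference, a hedged sketch of the equivalent manual commands (path is illustrative):
```
git rm path/to/deleted-by-them-file   # removes the file from the working copy and stages the deletion
```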
VS Code 1.33.1 | feature-request,merge-conflict | medium | Major |
438,013,596 | go | crypto/rsa: doc: reword "coprime to" in doc comments | I found this paragraph in the `crypto/rsa` docs:
https://github.com/golang/go/blob/master/src/crypto/rsa/rsa.go#L187-L191
```
// Check that de ≡ 1 mod p-1, for each prime.
// This implies that e is coprime to each p-1 as e has a multiplicative
// inverse. Therefore e is coprime to lcm(p-1,q-1,r-1,...) =
// exponent(ℤ/nℤ). It also implies that a^de ≡ a mod p as a^(p-1) ≡ 1
// mod p. Thus a^de ≡ a mod n for all a coprime to n, as required.
```
It is a mistake to say `x` is _coprime to_ `y`, because _co_ and _to_ convey the same thing. Numbers `x` and `y` can be coprime, but we say `x` is _prime to_ `y`.
This is explicitly stated by Stewart in _Galois Theory_, 3rd Edition (ISBN 1-58488-393-6), in Definition 3.14 on the topic of coprimes. | ExpertNeeded,Documentation,NeedsDecision | low | Major |
438,018,375 | vscode | [folding] Support flags on folding.markers RegExps | A `language-configuration.json` file can supply a pair of regular expressions `start` and `end` [(doc)](https://code.visualstudio.com/api/language-extensions/language-configuration-guide#folding). At the moment there's no way to specify regexp flags, for example to make the matches case-insensitive.
Previous enhancements tackled a similar issue for `wordPattern` and `indentationRules` by allowing their regexps to be objects with `pattern` and `flags` properties.
Please enhance languageConfigurationExtensionPoint.ts [(code pointer)](https://github.com/Microsoft/vscode/blob/a47406b9c8aeed71e2b26ee41e0f275ead87da04/src/vs/workbench/contrib/codeEditor/browser/languageConfigurationExtensionPoint.ts#L313) to support `pattern` and `flags` on `start` and `end`. See also https://github.com/Microsoft/vscode/issues/27591#issuecomment-305175307
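For illustration, a hypothetical sketch of what the requested syntax could look like, mirroring the `wordPattern` object form (not currently supported):
```jsonc
{
  "folding": {
    "markers": {
      "start": { "pattern": "^\\s*//\\s*#region\\b", "flags": "i" },
      "end": { "pattern": "^\\s*//\\s*#endregion\\b", "flags": "i" }
    }
  }
}
```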
It'd also be good to augment [the doc](https://code.visualstudio.com/api/language-extensions/language-configuration-guide), which currently doesn't mention the pattern+flags syntax. | feature-request,editor-folding | low | Minor |
438,028,132 | rust | Switch to `termcolor` (first)? | There's an ongoing effort to move libtest out of tree; however, I think the current approach (#59440) is a little too aggressive, changing everything in one huge PR. I think the effort should be split up and happen in several separate steps.
The first big step in it is replacing `libterm` with `termcolor`, which means:
* Add separate features to `termcolor`, `wincolor`, and `winapi` to mark these crates unstable (so they can be shipped with rustc).
* Maybe move these crates repo to `rust-lang` organization.
* Modify libtest in-tree to use `termcolor`
An alternative is to keep the existing code for unstable rustc usage but make it a shim over `termcolor` for non-rustc usages.
Just writing these down to see what people think. @alexcrichton @Manishearth @gnzlbg @BurntSushi
| C-enhancement,T-libs-api,A-libtest | low | Minor |
438,050,052 | TypeScript | keyof of 'mixed' class equals to `string | number` | **TypeScript Version:** `3.4.3` or `3.5.0-dev.20190427`
**Search Terms:** `keyof`, `anonymous class`
**Code**
```ts
type AnyClass = new (...args: any[]) => any;
const mixin = <T extends AnyClass>(Class: T) => class extends Class {a: string};
const Cl = mixin(class {b = 123});
const inst = new Cl();
inst. // <- IntelliSense works fine here and suggests 'a' and 'b' properties
type Keys = keyof typeof inst; // Keys = string | number
```
**Expected behavior:**
`Keys` to be equal to `'a' | 'b'`
**Actual behavior:**
`Keys` to be equal to `string | number`
**Playground Link:** [ts-playground](https://www.typescriptlang.org/play/index.html#src=type%20AnyClass%20%3D%20new%20(...args%3A%20any%5B%5D)%20%3D%3E%20any%3B%0D%0A%0D%0Aconst%20mixin%20%3D%20%3CT%20extends%20AnyClass%3E(Class%3A%20T)%20%3D%3E%20class%20extends%20Class%20%7Ba%3A%20string%7D%3B%0D%0Aconst%20Cl%20%3D%20mixin(class%20%7Bb%20%3D%20123%7D)%3B%0D%0Aconst%20inst%20%3D%20new%20Cl()%3B%0D%0A%0D%0Atype%20Keys%20%3D%20keyof%20typeof%20inst%3B)
**Description:**
_I'm sorry if this is a known bug or expected behaviour. I did not find a duplicate._
Class `C` is the result of extending some other class `A`, provided as a function argument, with class `B`, which is declared inside the function body. Such a class `C` works fine for the most part, but `keyof InstanceType<typeof C>` returns `string | number`.

While IntelliSense works fine:

I'll be glad if you can suggest any workarounds for this issue. | Bug,Domain: Index Types | low | Critical |
438,060,621 | neovim | wildmenumode() with wildoptions=pum | The default arrow key action when maneuvering the pum is sub-optimal (\<left\> acts as \<up\> and \<right\> as \<down\>). I see that @bfredl attempted to fix this with the "\<down\> and \<up\> mappings for wildoptions=pum" commits; however, that commit was reverted.
I thought that the vimscript below would be sufficient, but it causes command history to not complete based on a partially written command; e.g. if my command history contains `write` and `edit`, and I type `:w` and press `<up>`, `:edit` is shown, not `:write`.
```
if &wildoptions =~ "pum"
cnoremap <up> <c-p>
cnoremap <down> <c-n>
endif
```
So, is it possible to only enable the mapping if the pum is active, or, preferably, to fix the default behavior? | documentation,complexity:low,has:plan | medium | Major |
438,068,598 | vue-element-admin | request.js 文件依赖import store from '@/store',会导致编译循环依赖RangeError: Maximum call stack size exceeded | need repro :mag_right: | low | Critical |
|
438,072,394 | godot | Make GraphNodes be able to be moved with the arrow keys | **Godot version:**
3.1.1
**Issue description:**
Sometimes I want to position GraphNodes (in VisualScript) so the connections are straight, and it would be useful if I could use the arrow keys for that. | enhancement,topic:editor,usability | low | Major |
438,079,339 | TypeScript | Cannot infer generic argument type from passed callback | **TypeScript Version:** 3.4.0-dev.201xxxxx
**Search Terms:**
infer, parameter, argument, callback, function
**Code**
```ts
function inferArguments<T>(callback: ((t: T) => void)) {
return callback;
}
function noop(){}
const explicit = inferArguments(({a = noop}: {a: Function}) => {});
explicit({a: noop}); // OK!
explicit({a: false}); // Expected error - Got one!
const implicit = inferArguments(({a = noop}) => {});
implicit({a: noop}); // OK!
implicit({a: false}); // Expected error - No Error!
```
**Expected behavior:**
Both function calls with `({a: false})` should cause a type error
**Actual behavior:**
Only the function call with the explicit typing causes a type error
[**Playground Link**](https://www.typescriptlang.org/play/#src=function%20inferArguments%3CT%3E(callback%3A%20((t%3A%20T)%20%3D%3E%20void))%20%7B%0A%20%20return%20callback%3B%0A%7D%0A%0Afunction%20noop()%7B%7D%0A%0Aconst%20explicit%20%3D%20inferArguments((%7Ba%20%3D%20noop%7D%3A%20%7Ba%3A%20Function%7D)%20%3D%3E%20%7B%7D)%3B%0Aexplicit(%7Ba%3A%20noop%7D)%3B%20%2F%2F%20OK!%0Aexplicit(%7Ba%3A%20false%7D)%3B%20%2F%2F%20Expected%20error%20-%20Got%20one!%0A%0Aconst%20implicit%20%3D%20inferArguments((%7Ba%20%3D%20noop%7D)%20%3D%3E%20%7B%7D)%3B%0Aimplicit(%7Ba%3A%20noop%7D)%3B%20%2F%2F%20OK!%0Aimplicit(%7Ba%3A%20false%7D)%3B%20%2F%2F%20Expected%20error%20-%20No%20Error!)
**Related Issues:** #30975 Looks similar but seems different since there should be a clear way for inference to work in this case
Note that `Parameters` is correctly able to extract the correct types for such a construct as seen in this bit of code:
```ts
function noop() { }
function callback({ a = noop }) { }
let args: Parameters<typeof callback>;
args[0].a as Function
```
This appears to be an error with inline callback functions used in this fashion | Bug,Domain: Type Inference | medium | Critical |
438,085,866 | TypeScript | make @ts-ignore available when using {/* @ts-ignore */}. | ## Search Terms
@ts-ignore annotation
## Suggestion
Hi, I apologize if this has already been discussed. However, I believe we need this feature for `@ts-ignore`.
Currently `@ts-ignore` only mutes errors when written as `// @ts-ignore`.
I believe we also need to make `@ts-ignore` available when using `{/* @ts-ignore */}`.
I've already found [this issue](https://github.com/Microsoft/TypeScript/issues/19573). It's a similar issue but not the same, and I think this one is easier to implement.
I think this feature only needs a change to this code.
https://github.com/Microsoft/TypeScript/blob/master/src/compiler/program.ts#L2
## Use Cases
I'd like to use this in the HTML structures in TSX files.
## Examples
This is an implementation for `amp-script`. We have to use `onclick` instead of `onClick`, and that is not supported for now.
```
const App = () => {
return (
<div>
...
{/* @ts-ignore */}
<button onclick={() => setCount(count + 1)}>Click me</button> // this line should be ignored
...
</div>
);
}
```
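For what it's worth, a hedged workaround sketch that compiles today by spreading an `any`-typed props object (illustrative; it gives up type checking for those props):
```tsx
import React, { useState } from 'react';

const Counter = () => {
  const [count, setCount] = useState(0);
  // Spreading an `any`-typed object bypasses the unknown-attribute check for `onclick`.
  const ampProps = { onclick: () => setCount(count + 1) } as any;
  return <button {...ampProps}>Clicked {count} times</button>;
};

export default Counter;
```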
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [ ] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback | high | Critical |
438,086,787 | go | cmd/go: treat most errors as fatal when resolving a package with a proxy | ### What version of Go are you using (`go version`)?
go1.12.4 and master (`049c8dbf`)
### Does this issue reproduce with the latest release?
yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/jayconrod/Library/Caches/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/jayconrod/go"
GOPROXY="http://127.0.0.1:6123/mod"
GORACE=""
GOROOT="/opt/go/installed"
GOTMPDIR=""
GOTOOLDIR="/opt/go/installed/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD="/dev/null"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/rq/x0692kqj6ml8cvrhcqh5bswc008xj1/T/go-build212227611=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>
### What did you do?
In src/cmd/go/testdata/script/mod_load_badzip.txt, remove commands except the `go build` command, so that `go build` runs with an empty module.
`! go build rsc.io/badzip`
### What did you expect to see?
The zip file for this module contains a superfluous .info file. We should see an error about that.
### What did you see instead?
Instead, we see a message `cannot find module providing package`.
### Analysis
When we want to load a package that isn't provided by any module in the build list, we attempt to retrieve modules that *could* provide the package. In this case, we'll look for `rsc.io` and `rsc.io/badzip`. Frequently, these modules don't exist (we get a 404 or 410 from the proxy), so if there's an error on every path we try, we report `cannot find module providing package`.
Many other errors are possible though. We should distinguish "not found" errors from "found but couldn't access" errors.
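To make the distinction concrete, a purely hypothetical sketch (invented names, not the actual cmd/go code):
```go
// A dedicated "not found" type lets the resolver keep trying other candidate
// module paths, while any other failure should be reported as fatal.
type moduleNotFoundError struct {
	path string
	err  error
}

func (e *moduleNotFoundError) Error() string {
	return "module " + e.path + ": not found: " + e.err.Error()
}

func isNotFound(err error) bool {
	_, ok := err.(*moduleNotFoundError)
	return ok
}
```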
When we fetch code from a VCS, we have a special `codehost.VCSError` type which covers "found but couldn't access" errors. Instead, we should define a special "not found" error type in `cmd/go/internal/modfetch` (used with both proxy and VCS), and we should treat all other errors as "found but couldn't access". | NeedsFix,GoCommand,modules | low | Critical |
438,090,795 | three.js | Suggestion: Merge updateMatrixWorld and updateWorldMatrix | The `updateWorldMatrix` function adds the ability to guarantee that the parent matrices are up to date before updating an objects world matrix. In the interest of maintaining backwards compatibility and simplifying the API why not merge the behavior of the two functions into `updateMatrixWorld`?
The API might look like this which would give the same behavior as the current function:
```js
updateMatrixWorld(force = false, updateChildren = true, updateParents = false)
```
And to get the behavior of the current `updateWorldMatrix` function you would call `updateMatrixWorld(true, false, true)`.
The `force` parameter and `matrixWorldNeedsUpdate` function might feel a little clunky but maybe that can be improved in the future, as well?
/cc @WestLangley
Related to #16292 | Suggestion | low | Minor |
438,101,950 | vue-element-admin | Routes is missing after reload page | ## Bug report(问题描述)
#### Steps to reproduce(问题复现步骤)
1. Make login
2. Open an option from the menu (note that the option is added according to https://panjiachen.github.io/vue-element-admin-site/guide/essentials/permission.html#permission)
3. After opening the option, press F5 or reload the page manually
4. Go to the selected option or select another option
5. Surprise: the option is missing
I noticed that the static menu options are kept, but the new asyncRoutes are missing.
#### Screenshot or Gif(截图或动态图)
#### Link to minimal reproduction(最小可在线还原demo)
#### Other relevant information(格外信息)
- Your OS: Linux Kubuntu AMD 64
- Node.js version: latest
- vue-element-admin version: latest
Best regards.
| need repro :mag_right: | low | Critical |
438,108,716 | godot | Using set_shader_param with a color returns non-linear colors | **Godot version:** v3.1.stable.mono.official (LATEST)
**OS/device including version:** Ubuntu 18.04 bionic,
Kernel: x86_64 Linux 4.18.0-17-generic
**Issue description:**
If colors are used to transfer values (since sampler2D is our only array type), the values seem to be offset by 0.05 and put through a square-root function.
If the red value is set to `sqrt(0.55)`, the line seems to appear in the middle:

**Steps to reproduce:**
- Use `set_shader_param('param_name', Color(0.5, 1.0, 1.0))` to set a specific color
- When accessing the color in the shader, the values will be mutated
**Minimal reproduction project:**
[GodotTestProject.zip](https://github.com/godotengine/godot/files/3125580/GodotTestProject.zip)
| bug,breaks compat,topic:shaders | low | Major |
438,126,725 | youtube-dl | Add support for British Telecom film archive | ## Checklist
- [x ] I'm reporting a new site support request
- [x ] I've verified that I'm running youtube-dl version **2019.04.24**
- [x ] I've checked that all provided URLs are alive and playable in a browser
- [x ] I've checked that none of provided URLs violate any copyrights
- [x ] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
- Single video: https://bt.kuluvalley.com/view/ApYmrmm36J8
- Single video: https://bt.kuluvalley.com/view/XxzB3x6SF9V
- Single video: https://bt.kuluvalley.com/view/I9tnOFRRssI
## Description
This website offers films held by British Telecom that are historically significant.
| site-support-request | low | Critical |
438,172,532 | TypeScript | [Feature request] keyword `final` type for output full final type in .d.ts | ## Search Terms
## Suggestion
With the keyword `final`, output the full, resolved type that TypeScript knows into the .d.ts.
## Use Cases
This would help because, when importing `a.ts`, we would not need to import any other file to get the type info.
## Examples
> b.ts
```ts
export type A2 = {
a: number,
}
export type B2 = {
b: number,
}
```
### origin
> a.ts
```ts
import { A2, B2 } from './b'
type A1 = {
a: number,
}
type B1 = {
b: number,
}
export type C1 = A1 & B1
export type C2 = A2 & B2
```
=>
.d.ts
```ts
import { A2, B2 } from './b';
declare type A1 = {
a: number;
};
declare type B1 = {
b: number;
};
export declare type C1 = A1 & B1;
export declare type C2 = A2 & B2;
```
### with keyword `final`
> a.ts
```ts
import { A2, B2 } from './b'
export type A1 = {
a: number,
}
export type B1 = {
b: number,
}
export final type C1 = A1 & B1
export final type C2 = A2 & B2
```
=>
there will be no need to know what A1 and B1 are,
and b.ts will not be imported to find out what A2 and B2 are when importing a.ts
```ts
export declare type C1 = {
a: number,
b: number,
}
export declare type C2 = {
a: number,
b: number,
}
```
or
```ts
export declare type C1 = {
a: number,
} & {
b: number,
}
export declare type C2 = {
a: number,
} & {
b: number,
}
```
## Checklist
My suggestion meets these guidelines:
* [ ] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [ ] This wouldn't change the runtime behavior of existing JavaScript code
* [ ] This could be implemented without emitting different JS based on the types of the expressions
* [ ] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [ ] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback,Domain: Declaration Emit | low | Critical |
438,182,509 | TypeScript | 'instanceof' changes type outside of 'if' statement | **TypeScript Version:** 3.4.*
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:**
instanceof
**Code**
```ts
// @strictNullChecks: true
interface OnChanges {
onChanges(changes: Record<string, unknown>): void
}
interface Validator {
validate(): null | Record<string, unknown>;
}
class C {
validate() {
return {}
}
}
function foo() {
let v: Validator & Partial<OnChanges> = null as any;
if (v instanceof C) {
}
if (v.onChanges) { // error here
v.onChanges({});
}
}
```
**Expected behavior:**
No error.
**Actual behavior:**
```
Property 'onChanges' does not exist on type 'C | (Validator & Partial<OnChanges>)'.
Property 'onChanges' does not exist on type 'C'.
```
`instanceof` shouldn't change the type of the variable outside of the `if` statement.
This only happens with `strictNullChecks` enabled.
**Playground Link:** https://typescript-play.js.org/#code/JYOwLgpgTgZghgYwgAgPIgMIAs4gOYQDOyA3gFDKXID2mO+RAFAvQYQFzIBKEC1UAEwA8hMFFB4ANMgCuIANYhqAdxAA+AJScAbtWACyAXzKhIsRCgBqcADb64YfqQpVtt+5EZbkIGTZvIAD7cvPzCouL40nKKKuoA3EZkZAg2cITEGM5UyG52Ag4QXtk5VFAQYDJQIKTGOcbGMHIIYMC0yDDU1MXkOTYVuZzW+Q5OAGTIAApwUK22QujYuGxqyAC8Pn4B6ci4AJ6JOcAwyIzayKCiuEjUJxgaJZR1VMen2gB0tEsMhA+9pZQPl9WEwSIYNIcqA0gA
**Related Issues:**
| Bug,Domain: Control Flow,Fix Available,Rescheduled | low | Critical |
438,238,836 | pytorch | BatchNorm1d does not support batchsize>65535 in eval mode with 3 dimension (NxCxL), raise CUDNN_STATUS_NOT_SUPPORTED | ## 🐛 Bug
Hi, I found a bug when feeding a big batch size to my model. The code works fine if I remove `bn.eval()`.
I have checked similar issues; none seems to solve the problem.
`RuntimeError: CUDNN_STATUS_NOT_SUPPORTED. This error may appear if you passed in a non-contiguous input.`
## To Reproduce
My minimum code for reproducing this bug:
```
import torch
import torch.nn as nn
torch.backends.cudnn.enabled = True
x = torch.rand(65536, 1, 1).cuda()
bn = nn.BatchNorm1d(1)
bn.cuda()
bn.eval()
y = bn(x)
print(y.size())
```
it works fine in training mode.
It also works fine without the L dimension at the end for the BatchNorm, as below:
```
import torch
import torch.nn as nn
torch.backends.cudnn.enabled = True
x = torch.rand(65536, 1).cuda()
bn = nn.BatchNorm1d(1)
bn.cuda()
bn.eval()
y = bn(x)
print(y.size())
```
Environment:
- PyTorch Version (e.g., 1.0): 1.0.1
- OS (e.g., Linux): Linux
- How you installed PyTorch (`conda`, `pip`, source): conda
- Python version: 3.6.6
- CUDA/cuDNN version: CUDA 10/ cuDNN7.4.1
- GPU models and configuration: GTX1080ti
| module: dependency bug,module: cudnn,triaged,small | low | Critical |
438,288,673 | go | go/doc: lines with double-quotes cannot be headings | ### What did you do?
Add a package doc like:
```
// ...
// The behaviour can be customized by annotations on the CRDs.
//
// Annotation "annotation-name"
//
// example.com/annotation-name: <bool>
//
// If true, lorem ipsum ...
//
```
### What did you expect to see?
The string `Annotation "annotation-name"` should be a heading since it is delimited by blank lines, starts with an uppercase letter and does not end with punctuation.
### What did you see instead?
It shows up as a regular text paragraph. I've also tested simply putting the word Annotation there, to exclude the quotes.
| NeedsInvestigation | low | Major |
438,299,699 | TypeScript | Allow returning types from functions | ## Search Terms
Not easily googleable because all the words are the same for extracting the function's return type.
export type from function, return type from function
## Suggestion
Make it possible to return a type from a function.
The type doesn't need to depend on the runtime values passed to the function, only on their types. This feature is just syntactic sugar.
Desired:
```typescript
const {decorated_fn, DecoratedFnParams} = decorate(fn);
declare const decorated_fn_params: DecoratedFnParams;
decorated_fn(decorated_fn_params);
```
Workaround:
```typescript
const decorated_fn = decorate(fn);
type DecoratedFnParams = DecoratedParams<typeof fn>;
declare const decorated_fn_params: DecoratedFnParams;
decorated_fn(decorated_fn_params);
```
Alternative syntax, since this only seems useful for functions that run once:
```typescript
import {decorated_fn, DecoratedFnParams} = decorate(fn);
```
(Shouldn't be a compatibility breaking change for the same reasons `import ... = require()` isn't).
## Use Cases
My case is decorating a class with multiple generic parameters. Currently I have to do this:
```typescript
interface Bar extends BaseBar {}
interface Baz extends BaseBaz {}
class Foo<Bar, Baz> extends BaseFoo {}
const DecoratedFoo = decorator(Foo);
type DecoratedFooBar = ExtractBar<DecoratedFoo>;
type DecoratedFooBaz = ExtractBaz<DecoratedFoo>;
```
But it could be just `const {DecoratedFoo, DecoratedFooBar, DecoratedFooBaz} = decorator(Foo);`
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback | low | Major |
438,404,501 | kubernetes | Metrics Request: FS metrics such as Disk IO for PVCs | **What would you like to be added**:
Is it possible to extend `container_fs_io_time_seconds_total` and similar FS metrics for PVCs? I am not able to access metrics for any of the mounted PVC disks.
I can only access them via `node_disk_io_time_seconds_total` from node-exporter but there's no way to automatically map those to the container mounting the disk without some seriously manual process. I'm happy to contribute a fix for this with some guidance.
**Why is this needed**:
Monitoring disk IO is critical for several persistent workloads such as StatefulSets running in production environments.
Thank you! | sig/storage,kind/feature,lifecycle/frozen | low | Major |
438,407,535 | tensorflow | CPU support for dilation rates larger than 1 |
**System information**
- TensorFlow version (you are using): 1.13.1
- Are you willing to contribute it (Yes/No): Yes
**Describe the feature and the current behavior/state.**
Current behavior for a model we are training is that CPU training yields errors:
`tensorflow/core/common_runtime/executor.cc:624] Executor failed to create kernel. Invalid argument: Current libxsmm and customized CPU implementations do not yet support dilation rates larger than 1.
[[{{node Train_1/Optimizer/TrainOperation/gradients/Conv2D_72_grad/Conv2DBackpropFilter}}]]
`
GPU training is successful. We would like to have parity between CPU and GPU train as not all developers have a local GPU host. This appears to be a documented issue but I could not find any mention of when it may be fixed or what may be blocking this issue.
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/kernels/conv_grad_filter_ops.cc#L223
This is a feature request to complete the TODO mentioned in the above link.
**Will this change the current api? How?**
**Who will benefit with this feature?**
Anyone that is trying to train on CPU for dilated convolutions
**Any Other info.**
| stat:awaiting tensorflower,type:feature,comp:runtime | low | Critical |
438,450,384 | rust | Where bounds are ignored as part of trait type parameters on an impl | Where bounds are ignored, but suggested by the compiler, when implementing trait objects with second order types.
A full example is available at: https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=d61ff8acea97a202e00ea6fc32def211
The bound on line 21 is ignored:
```rust
<T as hyper::service::MakeService<&'a SC>>::Service: 'static,
```
but suggested (with inaccurate markers):
```
error[E0310]: the associated type `<T as hyper::service::make_service::MakeService<&SC>>::Service` may not live long enough
--> src/lib.rs:10:32
|
10 | impl<'a, T, SC, RC, F, OB, ME> hyper::service::MakeService<&'a SC>
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= help: consider adding an explicit lifetime bound `<T as hyper::service::make_service::MakeService<&SC>>::Service: 'static`...
note: ...so that the type `<T as hyper::service::make_service::MakeService<&SC>>::Service` will meet its required lifetime bounds
--> src/lib.rs:10:32
|
10 | impl<'a, T, SC, RC, F, OB, ME> hyper::service::MakeService<&'a SC>
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
```
In fact, in the example given, the error is repeated four times for good measure, in a standard `cargo build`.
The workaround is to add another type parameter to the function declaration - i.e. changing:
```rust
impl<'a, T, S, SC, RC, F, OB, ME> hyper::service::MakeService<&'a SC>
for MakeService<T, RC>
where
T: hyper::service::MakeService<
&'a SC,
ReqBody = hyper::Body,
ResBody = OB,
Error = hyper::Error,
MakeError = ME,
Future = F,
>,
<T as hyper::service::MakeService<&'a SC>>::Service: 'static,
```
to
```rust
impl<'a, T, SC, RC, F, OB, ME> hyper::service::MakeService<&'a SC>
for MakeService<T, RC>
where
T: hyper::service::MakeService<
&'a SC,
ReqBody = hyper::Body,
ResBody = OB,
Error = hyper::Error,
Service = S,
MakeError = ME,
Future = F,
>,
S: 'static,
```
which additionally requires duplicating any bounds on the inner type (i.e,
```rust
type Service: Service<
ReqBody=Self::ReqBody,
ResBody=Self::ResBody,
Error=Self::Error,
>;
```
specified at https://docs.rs/hyper/0.12.25/src/hyper/service/make_service.rs.html#10-44).
This results in function declarations with large numbers of type parameters, and more complex bounds than is strictly necessary. | C-enhancement,A-diagnostics,T-compiler,A-suggestion-diagnostics | low | Critical |
438,516,241 | pytorch | caffe2/resnet50 assert in fetch_blob() when base_learning_rate = 0 | ## 🐛 Bug
The MultiPrecisionSgdOptimizer used by the resnet50_trainer.py Caffe2 example avoids setting up a learning rate blob (doesn't call build_lr()) if the base_learning_rate is 0:
From `caffe2/python/optimizer.py`:
```
class MultiPrecisionSgdOptimizer(SgdOptimizer):
def __init__(self, base_learning_rate=0.1, ...):
super(MultiPrecisionSgdOptimizer, self).__init__(
base_learning_rate=base_learning_rate,
...
)
def _run(self, net, param_init_net, param_info):
...
if self.base_learning_rate == 0:
return
...
lr, _ = self.build_lr(
net, param_init_net,
base_learning_rate=-self.base_learning_rate,
policy=self.policy,
**(self.init_kwargs)
)
```
But the resnet50_trainer.py example code tries to fetch the learning rate blob without regard for whether `base_learning_rate` was non-zero.
From `caffe2/examples/resnet50_trainer.py`:
```
def RunEpoch(...):
...
learning_rate = workspace.FetchBlob(
data_parallel_model.GetLearningRateBlobNames(train_model)[0]
)
```
So at the end of a training epoch, if base_learning_rate was specified as 0, an assert / enforce fail occurs in fetch_blob() as the code tries to fetch a blob that doesn't exist:
```
$ python resnet50_trainer.py ... --base_learning_rate 0.0 ...
Traceback (most recent call last):
File "resnet50_trainer.py", line 660, in <module>
main()
File "resnet50_trainer.py", line 655, in main
Train(args)
File "resnet50_trainer.py", line 563, in Train
explog
File "resnet50_trainer.py", line 209, in RunEpoch
data_parallel_model.GetLearningRateBlobNames(train_model)[0]
File "/.../lib/python2.7/site-packages/caffe2/python/workspace.py", line 356, in FetchBlob
result = C.fetch_blob(StringifyBlobName(name))
RuntimeError: [enforce fail at pybind_state.cc:183] ws->HasBlob(name). Can't find blob: MultiPrecisionSgdOptimizer_0_lr_gpu0
```
I'd be happy to submit a PR for this, but not sure whether the better solution is to qualify the example's call to FetchBlob(), or just to assert (at the top of Train()) that the user specified a non-zero base_learning_rate.
I'd lean toward the assert, as I don't see much value in a training run with LR = 0, but that leaves me wondering why the optimizer code expects / special-cases 0 in the first place.
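For illustration, a hedged sketch of the first option, guarding the fetch (untested; it assumes `workspace.HasBlob` behaves as its name suggests):
```python
# Sketch of a guard in RunEpoch, reusing the names from the snippets above.
lr_blob_names = data_parallel_model.GetLearningRateBlobNames(train_model)
if lr_blob_names and workspace.HasBlob(lr_blob_names[0]):
    learning_rate = workspace.FetchBlob(lr_blob_names[0])
else:
    learning_rate = 0.0  # no LR blob exists because base_learning_rate == 0
```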
## To Reproduce
See above
## Expected behavior
Training should either proceed despite the (perhaps nonsensical) LR, or fail in a way that more obviously indicates what's wrong.
## Environment
- PyTorch Version (e.g., 1.0): 1.0.1
- OS (e.g., Linux): Linux
- How you installed PyTorch (`conda`, `pip`, source): locally-built conda package
- Build command you used (if compiling from source):
- Python version: any; confirmed with 2.7 and 3.6
- CUDA/cuDNN version: 10.1 / 7.5
- GPU models and configuration: V100
- Any other relevant information:
## Additional context
| caffe2 | low | Critical |
438,516,768 | flutter | Row as leading/trailing in CupertinoSliverNavigationBar takes up the entire space covering the regular title | ## Steps to Reproduce
1. Use CupertinoSliverNavigationBar
``` dart
class AppNavigationBar extends StatelessWidget {
const AppNavigationBar(this._title, this._appLocalizations);
final String _title;
final AppLocalizations _appLocalizations;
@override
Widget build(BuildContext context) {
return CupertinoSliverNavigationBar(
border: Border.all(color: CupertinoColors.white),
backgroundColor: CupertinoColors.white,
largeTitle: Text(_title),
middle: Text(_title),
previousPageTitle: _appLocalizations.backButtonTitle,
trailing: Row(
mainAxisAlignment: MainAxisAlignment.end,
crossAxisAlignment: CrossAxisAlignment.end,
children: <Widget>[
CupertinoButton(
padding: const EdgeInsets.all(0),
child: const Icon(
CupertinoIcons.info,
color: CupertinoColors.black,
size: 32,
),
onPressed: () {
Navigator.push(
context,
PageRouteBuilder<CupertinoFullscreenDialogTransition>(
pageBuilder: (BuildContext context,
Animation<double> anim1,
Animation<double> anim2) =>
SettingsScreen(
localizations: _appLocalizations,
),
transitionsBuilder: (BuildContext context,
Animation<double> animation,
Animation<double> secondaryAnimation,
Widget child) =>
CupertinoFullscreenDialogTransition(
animation: animation,
child: child,
)));
}),
],
),
);
}
}
```
2. Use it inside a CustomScrollView:
``` dart
GestureDetector(
onTap: () {
_focus.unfocus();
},
child: CupertinoPageScaffold(
child: CustomScrollView(
physics: const AlwaysScrollableScrollPhysics(),
controller: _scrollController,
slivers: <Widget>[
AppNavigationBar(_appLocalization.title, _appLocalization),
CupertinoSliverRefreshControl(
onRefresh: () {
return Future<void>.delayed(Duration(seconds: 1))
..then<void>((_) {
if (!mounted) {
return;
}
repopulateList();
});
},
),
SliverPersistentHeader(
pinned: true,
delegate: SliverAppBarDelegate(
minHeight: Dimens.headerMinDimension,
maxHeight: Dimens.headerMinDimension,
child: _buildSearchField(context),
),
),
SliverSafeArea(
top: false, // Top safe area is consumed by the navigation bar.
sliver: SliverFixedExtentList(
itemExtent: Dimens.homeScreenItemExtent,
delegate: SliverChildBuilderDelegate(
(BuildContext context, int index) {
if (index >= _loadedCarPlates.length) {
return _buildFooter();
} else {
return listItem();
}
},
childCount: n,
),
),
),
],
),
),
)
```
```
Flutter (Channel master, v1.5.9-pre.29, on Mac OS X 10.14.4 18E226, locale en-UA)
• Flutter version 1.5.9-pre.29 at /Users/bogdanmigilev/Documents/flutter
• Framework revision d121df9987 (3 hours ago), 2019-04-26 09:02:38 -0700
• Engine revision 0b6a4be5f1
• Dart version 2.3.0 (build 2.3.0-dev.0.3 7adad2a245)
```
## How it looks

| framework,f: cupertino,a: quality,P2,workaround available,team-design,triaged-design | low | Major |
438,575,051 | youtube-dl | Site Support - ceknito.sk | ## Checklist
- [x] I'm reporting a new site support request
- [x] I've verified that I'm running youtube-dl version **2019.04.30**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that none of provided URLs violate any copyrights
- [x] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
- Single video: https://www.ceknito.sk/video/480877
- User: https://www.ceknito.sk/channel/Proxymo
- Single Video Metadata/Links: https://www.ceknito.sk/d/embed/001d/480877/info.xml
## Description
This is apparently the first Slovakian video sharing site. It uses an Adobe Flash video player, and provides metadata in XML format.
| site-support-request | low | Critical |
438,584,945 | godot | Saving a scene that has an animationTree node "working" mess up ragdoll setup on reload | **Godot version:**
v3.2.dev.custom_build.2931b4d
**OS/device including version:**
OS: Ubuntu 18.04.2
**Issue description:**
I noticed that every time I save a scene of a character that has a ragdoll setup and an AnimationTree that is active with an animation running, then close the scene and reopen it, the locations of the physical bones are all messed up, as seen in [this video](https://streamable.com/yul38)
(in the video I keep switching back and forth between scenes to "disable" the rendering of joint constraints to see the physical bones setup)
**Minimal reproduction project:**
[testm2.zip](https://github.com/godotengine/godot/files/3129785/testm2.zip)
| bug,topic:editor,confirmed | low | Minor |
438,589,217 | create-react-app | Allow import of built-in browser modules, i.e. std:kv-storage | # Should built-in browser modules be supported?
As it currently stands, CRA does not support built-in, standard library, browser modules and I haven't been able to find any information or discussion on future support for these modules (i.e. `std:kv-storage`).
If built-in modules are to ever be supported it would make sense to use polyfills as fallbacks. CRA could handle polyfilling those modules itself or allow developers to provide a mapping of built-in module names to fallbacks (i.e. a URL or a module in `node_modules/`).
### Is this a bug report?
Yes
### Environment
Browser: **Chrome v74** (you'll need this to import `std:kv-storage`)
Dev Environment: **Codesandbox** but the problem is everywhere
CRA version: **3.0.0**
### Steps to Reproduce
0. **Enable experimental web platform features on Chrome**
chrome://flags/#enable-experimental-web-platform-features
1. Create a new project with CRA
2. Attempt to import `storage` from `std:kv-storage` inside any js file (I used `/src/index.js`) like so:
```js
import { storage } from 'std:kv-storage';
```
3. Try to `console.log` the imported `storage` variable.
```js
console.log(storage);
```
### Expected Behavior
I expected the build to succeed: the import should be identified as referencing a built-in module because it is prefixed with `std:`, and therefore the bundler should **not** attempt to find and bundle a module called `std:kv-storage` from `/node_modules/`.
Since the application didn't build, this part never happened, but I expected `console.log(storage)` to log the following to the console:
```js
StorageArea {}
```
### Actual Behavior
The build fails with the error:
```
Module not found: Can't resolve 'std:kv-storage'
```
### Reproducible Demo
CRA unsuccessful build repro:
https://codesandbox.io/s/4lw89q0l14?fontsize=14
Static example of importing `std:kv-storage`:
https://codesandbox.io/s/7m87rjjk90?fontsize=14 | tag: underlying tools,tag: new feature | low | Critical |
438,631,295 | angular | Angular Elements - Will show ExpressionChangedAfterItHasBeenCheckedError when using a variable in an attribute | # 🐞 bug report
### Affected Package
The issue is caused by package @angular/elements 7.1.0
### Is this a regression?
Yes, the issue also existed in the previous version, 6.1.9.
### Description
popup-element is a custom element created with Angular Elements, the same as the demo in the Angular documentation: https://angular.cn/guide/elements#example-a-popup-service
And I use popup-element like this:
```
<popup-element [message]="'ww'"></popup-element>
```
Add this code to the AppComponent HTML template.
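For reference, the element is registered roughly as in the linked docs example (a sketch only; `PopupComponent` is the component from that demo, and module imports are trimmed):

```ts
import { Injector, NgModule } from '@angular/core';
import { createCustomElement } from '@angular/elements';
import { PopupComponent } from './popup.component';

@NgModule({
  declarations: [PopupComponent],
  entryComponents: [PopupComponent],
})
export class PopupModule {
  constructor(injector: Injector) {
    // Wrap the Angular component as a custom element and register it.
    const PopupElement = createCustomElement(PopupComponent, { injector });
    customElements.define('popup-element', PopupElement);
  }
}
```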
## 🔬 Minimal Reproduction
https://github.com/wszhi/angular-element-demo
## 🔥 Exception or Error
This error is shown in the console.
<pre><code>
ng:///AppModule/PopupComponent.ngfactory.js:8 ERROR Error: ExpressionChangedAfterItHasBeenCheckedError: Expression has changed after it was checked. Previous value: 'null: undefined'. Current value: 'null: ww'.
at viewDebugError (vendor.js:47267)
at expressionChangedAfterItHasBeenCheckedError (vendor.js:47244)
at checkBindingNoChanges (vendor.js:47416)
at checkNoChangesNodeInline (vendor.js:51494)
at checkNoChangesNode (vendor.js:51467)
at debugCheckNoChangesNode (vendor.js:52362)
at debugCheckRenderNodeFn (vendor.js:52294)
at Object.eval [as updateRenderer] (ng:///AppModule/PopupComponent.ngfactory.js:12)
at Object.debugUpdateRenderer [as updateRenderer] (vendor.js:52279)
at checkNoChangesView (vendor.js:51302)
</code></pre>
## 🌍 Your Environment
**Angular Version:**
```
Angular CLI: 7.3.8
Node: 10.12.0
OS: darwin x64
Angular: 7.2.5
```
**Anything else relevant?**
In chrome browser Version 73.0.3683.103 (Official Build) (64-bit) | type: bug/fix,freq1: low,workaround2: non-obvious,area: elements,state: confirmed,P4 | low | Critical |
438,633,440 | youtube-dl | Not able to download from Safaribooks Video | <!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of all, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2019.04.30. If it's not, see https://yt-dl.org/update on how to update. Issues with an outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in http://yt-dl.org/escape.
- Search the bugtracker for similar issues: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Read bugs section in FAQ: http://yt-dl.org/reporting
- Finally, put x into all relevant boxes (like this [x])
-->
- [x] I'm reporting a broken site support issue
- [x] I've verified that I'm running youtube-dl version **2019.04.30**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [ ] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [x] I've searched the bugtracker for similar bug reports including closed ones
- [x] I've read bugs section in FAQ
## Verbose log
<!--
Provide the complete verbose output of youtube-dl that clearly demonstrates the problem.
Add the `-v` flag to your command line you run youtube-dl with (`youtube-dl -v <your command line>`), copy the WHOLE output and insert it below. It should look similar to this:
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']
[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
[debug] youtube-dl version 2019.04.30
[debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
[debug] Proxy map: {}
<more lines>
-->
```
G:\>youtube-dl -v
[debug] System config: []
[debug] User config: ['-u', 'PRIVATE', '-p', 'PRIVATE', '-i', '-c', '--no-warnings', '--console-title', '--batch-file=batch-file.txt', '-o', '%(playlist_title)s/%(playlist_index)s-%(title)s.%(ext)s', '-f', 'best[tbr<=1000]/worst[[height>=720]]/best[[height<720]]']
[debug] Custom config: []
[debug] Command-line args: ['-v']
[debug] Batch file urls: ['https://learning.oreilly.com/videos/hands-on-problem-solving/9781789530087/']
[debug] Encodings: locale cp1252, fs mbcs, out cp437, pref cp1252
[debug] youtube-dl version 2019.04.30
[debug] Python version 3.4.4 (CPython) - Windows-10-10.0.17134
[debug] exe versions: none
[debug] Proxy map: {}
[safari:course] Downloading login form
[safari:course] Logging in
[safari:course] 9781789530087: Downloading course JSON
ERROR: Unable to download JSON metadata: HTTP Error 403: Forbidden (caused by HTTPError()); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpfeq7l785\build\youtube_dl\extractor\common.py", line 626, in _request_webpage
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpfeq7l785\build\youtube_dl\YoutubeDL.py", line 2227, in urlopen
File "C:\Python\Python34\lib\urllib\request.py", line 470, in open
File "C:\Python\Python34\lib\urllib\request.py", line 580, in http_response
File "C:\Python\Python34\lib\urllib\request.py", line 508, in error
File "C:\Python\Python34\lib\urllib\request.py", line 442, in _call_chain
File "C:\Python\Python34\lib\urllib\request.py", line 588, in http_error_default
## Description
<!--
Provide an explanation of your issue in an arbitrary form. Please make sure the description is worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. Provide any additional information, suggested solution and as much context and examples as possible.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->
WRITE DESCRIPTION HERE
I am trying to download videos from Safaribooks following the instructions from the website below.
https://www.quora.com/How-can-I-download-videos-from-Safari-Books-online
Below are the config file details:
```
-u <Username>
-p <Password>
-i
-c
--no-warnings
--console-title
--batch-file='batch-file.txt'
-o '%(playlist_title)s/%(playlist_index)s-%(title)s.%(ext)s'
-f 'best[tbr<=1000]/worst[[height>=720]]/best[[height<720]]'
```
Thanks and Regards,
| account-needed | low | Critical |
438,657,298 | pytorch | Performance difference between 0.4.1 and 1.1.0 | ## ❓ Questions and Help
The average runtime of my model under PyTorch 0.4.1.post2 is about 16 ms, while after changing to PyTorch 1.1.0.dev20190411 it increases to about 20 ms.
Runtime in 0.4.1 (profiler screenshot omitted)
Runtime in 1.1.0 (profiler screenshot omitted)
Moreover, the performance drops ~1.0 AP (object detection) using PyTorch 1.1.0 compared with 0.4.1.
I wonder whether some optimizations present in 0.4.1 were removed in 1.1.0, or whether their implementations have changed.
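For reference, a minimal sketch of how such per-iteration timings can be reproduced (a torchvision resnet50 and a dummy input stand in for the actual detection model, and `torch.cuda.synchronize()` is called so GPU work is actually included in the timing):

```python
import time

import torch
import torchvision

# Stand-in model and input; the real detector is not reproduced here.
net = torchvision.models.resnet50().cuda().eval()
x = torch.randn(1, 3, 800, 800, device="cuda")

with torch.no_grad():
    for _ in range(10):              # warm-up iterations
        net(x)
    torch.cuda.synchronize()

    times = []
    for _ in range(50):
        start = time.time()
        net(x)
        torch.cuda.synchronize()     # wait for queued CUDA kernels to finish
        times.append(time.time() - start)

print("mean forward time: %.1f ms" % (1000 * sum(times) / len(times)))
```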
## Environment
### 1.1.0
PyTorch version: 1.1.0.dev20190411
Is debug build: No
CUDA used to build PyTorch: 10.0.130
OS: Ubuntu 16.04.3 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 20160609
CMake version: version 3.5.1
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 9.0.176
GPU models and configuration:
GPU 0: GeForce GTX 1080 Ti
GPU 1: GeForce GTX 1080 Ti
GPU 2: GeForce GTX 1080 Ti
GPU 3: GeForce GTX 1080
Nvidia driver version: 410.73
cuDNN version: Probably one of the following:
/usr/local/MATLAB/R2016b/bin/glnxa64/libcudnn.so.4.0.7
/usr/local/cuda-8.0/lib64/libcudnn.so.7.0.2
/usr/local/cuda-8.0/lib64/libcudnn_static.a
/usr/local/cuda-9.0/cuda/lib64/libcudnn.so.7.0.5
/usr/local/cuda-9.0/cuda/lib64/libcudnn_static.a
/usr/local/cuda-9.0/lib64/libcudnn.so
/usr/local/cuda-9.0/lib64/libcudnn.so.7
/usr/local/cuda-9.0/lib64/libcudnn.so.7.0.5
/usr/local/cuda-9.0/lib64/libcudnn_static.a
Versions of relevant libraries:
[pip] Could not collect
[conda] blas 1.0 mkl
[conda] mkl 2018.0.3 1
[conda] mkl_fft 1.0.6 py37h7dd41cf_0
[conda] mkl_random 1.0.1 py37h4414c95_1
[conda] pytorch-nightly 1.1.0.dev20190411 py3.7_cuda10.0.130_cudnn7.4.2_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch
[conda] torch 1.0.0a0+54e8623 <pip>
### 0.4.1
PyTorch version: 0.4.1.post2
Is debug build: No
CUDA used to build PyTorch: 9.0.176
OS: Ubuntu 16.04.3 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 20160609
CMake version: version 3.5.1
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 9.0.176
GPU models and configuration:
GPU 0: GeForce GTX 1080 Ti
GPU 1: GeForce GTX 1080 Ti
GPU 2: GeForce GTX 1080 Ti
GPU 3: GeForce GTX 1080
Nvidia driver version: 410.73
cuDNN version: Probably one of the following:
/usr/local/MATLAB/R2016b/bin/glnxa64/libcudnn.so.4.0.7
/usr/local/cuda-8.0/lib64/libcudnn.so.7.0.2
/usr/local/cuda-8.0/lib64/libcudnn_static.a
/usr/local/cuda-9.0/cuda/lib64/libcudnn.so.7.0.5
/usr/local/cuda-9.0/cuda/lib64/libcudnn_static.a
/usr/local/cuda-9.0/lib64/libcudnn.so
/usr/local/cuda-9.0/lib64/libcudnn.so.7
/usr/local/cuda-9.0/lib64/libcudnn.so.7.0.5
/usr/local/cuda-9.0/lib64/libcudnn_static.a
Versions of relevant libraries:
[pip] numpy (1.16.2)
[pip] torch (0.4.1.post2)
[pip] torch-encoding (0.5.2)
[pip] torchfile (0.1.0)
[pip] torchnet (0.0.4)
[pip] torchvision (0.2.1)
[conda] blas 1.0 mkl
[conda] mkl 2019.0 118
[conda] mkl_fft 1.0.6 py36h7dd41cf_0
[conda] mkl_random 1.0.1 py36h4414c95_1
[conda] pytorch 0.4.1 py36_py35_py27__9.0.176_7.1.2_2 pytorch
[conda] torch-encoding 0.4.5+69f6e1c <pip>
[conda] torch-encoding 0.5.2 <pip>
[conda] torchfile 0.1.0 <pip>
[conda] torchnet 0.0.4 <pip>
[conda] torchvision 0.2.1 py36_1 pytorch
| module: performance,triaged | low | Critical |
438,697,687 | rust | Tracking issue for RFC 2645, "Transparent Unions" (formerly: and Enums) | This is a tracking issue for the RFC "Transparent Unions and Enums" (rust-lang/rfcs#2645).
**Steps:**
- [x] Implement the RFC (cc @rust-lang/compiler -- can anyone write up mentoring instructions?)
- [ ] Adjust documentation ([see instructions on forge][doc-guide])
- [ ] Stabilization PR ([see instructions on forge][stabilization-guide]) [done for enums]
[stabilization-guide]: https://forge.rust-lang.org/stabilization-guide.html
[doc-guide]: https://forge.rust-lang.org/stabilization-guide.html#updating-documentation
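For context, a minimal sketch of the kind of declaration this RFC enables (assuming a nightly toolchain with the `transparent_unions` feature gate):

```rust
#![feature(transparent_unions)]

// A union with exactly one non-zero-sized field, marked transparent so it is
// guaranteed the same layout and ABI as that field (here, u8).
#[repr(transparent)]
union ByteOrUninit {
    byte: u8,
    _uninit: (),
}

fn main() {
    let v = ByteOrUninit { byte: 7 };
    // Reading a union field requires unsafe.
    unsafe { assert_eq!(v.byte, 7) };
}
```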
**Unresolved questions:**
> The role of `#[repr(transparent)]` in nonnull-style optimizations is not entirely clear. Specifically, it is unclear whether the user can rely on these optimizations to be performed when they make a type transparent. [Transparent `union`s somewhat complicate the matter](https://github.com/rust-lang/rfcs/pull/2645#issuecomment-470699497). General consensus seems to be that the compiler is free to decide where and when to perform nonnull-style optimizations on `union`s (regardless of whether or not the `union` is transparent), and no guarantees are made to the user about when and if those optimizations will be applied. It is still an open question exactly what guarantees (if any) Rust makes about transparent `struct`s (and `enum`s) and nonnull-style optimizations.
>
> This RFC doesn't propose any changes to transparent `struct`s, and so does not strictly depend on this question being resolved. But since this RFC is attempting to round out the `#[repr(transparent)]` feature, it seems reasonable to dedicate some time to attempting to round out the guarantees about `#[repr(transparent)]` on `struct`s.
Also it is not clear if [transparent unions can even be implemented on LLVM without seriously restricting our semantics for unions overall](https://github.com/rust-lang/rust/issues/60405#issuecomment-624860809). | B-RFC-approved,T-lang,B-unstable,B-RFC-implemented,C-tracking-issue,S-tracking-design-concerns,A-repr | high | Critical |
438,727,174 | pytorch | On first construction, CUDAContext changes default CPU allocator behavior | The first time you construct a CUDAContext, the CPU memory allocator will be changed into one that allocates CUDA pinned memory, reporting that this occurred at level `VLOG(1)`. This is not documented anywhere end-users are likely to see, and is a silent performance hazard for anyone doing Caffe2-PyTorch interop, as it will *also* silently change the behavior of all PyTorch allocation functions to also start allocating pinned CPU memory by default. I see some evidence in old design discussions that we were planning to use some sort of thread-local guard to toggle between pinned memory / non-pinned memory (and have the Caffe2 executor toggle this guard), but it does not look like we ever implemented this for real.
Relevant code is `Caffe2UsePinnedCPUAllocator` in `caffe2/core/context_gpu.cu`
Cc @dzhulgakov @jerryzh168
Also cc @smessmer, I think you're most likely to accidentally trigger this | caffe2 | low | Major |
438,786,955 | youtube-dl | [toutv] Gets a 400: Bad request on login | <!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of all, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2019.04.30. If it's not, see https://yt-dl.org/update on how to update. Issues with an outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in http://yt-dl.org/escape.
- Search the bugtracker for similar issues: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Finally, put x into all relevant boxes (like this [x])
-->
- [x] I'm reporting a broken site support
- [x] I've verified that I'm running youtube-dl version **2019.04.30**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [x] I've searched the bugtracker for similar issues including closed ones
## Verbose log
<!--
Provide the complete verbose output of youtube-dl that clearly demonstrates the problem.
Add the `-v` flag to your command line you run youtube-dl with (`youtube-dl -v <your command line>`), copy the WHOLE output and insert it below. It should look similar to this:
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']
[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
[debug] youtube-dl version 2019.04.30
[debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
[debug] Proxy map: {}
<more lines>
-->
```
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: [u'https://ici.tou.tv/les-chefs/S09E05', u'-o', u'/Volumes/Fichiers/Series/Les chefs/Season 9/S09E05.mp4', u'-u', u'PRIVATE', u'-p', u'PRIVATE', u'--verbose']
[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2019.04.30
[debug] Python version 2.7.10 (CPython) - Darwin-17.7.0-x86_64-i386-64bit
[debug] exe versions: ffmpeg N-92154-g7a6d88ee62-tessus, ffprobe N-92219-g9e21ba3dc3-tessus, rtmpdump 2.4
[debug] Proxy map: {}
[debug] Using fake IP 99.254.93.186 (CA) as X-Forwarded-For.
[tou.tv] Logging in
ERROR: Unable to download JSON metadata: HTTP Error 400: Bad Request (caused by HTTPError()); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 626, in _request_webpage
return self._downloader.urlopen(url_or_request)
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 2227, in urlopen
return self._opener.open(req, timeout=self._socket_timeout)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 437, in open
response = meth(req, response)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 550, in http_response
'http', request, response, code, msg, hdrs)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 475, in error
return self._call_chain(*args)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 409, in _call_chain
result = func(*args)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 558, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
```
## Description
<!--
Provide an explanation of your issue in an arbitrary form. Provide any additional information, suggested solution and as much context and examples as possible.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->
When trying to log in, I get a 400: Bad Request. I have no idea what's causing the problem yet; the extractor has changed a lot since the last time I looked at it.
I can provide a username/password if needed.
| account-needed | low | Critical |
438,867,887 | flutter | Would like hooks to record user interactions for later playback | Alibaba has created their own internal record/playback system for quickly turning manual testing of Flutter-built apps into automated tests.
They currently maintain forks of the framework to make this happen (some "assert"-wrapped calls into their recording system in a few strategic places, e.g. GestureDetector).
Having hooks through which it's possible to write gesture recording systems might be a reasonable addition to the framework instead of requiring customers to maintain a fork.
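For discussion, a rough sketch of one shape such a recorder could take with today's public API, by tapping the gesture binding's global pointer route (this is an illustration, not Alibaba's implementation):

```dart
import 'package:flutter/gestures.dart';

/// Records every PointerEvent delivered to the app while active.
class GestureRecorder {
  final List<PointerEvent> events = <PointerEvent>[];

  void start() {
    GestureBinding.instance.pointerRouter.addGlobalRoute(_record);
  }

  void stop() {
    GestureBinding.instance.pointerRouter.removeGlobalRoute(_record);
  }

  void _record(PointerEvent event) => events.add(event);
}
```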
Thoughts @goderbauer? FYI @kangwang1988 | c: new feature,framework,customer: alibaba,P3,team-framework,triaged-framework | low | Major |
438,883,936 | flutter | Driver scrollUntilVisible method errors could be improved | For example, I was trying to drive some UI that uses a `CustomScrollView`, but I was specifying `find.byType('ListView')` for the `scrollable`. I received the following error:
```
DriverError: Failed to fulfill WaitFor due to remote error
Original error: Bad state: The client closed with pending request "ext.flutter.driver".
Original stack trace:
#0 new Client.withoutJson.<anonymous closure> (package:json_rpc_2/src/client.dart:70:24)
#1 StackZoneSpecification._run (package:stack_trace/src/stack_zone_specification.dart:209:15)
#2 StackZoneSpecification._registerCallback.<anonymous closure> (package:stack_trace/src/stack_zone_specification.dart:119:48)
#3 _rootRun (dart:async/zone.dart:1120:38)
#4 _CustomZone.run (dart:async/zone.dart:1021:19)
#5 _FutureListener.handleWhenComplete (dart:async/future_impl.dart:150:18)
#6 Future._propagateToListeners.handleWhenCompleteCallback (dart:async/future_impl.dart:609:39)
#7 Future._propagateToListeners (dart:async/future_impl.dart:665:37)
#8 Future._propagateToListeners (dart:async/future_impl.dart:566:9)
#9 Future._completeWithValue (dart:async/future_impl.dart:483:5)
#10 Future._asyncComplete.<anonymous closure> (dart:async/future_impl.dart:513:7)
#11 StackZoneSpecification._run (package:stack_trace/src/stack_zone_specification.dart:209:15)
#12 StackZoneSpecification._registerCallback.<anonymous closure> (package:stack_trace/src/stack_zone_specification.dart:119:48)
#13 _rootRun (dart:async/zone.dart:1124:13)
#14 _CustomZone.run (dart:async/zone.dart:1021:19)
#15 _CustomZone.runGuarded (dart:async/zone.dart:923:7)
#16 _CustomZone.bindCallbackGuarded.<anonymous closure> (dart:async/zone.dart:963:23)
#17 _microtaskLoop (dart:async/schedule_microtask.dart:41:21)
#18 _startMicrotaskLoop (dart:async/schedule_microtask.dart:50:5)
#19 _Timer._runTimers (dart:isolate-patch/timer_impl.dart:391:30)
#20 _Timer._handleMessage (dart:isolate-patch/timer_impl.dart:416:5)
#21 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:171:12)
package:flutter_driver/src/driver/driver.dart 424:7 FlutterDriver._sendCommand
===== asynchronous gap ===========================
dart:async _AsyncAwaitCompleter.completeError
package:flutter_driver/src/driver/driver.dart FlutterDriver._sendCommand
===== asynchronous gap ===========================
dart:async _asyncErrorWrapperHelper
package:flutter_driver/src/driver/driver.dart FlutterDriver._sendCommand
package:flutter_driver/src/driver/driver.dart 462:11 FlutterDriver.waitFor
===== asynchronous gap ===========================
dart:async _asyncThenWrapperHelper
google3:///ads/adwords_mobileapp/app/mobile_harness/test/ad_screen_memory_test.dart main.<fn>.<fn>
DriverError: Failed to fulfill Scroll due to remote error
Original error: Bad state: The client closed with pending request "ext.flutter.driver".
Original stack trace:
#0 new Client.withoutJson.<anonymous closure> (package:json_rpc_2/src/client.dart:70:24)
#1 StackZoneSpecification._run (package:stack_trace/src/stack_zone_specification.dart:209:15)
#2 StackZoneSpecification._registerCallback.<anonymous closure> (package:stack_trace/src/stack_zone_specification.dart:119:48)
#3 _rootRun (dart:async/zone.dart:1120:38)
#4 _CustomZone.run (dart:async/zone.dart:1021:19)
#5 _FutureListener.handleWhenComplete (dart:async/future_impl.dart:150:18)
#6 Future._propagateToListeners.handleWhenCompleteCallback (dart:async/future_impl.dart:609:39)
#7 Future._propagateToListeners (dart:async/future_impl.dart:665:37)
#8 Future._propagateToListeners (dart:async/future_impl.dart:566:9)
#9 Future._completeWithValue (dart:async/future_impl.dart:483:5)
#10 Future._asyncComplete.<anonymous closure> (dart:async/future_impl.dart:513:7)
#11 StackZoneSpecification._run (package:stack_trace/src/stack_zone_specification.dart:209:15)
#12 StackZoneSpecification._registerCallback.<anonymous closure> (package:stack_trace/src/stack_zone_specification.dart:119:48)
#13 _rootRun (dart:async/zone.dart:1124:13)
#14 _CustomZone.run (dart:async/zone.dart:1021:19)
#15 _CustomZone.runGuarded (dart:async/zone.dart:923:7)
#16 _CustomZone.bindCallbackGuarded.<anonymous closure> (dart:async/zone.dart:963:23)
#17 _microtaskLoop (dart:async/schedule_microtask.dart:41:21)
#18 _startMicrotaskLoop (dart:async/schedule_microtask.dart:50:5)
#19 _Timer._runTimers (dart:isolate-patch/timer_impl.dart:391:30)
#20 _Timer._handleMessage (dart:isolate-patch/timer_impl.dart:416:5)
#21 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:171:12)
```
If this error instead said something along the lines of 'could not find the provided scrollable', it would have saved me some time. | c: new feature,tool,customer: mulligan (g3),t: flutter driver,a: error message,P3,team-tool,triaged-tool | low | Critical |
438,896,342 | kubernetes | leveraging etcd minor version features | etcd 3.4 is slated for release in late June 2019. It includes additions to the API that Kubernetes can only leverage if the kube-apiserver knows it's making requests to an etcd 3.4+ cluster.
The kube-apiserver's `--storage-backend` flag is available to set the etcd major version ("etcd3").
Would it be acceptable to introduce minor version options to this flag, e.g. "etcd3.4" ?
An alternative would be to have the kube-apiserver make a request to etcd when it starts to determine the "cluster version" and then enable features based on that.
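A rough sketch of what that startup probe could look like using the etcd clientv3 API (illustrative only; error handling and version comparison are simplified):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	// The maintenance API reports each member's server version, e.g. "3.4.0".
	status, err := cli.Status(ctx, "127.0.0.1:2379")
	if err != nil {
		panic(err)
	}
	fmt.Println("etcd server version:", status.Version)
	// kube-apiserver could gate 3.4-only request options on this value.
}
```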
Example of a feature we might leverage: https://github.com/etcd-io/etcd/pull/9869
cc @wojtek-t
| sig/api-machinery,kind/feature,lifecycle/frozen | low | Minor |
438,896,452 | TypeScript | Namespaced ES6 classes are not recognized as types | <!-- Please search existing issues to avoid creating duplicates. -->
<!-- Also please test using the latest insiders build to make sure your issue has not already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- Use Help > Report Issue to prefill these. -->
- VSCode Version: 1.33.1
- OS Version: Linux x64
MWE:
jsconfig.json
```json
{
"compilerOptions": {
"target": "es2018",
"checkJs": true
},
"files": [
"mwe.js",
"mwe.d.ts"
]
}
```
mwe.js:
```js
class T {
constructor() {
this.x = 1.23;
}
}
var NS = {}
NS.T = class {
constructor() {
this.x = 1.23;
}
}
```
mwe.d.ts:
```typescript
function f1(x: T): void;
function f2(x: NS.T): void;
```
The problem: the type of parameter for `f2` is shown as `any` and no type checking is done.
Does this issue occur when all extensions are disabled?: Yes
| Bug | low | Minor |
438,898,496 | godot | Improve enum usage in Dictionary exports for C# | <!-- Please search existing issues for potential duplicates before filing yours:
https://github.com/godotengine/godot/issues?q=is%3Aissue
-->
**Godot version:**
<!-- Specify commit hash if non-official. -->
3.1.1
**OS/device including version:**
<!-- Specify GPU model and drivers if graphics-related. -->
Win10
**Issue description:**
Feature request:
I am trying to export a dictionary of `enum: float`. In the editor my enum shows up as an int, instead of a string. This makes it difficult to use. There is also no auto-complete available. If possible these would be great features to have.
**Steps to reproduce:**
```csharp
[Export]
public Godot.Collections.Dictionary Weights = new Godot.Collections.Dictionary() {
    { EnumName1, 0.5 }
};
```
Appears as the following in the editor

If you type the dictionary, it will correctly set the value as a float (otherwise initializing to 1 will cause it to be an int in the editor), but the enum is still not auto-translated to a string in the editor.
```csharp
[Export]
public Godot.Collections.Dictionary<ClassName.EnumNames, float> Weights = new Godot.Collections.Dictionary<ClassName.EnumNames, float> () {
    { EnumName1, 1 } // 1 will be a float in the editor, but EnumName1 will still show up as 11 and not offer auto-complete
};
```
Using a C# dictionary causes the exported var to not show up at all in the editor. | enhancement,topic:gdscript,topic:dotnet | low | Major |
438,902,614 | opencv | Positional Argument `outImg` Should Be Keyword Argument in cv2.drawMatches | ##### System information (version)
- OpenCV => 4.0.1
- Operating System / Platform => MacOS 10.14 Mojave
- Compiler => XCode 10.1
##### Detailed description
The docstring says
> drawMatches(...)
> drawMatches(img1, keypoints1, img2, keypoints2, matches1to2, outImg[, matchColor[, singlePointColor[, matchesMask[, flags]]]]) ->
>
> @param outImg Output image. Its content depends on the flags value defining what is drawn in the output image. See possible flags bit values below.
But it doesn't mention that passing `None` would actually eliminate the need to instantiate a placeholder image to draw on, which would otherwise be quite the task in itself - how would one even calculate the output size of a feature-matched pair of images with lines?
Either way, the **actual behavior** of the `outImg` positional argument is exactly what a Python **kwarg** is meant to do. Please implement this breaking change (breaking changes are necessary in software development).
[See this StackOverflow thread](https://stackoverflow.com/questions/31631352/typeerror-required-argument-outimg-pos-6-not-found/31631995) for the confusion this has caused.
##### Steps to reproduce
```python
import cv2
cv2.drawMatchesKnn(img1, kp1, img2, kp2, matches, None)
``` | category: python bindings,category: documentation | low | Critical |
438,903,054 | TypeScript | `paths` not used for auto import | <!-- 🚨 STOP 🚨 𝗦𝗧𝗢𝗣 🚨 𝑺𝑻𝑶𝑷 🚨
Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Even if you think you've found a *bug*, please read the FAQ first, especially the Common "Bugs" That Aren't Bugs section!
Please help us by doing the following steps before logging an issue:
* Search: https://github.com/Microsoft/TypeScript/search?type=Issues
* Read the FAQ: https://github.com/Microsoft/TypeScript/wiki/FAQ
Please fill in the *entire* template below.
-->
From https://github.com/Microsoft/vscode/issues/72931
<!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. -->
**TypeScript Version:** 3.5.0-dev.20190430
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:**
- paths
- auto import
**Repo**
1. clone https://github.com/ac566/typescript-auto-import-issue and open in vscode
1. In VS Code, set `"typescript.preferences.importModuleSpecifier": "non-relative"`
1. Open `src/app/app1/test.ts`
1. Delete the import for `TestType`
1. Try using a quick fix or auto import to add the import back
**Expected behavior:**
The import is added as `import { TestType } from "@myLib/example_2";`
This is expected since the tsconfig configures `paths` and the import of `@myLib/example_2` was previously working properly:
```
{
"compilerOptions": {
"target": "es5",
"module": "es2015",
"moduleResolution": "node",
"baseUrl": ".",
"paths": {
"@myLib/*": [
"./src/lib/src/*/_index.ts"
]
}
}
}
```
**Actual behavior:**
The import is added as `import { TestType } from "@myLib/example_2/_index";` | Bug,Domain: Quick Fixes | medium | Critical |
438,911,135 | go | cmd/go: go get -u behaves differently with and without GOPROXY when a module doesn't exist at head | <pre>
$ go version
go version devel +fbc6a97222 Mon Apr 29 19:54:30 2019 +0000 linux/amd64
</pre>
The problem exists with go1.12 as well.
### What did you do?
Without a proxy, I could run `go get -u` successfully. With a proxy, it fails with a cryptic error message.
<pre>
$ export GOPROXY=direct
$ mkdir scratch; cd scratch; go mod init example
$ go get github.com/spf13/viper
$ go get -u
go: finding golang.org/x/sys latest
go: finding golang.org/x/crypto latest
go: finding golang.org/x/tools latest
go: finding github.com/ugorji/go/codec latest
go: finding gopkg.in/check.v1 latest
go: finding golang.org/x/sync latest
go: finding golang.org/x/net latest
go: finding github.com/armon/consul-api latest
</pre>
<pre>
$ export GOPROXY=https://proxy.golang.org
$ mkdir scratch; cd scratch; go mod init example
$ go get github.com/spf13/viper
$ go get -u
go get: upgrading github.com/ugorji/go/[email protected]: reading https://proxy.golang.org/github.com/ugorji/go/codec/@v/list: 404 Not Found
</pre>
What happens underneath:
The viper module or its dependencies depend on github.com/ugorji/go/[email protected]. The `github.com/ugorji/go/codec` module does not exist in the origin at head (no go.mod file, no valid tag), but it probably existed at `@d75b2dc`. Thus, `go mod download github.com/ugorji/go/[email protected]` succeeds with and without GOPROXY.
But when trying to upgrade with GOPROXY=direct, the go command runs a sequence of git commands, detects that there is no tag and no go.mod file at head, and doesn't attempt to upgrade.
With GOPROXY=<proxy>, the go command queries the /@v/list endpoint to find available module versions. The proxy doesn't know about the named module (because the module doesn't exist according to what `go list -m -v` reports), so it answers with an HTTP error (404/410/500/...). The go command then treats this as a hard error.
I think in this case (the list query failure), the go command should not fail but proceed as if there is no newer version to upgrade (indeed, there is no version to upgrade to!) | NeedsInvestigation,GoCommand,modules | low | Critical |
438,915,211 | TypeScript | Object.getOwnPropertyDescriptors accepts undefined (lib.es2017.object.d.ts) | **TypeScript Version:** 3.4.5
**Search Terms:**
getOwnPropertyDescriptors
**Code**
```ts
Object.getOwnPropertyDescriptors(undefined);
```
**Expected behavior:**
Undefined is not accepted.
**Actual behavior:**
Undefined is accepted.
**Playground Link:**
https://www.typescriptlang.org/play/index.html#src=Object.getOwnPropertyDescriptors(undefined)%3B
However, I can't seem to enable es2017 in the Playground.
**Related Issues:** <!-- Did you find other bugs that looked similar? -->
n/a | Bug,Domain: lib.d.ts | low | Critical |
438,957,040 | TypeScript | Import Interfaces and/or definitions from URL's pointing to Servers (Not local to machine) | <!-- 🚨 STOP 🚨 𝗦𝗧𝗢𝗣 🚨 𝑺𝑻𝑶𝑷 🚨
Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Please read the FAQ first, especially the "Common Feature Requests" section.
-->
## Search Terms
<!-- List of keywords you searched for before creating this issue. Write them down here so that others can find this suggestion more easily -->
- import url
- import web address
- import from server
## Suggestion
<!-- A summary of what you'd like to see added or changed -->
Hello, my thought process behind this is allowing strictly typed APIs to be served so developers have an easier time developing and consuming APIs.
What I am suggesting probably has implications reaching far past my goal, such as importing modules, variables, or other things over the web, which would impact runtime or require downloading resources at compilation time. I do not wish to dive that deep, as my purpose is strictly the typing of objects and possibly TypeScript definition files that are only used during development; I will let any discussion delve further into those other topics.
APIs are often served at endpoints such as https://www.foo.com/_api/bar, and when we developers want to consume these APIs using TypeScript we have two options: declare our response object as `any`, or write an interface that maps what we need from the API so that IntelliSense can recommend, say, GetNumberOfLegs from our imported object Animal.dog.Get...
Would it be possible to allow the TypeScript compiler to evaluate a URL like [this one](https://gist.githubusercontent.com/Metroidaron/48b6bb2ea0cb7b8b2d0e81f82a62ffb6/raw/1479fc1271eddd1dfef12ccd9da028f546899fe1/BasicSampleObject.ts) (Gist URL below) during the development/linting phase and utilize the type at that endpoint? In my case this could be used for typing objects that appear in multiple projects by hosting files like this at a publicly available endpoint, or it could also allow API endpoints to provide an interface that our TypeScript compilers could read and utilize during development. If used for definition files, then the package-manager install of a separate package for third-party definitions of a library would also be rendered unnecessary.
I imagine we would have an import statement at the top of our file that, instead of pointing to a local path, would simply point to a web address.
import myObjectInterface from 'https://www.foo.com/_api/v1.3/interfaces/bar';
Non Raw Gist Url:
https://gist.github.com/Metroidaron/48b6bb2ea0cb7b8b2d0e81f82a62ffb6
## Use Cases
<!--
What do you want to use this for?
What shortcomings exist with current approaches?
-->
I want to use this to centralize my own object types that are relevant across multiple projects by creating public interfaces that all my work projects can point to. The current pitfalls are that, to accomplish synchronization and consistency across projects, I would need to either create and install an NPM package (or use some other package manager), use Git submodules to maintain separate Git repositories inside project repositories, or simply copy/paste files.
This would also allow APIs to provide interfaces to the developer, which would increase development speed because there would not be a need to translate an API into an interface for use in a project while maintaining full and accurate strict typing. If an API is strictly typed behind the scenes, I think it would likely be fairly straightforward to automate the generation of these files.
## Examples
<!-- Show how this would be used and what the behavior would be -->
Example 1
```typescript
import iBasicSampleObject from 'https://gist.githubusercontent.com/Metroidaron/48b6bb2ea0cb7b8b2d0e81f82a62ffb6/raw/1479fc1271eddd1dfef12ccd9da028f546899fe1/BasicSampleObject.ts';
import iEasierToReadExample from 'https://www.foo.com/_api/v1.3/interfaces/bar';
fetch('https://www.foo.com/_api/v1.3/bar', {}).then( httpResponse => {
httpResponse.json().then( (bar : iEaserToReadExample) => {
// Bar is now strictly typed based on what the API defines Bar's Type should be.
bar.getFunctionOne();
});
});
```
Example 2
```typescript
import iRemoteUserObject from 'https://www.CompanyWebsite.com/DevelopersStuff/iUserObject.ts';
const UserObject : iRemoteUserObject = {
name : 'Metroidaron',
ageBracket : 'old'
}
/**
* UserObject is strictly typed from an external source
* and can be imported into my companies external website, employee portal, desktop App, Mobile app, and more.
*/
UserObject.ageNumber
```
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback | low | Critical |
438,963,622 | terminal | Console window cannot be closed when in selection mode before WSL tty is ready | On Windows 10 1903 (build 18362.86), but it could be the same on other builds; I didn't test on earlier ones or 20H1.
Launching a WSL distribution opens a console window and connects it as a tty* to the instance.
If an edit selection is marked before the client process gets connected, for example by click and drag over the window client area right when it appears in QuickEdit Mode, before any output gets displayed, the output is paused until the selection is cancelled or copied.
In that state, trying to close the console window using the [X] button does not close the console, and instead seems to queue the WM_CLOSE until the output is resumed by cancelling or copying the selection.
This means if a user triggers a selection before the shell gets displayed, the close button seems ineffective until they first resolve the selection issue, at which point the window gets closed even if the user changed their mind. In QuickEdit Mode, if a selection is triggered by mistake, it might not be obvious to the user, and pressing the close button several times has no effect.
In the state shown in the screenshot, the close button does not work :

This only seems to happen with WSL distributions, probably because Win32 CUI processes are always started before a console gets created or attached to their process, making it always ready to process WM_CLOSE, while in WSL the console gets created before the shell process gets started and connected to it.
Current behavior: Pressing the close button does nothing while in selection mode, and when selection is over, the window gets closed even if user doesn't press the close button again.
Expected behavior: Pressing the close button should always be a reliable way to close a console window.
Instead of an ineffective close button that only reacts asynchronously once the selection is over, pressing the close button (or processing a WM_CLOSE sent by another process to the console window's message queue) should automatically cancel any selection underway, similar to how the selection gets cancelled if a keypress gets processed.
This wouldn't change the behavior for Win32 CUI processes, and would resolve the issue for WSL instances.
| Product-Conhost,Area-Interop,Issue-Bug | low | Minor |
438,989,673 | flutter | iOS Add2App profile mode can't attach/find observatory | `flutter run --profile` on an iPhone allows you to attach and use the observatory in a normal way.
However, running in a profile mode add2app scenario does not. | platform-ios,tool,a: existing-apps,a: debugging,customer: amplify,P3,team-ios,triaged-ios | medium | Major |
438,991,087 | godot | GIProbe based GI is wrong for intersecting or overlapping geometry | **Godot version:**
Godot master (#25670)
**OS/device including version:**
Windows 10
**Issue description:**
Voxel based GI (with GIProbe) is not computed correctly for intersecting or overlapping geometry.


**Minimal reproduction project:**
[test0 - Kopie.zip](https://github.com/godotengine/godot/files/3133225/test0.-.Kopie.zip) | bug,topic:rendering | low | Critical |
438,997,880 | flutter | rename the `--observatory-port` cli arg of `flutter run` | `flutter run` supports an `--observatory-port` cli arg. "Observatory" is the name of an app which uses the service protocol. We should rename the flag to something that better describes what it does.
`flutter run --service-protocol-port` would be more correct. `flutter run --debug-port` might be a bit more meaningful for the user (and shorter to type).
| tool,c: API break,P3,team-tool,triaged-tool | low | Critical |
439,001,455 | pytorch | [JIT] traced model with optimization shows no performance improvement | ## 🐛 Bug
<!-- traced model with optimization shows no performance improvement -->
Using torch.jit.trace with optimize=True shows no performance difference with optimize=False
The test model I used is resnet from torchvision. I modified it to run only the features extraction (no ave pooling and fc for classification).
Inference test.py python script:
```
""" Pytorch inference script """
from __future__ import print_function
from __future__ import absolute_import
from __future__ import division
import argparse
import timeit
import numpy as np
import torch
import torch.nn as nn
import torch.backends.cudnn as cudnn
# Select appropriate model for test
import resnet
def timeGraph(model, batch_size, num_loops):
# Create random input tensor of certain size
input = torch.rand(batch_size, 3, 1200, 1920, dtype=torch.float).cuda()
print("Warm up ...")
with torch.no_grad():
for _ in range(20):
model(input)
print("Start timing ...")
timings = []
with torch.no_grad():
for i in range(num_loops):
start_time = timeit.default_timer()
features = model(input)
end_time = timeit.default_timer()
timings.append(end_time - start_time)
print("Iteration {}: {:.6f} s".format(i, end_time - start_time))
print("Output features size:", features.size())
return timings
def printStats(graphName,timings,batch_size):
times = np.array(timings)
steps = len(times)
speeds = batch_size / times
time_mean = np.mean(times)
time_med = np.median(times)
time_99th = np.percentile(times, 99)
time_std = np.std(times, ddof=0)
speed_mean = np.mean(speeds)
speed_med = np.median(speeds)
msg = ("\n%s =================================\n"
"batch size=%d, num iterations=%d\n"
" Median FPS: %.1f, mean: %.1f\n"
" Median latency: %.6f, mean: %.6f, 99th_p: %.6f, std_dev: %.6f\n"
) % (graphName,
batch_size, steps,
speed_med, speed_mean,
time_med, time_mean, time_99th, time_std)
print(msg)
if __name__ == '__main__':
parser = argparse.ArgumentParser(description="Run inference on a model with random input values")
parser.add_argument('--gpu', default=None, type=int, help='GPU id to use.')
parser.add_argument("--batch_size", type=int, default=1, help="Batch size (default=1)")
parser.add_argument('--optimize', action='store_true', help='Turn on optimization for traced model')
parser.add_argument("--iter", default=10, type=int, help="Number of iteration loops")
args = parser.parse_args()
# Creating model with random weights
model = resnet.resnet50()
print("Tracing model... Optimization=", args.optimize)
example_input = torch.rand(args.batch_size, 3, 1200, 1920, dtype=torch.float)
traced_model = torch.jit.trace(model, example_input,
check_trace=True,
check_tolerance=1e-05,
optimize=args.optimize,
)
# Save the script module
# traced_model.save("model_traced.pt")
# Create graph on GPU if CUDA is available
if args.gpu is not None:
if torch.cuda.is_available():
# Enable CuDNN autotune for better performance (with fixed inputs)
cudnn.benchmark = True
traced_model = traced_model.cuda(args.gpu)
else:
raise Exception("No cuda available.")
dev = torch.cuda.current_device()
print("Cuda device id, count=", dev, torch.cuda.device_count())
print("Cuda DNN version=", cudnn.version())
print("Cuda compute capability=", torch.cuda.get_device_capability(dev))
print("Cuda device name=", torch.cuda.get_device_name(dev))
# Timing graph inference
timings = timeGraph(traced_model, args.batch_size, args.iter)
printStats("resnet", timings, args.batch_size)
```
Modified resnet.py from torchvision
```
import torch.nn as nn
import math
import torch.utils.model_zoo as model_zoo
__all__ = ['ResNet', 'resnet18', 'resnet34', 'resnet50', 'resnet101',
'resnet152']
model_urls = {
'resnet18': 'https://download.pytorch.org/models/resnet18-5c106cde.pth',
'resnet34': 'https://download.pytorch.org/models/resnet34-333f7ec4.pth',
'resnet50': 'https://download.pytorch.org/models/resnet50-19c8e357.pth',
'resnet101': 'https://download.pytorch.org/models/resnet101-5d3b4d8f.pth',
'resnet152': 'https://download.pytorch.org/models/resnet152-b121ed2d.pth',
}
def conv3x3(in_planes, out_planes, stride=1):
"""3x3 convolution with padding"""
return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
padding=1, bias=False)
class BasicBlock(nn.Module):
expansion = 1
def __init__(self, inplanes, planes, stride=1, downsample=None):
super(BasicBlock, self).__init__()
self.conv1 = conv3x3(inplanes, planes, stride)
self.bn1 = nn.BatchNorm2d(planes)
self.relu = nn.ReLU(inplace=True)
self.conv2 = conv3x3(planes, planes)
self.bn2 = nn.BatchNorm2d(planes)
self.downsample = downsample
self.stride = stride
def forward(self, x):
residual = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
if self.downsample is not None:
residual = self.downsample(x)
out += residual
out = self.relu(out)
return out
class Bottleneck(nn.Module):
expansion = 4
def __init__(self, inplanes, planes, stride=1, downsample=None):
super(Bottleneck, self).__init__()
self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
self.bn1 = nn.BatchNorm2d(planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,
padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(planes)
self.conv3 = nn.Conv2d(planes, planes * self.expansion, kernel_size=1, bias=False)
self.bn3 = nn.BatchNorm2d(planes * self.expansion)
self.relu = nn.ReLU(inplace=True)
self.downsample = downsample
self.stride = stride
def forward(self, x):
residual = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.relu(out)
out = self.conv3(out)
out = self.bn3(out)
if self.downsample is not None:
residual = self.downsample(x)
out += residual
out = self.relu(out)
return out
class ResNet(nn.Module):
def __init__(self, block, layers, num_classes=1000):
self.inplanes = 64
super(ResNet, self).__init__()
self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3,
bias=False)
self.bn1 = nn.BatchNorm2d(64)
self.relu = nn.ReLU(inplace=True)
self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
self.layer1 = self._make_layer(block, 64, layers[0])
self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
self.avgpool = nn.AvgPool2d(7, stride=1)
self.fc = nn.Linear(512 * block.expansion, num_classes)
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
elif isinstance(m, nn.BatchNorm2d):
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)
def _make_layer(self, block, planes, blocks, stride=1):
downsample = None
if stride != 1 or self.inplanes != planes * block.expansion:
downsample = nn.Sequential(
nn.Conv2d(self.inplanes, planes * block.expansion,
kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(planes * block.expansion),
)
layers = []
layers.append(block(self.inplanes, planes, stride, downsample))
self.inplanes = planes * block.expansion
for i in range(1, blocks):
layers.append(block(self.inplanes, planes))
return nn.Sequential(*layers)
def forward(self, x):
x = self.conv1(x)
x = self.bn1(x)
x = self.relu(x)
x = self.maxpool(x)
x = self.layer1(x)
x = self.layer2(x)
x = self.layer3(x)
x = self.layer4(x)
#x = self.avgpool(x)
#x = x.view(x.size(0), -1)
#x = self.fc(x)
return x
def resnet50(pretrained=False, **kwargs):
"""Constructs a ResNet-50 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = ResNet(Bottleneck, [3, 4, 6, 3], **kwargs)
if pretrained:
model.load_state_dict(model_zoo.load_url(model_urls['resnet50']))
return model
```
## To Reproduce
Steps to reproduce the behavior:
Run test.py with GPU:
> python test.py --gpu 0 --iter 100
Run test.py with GPU and trace optimize:
> python test.py --gpu 0 --optimize --iter 100
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
## Expected behavior
<!-- Sample output -->
```
Tracing model... Optimization= True
Cuda device id, count= 0 1
Cuda DNN version= 7401
Cuda compute capability= (6, 1)
Cuda device name= GeForce GTX 1080
Warm up ...
Start timing ...
Iteration 0: 0.133147 s
Iteration 1: 0.137695 s
Iteration 2: 0.132463 s
Iteration 3: 0.132877 s
Iteration 4: 0.132633 s
Iteration 5: 0.137405 s
Iteration 6: 0.134528 s
Iteration 7: 0.133907 s
Iteration 8: 0.134656 s
Iteration 9: 0.133537 s
Output features size: (1, 2048, 38, 60)
resnet =================================
batch size=1, num iterations=10
Median FPS: 7.5, mean: 7.4
Median latency: 0.133722, mean: 0.134285, 99th_p: 0.137669, std_dev: 0.001777
```
## Environment
- PyTorch Version (e.g., 1.0): 1.1.0a0
- OS (e.g., Linux): Ubuntu 14.04 / 16.04
- How you installed PyTorch (`conda`, `pip`, source): pip/source
- Build command you used (if compiling from source):
- Python version:2.7 / 3.5
- CUDA/cuDNN version: 10.0 / 7.4
- GPU models and configuration: Nvidia GTX1080
- Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
I also tried to run the model without jit.trace, and there seems to be little change in performance there either.
cc @suo | oncall: jit,triaged | low | Critical |
439,012,983 | flutter | Output of "flutter channel beta" when there's a hot fix pending is disturbing | I downloaded v1.4.9-hotfix.1-beta and tried to do `flutter channel beta` and the output was concerning:
```
ianh@ianh:~/dev/test-install$ tar xf flutter_linux_v1.4.9-hotfix.1-beta.tar.xz
ianh@ianh:~/dev/test-install$ cd flutter/
ianh@ianh:~/dev/test-install/flutter$ bin/flutter channel beta
╔════════════════════════════════════════════════════════════════════════════╗
║ A new version of Flutter is available! ║
║ ║
║ To update to the latest version, run "flutter upgrade". ║
╚════════════════════════════════════════════════════════════════════════════╝
Switching to flutter channel 'beta'...
git: From https://github.com/flutter/flutter
git: + 88fa7ea40...09cbc34a0 beta -> origin/beta (forced update)
git: 6c7b6833c..0ba67226e dev -> origin/dev
git: 6c7b6833c..514fb2c7c master -> origin/master
git: * [new branch] revert-30339-tap-buttons -> origin/revert-30339-tap-buttons
git: e9dd13d1a..5ae6952c0 revert-30873-revert-30414-remove-hover-pressure -> origin/revert-30873-revert-30414-remove-hover-pressure
git: * [new branch] revert-30951-roll_branch -> origin/revert-30951-roll_branch
git: * [new branch] revert-30991-caretheight -> origin/revert-30991-caretheight
git: * [new branch] revert-30995-revert_engine -> origin/revert-30995-revert_engine
git: * [new branch] revert-31736-roll_build -> origin/revert-31736-roll_build
git: * [new branch] revert-31801-revert-30339-tap-buttons -> origin/revert-31801-revert-30339-tap-buttons
git: * [new branch] v1.5.4-hotfixes -> origin/v1.5.4-hotfixes
git: * [new tag] v1.5.0 -> v1.5.0
git: * [new tag] v1.5.1 -> v1.5.1
git: * [new tag] v1.5.2 -> v1.5.2
git: * [new tag] v1.5.3 -> v1.5.3
git: * [new tag] v1.5.4 -> v1.5.4
git: * [new tag] v1.5.4-hotfix.1 -> v1.5.4-hotfix.1
git: * [new tag] v1.5.8 -> v1.5.8
git: * [new tag] v1.5.5 -> v1.5.5
git: * [new tag] v1.5.6 -> v1.5.6
git: * [new tag] v1.5.7 -> v1.5.7
git: Already on 'beta'
git: Your branch and 'origin/beta' have diverged,
git: and have 1 and 173 different commits each, respectively.
git: (use "git pull" to merge the remote branch into yours)
ianh@ianh:~/dev/test-install/flutter$
```
As a user I don't know what to make of all this nonsense.
cc @jonahwilliams | tool,a: first hour,P2,team-tool,triaged-tool | low | Minor |
439,024,936 | godot | String parsing behavior is inconsistent between float() and int() in gdscript | This probably can't be fixed without breaking compatibility, but I've noticed that the functions `int` and `float` behave subtly differently in strange ways:
```
int("456") # yields 456
int("asdfjaldsgja;lsdgj456") # yields 456
int("dfkj4dkfjj5dkjf6dfkjdk") # yields 456
int("4.56") # yields 4
int("4kgjsdjkfhglskd.b56") # yields 4
int("4.5.6") # yields 4
int("1e3") # yields 13
```
Essentially, int seems to do a left-right parse ignoring any character except [0-9] and ".". After it hits the first "." it will terminate.
Compare this with float
```
float("456") # yields 456
float("asdfjaldsgja;lsdgj456") # yields 0
float("dfkj4dkfjj5dkjf6dfkjdk") # yields 0
float("4.56") # yields 4.56
float("4kgjsdjkfhglskd.b56") # yields 4
float("4.5.6") # yields 4.5
float("1e3") # yields 1000
```
The float parser appears to abort the second it sees a non [0-9] or "." character except `e`. I expect this is due to the handling of exponentiation. It'll also terminate upon hitting a second decimal point, e, or a "." occurring anywhere after an "e".
You probably shouldn't have arbitrary text in the strings you're trying to convert to numbers anyway, but I think this should at least be documented in case someone hits it. To me, the most dangerous one is the difference between `int("1e3")` and `float("1e3")` which is a difference that could theoretically occur even in seemingly well-formed parse strings. | bug,discussion,topic:core,breaks compat | low | Minor |
439,058,559 | vscode | Add a button to open Find All References from Peek References | Issue Type: <b>Feature Request</b>
While I typically like to use Peek References (and want to keep it as the default for code lens), once it opens depending on the number of items, I'll then often want to "upgrade" the peek to a full Find All References, but it is not convenient to do so. It would be great if there were a button somewhere like below to run Find All References once the peek is open:

VS Code version: Code - Insiders 1.34.0-insider (473af338e1bd9ad4d9853933da1cd9d5d9e07dc9, 2019-05-01T00:22:05.899Z)
OS version: Windows_NT x64 10.0.18362
<!-- generated by issue reporter --> | feature-request,editor-symbols,references-viewlet | low | Minor |
439,119,833 | storybook | Addon-docs: support interactions with existing addons | From @trevoreyre :
> Any thoughts on how addon-docs would interact with other addons? I notice right now when you switch to the docs tab, the addons panel goes away, as well as any other tabs you have enabled through other addons. I think it would be useful to still be able to interact with other addons, like seeing actions as you're clicking through things in the docs, like you would in a story, or changing themes or context variables and seeing all of the stories in the docs update.
It's tricky because all the addons assume that there is only one story currently visible, and in docs there are potentially many. We have a proposal for "knobs v2" to address this for knobs, but nothing planned to address it in general. How we deal with it generally is open for discussion! | discussion,addon: docs | medium | Critical |
439,137,174 | pytorch | install error from source | Hello, here is the output. I don't know how to deal with it.
```
[1/1460] Building CXX object caffe2/CMakeFiles/caffe2.dir/contrib/aten/aten_op.cc.o
FAILED: caffe2/CMakeFiles/caffe2.dir/contrib/aten/aten_op.cc.o
/data1/NLPRMNT/public/gcc485/bin/c++ -DCPUINFO_SUPPORTED_PLATFORM=1 -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DNNP_CONVOLUTION_ONLY=0 -DNNP_INFERENCE_ONLY=0 -DONNX_NAMESPACE=onnx_torch -DTH_BLAS_MKL -DUSE_GCC_ATOMICS=1 -D_FILE_OFFSET_BITS=64 -Dcaffe2_EXPORTS -I../aten/src -I. -I../ -isystem third_party/gloo -isystem ../cmake/../third_party/gloo -isystem ../cmake/../third_party/googletest/googlemock/include -isystem ../cmake/../third_party/googletest/googletest/include -I../third_party/protobuf/src -isystem /data1/NLPRMNT/liguanjun/bliu/opt/anaconda3/include -isystem ../third_party/gemmlowp -isystem ../third_party/neon2sse -I../cmake/../third_party/benchmark/include -isystem ../cmake/../third_party/eigen -isystem /data1/NLPRMNT/liguanjun/bliu/opt/anaconda3/include/python3.6m -isystem /data1/NLPRMNT/liguanjun/bliu/opt/anaconda3/lib/python3.6/site-packages/numpy/core/include -isystem ../cmake/../third_party/pybind11/include -isystem /opt/openmpi/include -isystem ../cmake/../third_party/cub -Icaffe2/contrib/aten -I../third_party/onnx -Ithird_party/onnx -I../third_party/foxi -Ithird_party/foxi -isystem ../third_party/ideep/mkl-dnn/include -isystem ../third_party/ideep/include -Icaffe2/aten/src/TH -I../aten/src/TH -Icaffe2/aten/src -Iaten/src -I../aten/../third_party/catch/single_include -I../aten/src/ATen/.. -Icaffe2/aten/src/ATen -I../caffe2/core/nomnigraph/include -I../third_party/miniz-2.0.8 -isystem include -I/data1/NLPRMNT/liguanjun/software/cuda9.0/include -I../c10/.. -Ithird_party/ideep/mkl-dnn/include -I../third_party/ideep/mkl-dnn/src/../include -I../third_party/QNNPACK/include -I../third_party/pthreadpool/include -I../third_party/NNPACK/include -I../third_party/cpuinfo/include -I../third_party/FP16/include -fvisibility-inlines-hidden -fopenmp -O2 -fPIC -Wno-narrowing -Wall -Wextra -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -DHAVE_AVX_CPU_DEFINITION -O3 -fPIC -DCAFFE2_USE_GLOO -DCUDA_HAS_FP16=1 -DHAVE_GCC_GET_CPUID -DUSE_AVX -DTH_HAVE_THREAD -fvisibility=hidden -DCAFFE2_BUILD_MAIN_LIB -O2 -pthread -std=gnu++11 -MMD -MT caffe2/CMakeFiles/caffe2.dir/contrib/aten/aten_op.cc.o -MF caffe2/CMakeFiles/caffe2.dir/contrib/aten/aten_op.cc.o.d -o caffe2/CMakeFiles/caffe2.dir/contrib/aten/aten_op.cc.o -c ../caffe2/contrib/aten/aten_op.cc
In file included from ../caffe2/contrib/aten/aten_op.cc:1:0:
../caffe2/contrib/aten/aten_op.h:1:52: fatal error: caffe2/caffe2/contrib/aten/gen_aten_op.h: No such file or directory
#include "caffe2/caffe2/contrib/aten/gen_aten_op.h"
^
compilation terminated.
[18/1460] Building CXX object caffe2/CMakeFiles/caffe2.dir/operators/conv_op_eigen.cc.o
ninja: build stopped: subcommand failed.
```
| proposal accepted,module: internals,triaged | low | Critical |
439,141,218 | pytorch | RuntimeError: invalid argument 10: ldb should be at least max(1, 0), but have 0 at ../aten/src/TH/generic/THBlas.cpp:36 | ## 🐛 Bug
Runtime error when running:
```
loss = loss_func(self.alpha, self.lam, f_loss, ds, x_hat_p, x_hat_n)
optim.zero_grad()
loss.backward() # <- Error here
```
Error Message:
```
Traceback (most recent call last):
File "/Applications/PyCharm.app/Contents/helpers/pydev/pydevd.py", line 1741, in <module>
main()
File "/Applications/PyCharm.app/Contents/helpers/pydev/pydevd.py", line 1735, in main
globals = debugger.run(setup['file'], None, None, is_module)
File "/Applications/PyCharm.app/Contents/helpers/pydev/pydevd.py", line 1135, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "/Applications/PyCharm.app/Contents/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 20, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/Users/zayd/repos/uo/double_decoder/driver.py", line 104, in <module>
_main(_args, _pos_db, _unlabel_db, _enc_dim, [_args.pos_label])
File "/Users/zayd/repos/uo/double_decoder/driver.py", line 62, in _main
deep_pu.train_pu(p_db=pos_db, u_db=u_db, f_loss=DeepPU.simple_loss, quiet_mode=args.q)
File "/Users/zayd/repos/uo/double_decoder/deep_pu.py", line 473, in train_pu
loss.backward()
File "/Users/zayd/.pyenv/versions/3.7.1/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/tensor.py", line 107, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/Users/zayd/.pyenv/versions/3.7.1/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/autograd/__init__.py", line 93, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: invalid argument 10: ldb should be at least max(1, 0), but have 0 at ../aten/src/TH/generic/THBlas.cpp:365
```
## To Reproduce
I do not get this in all cases. I get it on an autoencoder with two decoders. I can dig into it more and try to make a streamlined MWE if the devs are unsure what could cause this.
It seems to happen when a linear block has an output dimension of zero. Its output was being concatenated with another vector, so the other shape checks in my code did not catch it.
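For reference, a minimal sketch of the kind of setup I mean (my reconstruction, not the actual model; the layer names and sizes are made up):
```
import torch
import torch.nn as nn

x = torch.randn(4, 8)
dead_branch = nn.Linear(8, 0)   # output dimension of zero
live_branch = nn.Linear(8, 16)

# Concatenating hides the empty branch from simple output-shape checks.
out = torch.cat([dead_branch(x), live_branch(x)], dim=1)  # shape (4, 16)
out.sum().backward()  # reportedly raises the THBlas "ldb" error here on CPU in torch 1.0/1.1
```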
## Expected behavior
I think the error message could be clearer. When I saw this error, it was very unclear what could cause it. I expect the better solution would be to prevent linear blocks from being created with an output dimension of zero.
## Environment
Collecting environment information...
PyTorch version: 1.1.0
Is debug build: No
CUDA used to build PyTorch: None
OS: Mac OSX 10.14.4
GCC version: Could not collect
CMake version: version 3.13.2
Python version: 3.7
Is CUDA available: No
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Versions of relevant libraries:
[pip3] msgpack-numpy==0.4.3.2
[pip3] numpy==1.16.1
[pip3] torch==1.1.0
[pip3] torchfile==0.1.0
[pip3] torchnet==0.0.4
[pip3] torchtext==0.3.1
[pip3] torchvision==0.2.1
[conda] Could not collect
## Additional context
Confirmed this error can also occur in torch 1.0.0
| module: internals,triaged | low | Critical |
439,189,326 | go | encoding/gob: decoding fails for structs with anonymous pointer fields that implement GobDecoder interface | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.12.1 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/kuppas/Library/Caches/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/kuppas/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/Cellar/go/1.12.1/libexec"
GOTMPDIR=""
GOTOOLDIR="/usr/local/Cellar/go/1.12.1/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/1f/s_1g3pg54b123lds0c08xfmw0000gn/T/go-build807164800=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>
### What did you do?
I am using the gob encoder/decoder to serialize and deserialize a struct into a byte array. I ran into an issue when `big.Rat` is embedded into another struct. I am able to successfully encode and decode the `big.Rat` type directly without any issue. But when `big.Rat` is embedded into another struct, I am able to encode successfully, yet during the decoding process it throws **panic: runtime error: invalid memory address or nil pointer dereference**.
BTW, to work around this issue, I implemented GobEncoder/GobDecoder for the custom struct.
<!--
If possible, provide a recipe for reproducing the error.
A complete runnable program is good.
A link on play.golang.org is best.
-->
Link to go playground https://play.golang.org/p/98PzJDqm0QL
The code below reproduces the issue; uncommenting the GobEncode/GobDecode methods makes the example run successfully.
Code sample:
```
package main
import (
"bytes"
"encoding/gob"
"fmt"
"math/big"
)
func main() {
fmt.Println("Testing Rat")
rat1 := big.NewRat(10, 5)
// The Rat works gob encode and decode
rat1Bytes, err := Serialize(rat1)
if err != nil {
fmt.Printf("Error on serializing rat1: %v", err)
}
rat2 := &big.Rat{}
err = Deserialize(rat1Bytes, rat2)
if err != nil {
fmt.Printf("Error on deserializing rat1: %v", err)
}
if rat2.String() == rat1.String() {
fmt.Println("rat1 == rat2")
}
fmt.Println("Testing CustomRat")
// CustomRat encodes fine, but decoding fails due to the nil embedded pointer
// Uncomment GobEncode and GobDecode below to make both encode and decode work
customRat1 := &CustomRat{big.NewRat(10, 5)}
customRat1Bytes, err := Serialize(customRat1)
if err != nil {
fmt.Printf("Error on serializing rat1: %v", err)
}
customRat2 := &CustomRat{}
err = Deserialize(customRat1Bytes, customRat2)
if err != nil {
fmt.Printf("Error on deserializing rat1: %v", err)
}
if customRat2.String() == customRat1.String() {
fmt.Println("customRat1 == customRat2")
}
}
func Serialize(val interface{}) ([]byte, error) {
b := new(bytes.Buffer)
if err := gob.NewEncoder(b).Encode(val); err != nil {
return nil, err
}
return b.Bytes(), nil
}
func Deserialize(data []byte, result interface{}) error {
return gob.NewDecoder(bytes.NewBuffer(data)).Decode(result)
}
type CustomRat struct {
*big.Rat
}
// Uncomment the below GobDecode and GobEncode for CustomRat to work.
//func (cr *CustomRat) GobDecode(data []byte) error {
// cr.Rat = &big.Rat{}
// return cr.Rat.GobDecode(data)
//}
//
//func (cr *CustomRat) GobEncode() ([]byte, error) {
// return cr.Rat.GobEncode()
//}
```
### What did you expect to see?
I expect to see successful encoding/decoding of `big.Rat` whether it is embedded into another struct or not.
### What did you see instead?
The `big.Rat` decoding fails when it is embedded into another struct. Please see the example code above.
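My guess at the mechanism (an assumption on my part, not verified against the gob internals): embedding `*big.Rat` promotes its `GobDecode` method, so gob takes the `GobDecoder` path, but the promoted call runs on the nil embedded pointer:
```
package main

import "math/big"

// Same shape as CustomRat above: an anonymous *big.Rat field.
type CustomRat struct {
	*big.Rat
}

func main() {
	var cr CustomRat      // the embedded *big.Rat is nil
	_ = cr.GobDecode(nil) // promoted to (*big.Rat)(nil).GobDecode, which panics with a nil pointer dereference
}
```
If that is what happens, the manual GobDecode workaround above effectively just allocates the embedded pointer before delegating, which matches why it fixes the crash.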
| NeedsInvestigation | low | Critical |
439,191,705 | flutter | Animate Snackbar (SnackBarBehaviour.floating) with FAB as it does with SnackBarBehaviour.Fixed | The new behaviour with the floating snackbar works really well except when there is a FAB in the scaffold: the FAB stays in the same position and the snackbar floats above the FAB, instead of extra padding being added to the bottom of the FAB with the snackbar below it.
Is this actually per the Material Design guidelines, or does it need to be updated? It would look a lot better if the FAB's bottom padding increased! | c: new feature,framework,f: material design,c: proposal,P3,team-design,triaged-design | low | Major |
439,232,888 | flutter | Check whether the device can display a specific emoji char | When placing an emoji's unicode code point inside a `Text` widget, I noticed that some Android devices cannot display certain emojis (besides the emulators, I've seen it on an Asus Zenfone Go Live).
There should be a way to detect if an emoji cannot be displayed, so developers can work around it. | c: new feature,framework,dependency: dart,a: typography,customer: crowd,P3,team-framework,triaged-framework | low | Minor |