Dataset columns:
- id — int64, range 393k to 2.82B
- repo — string, 68 distinct values
- title — string, length 1 to 936
- body — string, length 0 to 256k
- labels — string, length 2 to 508
- priority — string, 3 distinct values
- severity — string, 3 distinct values
318,939,946
angular
Service Worker: Hash Mismatch
I'd like to use this to start a discussion based upon a topic I talked about with @gkalpak at ng-conf 2018. As mentioned in #21288, the service worker implementation checks the hashes of the cached files and, if they don't match, it just serves the data from the network (which is like having no service worker). The problem is that every proxy on the web can manipulate the downloaded files. This is especially the case for mobile providers, which minify and inline a lot of content on the fly to save bandwidth. But tools like live-server also change the index.html (and so they can be used to reproduce this issue). As one of the big use cases for PWAs is areas with low bandwidth, this is somewhat conflicting. Perhaps one way to solve this is to provide an exchangeable HashCheckStrategy. In cases where I as the programmer want to take the responsibility for not checking the hashes in order to bypass this issue, I could write something like this:

```
export class BruceWillisHashCheckStrategy implements HashCheckStrategy {
  check(currentHash, expectedHash, fileName) {
    return true;
  }
}
```
feature,area: service-worker,feature: under consideration
high
Critical
318,942,212
flutter
provide terse, actionable, flutter doctor output
As a follow-up to https://github.com/flutter/flutter/pull/14173#issuecomment-384990179, I think we should:

- unify the `flutter doctor` and `flutter doctor -v` output, and
- remove the `-v` option

We see issues coming in without the `-v` text, and it would be good to have the default output of the tool be the one that is useful for diagnosing problems and attaching to issues. I believe the motivation for creating the summary version of the output was that the default version became too long. If so, we should just look to condense the longer version and make sure that it doesn't produce more than a page of output on a typical terminal window. /cc @jcollins-g
tool,t: flutter doctor,P2,team-tool,triaged-tool
low
Major
318,972,537
go
cmd/go: support ldflags -X with gccgo
Please answer these questions before submitting your issue. Thanks!

### What version of Go are you using (`go version`)?

1.10

### Does this issue reproduce with the latest release?

Yes.

### What operating system and processor architecture are you using (`go env`)?

```
GOARCH="amd64"
GOBIN=""
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/dma2/Code/go"
GORACE=""
GOROOT="/home/dma2/Code/go/src/github.com/cloudflare/tls-tris/_dev/go1.10"
GOTOOLDIR="/usr/lib/gcc/x86_64-pc-linux-gnu/7.3.1"
GCCGO="/usr/bin/gccgo"
CC="/usr/bin/gcc"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build074293983=/tmp/go-build -gno-record-gcc-switches"
CXX="/usr/bin/g++"
CGO_ENABLED="1"
PKG_CONFIG="pkg-config"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
```

### What did you do?

This works:

```
go build
go build -compiler gccgo
```

This doesn't:

```
go build -compiler gccgo -gccgoflags "-X main.gbBuildTime=$(date +'%Y.%m.%d.%H%M%S') -X main.gbCommitHash=$(git log --pretty=format:'%h' -n 1) -X main.gbGitVersionTag=$(git describe) -X main.gbMfwlibGitVersionTag=$(git --git-dir ../mfwlib/.git describe)"
```

### What did you expect to see?

No errors. I wish to set some Go variables using `-gccgoflags` as I would with `-ldflags`. I would also like more documentation on how to set Go variables when using gccgo and `-gccgoflags` rather than gc and `-ldflags`.

### What did you see instead?

```
# github.com/go-sql-driver/mysql
gccgo: error: main.gbBuildTime=2018.04.30.102007: No such file or directory
gccgo: error: main.gbCommitHash=770fe0e: No such file or directory
gccgo: error: main.gbGitVersionTag=v1.14: No such file or directory
gccgo: error: main.gbMfwlibGitVersionTag=v1.55: No such file or directory
gccgo: error: unrecognized command line option ‘-X’
gccgo: error: unrecognized command line option ‘-X’
gccgo: error: unrecognized command line option ‘-X’
gccgo: error: unrecognized command line option ‘-X’
```
NeedsFix
low
Critical
318,975,055
rust
Higher-ranked trait bounds on associated types are not elaborated
This code works:

```rust
trait Boring {}

trait Usage
where
    Self::Thing: Boring,
{
    type Thing;
}

fn value_example<T: Usage>(t: T::Thing) {
    use_the_trait(t);
}

fn use_the_trait<T: Boring>(_: T) {}

fn main() {}
```

While this code fails:

```rust
trait Boring {}

trait Usage
where
    for<'a> &'a Self::Thing: Boring,
{
    type Thing;
}

fn ref_example<T: Usage>(t: T::Thing) {
    use_the_trait(&t);
}

fn use_the_trait<T: Boring>(_: T) {}

fn main() {}
```

```
error[E0277]: the trait bound `for<'a> &'a <T as Usage>::Thing: Boring` is not satisfied
  --> src/main.rs:10:1
   |
10 | / fn ref_example<T: Usage>(t: T::Thing) {
11 | |     use_the_trait(&t);
12 | | }
   | |_^ the trait `for<'a> Boring` is not implemented for `&'a <T as Usage>::Thing`
   |
note: required by `Usage`
  --> src/main.rs:3:1
   |
 3 | / trait Usage
 4 | | where
 5 | |     for<'a> &'a Self::Thing: Boring,
 6 | | {
 7 | |     type Thing;
 8 | | }
   | |_^
```

This is probably related to #20671, but seems slightly different from the examples posited there. Specifically, the "standard" case (the first example) does work. Also highly relevant: #44656 and #32722. Tested with Rust 1.25.0 and 1.27.0-nightly (2018-04-29 79252ff4e25d82f9fe85)
C-enhancement,A-trait-system,A-associated-items,T-types,A-higher-ranked
low
Critical
318,990,130
go
spec: document cycle restrictions for alias type declarations
Per the original type-alias design doc from @rsc, it must be possible to "expand out" type alias declarations; this wouldn't be possible for `type T = *T` without expanding endlessly. The spec doesn't say anything in this regard; and is generally vague or silent on the subject of cycles.
Documentation,NeedsFix
low
Major
319,023,868
flutter
A cold-start "flutter test" takes multiple seconds
Running "flutter test" used to be pretty fast (a few hundred milliseconds until the test runs — not great, but workable). Now though it takes entire seconds to start up the test. This makes iterating on a test an unbearably slow process. cc @mraleph @aam @a-siva @tvolkert
a: tests,tool,dependency: dart,P2,team-tool,triaged-tool
low
Major
319,026,196
go
cmd/compile: elide useless type assertion
Consider the following:

```go
type CustomUnmarshaler interface {
	CustomUnmarshal([]byte) error
}

type MyMessage struct{}

func (m *MyMessage) Unmarshal(b []byte) error {
	if u, ok := (interface{})(m).(CustomUnmarshaler); ok {
		return u.CustomUnmarshal(b)
	}
	return nil
}
```

In this situation, the compiler knows that the `MyMessage` type has no `CustomUnmarshal` method, so the type assertion cannot possibly succeed. In this case, the condition of the if statement can be statically determined and the body be considered dead code. However, a tip build of the compiler continues to emit code for the assertion.

```
0x0021 00033 (/tmp/main.go:10) LEAQ type."".CustomUnmarshaler(SB), AX
0x0028 00040 (/tmp/main.go:10) MOVQ AX, (SP)
0x002c 00044 (/tmp/main.go:10) LEAQ type.*"".Foo(SB), AX
0x0033 00051 (/tmp/main.go:10) MOVQ AX, 8(SP)
0x0038 00056 (/tmp/main.go:9)  MOVQ "".f+64(SP), AX
0x003d 00061 (/tmp/main.go:10) MOVQ AX, 16(SP)
0x0042 00066 (/tmp/main.go:10) PCDATA $0, $1
0x0042 00066 (/tmp/main.go:10) CALL runtime.assertE2I2(SB)
0x0047 00071 (/tmp/main.go:10) MOVQ 24(SP), AX
0x004c 00076 (/tmp/main.go:10) MOVQ 32(SP), CX
0x0051 00081 (/tmp/main.go:10) LEAQ 40(SP), DX
0x0056 00086 (/tmp/main.go:10) CMPB (DX), $0
0x0059 00089 (/tmp/main.go:10) JEQ 161
0x005b 00091 (/tmp/main.go:11) MOVQ 24(AX), AX
0x005f 00095 (/tmp/main.go:11) MOVQ "".b+72(SP), DX
0x0064 00100 (/tmp/main.go:11) MOVQ DX, 8(SP)
0x0069 00105 (/tmp/main.go:11) MOVQ "".b+80(SP), DX
0x006e 00110 (/tmp/main.go:11) MOVQ DX, 16(SP)
0x0073 00115 (/tmp/main.go:11) MOVQ "".b+88(SP), DX
0x0078 00120 (/tmp/main.go:11) MOVQ DX, 24(SP)
0x007d 00125 (/tmp/main.go:11) MOVQ CX, (SP)
0x0081 00129 (/tmp/main.go:11) PCDATA $0, $2
0x0081 00129 (/tmp/main.go:11) CALL AX
0x0083 00131 (/tmp/main.go:11) MOVQ 32(SP), AX
0x0088 00136 (/tmp/main.go:11) MOVQ 40(SP), CX
0x008d 00141 (/tmp/main.go:11) MOVQ AX, "".~r1+96(SP)
0x0092 00146 (/tmp/main.go:11) MOVQ CX, "".~r1+104(SP)
0x0097 00151 (/tmp/main.go:11) MOVQ 48(SP), BP
0x009c 00156 (/tmp/main.go:11) ADDQ $56, SP
0x00a0 00160 (/tmp/main.go:11) RET
```
Performance,compiler/runtime
low
Critical
319,043,089
TypeScript
In JS, export assignments should bind as types, not just values
Currently, there is special code in the checker to look up values as types in order to support the value-only way that the binder exports things that come from an export assignment, even things that in TypeScript export types, such as classes. The binder should instead treat JS export assignments as types if the initialiser (such as a class or constructor function) or later assignments (such as prototype assignments) warrant it. That means the checker code will be more uniform and easier to understand.

**Code**

```js
function F() {
}
F.prototype.method = function() {
}
exports.F = F
```

Allows `F` to be used as a type:

```js
var x = require('./module')
/** @type {x.F} */
```

But this doesn't work in pure typespace:

```js
/** @typedef {import('./module').F} F */
/** @type {F} */
var f;
```

Note that binding this way will enable tricks that aren't possible in TypeScript today:

```js
module.exports.C = class { }
module.exports.C.D = class { }
```

Now `var m = require('./module')` makes both `m.C` and `m.C.D` available as types.
Bug,checkJs,Domain: JavaScript
low
Minor
319,047,561
flutter
Android things: All shared object libraries(*.so) in the APK should be uncompressed
I am using Flutter with Android Things on a Raspberry Pi 3. Everything works, and I am testing a basic hello world. When I upload to the Android Things console to embed my app, I get: "All shared object libraries(*.so) in the APK should be uncompressed". Any tips?
platform-android,tool,P2,team-android,triaged-android
low
Major
319,104,562
pytorch
how can i use openMP for caffe2? why caffe2 not work in multi-threads mode?
I compiled Caffe2 with OpenMP set ON and using OpenBLAS, and it compiles correctly. The problem is how to use it.

1) I set OPENBLAS_NUM_THREADS=1 and OMP_NUM_THREADS=4, then run this command:

```
python convnet_benchmarks.py --batch_size=16 iteration 1 --cpu --mode AlexNet
```

I can see that only one core runs, and the result is "Milliseconds per iter: 8499". I changed the OMP_NUM_THREADS value to 1, 2, 3, and 4, but still only one core runs and the result is the same.

2) I set OPENBLAS_NUM_THREADS=4 and OMP_NUM_THREADS=1, then run the same command. I can see 4 cores running, and the result is "Milliseconds per iter: 3835".

So OpenBLAS works in multi-threaded mode. Why does Caffe2 not work in multi-threaded mode with OpenMP?
caffe2
low
Minor
319,152,475
go
cmd/compile: BCE/Prove do not take arithmetic into account
This is a follow-up to https://github.com/golang/go/issues/19714, where prove/BCE were taught to take transitivity into account; but the originally-motivating example, where indices are constructed as arithmetic expressions, still has bounds checks that are not eliminated:

```go
package hex

func Encode(dst, src []byte) {
	if len(dst) < len(src)*2 {
		panic("dst overflow")
	}
	for i, _ := range src {
		dst[i*2] = 0   // XXX BC not eliminated
		dst[i*2+1] = 0 // XXX BC not eliminated
	}
}
```

```
$ gotip version
go version devel +030ac2c719 Tue May 1 05:02:43 2018 +0000 linux/amd64
```

```
$ gotip tool compile -S 3.go | go2asm
```

```asm
TEXT ·Encode(SB), $24-48 // 3.go:3
	// MOVQ (TLS), CX (stack growth prologue)
	// CMPQ SP, 16(CX)
	// JLS 158
	// SUBQ $24, SP
	// MOVQ BP, 16(SP) (BP save)
	// LEAQ 16(SP), BP (BP init)
	// FUNCDATA $0, gclocals·7578f313ff9d15b1ec5bd5c7e7ab3d8c(SB) (args)
	FUNCDATA $1, gclocals·69c1753bd5f81501d95132d08af04464(SB) (locals)
	MOVQ src+32(FP), AX // 3.go:4
	MOVQ AX, CX
	SHLQ $1, AX
	MOVQ dst+8(FP), DX
	CMPQ DX, AX
	JLT pc128
	MOVQ dst+0(FP), AX
	XORL BX, BX
	JMP pc72 // 3.go:8
pc63:
	MOVB $0, 1(BX)(AX*1) // 3.go:10
	LEAQ 1(SI), BX // 3.go:8
pc72:
	CMPQ BX, CX
	JGE pc104
	MOVQ BX, SI
	SHLQ $1, BX // 3.go:9
	CMPQ BX, DX
	JCC pc121 // <-- NOTE
	MOVB $0, (AX)(BX*1)
	LEAQ 1(SI)(SI*1), DI // 3.go:10
	CMPQ DI, DX
	JCS pc63
	JMP pc114 // <-- NOTE
pc114:
	// PCDATA $0, $1 (stack growth)
	// CALL runtime.panicindex(SB) // <-- NOTE
	// UNDEF
pc121:
	// PCDATA $0, $1 // 3.go:9
	// CALL runtime.panicindex(SB) // <-- NOTE
	// UNDEF
pc128:
	// LEAQ type.string(SB), AX // 3.go:5
	// MOVQ AX, (SP)
	// LEAQ ·statictmp_0(SB), AX
	// MOVQ AX, 8(SP)
	// PCDATA $0, $1
	// CALL runtime.gopanic(SB)
	// UNDEF
	// NOP
	// PCDATA $0, $-1 // 3.go:3
	// CALL runtime.morestack_noctxt(SB)
	// JMP 0
```

/cc @rasky, @josharian
NeedsInvestigation,compiler/runtime
low
Major
319,160,418
flutter
Example application in Flutter Plugin project does not include Gradle sub-projects
## Steps to Reproduce

1. Create a new Flutter Plugin project.
2. Add a new Gradle project under the `android/` folder called `foo`.
3. Add some Java code to `foo`.
4. Make the `android/` project depend on the `foo` sub-project (add a dependency like `implementation project(':foo')`).
5. Use Java code from `foo` in the `android/` project's code.

The Android project should build correctly. However, the `example` app will not build, failing with this error:

```
Launching lib/main.dart on Android SDK built for x86 in debug mode...
Initializing gradle...
Resolving dependencies...
Finished with error: Please review your Gradle project setup in the android/ folder.
* Error running Gradle:
Exit code 1 from: /Users/renato/AndroidStudioProjects/flutter_plugin/example/android/gradlew app:properties:

FAILURE: Build failed with an exception.

* Where:
Build file '/Users/renato/AndroidStudioProjects/flutter_plugin/android/build.gradle' line: 37

* What went wrong:
A problem occurred evaluating project ':flutter_plugin'.
> Project with path ':foo' could not be found in project ':flutter_plugin'.
```

The problem is that the `android/` project is "included" in the `example/android` project with an incomplete hack. In `example/android/settings.gradle`, the `android/` project is included using this code:

```groovy
plugins.each { name, path ->
    def pluginDirectory = flutterProjectRoot.resolve(path).resolve('android').toFile()
    include ":$name"
    project(":$name").projectDir = pluginDirectory
}
```

This fails to take into account that the `android/` folder may contain sub-projects, hence breaking Gradle's build expectations. The code above should take Gradle semantics into account. I suppose the best way to solve this would be to change completely how this is being done and let Gradle do the work. For example, if the `example/android` project were part of the same build as the `android/` project, then `example/android/` could just depend on `android/`, similar to how `android/` depends on the artifacts generated by `foo/` in the example given above. Alternatively, you would have to parse the `settings.gradle` files recursively to be able to know which projects should be included in the example app, which seems unreliable and error-prone.

## Logs
N/A

## Flutter Doctor
N/A
tool,t: gradle,P2,team-tool,triaged-tool
low
Critical
319,168,769
go
net/http/httptrace: internal nettrace leaks into other net.Dial calls
### What version of Go are you using (`go version`)?

go version go1.10.2 linux/amd64

### Does this issue reproduce with the latest release?

Yep.

### What did you do?

Run the following program:

```
package main

import (
	"context"
	"fmt"
	"net"
	"net/http"
	"net/http/httptrace"
)

func main() {
	dialFunc := func(ctx context.Context, network string, addr string) (net.Conn, error) {
		// This would be where there's a net.Dial to an external lookup service
		// but just pretend that this actually matters for the purposes of this bug
		// report
		n, err := (&net.Dialer{}).DialContext(ctx, "tcp", "www.golang.org:80")
		if err != nil {
			panic(err)
		}
		n.Close()
		// assume that 1.1.1.1 is an IP returned from the lookup service
		// This IP was purely chosen because it's an IP that I know runs a
		// HTTP server
		return (&net.Dialer{}).DialContext(ctx, network, "1.1.1.1:80")
	}
	transport := &http.Transport{
		DialContext: dialFunc,
	}
	req, err := http.NewRequest("GET", "http://example.org/", nil)
	if err != nil {
		panic(err)
	}
	ct := &httptrace.ClientTrace{
		DNSStart: func(info httptrace.DNSStartInfo) {
			fmt.Println("DNSStart:", info.Host)
		},
	}
	ctx := httptrace.WithClientTrace(context.Background(), ct)
	req = req.WithContext(ctx)
	resp, err := transport.RoundTrip(req)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
}
```

### What did you expect to see?

Nothing printed on stdout.

### What did you see instead?

`DNSStart: www.golang.org` printed on stdout.

Additionally, ConnectStart and ConnectDone get called twice, once for the resolution service and once for the actual HTTP connection. This also gets really annoying once your resolution service might be using HTTP itself, resulting in even more confusion. Normally, I'd stash away the ClientTrace while the resolution function runs and call the DNSStart and DNSDone functions independently, but there is no way to remove a ClientTrace once it's been put into the context. I'm fairly confident I can come up with a workaround for this, but this sort of interaction is unexpected and could cause confusion if you were using a library for ClientTraces that wasn't expecting overridden Dial functions.
NeedsInvestigation
low
Critical
319,203,302
electron
Make single instance the default
Running multiple instances of Electron is generally a bad idea and [not really supported](https://github.com/electron/electron/issues/4727) with the default configuration. Some renderer features don't work in this scenario ([localStorage](https://github.com/electron/electron/issues/2493), [IndexedDb](https://github.com/electron/electron/issues/10792), persisted sessions?) and this is not documented and so I'm guessing not tested for regressions either. This would be a significant breaking change but in the spirit of guiding developers in the right direction it might be worth it. Related to @MarshallOfSound's [New event for `makeSingleInstance`](https://github.com/electron/electron/issues/12752).
enhancement :sparkles:,discussion,semver/major
low
Major
319,207,816
pytorch
[Caffe2 warpctc] How to use the offered warpctc?
Caffe2 offers warpctc in caffe2/contrib/warpctc, but how can I use it as a Caffe2 op? When I run `ctc_ops_test.py`, this error happens: `Ignoring @/caffe2/caffe2/contrib/warpctc:ctc_ops as it is not a valid file.`. Can anyone help me? Thanks!
caffe2
low
Critical
319,265,734
kubernetes
kubelet: CRI: return structured errors from CRI calls
Currently, the CRI does not return any structured errors such that the kubelet code can, for example, distinguish between a "Runtime not available" error or a "Container not found" error. See https://github.com/kubernetes/kubernetes/pull/63334 for an example. The kubelet has many paths where a "Container not found" error is no big deal and the error could be treated harmlessly and not logged. But since these methods do not return structured errors, there is no reliable way to tell them apart. Discussed in sig-node 5/1/18. @smarterclayton @derekwaynecarr @dchen1107 @vishh @dashpole
area/kubelet,sig/node,kind/feature,lifecycle/frozen
medium
Critical
319,268,323
rust
Confusing error when forgetting to include system crates in Cargo.toml
Using a crate like `rand` without adding it to dependencies produces a long message:

```
error[E0658]: use of unstable library feature 'rustc_private': this crate is being loaded from the sysroot, an unstable location; did you mean to load this crate from crates.io via 'Cargo.toml' instead? (see issue #27812)
```

etc. This is pretty incomprehensible for a Rust beginner, compared to the normal error when failing to add a dependency:

```
error[E0463]: can't find crate for `nom`
 --> src/main.rs:1:1
  |
1 | extern crate nom;
  | ^^^^^^^^^^^^^^^^^ can't find crate
```

## Meta

`rustc --version --verbose`:

```
% rustc --version --verbose
rustc 1.27.0-nightly (79252ff4e 2018-04-29)
binary: rustc
commit-hash: 79252ff4e25d82f9fe856cb66f127b79cdace163
commit-date: 2018-04-29
host: x86_64-apple-darwin
release: 1.27.0-nightly
LLVM version: 6.0
```
C-enhancement,A-diagnostics,T-compiler
low
Critical
319,296,790
flutter
Attach widget creation source in build/paint profile timeline events
Would be nice in places like https://github.com/flutter/flutter/blob/master/packages/flutter/lib/src/widgets/framework.dart#L3627 and https://github.com/flutter/flutter/blob/master/packages/flutter/lib/src/rendering/object.dart#L122 to also point out where the widget was created. cc @jacob314 I think we chatted about this but I forgot where.
framework,a: debugging,c: proposal,P2,team-framework,triaged-framework
low
Minor
319,303,754
puppeteer
extensions: Ability to click browser action buttons
We are relying on puppeteer to automate testing of our extension. I can automate most things except I cannot click the button that our extension adds to the browser's toolbar. Could there be an API to interact with browser action buttons? Something to the tune of

```js
let actionButtons = await browser.actionButtons();
await actionButtons[0].click();
```

would be great. This will allow us to fully automate testing of our extension workflow. Thanks!
feature,upstream,chromium
high
Critical
319,413,801
neovim
option 'splitvertical' to split vertically by default
There's an old patch for Vim that adds an option, 'splitvertical', which opens splits vertically rather than horizontally by default: [vim-mq-patches/vertsplit](https://github.com/chrisbra/vim-mq-patches/blob/master/verstplit) It's a nice option to have, since most monitors are wider than they are tall. Could this be added as a feature? Or will something like this be exposed to plugins in the future?
enhancement,has:vim-patch,has:workaround,has:plan
low
Major
319,477,718
go
x/crypto/ssh: test cipher implementations against known good input/output data
If I apply the following patch to the cipher.go file, the tests still pass. It would be nice to catch this kind of error so we can refactor this code and have some confidence we haven't broken it.

```diff
diff --git a/ssh/cipher.go b/ssh/cipher.go
index 67b0126..d99ffc7 100644
--- a/ssh/cipher.go
+++ b/ssh/cipher.go
@@ -16,7 +16,7 @@ import (
 	"hash"
 	"io"
 	"io/ioutil"
-	"math/bits"
+	_ "math/bits"
 
 	"golang.org/x/crypto/internal/chacha20"
 	"golang.org/x/crypto/poly1305"
@@ -666,7 +666,7 @@ func newChaCha20Cipher(key, unusedIV, unusedMACKey []byte, unusedAlgs directionA
 }
 
 func (c *chacha20Poly1305Cipher) readPacket(seqNum uint32, r io.Reader) ([]byte, error) {
-	nonce := [3]uint32{0, 0, bits.ReverseBytes32(seqNum)}
+	nonce := [3]uint32{1, 2, 3}
 	s := chacha20.New(c.contentKey, nonce)
 	var polyKey [32]byte
 	s.XORKeyStream(polyKey[:], polyKey[:])
@@ -724,7 +724,7 @@ func (c *chacha20Poly1305Cipher) readPacket(seqNum uint32,
 }
 
 func (c *chacha20Poly1305Cipher) writePacket(seqNum uint32, w io.Writer, rand io.Reader, payload []byte) error {
-	nonce := [3]uint32{0, 0, bits.ReverseBytes32(seqNum)}
+	nonce := [3]uint32{1, 2, 3}
 	s := chacha20.New(c.contentKey, nonce)
 	var polyKey [32]byte
 	s.XORKeyStream(polyKey[:], polyKey[:])
```
Testing,help wanted
low
Critical
319,519,214
go
runtime: can not read stacktrace from a core file
Please answer these questions before submitting your issue. Thanks!

### What version of Go are you using (`go version`)?

This problem was introduced by b1d1ec9 (CL 110065) and is still present at tip.

### Does this issue reproduce with the latest release?

No.

### What operating system and processor architecture are you using (`go env`)?

```
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/a/.cache/go-build"
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/a/n/go/"
GORACE=""
GOROOT="/usr/local/go-tip"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go-tip/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build495464568=/tmp/go-build -gno-record-gcc-switches"
```

### What did you do?

Given:

```
package main

import "runtime/debug"

func main() {
	debug.SetTraceback("crash")
	crash()
}

func crash() {
	panic("panic!")
}
```

Running it under gdb will produce this stacktrace:

```
(gdb) bt
#0  0x000000000044fff4 in runtime.raise () at <autogenerated>:1
#1  0x0000000000438c2b in runtime.dieFromSignal (sig=6) at /usr/local/go-tip/src/runtime/signal_unix.go:424
#2  0x0000000000438dea in runtime.crash () at /usr/local/go-tip/src/runtime/signal_unix.go:526
#3  0x0000000000426487 in runtime.fatalpanic (msgs=<optimized out>) at /usr/local/go-tip/src/runtime/panic.go:696
#4  0x0000000000425e6b in runtime.gopanic (e=...) at /usr/local/go-tip/src/runtime/panic.go:502
#5  0x0000000000470ac9 in main.crash () at /home/a/temp/simple.go:11
#6  0x0000000000470a7b in main.main () at /home/a/temp/simple.go:7
```

however letting it produce a core file, then reading the core file with gdb, produces this:

```
$ gdb ./simple simple-core
...
(gdb) bt
#0  0x000000000044fff4 in runtime.raise () at <autogenerated>:1
#1  0x0000000000438c2b in runtime.dieFromSignal (sig=6) at /usr/local/go-tip/src/runtime/signal_unix.go:424
#2  0x00000000004390a8 in runtime.sigfwdgo (ctx=0xc000009ac0, info=0xc000009bf0, sig=6, ~r3=<optimized out>) at /usr/local/go-tip/src/runtime/signal_unix.go:637
#3  0x0000000000438488 in runtime.sigtrampgo (ctx=0xc000009ac0, info=0xc000009bf0, sig=<optimized out>) at /usr/local/go-tip/src/runtime/signal_unix.go:289
#4  0x00000000004502e3 in runtime.sigtramp () at <autogenerated>:1
#5  0x00000000004503d0 in ?? () at <autogenerated>:1
#6  0x0000000000000001 in ?? ()
#7  0x0000000000000000 in ?? ()
```

I'm not sure what's happening here. Is the signal handler running and overwriting part of the stack?
NeedsInvestigation,Debugging,compiler/runtime
medium
Critical
319,526,165
TypeScript
CFA failure with fallthrough clauses in switch statements
**TypeScript Version:** master

**Search Terms:**

**Code**

```ts
declare const o: { c?: 1; };
switch (0) {
    case 0:
        break;
    case o.c && o.c.valueOf():
    case o.c && o.c.valueOf():
        break;
}
```

**Expected behavior:** pass

**Actual behavior:**

```
$ node built/local/tsc.js index.ts --strictNullChecks
index.ts:5:15 - error TS2532: Object is possibly 'undefined'.

5     case o.c && o.c.valueOf():
                  ~~~
```

**Playground Link:**

**Related Issues:**
Bug
low
Critical
319,588,548
pytorch
[caffe2]when i do CMAKE
## Issue description

When I cmake Caffe2, the build fails:

```
[ 85%] Linking CXX shared module python/caffe2_pybind11_state_gpu.so
[ 85%] Built target caffe2_pybind11_state_gpu
Makefile:140: recipe for target 'all' failed
make: *** [all] Error 2
```

This is my cmake output: [cmake_output.txt](https://github.com/pytorch/pytorch/files/1967745/cmake_output.txt)

Line 140 of /build/Makefile is: ![1](https://user-images.githubusercontent.com/33169964/39530704-15e4ee8c-4e5c-11e8-9a1a-7cf37cc146a8.png)

## System Info

- PyTorch version: 3.0.0
- Python version: 2.7
- CUDA/cuDNN version: 8.0
- CMake version: 3.9.4
caffe2
low
Critical
319,646,555
pytorch
[Caffe2 Bug] Windows timer is not accurate
## Issue description

Caffe2 fails the timer-related tests on Windows:

* parallel_net_test.exe - This one always fails.
* timer_test.exe - Not always but... good luck...

```
PS <...erase for privacy...>\pytorch\build> .\bin\RelWithDebInfo\parallel_net_test.exe
Running main() from gtest_main.cc
[==========] Running 10 tests from 2 test cases.
[----------] Global test environment set-up.
[----------] 5 tests from DAGNetTest
[ RUN ] DAGNetTest.TestDAGNetTiming
[libprotobuf WARNING C:\Users\tolia\AppData\Local\Temp\protobuf\src\google\protobuf\text_format.cc:305] Warning parsing text-format caffe2.NetDef: 1:50: text format contains deprecated field "num_workers"
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0502 10:15:30.237299 27072 net_dag_utils.cc:102] Operator graph pruning prior to chain compute took: 2.8225e-05 secs
I0502 10:15:30.237299 27072 net_dag.cc:46] Number of parallel execution chains 2
Number of operators = 3
<...erase for privacy...>\pytorch\caffe2\core\parallel_net_test.cc(108): error: The difference between ms and 200 is 21, which exceeds kTimeThreshold, where ms evaluates to 221, 200 evaluates to 200, and kTimeThreshold evaluates to 20.
[ FAILED ] DAGNetTest.TestDAGNetTiming (227 ms)
[ RUN ] DAGNetTest.TestDAGNetTimingReadAfterRead
[libprotobuf WARNING C:\Users\tolia\AppData\Local\Temp\protobuf\src\google\protobuf\text_format.cc:305] Warning parsing text-format caffe2.NetDef: 1:50: text format contains deprecated field "num_workers"
I0502 10:15:30.475047 27072 net_dag_utils.cc:102] Operator graph pruning prior to chain compute took: 2.9936e-05 secs
I0502 10:15:30.475047 27072 net_dag.cc:46] Number of parallel execution chains 3
Number of operators = 3
[ OK ] DAGNetTest.TestDAGNetTimingReadAfterRead (267 ms)
[ RUN ] DAGNetTest.TestDAGNetTimingWriteAfterWrite
[libprotobuf WARNING C:\Users\tolia\AppData\Local\Temp\protobuf\src\google\protobuf\text_format.cc:305] Warning parsing text-format caffe2.NetDef: 1:50: text format contains deprecated field "num_workers"
I0502 10:15:30.736847 27072 net_dag_utils.cc:102] Operator graph pruning prior to chain compute took: 2.1098e-05 secs
I0502 10:15:30.736847 27072 net_dag.cc:46] Number of parallel execution chains 1
Number of operators = 3
[ OK ] DAGNetTest.TestDAGNetTimingWriteAfterWrite (368 ms)
[ RUN ] DAGNetTest.TestDAGNetTimingWriteAfterRead
[libprotobuf WARNING C:\Users\tolia\AppData\Local\Temp\protobuf\src\google\protobuf\text_format.cc:305] Warning parsing text-format caffe2.NetDef: 1:50: text format contains deprecated field "num_workers"
I0502 10:15:31.104876 27072 net_dag_utils.cc:102] Operator graph pruning prior to chain compute took: 2.1953e-05 secs
I0502 10:15:31.104876 27072 net_dag.cc:46] Number of parallel execution chains 1
Number of operators = 3
<...erase for privacy...>\pytorch\caffe2\core\parallel_net_test.cc(254): error: The difference between ms and 350 is 43, which exceeds kTimeThreshold, where ms evaluates to 393, 350 evaluates to 350, and kTimeThreshold evaluates to 20.
[ FAILED ] DAGNetTest.TestDAGNetTimingWriteAfterRead (400 ms)
[ RUN ] DAGNetTest.TestDAGNetTimingControlDependency
[libprotobuf WARNING C:\Users\tolia\AppData\Local\Temp\protobuf\src\google\protobuf\text_format.cc:305] Warning parsing text-format caffe2.NetDef: 5:3: text format contains deprecated field "num_workers"
I0502 10:15:31.504611 27072 net_dag_utils.cc:102] Operator graph pruning prior to chain compute took: 2.2238e-05 secs
I0502 10:15:31.504611 27072 net_dag.cc:46] Number of parallel execution chains 1
Number of operators = 3
<...erase for privacy...>\pytorch\caffe2\core\parallel_net_test.cc(304): error: The difference between ms and 350 is 42, which exceeds kTimeThreshold, where ms evaluates to 392, 350 evaluates to 350, and kTimeThreshold evaluates to 20.
[ FAILED ] DAGNetTest.TestDAGNetTimingControlDependency (398 ms)
[----------] 5 tests from DAGNetTest (1666 ms total)
[----------] 5 tests from SimpleNetTest
[ RUN ] SimpleNetTest.TestSimpleNetTiming
[libprotobuf WARNING C:\Users\tolia\AppData\Local\Temp\protobuf\src\google\protobuf\text_format.cc:305] Warning parsing text-format caffe2.NetDef: 1:50: text format contains deprecated field "num_workers"
<...erase for privacy...>\pytorch\caffe2\core\parallel_net_test.cc(115): error: The difference between ms and 350 is 42, which exceeds kTimeThreshold, where ms evaluates to 392, 350 evaluates to 350, and kTimeThreshold evaluates to 20.
[ FAILED ] SimpleNetTest.TestSimpleNetTiming (398 ms)
[ RUN ] SimpleNetTest.TestSimpleNetTimingReadAfterRead
[libprotobuf WARNING C:\Users\tolia\AppData\Local\Temp\protobuf\src\google\protobuf\text_format.cc:305] Warning parsing text-format caffe2.NetDef: 1:50: text format contains deprecated field "num_workers"
<...erase for privacy...>\pytorch\caffe2\core\parallel_net_test.cc(164): error: The difference between ms and 350 is 42, which exceeds kTimeThreshold, where ms evaluates to 392, 350 evaluates to 350, and kTimeThreshold evaluates to 20.
[ FAILED ] SimpleNetTest.TestSimpleNetTimingReadAfterRead (398 ms)
[ RUN ] SimpleNetTest.TestSimpleNetTimingWriteAfterWrite
[libprotobuf WARNING C:\Users\tolia\AppData\Local\Temp\protobuf\src\google\protobuf\text_format.cc:305] Warning parsing text-format caffe2.NetDef: 1:50: text format contains deprecated field "num_workers"
<...erase for privacy...>\pytorch\caffe2\core\parallel_net_test.cc(212): error: The difference between ms and 350 is 21, which exceeds kTimeThreshold, where ms evaluates to 371, 350 evaluates to 350, and kTimeThreshold evaluates to 20.
[ FAILED ] SimpleNetTest.TestSimpleNetTimingWriteAfterWrite (376 ms)
[ RUN ] SimpleNetTest.TestSimpleNetTimingWriteAfterRead
[libprotobuf WARNING C:\Users\tolia\AppData\Local\Temp\protobuf\src\google\protobuf\text_format.cc:305] Warning parsing text-format caffe2.NetDef: 1:50: text format contains deprecated field "num_workers"
<...erase for privacy...>\pytorch\caffe2\core\parallel_net_test.cc(260): error: The difference between ms and 350 is 28, which exceeds kTimeThreshold, where ms evaluates to 378, 350 evaluates to 350, and kTimeThreshold evaluates to 20.
[ FAILED ] SimpleNetTest.TestSimpleNetTimingWriteAfterRead (383 ms)
[ RUN ] SimpleNetTest.TestSimpleNetTimingControlDependency
[libprotobuf WARNING C:\Users\tolia\AppData\Local\Temp\protobuf\src\google\protobuf\text_format.cc:305] Warning parsing text-format caffe2.NetDef: 5:3: text format contains deprecated field "num_workers"
<...erase for privacy...>\pytorch\caffe2\core\parallel_net_test.cc(310): error: The difference between ms and 350 is 43, which exceeds kTimeThreshold, where ms evaluates to 393, 350 evaluates to 350, and kTimeThreshold evaluates to 20.
[ FAILED ] SimpleNetTest.TestSimpleNetTimingControlDependency (398 ms)
[----------] 5 tests from SimpleNetTest (1962 ms total)
[----------] Global test environment tear-down
[==========] 10 tests from 2 test cases ran. (3629 ms total)
[ PASSED ] 2 tests.
[ FAILED ] 8 tests, listed below:
[ FAILED ] DAGNetTest.TestDAGNetTiming
[ FAILED ] DAGNetTest.TestDAGNetTimingWriteAfterRead
[ FAILED ] DAGNetTest.TestDAGNetTimingControlDependency
[ FAILED ] SimpleNetTest.TestSimpleNetTiming
[ FAILED ] SimpleNetTest.TestSimpleNetTimingReadAfterRead
[ FAILED ] SimpleNetTest.TestSimpleNetTimingWriteAfterWrite
[ FAILED ] SimpleNetTest.TestSimpleNetTimingWriteAfterRead
[ FAILED ] SimpleNetTest.TestSimpleNetTimingControlDependency

8 FAILED TESTS
```

```
PS <...erase for privacy...>\pytorch\build> .\bin\RelWithDebInfo\timer_test.exe
Running main() from gtest_main.cc
[==========] Running 2 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 2 tests from TimerTest
[ RUN ] TimerTest.Test
[ OK ] TimerTest.Test (115 ms)
[ RUN ] TimerTest.TestLatency
Average nanosecond latency is: 38.775
Average microsecond latency is: 0.040485
Average millisecond latency is: 4.0208e-05
[ OK ] TimerTest.TestLatency (3 ms)
[----------] 2 tests from TimerTest (119 ms total)
[----------] Global test environment tear-down
[==========] 2 tests from 1 test case ran. (119 ms total)
[ PASSED ] 2 tests.
```

```
PS <...erase for privacy...>\pytorch\build> .\bin\RelWithDebInfo\timer_test.exe
Running main() from gtest_main.cc
[==========] Running 2 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 2 tests from TimerTest
[ RUN ] TimerTest.Test
<...erase for privacy...>\pytorch\caffe2\core\timer_test.cc(26): error: The difference between ns and 100000000 is 15655312, which exceeds 10000000, where ns evaluates to 115655312, 100000000 evaluates to 100000000, and 10000000 evaluates to 10000000.
<...erase for privacy...>\pytorch\caffe2\core\timer_test.cc(27): error: The difference between us and 100000 is 15655.3125, which exceeds 10000, where us evaluates to 115655.3125, 100000 evaluates to 100000, and 10000 evaluates to 10000.
```
<...erase for privacy...>\pytorch\caffe2\core\timer_test.cc(28): error: The difference between ms and 100 is 15.655593872070313, which exceeds 10, where ms evaluates to 115.65559387207031, 100 evaluates to 100, and 10 evaluates to 10. [ FAILED ] TimerTest.Test (126 ms) [ RUN ] TimerTest.TestLatency Average nanosecond latency is: 37.066 Average microsecond latency is: 0.036486 Average millisecond latency is: 4.0194e-05 [ OK ] TimerTest.TestLatency (4 ms) [----------] 2 tests from TimerTest (131 ms total) [----------] Global test environment tear-down [==========] 2 tests from 1 test case ran. (131 ms total) [ PASSED ] 1 test. [ FAILED ] 1 test, listed below: [ FAILED ] TimerTest.Test 1 FAILED TEST ``` I don't think the problem is in the timer itself; it's more likely caused by the Windows scheduler/time-slice configuration. On my machine, when timer_test fails, it almost always shows a 15 ms difference (for the ns/us/ms tests); occasionally it is 21 ms or 22 ms. (The default Windows timer resolution is ~15.6 ms, which matches the observed 15.655 ms difference.) Note that I'm testing on an idle 12-thread machine, so this is not a system-load issue. ## System Info - PyTorch or Caffe2: Caffe2 - How you installed PyTorch (conda, pip, source): source - Build command you used (if compiling from source): cmake - OS: Win10 - PyTorch version: master - CUDA/cuDNN version: 9.1/7.1.3 - GPU models and configuration: 1080Ti - Visual Studio version (if compiling from source): 2017.6 + v140 toolchain - CMake version: 3.11.1
caffe2
low
Critical
319,663,585
youtube-dl
[ESPN] ESPN+ "Unable to open key file" and "ffmpeg exited with code 1"
## Please follow the guide below - You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly - Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`) - Use the *Preview* tab to see what your issue will actually look like --- ### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.05.01*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected. - [x] I've **verified** and **I assure** that I'm running youtube-dl **2018.05.01** ### Before submitting an *issue* make sure you have: - [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections - [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones - [x] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser ### What is the purpose of your *issue*? - [x] Bug report (encountered problems with youtube-dl) - [x] Site support request (request for adding support for a new site) - [ ] Feature request (request for a new functionality) - [ ] Question - [ ] Other Not sure if this is a bug or more of a site support request. --- ### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your *issue* --- ### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows: Add the `-v` flag to **your command line** you run youtube-dl with (`youtube-dl -v <your command line>`), copy the **whole** output and insert it here. 
It should look similar to one below (replace it with **your** log inserted between triple ```): ``` Using site URL [debug] System config: [] [debug] User config: [] [debug] Custom config: [] [debug] Command-line args: ['--verbose', '--ffmpeg-location', 'C:\\ffmpeg\\ffmpeg-20180502-e07b191-win32-static\\bin', 'http://www.espn.com/ watch/film/cf950763-9adb-4acf-b04e-425d8676615c/tommy', '-f', '4600', '-o', 'W:\\Work\\ESPNPlus.Test.720p.WEB-DL.x264-YouTube-DL.mkv'] [debug] Encodings: locale cp1252, fs mbcs, out cp437, pref cp1252 [debug] youtube-dl version 2018.05.01 [debug] Python version 3.4.4 (CPython) - Windows-7-6.1.7601-SP1 [debug] exe versions: ffmpeg N-90920-ge07b1913fc, ffprobe N-90920-ge07b1913fc, rtmpdump 2.4 [debug] Proxy map: {} [ESPNArticle] tommy: Downloading webpage ERROR: Unable to extract video id; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type you tube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output. Traceback (most recent call last): File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpulhs0gaq\build\youtube_dl\YoutubeDL.py", line 789, in extract_info File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpulhs0gaq\build\youtube_dl\extractor\common.py", line 440, in extract File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpulhs0gaq\build\youtube_dl\extractor\espn.py", line 204, in _real_extract File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpulhs0gaq\build\youtube_dl\extractor\common.py", line 820, in _search_regex youtube_dl.utils.RegexNotFoundError: Unable to extract video id; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output. 
Using m3u8 URL [debug] System config: [] [debug] User config: [] [debug] Custom config: [] [debug] Command-line args: ['--verbose', '--ffmpeg-location', 'C:\\ffmpeg\\ffmpeg-20180502-e07b191-win32-static\\bin', 'https://hlsvod-l3c-c lt1.media.plus.espn.com/ps01/espn/hls/2018/03/14/855deb02-94a8-4b63-874e-add949b25ab5/1520999867346/master_wired60_s2.m3u8', '-f', '4600', ' -o', 'W:\\Work\\ESPNPlus.Test.720p.WEB-DL.x264-YouTube-DL.mkv'] [debug] Encodings: locale cp1252, fs mbcs, out cp437, pref cp1252 [debug] youtube-dl version 2018.05.01 [debug] Python version 3.4.4 (CPython) - Windows-7-6.1.7601-SP1 [debug] exe versions: ffmpeg N-90920-ge07b1913fc, ffprobe N-90920-ge07b1913fc, rtmpdump 2.4 [debug] Proxy map: {} [generic] master_wired60_s2: Requesting header [generic] master_wired60_s2: Downloading m3u8 information [debug] Invoking downloader on 'https://hlsvod-l3c-clt1.media.plus.espn.com/ps01/espn/hls/2018/03/14/855deb02-94a8-4b63-874e-add949b25ab5/15 20999867346/asset_3500k.m3u8' [download] Destination: W:\Work\ESPNPlus.Test.720p.WEB-DL.x264-YouTube-DL.mkv [debug] ffmpeg command line: "C:\ffmpeg\ffmpeg-20180502-e07b191-win32-static\bin\ffmpeg" -y -loglevel verbose -headers "Accept-Encoding: gzi p, deflate Accept-Language: en-us,en;q=0.5 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0 (Chrome) " -i "https://hlsvod-l3c-clt1.media.plus.espn.com/ps01/espn/hls/2018/03/14/855deb02-94a8-4b63-874e-add949b25ab5/1520999867346/asset_3500k.m3 u8" -c copy -f mp4 "-bsf:a" aac_adtstoasc "file:W:\Work\ESPNPlus.Test.720p.WEB-DL.x264-YouTube-DL.mkv.part" ffmpeg version N-90920-ge07b1913fc Copyright (c) 2000-2018 the FFmpeg developers built with gcc 7.3.0 (GCC) configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-bzlib --enable-fontconfig --enable-gnutls --enable-iconv --enable-lib ass --enable-libbluray 
--enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg - -enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwav pack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable- libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth libavutil 56. 18.100 / 56. 18.100 libavcodec 58. 19.100 / 58. 19.100 libavformat 58. 13.100 / 58. 13.100 libavdevice 58. 4.100 / 58. 4.100 libavfilter 7. 21.100 / 7. 21.100 libswscale 5. 2.100 / 5. 2.100 libswresample 3. 2.100 / 3. 2.100 libpostproc 55. 2.100 / 55. 2.100 [AVIOContext @ 05b37a00] The "user-agent" option is deprecated: use the "user_agent" option instead [hls,applehttp @ 05b33080] HLS request for url 'https://hlsvod-l3c-clt1.media.plus.espn.com/ps01/espn/hls/2018/03/14/855deb02-94a8-4b63-874e -add949b25ab5/1520999867346/asset_3500k/00000/asset_3500k_00001.ts', offset 0, playlist 0 [hls,applehttp @ 05b33080] Opening 'https://playback.svcs.plus.espn.com/media/855deb02-94a8-4b63-874e-add949b25ab5/keys/6eb41ad2-a2e3-4ba9-b b8f-e7e72c19aa56' for reading [https @ 06c8a980] The "user-agent" option is deprecated: use the "user_agent" option instead Unable to open key file https://playback.svcs.plus.espn.com/media/855deb02-94a8-4b63-874e-add949b25ab5/keys/6eb41ad2-a2e3-4ba9-bb8f-e7e72c19 aa56 [hls,applehttp @ 05b33080] Opening 'crypto+https://hlsvod-l3c-clt1.media.plus.espn.com/ps01/espn/hls/2018/03/14/855deb02-94a8-4b63-874e-add9 49b25ab5/1520999867346/asset_3500k/00000/asset_3500k_00001.ts' for reading [https @ 06f7cb40] The "user-agent" option is deprecated: use the "user_agent" option instead 
[hls,applehttp @ 05b33080] Error when loading first segment 'https://hlsvod-l3c-clt1.media.plus.espn.com/ps01/espn/hls/2018/03/14/855deb02-9 4a8-4b63-874e-add949b25ab5/1520999867346/asset_3500k/00000/asset_3500k_00001.ts' https://hlsvod-l3c-clt1.media.plus.espn.com/ps01/espn/hls/2018/03/14/855deb02-94a8-4b63-874e-add949b25ab5/1520999867346/asset_3500k.m3u8: In valid data found when processing input ERROR: ffmpeg exited with code 1 File "__main__.py", line 19, in <module> File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpulhs0gaq\build\youtube_dl\__init__.py", line 471, in main File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpulhs0gaq\build\youtube_dl\__init__.py", line 461, in _real_main File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpulhs0gaq\build\youtube_dl\YoutubeDL.py", line 1993, in download File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpulhs0gaq\build\youtube_dl\YoutubeDL.py", line 800, in extract_info File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpulhs0gaq\build\youtube_dl\YoutubeDL.py", line 854, in process_ie_result File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpulhs0gaq\build\youtube_dl\YoutubeDL.py", line 1627, in process_video_result File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpulhs0gaq\build\youtube_dl\YoutubeDL.py", line 1900, in process_info File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpulhs0gaq\build\youtube_dl\YoutubeDL.py", line 1839, in dl File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpulhs0gaq\build\youtube_dl\downloader\common.py", line 365, in download File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpulhs0gaq\build\youtube_dl\downloader\external.py", line 64, in real_download File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpulhs0gaq\build\youtube_dl\downloader\common.py", line 166, in report_error File 
"C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpulhs0gaq\build\youtube_dl\YoutubeDL.py", line 617, in report_error File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpulhs0gaq\build\youtube_dl\YoutubeDL.py", line 579, in trouble <end of log> ``` --- ### If the purpose of this *issue* is a *site support request* please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**): - Single video: http://www.espn.com/watch/film/cf950763-9adb-4acf-b04e-425d8676615c/tommy - m3u8 of URL: https://aeng.svcs.plus.espn.com/settoken/redirect/v1?sect=espnplus:web:vod:en&ssess=d912566c-71d2-47dc-be27-d5a421fce34d&trk=s&platform=web&app=espn&partnr=espn&uid={A667818B-850A-4010-A781-8B850A7010D6}&swid={A667818B-850A-4010-A781-8B850A7010D6}&cdn=LEVEL3&corigin=CLT1&mediaurl=https://hlsvod-l3c-clt1.media.plus.espn.com/ps01/espn/hls/2018/03/14/855deb02-94a8-4b63-874e-add949b25ab5/1520999867346/master_wired60_s2.m3u8 or - m3u8 of URL: https://hlsvod-l3c-clt1.media.plus.espn.com/ps01/espn/hls/2018/03/14/855deb02-94a8-4b63-874e-add949b25ab5/1520999867346/master_wired60_s2.m3u8 Note that **youtube-dl does not support sites dedicated to [copyright infringement](https://github.com/rg3/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free)**. In order for a site support request to be accepted, all provided example URLs should not violate any copyrights. --- ### Description of your *issue*, suggested solution and other information Am I missing something obvious here? I get "Unable to open key file", "Error when loading first segment", and "ffmpeg exited with code 1" errors. What is wrong? Do I need to pass more information through the CLI? This is an ESPN+ video, and an ESPN+ account is required; I have one. ESPN+ has a free trial if an account is needed for testing; just go here: https://plus.espn.com/ Any help is appreciated. Thanks!
geo-restricted,account-needed
high
Critical
319,667,869
flutter
Hot reload fails with error: Unexpected tag 74 (ReturnStatement)
## Steps to Reproduce I'm running `/Users/scheglov/Source/flutter/bin/flutter --no-color, run, --machine, --track-widget-creation, --device-id=flutter-tester, /Users/scheglov/dart/flutter-projects/test_sink/.dart_tool/render_server.dart` and hot reload fails with the error below when I change `render_server.dart` The file `render_server.dart` imports `to_render.dart`. If I make small changes to `to_render.dart`, like changing a string literal, hot reload works fine. But when I make a big change, like replacing the file with completely different class, it fails. ## Logs Run your application with `flutter run` and attach all the log output. ``` [{"event":"app.progress","params":{"appId":"19f86ad7-9e8d-4f7e-824c-3d55cc5eaa9a","id":"12","progressId":"hot.reload","message":"Performing hot reload..."}}] {"exception":"'file:///Users/scheglov/dart/flutter-projects/test_sink/.dart_tool/to_render.dart': error: Unexpected tag 74 (ReturnStatement) in MyHomePage.build, expected a procedure, a constructor or a function node","stackTrace":"#0 StatelessElement.build (package:flutter/src/widgets/framework.dart:3687:28)\n#1 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:3634:15)\n#2 Element.rebuild (package:flutter/src/widgets/framework.dart:3487:5)\n#3 BuildOwner.buildScope (package:flutter/src/widgets/framework.dart:2234:33)\n#4 _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding&PaintingBinding&RendererBinding&WidgetsBinding.drawFrame (package:flutter/src/widgets/binding.dart:626:20)\n#5 _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding&PaintingBinding&RendererBinding._handlePersistentFrameCallback (package:flutter/src/rendering/binding.dart:208:5)\n#6 _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding._invokeFrameCallback (package:flutter/src/scheduler/binding.dart:990:15)\n#7 
_WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding.handleDrawFrame (package:flutter/src/scheduler/binding.dart:930:9)\n#8 _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding.scheduleWarmUpFrame.<anonymous closure> (package:flutter/src/scheduler/binding.dart:751:7)\n#9 Timer._createTimer.<anonymous closure> (dart:async/runtime/libtimer_patch.dart:21:15)\n#10 _Timer._runTimers (dart:isolate/runtime/libtimer_impl.dart:382:19)\n#11 _Timer._handleMessage (dart:isolate/runtime/libtimer_impl.dart:416:5)\n#12 _RawReceivePortImpl._handleMessage (dart:isolate/runtime/libisolate_patch.dart:165:12)\n","library":"widgets library","context":"building MyHomePage(dirty)"} ``` Or (without `--track-widget-creation`): ``` [{"event":"app.progress","params":{"appId":"89a2baf5-fbd5-454e-91ef-7172d5f37009","id":"1","progressId":"hot.reload","message":"Initializing hot reload..."}}] {"exception":"'file:///Users/scheglov/dart/flutter-projects/test_sink/.dart_tool/to_render.dart': error: Unexpected tag 128 (SpecializedVariableGet) in MyHomePage.build, expected a procedure, a constructor or a function node","stackTrace":"#0 StatelessElement.build (package:flutter/src/widgets/framework.dart:3687:28)\n#1 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:3634:15)\n#2 Element.rebuild (package:flutter/src/widgets/framework.dart:3487:5)\n#3 BuildOwner.buildScope (package:flutter/src/widgets/framework.dart:2234:33)\n#4 _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding&PaintingBinding&RendererBinding&WidgetsBinding.drawFrame (package:flutter/src/widgets/binding.dart:626:20)\n#5 _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding&PaintingBinding&RendererBinding._handlePersistentFrameCallback (package:flutter/src/rendering/binding.dart:208:5)\n#6 _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding._invokeFrameCallback 
(package:flutter/src/scheduler/binding.dart:990:15)\n#7 _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding.handleDrawFrame (package:flutter/src/scheduler/binding.dart:930:9)\n#8 _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding.scheduleWarmUpFrame.<anonymous closure> (package:flutter/src/scheduler/binding.dart:751:7)\n#9 Timer._createTimer.<anonymous closure> (dart:async/runtime/libtimer_patch.dart:21:15)\n#10 _Timer._runTimers (dart:isolate/runtime/libtimer_impl.dart:382:19)\n#11 _Timer._handleMessage (dart:isolate/runtime/libtimer_impl.dart:416:5)\n#12 _RawReceivePortImpl._handleMessage (dart:isolate/runtime/libisolate_patch.dart:165:12)\n","library":"widgets library","context":"building MyHomePage(dirty)"} ``` Run `flutter analyze` and attach any output of that command also. ## Flutter Doctor Paste the output of running `flutter doctor -v` here. ``` Flutter 0.3.6-pre.86 • channel master • [email protected]:scheglov/flutter.git Framework • revision dba7855de8 (10 minutes ago) • 2018-05-02 11:18:48 -0700 Engine • revision d48ba4c034 Tools • Dart 2.0.0-dev.50.0.flutter-0cc70c4a7c Running flutter doctor... Doctor summary (to see all details, run flutter doctor -v): [✓] Flutter (Channel master, v0.3.6-pre.86, on Mac OS X 10.13.3 17D47, locale en-US) [✓] Android toolchain - develop for Android devices (Android SDK 27.0.2) [!] iOS toolchain - develop for iOS devices (Xcode 9.2) ✗ libimobiledevice and ideviceinstaller are not installed. To install, run: brew install --HEAD libimobiledevice brew install ideviceinstaller ✗ ios-deploy not installed. To install: brew install ios-deploy ✗ CocoaPods not installed. CocoaPods is used to retrieve the iOS platform side's plugin code that responds to your plugin usage on the Dart side. Without resolving iOS dependencies with CocoaPods, plugins will not work on iOS. 
For more info, see https://flutter.io/platform-plugins To install: brew install cocoapods pod setup [✓] Android Studio (version 3.0) [!] IntelliJ IDEA Ultimate Edition (version 2018.1) ✗ Flutter plugin not installed; this adds Flutter specific functionality. ✗ Dart plugin not installed; this adds Dart specific functionality. [✓] IntelliJ IDEA Community Edition (version 2018.1) [!] VS Code (version 1.10.1) [!] Connected devices ! No devices available ! Doctor found issues in 4 categories. ``` > For more information about diagnosing and reporting Flutter bugs, please see [https://flutter.io/bug-reports/](https://flutter.io/bug-reports/).
tool,dependency: dart,t: hot reload,P2,team-tool,triaged-tool
low
Critical
319,676,961
bitcoin
RPC won't bind without an IP address on a non-localhost interface
If the only interface that is up and has an IP address is lo, one of the two default rpcbinds (:: and 0.0.0.0) will fail with "libevent: getaddrinfo: address family for nodename not supported".
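As a workaround sketch until the default binding is fixed: bind the RPC server explicitly to the loopback address instead of relying on the two defaults. This is an assumption about the reporter's setup, not a confirmed fix; -rpcbind and -rpcallowip are standard bitcoind options.

```shell
# Hypothetical workaround for a lo-only host: skip the default binds
# (:: and 0.0.0.0) by naming the loopback address explicitly.
bitcoind -rpcbind=127.0.0.1 -rpcallowip=127.0.0.1
```

The same two lines can go in bitcoin.conf as `rpcbind=127.0.0.1` and `rpcallowip=127.0.0.1`.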
RPC/REST/ZMQ
low
Major
319,691,404
youtube-dl
Error Unable to extract Video [Openload]
### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.05.01*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected. - [x] I've **verified** and **I assure** that I'm running youtube-dl **2018.05.01** ### Before submitting an *issue* make sure you have: - [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections - [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones - [x] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser ### What is the purpose of your *issue*? - [x] Bug report (encountered problems with youtube-dl) - [ ] Site support request (request for adding support for a new site) - [ ] Feature request (request for a new functionality) - [ ] Question - [ ] Other --- ### Output Console: ``` youtube-dl -v https://openload.co/embed/4VfaP6nlcd4/silicon-valley-s01e01-lat-720p.mp4 [debug] System config: [u'--force-ipv4', u'--external-downloader', u'aria2c', u'--external-downloader-args', u'-x 16 -s 16 -k 1M'] [debug] User config: [] [debug] Custom config: [] [debug] Command-line args: [u'-v', u'https://openload.co/embed/4VfaP6nlcd4/silicon-valley-s01e01-lat-720p.mp4'] [debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8 [debug] youtube-dl version 2018.05.01 [debug] Python version 2.7.15rc1 (CPython) - Linux-4.15.0-20-generic-x86_64-with-Ubuntu-18.04-bionic [debug] exe versions: ffmpeg 3.4.2-2, ffprobe 3.4.2-2, phantomjs 2.1.1 [debug] Proxy map: {} [Openload] 4VfaP6nlcd4: Downloading embed webpage [Openload] 4VfaP6nlcd4: Executing JS on webpage ERROR: Unable to extract stream URL; please report this 
issue on https://yt-dl.org/bug . Make sure you are using the latest version; see https://yt-dl.org/update on how to update. Be sure to call youtube-dl with the --verbose flag and include its complete output. Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/youtube_dl/YoutubeDL.py", line 789, in extract_info ie_result = ie.extract(url) File "/usr/local/lib/python2.7/dist-packages/youtube_dl/extractor/common.py", line 440, in extract ie_result = self._real_extract(url) File "/usr/local/lib/python2.7/dist-packages/youtube_dl/extractor/openload.py", line 347, in _real_extract 'stream URL')) File "/usr/local/lib/python2.7/dist-packages/youtube_dl/extractor/common.py", line 820, in _search_regex raise RegexNotFoundError('Unable to extract %s' % _name) RegexNotFoundError: Unable to extract stream URL; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; see https://yt-dl.org/update on how to update. Be sure to call youtube-dl with the --verbose flag and include its complete output. ``` ### Ignore Config File: ``` youtube-dl -v --ignore-config https://openload.co/embed/4VfaP6nlcd4/silicon-valley-s01e01-lat-720p.mp4 [debug] System config: [] [debug] User config: [] [debug] Custom config: [] [debug] Command-line args: [u'-v', u'--ignore-config', u'https://openload.co/embed/4VfaP6nlcd4/silicon-valley-s01e01-lat-720p.mp4'] [debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8 [debug] youtube-dl version 2018.05.01 [debug] Python version 2.7.15rc1 (CPython) - Linux-4.15.0-20-generic-x86_64-with-Ubuntu-18.04-bionic [debug] exe versions: ffmpeg 3.4.2-2, ffprobe 3.4.2-2, phantomjs 2.1.1 [debug] Proxy map: {} [Openload] 4VfaP6nlcd4: Downloading embed webpage [Openload] 4VfaP6nlcd4: Executing JS on webpage ERROR: Unable to extract stream URL; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; see https://yt-dl.org/update on how to update. 
Be sure to call youtube-dl with the --verbose flag and include its complete output. Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/youtube_dl/YoutubeDL.py", line 789, in extract_info ie_result = ie.extract(url) File "/usr/local/lib/python2.7/dist-packages/youtube_dl/extractor/common.py", line 440, in extract ie_result = self._real_extract(url) File "/usr/local/lib/python2.7/dist-packages/youtube_dl/extractor/openload.py", line 347, in _real_extract 'stream URL')) File "/usr/local/lib/python2.7/dist-packages/youtube_dl/extractor/common.py", line 820, in _search_regex raise RegexNotFoundError('Unable to extract %s' % _name) RegexNotFoundError: Unable to extract stream URL; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; see https://yt-dl.org/update on how to update. Be sure to call youtube-dl with the --verbose flag and include its complete output. ``` --- ### Description of your *issue*, suggested solution and other information (I speak Spanish and a little English; sorry for my spelling.) - Related issue: https://github.com/rg3/youtube-dl/issues/16137 I was downloading a video from the Openload site and got an error. I have the latest versions of youtube-dl and phantomjs. The video plays perfectly in my browser (Firefox 59.0.2, Linux 64-bit). Proof: https://i.imgur.com/2MUFBRY.png
cant-reproduce
low
Critical
319,717,215
go
proposal: crypto/tls: implement Session IDs resumption
### What version of Go are you using (`go version`)? go version go1.10 linux/amd64 ### Does this issue reproduce with the latest release? Yes ### What did you do? crypto/tls today only implements the session ticket resumption described in RFC 5077. Per https://en.wikipedia.org/wiki/Comparison_of_TLS_implementations#Extensions, JSSE does not support session ticket resumption, so I'm looking for a way to speed up the TLS handshake between Go and Java applications. https://tools.ietf.org/html/rfc5246 (The Transport Layer Security (TLS) Protocol Version 1.2) describes session ID resumption, which is also useful for speeding up the TLS handshake, and it is implemented by both OpenSSL and JSSE. The implementation should have a public interface similar to OpenSSL's SSL_CTX_add_session() to inject sessions into the server cache.
Proposal,Proposal-Hold,Proposal-Crypto
medium
Major
319,729,777
angular
ExpressionChangedAfterItHasBeenCheckedError with dynamic validators in a template driven composite form control
<!-- PLEASE HELP US PROCESS GITHUB ISSUES FASTER BY PROVIDING THE FOLLOWING INFORMATION. ISSUES MISSING IMPORTANT INFORMATION MAY BE CLOSED WITHOUT INVESTIGATION. --> ## I'm submitting a... <!-- Check one of the following options with "x" --> <pre><code>[x] Bug report </code></pre> ## Current behavior If you have a composite form control (think a `<my-address [formControl]="address"></my-address>` component) and the internal form has a dynamic validator (i.e. `[required]="isUSA"`), the control will throw an `ExpressionChangedAfterItHasBeenCheckedError` error when the validator (in this example, the `isUSA` value) changes: ``` Error: ExpressionChangedAfterItHasBeenCheckedError: Expression has changed after it was checked. Previous value: 'ng-valid: true'. Current value: 'ng-valid: false'. ``` ## Expected behavior There should not be an `ExpressionChangedAfterItHasBeenCheckedError` error. ## Minimal reproduction of the problem with instructions Click on the `Toggle Requiredness` button, and note in the console that there is an `ExpressionChangedAfterItHasBeenCheckedError` error. (The circular JSON error seems to be generated from StackBlitz.) Demo: https://stackblitz.com/edit/angular-d9rsnv?file=app%2Fmy-composite-control%2Fmy-composite-control.component.ts ## What is the motivation / use case for changing the behavior? Template-driven forms are easier to use, and template-driven (aka directive) validators are easier to work with and conditionally add and remove. You can do this similarly with reactive forms and you don't get this error. ## Environment <pre><code>Angular version: 5.2.8 <!-- Check whether this is still an issue in the most recent Angular version --> Browser: - [x] Firefox (desktop) version 59.0.3</code></pre> ## Other information Note that if you change the `toggleRequiredness` method to update the validators on the control, you don't get the error (but the point of this bug is that it should work by using purely template-driven means). 
For example, remove the `required` attribute from the `input` in the demo link above and update the `toggleRequiredness` method to this: ```ts toggleRequiredness() { this.nameRequired = !this.nameRequired; const control = this.form.controls['name']; if (this.nameRequired) { control.setValidators([Validators.required]); } else { control.clearValidators(); } control.updateValueAndValidity(); } ``` Demo (with the above modification): https://stackblitz.com/edit/angular-hhgkje?file=app%2Fmy-composite-control%2Fmy-composite-control.component.ts Alternatively, you can manually trigger `this.cdr.detectChanges();` when updating the requiredness. The argument of this bug is that this sort of behavior shouldn't be necessary with this kind of template-driven form.
type: bug/fix,freq2: medium,area: forms,state: confirmed,P4
high
Critical
319,765,015
TypeScript
Suggestion: Allow interface to extend conditional type
An interface cannot currently extend a conditional type. See: ([playground link](https://www.typescriptlang.org/play/index.html#src=type%20Test%3CT%3E%20%3D%20T%20extends%20%7B%7D%20%3F%20%7B%20test%3A%20T%20%7D%20%3A%20%7B%7D%0D%0A%0D%0A%2F%2F%20Ok%3A%20type%20with%20selected%20condition%0D%0Ainterface%20I%20extends%20Test%3Cnumber%3E%20%7B%20%7D%0D%0A%0D%0A%2F%2F%20%22Error%3A%20An%20interface%20may%20only%20extend%20a%20class%20or%20another%20interface.%22%22%0D%0Ainterface%20J%3CT%3E%20extends%20Test%3CT%3E%20%7B%20%7D%0D%0A%0D%0A%2F%2F%20Ok%3A%20type%20without%20condition%0D%0Atype%20Test2%3CT%3E%20%3D%20%7B%20test%3A%20T%20%7D%0D%0Ainterface%20K%3CT%3E%20extends%20Test2%3CT%3E%20%7B%20%7D)) ```typescript type Test<T> = T extends {} ? { test: T } : {} // Ok: type with selected condition interface I extends Test<number> { } // "Error: An interface may only extend a class or another interface." interface J<T> extends Test<T> { } // Ok: type without condition type Test2<T> = { test: T } interface K<T> extends Test2<T> { } ``` This example exhibits one of these problems: * `J<T>` should be allowed to extend `Test<T>`, to make it more complete and allow conditional type things in interfaces. --*OR*-- * The error message should be made more clear, because interfaces can obviously extend types as you can see above.
Bug,Help Wanted,Domain: Error Messages
low
Critical
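The limitation in the report above is commonly worked around with a type alias, since aliases (unlike `extends` clauses on interfaces) can compose conditional types by intersection. A minimal sketch reusing the report's `Test<T>` and `J<T>` names (the `extra` member is added purely for illustration):

```typescript
type Test<T> = T extends {} ? { test: T } : {};

// A type alias can compose the conditional type where an interface cannot
// extend it directly; resolution is deferred until T is supplied.
type J<T> = Test<T> & { extra: boolean };

// With T = number, Test<number> resolves to { test: number }.
const j: J<number> = { test: 42, extra: true };
console.log(j.test, j.extra);
```

The trade-off is that `J<T>` is no longer an interface, so it cannot itself be re-opened or implemented by name in the same way.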
319,765,571
pytorch
Do not put system paths in RPATH
This is what I'm seeing today: ``` $ readelf -d /opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/torch/_nvrtc.cpython-36m-x86_64-linux-gnu.so | grep RPATH 0x000000000000000f (RPATH) Library rpath: [/opt/conda/envs/pytorch-py3.6/lib:/usr/lib/x86_64-linux-gnu/:/usr/local/cuda/lib64:$ORIGIN/lib] ``` Paths `/usr/lib/x86_64-linux-gnu` and `/usr/local/cuda/lib64` were taken from the build environment, but they do not belong in RPATH since those paths are not shipped with the application. Those paths may contain versions of DT_NEEDED libraries that you do not want to use with pytorch; however, RPATH has precedence over everything else, so those libraries will get picked, and there is no way around that. For example, let's say the libcublas provided in `/usr/local/cuda/lib64` is not the version I want since it's too old; I have a newer version in `~/cuda/lib64`, but I have no way to point to it since `LD_LIBRARY_PATH` will be ignored. Caused by: https://github.com/pytorch/pytorch/pull/3255 (for `/usr/lib/x86_64-linux-gnu`) cc @malfet @seemethere @walterddr
module: build,triaged
low
Minor
319,778,221
rust
concat_idents cannot be used in pattern position
* The three places where an ident-returning macro can be used are patterns, types, and expressions. * The two places where `concat_idents` can be used are types and expressions. * **_Hmmmm._** I looked around previous issues and PRs related to `concat_idents` and the word "pattern" never even comes up. This seems to be an accidental omission.
A-macros,T-compiler,C-feature-request,A-patterns
low
Minor
319,787,827
pytorch
[Caffe2]Install problem
## Issue description I want to use Detectron, so I need to install Caffe2. I followed the instructions from the Caffe2 website (Ubuntu -> build from source), and when I ran `make install` I got the errors below: 6 errors detected in the compilation of "/tmp/tmpxft_0000e8de_00000000-20_depthwise_3x3_conv_op.compute_20.cpp1.ii". CMake Error at caffe2_gpu_generated_depthwise_3x3_conv_op.cu.o.Release.cmake:275 (message): Error generating file /home/liujiajun/workspace/pytorch/build/caffe2/CMakeFiles/caffe2_gpu.dir/operators/./caffe2_gpu_generated_depthwise_3x3_conv_op.cu.o caffe2/CMakeFiles/caffe2_gpu.dir/build.make:13397: recipe for target 'caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_depthwise_3x3_conv_op.cu.o' failed make[2]: *** [caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_depthwise_3x3_conv_op.cu.o] Error 1 CMakeFiles/Makefile2:2903: recipe for target 'caffe2/CMakeFiles/caffe2_gpu.dir/all' failed make[1]: *** [caffe2/CMakeFiles/caffe2_gpu.dir/all] Error 2 Makefile:138: recipe for target 'all' failed make: *** [all] Error 2 ## System Info Caffe2; Anaconda virtualenv; Python version: 2.7; OS: Ubuntu 16.04; CUDA 8; cuDNN 6.0; GCC version: 5.4. I searched for the key "DepthwiseConv2D_3*3" in the issues but couldn't find useful information, so could you please give me some explanation?
caffe2
low
Critical
319,789,915
flutter
Drag Box FeedBack Limit
Can we set the limit or range of the feedback of the DragBox on the screen?
c: new feature,framework,f: gestures,P2,team-framework,triaged-framework
low
Major
319,796,692
electron
Implement powerMonitor 'shutdown' event for Windows
Electron v2.0.0 introduced a new [shutdown](https://github.com/electron/electron/blob/2-0-x/docs/api/power-monitor.md#event-shutdown-linux-macos) event to `powerMonitor` - that's currently implemented for Linux and macOS but not for Windows. Please implement the 'shutdown' event for Windows as well.
enhancement :sparkles:,platform/windows,2-0-x,semver/minor
low
Major
319,866,774
TypeScript
JSDoc support for @yields
It would be nice to be able to use the [`@yields`](http://usejsdoc.org/tags-yields.html) JSDoc tag. We work with JS and JSDoc, while taking advantage of TSC and VSCode. We also use the [valid-jsdoc](https://eslint.org/docs/rules/valid-jsdoc) eslint rule. ## We have: ``` javascript /** @typedef Thing (...) */ function* walkThings() { //... some yield here and there } ``` ## We *want to* ``` javascript /** * @yields {Thing} */ function* walkThings() { //... some yield here and there } ``` ## We *can't do* ``` javascript /** * @returns {IterableIterator<Something>} */ function* walkThings() { //... some yield here and there } ``` Because it doesn't `return`, but `yield`s, and eslint complains about it. We can disable the rule here, but it is not the desirable scenario.
Suggestion,Awaiting More Feedback,Domain: JSDoc
low
Major
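The `@returns {IterableIterator<...>}` route the report says it "can't do" (because of the eslint rule, not the type checker) is the same typing the checker itself understands; a minimal TypeScript-side sketch of the equivalent annotation, with an assumed `Thing` shape for illustration:

```typescript
interface Thing {
  name: string;
}

// Annotating the generator's return type as IterableIterator<Thing> conveys
// the same information that @yields {Thing} would document in JSDoc.
function* walkThings(): IterableIterator<Thing> {
  yield { name: "a" };
  yield { name: "b" };
}

// Each yielded value is typed as Thing at the use site.
const names = Array.from(walkThings()).map((t) => t.name);
console.log(names.join(","));
```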
319,907,999
You-Dont-Know-JS
Types & Grammar - Chapter 2 : TypeError while reversing a string.
https://github.com/getify/You-Dont-Know-JS/blob/master/types%20%26%20grammar/ch2.md#strings There's a code in this section that tries to use the **Array.prototype.reverse** function to reverse a string. ``` Array.prototype.reverse.call( a ); // still returns a String object wrapper (see Chapter 3) // for "foo" :( ``` The comment says that it returns a **String object wrapper**. However, running it on Chrome 66 (and maybe other browsers too) gives a **TypeError**. ![screenshot from 2018-05-03 18-21-27](https://user-images.githubusercontent.com/26036974/39577381-d5d32f18-4efe-11e8-93e0-a9399aee7ac4.png)
for second edition
low
Critical
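The TypeError in the report is expected behavior in modern engines: `Array.prototype.reverse` mutates its receiver in place, and a string's characters are read-only. A quick sketch of the failure plus the usual workaround (split into a real array, reverse, join back):

```typescript
const a = "foo";

// reverse() cannot mutate an immutable string, so build an array first,
// reverse it, then join back into a string.
const reversed = a.split("").reverse().join("");
console.log(reversed);

// Calling reverse directly on the string throws, as the screenshot shows.
let threw = false;
try {
  (Array.prototype.reverse as any).call(a);
} catch (e) {
  threw = e instanceof TypeError;
}
console.log(threw);
```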
320,022,407
pytorch
feature request: support for new/future hardware accelerators
## Issue description There is a slew of new AI hardware accelerators available that can beat GPUs handily either in terms of raw performance or performance per Watt. There are also many new efforts in analytic and sensor-fusion hardware acceleration that are optimizing tensor processors for specific tasks using specific number systems, particularly for the embedded and robotics space. For PyTorch to be future-proof, it would be beneficial if PyTorch provided a more general-purpose interface to connect to hardware acceleration alternatives. We would like to collaborate with other researchers interested in AI research in Deep Reinforcement Learning who share our desire to connect PyTorch to optimized robotic brain and spine alternatives that look nothing like GPUs. cc @ezyang
feature,triaged,shadow review
low
Major
320,043,204
rust
Improve "private type in public interface" message
There seems to be an issue with detecting the `pub`-ness of types on the current nightly (EDIT: and actually stable) branch. Consider the following code: `main.rs`: ```rust mod hardware; fn main() { hardware::do_stuff(); } ``` `hardware/mod.rs`: ```rust mod parser; #[derive(Debug)] struct Hardware { foo: u32 } pub fn do_stuff() { let hw = parser::get_hardware(); println!("{:?}", hw); } ``` `hardware/parser.rs`: ```rust use super::Hardware; pub fn get_hardware() -> Hardware { Hardware { foo: 5 } } ``` There are no public types in the `hardware` module that are exposed in any way. Yet, I get this error from the compiler: ``` [rustc] private type `hardware::Hardware` in public interface can't leak private type ``` [Play Link](https://play.rust-lang.org/?gist=62c7b418185a8e91f51361f536b506c2&version=nightly&mode=debug)
C-enhancement,A-lints,A-diagnostics,T-compiler
low
Critical
320,067,892
go
cmd/go: test mixes stderr and stdout
https://github.com/golang/go/commit/7badae85f20f1bce4cc344f9202447618d45d414 appears to have made `go test` mix stderr and stdout from tests, which has made Go 1.10 a regression for CockroachDB's benchmarks. This issue is related to https://github.com/golang/go/issues/18010 : Not only is the benchmark output not swallowed, but stderr and stdout are now mixed, so, if our binary produces logs (on stderr, as ours does), it's now impossible to see the benchmark results or use tools like `benchstat`. In particular, I believe `benchstat` fails to work because the mixing of stderr and stdout doesn't even use line buffering - so the benchmark measurement lines are mixed with log lines and `benchstat` can't recognize them any more. I generally concur with the folks on #18010 arguing that the swallowing/not swallowing of output is inconsistent (and there's no control over it as far as I can tell). Additionally, now the mixing of stdout and stderr is really a problem. The argument for not doing anything about `go test -bench` not swallowing output was that "we should at least preserve existing behavior as much as possible", but it would appear that existing behavior was changed in a quite significant way by this mixing. So I would kindly ask for the swallowing decision to be re-evaluated. ### What version of Go are you using (`go version`)? go version go1.10 darwin/amd64 ### Does this issue reproduce with the latest release? Yes ### What operating system and processor architecture are you using (`go env`)? OS X/ amd64 cc @benesch
NeedsInvestigation
low
Minor
320,089,952
flutter
Centralized location for test file output
If we had the notion of a test output folder, tests would have the flexibility to not only write to stdout and stderr, but to write more helpful info, like the contents of adb logs, screenshots, etc. ### Proposal During `flutter test`, we have the Flutter tool create a temp folder before running any tests. For each test in the suite, the test harness will create a subfolder dedicated to that test and set the path of that subfolder into an environment variable (e.g. `TEST_OUTPUT_DIR`) before spawning the test. When each test is done running, if it did not output anything into its folder, the test harness will delete the empty directory. When all tests are done running, if none of the tests wrote any output files, it'll delete the temp directory. If any files were written during any of the tests, the Flutter tool will keep the temp directory intact and print a message to the caller that "test outputs are available at ..."
a: tests,tool,P3,team-tool,triaged-tool
low
Minor
320,130,914
vscode
VSCode opens URL without user's permission
- VSCode Version: today's "insider build" (May 3, 2018) - OS Version: macOS Steps to Reproduce: 1. download today's insider build (May 3, 2018) 2. run it <!-- Launch with `code --disable-extensions` to check. --> Does this issue occur when all extensions are disabled?: ? ----------- VSCode will open a URL and will pass some data to Microsoft's servers via GET parameters (I didn't catch what they were, the page immediately redirected to another one) without asking the user for permission to do so.
under-discussion
medium
Major
320,134,356
vue
warn if $set is used on a property that already exist
### Version 2.5.16 ### Reproduction link [https://codepen.io/JJPandari/pen/gzLVBq?editors=1010](https://codepen.io/JJPandari/pen/gzLVBq?editors=1010) ### Steps to reproduce See the codepen snippet. Follow the comment there to change the vm's data and see what happens. ### What is expected? Even if the prop already exists, using `set` still makes it reactive, thus triggering a view update. ### What is actually happening? Using `set` later doesn't update the view. --- Related source code: https://github.com/vuejs/vue/blob/3eb37acf98e2d9737de897ebe7bdb7e9576bcc21/src/core/observer/index.js#L192 I think most users would expect `set` to make the prop reactive whenever it's used. I initially opened [an issue for the api doc](https://github.com/vuejs/vuejs.org/issues/1601) because it wasn't clear (to me) about this, but the comment in the source code is. <!-- generated by vue-issues. DO NOT REMOVE -->
feature request,good first issue,has PR
low
Major
320,174,704
opencv
[RFC] switch for LTO friendliness
##### System information (version) - OpenCV => master - Operating System / Platform => MS Windows (Linux builds are bundled with distros) - Compiler => MSVC or GNU ##### Detailed description I'm using `opencv` heavily in my own open-source project. Given the shared-object size after exporting symbols, it's better to link it statically into multiple resulting shared objects than to ship it linked as a shared library. I'd like to conditionalize code with runtime dispatching that LTO can't prune. However, such a locally-maintained patchset is bound to get out of date and cause great maintainership trouble. Would you consider accepting such a patchset, provided it's as clean and non-intrusive as possible? It's plenty of work and I prefer to ask up front. The modules I use are: - core - calib3d - imgproc - videoio At the moment some things are non-optimal, e.g. statically linking to `highgui` increases the shared-object size even if the `highgui` header isn't included to begin with. An example of a change is `#ifdef`-conditionalizing functions dispatching on strings, to exclude unused algorithms, then `#else`'ing them with a function dispatching on a compile-time-constant parameter using partial specialization. This way unused algorithms can be pruned.
##### Steps to reproduce Link statically with LTO, including pruning unused symbols. For example, the original `aruco` utils are 14-16 MB in size when linking against all `opencv` libraries. Again, I'm willing to do the work myself (and iterate upon review, of course).
RFC
low
Critical
320,287,874
vscode
[html-templating] easy way to inherit advanced html features
With the vscode extension [Better Jinja](https://github.com/samuelcolvin/jinjahtml-vscode) the `jinja-html` language highlights jinja html templates; however, some more advanced features available with the `html` language are not working with `jinja-html`: * highlighting the opening tag when the cursor is in the closing tag (and vice versa) ![image](https://user-images.githubusercontent.com/4039449/39631948-95a001c0-4fab-11e8-911d-6fd571357b76.png) ![image](https://user-images.githubusercontent.com/4039449/39631931-8c49dbbe-4fab-11e8-8595-42f7ab8b1eda.png) In the second case with `jinja-html` as the language all `div`s are highlighted, not just the sibling of the tag under the cursor * auto-creating the closing tag once the opening tag is created. And probably more, but these are the two which would be most useful. How do I get these advanced features working in my child language definition? I've tried copying most of the config from `extensions/html/package.json` and `extensions/html/language-configuration.json` into my extension but to no avail. I also tried looking through `extensions/html-language-features` but there's nothing obvious there to include. This isn't as simple as adding more file extensions somewhere; even in a `.html` file, if the language is changed to `jinja-html` the features above disappear. - VSCode Version: `1.22.2 3aeede733d9a3098f7b4bdc1f66b63b0f48c1ef9 x64` - OS Version: ubuntu 18.04 Steps to Reproduce: 1. install "Better Jinja" 2. open or create a file with some html in it 3. change the language to `jinja-html` 4. try using advanced html features as described above Does this issue occur when all extensions are disabled?: Yes (the advanced features work with the `html` language without extensions; obviously the error can only be shown with the extension installed)
feature-request,html
low
Critical
320,307,848
go
proposal: os: make a Go 2 type for environment variables?
Should we change the type of environment variables in Go 2? Currently it's `[]string` which is slightly odd. We at least solved the duplicated problems in Go 1.9 with https://golang.org/doc/go1.9#os/exec (#12868). In https://github.com/golang/go/issues/25210#issuecomment-386256365, @alexbrainman suggested `map[string]string` but that doesn't work well for case-insensitive Windows, and doesn't permit intentional(ly odd) duplicates or specific ordering on Unix, if such control is ever desired. (whether we wish to permit that is an additional question) It might be nice to have some new opaque type (not exposing the underlying representation) and have various constructors and accessors.
v2,Proposal
medium
Critical
320,374,424
rust
derive(Debug) on huge enum causes massive memory spike during liveness checking
Discovered on twitter: https://twitter.com/malwareunicorn/status/992438462652403713 `#[derive(Debug)]` on an enum with 10,000 variants causes a 7GB spike. `#[derive(Debug)]` on an enum with 20,000 variants causes a **massive** 28GB spike. 20,000 variant code to reproduce: https://gist.github.com/retep998/fdddd37aea38c5d2bfacbf63773e2417 Time passes from 20,000 variant build which doesn't show the massive spike (in fact it shows the opposite because the massive spike pushed most of rustc's memory usage into the page file): ``` time: 0.025; rss: 24MB parsing time: 0.000; rss: 24MB recursion limit time: 0.000; rss: 24MB crate injection time: 0.000; rss: 24MB plugin loading time: 0.000; rss: 24MB plugin registration time: 0.287; rss: 82MB expand crate time: 0.000; rss: 82MB check unused macros time: 0.289; rss: 82MB expansion time: 0.000; rss: 82MB maybe building test harness time: 0.009; rss: 82MB maybe creating a macro crate time: 0.026; rss: 82MB creating allocators time: 0.028; rss: 82MB AST validation time: 0.222; rss: 98MB name resolution time: 0.022; rss: 98MB complete gated feature checking time: 0.104; rss: 131MB lowering ast -> hir time: 0.059; rss: 131MB early lint checks time: 0.149; rss: 134MB indexing hir time: 0.000; rss: 102MB load query result cache time: 0.000; rss: 102MB looking for entry point time: 0.000; rss: 102MB looking for plugin registrar time: 0.009; rss: 102MB loop checking time: 0.009; rss: 104MB attribute checking time: 0.009; rss: 104MB stability checking time: 0.098; rss: 121MB type collecting time: 0.000; rss: 121MB outlives testing time: 0.000; rss: 121MB impl wf inference time: 0.024; rss: 139MB coherence checking time: 0.000; rss: 139MB variance testing time: 0.076; rss: 157MB wf checking time: 0.144; rss: 157MB item-types checking time: 60.147; rss: 211MB item-bodies checking time: 0.506; rss: 231MB rvalue promotion time: 0.170; rss: 232MB privacy checking time: 0.016; rss: 232MB intrinsic checking time: 96.740; rss: 234MB match 
checking time: 54.939; rss: 52MB liveness checking time: 54.936; rss: 246MB borrow checking time: 0.001; rss: 246MB MIR borrow checking time: 0.000; rss: 246MB MIR effect checking time: 0.069; rss: 250MB death checking time: 0.012; rss: 250MB unused lib feature checking time: 0.245; rss: 261MB lint checking time: 0.000; rss: 261MB dumping chalk-like clauses time: 0.004; rss: 261MB resolving dependency formats time: 0.011; rss: 262MB write metadata time: 0.276; rss: 281MB translation item collection time: 0.002; rss: 281MB codegen unit partitioning time: 0.002; rss: 284MB write allocator module time: 0.011; rss: 287MB translate to LLVM IR time: 0.001; rss: 287MB assert dep graph time: 0.000; rss: 287MB serialize dep graph time: 0.323; rss: 287MB translation time: 0.005; rss: 273MB llvm function passes [hugeenum2] time: 0.004; rss: 273MB llvm function passes [hugeenum3] time: 0.005; rss: 273MB llvm function passes [hugeenum0] time: 0.005; rss: 273MB llvm function passes [hugeenum4] time: 0.004; rss: 273MB llvm function passes [hugeenum1] time: 0.003; rss: 254MB llvm module passes [hugeenum0] time: 0.003; rss: 253MB llvm module passes [hugeenum2] time: 0.003; rss: 253MB llvm module passes [hugeenum3] time: 0.003; rss: 253MB llvm module passes [hugeenum1] time: 0.003; rss: 253MB llvm module passes [hugeenum4] time: 0.039; rss: 226MB codegen passes [hugeenum0] time: 0.040; rss: 221MB codegen passes [hugeenum2] time: 0.043; rss: 207MB codegen passes [hugeenum3] time: 0.043; rss: 208MB codegen passes [hugeenum4] time: 0.048; rss: 202MB codegen passes [hugeenum1] time: 0.059; rss: 203MB LLVM passes time: 0.000; rss: 46MB serialize work products time: 0.322; rss: 46MB running linker time: 0.371; rss: 46MB linking ``` The massive spike: ![](https://i.imgur.com/dpnvToc.png)
C-enhancement,T-compiler,I-compilemem
low
Critical
320,382,278
go
x/net/publicsuffix: PublicSuffix() is case sensitive
### What version of Go are you using (`go version`)? ``` go version go1.10.1 linux/amd64 ``` ### Does this issue reproduce with the latest release? Yes ### What operating system and processor architecture are you using (`go env`)? ``` GOARCH="amd64" GOOS="linux" ``` ### What did you do? PublicSuffix() returns incorrect values when the domain contains letters with uppercase According to [RFC4343](https://tools.ietf.org/html/rfc4343) I believe PublicSuffix() should be case insensitive. ``` package main import ( "fmt" "golang.org/x/net/publicsuffix" ) func main() { suffix, icann := publicsuffix.PublicSuffix("abc.CoM.Br") fmt.Printf("suffix: %s\nicann: %v\n", suffix, icann) } ``` ### What did you expect to see? ``` suffix: com.br icann: true ``` ### What did you see instead? ``` suffix: Br icann: false ```
NeedsInvestigation
low
Major
320,389,679
go
x/mobile/cmd/gomobile: RunOnJVM is not available from init functions
In my library, JNI functions are called from the Go side. The JVM is accessed via a global C variable `current_vm`. I admit that depending on this variable is not a good way, but there is no other way to access JNI from Go. The actual code is https://github.com/hajimehoshi/ebiten/blob/master/internal/jni/jni_android.go I realized that `current_vm` is not initialized yet when `init` functions are called. IIUC, `current_vm` is initialized in `SetCurrentContext` when the activity's `onCreate` is called, and this `onCreate` is called after all `init` functions are called and before `main` is called. Wouldn't it be possible to make the JVM available even from `init` functions? For gomobile-bind, as `init` functions are automatically called before calling `SetCurrentContext`, it might be impossible. If this is impossible, is there any other way to call JNI functions from the Go side? As for iOS, Objective-C binding via cgo is available and there is no such problem.
mobile
low
Major
320,401,204
TypeScript
Repeated mapped type inference causes incorrect typeToString results
**TypeScript Version:** 2.8.0-insiders.20180320 **Search Terms:** mapped type inference display typetostring **Code** Via https://stackoverflow.com/questions/50181650/type-inference-lost-inner-level-types-in-typescript ```ts interface IAction { type: string; } type Reducer<S> = (state: S, action: IAction) => S function combineReducers<S>(reducers: { [K in keyof S]: Reducer<S[K]> }): Reducer<S> { const dummy = {} as S; return () => dummy; } const test_inner = (test: string, action: IAction) => { return 'dummy'; } const test = combineReducers({ test_inner }); const test_outer = combineReducers({ test }); // '{test: { test_inner: any } }' type FinalType = ReturnType<typeof test_outer>; var k: FinalType; k.test.test_inner // 'string' ``` **Expected behavior:** `FinalType`'s hover type should be `{ test: { test_inner: string } }` **Actual behavior:** Shows as `{ test: { test_inner: any } }` **Playground Link:**
[Link](https://www.typescriptlang.org/play/#src=interface%20IAction%20%7B%0D%0A%20%20%20%20type%3A%20string%3B%0D%0A%7D%0D%0A%0D%0Atype%20Reducer%3CS%3E%20%3D%20(state%3A%20S%2C%20action%3A%20IAction)%20%3D%3E%20S%0D%0A%0D%0Afunction%20combineReducers%3CS%3E(reducers%3A%20%7B%20%5BK%20in%20keyof%20S%5D%3A%20Reducer%3CS%5BK%5D%3E%20%7D)%3A%20Reducer%3CS%3E%20%7B%0D%0A%20%20%20%20const%20dummy%20%3D%20%7B%7D%20as%20S%3B%0D%0A%20%20%20%20return%20()%20%3D%3E%20dummy%3B%0D%0A%7D%0D%0A%0D%0Aconst%20test_inner%20%3D%20(test%3A%20string%2C%20action%3A%20IAction)%20%3D%3E%20%7B%0D%0A%20%20%20%20return%20'dummy'%3B%0D%0A%7D%0D%0Aconst%20test%20%3D%20combineReducers(%7B%0D%0A%20%20%20%20test_inner%0D%0A%7D)%3B%0D%0A%0D%0Aconst%20test_outer%20%3D%20combineReducers(%7B%0D%0A%20%20%20%20test%0D%0A%7D)%3B%0D%0A%0D%0A%2F%2F%20'%7Btest%3A%20%7B%20test_inner%3A%20any%20%7D%20%7D'%0D%0Atype%20FinalType%20%3D%20ReturnType%3Ctypeof%20test_outer%3E%3B%0D%0A%0D%0Avar%20k%3A%20FinalType%3B%0D%0Ak.test.test_inner%20%2F%2F%20'string'%0D%0A)
Bug,Domain: Type Display
low
Major
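The reducer-combining pattern from the report can be fleshed out into a runnable sketch. The function body below is an assumed implementation (deliberately untyped internally), since the report is about the inferred *display* type at the call sites, not runtime behavior:

```typescript
interface IAction {
  type: string;
}
type Reducer<S> = (state: S, action: IAction) => S;

// Inference maps an object of reducers { K: Reducer<S[K]> } back to a single
// Reducer<S> over the combined state shape.
function combineReducers<S>(reducers: { [K in keyof S]: Reducer<S[K]> }): Reducer<S> {
  return (state, action) => {
    const next: any = {};
    for (const key of Object.keys(reducers)) {
      next[key] = (reducers as any)[key]((state as any)[key], action);
    }
    return next as S;
  };
}

const test_inner = (state: string, _action: IAction) => state + "!";
const test = combineReducers({ test_inner });
const test_outer = combineReducers({ test });

// The checked state type really is { test: { test_inner: string } }, even
// where the hover display degrades to any, as the report notes.
const out = test_outer({ test: { test_inner: "hi" } }, { type: "noop" });
console.log(out.test.test_inner);
```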
320,443,934
TypeScript
Proposal: Easier Migration with Types in JavaScript Files
This issue is the dual of #22665. # Background In last week's design meeting, we discussed #22665 and it seemed like we had some positive sentiment around the proposal. Specifically, issues where users hit an expressivity wall or mismatch in `.js` when migrating to `.ts` files provide a compelling case. However, there are certain issues with the proposal: * It still requires users to make a file extension change from `.js` to `.ts`, which usually implies some change in build steps. * It provides no easy migration path from other type systems with annotation syntax used in `.js` files (see https://github.com/Microsoft/TypeScript-Babel-Starter/issues/1). * It introduces the concept of a "diluted/loose" TypeScript experience which can make things more confusing. # Proposal We can introduce a new mode that enables TypeScript-specific constructs in `.js` files. This new mode would allow things like type annotations, `interface`s, class property annotations, etc. to be declared in `.js` files. Users would have to explicitly opt in to this mode for transitionary purposes, with the end goal of migrating to TypeScript. ## Supported constructs The following constructs from TypeScript would be supported: * Type annotations (`: Foo`) * Interfaces (`interface Foo { /*...*/ }`) * Type aliases (`type Foo = "hello" | "world"`) * `as`-style type assertions (`xyz as SomeType`) * Non-nullable type assertions (`x!.bar`) * Definite assignment assertions (`let x!: number`) * Enums (`enum E { /*...*/ }`) * Import aliases and export aliases ## Unsupported constructs The following would not be supported, to encourage users to write ECMAScript modules and standard enums that utilize late binding: * Namespaces * `const` enums ## Existing typed `.js` file semantics A JavaScript file still has the same specialized semantics, including understanding things like CommonJS modules, JSDoc type annotations, ES5-style constructor functions, object literals potentially being open-ended, etc.
However, in the presence of an analogous TypeScript construct, the TypeScript construct always wins. For example, the following declares `x` as type `number`, and potentially issues an error on the JSDoc. ```ts /** * @type {string} */ var x: number; ``` # Drawbacks This proposal avoids the aforementioned issues with #22665, but has some new drawbacks. For one, it seems like there's little-to-no advantage to moving from a `.js` file to a `.ts` file, even though TypeScript is easier to reason about and make assumptions on (whereas much of our JavaScript semantics are more "best-effort"). The other issue, which is likely much bigger, is that this conflates where JavaScript ends and TypeScript begins. While the intent here is to make migration easier, enabling this functionality could be perceived poorly as an attempt to extend the language inappropriately. While we hope users understand this is **not** the case, we are extremely open to feedback from the community on this issue. If this flag was called something like `--jsMigrationMode`, perhaps we could be explicit on the intent and also signal to users that having this flag on is not ideal. ## Existing Precedent Common usage of JSX and Flow in `.js` files in the Babel ecosystem (as opposed to using the `.jsx` file extensions) provides a reasonable precedent for supporting TypeScript constructs in `.js` files.
Suggestion,Awaiting More Feedback
low
Critical
320,453,909
pytorch
[proposal] [discussion] Refactor pruning/weight_norm using new Reparametrization functionality + actually deprecate old impl of SpectralNorm
Currently the [weight_norm](https://github.com/pytorch/pytorch/blob/master/torch/nn/utils/weight_norm.py) and [spectral_norm](https://github.com/pytorch/pytorch/blob/master/torch/nn/utils/spectral_norm.py) are patching a passed module + implement special functions for adding/removing these from a module. Some ideas for refactoring to make it less tricky: - provide a stable signature for getting weight, then they can be cleanly used with methods such as `torch.matmul` and `F.conv2d` - if module patching (adding some new buffers as parameters and registering a hook) is needed and is a reasonable pattern, provide a user-facing stable abstraction for it (especially adding and removal of parameters). It seems we have a chain of decorators-hooks, and each of them may have some extra buffers, and currently they are all patched into the passed module object. cc @jerryzh168 @jianyuh @dzhulgakov
module: nn,triaged
high
Critical
320,454,689
TypeScript
Experiment with always using parameters from base types for derived methods
Note some related issues for doing this with *properties* rather than methods: #3667, #6118, #1373. # Goal We want to make it easier for users to derive types without rewriting the same parameter signature every time ```ts class Base { method(x: number) { // ... } } class Derived extends Base { method(x) { // 'x' should have the type 'number' here. } } ``` # Potential ideas * Only enable in `noImplicitAny` (doesn't work for default initializers 😞) * Revert to `any` in all locations, opt in with another strictness flag (😞) * Something else? 😕 # Potential issues ## Default initializers with more capable derived types ```ts class A { a = 1; } class B extends A { b = 2; } class Base { method(x: A) { // ... } } class Derived extends Base { method(x = new B) { x.b; // Today, 'x' has type 'B' which is technically unsound // but that's just what we do. Does changing this to 'A' break things? } } ``` ## Default initializers that become less-capable via contextual types ```ts class Base { method(x: "a" | "b") { // ... } } class Derived extends Base { method(x = "a") { // We have to make sure 'x' doesn't have type '"a"' // which is both unsound and less useful. } } ``` ## Distinction between properties and methods Would this work? ```ts class Base { method = (x: number) => { // ... } } class Derived extends Base { method(x) { // Does 'x' have the type 'number' here? } } ``` What about this? ```ts class Base { method(x: number) { // ... } } class Derived extends Base { method = (x) => { // Does 'x' have the type 'number' here? } } ``` ____ **Keywords:** base type derived contextual contextually inherit methods implicit any
Suggestion,In Discussion
high
Critical
320,459,106
material-ui
[Menu] Clicking outside a menu blocks click from bubbling up
## Current Behavior There is a fundamental issue with menu dismissal. When presented with multiple buttons or select fields that show a menu, it takes 2 clicks to dismiss the existing menu and show the next one. ![material-ui](https://user-images.githubusercontent.com/2038264/39656921-7d1544dc-4fd1-11e8-9956-fb3440461c85.gif) ## Steps to Reproduce In the demo below, try opening the first menu by clicking on the first button, then the second menu by clicking on the second button. Notice that an extra click is needed to dismiss the first menu, and then another one to show the second menu. https://codesandbox.io/s/9lp94v86zo ## Expected Behavior Good UX guidelines recommend reducing the number of clicks needed to perform an action. As opposed to modals and dialogs, menus are not expected to hijack the click-away events. The issue is that the background click to dismiss a menu has an `e.preventDefault()` which prevents it from bubbling up to the other element. In other frameworks, you can click on another element and it will have a double effect: close the first menu and show the new one in one click action. For reference: *Bootstrap* http://getbootstrap.com/2.3.2/components.html#buttonDropdowns ![bootstrap](https://user-images.githubusercontent.com/2038264/39656862-20504d3c-4fd1-11e8-8957-0a80cc33b4e3.gif) *Semantic-UI* ![semantic-ui](https://user-images.githubusercontent.com/2038264/39656804-b0666056-4fd0-11e8-998c-d1ad26846445.gif) *Sencha ExtJS* http://examples.sencha.com/extjs/6.5.3/examples/kitchensink/?modern#buttons-split ![extjs6-2](https://user-images.githubusercontent.com/2038264/39657084-b412f5d2-4fd2-11e8-82dc-c75a02c96eda.gif) *UIKit* https://getuikit.com/v2/docs/button.html ## Your Environment | Tech | Version | |--------------|---------| | Material-UI | v1.0.0-beta.44 | | React | 16 | | browser | Chrome 65 | | etc | |
new feature,component: menu,component: select
medium
Critical
320,475,543
pytorch
Optional modifiers (e.g., Tensor?) are not checked for non-dispatched native functions
I noticed this because I swapped some of the CuDNN RNN functions from pass through to dispatched, and I started getting failures when I passed undefined tensors to `Tensor x` arguments (...from C++, I think). Which is clearly supposed to be not allowed, but it shouldn't have mattered which dispatch strategy was being used here. Maybe C10 will make this problem obsolete.
triaged,module: dispatch
low
Critical
320,489,157
rust
Suggested fix for E0658 does not resolve the compiler error
A little while back I released my [bitrange](https://github.com/trangar/bitrange) crate. On `rustc 1.27.0-nightly (ac3c2288f 2018-04-18)`, this works like a charm. On `rustc 1.27.0-nightly (91db9dcf3 2018-05-04)`, I get the following error: ``` error[E0658]: procedural macros cannot be expanded to expressions ... = help: add #![feature(proc_macro_non_items)] to the crate attributes to enable = note: this error originates in a macro outside of the current crate (in nightly builds, run with -Z external-macro-backtrace for more info) ``` I've tried adding the suggested `#![feature(proc_macro_non_items)]` to the following items, but the error is not resolved: - `bitrange_plugin/src/lib.rs`, the crate that defines the proc macros - `src/lib.rs`, the crate that uses the `bitrange_plugin` crate and exposes a new macro - `src/bin/test.rs`, a test crate that imports `src/lib.rs` and calls that macro (which includes the proc macro) I've added the suggested fixes to [this branch](https://github.com/Trangar/bitrange/tree/proc_macro_issue). (run `cargo test` to trigger the errors)
C-enhancement,A-macros,T-compiler
low
Critical
320,493,089
godot
Polygon 2D UV editor UI unclear
<!-- Please search existing issues for potential duplicates before filing yours: https://github.com/godotengine/godot/issues?q=is%3Aissue --> **Godot version:** <!-- Specify commit hash if non-official. --> af9a620 **OS/device including version:** <!-- Specify GPU model and drivers if graphics-related. --> **Issue description:** <!-- What happened, and what was expected. --> Testing the new Polygon 2D UV editor, I cannot distinguish small weight values when painting. Can there be a way to show the number, or some other way to display the weight clearly? Can we have text showing the paint mode, like add and minus? I noticed that the tooltip explains the button, but I think it is not direct enough; can we have text like **Paint Mode:** before the two dots? Can there be a numerical input for the strength slider? ![2018-05-05 17 02 04](https://user-images.githubusercontent.com/10883749/39661561-11f2932a-5086-11e8-8553-9dfacf634f2d.png) It is hard to see the selected paint mode: the white dot turns blue, but the black dot doesn't change when selected. **Steps to reproduce:** **Minimal reproduction project:** <!-- Recommended as it greatly speeds up debugging. Drag and drop a zip archive to upload it. -->
enhancement,discussion,topic:editor,usability
low
Critical
320,508,979
godot
[Bullet] Rigidbodies have crazy movement when used with joints
<!-- Please search existing issues for potential duplicates before filing yours: https://github.com/godotengine/godot/issues?q=is%3Aissue --> **Godot version:** <!-- Specify commit hash if non-official. --> 3.0.2 **OS/device including version:** <!-- Specify GPU model and drivers if graphics-related. --> linux mint **Issue description:** <!-- What happened, and what was expected. --> I tried to make a simple ragdoll and the movement went crazy with Bullet but not with Godot Physics. I think it has something to do with the size of the objects. I tested with default-size collision shapes and rigidbodies with joints, and the result was stable. So I guess it is related to the size, as the collision shapes in the attached project have a size param smaller than 1. **Steps to reproduce:** check the project, switch the physics engine and run the scene **Minimal reproduction project:** <!-- Recommended as it greatly speeds up debugging. Drag and drop a zip archive to upload it. --> [BUG.zip](https://github.com/godotengine/godot/files/1976687/BUG.zip)
bug,confirmed,topic:physics
low
Critical
320,597,277
go
math: Pow() results are less accurate than libc pow()
Please see the detailed discussion on the topic here: https://groups.google.com/forum/#!topic/golang-nuts/LqVD5kMHJQw Note that the headline is about Fibonacci numbers; however, the root issue is the accuracy of math.Pow compared to the libc pow() function. ### What version of Go are you using (`go version`)? ```go version go1.9.5 linux/amd64``` ### Does this issue reproduce with the latest release? Did not test with go1.10, but could not find anything related to math performance in the release notes ### What operating system and processor architecture are you using (`go env`)? ``` $ go env GOARCH="amd64" GOBIN="" GOEXE="" GOHOSTARCH="amd64" GOHOSTOS="linux" GOOS="linux" GOPATH="/home/ylifshit/go" GORACE="" GOROOT="/usr/lib/golang" GOTOOLDIR="/usr/lib/golang/pkg/tool/linux_amd64" GCCGO="gccgo" CC="gcc" GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build223266492=/tmp/go-build -gno-record-gcc-switches" CXX="g++" CGO_ENABLED="1" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" ``` ### What did you do? Compared the accuracy of math.Pow to libc pow(). Code, as well as detailed results, are in the go-nuts discussion above ### What did you expect to see? see group discussion above ### What did you see instead? see group discussion above
NeedsInvestigation
low
Critical
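The accuracy comparison in the report above is ultimately about how closely a pow() implementation tracks the exactly-rounded result. As a small, self-contained illustration (in Python rather than Go, and independent of the linked go-nuts thread), Binet's formula recovers Fibonacci numbers from a floating-point power, and tiny pow() errors become visible once the exact integers outgrow float64's 53-bit mantissa:

```python
import math

def fib_exact(n):
    # Exact integer Fibonacci via iteration.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_binet(n):
    # Binet's formula using floating-point pow(); the rounded result
    # depends on how accurately pow() computes phi**n.
    sqrt5 = math.sqrt(5.0)
    phi = (1.0 + sqrt5) / 2.0
    return round(math.pow(phi, n) / sqrt5)

# For small n the float result rounds to the exact integer...
assert all(fib_binet(n) == fib_exact(n) for n in range(40))
# ...but fib(90) ~ 2.9e18 exceeds 2**53, so differences between pow()
# implementations can change the rounded value.
print(fib_exact(90), fib_binet(90))
```

In Go, the analogous experiment would compare math.Pow against exact math/big arithmetic; the principle is identical.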
320,601,196
youtube-dl
Bharat Acharya Education Site Support
For example, log into bharatacharyaeducation.com with these credentials: email: [email protected], pwd: bankai. Choose 8086 as the topic for watching videos; afterwards a video will play. How do I download these videos?
site-support-request
low
Minor
320,604,834
pytorch
Issue when importing both retro (from OpenAI) and torch
**Issue description:** Crash when importing both retro and torch, order doesn't matter **Code example:** import retro import torch I then get the windows pop-up saying python has stopped working - PyTorch or Caffe2: PyTorch - How you installed PyTorch (conda, pip, source): Conda - OS: Windows 10 - PyTorch version: 0.4.0 - Python version: 3.6.4 - CUDA/cuDNN version: 9.0 - GPU models and configuration: Nvidia GTX 760
triaged,module: pybind
low
Critical
320,635,425
rust
Misleading diagnostic when trying to use Box<Error> for io::Error::new
```rust fn main(){ let q : Box<::std::error::Error /* + Send + Sync */ > = unimplemented!(); let w : ::std::io::Error = ::std::io::Error::new(::std::io::ErrorKind::Other, q); } ``` First compiler error message: ``` error[E0277]: the trait bound `std::error::Error: std::marker::Sized` is not satisfied --> src/main.rs:3:32 | 3 | let w : ::std::io::Error = ::std::io::Error::new(::std::io::ErrorKind::Other, q); | ^^^^^^^^^^^^^^^^^^^^^ `std::error::Error` does not have a constant size known at compile-time | = help: the trait `std::marker::Sized` is not implemented for `std::error::Error` = note: required because of the requirements on the impl of `std::error::Error` for `std::boxed::Box<std::error::Error>` = note: required because of the requirements on the impl of `std::convert::From<std::boxed::Box<std::error::Error>>` for `std::boxed::Box<std::error::Error + std::marker::Send + std::marker::Sync + 'static>` = note: required because of the requirements on the impl of `std::convert::Into<std::boxed::Box<std::error::Error + std::marker::Send + std::marker::Sync + 'static>>` for `std::boxed::Box<std::error::Error>` = note: required by `std::io::Error::new` ``` Later it gets to the `Send` and `Sync` parts, but initially it makes one wonder about why wanting `Sized` to `Error` itself instead of `Box<Error>`.
C-enhancement,A-diagnostics,T-compiler
low
Critical
320,775,047
rust
Warn when a user-defined trait is used on an std library type without full path
I'd like rustc to warn me when I use a trait method on a std (or liballoc, core, ...) type without using the full trait path. That is, if I implement my own trait for a `std` library type: ```rust trait Foo { fn bar(&self); } impl Foo for Vec<u32> { fn bar(&self) { } } fn main() { let v = Vec::new(); v.bar(); // Warning Foo::bar(&v); // OK } ``` The motivation is that: * if the `std` library adds a new trait with a method called `foo` and implements it for `Vec<u32>` my code stops compiling due to an ambiguity error * if the `std` library adds an inherent method to `Vec<u32>` called `foo`that has the same signature as `Foo::foo`, my code continues to compile but might silently change behavior --- I don't know whether this warning should apply exclusively to the std library or to types defined in external crates as well.
C-enhancement,A-trait-system,T-compiler
low
Critical
320,814,650
flutter
Would like a benchmark tracking install size of hello_world
Including after extracting icudata. We also get asked about install-size occasionally: https://github.com/flutter/flutter/issues/16833#issuecomment-387019591
team,engine,c: performance,a: size,perf: app size,team: benchmark,P3,team-engine,triaged-engine
low
Minor
320,841,382
rust
EvaluationCache overwrites its entries
When the insertion into `EvaluationCache` is changed to use `HashMapExt::insert_same`, it triggers the assertion which checks that the old value is the same as the new one. This is because the DepNodeIndex changed. [Backtrace](https://gist.github.com/Zoxc/c81faf46c9b1a532f029c1a3776c6a4b#file-gistfile1-txt) [Log output](https://gist.github.com/Zoxc/c81faf46c9b1a532f029c1a3776c6a4b#file-log-output) cc @arielb1 @nikomatsakis
C-enhancement,T-compiler
low
Minor
320,894,890
pytorch
[feature request] [PyTorch] More flexible optimizer API
Currently, there is no easy way to change the decay / momentum / lr for a parameter after constructing the optimizer. For example, it is hard to stop a parameter from being decayed in an iteration. Maybe we should support a getter/setter API like `optim.set(my_parameter, 'decay', 0)`. cc @vincentqb
module: optimizer,triaged,enhancement
low
Major
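The proposed `optim.set(my_parameter, 'decay', 0)` getter/setter can be sketched without any framework as a per-parameter options table. Note this is a hypothetical API shape, not the existing torch.optim interface; the class and option names below are illustrative only:

```python
# Minimal, framework-free sketch of the proposed per-parameter
# getter/setter API. `Optimizer`, `set`, `get`, and the option names
# are hypothetical -- not the real torch.optim API.

class Optimizer:
    def __init__(self, params, lr=0.1, decay=0.0, momentum=0.0):
        defaults = {"lr": lr, "decay": decay, "momentum": momentum}
        # One options dict per parameter, so each can be changed later.
        self._options = {id(p): dict(defaults) for p in params}

    def set(self, param, key, value):
        self._options[id(param)][key] = value

    def get(self, param, key):
        return self._options[id(param)][key]

class Param:  # stand-in for a tensor parameter
    pass

w, b = Param(), Param()
opt = Optimizer([w, b], lr=0.1, decay=0.01)
opt.set(b, "decay", 0.0)  # stop decaying `b` for the next step
assert opt.get(w, "decay") == 0.01
assert opt.get(b, "decay") == 0.0
```

In current PyTorch, a coarser version of this is usually achieved by constructing the optimizer with several param_groups and mutating a group's dict between steps; the request above asks for the same flexibility at the granularity of a single parameter.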
320,896,726
pytorch
[memory leak] [PyTorch] .backward(create_graph=True)
```python >>> import torch >>> import gc >>> _ = torch.randn(1, device='cuda') >>> del _ >>> torch.cuda.synchronize() >>> gc.collect() 0 >>> print(torch.cuda.memory_allocated()) 865280 >>> x = torch.randn(1, device='cuda', requires_grad=True) >>> y = x.tanh() >>> y.backward(torch.ones_like(y), create_graph=True) >>> del x, y >>> torch.cuda.synchronize() >>> gc.collect() 0 >>> print(torch.cuda.memory_allocated()) 867328 ``` leaks with `y = x.tanh()` but not with `y = x + 1`. Discovered when running code in #7270 cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved
module: autograd,module: memory usage,triaged
low
Major
320,897,741
go
cmd/cover: use different //line comment for code that does not come from source file
### What version of Go are you using (`go version`)? go version go1.10.1 darwin/amd64 ### Does this issue reproduce with the latest release? Yes ### What operating system and processor architecture are you using (`go env`)? GOARCH="amd64" GOEXE="" GOHOSTARCH="amd64" GOHOSTOS="darwin" GOOS="darwin" GORACE="" GOTMPDIR="" GCCGO="gccgo" CC="clang" CXX="clang++" CGO_ENABLED="1" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/m6/vs051rl54k3_sgxr94ylcz5h0000gn/T/go-build238749509=/tmp/go-build -gno-record-gcc-switches -fno-common" ### What did you do? ``` git clone [email protected]:linzhp/go_examples.git cd go_examples/hyphen go tool cover -mode set -var Var_power-driven -o power-driven-gen.go power-driven.go go run main.go ``` ### What did you expect to see? An error pointing to the right line of `power-driven-gen.go` ### What did you see instead? `power-driven.go:7:14: expected ';', found '-'` However, `power-driven.go` has less than 7 lines.
help wanted,NeedsFix,compiler/runtime
low
Critical
320,915,929
TypeScript
JSDoc referencing non-global class: Namespace 'X' has no exported member 'MyClass'
**TypeScript Version:** 2.9.0-dev.20180430 **Code** ```js class Circle {}; class Rectangle {}; const Shapes = {Circle, Rectangle}; // `Shapes` imported into another file class Square extends Shapes.Rectangle { /** * @param {Shapes.Circle} circle */ intersects(circle) { // } } ``` Typecheck using: ``` tsc --checkjs --allowjs --noemit sample.js ``` **Actual behavior:** ``` sample.js(10,23): error TS2694: Namespace 'Shapes' has no exported member 'Circle'. ```
Bug,Domain: JSDoc,Domain: JavaScript
low
Critical
320,915,977
three.js
Optimization: Output flags on materials to avoid material switching for specialized outputs.
##### Description of the problem So I am interested in quickly switching between different outputs of a shader without having to create new materials to just get different outputs. One area in particular where this would be useful (there are many) is in the shadowing code. It would allow us to not have to create a library of cached depth materials just for shadows. Instead we could keep the original materials but just change the flag for an early out after writing the depth. Basically it would deal with this complex caching situation here: https://github.com/mrdoob/three.js/blob/dev/src/renderers/webgl/WebGLShadowMap.js#L56 This is a huge time saver when you have complex shaders that have bones, displacement, normal maps, etc. It would be an optional feature. Something like: ``` #ifdef OUTPUT_FLAGS #define OUTPUTFLAG_NONE 0x0000 #define OUTPUTFLAG_DIFFUSE_DIRECT 0x0001 #define OUTPUTFLAG_DIFFUSE_INDIRECT 0x0002 #define OUTPUTFLAG_DIFFUSE ( OUTPUTFLAG_DIFFUSE_INDIRECT | OUTPUTFLAG_DIFFUSE_DIRECT ) #define OUTPUTFLAG_SPECULAR_DIRECT 0x0004 #define OUTPUTFLAG_SPECULAR_INDIRECT 0x0008 #define OUTPUTFLAG_SPECULAR ( OUTPUTFLAG_SPECULAR_DIRECT | OUTPUTFLAG_SPECULAR_INDIRECT ) #define OUTPUTFLAG_ALBEDO 0x0016 // pre-shading value #define OUTPUTFLAG_ALPHA 0x0016 // pre-shading value #define OUTPUTFLAG_METALNESS 0x0016 // pre-shading value #define OUTPUTFLAG_ROUGHNESS 0x0016 // pre-shading value #define OUTPUTFLAG_SHADOW_DIRECT 0x0032 #define OUTPUTFLAG_SHADOW_INDIRECT 0x0064 // Include ambient #define OUTPUTFLAG_LIGHT_DIRECT 0x0128 #define OUTPUTFLAG_NORMAL 0x0256 // Includes bump #define OUTPUTFLAG_DEPTH 0x0512 #define OUTPUTFLAG_POSITION 0x1024 uniform int outputFlags; #endif ``` It would also enable, as you can surmise by looking at my desired output flags, nice deferred workflows via multi-pass rendering. I would implement it as a uniform flag so that we could change outputs without a recompile. That is key to performance because I would like to switch each material multiple times per render. PS. I wonder if there is a way to incorporate multi-target rendering in this somehow. I just do not know how to specify optional packing of these outputs. Maybe when there is multi-target support, we can specify multiple output flags? Or that would just be a completely different type of output specification. /ping @WestLangley @takahirox @tschw ##### Three.js version - [x] Dev ##### Browser - [x] All of them ##### OS - [x] All of them ##### Hardware Requirements (graphics card, VR Device, ...)
Suggestion
low
Major
320,934,189
rust
Unhelpful borrowck error when wrong lifetimes are used
Consider the following code: ```rust fn run() -> () { let mut buf = [0u8; 1]; let mut foo = Foo(&mut buf[..]); let _bar = foo.bar(); foo.next(); } pub struct Foo<'a>(&'a mut [u8]); impl<'a> Foo<'a> { pub fn bar(&self) -> bool { true } pub fn next(&'a mut self) -> &'a () { unimplemented!(); } } impl<'a> Drop for Foo<'a> { fn drop(&mut self) { unimplemented!(); } } ``` It does not currently compile. With NLL, it yields: ``` error[E0597]: `foo` does not live long enough --> x.rs:5:5 | 5 | foo.next(); | ^^^ borrowed value does not live long enough 6 | } | - | | | borrowed value only lives until here | borrow later used here, when `foo` is dropped ``` Without, it gives ``` error[E0597]: `foo` does not live long enough --> x.rs:5:5 | 5 | foo.next(); | ^^^ borrowed value does not live long enough 6 | } | - `foo` dropped here while still borrowed | = note: values in a scope are dropped in the opposite order they are created ``` Both of them also give a warning that `foo` does not need to be mutable, even though it does... More importantly though, neither of these are particularly helpful in describing what the user did wrong. In particular, in this case, that `next` should have been ```rust fn next<'b>(&'b mut self) -> &'b () ``` The error is quite confusing, and it's not at all clear to the user that this error is related to the lifetime of `buf`! Note that the error changes if either the `foo.bar()` or the `impl Drop for Foo` is removed.
C-enhancement,A-diagnostics,A-lifetimes,A-borrow-checker,T-lang,D-confusing
low
Critical
320,943,155
rust
println!() prevents optimization by capturing pointers
This weekend I ran some benchmarks on some of my code. After making a seemingly insignificant code change I noticed a small, but measurable performance regression. After investigating the generated assembly, I stumbled upon a case where the compiler emits code that is not optimal. This minimal example shows the same behaviour ([Playground link](https://play.rust-lang.org/?gist=0fd24dc477cc80a396c4db78ba605fe6&version=stable&mode=release)): ``` extern crate rand; use std::f32; use rand::Rng; fn main() { let mut list = [0.0; 16]; let mut rg = rand::thread_rng(); // Random initialization to prevent the compiler from optimizing the whole example away for i in 0..list.len() { list[i] = rg.gen_range(0.0, 0.1); } let mut lowest = f32::INFINITY; for i in 0..list.len() { lowest = if list[i] < lowest { // <<<<<<<<<<<<<<< list[i] } else { lowest }; } println!("{}", lowest); } ``` When compiling with the `--release` flag, the compiler generates the following instructions for the marked block: ``` ... minss %xmm0, %xmm1 movss 88(%rsp), %xmm0 minss %xmm1, %xmm0 movss 92(%rsp), %xmm1 ... ``` However, if I replace those lines with the following: ``` if list[i] < lowest { lowest = list[i]; } ``` the compiler emits a strange series of float compare and jump instructions: ``` .LBB5_38: movss 92(%rsp), %xmm1 ucomiss %xmm1, %xmm0 ja .LBB5_39 ... .LBB5_42: movss 100(%rsp), %xmm1 ucomiss %xmm1, %xmm0 ja .LBB5_43 ... .LBB5_39: movss %xmm1, 12(%rsp) movaps %xmm1, %xmm0 movss 96(%rsp), %xmm1 ucomiss %xmm1, %xmm0 jbe .LBB5_42 ``` As a comparison, both gcc and clang can optimize a similar C++ example: ``` #include <stdlib.h> #include <iostream> using namespace std; int main() { float list[16]; for(size_t i = 0; i < 16; ++i) { list[i] = rand(); } float lowest = 1000.0f; for (size_t i = 0; i < 16; ++i) { /* Variant A: */ //lowest = list[i] < lowest ? list[i] : lowest; /* Variant B: */ if (list[i] < lowest) { lowest = list[i]; } } cout << lowest; } ``` Both compilers generate `minss` instructions for both variants. ([Godbolt](https://godbolt.org/g/gJPXgU)) I wasn't sure whether rustc or LLVM was responsible for this behaviour; however, after a quick glance at the generated LLVM IR, I'm tending towards rustc, since in the first case it emits `fcmp` and `select` instructions, while in the latter it generates `fcmp` and `br`. What do you think?
A-LLVM,I-slow,C-enhancement,A-codegen,T-compiler,C-optimization
medium
Major
320,951,625
pytorch
Check for F2C convention (for blas) at runtime
In `cmake/Modules/FindBLAS.cmake` we have a test to see if `sdot_` returns a float or a double (which may vary depending on whether or not BLAS is compiled with the F2C convention). However, in general, we *dynamically* link against our BLAS library, which means that it is possible that at runtime we get a BLAS library with a different convention than the one we compiled with. This is bad. A more robust solution would be to test the convention *at runtime*, so that we always make the correct selection. CC @soumith cc @malfet @seemethere @walterddr
module: build,triaged
low
Minor
320,953,850
pytorch
Windows MAGMA binary requires explicit linking against MKL LAPACK, or it will silently give incorrect results
Short steps to reproduce: 1. Check out https://github.com/pytorch/pytorch/pull/7275 2. Comment out in `aten/src/ATen/CMakeLists.txt` these lines: ``` if(CUDA_FOUND) TARGET_LINK_LIBRARIES(ATen_cuda ${LAPACK_LIBRARIES}) endif() ``` 3. Run `torch.symeig(torch.eye(48).cuda(), eigenvectors=True)` I think the "core" of the repro instructions is: 1. Create a simple program that runs `magma_dsyevd_gpu` (computing eigenvectors) on an identity matrix of size 48x48 (or anything up to 128x128) 2. Link against magma, but do NOT explicitly link against `mkl_lapack95_*` Expected result: Either (1) you get a link error (because no LAPACK symbols, e.g., `lapackf77_dsyevd`, can be found) or (2) everything works. Actual result: You get back nonsense eigenvectors that look like (nondeterministic): ``` 1 0 0 0 0 0 0 0-3 0 0 0 0 0 0 0-3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0-3 0 0 0 0 0 0 0-2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0-3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0-2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0-2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0-2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0-2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0-2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0-2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0-2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0-1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0-1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0-1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0-1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0-1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0-1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0-1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0-1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0-1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0-1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0-1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0-1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 ``` Diagnosis: It seems that somehow, if you don't link against lapack explicitly, we end up picking up the wrong implementation of `dsyevd` and we get nonsense. cc @malfet @seemethere @walterddr @ngimel
module: build,module: cuda,triaged
low
Critical
320,975,044
flutter
When generating golden files, catch the case of duplicate names
Sometimes when writing a test you copy and paste a test to make a new variant, and as a result you end up having two parts of the test overwrite the same file. We should be able to catch that case. cc @tvolkert
a: tests,team,framework,P2,team-framework,triaged-framework
low
Minor
321,001,551
pytorch
DataParallel on list inputs
https://github.com/pytorch/pytorch/blob/master/torch/nn/parallel/scatter_gather.py#L5 If your input is a list of two variables in which the first one has dimension 5 and the other 1, this will create a single replica which only contains the first 3 batches of the first variable.
triaged,module: data parallel
low
Minor
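The truncation described in the report above can be reproduced without PyTorch. This is a hypothetical mini-scatter, not the real scatter_gather.py code: each list element is split along dim 0 (modeled here as chunking plain lists), and `zip()` silently stops at the element with the fewest chunks, so mismatched batch dimensions collapse the replica count. Chunk sizes here are illustrative and need not match the "first 3 batches" figure in the report:

```python
# Hypothetical stand-ins for scattering tensors across replicas.

def scatter_tensor(t, num_replicas):
    # Split "along dim 0": chunk a plain list into per-replica pieces.
    chunk = max(1, len(t) // num_replicas)
    return [t[i:i + chunk] for i in range(0, len(t), chunk)][:num_replicas]

def scatter_list(inputs, num_replicas):
    scattered = [scatter_tensor(t, num_replicas) for t in inputs]
    # zip() truncates to the shortest per-element replica list,
    # which is the failure mode described in the issue.
    return [list(group) for group in zip(*scattered)]

a = [0, 1, 2, 3, 4]   # "dimension 5"
b = [0]               # "dimension 1"
replicas = scatter_list([a, b], num_replicas=2)
print(replicas)
assert len(replicas) == 1  # only one replica survives the zip
```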
321,008,392
TypeScript
Support for custom Automatic Type Acquisition scopes.
<!-- 🚨 STOP 🚨 𝗦𝗧𝗢𝗣 🚨 𝑺𝑻𝑶𝑷 🚨 Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Please read the FAQ first, especially the "Common Feature Requests" section. --> ## Search Terms Automatic type acquisition scopes, custom type acquisition, acquire types from custom scope ## Suggestion The ability to specify in `.tsconfig` and/or IDE settings additional scopes to search beyond `@types` for type acquisition. ## Use Cases Private repositories with large amounts of JavaScript code and a small number of TypeScript developers who wish to produce accurate typings, and a high barrier to entry to modifying/adding declaration files to the JavaScript projects. This mirrors the public ecosystem with say, `react` and `@types/react`. ## Examples A new string array parameter of scopes to search for: ```json { "typeAcquisition": { "searchScopes": [ "@companyname-types", "@types" ] } } ``` This would also allow overriding the `@types` scope. The first scope with a matching declaration should win. ## Checklist My suggestion meets these guidelines: [X] This wouldn't be a breaking change in existing TypeScript / JavaScript code [X] This wouldn't change the runtime behavior of existing JavaScript code [X] This could be implemented without emitting different JS based on the types of the expressions [X] This isn't a runtime feature (e.g. new expression-level syntax)
Suggestion,Awaiting More Feedback
low
Critical
321,011,312
pytorch
[feature request] [PyTorch] Dynamic Samplers.
I am trying to implement dynamic samplers in PyTorch. This seems impossible with the existing train_loader APIs, as the weighted samplers only allow weights to be set once. I was thinking that weights could change based on the loss value for each individual data point. This can be done without the train_loader API, but I think this is a feature that could be included in a future release. Something as simple as loss-based sampling can't be done right now. cc @SsnL @VitalyFedyunin @ejguan
feature,module: dataloader,triaged
low
Major
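The loss-based sampling idea above can be sketched without any framework. The class name and update rule below are illustrative, not an existing PyTorch API: weights start uniform and are refreshed from per-example losses between epochs, so hard examples are drawn more often:

```python
import random

# Hypothetical loss-driven weighted sampler (not a torch.utils.data API).

class LossWeightedSampler:
    def __init__(self, num_examples):
        self.weights = [1.0] * num_examples  # start uniform

    def update_from_losses(self, losses):
        # Sample high-loss examples more often; a small epsilon keeps
        # every example reachable even at zero loss.
        eps = 1e-3
        self.weights = [loss + eps for loss in losses]

    def sample(self, k):
        # Draw k example indices proportionally to the current weights.
        return random.choices(range(len(self.weights)),
                              weights=self.weights, k=k)

random.seed(0)
sampler = LossWeightedSampler(4)
sampler.update_from_losses([0.0, 0.0, 0.0, 10.0])  # example 3 is hard
batch = sampler.sample(100)
assert batch.count(3) > 80  # the hard example dominates the draws
```

A DataLoader-compatible version would implement `__iter__`/`__len__` over these indices; the point here is only that the weights must be mutable between epochs, which the current one-shot weighted sampler does not allow.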
321,016,891
pytorch
[caffe2] Can I load the gpu data to cpu in .cu file?
Can I load GPU data to the CPU in a .cu file? In Caffe v1, we could use cpu_data() or gpu_data() to fetch data in different contexts. However, how do we do that in Caffe2?
caffe2
low
Minor
321,017,896
godot
When using placeholders, signals and groups are added to the InstancePlaceholder and lost when instancing
**Godot version:** 3.x and v4.0 Tested on 3.4b3 and 4.0.dev.20210811.official [7188cb601] **OS/device including version:** **Issue description:** Scenes set as placeholders in the scene tree get the connections and groups set via the editor when the placeholder is added. This means that there will be errors when the engine tries to connect unavailable signals; groups are added to the placeholder too, and neither will be available to the instances made from that placeholder. I think this should be partially handled by the packed scene `instance()`: it needs to detect placeholders (I think it already does that), and `InstancePlaceholder` needs to have a way to store connections and groups to later assign to the instanced nodes and not to the placeholder. **Steps to reproduce:** - Set a scene as placeholder and connect it with some signal only available on that type of node. - Add that scene to a group too. Run the game and see the connection error, then instance that placeholder and see that the connections and groups are not on the instanced node. Project for Godot 3 [InstancePlaceholderSignalsG3.zip](https://github.com/godotengine/godot/files/6994990/InstancePlaceholderSignalsG3.zip) Project for Godot 4 [InstancePlaceholderSignalsG4.zip](https://github.com/godotengine/godot/files/6994992/InstancePlaceholderSignalsG4.zip)
enhancement,topic:core,confirmed
low
Critical
321,032,193
pytorch
nn.DataParallel fills None grads with 0
On PyTorch 0.4.0, using nn.DataParallel leads to the grads of unused parameters being filled with 0. This is a problem when the model has many parameters but only a small part of it is used in the forward pass. Test: ```python import torch from torch import nn class Test(nn.Module): def __init__(self): super(Test, self).__init__() self.l1 = nn.Linear(10,10) self.l2 = nn.Linear(10,10) def forward(self, input): return self.l1(input) print('CPU:') net = Test() inputs = torch.randn(1,10) outputs = net(inputs) loss = torch.norm(outputs) loss.backward() for j,p in net.named_parameters(): if p.grad is not None: print(j, p.shape, torch.norm(p.grad)) else: print(j) print('GPU:') net = Test().to('cuda') inputs = torch.randn(1,10).to('cuda') outputs = net(inputs) loss = torch.norm(outputs) loss.backward() for j,p in net.named_parameters(): if p.grad is not None: print(j, p.shape, torch.norm(p.grad)) else: print(j) print('MultiGPU:') net = nn.DataParallel(Test().to('cuda')) inputs = torch.randn(1,10).to('cuda') outputs = net(inputs) loss = torch.norm(outputs) loss.backward() for j,p in net.module.named_parameters(): if p.grad is not None: print(j, p.shape, torch.norm(p.grad)) else: print(j) ``` For multi-GPU we see all grads are filled with zeros. For CPU and single GPU the grads are None.
triaged,module: data parallel
low
Major
321,103,152
opencv
Different result of fisheye::stereoRectify
##### System information (version) - OpenCV 3.4, 2.4.11 - Operating System / Platform => Ubuntu 14.04 64 Bit and macOS 10 - Compiler => gcc 4.8.4 ##### Detailed description ##### Steps to reproduce Just as the test in https://github.com/AHoveringFish/fisheye_stereo_rectify.git The outputs on macOS (where OpenCV was installed via brew) and Ubuntu (where OpenCV was compiled from source) are different.
incomplete
low
Minor
321,232,488
pytorch
[Caffe2] Can't use resnet50_trainer.py through redis.
## Issue description I was trying to use resnet50_trainer.py to do distributed training. However, I encountered this problem. > RuntimeError: [enforce fail at operator.cc:185] op. Cannot create operator of type 'RedisStoreHandlerCreate' on the device 'CPU'. Verify that implementation for the corresponding device exist. It might also happen if the binary is not linked with the operator implementation code. If Python frontend is used it might happen if dyndep.InitOpsLibrary call is missing. Operator def: output: "store_handler" name: "" type: "RedisStoreHandlerCreate" arg { name: "host" s: "192.168.1.24" } arg { name: "prefix" s: "1" } arg { name: "port" i: 6379 } I am able to run resnet50_trainer.py on a single machine, and I'm able to connect to the redis host through `redis-cli -h 192.168.1.24 -p 6379` Please give me some help. ## Code example python resnet50_trainer.py --train_data /home/cch/Downloads/Downloads/imagenet_cars_boats_train/ --batch_size 1 --run_id 1 --redis_host 192.168.1.24 --redis_port 6379 --num_shards 2 --shard_id 1 - PyTorch or Caffe2: - How you installed PyTorch (conda, pip, source): source - Build command you used (if compiling from source): cmake .. -DCUDA_ARCH_NAME=Manual -DCUDA_ARCH_BIN="50" -DCUDA_ARCH_PTX="50" -DUSE_NNPACK=OFF -DUSE_ROCKSDB=OFF -DUSE_GLOO=ON -DUSE_REDIS=ON -DUSE_IBVERBS=ON -DUSE_MPI=OFF - OS: CentOS 6.9 - Python version: 2.7.14 - CUDA/cuDNN version: 9.0/7.0 - GCC version (if compiling from source): 4.8.5 - CMake version: 3.11.1
caffe2
low
Critical
321,249,939
thefuck
Enabling experimental instant mode causes The Fuck to stop working entirely
<!-- If you have any issue with The Fuck, sorry about that, but we will do what we can to fix that. Actually, maybe we already have, so first thing to do is to update The Fuck and see if the bug is still there. --> <!-- If it is (sorry again), check if the problem has not already been reported and if not, just open an issue on [GitHub](https://github.com/nvbn/thefuck) with the following basic information: --> The output of `thefuck --version` (something like `The Fuck 3.1 using Python 3.5.0`): The Fuck 3.26 using Python 3.6.5 Your shell and its version (`bash`, `zsh`, *Windows PowerShell*, etc.): zsh 5.2 (x86_64-apple-darwin16.0) Your system (Debian 7, ArchLinux, Windows, etc.): ProductName: Mac OS X ProductVersion: 10.12.6 BuildVersion: 16G1314 How to reproduce the bug: 1. Add `--enable-experimental-instant-mode` to the alias initialization in `.zshrc`. 2. Restart terminal or run `source ~/.zshrc`. The Fuck completely stops working (i.e. it doesn't show any correction suggestions to any mistakes) and instead shows the following messages whenever I type the `fuck` command: ``` [WARN] PS1 doesn't contain user command mark, please ensure that PS1 is not changed after The Fuck alias initialization No fucks given ``` The output of The Fuck with `THEFUCK_DEBUG=true` exported (typically execute `export THEFUCK_DEBUG=true` in your shell before The Fuck): ``` DEBUG: Run with settings: {'alter_history': True, 'debug': True, 'env': {'GIT_TRACE': '1', 'LANG': 'C', 'LC_ALL': 'C'}, 'exclude_rules': [], 'history_limit': None, 'instant_mode': True, 'no_colors': False, 'priority': {}, 'repeat': False, 'require_confirmation': True, 'rules': [<const: All rules enabled>], 'slow_commands': ['lein', 'react-native', 'gradle', './gradlew', 'vagrant'], 'user_dir': PosixPath('/Users/xxxx/.config/thefuck'), 'wait_command': 3, 'wait_slow_command': 15}[WARN] PS1 doesn't contain user command mark, please ensure that PS1 is not cha nged after The Fuck alias initialization DEBUG: Importing rule: 
adb_unknown_command; took: 0:00:00.000214 DEBUG: Importing rule: ag_literal; took: 0:00:00.000474 DEBUG: Importing rule: apt_get; took: 0:00:00.000945DEBUG: Importing rule: apt_get_search; took: 0:00:00.000372 DEBUG: Importing rule: apt_invalid_operation; took: 0:00:00.000967 DEBUG: Importing rule: apt_list_upgradable; took: 0:00:00.000493 DEBUG: Importing rule: apt_upgrade; took: 0:00:00.000481 DEBUG: Importing rule: aws_cli; took: 0:00:00.000517 DEBUG: Importing rule: brew_cask_dependency; took: 0:00:00.001909 DEBUG: Importing rule: brew_install; took: 0:00:00.000308 DEBUG: Importing rule: brew_link; took: 0:00:00.000751 DEBUG: Importing rule: brew_uninstall; took: 0:00:00.000569 DEBUG: Importing rule: brew_unknown_command; took: 0:00:00.000310 DEBUG: Importing rule: brew_update_formula; took: 0:00:00.000815 DEBUG: Importing rule: brew_upgrade; took: 0:00:00.000234 DEBUG: Importing rule: cargo; took: 0:00:00.000211 DEBUG: Importing rule: cargo_no_command; took: 0:00:00.000880 DEBUG: Importing rule: cd_correction; took: 0:00:00.002276 DEBUG: Importing rule: cd_mkdir; took: 0:00:00.000715 DEBUG: Importing rule: cd_parent; took: 0:00:00.000190 DEBUG: Importing rule: chmod_x; took: 0:00:00.000181 DEBUG: Importing rule: composer_not_command; took: 0:00:00.000530 DEBUG: Importing rule: cp_omitting_directory; took: 0:00:00.000730 DEBUG: Importing rule: cpp11; took: 0:00:00.000469 DEBUG: Importing rule: dirty_untar; took: 0:00:00.002174 DEBUG: Importing rule: dirty_unzip; took: 0:00:00.002918 DEBUG: Importing rule: django_south_ghost; took: 0:00:00.000261 DEBUG: Importing rule: django_south_merge; took: 0:00:00.000181 DEBUG: Importing rule: dnf_no_such_command; took: 0:00:00.001595 DEBUG: Importing rule: docker_not_command; took: 0:00:00.001042 DEBUG: Importing rule: dry; took: 0:00:00.000205 DEBUG: Importing rule: fab_command_not_found; took: 0:00:00.000666 DEBUG: Importing rule: fix_alt_space; took: 0:00:00.000501 DEBUG: Importing rule: fix_file; took: 0:00:00.006306 
DEBUG: Importing rule: gem_unknown_command; took: 0:00:00.000752 DEBUG: Importing rule: git_add; took: 0:00:00.001006 DEBUG: Importing rule: git_add_force; took: 0:00:00.000503 DEBUG: Importing rule: git_bisect_usage; took: 0:00:00.000491 DEBUG: Importing rule: git_branch_delete; took: 0:00:00.000464 DEBUG: Importing rule: git_branch_exists; took: 0:00:00.000611 DEBUG: Importing rule: git_branch_list; took: 0:00:00.000525 DEBUG: Importing rule: git_checkout; took: 0:00:00.000526 DEBUG: Importing rule: git_commit_amend; took: 0:00:00.000564 DEBUG: Importing rule: git_diff_no_index; took: 0:00:00.000472 DEBUG: Importing rule: git_diff_staged; took: 0:00:00.000624 DEBUG: Importing rule: git_fix_stash; took: 0:00:00.000508 DEBUG: Importing rule: git_flag_after_filename; took: 0:00:00.000433 DEBUG: Importing rule: git_help_aliased; took: 0:00:00.000437 DEBUG: Importing rule: git_merge; took: 0:00:00.000431 DEBUG: Importing rule: git_merge_unrelated; took: 0:00:00.000412 DEBUG: Importing rule: git_not_command; took: 0:00:00.000529 DEBUG: Importing rule: git_pull; took: 0:00:00.000445 DEBUG: Importing rule: git_pull_clone; took: 0:00:00.000580 DEBUG: Importing rule: git_pull_uncommitted_changes; took: 0:00:00.000515 DEBUG: Importing rule: git_push; took: 0:00:00.000464 DEBUG: Importing rule: git_push_different_branch_names; took: 0:00:00.000410 DEBUG: Importing rule: git_push_force; took: 0:00:00.000620 DEBUG: Importing rule: git_push_pull; took: 0:00:00.000529 DEBUG: Importing rule: git_push_without_commits; took: 0:00:00.000566 DEBUG: Importing rule: git_rebase_merge_dir; took: 0:00:00.000467 DEBUG: Importing rule: git_rebase_no_changes; took: 0:00:00.000354 DEBUG: Importing rule: git_remote_delete; took: 0:00:00.000478 DEBUG: Importing rule: git_remote_seturl_add; took: 0:00:00.000342 DEBUG: Importing rule: git_rm_local_modifications; took: 0:00:00.000669 DEBUG: Importing rule: git_rm_recursive; took: 0:00:00.000473 DEBUG: Importing rule: git_rm_staged; took: 
0:00:00.000637 DEBUG: Importing rule: git_stash; took: 0:00:00.000564 DEBUG: Importing rule: git_stash_pop; took: 0:00:00.000559 DEBUG: Importing rule: git_tag_force; took: 0:00:00.000478 DEBUG: Importing rule: git_two_dashes; took: 0:00:00.000575 DEBUG: Importing rule: go_run; took: 0:00:00.000496 DEBUG: Importing rule: gradle_no_task; took: 0:00:00.000999 DEBUG: Importing rule: gradle_wrapper; took: 0:00:00.000641 DEBUG: Importing rule: grep_arguments_order; took: 0:00:00.000504 DEBUG: Importing rule: grep_recursive; took: 0:00:00.000480 DEBUG: Importing rule: grunt_task_not_found; took: 0:00:00.000888 DEBUG: Importing rule: gulp_not_task; took: 0:00:00.000504 DEBUG: Importing rule: has_exists_script; took: 0:00:00.000489 DEBUG: Importing rule: heroku_multiple_apps; took: 0:00:00.000467 DEBUG: Importing rule: heroku_not_command; took: 0:00:00.000686 DEBUG: Importing rule: history; took: 0:00:00.000185 DEBUG: Importing rule: hostscli; took: 0:00:00.000639 DEBUG: Importing rule: ifconfig_device_not_found; took: 0:00:00.000682 DEBUG: Importing rule: java; took: 0:00:00.000606 DEBUG: Importing rule: javac; took: 0:00:00.000498 DEBUG: Importing rule: lein_not_task; took: 0:00:00.000772 DEBUG: Importing rule: ln_no_hard_link; took: 0:00:00.000442 DEBUG: Importing rule: ln_s_order; took: 0:00:00.000582 DEBUG: Importing rule: ls_all; took: 0:00:00.000501 DEBUG: Importing rule: ls_lah; took: 0:00:00.000595 DEBUG: Importing rule: man; took: 0:00:00.000429 DEBUG: Importing rule: man_no_space; took: 0:00:00.000190 DEBUG: Importing rule: mercurial; took: 0:00:00.000422 DEBUG: Importing rule: missing_space_before_subcommand; took: 0:00:00.000281 DEBUG: Importing rule: mkdir_p; took: 0:00:00.000496 DEBUG: Importing rule: mvn_no_command; took: 0:00:00.000501 DEBUG: Importing rule: mvn_unknown_lifecycle_phase; took: 0:00:00.000508 DEBUG: Importing rule: no_command; took: 0:00:00.000554 DEBUG: Importing rule: no_such_file; took: 0:00:00.000193 DEBUG: Importing rule: 
npm_missing_script; took: 0:00:00.001039 DEBUG: Importing rule: npm_run_script; took: 0:00:00.000451 DEBUG: Importing rule: npm_wrong_command; took: 0:00:00.000824 DEBUG: Importing rule: open; took: 0:00:00.000852 DEBUG: Importing rule: pacman; took: 0:00:00.000969 DEBUG: Importing rule: pacman_not_found; took: 0:00:00.000220 DEBUG: Importing rule: path_from_history; took: 0:00:00.000262 DEBUG: Importing rule: php_s; took: 0:00:00.000511 DEBUG: Importing rule: pip_unknown_command; took: 0:00:00.000527 DEBUG: Importing rule: port_already_in_use; took: 0:00:00.000427 DEBUG: Importing rule: prove_recursively; took: 0:00:00.000692 DEBUG: Importing rule: python_command; took: 0:00:00.000463 DEBUG: Importing rule: python_execute; took: 0:00:00.000503 DEBUG: Importing rule: quotation_marks; took: 0:00:00.000191 DEBUG: Importing rule: react_native_command_unrecognized; took: 0:00:00.000683 DEBUG: Importing rule: remove_trailing_cedilla; took: 0:00:00.000199 DEBUG: Importing rule: rm_dir; took: 0:00:00.000566 DEBUG: Importing rule: rm_root; took: 0:00:00.000656 DEBUG: Importing rule: scm_correction; took: 0:00:00.000526 DEBUG: Importing rule: sed_unterminated_s; took: 0:00:00.000562 DEBUG: Importing rule: sl_ls; took: 0:00:00.000172 DEBUG: Importing rule: ssh_known_hosts; took: 0:00:00.000453 DEBUG: Importing rule: sudo; took: 0:00:00.000178 DEBUG: Importing rule: sudo_command_from_user_path; took: 0:00:00.000422 DEBUG: Importing rule: switch_lang; took: 0:00:00.000197 DEBUG: Importing rule: systemctl; took: 0:00:00.000606 DEBUG: Importing rule: test.py; took: 0:00:00.000131 DEBUG: Importing rule: tmux; took: 0:00:00.000543 DEBUG: Importing rule: touch; took: 0:00:00.000490 DEBUG: Importing rule: tsuru_login; took: 0:00:00.000496 DEBUG: Importing rule: tsuru_not_command; took: 0:00:00.000644 DEBUG: Importing rule: unknown_command; took: 0:00:00.000226 DEBUG: Importing rule: unsudo; took: 0:00:00.000690 DEBUG: Importing rule: vagrant_up; took: 0:00:00.001252 DEBUG: Importing 
rule: whois; took: 0:00:00.001076 DEBUG: Importing rule: workon_doesnt_exists; took: 0:00:00.000568 DEBUG: Importing rule: yarn_alias; took: 0:00:00.000635 DEBUG: Importing rule: yarn_command_not_found; took: 0:00:00.001129 DEBUG: Importing rule: yarn_command_replaced; took: 0:00:00.000835 DEBUG: Importing rule: yarn_help; took: 0:00:00.000559 DEBUG: Trying rule: dirty_unzip; took: 0:00:00.000111 No fucks given DEBUG: Total took: 0:00:00.127378 ``` If the bug only appears with a specific application, the output of that application and its version: N/A Anything else you think is relevant: N/A <!-- It's only with enough information that we can do something to fix the problem. -->
help wanted,zsh
low
Critical
321,308,759
TypeScript
fixUnusedIdentifier may remove a used side-effect
**TypeScript Version:** 2.9.0-dev.20180506 **Code** ```ts { const x = launchMissiles(); } ``` **Expected behavior:** `const x = ` removed, and missiles still launched. **Actual behavior:** Missiles unlaunched. Unfortunately we have no way of knowing whether a function call does have side effects (#17181), so doing this in general might be annoying as users have to manually delete function calls with unused results.
Suggestion,Needs Proposal,Domain: Quick Fixes
low
Minor
321,324,726
godot
New signal methods in C# are declared outside of its class, making the script invalid
*Bugsquad edit: This long standing issue is still valid as of Godot 4.2. It doesn't need further confirmation, but feel free to :+1: the OP to confirm it affects you. Any help welcome to solve this, it's something the .NET team is having difficulty fixing properly, but it's recognized as a major usability issue.* --- **Godot version:** 3.0.3 rc1 Mono Build **Issue description:** When using the editor to connect a signal to a C# script, the new method declaration is added to the end of the file, and outside of the class. It should instead add the new method just before the final closing brace. **Steps to reproduce:** 1. Create a button node. 2. Attach a new C# script to this node. 3. Select the button in the editor, choose Node -> Signals 4. Select the "pressed()" signal and click connect. 5. Choose the same button as the "Node to Connect." Leave everything default and hit connect. 6. Notice that the new method is created outside of the class. Example: ```cs using Godot; using System; public class btn_move : Button { public override void _Ready() { } } //End of class private void _on_Button_pressed() { // Replace with function body } ``` This is a low priority issue as the easy workaround is to just move the closing brace of the class, but this creates invalid code and could be confusing.
bug,topic:editor,confirmed,usability,topic:dotnet
medium
Critical
321,328,807
flutter
Create a build with Flutter Fastlane action
https://docs.fastlane.tools/actions/#building Create one to build an Android and/or iOS app via Fastlane. For simplicity, it can initially just aggregate build_ios_app and build_android_app with parameters from both.
tool,P3,team-tool,triaged-tool
low
Minor
321,349,219
go
cmd/compile: tweak branchelim heuristic on amd64
### What version of Go are you using (`go version`)? master: ```go version devel +cd1976d Tue May 8 19:57:49 2018 +0000 linux/amd64``` ### Does this issue reproduce with the latest release? Yes ### What operating system and processor architecture are you using (`go env`)? ``` GOARCH="amd64" GOBIN="" GOCACHE="/localdisk/itocar/gocache/" GOEXE="" GOHOSTARCH="amd64" GOHOSTOS="linux" GOOS="linux" GOPATH="/localdisk/itocar/gopath" GORACE="" GOROOT="/localdisk/itocar/golang" GOTMPDIR="" GOTOOLDIR="/localdisk/itocar/golang/pkg/tool/linux_amd64" GCCGO="gccgo" CC="gcc" CXX="g++" CGO_ENABLED="1" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build948974604=/tmp/go-build -gno-record-gcc-switches" ``` ### What did you do? Run strconv/Atof benchmarks ### What did you expect to see? Same or better performance as in 1.10 ### What did you see instead? ``` Atof64Decimal-8 20.9ns ± 1% 20.3ns ± 2% -3.11% (p=0.000 n=10+10) Atof64Float-8 22.9ns ± 1% 23.1ns ± 2% +0.83% (p=0.042 n=10+10) Atof64FloatExp-8 56.3ns ± 3% 68.8ns ± 2% +22.11% (p=0.000 n=10+10) Atof64Big-8 84.0ns ± 1% 94.0ns ± 1% +11.91% (p=0.000 n=10+10) Atof64RandomBits-8 72.9ns ±15% 80.5ns ±24% +10.47% (p=0.022 n=10+9) Atof64RandomFloats-8 80.2ns ±25% 91.6ns ±19% +14.27% (p=0.029 n=10+10) Atof32Decimal-8 20.1ns ± 2% 20.5ns ± 2% +1.59% (p=0.008 n=10+10) Atof32Float-8 21.5ns ± 3% 21.9ns ± 3% +2.09% (p=0.012 n=10+10) Atof32FloatExp-8 58.6ns ± 3% 68.2ns ± 2% +16.49% (p=0.000 n=10+10) Atof32Random-8 93.8ns ± 0% 86.4ns ± 0% -7.83% (p=0.000 n=10+8) ``` Bisect points to a35ec9a59e29dc11a15ded047a618298a59599b4 Looking at the code I see that (*extFloat).Normalize got slower, probably because branches were well predicted, and most instructions are dependent on the result of a branch. It looks like the heuristic for generating CMOV should be tweaked, but I'm not sure how.
The current threshold is low, and reducing it further will cause a performance impact in other cases. Maybe we should avoid generating long chains of dependent CMOVs?
Performance,compiler/runtime
low
Critical
321,352,531
kubernetes
Export metrics for predicate restrictiveness
Currently users do not have any metrics on predicate restrictiveness (i.e. the capacity of a predicate to filter potential target nodes). The idea is to provide a metric each time the predicate is computed. This (and how to handle ordering predicates in general) might be impacted by the new scheduling framework. cc @bsalamat @resouer /sig scheduling
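An illustrative sketch of the requested measurement (in Python for brevity; in the real scheduler this would be a Prometheus counter or histogram in Go, and all names here are made up): each time a predicate runs, record how many candidate nodes it saw and how many it filtered out, so restrictiveness is the ratio filtered / evaluated.

```python
from collections import defaultdict

class PredicateMetrics:
    """Hypothetical per-predicate restrictiveness tracking. In the real
    scheduler this would be backed by Prometheus metrics, not dicts."""

    def __init__(self):
        self.evaluated = defaultdict(int)  # nodes each predicate examined
        self.filtered = defaultdict(int)   # nodes each predicate rejected

    def record(self, predicate_name, nodes_in, nodes_out):
        # Called once per predicate evaluation pass.
        self.evaluated[predicate_name] += nodes_in
        self.filtered[predicate_name] += nodes_in - nodes_out

    def restrictiveness(self, predicate_name):
        # Fraction of examined nodes this predicate rejected (0.0..1.0).
        total = self.evaluated[predicate_name]
        return self.filtered[predicate_name] / total if total else 0.0
```

Such a ratio would also be a natural input for ordering predicates from most to least restrictive.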
sig/scheduling,lifecycle/frozen
low
Major
321,398,176
rust
Coherence rules are not consistent when applied to auto traits
The coherence rules which allow blanket trait implementations to coexist with more specific impls are not consistent when applied to normal traits and auto traits. An example is provided below. If this is expected behavior, I could not find it documented anywhere. The same error occurs on both stable and nightly compilers. ```rust // `Test` implements neither `Clone` nor `Send`. struct Test { ptr: *mut u8, } trait NotSend {} impl<T> NotSend for T where T: Send {} trait NotClone {} impl<T> NotClone for T where T: Clone {} // Coherence rules allow `Test` to have its own implementation of `NotClone`. impl NotClone for Test {} // However, the same logic does not apply to `Send`. This results in: // error[E0119]: conflicting implementations of trait `NotSend` for type `Test`. // impl NotSend for Test {} ``` [playground link](https://play.rust-lang.org/?gist=100507f67e2f9832e355d7da76af3e4a&version=nightly&mode=debug)
C-enhancement,A-trait-system,T-lang
low
Critical
321,421,285
godot
AudioDriverWASAPI error while using editor if another application is using exclusive mode
**Godot version:** 3.0.3 RC1 **OS/device including version:** Windows 10; not related to video, apparently. **Issue description:** I have used the editor for a while and never got this error before; I haven't even used the audio features, yet this error message seems to be related to audio: ``` ERROR: AudioDriverWASAPI::thread_func: WASAPI: GetCurrentPadding error At: drivers\wasapi\audio_driver_wasapi.cpp:533 ``` **Steps to reproduce:** Unknown **Minimal reproduction project:** Unknown
bug,platform:windows,topic:porting,confirmed,topic:audio
medium
Critical
321,422,530
go
cmd/compile: incorrect error for function conversion based on cyclic definition (esoteric)
I'm only reporting this for future use should we ever get to a more thorough rethink of the type checker. This is a completely esoteric test case: https://play.golang.org/p/JjZZGCopERA The error message is "cannot convert nil to type F" but F is clearly a function and thus nil can be converted into that function (of whatever type). The problem here is the type cycle; the conversion looks at a type that is not yet fully set up. This example is primarily interesting as a test case.
NeedsInvestigation,compiler/runtime
low
Critical
321,466,686
pytorch
[Caffe2] Negative export to ONNX fails
The following sample fails. The error seems to come from the exact same cause as #7020, but I cannot work around the error with the same fix in frontend.py. After some debugging I found out that for `Negative`, the export process goes through `C.support_onnx_export()` and `C.export_to_onnx()`, unlike #7020. These functions check the schema in https://github.com/pytorch/pytorch/blob/master/caffe2/operators/negative_op.cc to return the corresponding name for ONNX, but for some reason return the wrong name. ## Sample ```python #!/usr/bin/env python import onnx import caffe2.python.onnx.frontend as oc2f import caffe2.python.predictor.predictor_exporter as pe import caffe2.python.predictor.mobile_exporter as c2me from caffe2.python import model_helper, workspace, brew, utils IN_DATA_NAME = "in_data" OUT_DATA_NAME = "out_data" DATA_SHAPE = (1, 3, 3, 3) model = model_helper.ModelHelper(name="neg_net") model.net.Negative(IN_DATA_NAME, OUT_DATA_NAME) model.param_init_net.UniformFill([], IN_DATA_NAME, shape=DATA_SHAPE) workspace.RunNetOnce(model.param_init_net) init_net, predict_net = c2me.Export(workspace, model.net, [IN_DATA_NAME]) value_info = { } onnx_model = oc2f.caffe2_net_to_onnx_model( predict_net, init_net, value_info) onnx.checker.check_model(onnx_model) print(onnx_model) with open("neg.onnx", 'wb') as f: f.write(onnx_model.SerializeToString()) ``` ## Error ``` Traceback (most recent call last): File "./gen_neg.py", line 38, in <module> value_info) File "/home/masato/repo/pytorch/build/caffe2/python/onnx/frontend.py", line 337, in caffe2_net_to_onnx_model model = make_model(cls.caffe2_net_to_onnx_graph(*args, **kwargs), File "/home/masato/repo/pytorch/build/caffe2/python/onnx/frontend.py", line 273, in caffe2_net_to_onnx_graph checker.check_graph(graph_def) File "/home/masato/repo/onnx/onnx/checker.py", line 45, in checker proto.SerializeToString(), ctx) onnx.onnx_cpp2py_export.checker.ValidationError: No Schema registered for Negative with domain_version of
7 ==> Context: Bad node spec: input: "in_data_0" output: "out_data_1" name: "" op_type: "Negative" ``` - OS: Ubuntu 16.04.4 x86_64 - PyTorch version: 5c2015d133d1a6c54b4daba35e54fd0c84806463 - ONNX version: dee6d89781fb885d1ed3b935992b27338befe1cd - Python version: 3.5.2
caffe2
low
Critical
321,551,887
go
x/crypto/salsa20: implement cipher.Stream interface
salsa20 is the odd one out in that it implements only a stateless XORKeyStream function at the package level, rather than the cipher.Stream interface.
help wanted,NeedsFix
low
Minor
321,554,841
go
x/crypto/cryptotest: new package
crypto/cipher has well defined interfaces with plenty of tricky requirements (about aliasing, different lengths, state) that are hard to test for and easy to overlook. cryptotest will be an interface test suite, like nettest. It caught a bug in x/crypto/internal/chacha20 and can replace a bunch of duplicated tests in the standard library, too. It's useful to expose it in x/crypto for external implementations of the interface to use it. It will be vendored back. Originally submitted as an internal package in https://golang.org/cl/102196
Proposal-Accepted,NeedsFix,Proposal-Crypto
low
Critical
321,587,812
pytorch
Multi queue for dataloader when workers > 1
Recently I read the implementation of the DataLoader and had an idea that may make it faster when using multiple prefetch workers. Currently there is only one index_queue and one data_queue: the sampler puts batch indices into the index_queue, and multiple worker processes put results into the single data_queue at the same time, so there is a race condition and puts can block. Maybe creating one queue per worker would be more efficient. cc @SsnL @VitalyFedyunin @ejguan
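The per-worker-queue idea can be sketched with stdlib primitives. This is a simplified illustration only (threads stand in for the DataLoader's worker processes, and the names and round-robin policy are my own, not the actual torch internals): each worker drains its own index queue and fills its own data queue, so workers never contend on a shared queue.

```python
import itertools
import queue
import threading

def worker(index_q, data_q):
    # Drain our private index queue until the None sentinel arrives;
    # write results only to our private data queue.
    for idx in iter(index_q.get, None):
        data_q.put((idx, idx * 10))  # stand-in for "load batch idx"

def run(num_workers=2, num_batches=6):
    index_qs = [queue.Queue() for _ in range(num_workers)]
    data_qs = [queue.Queue() for _ in range(num_workers)]
    threads = [threading.Thread(target=worker, args=(iq, dq))
               for iq, dq in zip(index_qs, data_qs)]
    for t in threads:
        t.start()
    # Round-robin the batch indices over the per-worker index queues.
    for i, iq in zip(range(num_batches), itertools.cycle(index_qs)):
        iq.put(i)
    for iq in index_qs:
        iq.put(None)  # shutdown sentinel, one per worker
    # Reading the data queues in the same round-robin order yields
    # batches in submission order without a reordering buffer.
    results = [data_qs[i % num_workers].get() for i in range(num_batches)]
    for t in threads:
        t.join()
    return results
```

A side effect of the round-robin assignment is that batch ordering falls out of the queue discipline for free, since each per-worker queue is FIFO.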
module: dataloader,triaged,enhancement
low
Minor
321,597,089
go
test: generate gcgort
In e0adc35c470d9bf7f462c9d26340b494cbe7ce43 test/gcgort.go is added. This test may benefit from being generated instead of hand-written to increase maintainability by reducing repeating and narrow test coverage by removing extraneous setup code like the modifier struct. This change would be after 1.11 and my opinion is it should only be done if the test strategy is also updated for the stated garbage collector, types, and concurrency interactions coverage. Currently the test failure relies on the runtime implementation causing a panic. From https://go-review.googlesource.com/c/go/+/93715 discussion with @aclements.
help wanted,NeedsFix
low
Critical
321,600,058
You-Dont-Know-JS
Async & Performance Chapter 6 "Repetition" section
**Yes, I promise I've read the [Contributions Guidelines](https://github.com/getify/You-Dont-Know-JS/blob/master/CONTRIBUTING.md)** I've been having lots of trouble with 3 paragraphs in this section; IMO, the whole section is too dense and short. This paragraph: > A straight mathematical average by itself is definitely not sufficient for making judgments about performance which you plan to extrapolate to the breadth of your entire application. With a hundred iterations, even a couple of outliers (high or low) can skew the average, and then when you apply that conclusion repeatedly, you even further inflate the skew beyond credulity. 1. Is the usage of the word "credulity" here a typo? "credulity" is a synonym of "naivety" and "gullibility"; I think the word is supposed to be "credibility"? 2. What does "apply that conclusion repeatedly" really mean? Does it mean "when you increase the number of iterations, the skew will increase proportionally"? <br> The next paragraph: > Instead of just running for a fixed number of iterations, you can instead choose to run the loop of tests until a certain amount of time has passed. That might be more reliable, but how do you decide how long to run? You might guess that it should be some multiple of how long your operation should take to run once. Wrong. 1. Why might this be more reliable? In the previous section, it is established that there are 3 main problems with a simple "startTime, do operation, endTime - startTime" benchmark: A) Something could have intervened with the engine or system during that specific test. B) The engine could have found a way to optimize your isolated case. C) Perhaps your timer was not precise enough, therefore you got an inaccurate result. Using the second pattern, I believe problem A is still there: something could intervene with the engine, causing an iteration of the test to take longer and therefore affecting the end result.
I can't see why the engine can't find a way to optimize your tests with this pattern as well if it could do so with the first pattern; how much is it really different anyway? The second pattern could solve problem C, but only if the time to repeat across is a multiple of the timer's precision (e.g. 30ms for a timer with 15ms precision). Otherwise you would have to increase the time to repeat across to improve accuracy, and couldn't this be achieved by increasing the number of iterations in pattern A as well? <br> And then the next paragraph: > Actually, the length of time to repeat across should be based on the accuracy of the timer you're using, specifically to minimize the chances of inaccuracy. The less precise your timer, the longer you need to run to make sure you've minimized the error percentage. A 15ms timer is pretty bad for accurate benchmarking; to minimize its uncertainty (aka "error rate") to less than 1%, you need to run your each cycle of test iterations for 750ms. A 1ms timer only needs a cycle to run for 50ms to get the same confidence. I wonder what the maths behind this part is: > A 15ms timer is pretty bad for accurate benchmarking; to minimize its uncertainty (aka "error rate") to less than 1%, you need to run your each cycle of test iterations for 750ms. A 1ms timer only needs a cycle to run for 50ms to get the same confidence. <br> My brain is roaming in uncharted territory, so please excuse me if I am saying things that don't make any sense, but trust me, I have tried to comprehend this section for quite a while and this is my best so far, and it is all purely theoretical. I understand the basic message of this part of the book: "Use a library to benchmark your JavaScript code, it isn't as easy as it looks".
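For what it's worth, the 750ms and 50ms figures quoted above are both consistent with one simple rule (I can't be certain this is the exact derivation the book intends, but it matches the `minTime` heuristic Benchmark.js uses): treat a single timing as uncertain by ±resolution/2, and run each cycle long enough that this uncertainty is under 1% of the measured time.

```python
def min_cycle_time(timer_resolution_ms, target_uncertainty=0.01):
    """Minimum time (ms) to run one cycle of test iterations so that a
    timer error of +/- resolution/2 stays below target_uncertainty.
    Reproduces the figures quoted in the book."""
    return (timer_resolution_ms / 2) / target_uncertainty

min_cycle_time(15)  # 750.0 ms for a 15ms timer
min_cycle_time(1)   # 50.0 ms for a 1ms timer
```

So the cycle length scales linearly with timer resolution, which is why the 15ms timer needs a cycle fifteen times longer than the 1ms one.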
for second edition
low
Critical