id | repo | title | body | labels | priority | severity |
---|---|---|---|---|---|---|
337,088,478 | vue | Allow to call original `errorHandler` when custom one defined | ### What problem does this feature solve?
When a custom `Vue.config.errorHandler` is defined by the user, it stops the original [`logError`](https://github.com/vuejs/vue/blob/c2b1cfe9ccd08835f2d99f6ce60f67b4de55187f/src/core/util/error.js#L38) from firing.
The problem is that there is no straightforward way to recreate the `logError` behavior in a custom `Vue.config.errorHandler` without requiring some of Vue's internals.
This issue is partially related to [raven-js#1416](https://github.com/getsentry/raven-js/issues/1416), which defines a custom `errorHandler`... which prevents Vue from logging errors to the console.
### What does the proposed API look like?
I think Vue should expose the original `errorHandler` under `Vue.config.errorHandler` by default. In other words, `Vue.config.errorHandler` shouldn't be `undefined` by default.
If users would like to override `errorHandler`, they can simply reassign it. Either way, it would be possible to save the original `Vue.config.errorHandler` into some variable and call it from within the custom `errorHandler`.
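For illustration, a minimal sketch of the wrapping pattern this would enable (assuming the Vue 2 `errorHandler(err, vm, info)` signature; `reportToTracker` is a made-up placeholder, not a real API):
```js
// Keep a reference to whatever handler Vue exposes by default...
const originalHandler = Vue.config.errorHandler;

// ...then delegate back to it from the custom handler.
Vue.config.errorHandler = function (err, vm, info) {
  reportToTracker(err, info); // placeholder for custom reporting
  if (typeof originalHandler === 'function') {
    originalHandler.call(this, err, vm, info); // keep Vue's console logging
  }
};
```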
<!-- generated by vue-issues. DO NOT REMOVE --> | improvement | medium | Critical |
337,101,770 | rust | Bad diagnostic for associated types when forgetting type arguments | [Given an incorrect order of associated types](https://play.rust-lang.org/?gist=c8c0e4db2b08eba0281157ef683278c3&version=stable&mode=debug):
```
type Bar = Foo<C = u32, B = bool, A = String>;
struct Foo<A, B, C> {
a: A,
b: B,
c: C
}
```
We currently identify the first argument as a type binding and emit two errors:
```
error[E0243]: wrong number of type arguments: expected 3, found 0
--> src/main.rs:4:12
|
4 | type Bar = Foo<C = u32, B = bool, A = String>;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected 3 type arguments
error[E0229]: associated type bindings are not allowed here
--> src/main.rs:4:16
|
4 | type Bar = Foo<C = u32, B = bool, A = String>;
| ^^^^^^^ associated type not allowed here
```
Modify to emit only _one_ error complaining only about the <del>order</del> missing type arguments. | C-enhancement,A-diagnostics,P-low,T-compiler,D-verbose | low | Critical |
337,111,534 | rust | consider fixing common regression with expansion of 2-phase borrows | The following code works without NLL but not with NLL enabled:
https://play.rust-lang.org/?gist=9b797f941b3aa419991e15fd5a2d07a0&version=nightly&mode=debug
```rust
//#![feature(nll)]
struct S {
a: &'static str,
}
fn f(_: &mut S, _: &str) {
}
fn main() {
let mut s = S {
a: "a",
};
f(&mut s, s.a);
}
```
NLL is not *wrong* to reject this code. This is https://github.com/rust-lang/rust/issues/38899. However, it could plausibly be accepted if we expanded two-phase borrows *ever so slightly*... actually, I'm not sure about this *exact* source, or at least it wouldn't quite fit into what I had in mind.
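For reference, a minimal sketch of the workaround that compiles today (just a manual reordering, not the proposed two-phase expansion):
```rust
struct S {
    a: &'static str,
}

fn f(_: &mut S, _: &str) {}

fn main() {
    let mut s = S { a: "a" };
    let a = s.a;  // `&'static str` is `Copy`, so read the field out first
    f(&mut s, a); // the mutable borrow no longer overlaps the read of `s.a`
}
```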
I had in mind an expansion for arguments of type `&mut`, basically, so that `foo(bar)`, if `bar` is an `&mut` that is being reborrowed, would reborrow using a 2-phase borrow. | C-enhancement,P-medium,T-compiler,A-NLL,NLL-complete | low | Critical |
337,114,853 | rust | Adding `impl Add<char> for String` breaks code (adding an impl leads to breakage due to deref coercions) | I just prepared a PR to fix #38784. I wanted to add `Add<char>` and `AddAssign<char>` impls for String. However, this breaks the following code:
```rust
let a = String::from("a");
let b = String::from("b");
a + &b;
```
([Playground](https://play.rust-lang.org/?gist=81fafb1dc39fdad9908b1e730985a589&version=nightly&mode=debug))
Apparently, without `Add<char>`, deref coercion is used to turn `&String` into `&str`. With the additional impl, deref coercion no longer happens and thus it results in:
```
error[E0277]: cannot add `&std::string::String` to `std::string::String`
[...]
= help: the trait `std::ops::Add<&std::string::String>` is not implemented for `std::string::String`
```
This is unexpected, because [RFC 1105](https://github.com/rust-lang/rfcs/blob/master/text/1105-api-evolution.md) says that adding an implementation of a non-fundamental trait is a minor change and thus should not require a major version bump. But if I were to submit a PR which adds `Add<char> for String`, it would probably not be merged, because it breaks a bunch of crates. `a + &b` for `a` and `b` being `String` is widely used.
I would have expected that either deref coercion wouldn't happen in cases where it influences the trait selection (in order to avoid this breakage) or that deref coercion would still happen if there is more than one `impl`. Since the former is not possible anymore, I think the latter is the solution to this problem.
The third option (I can see) would be to change RFC 1105 and amend a section about this special case.
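For reference, a source-level sketch (not one of the options above) that keeps such code compiling whether or not an `Add<char>` impl is ever added, because it no longer relies on deref coercion:
```rust
fn main() {
    let a = String::from("a");
    let b = String::from("b");
    // `as_str()` yields a `&str` directly, so trait selection never needs to
    // coerce `&String`; this works with or without `impl Add<char> for String`.
    let _ab = a + b.as_str();
}
```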
| A-trait-system,T-lang,A-inference,C-bug,A-coercions | low | Critical |
337,133,660 | pytorch | [feature request] freeze() for nn.Module | I want to suggest a freeze() method to freeze an already trained module, and its parameters. However, for modules like BatchNorm it is not sufficient to set requires_grad = False, because the running_mean and running_var are still updated unless the the module is set to .eval() .
Unless I am missing something, ATM, I only can solve this with
```
self.bn.train(self.bn.training and self.bn.weight.requires_grad)
x = self.bn(x)
```
which feels quite hacky.
The idea would be that freeze() sets all the parameters of a module to requires_grad = False, sets self.training = False and sets a flag self.is_frozen = True which blocks all future .train() calls to the module.
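A rough Python sketch of the proposed semantics (the `is_frozen` flag and the `.train()` guard are hypothetical, not existing PyTorch API):
```python
import torch.nn as nn

def freeze(module: nn.Module) -> nn.Module:
    # Stop gradient updates for every parameter of the module.
    for p in module.parameters():
        p.requires_grad = False
    # Put the module (and submodules such as BatchNorm) into eval mode,
    # so running_mean / running_var stop being updated.
    module.eval()
    # Hypothetical flag that future .train() calls would check and respect.
    module.is_frozen = True
    return module
```
With something like this, the BatchNorm workaround above would collapse to `freeze(self.bn)`.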
cc @albanD @mruberry | module: nn,low priority,triaged,enhancement | low | Major |
337,134,877 | go | all: port to Windows/ARM32 | Hi everyone, we will be submitting a patch in the near future that adds Windows/ARM32 support to Go. All but a few tests are passing, and this implementation has been used to compile Go itself and run Docker containers on Windows/ARM32. We look forward to working with the community to iron out the last remaining issues and get this merged! | OS-Windows,NeedsInvestigation,arch-arm | high | Critical |
337,142,564 | go | cmd/go: module chatter hides actual errors | Please answer these questions before submitting your issue. Thanks!
### What version of Go are you using (`go version`)?
go version go1.10.3 darwin/amd64 vgo:2018-02-20.1
### Does this issue reproduce with the latest release?
Yes, I built from source today.
### What operating system and processor architecture are you using (`go env`)?
```
GOARCH="amd64"
GOBIN="/Volumes/Repositories/go/bin"
GOCACHE="/Users/jlorenzini/Library/Caches/go-build"
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Volumes/Repositories/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/go"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/8w/7ndx_j6s36x9hm9dns67j21s3hynqn/T/go-build299230517=/tmp/go-build -gno-record-gcc-switches -fno-common"
VGOMODROOT="/Users/jlorenzini/repos/operator"
```
### What did you do?
vgo mod -vendor
### What did you expect to see?
The vendor directory to be populated with the packages defined in my go.mod.
### What did you see instead?
The vendor directory is empty after the command seemed to download packages.
| NeedsInvestigation,modules | medium | Critical |
337,170,217 | rust | dead_code lint false negative when dead code is in warned-about dead code | ```rust
#![crate_type="dylib"]
// #![allow(dead_code)]
/* pub */ fn dead_code_test() {
#[warn(dead_code)] {
match () {
#![deny(dead_code)]
() => { fn im_dead() {} }
}
fn im_also_dead() {}
}
}
```
Despite the obvious dead code (the `im_dead` function), this compiles just fine on the [playground](http://play.rust-lang.org/?gist=b8c32cec2714f23997a97a951ec5e8b2&version=stable&mode=debug). It gives the following output:
```
warning: function is never used: `dead_code_test`
--> src/lib.rs:4:11
|
L | /* pub */ fn dead_code_test() {
| ^^^^^^^^^^^^^^^^^^^
|
= note: #[warn(dead_code)] on by default
```
Neither `im_dead` nor `im_also_dead` emit their compiler error / warning. It makes sense for `im_also_dead` to not do so because it'd be redundant with the error already given. On the other hand, with the `deny`, it should be a hard error.
You'll also note that there's commented-out code. By uncommenting either one, the false negative is removed and the code correctly fails to compile.
| A-lints,A-trait-system,T-compiler,C-bug,L-dead_code | low | Critical |
337,214,746 | opencv | unnecessary lookup table entries | I am referring to `prefilterXSobel()` in [`calib3d/stereobm`](https://github.com/opencv/opencv/blob/master/modules/calib3d/src/stereobm.cpp#L196).
Table lookup occurs at `d0+2*d1+d2+OFS`.
`dN` is in the range `-255 .. 255` thus `d0+2*d1+d2` is in the range `-4*255 .. 4*255`.
Therefore `d0+2*d1+d2`**+`OFS`** is in the range `4*(256-255) .. 4*(256+255)` ⊂ `0 .. 2*OFS` for `OFS = 4*256`. Thus a capacity of `2*OFS` should be enough and the [additional `256` entries](https://github.com/opencv/opencv/blob/master/modules/calib3d/src/stereobm.cpp#L199) are not needed. | category: calib3d,Hackathon | low | Minor |
337,231,359 | flutter | [camera] Access realtime image frames for pre-processing and effect overlays | Is it possible to access the real-time camera frame feed to do pre-processing or to be used for ML implementations like face-recognition, Image overlays, filters, etc.? | c: new feature,p: camera,package,team-ecosystem,P3,triaged-ecosystem | high | Critical |
337,232,170 | pytorch | Deployment for iOS for 1.0 | I have a quick question regarding your future 1.0 release: do you guys plan on developing a converter from PyTorch models to Core ML? I know Onnx.ai exists. | caffe2 | low | Minor |
337,234,387 | rust | suggest replacing dash with underscore when in-code lint name cannot be found. | I was looking at ````rustc -D help```` for additional lints to enable in one of my projects
````
anonymous-parameters allow detects anonymous parameters
bare-trait-objects allow suggest using `dyn Trait` for trait objects
box-pointers allow use of owned (Box type) heap memory
elided-lifetimes-in-paths allow hidden lifetime parameters are deprecated, try `Foo<'_>`
ellipsis-inclusive-range-patterns allow `...` range patterns are deprecated
missing-copy-implementations allow detects potentially-forgotten implementations of `Copy`
missing-debug-implementations allow detects missing implementations of fmt::Debug
missing-docs allow detects missing documentation for public members
.....
````
however when trying to enable some of these
````rust
#![warn(anonymous-parameters)]
````
I got an obscure error:
````
Compiling fooo v0.1.0 (file:///tmp/fooo)
error: expected one of `(`, `)`, `,`, `::`, or `=`, found `-`
--> src/main.rs:2:18
|
2 | #![warn(anonymous-parameters)]
| ^ expected one of `(`, `)`, `,`, `::`, or `=` here
error: unexpected token: `parameters`
--> src/main.rs:2:19
|
2 | #![warn(anonymous-parameters)]
| ^^^^^^^^^^ unexpected token after this
error: aborting due to 2 previous errors
error: Could not compile `fooo`.
To learn more, run the command again with --verbose.
````
Apparently I have to translate all dashes ````-```` to underscores ````_```` for it to work.
````rust
#![warn(anonymous_parameters)]
````
An appropriate error message would have been greatly appreciated and saved me a couple of minutes figuring this out.
Something like
````
Could not find "anonymous-parameters", did you mean "anonymous_parameters"?
```` | A-diagnostics,A-trait-system,A-parser,T-compiler,C-feature-request,A-suggestion-diagnostics,D-papercut | low | Critical |
337,234,888 | rust | The recursion limit for monomorphizations are different than for macro expansions | Currently the recursion limit for monomorphization is 64 which the [toml_edit](https://github.com/ordian/toml_edit) crate ended up hitting as reported in https://github.com/Marwes/combine/issues/172 . The solution here is to manually specify a higher recursion limit via the `recursion_limit` attribute.
This does mean that the attribute ends up in a bit of a weird situation though, since `recursion_limit` also affects macro expansions which has a default value of 1024. So any explicitly set value between 64 and 1024 actually ends up reducing the recursion limit for macros.
I assume that there is a good reason for the monomorphization limit being so low compared to the macro limit. But if that is the case then it is perhaps a bit problematic that any increase of the limit for the sake of macro expansion will actually also end up increasing it for monomorphizations as well, only it is increased by a factor of 20 or more.
...
I don't think there is anything that is actually directly actionable in this issue now that I have written it out. But I figure I should still post this as I did end up being rather confused by the fact that setting the recursion_limit to 1024 actually fixed the issue in toml_edit, despite the fact that I remember it being increased to 1024 some time ago (only when I dug up the [issue](https://github.com/rust-lang/rust/pull/41676) did I realize it was only for macros) | I-slow,A-macros | low | Minor |
337,237,397 | godot | Rim is broken on a particular mesh | Godot 3.0.4
Windows 10 64 bits
nVidia geForce GTX 1060
I found a case where spatial material rim light is broken, and I can't reproduce it with another mesh so far.
Basically, it makes the mesh uniformly lighter (or all white) instead of applying, well, a rim effect.

Something even weirder: I had this mesh in my project for a while, and I had already gotten rim working with it.
Here is how it looked with working rim (but in a different scene with a different material):

But today I wanted to make a new testbed scene in which I re-used the same mesh, and that's where the bug started happening. The problem persists even if I restart Godot or delete the import folder. I tried to reproduce it with a simple sphere mesh, to no avail.
I made a reduced version of my project containing only the minimum needed to reproduce the bug:
[RimBugAllWhite.zip](https://github.com/godotengine/godot/files/2152126/RimBugAllWhite.zip)
| bug,topic:rendering,confirmed | low | Critical |
337,242,913 | TypeScript | Conditional types can cause TS to miss incorrect interface extensions | Conditional types can cause TS to miss incorrect interface extensions
**TypeScript Version:** 2.9.2, 3.0.0-dev.20180630
**Search Terms:** conditional types, interfaces, extensions, assignability, substitutability, recursive types
**Introduction:**
I really appreciate the introduction of conditional types in TypeScript. They solve many problems that couldn't be solved before. Just the `ReturnType<T>` alone is priceless.
Unfortunately I think there is a new class of issues introduced by the presence of conditional types. Those issues relate to verifying whether an interface properly `extends` another interface.
When interface `B<T>` is declared as extending interface `A` (`interface B<T> extends A { ... }`) and no error is reported in such an interface declaration, I expect that for every type `T`, `B<T>` will be assignable (substitutable) to `A`. I don't know if this property of the type system has a name, but it's listed in the language spec (https://github.com/Microsoft/TypeScript/blob/master/doc/spec.md#71-interface-declarations) as:
> The this-type (section 3.6.3) of the declared interface must be assignable (section 3.11.4) to each of the base type references.
It appears that when verifying whether `B<T>` is assignable to `A`, TypeScript replaces the type of members with a conditional type following the pattern `X extends Y ? A : B` with `A | B`. This ensures that `B<T>` is assignable to `A` only if both branches are assignable to the type of the corresponding member of `A`. However this replacement can be avoided with certain tricks (see the code below). This leads to a situation where the `B<T> extends A` declaration is verified yet for certain types `T` the type `B<T>` is not assignable to `A`.
**Code**
```ts
type Cryptic = { cryptic: "cryptic" }
type IsCryptic<MaybeCryptic> = MaybeCryptic extends Cryptic ? 1 : 0
const any: any = 0
interface A {
mustBeZero: 0
}
type Confuse<A> = [A] & A
// This should report the "Interface 'B<T>' incorrectly extends interface 'A'." error
interface B<T> extends A {
mustBeZero: IsCryptic<Confuse<T>>
}
// The above extension is allowed yet B<T> is not assignable to A for some T:
const incorrect: A = any as B<Cryptic>
```
In the above example inlining `IsCryptic<MaybeCryptic>` solves the problem (the expected error is returned by `tsc`).
A different version of the same issue can be encountered when the type parameter `T` is constrained in the extending interface.
```ts
type Cryptic = { cryptic: "cryptic" }
type IsCryptic<MaybeCryptic> = MaybeCryptic extends Cryptic ? 1 : 0
const any: any = 0
interface A {
mustBeZero: 0
}
type SomeOtherType = { value: number }
interface B<T extends SomeOtherType> extends A {
mustBeZero: IsCryptic<T>
}
// The above extension is allowed yet B<SomeOtherType & T> is not assignable to A for some T:
const incorrect: A = any as B<SomeOtherType & Cryptic>
```
In the above example inlining `IsCryptic<MaybeCryptic>` does *not* help.
Here is yet another way to force an interface extension to pass. In this example even the concrete types of the incompatible interfaces are allowed to be assigned. Only the attempt to assign the right members of both types result in an eventual error:
```ts
interface RecurseToZero<T> {
recurse: RecurseToZero<[T,T]>
mustBeZero: 0
}
interface RecurseToWhat<T> extends RecurseToZero<T> {
recurse: RecurseToWhat<[T,T]>
// The type of "mustBeZero" member will be different from 0 only on a certain recursion depth
mustBeZero: T extends [infer T1, infer T2]
? T1 extends [infer T3, infer T4]
? T3 extends [infer T5, infer T6]
? T5 extends [infer T7, infer T8]
? T7 extends [infer T9, infer T10]
? 1
: 0 : 0 : 0 : 0 : 0
}
// The above extension is allowed and the type RecurseToWhat<null> appears to be assignable to RecurseToZero<null>:
declare let a: RecurseToZero<null>
declare let b: RecurseToWhat<null>
a = b
// Yet the above should not be allowed because the type RecurseToWhat<null> is not structurally assignable to RecurseToZero<null>:
a.recurse.recurse.recurse.recurse.mustBeZero = b.recurse.recurse.recurse.recurse.mustBeZero
// This is because on the certain depths of recursion of the RecurseToWhat<T> the type of "mustBeZero" is 1:
type Zero = RecurseToZero<null>["recurse"]["recurse"]["recurse"]["recurse"]["mustBeZero"]
type What = RecurseToWhat<null>["recurse"]["recurse"]["recurse"]["recurse"]["mustBeZero"]
const incorect: Zero = any as What
```
I believe this particular problem is caused by the way TypeScript compiler avoids infinite assignability checks on recursive types. As far as I could deduce from the code, type checker will avoid verifying the assignability of the same members of the same interfaces more than once. The type arguments of the interfaces will be ignored in order not to loop forever through the ever-growing type argument.
Verifying if `RecurseToWhat<T>` is assignable to `RecurseToZero<T>` required verifying if `RecurseToWhat<[T,T]>` is assignable to `RecurseToZero<[T,T]>` which in turn requires trying with the `[[T,T],[T,T]]` type argument and so on. To avoid endless computations TypeScript allows for at most only one level of recursion per member of the interface. This however is not enough now, because due to the conditional types it is possible to make the assignability of `RecurseToWhat<T>` to `RecurseToZero<T>` fail only for an arbitrarily complex `T`.
This particular issue can also be replicated without the use of extending interfaces:
```ts
interface RecurseToZeroP<T> {
recurse: RecurseToZeroP<[T,T]>
mustBeZero: 0
}
type RecurseToZero = RecurseToZeroP<null>
interface RecurseToWhatP<T> {
recurse: RecurseToWhatP<[T,T]>
// The type of "mustBeZero" member will be different from 0 only on a certain recursion depth
mustBeZero: T extends [infer T1, infer T2]
? T1 extends [infer T3, infer T4]
? T3 extends [infer T5, infer T6]
? T5 extends [infer T7, infer T8]
? T7 extends [infer T9, infer T10]
? 1
: 0 : 0 : 0 : 0 : 0
}
type RecurseToWhat = RecurseToWhatP<null>
// The type RecurseToWhat appears to be assignable to RecurseToZero:
declare let a: RecurseToZero
declare let b: RecurseToWhat
a = b
// Yet the above should not be allowed because the type RecurseToWhat is not structurally assignable to RecurseToZero:
a.recurse.recurse.recurse.recurse.mustBeZero = b.recurse.recurse.recurse.recurse.mustBeZero
// This is because on the certain depths of recursion the type of "mustBeZero" is 1
type Zero = RecurseToZero["recurse"]["recurse"]["recurse"]["recurse"]["mustBeZero"]
type What = RecurseToWhat["recurse"]["recurse"]["recurse"]["recurse"]["mustBeZero"]
const incorect: Zero = any as What
```
**Expected behavior:**
The declarations `interface B<T> extends A { ... }`, `interface B<T extends SomeOtherType> extends A { ... }` and `interface RecurseToWhat<T> extends RecurseToZero<T> { ... }` should fail with the appropriate error listing the correct path through incompatible members, for example:
```
Interface 'B<T>' incorrectly extends interface 'A'.
Types of property 'mustBeZero' are incompatible.
Type 'IsCryptic<Confuse<T>>' is not assignable to type '0'.
Type '0 | 1' is not assignable to type '0'.
Type '1' is not assignable to type '0'.
```
The last code example should fail on the `a = b` assignment.
**Actual behavior:**
All of the interface declarations are parsed without the compile-time error. Only the attempts to actually assign a variable of a certain concrete type of the interface `B<T>` to `A` will fail.
In the example 4, the problem goes even further. The two created types (`RecurseToZero` and `RecurseToWhat`) seemingly appear to be assignable (`RecurseToWhat` to `RecurseToZero`). Only the attempt to assign certain nested members of the corresponding types reveals the error.
**Playground Link:** [link](https://www.typescriptlang.org/play/#src=type%20Cryptic%20%3D%20%7B%20cryptic%3A%20%22cryptic%22%20%7D%0D%0Atype%20IsCryptic%3CMaybeCryptic%3E%20%3D%20MaybeCryptic%20extends%20Cryptic%20%3F%201%20%3A%200%0D%0Aconst%20any%3A%20any%20%3D%200%0D%0A%0D%0Ainterface%20A%20%7B%0D%0A%20%20mustBeZero%3A%200%0D%0A%7D%0D%0A%0D%0Amodule%20CorrectExample%20%7B%0D%0A%20%20%2F%2F%20This%20interface%20declaration%20will%20correctly%20cause%20a%20compile-time%20error%0D%0A%20%20interface%20B%3CT%3E%20extends%20A%20%7B%0D%0A%20%20%20%20mustBeZero%3A%20IsCryptic%3CT%3E%0D%0A%20%20%7D%0D%0A%20%20%0D%0A%20%20const%20incorrect%3A%20A%20%3D%20any%20as%20B%3CCryptic%3E%0D%0A%7D%0D%0A%0D%0Amodule%20BrokenExample1%20%7B%0D%0A%20%20type%20Confuse%3CA%3E%20%3D%20%5BA%5D%20%26%20A%0D%0A%0D%0A%20%20%2F%2F%20This%20should%20report%20the%20%22Interface%20'B%3CT%3E'%20incorrectly%20extends%20interface%20'A'.%22%20error%0D%0A%20%20interface%20B%3CT%3E%20extends%20A%20%7B%0D%0A%20%20%20%20mustBeZero%3A%20IsCryptic%3CConfuse%3CT%3E%3E%0D%0A%20%20%7D%0D%0A%20%20%0D%0A%20%20%2F%2F%20The%20above%20extension%20is%20allowed%20yet%20B%3CT%3E%20is%20not%20assignable%20to%20A%20for%20some%20T%3A%0D%0A%20%20const%20incorrect%3A%20A%20%3D%20any%20as%20B%3CCryptic%3E%0D%0A%7D%0D%0A%0D%0Amodule%20BrokenExample2%20%7B%0D%0A%20%20type%20SomeOtherType%20%3D%20%7B%20value%3A%20number%20%7D%0D%0A%0D%0A%20%20interface%20B%3CT%20extends%20SomeOtherType%3E%20extends%20A%20%7B%0D%0A%20%20%20%20mustBeZero%3A%20IsCryptic%3CT%3E%0D%0A%20%20%7D%0D%0A%0D%0A%20%20%2F%2F%20The%20above%20extension%20is%20allowed%20yet%20B%3CSomeOtherType%20%26%20T%3E%20is%20not%20assignable%20to%20A%20for%20some%20T%3A%0D%0A%20%20const%20incorrect%3A%20A%20%3D%20any%20as%20B%3CSomeOtherType%20%26%20Cryptic%3E%0D%0A%7D%0D%0A%0D%0Amodule%20BrokenExample3%20%7B%0D%0A%20%20interface%20RecurseToZero%3CT%3E%20%7B%0D%0A%20%20%20%20recurse%3A%20RecurseToZero%3C%5BT%2CT%5D%3E%0D%0A%20%20%20%20mustBeZero%3A%200%0D%0A%20%20%7D%0D%0A%0D%0A%20%20interface%20RecurseToWhat%3CT%3E%20extends%20RecurseToZero%3CT%3E%20%7B%0D%0A%20%20%20%20recurse%3A%20RecurseToWhat%3C%5BT%2CT%5D%3E%0D%0A%20%20%20%20%2F%2F%20The%20type%20of%20%22mustBeZero%22%20member%20will%20be%20different%20from%200%20only%20on%20a%20certain%20recursion%20depth%0D%0A%20%20%20%20mustBeZero%3A%20T%20extends%20%5Binfer%20T1%2C%20infer%20T2%5D%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3F%20T1%20extends%20%5Binfer%20T3%2C%20infer%20T4%5D%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3F%20T3%20extends%20%5Binfer%20T5%2C%20infer%20T6%5D%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3F%20T5%20extends%20%5Binfer%20T7%2C%20infer%20T8%5D%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3F%20T7%20extends%20%5Binfer%20T9%2C%20infer%20T10%5D%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3F%201%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3A%200%20%3A%200%20%3A%200%20%3A%200%20%3A%200%0D%0A%20%20%7D%0D%0A%0D%0A%20%20%2F%2F%20The%20above%20extension%20is%20allowed%20and%20the%20type%20RecurseToWhat%3Cnull%3E%20appears%20to%20be%20assignable%20to%20RecurseToZero%3Cnull%3E%3A%0D%0A%20%20declare%20let%20a%3A%20RecurseToZero%3Cnull%3E%0D%0A%20%20declare%20let%20b%3A%20RecurseToWhat%3Cnull%3E%0D%0A%20%20a%20%3D%20b%0D%0A%0D%0A%20%20%2F%2F%20Yet%20the%20above%20should%20not%20be%20allowed%20because%20the%20type%20RecurseToWhat%3Cnull%3E%20is%20not%20structurally%20assignable%20to%20RecurseToZero%3Cnull%3E%3A%0D%0A%20%20a.recurse.recurse.recurse.recurse.mustBeZero%20%3D%20b.recurse.recurse.
recurse.recurse.mustBeZero%0D%0A%20%20%0D%0A%20%20%2F%2F%20This%20is%20because%20on%20the%20certain%20depths%20of%20recursion%20of%20the%20RecurseToWhat%3CT%3E%20the%20type%20of%20%22mustBeZero%22%20is%201%3A%0D%0A%20%20type%20Zero%20%3D%20RecurseToZero%3Cnull%3E%5B%22recurse%22%5D%5B%22recurse%22%5D%5B%22recurse%22%5D%5B%22recurse%22%5D%5B%22mustBeZero%22%5D%0D%0A%20%20type%20What%20%3D%20RecurseToWhat%3Cnull%3E%5B%22recurse%22%5D%5B%22recurse%22%5D%5B%22recurse%22%5D%5B%22recurse%22%5D%5B%22mustBeZero%22%5D%0D%0A%20%20const%20incorect%3A%20Zero%20%3D%20any%20as%20What%0D%0A%7D%0D%0A%0D%0Amodule%20BrokenExample4%20%7B%0D%0A%20%20interface%20RecurseToZeroP%3CT%3E%20%7B%0D%0A%20%20%20%20recurse%3A%20RecurseToZeroP%3C%5BT%2CT%5D%3E%0D%0A%20%20%20%20mustBeZero%3A%200%0D%0A%20%20%7D%0D%0A%0D%0A%20%20type%20RecurseToZero%20%3D%20RecurseToZeroP%3Cnull%3E%0D%0A%0D%0A%20%20interface%20RecurseToWhatP%3CT%3E%20%7B%0D%0A%20%20%20%20recurse%3A%20RecurseToWhatP%3C%5BT%2CT%5D%3E%0D%0A%20%20%20%20%2F%2F%20The%20type%20of%20%22mustBeZero%22%20member%20will%20be%20different%20from%200%20only%20on%20a%20certain%20recursion%20depth%0D%0A%20%20%20%20mustBeZero%3A%20T%20extends%20%5Binfer%20T1%2C%20infer%20T2%5D%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3F%20T1%20extends%20%5Binfer%20T3%2C%20infer%20T4%5D%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3F%20T3%20extends%20%5Binfer%20T5%2C%20infer%20T6%5D%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3F%20T5%20extends%20%5Binfer%20T7%2C%20infer%20T8%5D%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3F%20T7%20extends%20%5Binfer%20T9%2C%20infer%20T10%5D%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3F%201%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3A%200%20%3A%200%20%3A%200%20%3A%200%20%3A%200%0D%0A%20%20%7D%0D%0A%0D%0A%20%20type%20RecurseToWhat%20%3D%20RecurseToWhatP%3Cnull%3E%0D%0A%0D%0A%20%20%2F%2F%20The%20type%20RecurseToWhat%20appears%20to%20be%20assignable%20to%20RecurseToZero%3A%0D%0A%20%20declare%20let%20a%3A%20RecurseToZero%0D%0A%20%20declare%20let%20b%3A%20RecurseToWhat%0D%0A%20%20a%20%3D%20b%0D%0A%0D%0A%20%20%2F%2F%20Yet%20the%20above%20should%20not%20be%20allowed%20because%20the%20type%20RecurseToWhat%20is%20not%20structurally%20assignable%20to%20RecurseToZero%3A%0D%0A%20%20a.recurse.recurse.recurse.recurse.mustBeZero%20%3D%20b.recurse.recurse.recurse.recurse.mustBeZero%0D%0A%20%20%0D%0A%20%20%2F%2F%20This%20is%20because%20on%20the%20certain%20depths%20of%20recursion%20the%20type%20of%20%22mustBeZero%22%20is%201%0D%0A%20%20type%20Zero%20%3D%20RecurseToZero%5B%22recurse%22%5D%5B%22recurse%22%5D%5B%22recurse%22%5D%5B%22recurse%22%5D%5B%22mustBeZero%22%5D%0D%0A%20%20type%20What%20%3D%20RecurseToWhat%5B%22recurse%22%5D%5B%22recurse%22%5D%5B%22recurse%22%5D%5B%22recurse%22%5D%5B%22mustBeZero%22%5D%0D%0A%20%20const%20incorect%3A%20Zero%20%3D%20any%20as%20What%0D%0A%7D%0D%0A%0D%0Amodule%20BrokenExample5%20%7B%0D%0A%20%20enum%20True%20%7B%20true%20%3D%20%22true%22%20%7D%0D%0A%20%20enum%20False%20%7B%20false%20%3D%20%22false%22%20%7D%0D%0A%0D%0A%20%20type%20And%3CBool1%2CBool2%3E%20%3D%20Bool1%20extends%20True%20%3F%20Bool2%20extends%20True%20%3F%20True%20%3A%20False%20%3A%20False%0D%0A%20%20type%20Or%3CBool1%2C%20Bool2%3E%20%3D%20Bool1%20extends%20True%20%3F%20True%20%3A%20Bool2%20extends%20True%20%3F%20True%20%3A%20False%0D%0A%20%20type%20Not%3CBool%3E%20%3D%20Bool%20extends%20False%20%3F%20True%20%3A%20Bool%20extends%20True%20%3F%20False%20%3A%20never%0D%0A%0D%0A%20%20interface%20RecurseToFalseP%3CT%3E%20%7B%0D%0A%20%20%20%20recur
seWithTrue%3A%20RecurseToFalseP%3C%5BT%2CTrue%5D%3E%0D%0A%20%20%20%20recurseWithFalse%3A%20RecurseToFalseP%3C%5BT%2CFalse%5D%3E%0D%0A%20%20%20%20mustBeFalse%3A%20False%0D%0A%20%20%7D%0D%0A%0D%0A%20%20type%20RecurseToFalse%20%3D%20RecurseToFalseP%3Cnull%3E%0D%0A%0D%0A%20%20interface%20RecurseToPropsitionP%3CT%3E%20%7B%0D%0A%20%20%20%20recurseWithTrue%3A%20RecurseToPropsitionP%3C%5BT%2CTrue%5D%3E%0D%0A%20%20%20%20recurseWithFalse%3A%20RecurseToPropsitionP%3C%5BT%2CFalse%5D%3E%0D%0A%20%20%20%20%2F%2F%20The%20type%20of%20%22mustBeFalse%22%20member%20will%20be%20different%20from%20False%20only%20when%20T%20represents%20the%20assignment%20satisfying%20propositional%20logic%20statement%20hardcoded%20below%0D%0A%20%20%20%20mustBeFalse%3A%20T%20extends%20%5Binfer%20R6%2C%20infer%20T7%5D%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3F%20R6%20extends%20%5Binfer%20R5%2C%20infer%20T6%5D%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3F%20R5%20extends%20%5Binfer%20R4%2C%20infer%20T5%5D%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3F%20R4%20extends%20%5Binfer%20R3%2C%20infer%20T4%5D%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3F%20R3%20extends%20%5Binfer%20R2%2C%20infer%20T3%5D%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3F%20R2%20extends%20%5Binfer%20R1%2C%20infer%20T2%5D%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3F%20R1%20extends%20%5Bnull%2C%20infer%20T1%5D%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3F%20And%3CAnd%3CAnd%3COr%3CAnd%3CT1%2CT2%3E%2CNot%3COr%3CT3%2CT4%3E%3E%3E%2CT5%3E%2CT6%3E%2CT7%3E%20%2F%2F%20propsitional%20logic%20with%20T1%2C%20...%2C%20TK%20variables%20goes%20here%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3A%20False%20%3A%20False%20%3A%20False%20%3A%20False%20%3A%20False%20%3A%20False%20%3A%20False%0D%0A%20%20%7D%0D%0A%0D%0A%20%20type%20RecurseToPropsition%20%3D%20RecurseToPropsitionP%3Cnull%3E%0D%0A%0D%0A%20%20%2F%2F%20The%20type%20RecurseToPropsition%20appears%20to%20be%20assignable%20to%20RecurseToZero%3A%0D%0A%20%20declare%20let%20a%3A%20RecurseToFalse%0D%0A%20%20declare%20let%20b%3A%20RecurseToPropsition%0D%0A%20%20a%20%3D%20b%0D%0A%0D%0A%20%20%2F%2F%20In%20order%20for%20the%20TypeScript%20to%20find%20the%20member%20breaking%20the%20structural%20assignability%20it%20needs%20to%20solve%20the%20satisfiability%20problem%20for%20the%20propsition%20hardcoded%20in%20RecurseToPropsitionP%3CT%3E%3A%0D%0A%20%20a%0D%0A%20%20%20%20.recurseWithTrue.recurseWithTrue%0D%0A%20%20%20%20.recurseWithFalse.recurseWithFalse%0D%0A%20%20%20%20.recurseWithTrue.recurseWithTrue.recurseWithTrue%0D%0A%20%20%20%20.mustBeFalse%20%0D%0A%20%20%20%20%3D%0D%0A%20%20b%0D%0A%20%20%20%20.recurseWithTrue.recurseWithTrue%0D%0A%20%20%20%20.recurseWithFalse.recurseWithFalse%0D%0A%20%20%20%20.recurseWithTrue.recurseWithTrue.recurseWithTrue%0D%0A%20%20%20%20.mustBeFalse%0D%0A%7D)
**Related Issues:** None to my knowledge. Please link to some if you find them.
**Fully structural assignability**
Things get more interesting if we change the example 4 to allow two separate paths on each recursion level. In such a situation it seems that the work TypeScript would need to perform in order to find whether the types are actually structurally equivalent could be equivalent to solving a propositional logic satisfiability problem:
```ts
enum True { true = "true" }
enum False { false = "false" }
type And<Bool1,Bool2> = Bool1 extends True ? Bool2 extends True ? True : False : False
type Or<Bool1, Bool2> = Bool1 extends True ? True : Bool2 extends True ? True : False
type Not<Bool> = Bool extends False ? True : Bool extends True ? False : never
interface RecurseToFalseP<T> {
recurseWithTrue: RecurseToFalseP<[T,True]>
recurseWithFalse: RecurseToFalseP<[T,False]>
mustBeFalse: False
}
type RecurseToFalse = RecurseToFalseP<null>
interface RecurseToPropsitionP<T> {
recurseWithTrue: RecurseToPropsitionP<[T,True]>
recurseWithFalse: RecurseToPropsitionP<[T,False]>
// The type of "mustBeFalse" member will be different from False only when T represents the assignment satisfying propositional logic statement hardcoded below
mustBeFalse: T extends [infer R6, infer T7]
? R6 extends [infer R5, infer T6]
? R5 extends [infer R4, infer T5]
? R4 extends [infer R3, infer T4]
? R3 extends [infer R2, infer T3]
? R2 extends [infer R1, infer T2]
? R1 extends [null, infer T1]
? And<And<And<Or<And<T1,T2>,Not<Or<T3,T4>>>,T5>,T6>,T7> // propsitional logic with T1, ..., TK variables goes here
: False : False : False : False : False : False : False
}
type RecurseToPropsition = RecurseToPropsitionP<null>
// The type RecurseToPropsition appears to be assignable to RecurseToZero:
declare let a: RecurseToFalse
declare let b: RecurseToPropsition
a = b
// In order for the TypeScript to find the member path breaking the structural assignability it needs to solve the satisfiability problem for the propsition hardcoded in RecurseToPropsitionP<T>:
a
.recurseWithTrue.recurseWithTrue
.recurseWithFalse.recurseWithFalse
.recurseWithTrue.recurseWithTrue.recurseWithTrue
.mustBeFalse
=
b
.recurseWithTrue.recurseWithTrue
.recurseWithFalse.recurseWithFalse
.recurseWithTrue.recurseWithTrue.recurseWithTrue
.mustBeFalse
```
This is a famously NP-complete problem. So if `P != NP` there is no polynomial way to decide whether two types are structurally equivalent. | Bug,Domain: Conditional Types | low | Critical |
337,246,248 | godot | C# Exporting variables not working when Nullable is used | When writing a script in C# and exporting a Nullable variable the exported variable doesn't appear in the Godot editor.
To reproduce:
1. create a node in your scene
2. add a script to it
3. add the following variable to your class:
```C#
[Export]
public float _testVariable;
[Export]
public Nullable<float> _testVariable2;
```
After you do some magic for the exported variable to appear in the Godot Inspector (mainly rebuilding the project) you will see that _testVariable appears in the Godot Inspector but _testVariable2 does not.
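As a side note, a common workaround sketch (my assumption, not an official recommendation) is to export a plain value plus a flag instead of the `Nullable<float>`:
```C#
[Export]
public bool _hasTestVariable2;
[Export]
public float _testVariable2Value;

// Non-exported helper that reconstructs the nullable view.
public float? TestVariable2 => _hasTestVariable2 ? _testVariable2Value : (float?)null;
```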
About the versions I am using:
Godot 3.0.4.stable.mono.official.916135
macOS High Sierra 10.13.4
Sorry if this is a duplicate, I just couldn't find it.
Thank you,
Blake | enhancement,topic:dotnet | low | Major |
337,254,744 | rust | Lint against float literals like 4444444444_f32 that "overflow" their mantissa | `4444444444_f32` is actually the number `4444444700` because `f32` doesn't have enough bits to represent `4444444444`. This seems like an obvious thing we should be linting against, but http://play.rust-lang.org/?gist=3e360c4ab4deb3ef9cc16b9c9a084f6e&version=stable&mode=debug&edition=2015 emits no warnings today.
I'm not sure if this should be a new lint or part of the existing `overflowing_literals` lint. It obviously "feels" like the same sort of issue, and it's hard to imagine a situation where you'd want to allow/deny/forbid this and not the other kinds of `overflowing_literals`. But it's also clearly not "overflowing" in the usual sense of being an unrepresentable value greater than MAX or less than MIN, rather it's an unrepresentable value in-between multiple representable values.
(thought of this when reading https://github.com/rust-lang/rust/issues/51534#issuecomment-396917857) | A-lints,T-lang,C-feature-request | medium | Critical |
337,263,706 | rust | trace_macros shows nothing if rustc never actually finishes | Observed in #51754, the following `main.rs`
```
#![feature(trace_macros)]
trace_macros(true);
macro_rules! there_is_a_bug {
( $id:ident: $($tail:tt)* ) => {
there_is_a_bug! { $($tail:tt)* }
};
}
fn main() {
there_is_a_bug! { something: more {} }
}
```
never shows *anything* because it never hits an error. The reason that it never hits an error is the focus of #51754, but in summary rustc is performing clones that amount to time spent that is quadratic in the (already exponential) token stream length.
Ideally `trace_macros` should produce output more strictly. I imagine it currently does not due to however it uses the diagnostics API.
Probably even once #51754 is fixed this example will still show nothing except `out of memory (core dumped)`. | C-enhancement,A-diagnostics,A-macros,T-compiler | low | Critical |
337,270,631 | go | cmd/compile: msan cannot handle structs with padding | ### What version of Go are you using (`go version`)?
go version devel +0dc814c Sat Jun 30 01:04:30 2018 +0000 linux/amd64
### Does this issue reproduce with the latest release?
Yes.
### What operating system and processor architecture are you using (`go env`)?
linux/amd64
### What did you do?
Attempted to dereference a C struct with padding from Go with the memory sanitizer enabled:
```go
// msan.go
package main
/*
#include <stdlib.h>
struct s {
int i;
char c;
};
struct s* mks(void) {
struct s* s = malloc(sizeof(struct s));
s->i = 0xdeadbeef;
s->c = 'n';
return s;
}
*/
import "C"
import "fmt"
func main() {
s := *C.mks()
fmt.Println(s.c)
}
```
I compiled with:
```
CC=clang-6.0 CXX=clang++-6.0 go build -msan -o msan msan.go
```
Upon execution, msan crashes spuriously:
```
~/go1.10/misc/cgo/testsanitizers/src/foo$ ./msan
Uninitialized bytes in __msan_check_mem_is_initialized at offset 5 inside [0x701000000000, 8)
==21637==WARNING: MemorySanitizer: use-of-uninitialized-value
#0 0x4e8b08 (/home/benesch/go1.10/misc/cgo/testsanitizers/src/foo/msan+0x4e8b08)
SUMMARY: MemorySanitizer: use-of-uninitialized-value (/home/benesch/go1.10/misc/cgo/testsanitizers/src/foo/msan+0x4e8b08)
Exiting
```
The problem appears to be that the Go instrumentation is coarser than the C instrumentation. Only bytes 0-5 are marked as initialized by C (bytes 6-8 are padding), but Go asks msan to verify that all 8 bytes are initialized when it stores `s`.
In this particular example, there are two easy ways to soothe msan. The first is to remove the padding from the struct (e.g., `struct s { int i; int c }`). The second is to access fields within the struct without storing it into a temporary (e.g., `fmt.Println(C.mks().c)`). Neither of these "workarounds" are viable for real programs. | NeedsInvestigation,compiler/runtime | medium | Critical |
337,286,409 | TypeScript | meta: factory function inconsistencies | Related to #25220
There are quite a few inconsistencies among the factory functions:
* naming: sometimes the name doesn't really describe what it does.
* `createStatement` instead of `createExpressionStatement`: what statement does it create?
* `createBinary`: maybe binary number literal?
* `createParen`: `ParenthesizedExpression` or `ParenthesizedType`?
* `createSpread`: `SpreadElement` or `SpreadAssignment`?
* `createThrow`: becomes ambiguous once the ThrowExpression proposal advances to stage 4
* `createDo`: becomes ambiguous once the DoExpression proposal advances to stage 4
* Should all of the factory function names match the name of the Node exactly? If so, should they be renamed while keeping a deprecated alias around for backwards compatibility (like #25348)?
* nullable parameters:
* `createTypeLiteral` allows the `members` parameter to be `undefined`
* `createInterfaceDeclaration` on the other hand requires an array as `members` parameter
* overload signatures to avoid creating invalid nodes:
* `createYield` has overloads for that purpose
* `createImportClause` or `createExportDeclaration` for example allow all parameters to be `undefined`, resulting in an invalid node
* optional parameters:
* `objectAssignmentInitializer` in `createShorthandPropertyAssignment` is optional
* `typeArguments` in `createTypeReferenceNode` is required although it's nullable
* `decorators`
* e.g. `createConstructor` or `createExportAssignment` require a `decorators` parameter although it's an error to have decorators on those nodes
* `modifiers`
* e.g. `createExportAssignment` requires a `modifiers` parameter although this node cannot have any modifiers (not even `declare`) | Bug,Help Wanted,API | low | Critical |
337,303,671 | TypeScript | Why do we need to manually type unified overload signatures? | Please close if this has been asked before - nothing came up when searching.
## Search Terms
overload, function, multiple, default, combine, unify, top
## Suggestion
Today, when defining an overloaded call signature you have to manually type the implementation. Would it be possible to extend contextual types so TS can infer the implementation's types from context, just like TS already does for non-overloaded signatures?
## Use Cases
Today, contextual typing isn't able to infer parameter types for overloaded call signatures. Instead, users have to manually type the implementation.
## Examples
### Before
```ts
type Reserve = {
(from: Date, to: Date, destination: string): Reservation
(from: Date, destination: string): Reservation
}
let reserve: Reserve = (
from: Date,
toOrDestination: Date | string,
destination?: string
) => { /* ... */ }
```
### After
```ts
type Reserve = {
(from: Date, to: Date, destination: string): Reservation
(from: Date, destination: string): Reservation
}
let reserve: Reserve = (
from, // inferred as Date | Date = Date
toOrDestination, // inferred as Date | string
destination // inferred as string | undefined
) => { /* ... */ }
```
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript / JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. new expression-level syntax)
| Suggestion,Awaiting More Feedback | low | Major |
337,323,163 | godot | Mono: Add reload icon to refresh export variables in the editor | **Godot version:**
3.0/3.1
**OS/device including version:**
All
**Issue description:**
Currently the editor sometimes doesn't consistently reload exported variables, but another issue is when devs are building outside of the editor using msbuild. There's no way to reload the editor to see those changes.
**Suggest:**
Give us a little reload icon to refresh the editor without exiting the editor to detect these changes on command. | enhancement,topic:dotnet | low | Minor |
337,334,626 | rust | Diagnostic `error[E0408]: variable ... is not bound in all patterns` should be more helpful with typos. | Example:
```rust
enum Lol {
Foo,
Bar,
}
fn foo(x: (Lol,Lol)) {
use Lol::*;
match x {
(Foo,Bar)|(Ban,Foo) => {}
_ => {}
}
}
fn main() {
use Lol::*;
foo((Foo,Bar));
}
```
```
error[E0408]: variable `Ban` is not bound in all patterns
--> src/main.rs:9:9
|
9 | (Foo,Bar)|(Ban,Foo) => {}
| ^^^^^^^^^ --- variable not in all patterns
| |
| pattern doesn't bind `Ban`
```
I want it to detect the atypically upper-cased variable and trigger a search for enum variants with a small edit distance. The note about the type might even be shown before the actual error message itself so it isn't missed. | C-enhancement,A-diagnostics,T-compiler,A-suggestion-diagnostics,D-papercut,D-terse | low | Critical |
337,380,266 | TypeScript | Mapped Types Breakdown With Extends (extends / implements have inconsistent behavior, | **TypeScript Version:** 2.8.3, 2.9.1, 3.0.0-dev.20180630
**Search Terms:** `mapped implements extends`, `extends does not exist`
**Code**
```ts
// Take all keys in type T, add "x", and then make a new type out of this. Also, make
// the "x" property have a different type.
type AddX<T> = {
[key in (keyof T | "x")]: (
key extends "x" ? number : string
)
};
{
interface Interface extends AddX<Interface> {
}
let object!: Interface;
// test.ts:12:12 - error TS2339: Property 'x' does not exist on type 'Interface'.
object.x;
// test.ts:14:9 - error TS2322: Type 'Interface' is not assignable to type 'AddX<Interface>'.
// Property 'x' is missing in type 'Interface'.
let explicitObject: AddX<Interface> = object;
class InterfaceClass implements Interface {
// Without this line a type error happens, so x is definitely required on any instance
// of Interface, but it also doesn't exist on variables with a type of Interface.
x!: number;
}
}
{
class Class implements AddX<Class> {
x!: number;
}
let object!: Class;
object.x;
let explicitObject: AddX<Class> = object;
}
```
**Expected behavior:**
No errors from either code section.
**Actual behavior:**
The interface code complains that property x does not exist on type 'Interface'. It also complains that Interface is not assignable to AddX<Interface>, even though it extends that type.
**Playground Link:** https://www.typescriptlang.org/play/index.html#src=type%20AddX%3CT%3E%20%3D%20%7B%0D%0A%20%20%20%20%5Bkey%20in%20(keyof%20T%20%7C%20%22x%22)%5D%3A%20(%0D%0A%20%20%20%20%20%20%20%20key%20extends%20%22x%22%20%3F%20number%20%3A%20string%0D%0A%20%20%20%20)%0D%0A%7D%3B%0D%0A%0D%0A%7B%0D%0A%20%20%20%20interface%20Interface%20extends%20AddX%3CInterface%3E%20%7B%0D%0A%20%20%20%20%20%20%20%20%0D%0A%20%20%20%20%7D%0D%0A%20%20%20%20let%20object!%3A%20Interface%3B%0D%0A%20%20%20%20object.x%3B%0D%0A%0D%0A%20%20%20%20let%20explicitObject%3A%20AddX%3CInterface%3E%20%3D%20object%3B%0D%0A%0D%0A%20%20%20%20class%20InterfaceClass%20implements%20Interface%20%7B%0D%0A%20%20%20%20%20%20%20%20%2F%2F%20Without%20this%20line%20a%20type%20error%20happens%2C%20so%20x%20is%20definitely%20required%20on%20any%20instance%20of%20Interface%2C%20but%20it%20also%0D%0A%20%20%20%20%20%20%20%20%2F%2F%20%20doesn't%20exist%20on%20variables%20with%20a%20type%20of%20Interface.%0D%0A%20%20%20%20%20%20%20%20x!%3A%20number%3B%0D%0A%20%20%20%20%7D%0D%0A%7D%0D%0A%0D%0A%7B%0D%0A%20%20%20%20class%20Class%20implements%20Interface%20%7B%0D%0A%20%20%20%20%20%20%20%20x!%3A%20number%3B%0D%0A%20%20%20%20%7D%0D%0A%20%20%20%20let%20object!%3A%20Class%3B%0D%0A%20%20%20%20object.x%3B%0D%0A%0D%0A%20%20%20%20let%20explicitObject%3A%20AddX%3CClass%3E%20%3D%20object%3B%0D%0A%7D
**Related Issues:**
Possibly related to https://github.com/Microsoft/TypeScript/issues/21326, in both cases we are trying to combine constant properties with mapped types. But I have very little understanding of why my code doesn't work, so they might not be related at all.
| Bug | low | Critical |
337,413,302 | pytorch | [caffe2] how to subtract the mean value? | I have a problem: how does caffe2 subtract the mean value?
```python
# cast to float
data = m.Cast(data_uint8, "data", to=core.DataType.FLOAT)
# sub the mean val
mean = workspace.FeedBlob("meanval", np.array(127.5))
data = m.Sub(data, "meanval", data, broadcast=1)
# scale
data = m.Scale(data, 'data', scale=float(scale_factor))
```
When executing the `m.Sub` line, I got:
TypeError: _CreateAndAddToSelf() takes at most 4 arguments (6 given) | caffe2 | low | Critical |
337,496,629 | go | x/tools/present: editable code is not sync'd between main slide and presenter slide | Please answer these questions before submitting your issue. Thanks!
### What version of Go are you using (`go version`)?
go version go1.10.2 linux/amd64
### Does this issue reproduce with the latest release?
yes
### What operating system and processor architecture are you using (`go env`)?
GOARCH="amd64"
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
### What did you do?
Using `present -notes` to edit code in the editable .code/.play presentation slides with the presenter notes window active.
### What did you expect to see?
The edited codes are synced between the main window and the presenter window.
### What did you see instead?
The edited code is not synced between the two windows. The changes are only shown in the active window, i.e. If editing process in the main window, the codes in the presenter window do not change, and vice versa.
### Notes:
Bug in golang.org/x/tools/cmd/present/static/slides.js
The function setupPlayCodeSync adds an EventListener in a loop using the index variable i (probably expecting the inputHandler function to be instantiated for each listener).
It is not; there is only one instance of inputHandler, and the variable i will contain the final value of the loop counter, which is play.length.
One solution is as follows (JavaScript coders could probably give a better solution):
```
--- golang.org/x/tools/cmd/present/static/slides-orig.js 2017-09-20 21:24:06.052672029 +0700
+++ golang.org/x/tools/cmd/present/static/slides.js 2018-07-02 17:41:45.431850356 +0700
@@ -578,9 +578,10 @@
var play = document.querySelectorAll('div.playground');
for (var i = 0; i < play.length; i++) {
play[i].addEventListener('input', inputHandler, false);
+ play[i].setAttribute('play-index', i);
function inputHandler(e) {
- localStorage.setItem('play-index', i);
+ localStorage.setItem('play-index', e.target.getAttribute('play-index'));
localStorage.setItem('play-code', e.target.innerHTML);
}
}
``` | NeedsInvestigation,Tools | low | Critical |
337,500,317 | rust | panic! source location information does not account for macro expansion | Minimal working example ([playground](https://play.rust-lang.org/?gist=87760b03b9820a3b4e8226998cc72e4e&version=stable&mode=debug&edition=2015)):
```rust
macro_rules! foo {
($e:expr) => {
bar!($e);
baz!($e);
}
}
macro_rules! bar {
($e:expr) => { assert!($e); } // line 8
}
macro_rules! baz {
($e:expr) => { assert!(!$e); } // line 11
}
fn main() {
foo!(true); // line 15
}
```
produces the following run-time error:
```
thread 'main' panicked at 'assertion failed: !true', src/main.rs:15:5
note: Run with `RUST_BACKTRACE=1` for a backtrace.
```
Notice how the `panic!` points at line 15, but the error happens at line 11.
Enabling `RUST_BACKTRACE=1` does not help:
```
stack backtrace:
0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
at libstd/sys/unix/backtrace/tracing/gcc_s.rs:49
1: std::sys_common::backtrace::print
at libstd/sys_common/backtrace.rs:71
at libstd/sys_common/backtrace.rs:59
2: std::panicking::default_hook::{{closure}}
at libstd/panicking.rs:211
3: std::panicking::default_hook
at libstd/panicking.rs:227
4: std::panicking::rust_panic_with_hook
at libstd/panicking.rs:463
5: std::panicking::begin_panic
at /checkout/src/libstd/panicking.rs:397
6: playground::main
at src/main.rs:15
7: std::rt::lang_start::{{closure}}
at /checkout/src/libstd/rt.rs:74
8: std::panicking::try::do_call
at libstd/rt.rs:59
at libstd/panicking.rs:310
9: __rust_maybe_catch_panic
at libpanic_unwind/lib.rs:105
10: std::rt::lang_start_internal
at libstd/panicking.rs:289
at libstd/panic.rs:374
at libstd/rt.rs:58
11: std::rt::lang_start
at /checkout/src/libstd/rt.rs:74
12: main
13: __libc_start_main
14: _start
```
---
It would be better for `assert!` and similar macros to not construct the error message from a `file:line:col`, but to use something better instead (like a macro-expansion API).
A much more useful error message would have been:
```
thread 'main' panicked at 'assertion failed: !true',
src/main.rs:15:5: foo!(true);
src/main.rs:11:19: assert!(!$e);
note: Run with `RUST_BACKTRACE=1` for a backtrace.
``` | A-debuginfo,C-enhancement,T-compiler | low | Critical |
337,579,004 | vscode | User profile | What am going to ask is huge. I know it might take a lot of work. It is not a small thing but very important for me.
I am using VS Code from day one. And during that time I have few OS reinstalled and one PC change. Every time when that happens I need to install all plugins and configure VS Code again.
Now I've realized that I use VS Code without reinstall for quite a while.
1. I have pretty much nice setup, which was made through 1-2 years. But I do not remember all changes I made in configuration or all extensions. I remember one day accidentally I found an article that recommended to use some cool font, I followed procedure to install and activate it. Also I've went through so many themes before I've found the one I like.
But if my PC crashes, I am not sure I'll ever get back to the same setup. Because I do not remember all extensions I have now, and where to look the name of that font and how to activate it. I do not remember name of the theme. So PC crush it will be pretty much devastating experience in terms of IDE setup.
2. I have PC at work and at home and laptop. I have VS Code installed everywhere, I often jump between PCs doing same job and sync through github. But my setup is different on every PC. And it is not something that I like. I cannot find that cool article about that cool font. So I just resigned that this is how it works. Sometimes I want to run a tool and discover it is not here because I did not yet installed it on this VS Code instance.
3. Sometimes I have to code few lined on the PC of other people as a supervisor. But setup of those users completely turn me off. I just cannot work on their color theme and without tools I love and depend on.
4. VS Code become a very advanced, versatile and flexible tool. МЫ Code setup for PHP developer looks and feel different then setup of JS developer or Markdown writer. For instance when I edit markdown files I want wrap lines at 80, when edit JS files I want to wrap limes at 160. And so on.
**feature request**
Allow users lo login to VS Code with live ID, and create cloud profile, and VS code not only restore configuration, key map, themes but also installs all extensions I have. If I install something new on PC one, when I open PC two it will automatically update. Also user can create few profiles for example one for working with PHP with needed extensions and theme, another for documentation work with markdown and such. not only user can restore environment, user can have different environments and quickly switch between them.
So I could have uniform experience, where ever I am. Even if I am in internet cafe on the edge of the universe, just install VS Code to write few lines, login, and here we are, MY VS Code. My. Only my, exactly like it was last time i'd opened it on the other edge of the universe.
| feature-request | medium | Critical |
337,581,361 | rust | Doc-comments inside using groups | The following is currently not permitted:
```rust
pub use third_party::{
/// Doc-comment explaining reexported entity.
some_function,
// Normal comments are allowed, though.
some_other_function,
};
```
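Today the doc-comment forces the re-export out of the group, e.g. (a sketch of the current workaround, using the same hypothetical `third_party` module):
```rust
/// Doc-comment explaining reexported entity.
pub use third_party::some_function;

pub use third_party::{
    // Normal comments are allowed, though.
    some_other_function,
};
```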
I'm very fond of group usings, so I prefer to use them everywhere, but it is impossible in situations where the documentation is needed (we have encountered this problem [here](https://github.com/exonum/exonum/pull/756/files#diff-db51f3139619c4526a5967fac3ee28d9R42), by the way). | T-rustdoc,A-attributes,T-compiler,C-feature-request | low | Major |
337,584,262 | vscode | [json] don't suggest top level snippet if there's already an object |
Issue Type: <b>Bug</b>
- Open one of the user snippets JSON files
- Type " to bring up the empty snippet suggestion
- Tab to insert it
The provided snippet inserts top-level curly braces, which (I don't know JSON well) an existing snippet file already has, making the new snippet invalid.
```JSON
// Valid
{
"Existing Snippet": {
"prefix": "my_snippet",
"body": "this is a snippet",
"description": "I'm an existing snippet"
}
}
// "End of file expected" warning on the next line
{
"snippetName": {
"prefix": "prefix",
"body": "snippet",
"description": "description"
}
}
```
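Arguably, when the file already has the top-level braces, the suggestion should insert only the snippet entry itself, something like this (illustrative only, not what the editor currently offers):

```JSON
"snippetName": {
    "prefix": "prefix",
    "body": "snippet",
    "description": "description"
}
```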
VS Code version: Code 1.24.1 (24f62626b222e9a8313213fb64b10d741a326288, 2018-06-13T17:51:32.889Z)
OS version: Windows_NT x64 6.3.9600
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i7-3630QM CPU @ 2.40GHz (8 x 2395)|
|GPU Status|2d_canvas: enabled<br>flash_3d: enabled<br>flash_stage3d: enabled<br>flash_stage3d_baseline: enabled<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>native_gpu_memory_buffers: disabled_software<br>rasterization: disabled_software<br>video_decode: enabled<br>video_encode: enabled<br>vpx_decode: unavailable_software<br>webgl: enabled<br>webgl2: enabled|
|Memory (System)|15.95GB (10.49GB free)|
|Process Argv|C:\Program Files\Microsoft VS Code\Code.exe|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (2)</summary>
Extension|Author (truncated)|Version
---|---|---
vscode-lua|gcc|0.1.2
color-highlight|nau|2.3.0
(1 theme extensions excluded)
</details>
| feature-request,json | low | Critical |
337,586,991 | go | gccgo: syscall.Syscall error result is not reliable | The implementation of syscall.Syscall in the Go frontend version of the library sets `errno` to `0`, calls the C library function `syscall`, and then checks `errno`. If the final value of `errno` is not `0`, then `syscall.Syscall` returns a non-nil `error` value. This is the only documented way to check whether the C `syscall` function failed. Unfortunately, it is not reliable if a signal handler runs on the thread while `syscall` is executing. The signal handler may happen to set `errno` to a non-zero value while it executes. We should figure out a more reliable way to determine whether `syscall` failed. This may require writing assembly code. | NeedsInvestigation | low | Critical |
337,589,375 | go | cmd/link: exporting `free` c function from c-shared library hangs executable with that library LD_PRELOADed | ### What version of Go are you using (`go version`)?
go version go1.10.2 linux/amd64
### Does this issue reproduce with the latest release?
No idea
### What operating system and processor architecture are you using (`go env`)?
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/tumdum/.cache/go-build"
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/tumdum/go"
GORACE=""
GOROOT="/usr/local/go"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build872026307=/tmp/go-build -gno-record-gcc-switches"
### What did you do?
I tried to intercept the C function `free` via LD_PRELOAD.
```
$ cat main.go
package main
import "unsafe"
import "C"
//export free
func free(p unsafe.Pointer) {}
func main() {}
$ go build -o fake_malloc.so -buildmode=c-shared main.go
$ LD_PRELOAD=./fake_malloc.so yes
# it never prints nor ends
# in another shell:
$ sudo gdb -q -batch -ex "attach $(pidof yes)" -ex "thread apply all bt"
[New LWP 7421]
Loading Go Runtime support.
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
0x00007fa6355d1072 in futex_wait_cancelable (private=<optimized out>, expected=0, futex_word=0x7fa635ed1488 <runtime_init_cond+40>) at ../sysdeps/unix/sysv/linux/futex-internal.h:88
88      ../sysdeps/unix/sysv/linux/futex-internal.h: No such file or directory.
Thread 2 (Thread 0x7fa6355c2700 (LWP 7421)):
#0 0x00007fa6355d1072 in futex_wait_cancelable (private=<optimized out>, expected=0, futex_word=0x7fa635ed1488 <runtime_init_cond+40>) at ../sysdeps/unix/sysv/linux/futex-internal.h:88
#1 __pthread_cond_wait_common (abstime=0x0, mutex=0x7fa635ed1420 <runtime_init_mu>, cond=0x7fa635ed1460 <runtime_init_cond>) at pthread_cond_wait.c:502
#2 __pthread_cond_wait (cond=0x7fa635ed1460 <runtime_init_cond>, mutex=0x7fa635ed1420 <runtime_init_mu>) at pthread_cond_wait.c:655
#3 0x00007fa635c4a423 in _cgo_wait_runtime_init_done () at gcc_libinit.c:40
#4 0x00007fa635c4a2ae in free (p0=0x0) at _cgo_export.c:19
#5 0x00007fa6355cc26d in __pthread_attr_destroy (attr=<optimized out>) at pthread_attr_destroy.c:40
#6 0x00007fa635c4a690 in x_cgo_init (g=0x7fa635eb55c0 <runtime.g0>, setg=<optimized out>) at gcc_linux_amd64.c:49
#7 0x00007fa635c41407 in runtime.rt0_go () at /usr/local/go/src/runtime/asm_amd64.s:199
#8 0x00007fa6355c2700 in ?? ()
#9 0x0000000000000000 in ?? ()
Thread 1 (Thread 0x7fa6360be740 (LWP 7420)):
#0 0x00007fa6355d1072 in futex_wait_cancelable (private=<optimized out>, expected=0, futex_word=0x7fa635ed1488 <runtime_init_cond+40>) at ../sysdeps/unix/sysv/linux/futex-internal.h:88
#1 __pthread_cond_wait_common (abstime=0x0, mutex=0x7fa635ed1420 <runtime_init_mu>, cond=0x7fa635ed1460 <runtime_init_cond>) at pthread_cond_wait.c:502
#2 __pthread_cond_wait (cond=0x7fa635ed1460 <runtime_init_cond>, mutex=0x7fa635ed1420 <runtime_init_mu>) at pthread_cond_wait.c:655
#3 0x00007fa635c4a423 in _cgo_wait_runtime_init_done () at gcc_libinit.c:40
#4 0x00007fa635c4a2ae in free (p0=p0@entry=0x5615ac7d7390) at _cgo_export.c:19
#5 0x00007fa63580fb4f in _nl_load_locale_from_archive (category=category@entry=12, namep=namep@entry=0x7ffde9675240) at loadarchive.c:190
#6 0x00007fa63580e6c7 in _nl_find_locale (locale_path=0x0, locale_path_len=0, category=category@entry=12, name=name@entry=0x7ffde9675240) at findlocale.c:154
#7 0x00007fa63580ddfc in __GI_setlocale (category=<optimized out>, locale=<optimized out>) at setlocale.c:340
#8 0x00005615ac2e085d in ?? ()
#9 0x00007fa6358031c1 in __libc_start_main (main=0x5615ac2e0830, argc=1, argv=0x7ffde9675428, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7ffde9675418) at ../csu/libc-start.c:308
#10 0x00005615ac2e0aba in ?? ()
```
This works fine with the equivalent C version:
```
$ cat main.c
void free(void* p)
{
}
$ g++ main.c -fPIC -shared -ldl -o fake_malloc.so
$ LD_PRELOAD=./fake_malloc.so yes
y
y
...
```
It looks like some sort of deadlock in _cgo_wait_runtime_init_done
### What did you expect to see?
Stream of 'y'.
### What did you see instead?
Nothing.
| help wanted,NeedsInvestigation,compiler/runtime | low | Critical |
337,626,984 | TypeScript | Generic methods with type parameters in generic constraints cannot be overridden | **TypeScript Version:** 3.0.0-dev.20180630
**Search Terms:** generic method, subclass, override, type parameter constraint
**Code**
```ts
class A {
m<T, U extends T>(t: T, u: U) { }
}
class B extends A {
m<T, U extends T>(t: T, u: U) { }
// ﹋ error
}
```
**Expected behavior:**
No errors; the `m` method in the subclass has exactly the same signature as the method it is overriding.
**Actual behavior:**
The compiler warns on the method override `B.m`:
```
Property 'm' in type 'B' is not assignable to the same property in base type 'A'.
Type '<T, U extends T>(t: T, u: U) => void' is not assignable
to type '<T, U extends T>(t: T, u: U) => void'.
Two different types with this name exist, but they are unrelated.
Types of parameters 'u' and 'u' are incompatible.
Type 'U' is not assignable to type 'T'.
Type 'T' is not assignable to type 'T'.
Two different types with this name exist, but they are unrelated.
```
Note that even if you change the names of the type parameters in method `B.m`, as in:
```ts
class B extends A {
m<T2, U2 extends T2>(t: T2, u: U2) { }
}
```
you still get the error `Type 'T' is not assignable to type 'T'. Two different types with this name exist, but they are unrelated.`
If you remove the `extends T` generic constraint, it compiles.
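For comparison, the same override with the constraint removed is accepted (a minimal sketch):

```ts
class A {
  m<T, U>(t: T, u: U) { }
}

class B extends A {
  m<T, U>(t: T, u: U) { } // no error once `U extends T` is dropped
}
```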
**Playground Link:**
[here](https://www.typescriptlang.org/play//#src=class%20A%20%7B%0D%0A%20%20m%3CT%2C%20U%20extends%20T%3E%28t%3A%20T%2C%20u%3A%20U%29%20%7B%20%7D%0D%0A%7D%0D%0A%0D%0Aclass%20B%20extends%20A%20%7B%0D%0A%20%20m%3CT%2C%20U%20extends%20T%3E%28t%3A%20T%2C%20u%3A%20U%29%20%7B%20%7D%0D%0A%7D)
**Related Issues:**
Nothing obviously related; #23960 maybe?
---
So, is this a compiler bug? Or is it intended, and if so, what's the reasoning for it? This issue is inspired by [a Stack Overflow question](https://stackoverflow.com/questions/51137379/override-abstract-method-typescript) that left me scratching my head.
| Bug | low | Critical |
337,638,127 | TypeScript | Infer project references from common monorepo patterns / tools | Genesis: see section "Repetitive configuration" in https://github.com/Microsoft/TypeScript/issues/3469#issuecomment-400439520
## Search Terms
monorepo infer project references automatically yarn lerna workspace package.json
## Suggestion
For common monorepo managers, we should natively understand cross-project references declared in `package.json` as if they were declared in `tsconfig.json`
Open questions:
* Which formats (lerna, yarn, pnpm, etc) would be supported? Can all of them be consistently detected, or would you need to opt in to a specific "monorepo format" to enable a specific resolution algorithm?
* How do we find the `tsconfig.json` file? This data is actually not present in the current (non-tsconfig) dependency graph. We could assume it to be in the package root; what if it's elsewhere?
* Would you need to opt in? Would there be a way to opt out? What should that look like?
## Use Cases
* Monorepos of all (supportable) flavors
## Examples
https://github.com/RyanCavanaugh/learn-a
This repo has a fair bit of duplication where projects need to write down their dependencies in package.json *and* as references (with different syntax) in `tsconfig.json`.
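For illustration, the duplication looks roughly like this (hypothetical package names and paths, not taken from that repo):

```jsonc
// packages/pkg2/package.json
{
  "name": "@myscope/pkg2",
  "dependencies": { "@myscope/pkg1": "^1.0.0" }
}

// packages/pkg2/tsconfig.json (the same edge written a second time, in a different syntax)
{
  "references": [{ "path": "../pkg1" }]
}
```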
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript / JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. new expression-level syntax)
| Suggestion,In Discussion,Scenario: Monorepos & Cross-Project References | high | Critical |
337,690,971 | flutter | Material splash persists after leaving/reopening app | Internal: b/292548233
If the user leaves the app during a touch interaction, then reopens the app from the device home screen, the material splash is still visible. Instead, I would expect the splash to be completely gone when the app reopens.
This can happen naturally if the user taps a link in the app which bounces them out to a different app. Or a little less naturally, if the user hits the home button while their finger is already resting on the screen.
Some observations:
- It doesn't matter how long the user has been away from the app; if they left with a splash intact then they will briefly see it upon returning.
- Usually the splash fades away upon returning, but sometimes I've gotten it stuck on screen until the next tap interaction.
## Steps to Reproduce
1. Rest your finger on a widget with a material splash and let the splash grow.
2. With your finger still on the screen, press the device's home button to exit the app.
3. Open the app again (and lift your finger off the screen). The splash will still be visible in the app.
For instance, with this layout:
```dart
Widget build(BuildContext context) {
return Material(
child: InkWell(
child: Center(child: Text('hello')),
onTap: () {},
),
);
}
```
I captured this screenshot after returning to the app as noted in step 3 above, with no fingers on the screen:

| framework,f: material design,customer: mulligan (g3),has reproducible steps,P2,found in release: 3.3,found in release: 3.5,team-design,triaged-design | low | Major |
337,736,974 | godot | Deleting textures. Scene editor doesn't update sprite which was using it |
**Godot version:** 3.1.dev.custom_build.ecee0c9
**OS/device including version:** Windows 10 x64
**Issue description:** As in the issue title; the scene editor only updates after restarting.
**Steps to reproduce:** Add a texture. Create a scene with a Sprite. Assign the texture to the Sprite. Delete the texture from the FileSystem view (dock). Additionally, you can delete it from the .import folder (for some reason it won't delete the imported version of the texture; I'm also not sure if that's intended). Look at the Sprite in the scene editor. | enhancement,discussion,topic:editor | low | Minor |
337,822,617 | rust | right shift of i128 by zero fails on s390x (SystemZ) | I've set up a repository that reproduces this issue on Travis (using QEMU): https://github.com/gnzlbg/repro_s390x
The following code right shifts a `<1 x i128>` containing `1` by `0`. The result of the shift should be `1`, but on debug builds it is `0`, causing the following code to panic (on release the code does not panic):
```rust
#![feature(repr_simd, platform_intrinsics)]
#![allow(non_camel_case_types)]
#[derive(Copy,Clone)]
#[repr(simd)]
pub struct i128x1(i128);
extern "platform-intrinsic" {
pub fn simd_shr<T>(x: T, y: T) -> T;
}
#[test]
pub fn test() {
unsafe {
let z = i128x1(0 as i128);
let o = i128x1(1 as i128);
if simd_shr(o, z).0 != o.0 {
panic!();
}
}
}
```
The `test` function generates the following LLVM-IR
```llvm
define void @repro_s390x::test() unnamed_addr #0 {
start:
%tmp_ret = alloca <1 x i128>, align 16
%o = alloca <1 x i128>, align 16
%z = alloca <1 x i128>, align 16
%0 = bitcast <1 x i128>* %z to i128*
store i128 0, i128* %0, align 8
%1 = bitcast <1 x i128>* %o to i128*
store i128 1, i128* %1, align 8
%2 = load <1 x i128>, <1 x i128>* %o, align 16
%3 = load <1 x i128>, <1 x i128>* %z, align 16
%4 = ashr <1 x i128> %2, %3
store <1 x i128> %4, <1 x i128>* %tmp_ret, align 16
%5 = load <1 x i128>, <1 x i128>* %tmp_ret, align 16
br label %bb1
bb1: ; preds = %start
%6 = bitcast <1 x i128> %5 to i128
%7 = bitcast <1 x i128>* %o to i128*
%8 = load i128, i128* %7, align 8
%9 = icmp ne i128 %6, %8
br i1 %9, label %bb2, label %bb3
bb2: ; preds = %bb1
; call std::panicking::begin_panic
call void @_ZN3std9panicking11begin_panic17h3f2f8b63a0f87b42E([0 x i8]* noalias nonnull readonly bitcast (<{ [14 x i8] }>* @byte_str.2 to [0 x i8]*), i64 14, { [0 x i64], { [0 x i8]*, i64 }, [0 x i32], i32, [0 x i32], i32, [0 x i32] }* noalias readonly dereferenceable(24) bitcast (<{ i8*, [16 x i8] }>* @byte_str.1 to { [0 x i64], { [0 x i8]*, i64 }, [0 x i32], i32, [0 x i32], i32, [0 x i32] }*))
unreachable
bb3: ; preds = %bb1
ret void
}
```
which is lowered to
```asm
repro_s390x::test (src/lib.rs:12):
stmg %r11, %r15, 88(%r15)
aghi %r15, -240
lgr %r11, %r15
lgr %r0, %r15
lgr %r1, %r0
aghi %r1, -24
la %r0, 168(%r1)
lgr %r2, %r0
nill %r2, 65520
lgr %r15, %r1
lgr %r0, %r15
lgr %r1, %r0
aghi %r1, -24
la %r0, 168(%r1)
lgr %r3, %r0
nill %r3, 65520
lgr %r15, %r1
lgr %r0, %r15
lgr %r1, %r0
aghi %r1, -24
la %r0, 168(%r1)
lgr %r4, %r0
nill %r4, 65520
lgr %r15, %r1
mvghi 8(%r4), 0
mvghi 0(%r4), 0
mvghi 8(%r3), 1
mvghi 0(%r3), 0
lg %r0, 0(%r3)
lg %r1, 8(%r3)
lg %r5, 0(%r4)
lg %r4, 8(%r4)
stg %r4, 200(%r11)
stg %r5, 192(%r11)
stg %r1, 216(%r11)
stg %r0, 208(%r11)
la %r0, 224(%r11)
la %r1, 208(%r11)
la %r4, 192(%r11)
stg %r2, 184(%r11)
lgr %r2, %r0
stg %r3, 176(%r11)
lgr %r3, %r1
brasl %r14, __ashrti3@PLT
lg %r0, 224(%r11)
lg %r1, 232(%r11)
lg %r2, 184(%r11)
stg %r1, 8(%r2)
stg %r0, 0(%r2)
lg %r0, 8(%r2)
lg %r1, 0(%r2)
stg %r0, 168(%r11)
stg %r1, 160(%r11)
j .LBB0_1
.LBB0_1:
lg %r1, 176(%r11)
lg %r0, 8(%r1)
lg %r2, 0(%r1)
lg %r3, 160(%r11)
xgr %r3, %r2
lg %r2, 168(%r11)
xgr %r2, %r0
ogr %r2, %r3
cghi %r2, 0
je .LBB0_3
j .LBB0_2
.LBB0_2:
larl %r2, .Lbyte_str.2
larl %r4, .Lbyte_str.1
lghi %r3, 14
brasl %r14, std::panicking::begin_panic@PLT
j .Ltmp9+2
.LBB0_3:
lmg %r11, %r15, 328(%r11)
br %r14
```
cc @rkruppe
Who maintains Rust's SystemZ support?
| O-SPARC,C-bug,O-SystemZ | low | Critical |
337,900,131 | pytorch | [Caffe2] Error running net train when running resnet50 | ## Issue description
I am getting the following error when training resnet50:
```
E0703 14:25:24.046447 34382 prefetch_op.h:110] Prefetching error basic_string::_M_construct null not valid
E0703 14:25:24.046572 34336 prefetch_op.h:83] Prefetching failed.
E0703 14:25:24.046938 34336 net_simple.cc:68] Operator failed: input: "train_reader" output: "imonaboat/data_nhwc" output: "imonaboat/label" name: "" type: "ImageInput" arg { name: "std" f: 128 } arg { name: "scale" i: 256 } arg { name: "cudnn_exhaustive_search" i: 0 } arg { name: "crop" i: 224 } arg { name: "is_test" i: 0 } arg { name: "use_cudnn" i: 1 } arg { name: "batch_size" i: 32 } arg { name: "mirror" i: 1 } arg { name: "mean" f: 128 } device_option { device_type: 1 cuda_gpu_id: 0 }
```
## Code example
The error occurs here: `workspace.RunNet(train_model.net.Proto().name)`
I am following this tutorial: https://github.com/caffe2/tutorials/blob/master/Multi-GPU_Training.ipynb
- Caffe2: | caffe2 | low | Critical |
337,935,730 | flutter | Row/Column with CrossAxisAlignment.stretch is unclear | By answering a lot of questions on Stack Overflow, I realized many people struggle to correctly use `CrossAxisAlignment.stretch` when they want `Row/Column` to take the least amount of cross-axis size.
The solution is quite straightforward:
```dart
IntrinsicHeight(
child: Row(
crossAxisAlignment: CrossAxisAlignment.stretch,
...
)
)
```
and the same thing, but with `IntrinsicWidth`, for `Column`. But there's a catch:
This isn't intuitive. It's hard to discover `IntrinsicWidth/Height` and to understand why and when we should use them. And the warning:
> This class is relatively expensive
adds a layer of fear on top of the unknown.
I'll admit that even I, when I started Flutter, spent a few hours in hell trying to achieve such a layout.
_____
Taking this into consideration, I think it would be great to either make it more intuitive or help newcomers discover `IntrinsicHeight/Width`
This could be a new property on `Flex` such as `crossAxisSize: CrossAxisSize.min` or something similar.
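At the call site, such a hypothetical API might read like this (the `crossAxisSize`/`CrossAxisSize` names are illustrative; nothing like this exists in the framework today):

```dart
Row(
  crossAxisAlignment: CrossAxisAlignment.stretch,
  crossAxisSize: CrossAxisSize.min, // hypothetical: shrink-wrap the cross axis
  children: <Widget>[
    // ...
  ],
)
```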
We could also think of a cookbook entry, or add the example from above to the [IntrinsicHeight](https://docs.flutter.io/flutter/widgets/IntrinsicHeight-class.html) documentation as a usage example.
| framework,d: api docs,P2,team-framework,triaged-framework | low | Major |
337,954,214 | javascript-algorithms | Translate to Russian | Hello.
I want to translate all the README files into Russian. Are there any rules for translation? | enhancement | low | Major |
338,028,836 | go | gccgo: export data linked into binary | With gccgo,
```
$ go version
go version go1.10.3 gccgo (GCC) 9.0.0 20180622 (experimental) linux/amd64
$ gccgo --version
gccgo (GCC) 9.0.0 20180622 (experimental)
Copyright (C) 2018 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
```
```
$ go build -gccgoflags="-static-libgo" hello.go
$ objdump -h hello
hello: file format elf64-x86-64
Sections:
Idx Name Size VMA LMA File off Algn
...
31 .go_export 000363b8 0000000000000000 0000000000000000 002b460b 2**0
CONTENTS, READONLY
...
```
Dumping out the .go_export section, it does indeed look like export data. I think it is not needed at run time, so it probably should not be linked into the binary.
`-static-libgo` is just for demonstration. The export data is also present in the default dynamically linked binary, or in a completely `-static` binary, though the size varies.
With some big binaries, like kubernetes, the export data can be quite big (over 40% of the binary size).
cc @ianlancetaylor
| NeedsFix | low | Major |
338,032,403 | go | cmd/compile: debug info for inlined funcs are sometimes lost | with go version devel +23ce272bb1 Mon Jul 2 17:50:00 2018 +0000 linux/amd64
<pre>
$ cat main.go
package main
import "fmt"
func main() {
x := &X{Foo: 10}
y := InlineThis(x, "bar")
fmt.Printf("Y = %+v\n", y)
}
type X struct {
Foo int
}
func InlineThis(x *X, s string) int {
y := x.Foo + len(s)
return y
}
</pre>
With go1.11, I expected debug info for `InlineThis` to be present in the (default) optimized binary,
but it isn't.
With `--gcflags="-N"`, I could see some trace of `InlineThis` in the binary.
<pre>
<1><732df>: Abbrev Number: 3 (DW_TAG_subprogram)
<732e0> DW_AT_name : main.InlineThis
<732f0> DW_AT_inline : 1 (inlined)
<732f1> DW_AT_external : 1
</pre>
@heschik | NeedsInvestigation,Debugging,compiler/runtime | low | Critical |
338,049,467 | nvm | NVM uses packages which failed checksum, with warning only. It should fail to install, instead. | - Operating system and version:
Mac OS/X High Sierra
- `nvm debug` output:
<details>
<!-- do not delete the following blank line -->
```sh
nvm --version: v0.33.11
$TERM_PROGRAM: Apple_Terminal
$SHELL: /bin/bash
$SHLVL: 1
$HOME: /Users/doug
$NVM_DIR: '$HOME/.nvm'
$PATH: $NVM_DIR/versions/node/v10.4.0/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Applications/VMware Fusion.app/Contents/Public
$PREFIX: ''
$NPM_CONFIG_PREFIX: ''
$NVM_NODEJS_ORG_MIRROR: ''
$NVM_IOJS_ORG_MIRROR: ''
shell version: 'GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin17)'
uname -a: 'Darwin 17.6.0 Darwin Kernel Version 17.6.0: Tue May 8 15:22:16 PDT 2018; root:xnu-4570.61.1~1/RELEASE_X86_64 x86_64'
OS version: Mac 10.13.5 17F77
curl: /usr/bin/curl, curl 7.54.0 (x86_64-apple-darwin17.0) libcurl/7.54.0 LibreSSL/2.0.20 zlib/1.2.11 nghttp2/1.24.0
wget: /usr/local/bin/wget, GNU Wget 1.19.4 built on darwin17.3.0.
git: /usr/bin/git, git version 2.15.2 (Apple Git-101.1)
grep: /usr/bin/grep, grep (BSD grep) 2.5.1-FreeBSD
awk: /usr/bin/awk, awk version 20070501
sed: illegal option -- -
usage: sed script [-Ealn] [-i extension] [file ...]
sed [-Ealn] [-i extension] [-e script] ... [-f script_file] ... [file ...]
sed: /usr/bin/sed,
cut: illegal option -- -
usage: cut -b list [-n] [file ...]
cut -c list [file ...]
cut -f list [-s] [-d delim] [file ...]
cut: /usr/bin/cut,
basename: illegal option -- -
usage: basename string [suffix]
basename [-a] [-s suffix] string [...]
basename: /usr/bin/basename,
rm: illegal option -- -
usage: rm [-f | -i] [-dPRrvW] file ...
unlink file
rm: /bin/rm,
mkdir: illegal option -- -
usage: mkdir [-pv] [-m mode] directory ...
mkdir: /bin/mkdir,
xargs: illegal option -- -
usage: xargs [-0opt] [-E eofstr] [-I replstr [-R replacements]] [-J replstr]
[-L number] [-n number [-x]] [-P maxprocs] [-s size]
[utility [argument ...]]
xargs: /usr/bin/xargs,
nvm current: v10.4.0
which node: $NVM_DIR/versions/node/v10.4.0/bin/node
which iojs:
which npm: $NVM_DIR/versions/node/v10.4.0/bin/npm
npm config get prefix: $NVM_DIR/versions/node/v10.4.0
npm root -g: $NVM_DIR/versions/node/v10.4.0/lib/node_modules
```
</details>
- `nvm ls` output:
<details>
<!-- do not delete the following blank line -->
```sh
v8.2.1
v8.9.3
-> v10.4.0
default -> 8.2.1 (-> v8.2.1)
node -> stable (-> v10.4.0) (default)
stable -> 10.4 (-> v10.4.0) (default)
iojs -> N/A (default)
lts/* -> lts/carbon (-> N/A)
lts/argon -> v4.9.1 (-> N/A)
lts/boron -> v6.14.3 (-> N/A)
lts/carbon -> v8.11.3 (-> N/A)
```
</details>
- How did you install `nvm`? (e.g. install script in readme, Homebrew):
From script on main site.
- What steps did you perform?
I had `v8.9.3` in use and wanted to switch to a later version. I typed `nvm use v10.4.0`. nvm told me it was not installed, so I ran the install command: `nvm install v10.4.0`.
- What happened?
I had an outdated version of `shasum` sitting in my `/usr/local/bin` folder, which was giving me wrong checksums. When I attempted to install node, the checksum test failed, but it downloaded the package anyway, and installed it. Here is the output:
```
$ nvm install v10.4.0
Downloading and installing node v10.4.0...
Local cache found: $NVM_DIR/.cache/bin/node-v10.4.0-darwin-x64/node-v10.4.0-darwin-x64.tar.gz
Computing checksum with sha256sum
Checksums do not match: 'ae191912605f3ebc9d17ef58d1e6b976b1ce6e8d' found, '82b27983c990a6860e8d729e0b15acf9643ffca0eff282a926268849dfd2c3d2' expected.
Checksum check failed!
Removing the broken local cache...
Downloading https://nodejs.org/dist/v10.4.0/node-v10.4.0-darwin-x64.tar.gz...
######################################################################## 100.0%
Computing checksum with sha256sum
Checksums do not match: 'ae191912605f3ebc9d17ef58d1e6b976b1ce6e8d' found, '82b27983c990a6860e8d729e0b15acf9643ffca0eff282a926268849dfd2c3d2' expected.
Now using node v10.4.0 (npm v6.1.0)
```
- What did you expect to happen?
I expected `nvm` to fail the installation with an error, and refuse to `use` the version which failed the checksum.
- Is there anything in any of your profile files (`.bashrc`, `.bash_profile`, `.zshrc`, etc) that modifies the `PATH`?
No.
| bugs,installing node: checksums,pull request wanted | low | Critical |
338,059,272 | go | runtime: status of relative filenames in //line directives unclear | I'd like to be able to specify relative paths in `//line` directives, but it's unclear whether this is permitted. Experimentally, it seems as though they are not. #3335 and #24183 are both related in terms of trying to specify what's allowed in the directive, but I haven't seen this specific issue mentioned.
(For `runtime.Caller` info, it seems to expect that the filename is an absolute path.)
#### What did you do?
```shell
$ mkdir foo
$ cat > foo/foo.go
package main
func main() {
asdf
}
$ cat > foo/bar.go
package main
//line baz.go:1234
func quux() {
asdf
}
$ go build ./foo
```
#### What did you expect to see?
```
# foo
foo/baz.go:1235:5: undefined: asdf
foo/foo.go:4:5: undefined: asdf
```
#### What did you see instead?
```
# foo
baz.go:1235: undefined: asdf
foo/foo.go:4:5: undefined: asdf
```
#### System details
```
go version go1.10.3 linux/amd64
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/light/.cache/go-build"
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/light"
GORACE=""
GOROOT="[redacted]"
GOTMPDIR=""
GOTOOLDIR="[redacted]"
GCCGO="gccgo"
CC="clang"
CXX="[redacted]"
CGO_ENABLED="1"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build576788924=/tmp/go-build -gno-record-gcc-switches"
GOROOT/bin/go version: go version go1.10.3 linux/amd64
GOROOT/bin/go tool compile -V: compile version go1.10.3
uname -sr: [redacted]
/lib/x86_64-linux-gnu/libc.so.6: [redacted]
gdb --version: [redcated]
``` | Documentation,NeedsFix,compiler/runtime | low | Critical |
338,073,005 | neovim | TUI: system(), systemlist(), :! in same process-group | #8217 and #8450 describe some casualties of the change in #8107: since #8107, all processes spawned by Nvim are created in their own session (by calling `setsid()` in the child). Using `setsid()` avoids serious bugs like #6530 (orphaned processes). It's a good thing to do, and is essentially what Vim's `job_start()` does as well.
However, the terminal version (at least) of Vim doesn't do that for `system()`, `systemlist()` and `:!`.
"Fixing" this as attempted in https://github.com/neovim/neovim/pull/8389 would regress https://github.com/neovim/neovim/pull/8107 for `system()`, `systemlist()`, and `:!`. For example, CTRL-C during this command would leave orphan processes:
:echo system('sleep 30|sleep 30|sleep 30')
Vim's approach for `system()`, `systemlist()`, and `:!` is (a rough sketch in code follows the list):
1. Block signals
2. `fork()`
3. Unblock signals in the child
4. On CTRL-C, send SIGINT to the process-group id
5. The parent (`vim`) ignores the signal. The child and its descendants do _not_, so sending the signal to the group kills everything except `vim` itself.
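A rough POSIX sketch of that approach (my own illustration, not code from Vim or Nvim; error handling omitted, and the command is assumed to run via /bin/sh):

```c
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int run_in_same_process_group(const char *cmd)
{
    sigset_t sigint_set, saved;
    sigemptyset(&sigint_set);
    sigaddset(&sigint_set, SIGINT);
    sigprocmask(SIG_BLOCK, &sigint_set, &saved);  /* 1. block signals */

    pid_t pid = fork();                           /* 2. fork() */
    if (pid == 0) {
        sigprocmask(SIG_SETMASK, &saved, NULL);   /* 3. unblock in the child */
        signal(SIGINT, SIG_DFL);
        execl("/bin/sh", "sh", "-c", cmd, (char *)NULL);
        _exit(127);
    }

    signal(SIGINT, SIG_IGN);                      /* 5. the parent ignores SIGINT */
    sigprocmask(SIG_SETMASK, &saved, NULL);

    /* 4. CTRL-C from the terminal (or an explicit kill(0, SIGINT)) now goes to
     * the whole foreground process group: the child and its descendants take
     * the default action and die, while the ignoring parent survives.        */
    int status = 0;
    waitpid(pid, &status, 0);

    signal(SIGINT, SIG_DFL);                      /* restore the default handler */
    return status;
}
```

The key difference from the current libuv-based path is that nothing calls `setsid()`, so the children stay in the editor's foreground process group.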
Doing this in Nvim will (I think) require adding a case in `process_spawn()` that works without using libuv. This could be implemented the way `src/nvim/os/pty_process_unix.c` implements `ProcessType.kProcessTypePty`: add `ProcessType.kProcessTypeUnix` and special-case it.
The work will be in implementing `src/nvim/os/unix_process.c` (again as mentioned, `src/nvim/os/pty_process_unix.c` would be a good starting point).
| compatibility,job-control,tui,system | low | Critical |
338,078,848 | pytorch | [caffe2] Drop connections | Hi,
Is there a way to implement a drop-connection (DropConnect) layer in Caffe2? I'm not talking about dropout.
https://stats.stackexchange.com/questions/201569/difference-between-dropout-and-dropconnect
Thanks. | caffe2 | low | Minor |
338,089,845 | rust | edition lint: migrating `extern crate` with `#[macro_use]` | The migration issue is that `#[macro_use] extern crate foo;` brings macros into scope from `foo`, and `extern crate` is unidiomatic in the 2018 edition. `local_inner_macros` is the current solution, but as discussed in #50911, we're not 100% sure that works. | C-enhancement,A-lints,T-compiler,L-macro_use_extern_crate,A-edition-2018 | low | Major |
338,099,151 | rust | edition lint: declarations obviated by in-band lifetimes | cc https://github.com/rust-lang/rust/issues/44524
e.g., in `fn two_args<'b>(arg1: &Foo, arg2: &'b Bar) -> &'b Baz`, suggest that `<'b>` is not necessary. | A-lints,T-lang,A-edition-2018 | low | Minor |
338,181,264 | TypeScript | Set TypeScript compiler's base/working directory | It looks like the TypeScript compiler resolves files relative to the location of the `tsconfig.json` file (however, I couldn't find anything about path resolution in the official documentation).
Is there a way to specify an alternative base/working directory to be used for relative path resolution?
I want to use a generic `tsconfig.json` file to compile multiple projects in various directories (one at a time).
P.S.: I [created a question](https://stackoverflow.com/q/51158989/1056679) on Stack Overflow first, but it hasn't received any attention there, so I've decided to ask here directly. | Suggestion,In Discussion | medium | Critical |
338,231,394 | pytorch | nvcc fatal : A single input file is required for a non-link phase when an outputfile is specified | nvcc fatal : A single input file is required for a non-link phase when an outputfile is specified
CMake Error at gloo_cuda_generated_nccl.cu.o.Release.cmake:203 (message):
Error generating
/home/lty/caffe2/build/third_party/gloo/gloo/CMakeFiles/gloo_cuda.dir/nccl/./gloo_cuda_generated_nccl.cu.o
nvcc fatal : A single input file is required for a non-link phase when an outputfile is specified
third_party/gloo/gloo/CMakeFiles/gloo_cuda.dir/build.make:77: recipe for target 'third_party/gloo/gloo/CMakeFiles/gloo_cuda.dir/nccl/gloo_cuda_generated_nccl.cu.o' failed
make[2]: *** [third_party/gloo/gloo/CMakeFiles/gloo_cuda.dir/nccl/gloo_cuda_generated_nccl.cu.o] Error 1
nvcc fatal : A single input file is required for a non-link phase when an outputfile is specified
make[2]: *** Waiting for unfinished jobs....
CMake Error at gloo_cuda_generated_cuda.cu.o.Release.cmake:203 (message):
Error generating
/home/lty/caffe2/build/third_party/gloo/gloo/CMakeFiles/gloo_cuda.dir//./gloo_cuda_generated_cuda.cu.o
CMake Error at gloo_cuda_generated_cuda_private.cu.o.Release.cmake:203 (message):
Error generating
/home/lty/caffe2/build/third_party/gloo/gloo/CMakeFiles/gloo_cuda.dir//./gloo_cuda_generated_cuda_private.cu.o
third_party/gloo/gloo/CMakeFiles/gloo_cuda.dir/build.make:63: recipe for target 'third_party/gloo/gloo/CMakeFiles/gloo_cuda.dir/gloo_cuda_generated_cuda.cu.o' failed
make[2]: *** [third_party/gloo/gloo/CMakeFiles/gloo_cuda.dir/gloo_cuda_generated_cuda.cu.o] Error 1
third_party/gloo/gloo/CMakeFiles/gloo_cuda.dir/build.make:70: recipe for target 'third_party/gloo/gloo/CMakeFiles/gloo_cuda.dir/gloo_cuda_generated_cuda_private.cu.o' failed
make[2]: *** [third_party/gloo/gloo/CMakeFiles/gloo_cuda.dir/gloo_cuda_generated_cuda_private.cu.o] Error 1
CMakeFiles/Makefile2:626: recipe for target 'third_party/gloo/gloo/CMakeFiles/gloo_cuda.dir/all' failed
make[1]: *** [third_party/gloo/gloo/CMakeFiles/gloo_cuda.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
[ 54%] Built target caffe2
[ 54%] Built target python_copy_files
Makefile:138: recipe for target 'all' failed
make: *** [all] Error 2
This is the output when make -j8 install.
I really have no idea and could not find any answer about this.
Can anyone help me? | awaiting response (this tag is deprecated),caffe2 | low | Critical |
338,250,796 | node | Embedding: static nodejs variables are not set to default | * **Version**: v8.x (maybe any)
* **Platform**: MS Windows 7
* **Subsystem**: embedding
I use Node.js to run JavaScript code in my application, and I have found that after running a script from code, a script from a file will not run. It is caused by the unpredictable value of the `eval_string` static variable, which was not reset to nullptr by the previous run, so bootstrap tries to run code instead of the file. I also think some other static variables can interrupt running scripts too. | help wanted,c++,embedding | low | Major |
338,260,169 | opencv | Failed tests perf_features2d, perf_stitching, sanity_features2d, sanity_stitching when compiled with bundled libjpeg |
##### System information (version)
- OpenCV and testdata => 3.4.1
- Operating System / Platform => Windows 10 x64 1803
- Compiler => mingw-w64 8.1.0
- CMake: 3.11.4
- CMake generator: MinGW Makefiles
- CMake build tool: C:/mingw-w64/x86_64-8.1.0-posix-seh-rt_v6-rev0/mingw64/bin/mingw32-make.exe
- Configuration: Release
##### Detailed description
When compiled with the bundled libjpeg, the tests perf_features2d, perf_stitching, sanity_features2d, and sanity_stitching fail. The reason is that the s1.jpg and s2.jpg images located in testdata/stitching are not read properly.
##### Steps to reproduce
Compilation:
> set PrefixDir=C:
> cd %PrefixDir%\libs\opencv-3.4.1-build
> cmake -DWITH_VTK:BOOL="0" -DWITH_FFMPEG:BOOL="0" -DBUILD_opencv_cudaimgproc:BOOL="0" -DBUILD_PERF_TESTS:BOOL="1" -DWITH_GTK:BOOL="0" -DWITH_CUFFT:BOOL="0" -DCMAKE_BUILD_TYPE:STRING="Release" -DWITH_EIGEN:BOOL="0" -DBUILD_TESTS:BOOL="1" -DBUILD_IPP_IW:BOOL="0" -DCMAKE_INSTALL_PREFIX:PATH="%PrefixDir%\libs\opencv-3.4.1-release-x64" -DWITH_OPENCL:BOOL="0" -DBUILD_opencv_python_bindings_generator:BOOL="0" -DBUILD_opencv_python3:BOOL="0" -DBUILD_PROTOBUF:BOOL="0" -DBUILD_opencv_cudabgsegm:BOOL="0" -DBUILD_opencv_dnn:BOOL="0" -DWITH_MATLAB:BOOL="0" -DWITH_WEBP:BOOL="0" -DBUILD_opencv_cudaarithm:BOOL="0" -DBUILD_opencv_cudastereo:BOOL="0" -DWITH_PROTOBUF:BOOL="0" -DBUILD_opencv_cudawarping:BOOL="0" -DWITH_GPHOTO2:BOOL="0" -DWITH_OPENCLAMDFFT:BOOL="0" -DWITH_NVCUVID:BOOL="0" -DWITH_GSTREAMER:BOOL="0" -DWITH_CUDA:BOOL="0" -DWITH_1394:BOOL="0" -DBUILD_opencv_cudafeatures2d:BOOL="0" -DBUILD_opencv_cudalegacy:BOOL="0" -DBUILD_opencv_cudaobjdetect:BOOL="0" -DBUILD_opencv_cudaoptflow:BOOL="0" -DWITH_IPP:BOOL="0" -DWITH_OPENMP:BOOL="1" -DWITH_ITT:BOOL="0" -DBUILD_opencv_cudev:BOOL="0" -DBUILD_JAVA:BOOL="0" -DBUILD_ITT:BOOL="0" -DWITH_OPENCLAMDBLAS:BOOL="0" -DBUILD_opencv_python2:BOOL="0" -DWITH_LAPACK:BOOL="0" -DBUILD_opencv_cudafilters:BOOL="0" -DBUILD_opencv_cudacodec:BOOL="0" -DBUILD_opencv_java_bindings_generator:BOOL="0" -DWITH_CUBLAS:BOOL="0" -G "MinGW Makefiles" ..\opencv-3.4.1
> mingw32-make -j4
> set OPENCV_TEST_DATA_PATH=C:\libs\testdata\
> mingw32-make test
##### Which does not help
Updating libjpeg to recent version 9c
##### Which helps
Resaving these images into jpeg simply with Paint, or...
1. Compiling libjpeg-turbo version 1.5.3 with default settings using cmake and mingw.
2. Compiling opencv with libjpeg-turbo (static variant). Configuration with cmake parameters:
-DBUILD_JPEG:BOOL="0" -DJPEG_LIBRARY:PATH=C:\libs\libjpeg-turbo-gcc64\lib\libturbojpeg.a -DJPEG_INCLUDE_DIR:PATH=C:\libs\libjpeg-turbo-gcc64\include\
Media I/O:
ZLib: build (ver 1.2.11)
JPEG: C:/libs/libjpeg-turbo-gcc64/lib/libturbojpeg.a (ver )
PNG: build (ver 1.6.34)
TIFF: build (ver 42 - 4.0.9)
JPEG 2000: build (ver 1.900.1)
OpenEXR: build (ver 1.7.1)
| category: imgcodecs | low | Critical |
338,361,609 | vscode | Editor/terminal only anti aliasing | I would love to have working anti-aliasing. Right now it's all or nothing with the workbench setting: either my editor fonts are blurry or, if I turn off AA, the menu fonts (which I can't seem to change) get all crappy because they use a font that doesn't work well with AA.
Can we please have an option to use anti-aliasing in the editor and terminal only? It seems like a pretty standard thing that all other editors support. I realize there are a lot of discussions, but all the issues have been closed with no real working resolution as far as I am concerned. | feature-request,font-rendering | low | Major |
338,371,416 | node | doc: troubleshooting FAQ? | We have some frequently recurring issues, mostly concerning Node.js installation. Just two classic examples:
1. Windows issue with antivirus and "Performance counters" / "Event tracing" options: https://github.com/nodejs/node/issues/20538
2. Global `npm` installation issue: https://github.com/nodejs/node/issues/21661
Should we create some troubleshooting FAQ doc and link to it from the main `nodejs/node` GitHub page and from the nodejs.org site?
| help wanted,doc,meta | low | Major |
338,373,840 | pytorch | Mismatch in behaviour of WeightedRandomSampler and other samplers | ## Issue description
All the other samplers for `torch.utils.data.DataLoader()` are designed to be used such that you can iterate an epoch (one epoch being the number of listed indices/data-points) over them by simply having them in a `for` loop. However, looping through a `DataLoader` with the sampler as `WeightedRandomSampler` simply occurs `num_samples` times. The `num_samples` argument is misleading, as one might imagine they need only one sample at a time, and/or they plan to update the weights after every sample.
It would be more consistent with the other samplers if the `WeightedRandomSampler` could be iterated over as many times as the length of the `weights` parameter (similar to how many times the `SubsetRandomSampler` iterates) and the `num_samples` argument were made to default to `1`.
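As a workaround under the current behaviour, epoch-length iteration can apparently be obtained by passing `num_samples=len(weights)` explicitly (a sketch reusing the `mnist_test_dataset` defined in the code example below):

```python
import torch
from torch.utils.data.sampler import WeightedRandomSampler

weights = [1] * 10000
sampler = WeightedRandomSampler(weights=weights, num_samples=len(weights))
loader = torch.utils.data.DataLoader(mnist_test_dataset, sampler=sampler)
# iterating over `loader` now yields 10000 batches, like the other samplers
```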
## Code example
Note: The `enumerate()` function is simply used to count the number of times the loop can proceed.
Code:
```python
import torch
from torchvision import datasets, transforms
mnist_test_dataset = datasets.MNIST(
"data",
train=False,
download=True,
transform=transforms.Compose([transforms.ToTensor()]),
)
train_loader_no_sampler = torch.utils.data.DataLoader(
mnist_test_dataset, batch_size=1, shuffle=True
)
train_loader_random = torch.utils.data.DataLoader(
mnist_test_dataset,
sampler=torch.utils.data.sampler.RandomSampler(mnist_test_dataset),
)
train_loader_subsetrandom = torch.utils.data.DataLoader(
mnist_test_dataset,
sampler=torch.utils.data.sampler.SubsetRandomSampler(range(10000)),
)
train_loader_weightedrandom = torch.utils.data.DataLoader(
mnist_test_dataset,
sampler=torch.utils.data.sampler.WeightedRandomSampler(
weights=[1] * 10000, num_samples=1
),
)
for iter_no, (data, label) in enumerate(train_loader_no_sampler):
continue
print("No sampler: ", iter_no)
for iter_no, (data, label) in enumerate(train_loader_random):
continue
print("Random sampler: ", iter_no)
for iter_no, (data, label) in enumerate(train_loader_subsetrandom):
continue
print("Subset Random sampler: ", iter_no)
for iter_no, (data, label) in enumerate(train_loader_weightedrandom):
continue
print("Weighted Random sampler: ", iter_no)
```
Output: (MNIST test dataset contains 10000 samples)
```
No sampler: 9999
Random sampler: 9999
Subset Random sampler: 9999
Weighted Random sampler: 0
```
## System Info
Collecting environment information...
PyTorch version: 0.4.0
Is debug build: No
CUDA used to build PyTorch: 8.0.61
OS: Ubuntu 16.04.4 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609
CMake version: version 3.11.1
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 9.2.88
GPU models and configuration: GPU 0: GeForce 840M
Nvidia driver version: 396.24.02
cuDNN version: Probably one of the following:
/usr/local/cuda-9.0/lib64/libcudnn.so.7.1.4
/usr/local/cuda-9.0/lib64/libcudnn_static.a
/usr/local/cuda-9.2/lib64/libcudnn.so.7.1.4
/usr/local/cuda-9.2/lib64/libcudnn_static.a
Versions of relevant libraries:
[pip] numpy (1.14.5)
[pip] numpydoc (0.8.0)
[pip] torch (0.4.0)
[pip] torchvision (0.2.1)
[conda] magma-cuda80 2.3.0 1 soumith
[conda] torch 0.4.0 <pip>
[conda] torchvision 0.2.1 <pip>
cc @SsnL @VitalyFedyunin @ejguan | todo,module: dataloader,triaged | low | Critical |
338,378,984 | rust | It would be nice if JoinHandle<T> were must-use for T != () | On IRC today, someone had code not working because they were doing
```rust
thread::spawn(|| mylib::my_function);
```
Which of course returns the function rather than running it.
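Presumably one of these was intended (sketch):

```rust
thread::spawn(mylib::my_function);       // pass the function itself
thread::spawn(|| mylib::my_function());  // or call it inside the closure
```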
Arguably it's almost always a mistake not to keep the `JoinHandle` if the thread is going to return a non-unit value (since if you didn't care, the closure could just be `-> ()`), so this should lint; but AFAIK `#[must_use]` on the function or type would warn for `JoinHandle<()>` too today.
Repro link: https://play.rust-lang.org/?gist=fd8973579bf0bf474a7e2b9e135ff946&version=nightly | A-diagnostics,T-libs-api,T-compiler,C-feature-request | low | Major |
338,384,498 | vscode | Shared properties in launch.json | ## Search Terms
launch.json, configuration, url
## Suggestion
The ability to specify shared properties among launch configurations.
Allowing properties to be inherited or specified in some common way while allowing overrides in individual entries would ease this situation greatly.
## Use Cases
For example, I have many launch configurations that are identical except for the URL. Changes then have to be propagated to each and every entry. Often a single launch.json can contain dozens of entries, and changing something simple but common, like the browser or another option, must be done by search and replace. Search and replace itself is easy, but the repeated entries clutter up the launch.json and make it harder to spot the actual differences (other than the URL) when they exist.
I fully understand this is a corner case and not relevant to a large audience, however it would be extremely helpful for those who do need it, and the implementation is small in scope.
## Examples
I'm not too picky about the actual implementation, but a possible idea is below. Multiple shared configs accessed by name would be even better, to allow multiple sets of default settings.
```json
{
  "version": "0.2.0",
  "defaultConfiguration": {
    "preLaunchTask": "build",
    "request": "launch",
    "sourceMaps": true,
    "port": 9222,
    "smartStep": true,
    "breakOnLoad": true,
    "webRoot": "${workspaceRoot}"
  },
  "configurations": [
    {
      "name": "Demo1 (Chrome)",
      "type": "chrome",
      "url": "http://127.0.0.1:8888/demo1/"
    },
    {
      "name": "Demo1 (Firefox)",
      "type": "firefox",
      "url": "http://127.0.0.1:8888/demo1/"
    },
    {
      "name": "Demo2 (Chrome)",
      "type": "chrome",
      "url": "http://127.0.0.1:8888/demo2/"
    }
  ]
}
``` | feature-request,debug | high | Critical |
338,397,698 | pytorch | [Caffe2] build_ios.sh => 'is only available on iOS 11 or newer' | ## Issue description
I can't build Caffe2 for iOS with build_ios.sh; it fails with:
`aligned allocation function of type ... is only available on iOS 11 or newer`
My iPad version is higher than iOS 11.
My MacBook Pro version is macOS High Sierra 10.13.5.
Xcode version is 9.4.1
Python3
How can I fix the problem?!?
Thank you
iOS build error with the log below:
```
[ 35%] Building CXX object caffe2/CMakeFiles/caffe2.dir/utils/threadpool/ThreadPool.cc.o
In file included from /Users/pulse9_mac/iOS/pytorch/caffe2/utils/threadpool/ThreadPool.cc:2:
In file included from /Users/pulse9_mac/iOS/pytorch/caffe2/utils/threadpool/WorkersPool.h:6:
/Users/pulse9_mac/iOS/pytorch/caffe2/core/common.h:172:29: error: aligned allocation function of
type 'void *(std::size_t, std::align_val_t)' is only available on iOS 11 or newer
[-Waligned-allocation-unavailable]
return std::unique_ptr<T>(new T(std::forward<Args>(args)...));
^
/Users/pulse9_mac/iOS/pytorch/caffe2/utils/threadpool/ThreadPool.cc:80:18: note: in
instantiation of function template specialization 'caffe2::make_unique<caffe2::ThreadPool,
int &>' requested here
return caffe2::make_unique<ThreadPool>(numThreads);
^
/Users/pulse9_mac/iOS/pytorch/caffe2/core/common.h:172:29: note: if you supply your own aligned
allocation functions, use -Wno-aligned-allocation-unavailable to silence this diagnostic
return std::unique_ptr<T>(new T(std::forward<Args>(args)...));
^
/Users/pulse9_mac/iOS/pytorch/caffe2/core/common.h:172:29: error: aligned deallocation function
of type 'void (void *, std::align_val_t) noexcept' is only available on iOS 11 or newer
[-Waligned-allocation-unavailable]
return std::unique_ptr<T>(new T(std::forward<Args>(args)...));
^
/Users/pulse9_mac/iOS/pytorch/caffe2/core/common.h:172:29: note: if you supply your own aligned
allocation functions, use -Wno-aligned-allocation-unavailable to silence this diagnostic
2 errors generated.
make[2]: *** [caffe2/CMakeFiles/caffe2.dir/utils/threadpool/ThreadPool.cc.o] Error 1
make[1]: *** [caffe2/CMakeFiles/caffe2.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
```
| caffe2 | low | Critical |
338,506,753 | go | cmd/go: check for import path collision in go build | ### What version of Go are you using (`go version`)?
go version go1.10.3 linux/amd64
### Does this issue reproduce with the latest release?
Yes
**Proposal:** Raise a warning or an error at compile time if the import path of a local package matches the import path of a package from the standard library.
This prevents a fairly niche problem, namely collisions, but in practice the compile errors they produce resemble declaration/syntax/code organization errors, so it's pretty hard to guess what went wrong. | Proposal,Proposal-Accepted,NeedsFix,GoCommand | low | Critical |
338,585,112 | rust | coercions do not reach into aggregates - converging coercions | [This (playground):](https://play.rust-lang.org/?gist=c95124c2c3f228927fb6632c7f6e1137&version=stable&mode=debug&edition=2015)
```rust
fn foo() {}
fn bar() {}
fn main() {
let _ = [(foo), (bar)]; // OK
let _ = [(foo,), (bar,)]; // FAILS
}
```
fails to compile with the following error:
```
error[E0308]: mismatched types
--> src/main.rs:5:23
|
5 | let _ = [(foo,), (bar,)]; // FAILS
| ^^^ expected fn item, found a different fn item
|
= note: expected type `fn() {foo}`
found type `fn() {bar}`
``` | C-enhancement,T-lang,A-coercions | low | Critical |
338,615,704 | pytorch | [pytorch] Make dtype second positional argument of tensor factory methods | NumPy lets the user pass dtype without kwarg, as the second positional argument: `np.zeros((3, 4), np.float32)`
In PyTorch `torch.zeros((3,4), torch.float32)` results in: `TypeError: zeros(): argument 'out' (position 2) must be Tensor, not torch.dtype`
I'd suggest the `out` argument is more exotic than `dtype`, and is a better candidate for later positional args, or as keyword-only.
cc @mruberry @rgommers @heitorschueroff | triaged,module: numpy,module: pybind,module: ux | low | Critical |
338,631,172 | TypeScript | Poor type inference for `reduce` | **TypeScript Version:** 3.0.0-dev.20180705
**Code**
```ts
function toStrings(arr: ReadonlyArray<object>): string[] {
return arr.reduce((acc, obj) => {
acc.push(obj.toString());
return acc;
}, [] as string[]);
}
```
**Expected behavior:**
No error.
**Actual behavior:**
```
src/a.ts:2:2 - error TS2322: Type 'object' is not assignable to type 'string[]'.
Property 'length' is missing in type '{}'.
2 return arr.reduce((acc, obj) => {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
3 acc.push(obj.toString());
~~~~~~~~~~~~~~~~~~~~~~~~~~~
4 return acc;
~~~~~~~~~~~~~
5 }, [] as string[]);
~~~~~~~~~~~~~~~~~~~~
src/a.ts:3:7 - error TS2339: Property 'push' does not exist on type 'object'.
3 acc.push(obj.toString());
```
No error if `arr` is `ReadonlyArray<number>` or some other non-`object` type.
No error if I explicitly specify `arr.reduce<string[]>`.
No error if I remove the first two overloads to `reduce`, which are non-generic. | Suggestion,Needs Proposal,Domain: lib.d.ts | medium | Critical |
338,660,728 | flutter | add setTag to flutter_google_maps plugin | Hi,
Currently it's not possible to associate any custom data with a marker on the map, which is a big disadvantage.
The native API offers a setTag method: https://developers.google.com/maps/documentation/android-sdk/marker#marker-data
It would be great if that could be added to `MarkerOptions` and returned by the tap handler.
Thanks for your awesome work! | c: new feature,p: maps,customer: google,package,c: proposal,team-ecosystem,P2,triaged-ecosystem | low | Minor |
338,687,088 | go | cmd/vet: reported typecheck errors not sorted by file position | Compare gc and vet typecheck errors:
```
$ go test
# golang.org/x/vgo2/vendor/cmd/go/internal/modfetch [golang.org/x/vgo2/vendor/cmd/go/internal/modfetch.test]
./coderepo.go:279:70: undefined: file2
./coderepo.go:294:8: no new variables on left side of :=
./coderepo.go:294:30: undefined: gomod
# golang.org/x/vgo2/vendor/cmd/go/internal/modfetch
./coderepo.go:294:30: undeclared name: gomod
./coderepo.go:294:8: no new variables on left side of :=
./coderepo.go:279:70: undeclared name: file2
vet: typecheck failures
```
(go test printing both sets at all is a different issue, but it is nice for seeing how they compare.)
It's weird that the errors from vet come out not sorted in file line order. They should. | NeedsInvestigation,Analysis | low | Critical |
338,716,844 | rust | Bad codegen partitioning with non-incremental compile in release mode | I have a larger application called [`distributary`](https://github.com/mit-pdos/distributary) that encounters a linking error on current nightly when you try to compile mit-pdos/distributary@216ec42058b962727974ac7a0d43c84097f3f73d (also occurs on earlier commits) in release mode with incremental compilation turned *off*:
```console
$ rustc --version
rustc 1.28.0-nightly (e3bf634e0 2018-06-28)
$ env CARGO_INCREMENTAL=0 cargo b --release --bin souplet
Compiling distributary v0.1.0 (file:///home/jon/dev/distributary)
error: linking with `cc` failed: exit code: 1
|
= note: "cc" "-Wl,--as-needed" "-Wl,-z,noexecstack" "-m64" "-L" "/home/jon/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib" "/scratch/cargo-target/release/deps/souplet-d1dd5e17d8eb8c2b.souplet0-c13c77a8e5cfecca9cb4354767d3d260.rs.rcgu.o" "/scratch/cargo-target/release/deps/souplet-d1dd5e17d8eb8c2b.souplet1-c13c77a8e5cfecca9cb4354767d3d260.rs.rcgu.o" "/scratch/cargo-target/release/deps/souplet-d1dd5e17d8eb8c2b.souplet10-c13c77a8e5cfecca9cb4354767d3d260.rs.rcgu.o" "/scratch/cargo-target/release/deps/souplet-d1dd5e17d8eb8c2b.souplet11-c13c77a8e5cfecca9cb4354767d3d260.rs.rcgu.o" "/scratch/cargo-target/release/deps/souplet-d1dd5e17d8eb8c2b.souplet12-c13c77a8e5cfecca9cb4354767d3d260.rs.rcgu.o" "/scratch/cargo-target/release/deps/souplet-d1dd5e17d8eb8c2b.souplet13-c13c77a8e5cfecca9cb4354767d3d260.rs.rcgu.o" "/scratch/cargo-target/release/deps/souplet-d1dd5e17d8eb8c2b.souplet14-c13c77a8e5cfecca9cb4354767d3d260.rs.rcgu.o" "/scratch/cargo-target/release/deps/souplet-d1dd5e17d8eb8c2b.souplet15-c13c77a8e5cfecca9cb4354767d3d260.rs.rcgu.o" "/scratch/cargo-target/release/deps/souplet-d1dd5e17d8eb8c2b.souplet2-c13c77a8e5cfecca9cb4354767d3d260.rs.rcgu.o" "/scratch/cargo-target/release/deps/souplet-d1dd5e17d8eb8c2b.souplet3-c13c77a8e5cfecca9cb4354767d3d260.rs.rcgu.o" "/scratch/cargo-target/release/deps/souplet-d1dd5e17d8eb8c2b.souplet4-c13c77a8e5cfecca9cb4354767d3d260.rs.rcgu.o" "/scratch/cargo-target/release/deps/souplet-d1dd5e17d8eb8c2b.souplet5-c13c77a8e5cfecca9cb4354767d3d260.rs.rcgu.o" "/scratch/cargo-target/release/deps/souplet-d1dd5e17d8eb8c2b.souplet6-c13c77a8e5cfecca9cb4354767d3d260.rs.rcgu.o" "/scratch/cargo-target/release/deps/souplet-d1dd5e17d8eb8c2b.souplet7-c13c77a8e5cfecca9cb4354767d3d260.rs.rcgu.o" "/scratch/cargo-target/release/deps/souplet-d1dd5e17d8eb8c2b.souplet8-c13c77a8e5cfecca9cb4354767d3d260.rs.rcgu.o" "/scratch/cargo-target/release/deps/souplet-d1dd5e17d8eb8c2b.souplet9-c13c77a8e5cfecca9cb4354767d3d260.rs.rcgu.o" "-o" "/scratch/cargo-target/release/deps/souplet-d1dd5e17d8eb8c2b" "/scratch/cargo-target/release/deps/souplet-d1dd5e17d8eb8c2b.crate.allocator.rcgu.o" "-Wl,--gc-sections" "-pie" "-Wl,-z,relro,-z,now" "-Wl,-O1" "-nodefaultlibs" "-L" "/scratch/cargo-target/release/deps" "-L" "/scratch/cargo-target/release/build/backtrace-sys-ed50c3fbaa0d48ec/out" "-L" "/usr/lib" "-L" "/scratch/cargo-target/release/build/librocksdb-sys-84f47854a9cf1b55/out" "-L" "/scratch/cargo-target/release/build/rust-crypto-ced3206b69ce39b0/out" "-L" "/scratch/cargo-target/release/build/miniz-sys-6907519dfa0e351a/out" "-L" "/home/jon/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib" "-Wl,-Bstatic" "/scratch/cargo-target/release/deps/libdistributary-da37b7f16a76acbb.rlib" "/scratch/cargo-target/release/deps/libstreamunordered-5a9237ca75ad0178.rlib" "/scratch/cargo-target/release/deps/libstream_cancel-9f8b6a0de5b9cccc.rlib" "/scratch/cargo-target/release/deps/libmir-f520571f43a6ec68.rlib" "/scratch/cargo-target/release/deps/libdataflow-201aa72799505388.rlib" "/scratch/cargo-target/release/deps/libtimekeeper-da7c4412fde7cb09.rlib" "/scratch/cargo-target/release/deps/libtempfile-cfb332116a2ff4ea.rlib" "/scratch/cargo-target/release/deps/libremove_dir_all-1972708e2dff8a43.rlib" "/scratch/cargo-target/release/deps/librocksdb-eeb97c0236eaa111.rlib" "/scratch/cargo-target/release/deps/liblibrocksdb_sys-0d6b79a68eb8cc49.rlib" "/scratch/cargo-target/release/deps/libregex-9fd646d3c71ec7eb.rlib" 
"/scratch/cargo-target/release/deps/libutf8_ranges-c59216e7b48c08cb.rlib" "/scratch/cargo-target/release/deps/libregex_syntax-7aa96219ae904c00.rlib" "/scratch/cargo-target/release/deps/libucd_util-d358e844ee9b3641.rlib" "/scratch/cargo-target/release/deps/libaho_corasick-f261526ac15754b2.rlib" "/scratch/cargo-target/release/deps/libmemchr-9f54cff398456aba.rlib" "/scratch/cargo-target/release/deps/librand-f0be0a99d754832b.rlib" "/scratch/cargo-target/release/deps/librand_core-890633630e1a08df.rlib" "/scratch/cargo-target/release/deps/libitertools-f2822bd97c79bd2b.rlib" "/scratch/cargo-target/release/deps/libeither-e9639929bc9c96da.rlib" "/scratch/cargo-target/release/deps/libevmap-2dcc25e88ce7edf7.rlib" "/scratch/cargo-target/release/deps/librahashmap-1e2082501f4008b1.rlib" "/scratch/cargo-target/release/deps/libapi-19cbdd42be57edbd.rlib" "/scratch/cargo-target/release/deps/libhyper-6cb1e4a581ec1b74.rlib" "/scratch/cargo-target/release/deps/libwant-324dd6dfe1e46291.rlib" "/scratch/cargo-target/release/deps/libtry_lock-b7c5785edc67802c.rlib" "/scratch/cargo-target/release/deps/libhttparse-cc7214eccbdea8a1.rlib" "/scratch/cargo-target/release/deps/libh2-0dcd5b315066cafd.rlib" "/scratch/cargo-target/release/deps/libindexmap-3d2b781d7eaa209d.rlib" "/scratch/cargo-target/release/deps/libstring-208d469b2eb38187.rlib" "/scratch/cargo-target/release/deps/libhttp-82c0a31552817620.rlib" "/scratch/cargo-target/release/deps/libfutures_cpupool-69177f4662e4ff3e.rlib" "/scratch/cargo-target/release/deps/libchannel-0016965ebd38b3e3.rlib" "/scratch/cargo-target/release/deps/libthrottled_reader-7e60943322dad09e.rlib" "/scratch/cargo-target/release/deps/libasync_bincode-df1d28cefed5211d.rlib" "/scratch/cargo-target/release/deps/libtokio-3d33a85e82bb2bc7.rlib" "/scratch/cargo-target/release/deps/libtokio_udp-e580d834189b3963.rlib" "/scratch/cargo-target/release/deps/libtokio_codec-597cc8ed08684a1a.rlib" "/scratch/cargo-target/release/deps/libtokio_tcp-2525b15268d47a37.rlib" "/scratch/cargo-target/release/deps/libtokio_timer-47b7ed1669fe25dc.rlib" "/scratch/cargo-target/release/deps/libtokio_reactor-e670bc42a7ebc209.rlib" "/scratch/cargo-target/release/deps/libtokio_fs-0c19a7ce1bd82b24.rlib" "/scratch/cargo-target/release/deps/libtokio_threadpool-7caded51d25aa4bb.rlib" "/scratch/cargo-target/release/deps/librand-b1353c4410d02bfd.rlib" "/scratch/cargo-target/release/deps/libnum_cpus-67536290ffc5dc69.rlib" "/scratch/cargo-target/release/deps/libcrossbeam_deque-57a511a18d88fa9c.rlib" "/scratch/cargo-target/release/deps/libcrossbeam_epoch-65a2c7adf5cda377.rlib" "/scratch/cargo-target/release/deps/libscopeguard-bae1bab23bc1a9a3.rlib" "/scratch/cargo-target/release/deps/libmemoffset-cd3e4f40a2d099c7.rlib" "/scratch/cargo-target/release/deps/libcrossbeam_utils-efdefb99cf7a5107.rlib" "/scratch/cargo-target/release/deps/libarrayvec-b588f750819e5262.rlib" "/scratch/cargo-target/release/deps/libnodrop-9a0c3995df5e2fbf.rlib" "/scratch/cargo-target/release/deps/libtokio_executor-70618f29d21d845c.rlib" "/scratch/cargo-target/release/deps/libbufstream-38a663efffaac70d.rlib" "/scratch/cargo-target/release/deps/libtokio_io-9b3d821bab7ebc34.rlib" "/scratch/cargo-target/release/deps/libfutures-a5841c464a217433.rlib" "/scratch/cargo-target/release/deps/libbincode-6ca29a2f700bb392.rlib" "/scratch/cargo-target/release/deps/libbasics-d7efdc7acd49e1c1.rlib" "/scratch/cargo-target/release/deps/libpetgraph-0e757162fad3841a.rlib" "/scratch/cargo-target/release/deps/libordermap-53b0295a398c8085.rlib" 
"/scratch/cargo-target/release/deps/libfixedbitset-5ddef31ba82f5714.rlib" "/scratch/cargo-target/release/deps/libnom_sql-f2edbbd272e020af.rlib" "/scratch/cargo-target/release/deps/libnom-8a0fabe7e7f869b3.rlib" "/scratch/cargo-target/release/deps/libmemchr-ffca46615f647984.rlib" "/scratch/cargo-target/release/deps/libfnv-3f4dd8f6aa5aae23.rlib" "/scratch/cargo-target/release/deps/libarccstr-4573b2a3f5a39fcd.rlib" "/scratch/cargo-target/release/deps/libassert_infrequent-6ba94e943cd0f7fd.rlib" "/home/jon/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libtest-044121a3ee9ec9c8.rlib" "/home/jon/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libterm-a530b4df1f6b087a.rlib" "/home/jon/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libgetopts-ebaa9330e19ad094.rlib" "/scratch/cargo-target/release/deps/libconsensus-6eb48c8b2338a842.rlib" "/scratch/cargo-target/release/deps/libzookeeper-c38b7e513051484a.rlib" "/scratch/cargo-target/release/deps/libsnowflake-264b253736130360.rlib" "/scratch/cargo-target/release/deps/libmio_extras-8d68c2bd61408eef.rlib" "/scratch/cargo-target/release/deps/liblazycell-2ad1bf5ff037757e.rlib" "/scratch/cargo-target/release/deps/libmio-7873359cdd37de24.rlib" "/scratch/cargo-target/release/deps/libslab-3c66808c5d553bd5.rlib" "/scratch/cargo-target/release/deps/libnet2-10b9dfdf90bbe580.rlib" "/scratch/cargo-target/release/deps/liblazycell-0402bd591d3290e8.rlib" "/scratch/cargo-target/release/deps/liblog-5ae5c43899104168.rlib" "/scratch/cargo-target/release/deps/liblazy_static-836292728b653c0c.rlib" "/scratch/cargo-target/release/deps/libbytes-2e179691966a1956.rlib" "/scratch/cargo-target/release/deps/libiovec-2664e440b623b43b.rlib" "/scratch/cargo-target/release/deps/libslog_term-12cce1c2a70584b6.rlib" "/scratch/cargo-target/release/deps/libthread_local-a28fd41f31b00339.rlib" "/scratch/cargo-target/release/deps/liblazy_static-6ab320be5c343dd8.rlib" "/scratch/cargo-target/release/deps/libunreachable-9a89aeb7bdf0ff6d.rlib" "/scratch/cargo-target/release/deps/libvoid-c3610640208824a7.rlib" "/scratch/cargo-target/release/deps/libterm-1d86d93672404162.rlib" "/scratch/cargo-target/release/deps/libbyteorder-022f4d9bd8b8257a.rlib" "/scratch/cargo-target/release/deps/libisatty-8b82fa60a41976cf.rlib" "/scratch/cargo-target/release/deps/libchrono-ebe951381ac1187c.rlib" "/scratch/cargo-target/release/deps/libnum_integer-ce927daee7214560.rlib" "/scratch/cargo-target/release/deps/libnum_traits-7d7889bf6603c36d.rlib" "/scratch/cargo-target/release/deps/libtime-6e5c8648e8b69a5c.rlib" "/scratch/cargo-target/release/deps/libslog-55ed2cc25693d855.rlib" "/scratch/cargo-target/release/deps/libserde_json-0ced60ec6e124300.rlib" "/scratch/cargo-target/release/deps/libitoa-e09c11d51d2ced84.rlib" "/scratch/cargo-target/release/deps/libdtoa-ad96437661d0db93.rlib" "/scratch/cargo-target/release/deps/libfailure-4fcc023820c12785.rlib" "/scratch/cargo-target/release/deps/libbacktrace-a0b51d305ff6ae2c.rlib" "/scratch/cargo-target/release/deps/libbacktrace_sys-960f672e6c54a924.rlib" "/scratch/cargo-target/release/deps/librustc_demangle-49c48d56bdbd3c9e.rlib" "/scratch/cargo-target/release/deps/libcfg_if-cb30ad19a978a506.rlib" "/scratch/cargo-target/release/deps/libclap-81b819da4f92f017.rlib" "/scratch/cargo-target/release/deps/libvec_map-dcd18c9a06fe218e.rlib" "/scratch/cargo-target/release/deps/libserde-47931d94c909af06.rlib" 
"/scratch/cargo-target/release/deps/libtextwrap-9f66ed4e6b0d952a.rlib" "/scratch/cargo-target/release/deps/libunicode_width-a0a53cd11f42fbd9.rlib" "/scratch/cargo-target/release/deps/libstrsim-482b76ff49e8daca.rlib" "/scratch/cargo-target/release/deps/libbitflags-978a75e486f708dd.rlib" "/scratch/cargo-target/release/deps/libatty-c19d004b1ddaf458.rlib" "/scratch/cargo-target/release/deps/liblibc-dca1c89ddaa2bbff.rlib" "/scratch/cargo-target/release/deps/libansi_term-b76564dfb4be04c2.rlib" "-Wl,--start-group" "/home/jon/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libstd-f9776412ae7aa499.rlib" "/home/jon/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libpanic_unwind-54f5e90b6163277a.rlib" "/home/jon/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/liballoc_jemalloc-d38cd88231d191e3.rlib" "/home/jon/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libunwind-d632067cea94a522.rlib" "/home/jon/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/liballoc_system-942a13ea54bd0c51.rlib" "/home/jon/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/liblibc-971850e38acc5f31.rlib" "/home/jon/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/liballoc-e8760697e58c0b14.rlib" "/home/jon/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libcore-fb37a4ea1db1e473.rlib" "-Wl,--end-group" "/home/jon/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libcompiler_builtins-3d06b32e587d2669.rlib" "-Wl,-Bdynamic" "-l" "lz4" "-l" "rocksdb" "-l" "stdc++" "-l" "util" "-l" "util" "-l" "dl" "-l" "rt" "-l" "pthread" "-l" "pthread" "-l" "gcc_s" "-l" "c" "-l" "m" "-l" "rt" "-l" "pthread" "-l" "util" "-l" "util"
= note: /scratch/cargo-target/release/deps/libdistributary-da37b7f16a76acbb.rlib(distributary-da37b7f16a76acbb.distributary15-801f6c5b9baeaaaf659ccf689438db8f.rs.rcgu.o):(.data.rel.ro._ZN12distributary10controller15listen_internal28_$u7b$$u7b$closure$u7d$$u7d$2RS17h352c4cd07a809e86E+0x0): multiple definition of `distributary::controller::listen_internal::{{closure}}::RS'
/scratch/cargo-target/release/deps/souplet-d1dd5e17d8eb8c2b.souplet12-c13c77a8e5cfecca9cb4354767d3d260.rs.rcgu.o:(.data.rel.ro._ZN12distributary10controller15listen_internal28_$u7b$$u7b$closure$u7d$$u7d$2RS17h352c4cd07a809e86E+0x0): first defined here
collect2: error: ld returned 1 exit status
error: aborting due to previous error
error: Could not compile `distributary`.
```
Interestingly, it links just fine with incremental compilation turned *on*:
```console
$ env CARGO_INCREMENTAL=1 cargo b --release --bin souplet
Compiling basics v0.1.0 (file:///home/jon/dev/distributary/basics)
Compiling consensus v0.1.0 (file:///home/jon/dev/distributary/consensus)
Compiling channel v0.1.0 (file:///home/jon/dev/distributary/channel)
Compiling api v0.1.0 (file:///home/jon/dev/distributary/api)
warning: unused import: `assert_infrequent`
--> api/src/controller.rs:1:5
|
1 | use assert_infrequent;
| ^^^^^^^^^^^^^^^^^
|
= note: #[warn(unused_imports)] on by default
Compiling dataflow v0.1.0 (file:///home/jon/dev/distributary/dataflow)
Compiling mir v0.1.0 (file:///home/jon/dev/distributary/mir)
Compiling distributary v0.1.0 (file:///home/jon/dev/distributary)
Finished release [optimized + debuginfo] target(s) in 1m 14s
```
It also compiles fine in debug mode with incremental compilation turned off:
```console
$ env CARGO_INCREMENTAL=0 cargo b --bin souplet
Compiling basics v0.1.0 (file:///home/jon/dev/distributary/basics)
Compiling consensus v0.1.0 (file:///home/jon/dev/distributary/consensus)
Compiling channel v0.1.0 (file:///home/jon/dev/distributary/channel)
Compiling api v0.1.0 (file:///home/jon/dev/distributary/api)
Compiling dataflow v0.1.0 (file:///home/jon/dev/distributary/dataflow)
Compiling mir v0.1.0 (file:///home/jon/dev/distributary/mir)
Compiling distributary v0.1.0 (file:///home/jon/dev/distributary)
Finished dev [unoptimized + debuginfo] target(s) in 41.34s
```
The issue also occurs after `cargo clean`. Unfortunately, due to some NLL issues in past nightlies (#51348 and #51649), bisecting nightlies isn't trivial. I've also tried producing a minimized reproducing example, but to no avail. This *could* be a dupe of #47989, although I don't immediately see any duplicated linking targets. Scanning through cargo's output with `--verbose`, it also looks like the number of linked objects in debug vs release is the same.
https://github.com/mit-pdos/distributary/tree/216ec42058b962727974ac7a0d43c84097f3f73d | A-codegen,T-compiler,C-bug | low | Critical |
338,729,364 | pytorch | [Feature Request] Additional torch.nn.LSTM functionality | If you have a question or would like help and support, please ask at our
[forums](https://discuss.pytorch.org/).
If you are submitting a feature request, please preface the title with [feature request].
If you are submitting a bug report, please fill in the following details.
## Issue description
Ultimately, I would like to create a clean, fully functional stacked LSTM module out of existing functions: one that allows a different width for each layer, supports dropout and batched/packed sequences with masks, and accepts a list of layer sizes for automatic construction. This specific feature request is motivated by the error message from the code below, which tells me to ask for a feature request.
## Code example
```
class Model1(torch.nn.Module):

    # [etc, etc]

    def forward(self, inputs, lengths):
        pack1 = nn.utils.rnn.pack_padded_sequence(inputs, lengths, batch_first=True)
        out1, self.hidden1 = self.lstm1(pack1, (self.hidden1[0].detach(), self.hidden1[1].detach()))
        pad1 = nn.utils.rnn.pad_packed_sequence(out1, batch_first=True)
        drop1 = self.dropout1(pad1)

        pack2 = nn.utils.rnn.pack_padded_sequence(drop1, lengths, batch_first=True)
        out2, self.hidden2 = self.lstm2(pack2, (self.hidden2[0].detach(), self.hidden2[1].detach()))
        pad2 = nn.utils.rnn.pad_packed_sequence(out2, batch_first=True)

        return pad2
```
This results in the error message: "RuntimeError: Returning Variables sharing storage with other Variables that require grad is not supported in Python functions. Please submit a feature request if you hit this error."
That is the direct result of unpacking and repacking the sequences around the dropout layer, since applying dropout to a packed sequence does not yield a packed sequence. Applying the dropout manually is necessary because the LSTM layer is hardwired against applying its own dropout to the "last" layer, and in this approach every layer is the last layer, because they are all separate. They have to be separate because the built-in multi-layer stacked LSTM assumes the same width for every layer.
Even if I got all of that to work, I would still be tinkering with the module definition directly, because constructing those layers in a plain Python list means the layer weights are not registered as parameters for the optimizer.
I must be missing something obvious, because constructing a fairly standard configuration with modern features seems much more difficult than it should be.
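For reference, this is roughly the shape of module I am trying to end up with. It is only a sketch of my own attempt, not a confirmed recipe: `nn.ModuleList` is used so the per-layer weights are registered as parameters, dropout is applied to the padded output between layers, and `lengths` is assumed to be sorted in descending order, as `pack_padded_sequence` requires.
```
import torch.nn as nn


class StackedLSTM(nn.Module):
    """Stacked LSTM with a different width per layer and inter-layer dropout (sketch)."""

    def __init__(self, layer_sizes, dropout=0.5):
        super(StackedLSTM, self).__init__()
        # nn.ModuleList (unlike a plain Python list) registers each layer's
        # weights as parameters, so the optimizer sees them.
        self.layers = nn.ModuleList([
            nn.LSTM(layer_sizes[i], layer_sizes[i + 1], batch_first=True)
            for i in range(len(layer_sizes) - 1)
        ])
        self.dropout = nn.Dropout(dropout)

    def forward(self, inputs, lengths):
        out = inputs
        for i, lstm in enumerate(self.layers):
            packed = nn.utils.rnn.pack_padded_sequence(out, lengths, batch_first=True)
            packed_out, _ = lstm(packed)
            # Unpack before dropout: dropout on a PackedSequence is not a PackedSequence.
            out, _ = nn.utils.rnn.pad_packed_sequence(packed_out, batch_first=True)
            if i < len(self.layers) - 1:
                out = self.dropout(out)
        return out
```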
## System Info
Collecting environment information...
PyTorch version: 0.4.0
Is debug build: No
CUDA used to build PyTorch: 9.0.176
OS: Ubuntu 16.04.4 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 20160609
CMake version: Could not collect
Python version: 3.5
Is CUDA available: Yes
CUDA runtime version: 9.0.176
GPU models and configuration: GPU 0: Quadro M4000
Nvidia driver version: 390.48
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.1.4
/usr/lib/x86_64-linux-gnu/libcudnn_static_v7.a
Versions of relevant libraries:
[pip3] numpy (1.14.2)
[pip3] torch (0.4.0)
[pip3] torchvision (0.2.1)
[conda] Could not collect
cc @zou3519 | todo,module: rnn,triaged,enhancement | low | Critical |
338,744,691 | vue | Error thrown when using transition-group with component v-bind:is directive | ### Version
2.5.16
### Reproduction link
[https://jsfiddle.net/50wL7mdz/451168/](https://jsfiddle.net/50wL7mdz/451168/)
### Steps to reproduce
Render a `component` (not just any component, but the built-in one: https://vuejs.org/v2/api/#component), with the `v-bind:is` directive set to `"transition-group"` (or simply `is="transition-group"`):
```
<component is="transition-group"></component>
```
### What is expected?
I expect no errors to be thrown.
### What is actually happening?
This component appears to work as expected but throws the following error in the console:
```
vue.js:597 [Vue warn]: Unknown custom element: <component> - did you register the component correctly? For recursive components, make sure to provide the "name" option.
found in
---> <TransitionGroup>
<TransitionWrapper>
<Root>
```
No error is thrown if `transition` is used instead of `transition-group`.
---
I came across this bug because I was building a re-usable animation component that looks like this:
```
<template>
<component
:is="group ? 'transition-group' : 'transition'"
@enter="velocityEnter"
@leave="velocityLeave"
>
<slot/>
</component>
</template>
```
The component actually seems to work as expected, but the error described above is thrown.
<!-- generated by vue-issues. DO NOT REMOVE --> | transition | low | Critical |
338,841,148 | vscode | Use code editor for rename input box | Release 1.25.0 just introduced sub-word navigation (thank you!), yet it is not available in the `F2` _Rename Symbol_ input box.
Ideally it should be possible to navigate in it as well. It'd be especially useful when renaming camel-cased methods. | feature-request,rename | medium | Major |
338,865,919 | pytorch | /usr/bin/ld: cannot find -lpthreads | --
-- ******** Summary ********
-- General:
-- CMake version : 3.5.1
-- CMake command : /usr/bin/cmake
-- Git version : v0.1.11-9200-g7b25cbb-dirty
-- System : Linux
-- C++ compiler : /usr/bin/c++
-- C++ compiler version : 5.4.0
-- BLAS : Eigen
-- CXX flags : -fvisibility-inlines-hidden -fvisibility=hidden -fvisibility-inlines-hidden -fvisibility=hidden -fvisibility-inlines-hidden -DONNX_NAMESPACE=onnx_c2 -O2 -fPIC -Wno-narrowing -Wall -Wextra -Wno-missing-field-initializers -Wno-type-limits -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-error=deprecated-declarations
-- Build type : Release
-- Compile definitions :
-- CMAKE_PREFIX_PATH :
-- CMAKE_INSTALL_PREFIX : /usr/local
--
-- BUILD_CAFFE2 : ON
-- BUILD_ATEN : OFF
-- BUILD_BINARY : ON
-- BUILD_CUSTOM_PROTOBUF : ON
-- Link local protobuf : ON
-- BUILD_DOCS : OFF
-- BUILD_PYTHON : ON
-- Python version : 2.7.12
-- Python includes : /usr/include/python2.7
-- BUILD_SHARED_LIBS : OFF
-- BUILD_TEST : OFF
-- USE_ASAN : OFF
-- USE_ATEN : OFF
-- USE_CUDA : ON
-- CUDA static link : OFF
-- USE_CUDNN : ON
-- CUDA version : 8.0
-- cuDNN version : 5.1.5
-- CUDA root directory : /usr/local/cuda
-- CUDA library : /usr/lib/x86_64-linux-gnu/libcuda.so
-- cudart library : /usr/local/cuda/lib64/libcudart_static.a;-lpthread;dl;/usr/lib/x86_64-linux-gnu/librt.so
-- cublas library : /usr/local/cuda/lib64/libcublas.so;/usr/local/cuda/lib64/libcublas_device.a
-- cufft library : /usr/local/cuda/lib64/libcufft.so
-- curand library : /usr/local/cuda/lib64/libcurand.so
-- cuDNN library : /usr/local/lib/libcudnn.so
-- nvrtc : /usr/local/cuda/lib64/libnvrtc.so
-- CUDA include path : /usr/local/cuda/include
-- NVCC executable : /usr/local/cuda/bin/nvcc
-- CUDA host compiler : /usr/bin/cc
-- USE_TENSORRT : OFF
-- USE_ROCM : OFF
-- USE_EIGEN_FOR_BLAS : ON
-- USE_FFMPEG : OFF
-- USE_GFLAGS : ON
-- USE_GLOG : ON
-- USE_GLOO : ON
-- USE_GLOO_IBVERBS : OFF
-- USE_LEVELDB : ON
-- LevelDB version : 1.18
-- Snappy version : 1.1.3
-- USE_LITE_PROTO : OFF
-- USE_LMDB : ON
-- LMDB version : 0.9.17
-- USE_METAL : OFF
-- USE_MKL :
-- USE_MOBILE_OPENGL : OFF
-- USE_MPI : ON
-- USE_NCCL : ON
-- USE_SYSTEM_NCCL : OFF
-- USE_NERVANA_GPU : OFF
-- USE_NNPACK : ON
-- USE_OBSERVERS : ON
-- USE_OPENCL : OFF
-- USE_OPENCV : ON
-- OpenCV version : 3.2.0
-- USE_OPENMP : OFF
-- USE_PROF : OFF
-- USE_REDIS : OFF
-- USE_ROCKSDB : OFF
-- USE_ZMQ : OFF
-- Public Dependencies : Threads::Threads;gflags;glog::glog
-- Private Dependencies : $<TARGET_FILE:nnpack>;$<TARGET_FILE:cpuinfo>;cpuinfo;/usr/lib/x86_64-linux-gnu/liblmdb.so;/usr/lib/x86_64-linux-gnu/libleveldb.so;/usr/lib/x86_64-linux-gnu/libsnappy.so;/usr/lib/x86_64-linux-gnu/libnuma.so;opencv_core;opencv_highgui;opencv_imgproc;opencv_imgcodecs;opencv_videoio;opencv_video;/usr/lib/openmpi/lib/libmpi_cxx.so;/usr/lib/openmpi/lib/libmpi.so;gloo;onnxifi_loader;gcc_s;gcc;dl
-- Configuring incomplete, errors occurred!
See also "/home/jasonma/pytorch/build/CMakeFiles/CMakeOutput.log".
See also "/home/jasonma/pytorch/build/CMakeFiles/CMakeError.log"
_______________________________________________________________________________
/usr/bin/cmake -E cmake_link_script CMakeFiles/cmTC_21679.dir/link.txt --verbose=1
/usr/bin/cc -DCHECK_FUNCTION_EXISTS=pthread_create CMakeFiles/cmTC_21679.dir/CheckFunctionExists.c.o -o cmTC_21679 -lpthreads
/usr/bin/ld: cannot find -lpthreads
collect2: error: ld returned 1 exit status
CMakeFiles/cmTC_21679.dir/build.make:97: recipe for target 'cmTC_21679' failed
make[1]: *** [cmTC_21679] Error 1
make[1]: Leaving directory '/home/jasonma/pytorch/build/CMakeFiles/CMakeTmp'
Makefile:126: recipe for target 'cmTC_21679/fast' failed
make: *** [cmTC_21679/fast] Error 2
What does this error mean?
How can I fix it? | caffe2 | low | Critical |
338,913,235 | go | net/http/cookiejar: escaped path matching | Some servers and software return cookies with paths encoded
e.g. `/path/contains%20some%20spaces`
Go's default cookiejar can't match those paths when we make calls back, no matter how we make the requests.
There are some characters in RFC 6265 which should be encoded/escaped, but space isn't one of the ones listed. This might be a side effect of Go being really nice with URLs and making it so we rarely have to deal with escaped paths in code.
### What version of Go are you using (`go version`)?
```
go version go1.10.3 darwin/amd64
```
### What operating system and processor architecture are you using (`go env`)?
```
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/user/Library/Caches/go-build"
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/shannon/go"
GORACE=""
GOROOT="/usr/local/go"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/08/9qf64r3d209cvmgznzj49zg40000gn/T/go-build088227774=/tmp/go-build -gno-record-gcc-switches -fno-common"
```
### What did you do?
https://play.golang.org/p/_3MyXdTJbm-
nb. The cookie set in that example is forced escaping to emulate what the software I'm calling is doing.
### What did you expect to see?
all is well
### What did you see instead?
cookie not found
cookie not found
### How did you work around it
I copied cookiejar and modified jar.go to attempt to unescape the path; there might be a better option, but it worked for this instance.
```
@@ -369,6 +369,10 @@
if i == 0 {
return "/" // Path has the form "/abc".
}
+
+ if s, err := url.PathUnescape(path[:i]); err == nil {
+ return s
+ }
return path[:i] // Path is either of form "/abc/xyz" or "/abc/xyz/".
}
@@ -386,6 +390,8 @@
if c.Path == "" || c.Path[0] != '/' {
e.Path = defPath
+ } else if path, err := url.PathUnescape(c.Path); err == nil {
+ e.Path = path
} else {
e.Path = c.Path
}
``` | NeedsInvestigation | low | Critical |
339,045,033 | pytorch | Where is the include and lib path for caffe2? | I installed PyTorch with Caffe2 from source using the 'python setup_caffe2.py install' command.
Can anyone tell me where the default include and lib paths for Caffe2 are? | caffe2 | low | Minor |
338,968,930 | pytorch | [Caffe2] compiling error with gcc-6 | If you have a question or would like help and support, please ask at our
[forums](https://discuss.pytorch.org/).
If you are submitting a feature request, please preface the title with [feature request].
If you are submitting a bug report, please fill in the following details.
## Issue description
When compiling caffe2, the following error occurs
[ 72%] Linking CXX shared library ../lib/libcaffe2.so
[ 72%] Built target caffe2
[ 72%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/sgd/caffe2_gpu_generated_yellowfin_op_gpu.cu.o
gcc-6: error: unrecognized command line option ‘-faligned-new’; did you mean ‘-falign-jumps’?
CMake Error at caffe2_gpu_generated_yellowfin_op_gpu.cu.o.Release.cmake:219 (message):
Error generating
/home/mburon/pytorch/build/caffe2/CMakeFiles/caffe2_gpu.dir/sgd/./caffe2_gpu_generated_yellowfin_op_gpu.cu.o
caffe2/CMakeFiles/caffe2_gpu.dir/build.make:917: recipe for target 'caffe2/CMakeFiles/caffe2_gpu.dir/sgd/caffe2_gpu_generated_yellowfin_op_gpu.cu.o' failed
make[2]: *** [caffe2/CMakeFiles/caffe2_gpu.dir/sgd/caffe2_gpu_generated_yellowfin_op_gpu.cu.o] Error 1
CMakeFiles/Makefile2:1442: recipe for target 'caffe2/CMakeFiles/caffe2_gpu.dir/all' failed
make[1]: *** [caffe2/CMakeFiles/caffe2_gpu.dir/all] Error 2
Makefile:140: recipe for target 'all' failed
make: *** [all] Error 2
Provide a short description.
## Code example
Please try to provide a minimal example to repro the bug.
Error messages and stack traces are also helpful.
## System Info
Please copy and paste the output from our
[environment collection script](https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py)
(or fill out the checklist below manually).
You can get the script and run it with:
```
wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
```
- PyTorch or Caffe2: Caffe2
- How you installed PyTorch (conda, pip, source): cmake of caffe2
- Build command you used (if compiling from source): sudo make install
- OS: Ubuntu 18
- PyTorch version:
- Python version: 2.7
- CUDA/cuDNN version: 9.0 / 7.0.5
- GPU models and configuration: GTX 1070
- GCC version (if compiling from source): gcc-6
- CMake version: 3.10.2
- Versions of any other relevant libraries:
| caffe2 | low | Critical |
339,045,033 | vue | Vue.compile should return the errors which happens during compilation even in prod env | ### What problem does this feature solve?
I am building a VueJS frontend whose templates come from a backend, where end users contribute them through a CMS or similar. VueJS is bundled as the esm build in order to have the Vue.compile method.
When I run Vue.compile with the template string coming from the backend, I have no way to know whether a compilation error occurred, so I cannot display a message to the user. In development I just get the warning messages in the console.
### What does the proposed API look like?
Vue.compile could return an error boolean in an attribute, or an array of all errors that occurred during compilation.
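For example, usage could look something like this (`errors` is the hypothetical new field; `templateFromBackend` and `showTemplateErrors` are placeholders from my application):
```
const { render, staticRenderFns, errors } = Vue.compile(templateFromBackend)

if (errors && errors.length) {
  // surface the compilation problems to the end user instead of failing silently
  showTemplateErrors(errors)
} else {
  new Vue({ render, staticRenderFns }).$mount('#app')
}
```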
<!-- generated by vue-issues. DO NOT REMOVE --> | feature request,improvement | low | Critical |
339,072,649 | pytorch | [Caffe2] Cannot load caffe2.python. Error: libcaffe2.so: cannot open shared object file: No such file or directory | ## Issue description
I was trying to read my onnx model via caffe and deployed it onto AWS lambda.
The complete deployment seems to be working fine in my local but on AWS lambda we are getting this error.
## Code example
import caffe2.python.onnx.backend as backend
Please try to provide a minimal example to repro the bug.
Error messages and stack traces are also helpful.
Cannot load caffe2.python. Error: libcaffe2.so: cannot open shared object file: No such file or directory
## System Info
- AWS Lambda, deployed using Zappa.
- PyTorch or Caffe2:
- How you installed Caffe2 (conda, pip, source):
Source
- Build command you used (if compiling from source):
- OS:
Linux and deployed onto AWS lambda
- PyTorch version:
0.4.0
- Python version:
2.7
- CMake version: 4.8
| caffe2 | low | Critical |
339,082,777 | pytorch | [feature request] Implementing Block Sparse Operations | **TL;DR:** Implementing block-sparse operations for faster matrix-multiplication.
Is this something worth adding to PyTorch?
Goals:
1. Faster matrix-multiplication by taking advantage of block-sparsity
2. Improve running time of large-scale LSTMs - Word Language or Sentiment Analysis Models
3. Based on https://blog.openai.com/block-sparse-gpu-kernels/
New Features:
1. LinearBSMM Layer – A plug & play replacement for the standard linear layer where the user specifies the block size and sparsity pattern
2. Helper functions to generate different sparsity patterns – Random, Barabasi Albert, Watts Strogatz
Examples:
**Original Linear Layer**
```
m = nn.Linear(20, 30)
input = torch.randn(128, 20)
output = m(input)
```
**Block Sparse Linear Layer**
```
D = 20
N = 30
m = linearBSMM.random(D, N, p=0.25, block_size=32)
input = torch.randn(128, D)
output = m(input)
```
OR
```
D = 20
N = 30
sparsity_pattern = linearBSMM.random(D, N, p=0.25, block_size=32)
m = nn.LinearBSMM(D, N, sparsity_pattern)
input = torch.randn(128, D)
output = m(input)
``` | module: sparse,feature,triaged | medium | Critical |
339,109,977 | flutter | Flutter stacktrace is too verbose | I'm trying to find the place in my code where the error occurred, but it's giving me loads of internal code that I'm not editing and much of it is very repetitious and doesn't actually help me debug. I feel it should at least point me to the line where the error started.
```
I/flutter (19972): ══╡ EXCEPTION CAUGHT BY RENDERING LIBRARY ╞═════════════════════════════════════════════════════════
I/flutter (19972): The following assertion was thrown during performLayout():
I/flutter (19972): SliverGeometry is not valid: The "maxPaintExtent" is less than the "paintExtent".
I/flutter (19972): The maxPaintExtent is 1367.0408163265306, but the paintExtent is 1367.0408163265308. Maybe you have
I/flutter (19972): fallen prey to floating point rounding errors, and should explicitly apply the min() or max()
I/flutter (19972): functions, or the clamp() method, to the paintExtent? By definition, a sliver can't paint more than
I/flutter (19972): the maximum that it can paint!
I/flutter (19972): The RenderSliver that returned the offending geometry was:
I/flutter (19972): RenderSliverGrid#b8b4a relayoutBoundary=up14 NEEDS-LAYOUT NEEDS-PAINT
I/flutter (19972): creator: SliverGrid ← SliverPadding ← ShrinkWrappingViewport ← _ScrollableScope ←
I/flutter (19972): IgnorePointer-[GlobalKey#5f6a6] ← Semantics ← Listener ← _GestureSemantics ←
I/flutter (19972): RawGestureDetector-[LabeledGlobalKey<RawGestureDetectorState>#a3500] ←
I/flutter (19972): _ExcludableScrollSemantics-[GlobalKey#beb97] ← RepaintBoundary ← CustomPaint ← ⋯
I/flutter (19972): parentData: paintOffset=Offset(0.0, 0.0) (can use size)
I/flutter (19972): constraints: SliverConstraints(AxisDirection.down, GrowthDirection.forward, ScrollDirection.idle,
I/flutter (19972): scrollOffset: 0.0, remainingPaintExtent: Infinity, crossAxisExtent: 391.4, crossAxisDirection:
I/flutter (19972): AxisDirection.right, viewportMainAxisExtent: Infinity, remainingCacheExtent: Infinity cacheOrigin:
I/flutter (19972): 0.0 )
I/flutter (19972): geometry: SliverGeometry(scrollExtent: 1367.0, paintExtent: 1367.0, maxPaintExtent: 1367.0,
I/flutter (19972): hasVisualOverflow: true, cacheExtent: 1367.0)
I/flutter (19972): currently live children: 0 to 5
I/flutter (19972):
I/flutter (19972): When the exception was thrown, this was the stack:
I/flutter (19972): #0 SliverGeometry.debugAssertIsValid.<anonymous closure>.verify (package:flutter/src/rendering/sliver.dart:672:9)
I/flutter (19972): #1 SliverGeometry.debugAssertIsValid.<anonymous closure> (package:flutter/src/rendering/sliver.dart:690:15)
I/flutter (19972): #2 SliverGeometry.debugAssertIsValid (package:flutter/src/rendering/sliver.dart:702:6)
I/flutter (19972): #3 RenderSliver.debugAssertDoesMeetConstraints (package:flutter/src/rendering/sliver.dart:1057:21)
I/flutter (19972): #4 RenderObject.layout.<anonymous closure> (package:flutter/src/rendering/object.dart:1572:19)
I/flutter (19972): #5 RenderObject.layout (package:flutter/src/rendering/object.dart:1572:67)
I/flutter (19972): #6 RenderSliverPadding.performLayout (package:flutter/src/rendering/sliver_padding.dart:182:11)
I/flutter (19972): #7 RenderObject.layout (package:flutter/src/rendering/object.dart:1570:7)
I/flutter (19972): #8 RenderViewportBase.layoutChildSequence (package:flutter/src/rendering/viewport.dart:405:13)
I/flutter (19972): #9 RenderShrinkWrappingViewport._attemptLayout (package:flutter/src/rendering/viewport.dart:1640:12)
I/flutter (19972): #10 RenderShrinkWrappingViewport.performLayout (package:flutter/src/rendering/viewport.dart:1603:20)
I/flutter (19972): #11 RenderObject.layout (package:flutter/src/rendering/object.dart:1570:7)
I/flutter (19972): #12 _RenderProxyBox&RenderBox&RenderObjectWithChildMixin&RenderProxyBoxMixin.performLayout (package:flutter/src/rendering/proxy_box.dart:109:13)
I/flutter (19972): #13 RenderObject.layout (package:flutter/src/rendering/object.dart:1570:7)
I/flutter (19972): #14 _RenderProxyBox&RenderBox&RenderObjectWithChildMixin&RenderProxyBoxMixin.performLayout (package:flutter/src/rendering/proxy_box.dart:109:13)
I/flutter (19972): #15 RenderObject.layout (package:flutter/src/rendering/object.dart:1570:7)
I/flutter (19972): #16 _RenderProxyBox&RenderBox&RenderObjectWithChildMixin&RenderProxyBoxMixin.performLayout (package:flutter/src/rendering/proxy_box.dart:109:13)
I/flutter (19972): #17 RenderObject.layout (package:flutter/src/rendering/object.dart:1570:7)
I/flutter (19972): #18 _RenderProxyBox&RenderBox&RenderObjectWithChildMixin&RenderProxyBoxMixin.performLayout (package:flutter/src/rendering/proxy_box.dart:109:13)
I/flutter (19972): #19 RenderObject.layout (package:flutter/src/rendering/object.dart:1570:7)
I/flutter (19972): #20 _RenderProxyBox&RenderBox&RenderObjectWithChildMixin&RenderProxyBoxMixin.performLayout (package:flutter/src/rendering/proxy_box.dart:109:13)
I/flutter (19972): #21 RenderObject.layout (package:flutter/src/rendering/object.dart:1570:7)
I/flutter (19972): #22 _RenderProxyBox&RenderBox&RenderObjectWithChildMixin&RenderProxyBoxMixin.performLayout (package:flutter/src/rendering/proxy_box.dart:109:13)
I/flutter (19972): #23 RenderObject.layout (package:flutter/src/rendering/object.dart:1570:7)
I/flutter (19972): #24 _RenderProxyBox&RenderBox&RenderObjectWithChildMixin&RenderProxyBoxMixin.performLayout (package:flutter/src/rendering/proxy_box.dart:109:13)
I/flutter (19972): #25 RenderObject.layout (package:flutter/src/rendering/object.dart:1570:7)
I/flutter (19972): #26 _RenderProxyBox&RenderBox&RenderObjectWithChildMixin&RenderProxyBoxMixin.performLayout (package:flutter/src/rendering/proxy_box.dart:109:13)
I/flutter (19972): #27 RenderObject.layout (package:flutter/src/rendering/object.dart:1570:7)
I/flutter (19972): #28 RenderFlex.performLayout (package:flutter/src/rendering/flex.dart:738:15)
I/flutter (19972): #29 RenderObject.layout (package:flutter/src/rendering/object.dart:1570:7)
I/flutter (19972): #30 _RenderProxyBox&RenderBox&RenderObjectWithChildMixin&RenderProxyBoxMixin.performLayout (package:flutter/src/rendering/proxy_box.dart:109:13)
I/flutter (19972): #31 RenderObject.layout (package:flutter/src/rendering/object.dart:1570:7)
I/flutter (19972): #32 RenderStack.performLayout (package:flutter/src/rendering/stack.dart:520:15)
I/flutter (19972): #33 RenderObject.layout (package:flutter/src/rendering/object.dart:1570:7)
I/flutter (19972): #34 MultiChildLayoutDelegate.layoutChild (package:flutter/src/rendering/custom_layout.dart:141:11)
I/flutter (19972): #35 _ScaffoldLayout.performLayout (package:flutter/src/material/scaffold.dart:338:7)
I/flutter (19972): #36 MultiChildLayoutDelegate._callPerformLayout (package:flutter/src/rendering/custom_layout.dart:211:7)
I/flutter (19972): #37 RenderCustomMultiChildLayoutBox.performLayout (package:flutter/src/rendering/custom_layout.dart:355:14)
I/flutter (19972): #38 RenderObject._layoutWithoutResize (package:flutter/src/rendering/object.dart:1445:7)
I/flutter (19972): #39 PipelineOwner.flushLayout (package:flutter/src/rendering/object.dart:704:18)
I/flutter (19972): #40 _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding&PaintingBinding&RendererBinding.drawFrame (package:flutter/src/rendering/binding.dart:270:19)
I/flutter (19972): #41 _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding&PaintingBinding&RendererBinding&WidgetsBinding.drawFrame (package:flutter/src/widgets/binding.dart:627:13)
I/flutter (19972): #42 _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding&PaintingBinding&RendererBinding._handlePersistentFrameCallback (package:flutter/src/rendering/binding.dart:208:5)
I/flutter (19972): #43 _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding._invokeFrameCallback (package:flutter/src/scheduler/binding.dart:990:15)
I/flutter (19972): #44 _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding.handleDrawFrame (package:flutter/src/scheduler/binding.dart:930:9)
I/flutter (19972): #45 _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding._handleDrawFrame (package:flutter/src/scheduler/binding.dart:842:5)
I/flutter (19972): #46 _invoke (dart:ui/hooks.dart:120:13)
I/flutter (19972): #47 _drawFrame (dart:ui/hooks.dart:109:3)
I/flutter (19972):
I/flutter (19972): The following RenderObject was being processed when the exception was fired:
I/flutter (19972): RenderSliverGrid#b8b4a relayoutBoundary=up14 NEEDS-LAYOUT NEEDS-PAINT
I/flutter (19972): creator: SliverGrid ← SliverPadding ← ShrinkWrappingViewport ← _ScrollableScope ←
I/flutter (19972): IgnorePointer-[GlobalKey#5f6a6] ← Semantics ← Listener ← _GestureSemantics ←
I/flutter (19972): RawGestureDetector-[LabeledGlobalKey<RawGestureDetectorState>#a3500] ←
I/flutter (19972): _ExcludableScrollSemantics-[GlobalKey#beb97] ← RepaintBoundary ← CustomPaint ← ⋯
I/flutter (19972): parentData: paintOffset=Offset(0.0, 0.0) (can use size)
I/flutter (19972): constraints: SliverConstraints(AxisDirection.down, GrowthDirection.forward, ScrollDirection.idle,
I/flutter (19972): scrollOffset: 0.0, remainingPaintExtent: Infinity, crossAxisExtent: 391.4, crossAxisDirection:
I/flutter (19972): AxisDirection.right, viewportMainAxisExtent: Infinity, remainingCacheExtent: Infinity cacheOrigin:
I/flutter (19972): 0.0 )
I/flutter (19972): geometry: SliverGeometry(scrollExtent: 1367.0, paintExtent: 1367.0, maxPaintExtent: 1367.0,
I/flutter (19972): hasVisualOverflow: true, cacheExtent: 1367.0)
I/flutter (19972): currently live children: 0 to 5
I/flutter (19972): This RenderObject had the following descendants (showing up to depth 5):
I/flutter (19972): RenderRepaintBoundary#1e411 NEEDS-PAINT
I/flutter (19972): RenderSemanticsGestureHandler#3da27 NEEDS-PAINT
I/flutter (19972): RenderPointerListener#135b5 NEEDS-PAINT
I/flutter (19972): RenderSemanticsAnnotations#decb0 NEEDS-PAINT
I/flutter (19972): RenderPadding#3bec8 NEEDS-PAINT
I/flutter (19972): RenderRepaintBoundary#fb7a2 NEEDS-PAINT
I/flutter (19972): RenderSemanticsGestureHandler#62735 NEEDS-PAINT
I/flutter (19972): RenderPointerListener#67771 NEEDS-PAINT
I/flutter (19972): RenderSemanticsAnnotations#bbc5a NEEDS-PAINT
I/flutter (19972): RenderPadding#e0ea4 NEEDS-PAINT
I/flutter (19972): RenderRepaintBoundary#3a683 NEEDS-PAINT
I/flutter (19972): RenderSemanticsGestureHandler#19b8c NEEDS-PAINT
I/flutter (19972): RenderPointerListener#e5029 NEEDS-PAINT
I/flutter (19972): RenderSemanticsAnnotations#381f5 NEEDS-PAINT
I/flutter (19972): RenderPadding#06f0e NEEDS-PAINT
I/flutter (19972): RenderRepaintBoundary#1d923 NEEDS-PAINT
I/flutter (19972): RenderSemanticsGestureHandler#75600 NEEDS-PAINT
I/flutter (19972): RenderPointerListener#e721d NEEDS-PAINT
I/flutter (19972): RenderSemanticsAnnotations#82f4d NEEDS-PAINT
I/flutter (19972): RenderPadding#0c78a NEEDS-PAINT
I/flutter (19972): RenderRepaintBoundary#9231b NEEDS-PAINT
I/flutter (19972): RenderSemanticsGestureHandler#ad195 NEEDS-PAINT
I/flutter (19972): RenderPointerListener#cad1f NEEDS-PAINT
I/flutter (19972): RenderSemanticsAnnotations#e1153 NEEDS-PAINT
I/flutter (19972): RenderPadding#2d2e9 NEEDS-PAINT
I/flutter (19972): ...(descendants list truncated after 25 lines)
I/flutter (19972): ════════════════════════════════════════════════════════════════════════════════════════════════════
I/flutter (19972): Another exception was thrown: SliverGeometry is not valid: The "maxPaintExtent" is less than the "paintExtent".
I/flutter (19972): Another exception was thrown: SliverGeometry is not valid: The "maxPaintExtent" is less than the "paintExtent".
I/flutter (19972): Another exception was thrown: RenderBox was not laid out: RenderShrinkWrappingViewport#07de8 relayoutBoundary=up12 NEEDS-PAINT
I/flutter (19972): Another exception was thrown: RenderBox was not laid out: RenderIgnorePointer#7798d relayoutBoundary=up11 NEEDS-PAINT
I/flutter (19972): Another exception was thrown: RenderBox was not laid out: RenderSemanticsAnnotations#0d4fa relayoutBoundary=up10 NEEDS-PAINT
I/flutter (19972): Another exception was thrown: RenderBox was not laid out: RenderPointerListener#e72cf relayoutBoundary=up9 NEEDS-PAINT
I/flutter (19972): Another exception was thrown: RenderBox was not laid out: RenderSemanticsGestureHandler#80aa5 relayoutBoundary=up8 NEEDS-PAINT
I/flutter (19972): Another exception was thrown: RenderBox was not laid out: _RenderExcludableScrollSemantics#d5e7a relayoutBoundary=up7 NEEDS-PAINT
I/flutter (19972): Another exception was thrown: RenderBox was not laid out: RenderRepaintBoundary#eb88d relayoutBoundary=up6 NEEDS-PAINT
I/flutter (19972): Another exception was thrown: RenderBox was not laid out: RenderCustomPaint#a5d7c relayoutBoundary=up5 NEEDS-PAINT
I/flutter (19972): Another exception was thrown: RenderBox was not laid out: RenderRepaintBoundary#c81ea relayoutBoundary=up4 NEEDS-PAINT
I/flutter (19972): Another exception was thrown: RenderBox was not laid out: RenderRepaintBoundary#c81ea relayoutBoundary=up4 NEEDS-PAINT
I/flutter (19972): Another exception was thrown: RenderBox was not laid out: _RenderExcludableScrollSemantics#d5e7a relayoutBoundary=up7 NEEDS-PAINT
``` | framework,engine,dependency: dart,P2,team-engine,triaged-engine | low | Critical |
339,111,428 | rust | Use hash of compilation unit expression tree to prevent needless recompiles after formatting changes | If it is possible to obtain some sort of digest or fingerprint for the complete expression tree of a single compilation unit after tokenization and parsing has been completed but before actual compilation takes place, it should be possible to optimize away recompilations of code that has only changed aesthetically but reduces to an identical call tree.
Essentially, the idea is to explore whether it is possible to obtain a unique signature for a unit of code that has been parsed but before the heaviest lifting is done or any real compilation takes place, such that after compiling a file containing - for example - the following:
```rust
fn main() {
return match 1 == 1 {
true => { () }
false => { () }
}
}
```
that file is refactored to contain the following:
```rust
fn main() {
return match 1 == 1 {
true => (),
false => (),
}
}
```
the compiler is able to determine after a quick first pass that although the file has changed, the logic of the file remains unchanged and apart from updating symbol locations, etc. the actual compilation need not be repeated.
While this was an extremely naive example, there are a host of other changes that could be taken into account. Ultimately, it would be wonderful if (as a benchmark) any valid code once compiled would not trigger a complete recompile if `cargo fmt` is run regardless of how many superficial changes that cleanup/reformatting triggered.
Things that come to mind:
* Changing `use` statements
* Referring to types by their abbreviated vs unabbreviated names
* Adding or dropping commas or braces in places where the meaning is not affected
* Adding or removing comments anywhere
* Adding or removing whitespace anywhere
* Literally reordering independent (non-nested) blocks within a file such that `struct foo;` which was once before `struct bar;` now comes after it, etc. | C-enhancement,I-compiletime,T-compiler,A-incr-comp,C-feature-request,WG-incr-comp | low | Minor |
339,112,341 | TypeScript | Ability to patch/overwrite missing/wrong declarations | <!-- 🚨 STOP 🚨 𝗦𝗧𝗢𝗣 🚨 𝑺𝑻𝑶𝑷 🚨
Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Please read the FAQ first, especially the "Common Feature Requests" section.
-->
## Search Terms
<!-- List of keywords you searched for before creating this issue. Write them down here so that others can find this suggestion more easily -->
correct wrong declaration, fix declaration, overwrite module declaration, fix module type
## Suggestion
<!-- A summary of what you'd like to see added or changed -->
Sometimes we encounter an npm module with missing declarations and incorrect declarations in its types file. Wish we have this ability patch/correct its declaration for temporary using before PR a patch and have it's released.
## Current behavior & Workaround
Consider this situation: module `moduleWithIssues` indeed exports `itemExistedWithoutDeclaration`, but its declaration file doesn't contain it, and it has an incorrect declaration for `itemWithWrongDeclaration`.
```ts
import {
foo, bar,
itemExistedWithoutDeclaration, // report 'itemExistedWithoutDeclaration' doesn't exist
itemWithWrongDeclaration
} from 'moduleWithIssues'
// 'itemWithWrongDeclaration' is number type but declared as string, TS report type error
console.log(Math.abs(itemWithWrongDeclaration))
```
At present, the workaround I found is adding a local module declaration for `itemExistedWithoutDeclaration` and asserting `itemWithWrongDeclaration` to its correct type:
```ts
import { foo, bar, itemExistedWithoutDeclaration, itemWithWrongDeclaration } from 'moduleWithIssues'
declare module 'moduleWithIssues' {
const itemExistedWithoutDeclaration: number
}
const itemCorrected: number = itemWithWrongDeclaration as any
```
It works in a `*.ts` file but not in a `*.d.ts` file. An ideal patching solution would be to re-declare some items of `moduleWithIssues` in a `*.d.ts` file in the project, like below:
```ts
// interfaceInProject.d.ts
declare module 'moduleWithIssues' {
const itemExistedWithoutDeclaration: number
const itemWithWrongDeclaration: number
}
```
Unfortunately, this patch module will shadow the original module declaration of the `moduleWithIssues` package.
```ts
import { foo, bar, itemExistedWithoutDeclaration, itemWithWrongDeclaration } from 'moduleWithIssues'
```
`foo` and `bar` are reported non-existent.
<!--
What do you want to use this for?
What shortcomings exist with current approaches?
-->
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript / JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. new expression-level syntax)
| Suggestion,Needs Proposal | medium | Critical |
339,113,685 | rust | Optimize incremental compilation of data-only changes | Given a complex call tree such as `foo(bar(baz("hello")))` compiled successfully then changed to `foo(bar(baz("goodbye")))`, it should be possible for the compiler to realize that (with no other changes) no recompilation aside from the change of static data is needed, as I do not believe rust generics allow for introspection based on the content of the string data (like TypeScript does). | C-enhancement,T-compiler,A-incr-comp,C-optimization | low | Minor |
339,128,312 | terminal | cmd: add environment variable to disable/enable 'Terminate batch job (Y/N)?' confirmation | (migrated from https://github.com/PowerShell/PowerShell/issues/7080 per @BrucePay's suggestion)
When I cancel a `cmd` script with Ctrl+C, the following prompt appears:
Terminate batch job (Y/N)?
I understand why this exists, but I'd also like the option to disable it, so cancelling cancels immediately. [Judging by Stack Overflow, I'm not the only one](https://stackoverflow.com/questions/39085380/how-can-i-suppress-terminate-batch-job-y-n-confirmation-in-powershell)
So a request: could we have an environment variable to control this?
| Issue-Feature,Product-Cmd.exe,Area-Interaction | high | Critical |
339,141,343 | neovim | Focus reporting in terminal windows | Neovim does not appear to forward focus reporting into `:terminal` windows.
After `printf '\e[?1004h'; cat -` you should see `^[[O` and `^[[I` when your terminal emulator supports focus reporting, and focus moves out/in.
When using `:term` this does not work anymore.
Since Neovim gets notified about it, it should forward it to the current terminal window (if any).
Additionally it should send focus-in/-out codes when entering/leaving a terminal window - similar to how tmux triggers it when moving in/out of a pane (with the focus-events option).
This should only happen if focus reporting was requested, of course.
(mentioned in https://github.com/neovim/neovim/issues/8696#issuecomment-403123949 already, but taking it out from there into a separate issue) | enhancement,terminal | low | Minor |
339,180,631 | flutter | TabController.animateTo() should return future like ScrollController.animateTo() | ScrollController.animateTo() returns a Future<null>, so that I can easily animate a scroll change, and then do something when we've arrived.
TabController.animateTo() is void, so I can't.
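As a sketch (inside an async method), this is the kind of code I would like to write. The `await` on TabController.animateTo is the proposed behaviour, not something that works today, and the variable/handler names are placeholders:
```
// Works today: ScrollController.animateTo returns a Future.
await scrollController.animateTo(
  targetOffset,
  duration: const Duration(milliseconds: 300),
  curve: Curves.easeInOut,
);
onScrolledIntoPlace();

// Proposed: TabController.animateTo returning a Future as well.
await tabController.animateTo(nextIndex);
onTabSwitchComplete();
```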
I would like these to have a more consistent signature, and I'd like to be able to easily do something after a TabController's animateTo has completed. | framework,a: animation,f: material design,c: proposal,P2,team-design,triaged-design | low | Major |
339,190,137 | godot | Cannot remove extra quads in the cylindrical part of CapsuleMesh and CylinderMesh | Godot 3.0.4
There are extra Quads in the cylindrical section of `CapsuleMesh` and `CylinderMesh`.
If you remove rings in `CapsuleMesh`, it also reduces them on the spherical sections, which is unwanted.
You cannot remove all rings from `CylinderMesh`, there is always at least one in the middle.

| enhancement,topic:rendering,topic:3d | low | Minor |
339,201,386 | opencv | Build fail on macOS 10.13 with python 3.7 binding | - OpenCV => 4.0 - pre
- Operating System / Platform => macOS 10.13
- Compiler => Apple LLVM version 9.1.0 (clang-902.0.39.2)
##### Detailed description
```
[ 84%] Building CXX object modules/python3/CMakeFiles/opencv_python3.dir/__/src2/cv2.cpp.o
Buildfile: /Volumes/GDrive/git/opencv/build/java_test/build.xml
[ 84%] Building CXX object samples/cpp/CMakeFiles/example_tutorial_pnp_registration.dir/tutorial_code/calib3d/real_time_pose_estimation/src/main_registration.cpp.o
[ 84%] Building CXX object samples/cpp/CMakeFiles/example_tutorial_pnp_detection.dir/tutorial_code/calib3d/real_time_pose_estimation/src/main_detection.cpp.o
compile:
[mkdir] Created dir: /Volumes/GDrive/git/opencv/build/java_test/build/classes
[javac] Compiling 55 source files to /Volumes/GDrive/git/opencv/build/java_test/build/classes
/Volumes/GDrive/git/opencv/modules/python/src2/cv2.cpp:919:11: error: cannot initialize a variable of type
'char *' with an rvalue of type 'const char *'
char* str = PyString_AsString(obj);
^ ~~~~~~~~~~~~~~~~~~~~~~
[ 84%] Building CXX object samples/cpp/CMakeFiles/example_tutorial_pnp_registration.dir/tutorial_code/calib3d/real_time_pose_estimation/src/CsvReader.cpp.o
[ 84%] Building CXX object samples/cpp/CMakeFiles/example_tutorial_pnp_detection.dir/tutorial_code/calib3d/real_time_pose_estimation/src/CsvReader.cpp.o
```
##### Steps to reproduce
Following is my build configuration
```
cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D CMAKE_C_COMPILER=gcc \
-D CMAKE_CXX_COMPILER=g++ \
-D PYTHON_LIBRARY=/usr/local/Cellar/python@2/2.7.15/Frameworks/Python.framework/Versions/2.7/lib \
-D PYTHON_INCLUDE_DIR=/usr/local/Cellar/python@2/2.7.15/Frameworks/Python.framework/Versions/2.7/include/python2.7 \
-D PYTHON_PACKAGE_PATH=/usr/local/Cellar/python@2/2.7.15/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages \
-D PYTHON_NUMPY_INCLUDE_DIRS=/usr/local/lib/python2.7/site-packages/numpy/core/include \
-D PYTHON3_PACKAGE_PATH=/Users/Azhng/.virtual_env/keras/lib/python3.6/site-package\
-D PYTHON3_LIBRARY=/usr/local/Cellar/python3/3.7.0/Frameworks/Python.framework/Versions/3.7/lib/libpython3.7.dylib \
-D PYTHON3_INCLUDE_DIR=/usr/local/Cellar/python3/3.7.0/Frameworks/Python.framework/Versions/3.7/include/python3.7m \
-D PYTHON3_NUMPY_INCLUDE_DIRS=/usr/local/lib/python3.7/site-packages/numpy/core/include \
-D BUILD_NEW_PYTHON_SUPPORT=ON \
-D INSTALL_C_EXAMPLES=OFF \
-D INSTALL_PYTHON_EXAMPLES=ON \
-D BUILD_EXAMPLES=ON \
-D BUILD_opencv_python3=ON \
-D OPENCV_EXTRA_MODULES_PATH=/Volumes/GDrive/git/opencv_contrib/modules \
-D WITH_TBB=ON \
-D TBBROOT=/opt/intel/tbb \
-D TBB_ENV_INCLUDE=/opt/intel/compilers_and_libraries_2018.1.126/mac/tbb/include \
-D TBB_ENV_LIB=/opt/intel/compilers_and_libraries_2018.1.126/mac/tbb/lib/libtbb.dylib \
-D OPENCL_INCLUDE_DIR=/System/Library/Frameworks/OpenCL.framework/Versions/A/Headers \
-D OPENCL_LIBRARY=/System/Library/Frameworks/OpenCL.framework \
-D WITH_IPP=ON \
-D IPPROOT=/opt/intel/ipp \
-D BUILD_DOCS=ON \
-D WITH_EIGEN=ON \
-D WITH_AVFOUNDATION=ON \
-D WITH_GSTREAMER=ON \
-D WITH_JASPER=ON \
-D WITH_JPEG=ON \
-D TINYDNN_ROOT=/Volumes/GDrive/git/tiny-dnn \
-DTINYDNN_USE_SSE=ON \
-DTINYDNN_USE_AVX=ON \
-DENABLE_CXX11=ON \
..
``` | bug,category: python bindings,category: build/install | low | Critical |
339,203,038 | flutter | shared_preferences plugin should support value observing | Both Android and iOS support the observer pattern for `SharedPreferences`/`UserDefaults`.
The `shared_preferences` plugin in Flutter should therefore, in addition to simple writing & reading, provide `Stream` endpoints to the values as well.
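A purely hypothetical sketch of the Dart side, inside an async function (the `watchInt` method does not exist in the plugin today; it is only meant to illustrate the Stream endpoint idea):
```
final prefs = await SharedPreferences.getInstance();

// Hypothetical stream endpoint: emits whenever the 'counter' key changes.
prefs.watchInt('counter').listen((int value) {
  print('counter is now $value');
});
```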
Android:
https://developer.android.com/reference/android/content/SharedPreferences.OnSharedPreferenceChangeListener
- This will listen to any change in the preferences, and therefore might have performance implications for preference-heavy apps
iOS:
https://developer.apple.com/documentation/foundation/userdefaults/1408206-didchangenotification
- This will also listen to any change, and is therefore not providing the best performance
- `UserDefaults` does support the `KVO`, which can be used to listen for specific changes | c: new feature,p: shared_preferences,package,team-ecosystem,P3,triaged-ecosystem | medium | Major |
339,221,504 | opencv | Some functions in filterengine.hpp are not accessible any more |
##### System information (version)
- OpenCV => master
##### Detailed description
The following functions are not accessible from outside any more:
https://github.com/opencv/opencv/blob/43f821afb91330958d4723f6300a5ec0bfa2185e/modules/imgproc/src/filterengine.hpp#L282-L352
Some of them are accessible from the 2.4 branch
https://github.com/opencv/opencv/blob/f1c5d8364f83c693a631f7b6ed659faebd85d155/modules/imgproc/include/opencv2/imgproc/imgproc.hpp#L286-L303
Are there any reasons to hide them?
| category: imgproc,RFC | low | Minor |
339,223,966 | opencv | Unknown exception occurs randomly when executing Mat.clone() | ##### System information (version)
OpenCV 3.4.1
OS Windows 10 Home Edition 64 Bit
Java 8 with OpenCV java binding
##### Detailed description
An unknown exception occurs randomly when executing the Mat.clone() method; we also use the Mat.empty() method to check whether the Mat is empty beforehand. Here is the detailed stack trace. We did not find the location of the OpenCV native log file, so we could not find out the reason for this issue.
```
java.lang.Exception: unknown exception
at org.opencv.core.Mat.n_clone(Native Method) ~[opencv-3.4.1.jar:3.4.1]
at org.opencv.core.Mat.clone(Mat.java:219) ~[opencv-3.4.1.jar:3.4.1]
at com.scsmait.preprocessor.services.SurveillanceService$2.onNext(SurveillanceService.java:127) [classes/:na]
at com.scsmait.preprocessor.services.SurveillanceService$2.onNext(SurveillanceService.java:1) [classes/:na]
at io.reactivex.internal.operators.observable.ObservableSubscribeOn$SubscribeOnObserver.onNext(ObservableSubscribeOn.java:58) [rxjava-2.1.14.jar:na]
at io.reactivex.observers.SerializedObserver.onNext(SerializedObserver.java:113) [rxjava-2.1.14.jar:na]
at io.reactivex.internal.operators.observable.ObservableThrottleFirstTimed$DebounceTimedObserver.onNext(ObservableThrottleFirstTimed.java:82) [rxjava-2.1.14.jar:na]
at io.reactivex.internal.operators.observable.ObservableObserveOn$ObserveOnObserver.drainNormal(ObservableObserveOn.java:200) [rxjava-2.1.14.jar:na]
at io.reactivex.internal.operators.observable.ObservableObserveOn$ObserveOnObserver.run(ObservableObserveOn.java:252) [rxjava-2.1.14.jar:na]
at io.reactivex.internal.schedulers.ScheduledRunnable.run(ScheduledRunnable.java:66) [rxjava-2.1.14.jar:na]
at io.reactivex.internal.schedulers.ScheduledRunnable.call(ScheduledRunnable.java:57) [rxjava-2.1.14.jar:na]
at java.util.concurrent.FutureTask.run(Unknown Source) [na:1.8.0_141]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(Unknown Source) [na:1.8.0_141]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source) [na:1.8.0_141]
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [na:1.8.0_141]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [na:1.8.0_141]
at java.lang.Thread.run(Unknown Source) [na:1.8.0_141]
```
Any suggestion is welcome. Thank you. | bug,category: java bindings,incomplete | low | Minor |
339,224,686 | terminal | Give API to measure the space that a string occupies | This is an extension to #57.
Under a certain console/PTY, assume the font family/size is specified, give a string, and return the space (a bit mask of the character matrix?) it would occupy. | Issue-Feature,Product-Conhost,Area-Server | medium | Major |
339,230,962 | opencv | minAreaRect fails with a particular contour (and probably others) | ##### System information (version)
- OpenCV => 3.4.1 (I found the same behavior with 3.3.0 though)
- Operating System / Platform => OSX / WIN10
- Compiler => apple clang / Visual c++ 2017
##### Detailed description
minAreaRect is supposed to provide a rotated bounding box of a set of points (a contour). It works fine in most cases, but I encountered one case in which the rotated rectangle does not enclose the contour.
##### Steps to reproduce
cpp code:
```
std::vector<Point> hull;
hull.push_back(Point(7826,5663));
hull.push_back(Point(4190,5685));
hull.push_back(Point(882,5699));
hull.push_back(Point(879,5390));
hull.push_back(Point(871,2696));
hull.push_back(Point(865,558));
hull.push_back(Point(870,140));
hull.push_back(Point(7369,119));
hull.push_back(Point(7797,119));
hull.push_back(Point(7825,5314));
hull.push_back(Point(7826,5598));
RotatedRect rbb = minAreaRect(hull);
Point2f vertices[4];
rbb.points(vertices);
Mat image = Mat::zeros(6000, 8000, CV_8UC3);
std::vector<std::vector<Point> > contours(1);
contours[0] = hull;
drawContours(image, contours, 0, Scalar(0, 0, 255));
for (int i = 0; i < 4; ++i) {
line(image, vertices[i], vertices[(i+1)%4], Scalar(0, 255, 0));
}
imwrite("./view.png", image);
```
The resulting image (rotated rect is drawn in green and original contour in red):

Zoom on bottom left corner:
<img width="841" alt="bottomleftcorner" src="https://user-images.githubusercontent.com/10025039/42420874-c4261b3e-82cc-11e8-9ac8-4152e0f44f48.png">
Zoom on bottom right corner:
<img width="705" alt="bottomrightcorner" src="https://user-images.githubusercontent.com/10025039/42420893-e9fff5e6-82cc-11e8-94d2-984c36a133c3.png">
| category: imgproc,RFC | low | Major |
339,236,686 | rust | `./x.py test src/doc` fails with "no rules matched" | commit: 0c0315cfd9
An example in src/bootstrap/README.md, `./x.py test src/doc`, doesn't seem to actually work. It fails with the following output:
```
$ RUST_BACKTRACE=1 ./x.py test src/doc
Updating only changed submodules
Submodules updated in 0.03 seconds
Finished dev [unoptimized] target(s) in 0.22s
thread 'main' panicked at 'Error: no rules matched src/doc.', bootstrap/builder.rs:239:21
stack backtrace:
0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
at libstd/sys/unix/backtrace/tracing/gcc_s.rs:49
1: std::sys_common::backtrace::print
at libstd/sys_common/backtrace.rs:71
at libstd/sys_common/backtrace.rs:59
2: std::panicking::default_hook::{{closure}}
at libstd/panicking.rs:211
3: std::panicking::default_hook
at libstd/panicking.rs:227
4: std::panicking::rust_panic_with_hook
at libstd/panicking.rs:511
5: std::panicking::continue_panic_fmt
at libstd/panicking.rs:426
6: std::panicking::begin_panic_fmt
at libstd/panicking.rs:413
7: bootstrap::builder::StepDescription::run
at bootstrap/builder.rs:239
8: bootstrap::builder::Builder::run_step_descriptions
at bootstrap/builder.rs:569
9: bootstrap::builder::Builder::execute_cli
at bootstrap/builder.rs:559
10: bootstrap::Build::build
at bootstrap/lib.rs:465
11: bootstrap::main
at bootstrap/bin/main.rs:29
12: std::rt::lang_start::{{closure}}
at /checkout/src/libstd/rt.rs:74
13: std::panicking::try::do_call
at libstd/rt.rs:59
at libstd/panicking.rs:310
14: __rust_maybe_catch_panic
at libpanic_unwind/lib.rs:105
15: std::rt::lang_start_internal
at libstd/panicking.rs:289
at libstd/panic.rs:392
at libstd/rt.rs:58
16: std::rt::lang_start
at /checkout/src/libstd/rt.rs:74
17: main
18: __libc_start_main
19: _start
failed to run: /home/euclio/repos/rust/build/bootstrap/debug/bootstrap test src/doc
Build completed unsuccessfully in 0:00:01
```
I didn't check the other examples, but they ought to be checked as well. | T-bootstrap,C-bug | low | Critical |
339,240,284 | go | runtime: document the behaviour of Caller and Callers when used in deferred functions | ### What version of Go are you using (`go version`)?
```
go version go1.10 linux/amd64
```
### Does this issue reproduce with the latest release?
Yes, I've tried on b001ffb.
### What operating system and processor architecture are you using (`go env`)?
```
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/ainar/.cache/go-build"
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/ainar/go"
GORACE=""
GOROOT="/home/ainar/go/go1.10"
GOTMPDIR=""
GOTOOLDIR="/home/ainar/go/go1.10/pkg/tool/linux_amd64"
GCCGO="/usr/bin/gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build197597232=/tmp/go-build -gno-record-gcc-switches"
```
### What did you do?
https://play.golang.org/p/Jz6y0GkqTNW
### What did you expect to see?
Either
```
direct: main.go:11
deferred: main.go:10
```
Or the documentation of [`runtime.Caller`](https://golang.org/pkg/runtime/#Caller) mentioning that deferred functions have line numbers that point to the line after `return`.
### What did you see instead?
```
direct: main.go:11
deferred: main.go:12
```
And no mention of `defer` in `runtime.Caller` or `runtime.Callers` docs.
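For reference, a minimal sketch of the kind of program involved (not the exact playground code, so file/line values will differ, but the deferred call reports a line past the last statement rather than the line of the `defer` itself):

```go
package main

import (
	"fmt"
	"runtime"
)

// where prints the file:line of whoever called it (skip = 1).
func where(label string) {
	_, file, line, ok := runtime.Caller(1)
	if !ok {
		return
	}
	fmt.Printf("%s: %s:%d\n", label, file, line)
}

func main() {
	defer where("deferred") // reported line points past the call below, not here
	where("direct")
}
```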
I understand why it's that way, and I've even created a wonky workaround ([on Russian SO](https://ru.stackoverflow.com/a/851812/180092), [on Playground](https://play.golang.org/p/mTFoW5WtEJq)). But I think that it would be good to mention this behaviour and basic reasoning behind it in the functions' docs. | Documentation,NeedsFix,compiler/runtime | low | Critical |
339,286,866 | go | proposal: bytes/v2: remove NewBufferString | The constructor is often misused and confuses people. It can be used to put initial contents into a buffer that will be written into after, but so can buf.Write. Meanwhile, NewReader handles the case that most beginners misuse, calling NewBufferString when they only need NewBuffer.
This function's use case is not worth the confusion it causes and it should be deleted from the library. | v2,Proposal | medium | Major |
339,289,391 | pytorch | [feature request] SSIM-based cost function as part of the standard set of loss functions | ## Issue description
SSIM-based cost function as part of the standard set of loss functions in PyTorch.
See example here: https://github.com/Po-Hsun-Su/pytorch-ssim | module: loss,triaged,enhancement | low | Major |
339,292,055 | go | proposal: spec: clone and splice, new channel primitives | Here are a couple of suggestions made by Doug McIlroy, original author of test/chan/powser[12].go and instigator of pipes in Unix. They are intriguing.
In Doug's words:
====
`splice c1 c2`
where channel c1 contains no buffered data,
identifies the write end of channel c1 with that
of channel c2. Channel c1 becomes write-only.
`clone c`
makes a new read-only channel positioned at the
same place in the data stream as readable
channel c. Thereafter both channels read the
same data sequence from the stream at unrelated
rates.
Splice allows a filter to elide itself from a pipeline
when it has no further substantive work to do, rather than
going into a copy loop.
Clone enables (buffered) fanout of a data stream.
Buffered data items may be garbage-collected when they
have been delivered to all readers.
These two capabilities are of general utility in stream
processing. golang.org/test/chan/powser1.go is one
application that could profitably use them--to eliminate
the many instances of Copy and also the per-element
go-routines spawned for every Split. Some Unix variants
have offered pipe splicing, and fanout is a staple of
dataflow algorithms. The current workarounds consume an
awful lot of bootless machine cycles. | LanguageChange,Proposal,LanguageChangeReview | high | Critical |
339,404,814 | rust | Implement `propagate_panic` on `LockResult` | `LockResult<Guard>` is an alias of `Result<T, E>` used when locking a `Mutex<T>`, `RwLock<T>`, or other synchronization primitive. It exists to take into account the fact that a panicked thread may poison the lock, thereby corrupting the state of whatever is contained within the lock.
When developing in Rust, it is common to search a codebase for instances of `.unwrap()` to find places in which the code may panic. However, instances of `.unwrap()` are extremely common in multithreaded code where locks are frequently used.
Since calling `.unwrap()` upon a `LockResult<Guard>` **only** triggers a panic if a panic has already occurred in the thread holding the lock, it cannot be the 'original' source of a panic, and therefore is perfectly fine to call in a codebase in which no other instances of `.unwrap()` exist.
I suggest implementing a `.propagate_panic()` method on `LockResult<Guard>` that has identical functionality to `.unwrap()`, but prevents additional noise when searching a codebase for real sources of panics. | T-libs-api,C-feature-request | low | Minor |
339,517,793 | node | Creating a branch of the Docs using RunKit | As a part of the @nodejs/website-redesign work, we've gone through some work to analyze a way to improve some of the interactivity of the documentation on the website once we re-launch with the work the Website Redesign team is working on.
Early on, RunKit approached us with an interest to help improve the docs and make them more interactive while degrading gracefully.
My personal biggest concern was offline mode and not blocking the Docs for those who have JavaScript disabled – both of which are entirely addressed by the demos that the RunKit team has provided.
They've also been going to great lengths to ensure that some of the edge cases we have are addressed entirely within the platform.
@tolmasky from the RunKit team has requested that we begin to spin up an initial setup of RunKit enabled docs to help load-test the RunKit servers. This is purely to help see if they're going to need to set up a dedicated server for Node.js or if their existing infra will suffice.
I offered to be a bridge to Core to begin to help process this request – not sure what the barriers on the Docs side will be, but happy to help connect the dots and do what we need to to make this work 👍
Also wanted to mention that the RunKit team has been incredibly willing to help out and bend over backward to enable the Website Redesign team to succeed in revamping the Node.js website and improving the UX for developers. Huge thank you to them for all the work they're doing 🙌 | doc,meta | medium | Major |
339,530,049 | godot | Color transformation issue when rendering to viewports | Godot 3.0.2 + Windows 10
When rendering a solid-color image to a viewport, the final image shows a different color than the same image rendered outside of the viewport.
<br>
Consider this node structure:
```
root
--imageA
--viewportTexture
--viewport
----imageB
```
`imageA` and `imageB` are solid-color `#090800` quads, both rendered by sampling the color from a texture in the shader, using the engine's default `force_srgb` option code:
```
mix(pow((color + vec3(0.055)) * (1.0 / (1.0 + 0.055)),vec3(2.4)),color * (1.0 / 12.92),lessThan(color,vec3(0.04045)));
```
`imageA` renders to screen as the original `#090800`, whereas `imageB` renders as `#0d0d00`, making them slightly off and thus non-identical, even though they should be the same.
<br>
I guess this implies that the transformation code is not accurate and, when applied twice (rendering to the viewport + rendering to the screen), distorts the source color. This happens both with a custom shader and with a default SpatialMaterial using the force sRGB setting.
If this cannot be fixed, quick hacks to work around it would be appreciated.
____________________________________
Edit: The issue appears to be reading the viewport texture color, which produces the incorrect value.
Edit2: Tried using the `keep_3d_linear` flag mentioned in #19375, available in the 3.1 nightlies, but the result remains unchanged: the read (and in turn rendered) color is still the wrong one. | bug,topic:rendering | low | Major |
339,584,335 | go | os: Remove/RemoveAll should remove read-only folders on Windows | ### What version of Go are you using (`go version`)?
`go version go1.10.2 windows/amd64`
### Does this issue reproduce with the latest release?
Untested, but I don't see any changes that would have fixed it
### What operating system and processor architecture are you using (`go env`)?
`windows/amd64`
### What did you do?
```go
package main
import (
"os"
)
func main() {
err := os.RemoveAll("./folder_with_icon")
if err != nil {
panic(err)
}
}
```
`folder_with_icon` is an empty folder that has had a custom icon added via Windows Explorer's interface (`Properties` => `Customize` => `Change Icon...`).
### What did you expect to see?
`folder_with_icon` should be deleted.
### What did you see instead?
```
panic: remove ./folder_with_icon: Access is denied.
goroutine 1 [running]:
main.main()
C:/[path]/test.go:10 +0x6a
```
### Notes
This is similar to #9606.
Setting a custom folder icon on Windows sets the `read-only` attribute on the folder and creates a `desktop.ini` file with the `system` attribute inside the folder. Despite being read-only, the folder can be deleted both through File Explorer and by `rd /s folder_with_icon` by its owner, even if the owner is not an administrator account. The above Go program will not delete the folder, even if it is run with administrator privileges.
`os.RemoveAll()` removes the `desktop.ini` but does not delete the folder. | OS-Windows,NeedsInvestigation | medium | Major |
339,585,640 | pytorch | [gradcheck] warn about the case that multiple inputs share storage | also re-enable the skipped einsum tests
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved | module: autograd,triaged | low | Minor |
339,586,685 | TypeScript | --noImplicitThis error is inconsistent | **TypeScript Version:** 3.0.0-dev.20180707
**Code**
```ts
declare function f(a: any): void;
f(function() {
this.m();
});
f({
callback: function() {
this.m();
}
});
```
**Expected behavior:**
With `--noImplicitThis`, errors in both cases or neither case.
**Actual behavior:**
Only error in the first case. | Bug | low | Critical |