id: int64 (values 393k – 2.82B)
repo: string (68 distinct values)
title: string (length 1 – 936)
body: string (length 0 – 256k, may be null)
labels: string (length 2 – 508)
priority: string (3 distinct values)
severity: string (3 distinct values)
389,749,252
rust
Recursive call passes `cargo check` but fails with `cargo build`
Hi Rust team, Thank you so much for the great language. ## Problem Today I encountered a strange behavior of Cargo, 2018 edition. This code passes `cargo check`, but it fails with `cargo build`. ```rust fn repeat(n: i64, f: impl Fn()) { if n > 0 { f(); repeat(n - 1, &f); } } fn main() { repeat(3, || {}); } ``` `cargo check` output: ``` Finished dev [unoptimized + debuginfo] target(s) in 0.16s ``` `cargo build --verbose` output: <details> <summary> (collapsed)</summary> ``` Compiling rust-err v0.1.0 (/Users/ryu/work/rust-err) Running `rustc --edition=2018 --crate-name rust_err src/main.rs --color never --crate-type bin --emit=dep-info,link -C debuginfo=2 -C metadata=330d5c8df6305891 -C extra-filename=-330d5c8df6305891 --out-dir /Users/ryu/work/rust-err/target/debug/deps -C incremental=/Users/ryu/work/rust-err/target/debug/incremental -L dependency=/Users/ryu/work/rust-err/target/debug/deps` error[E0275]: overflow evaluating the requirement `[closure@src/main.rs:9:15: 9:20]: std::ops::Fn<()>` | = help: consider adding a `#![recursion_limit="128"]` attribute to your crate = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the 
requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 
9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for `&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` = note: required because of the requirements on the impl of `std::ops::Fn<()>` for 
`&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&[closure@src/main.rs:9:15: 9:20]` error: aborting due to previous error For more information about this error, try `rustc --explain E0275`. error: Could not compile `rust-err`. Caused by: process didn't exit successfully: `rustc --edition=2018 --crate-name rust_err src/main.rs --color never --crate-type bin --emit=dep-info,link -C debuginfo=2 -C metadata=330d5c8df6305891 -C extra-filename=-330d5c8df6305891 --out-dir /Users/ryu/work/rust-err/target/debug/deps -C incremental=/Users/ryu/work/rust-err/target/debug/incremental -L dependency=/Users/ryu/work/rust-err/target/debug/deps` (exit code: 1) ``` It seems that there is a recursive trait requirement, but I have no idea why this simple code causes it. </details> ## Meta `rustc --version --verbose`: ``` rustc 1.31.0 (abe02cefd 2018-12-04) binary: rustc commit-hash: abe02cefd6cd1916df62ad7dc80161bea50b72e8 commit-date: 2018-12-04 host: x86_64-apple-darwin release: 1.31.0 LLVM version: 8.0 ```
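A common workaround for this kind of overflow (not part of the original report; the inner helper name is illustrative) is to route the recursion through a `&dyn Fn()`, so the compiler does not have to prove `Fn` for an ever-deeper stack of `&`-references:

```rust
fn repeat(n: i64, f: impl Fn()) {
    // Hypothetical workaround sketch: recursing on a trait object keeps the
    // callee type fixed, so no new `&&...&closure: Fn<()>` obligations pile up.
    fn repeat_inner(n: i64, f: &dyn Fn()) {
        if n > 0 {
            f();
            repeat_inner(n - 1, f);
        }
    }
    repeat_inner(n, &f);
}

fn main() {
    repeat(3, || {});
}
```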
A-closures,T-lang,A-specialization,A-impl-trait
low
Critical
389,820,766
rust
Different compiler versions disagree on whether failure_derive macro is used or not
This leads to some compatibility issues. The code I have now compiles on 1.25.0 and 1.28.0 but reports an unused ```#[macro_use]``` import on 1.31.0. https://travis-ci.org/stratis-storage/devicemapper-rs/jobs/466549505. Is this a bug in Rust, or something about the behavior of the ```failure_derive``` crate? Duplicate issue in the failure crate: https://github.com/rust-lang-nursery/failure/issues/278, because I'm hoping someone has a hunch about which it is.
C-enhancement,A-diagnostics,T-compiler
low
Critical
389,839,158
go
runtime: incorrect constant value for PROCESS_ALL_ACCESS
https://github.com/golang/go/blob/01e072db5d26c224dfbe7763a5b94ab23c163983/src/runtime/syscall_windows_test.go#L815 From winnt.h we can see that if we are running on Windows Vista or newer, `0xFFFF` should be or-ed with the `STANDARD_RIGHTS_REQUIRED` and `SYNCHRONIZE` flags. _winnt.h#L11279_ ```c++ #if (NTDDI_VERSION >= NTDDI_VISTA) #define PROCESS_ALL_ACCESS (STANDARD_RIGHTS_REQUIRED | SYNCHRONIZE | \ 0xFFFF) #else #define PROCESS_ALL_ACCESS (STANDARD_RIGHTS_REQUIRED | SYNCHRONIZE | \ 0xFFF) #endif ```
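For reference, a small sketch (not the Go runtime's actual code; the constant names below are illustrative) showing how the two definitions expand, using the winnt.h values `STANDARD_RIGHTS_REQUIRED = 0x000F0000` and `SYNCHRONIZE = 0x00100000`:

```go
package main

import "fmt"

const (
	standardRightsRequired = 0x000F0000
	synchronize            = 0x00100000

	// Vista and newer: OR in 0xFFFF.
	processAllAccessVista = standardRightsRequired | synchronize | 0xFFFF // 0x1FFFFF
	// Pre-Vista: OR in 0xFFF.
	processAllAccessLegacy = standardRightsRequired | synchronize | 0xFFF // 0x1F0FFF
)

func main() {
	fmt.Printf("vista+: %#x, legacy: %#x\n", processAllAccessVista, processAllAccessLegacy)
}
```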
Testing,NeedsInvestigation,compiler/runtime
low
Minor
389,871,389
flutter
`flutter doctor` should report whether the Flutter install dir is clean
If the Flutter install directory contains any changes, `flutter doctor` or `flutter doctor -v` should report that. It would make it easier to handle support questions that are caused by inadvertently modified files in the Flutter install dir. I created https://github.com/flutter/flutter/wiki/Workarounds-for-common-issues#flutter-installation-corrupted because this happens quite a lot. See also #9218. I'd suggest doing the `git clean -xfd`, `git stash drop`, ... only when `--force` (`flutter upgrade --force`, `flutter channel foo --force`) is passed. It might be useful to execute the `git clean` steps with `flutter upgrade --force` even when no upgrade is available.
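A minimal sketch (mine, not from the issue; the function name and the choice of `git status --porcelain` are assumptions) of the kind of check `flutter doctor` could run against the install dir:

```dart
import 'dart:io';

/// Returns true when the Flutter install dir has no modified or untracked files.
Future<bool> flutterInstallIsClean(String flutterRoot) async {
  final result = await Process.run(
    'git',
    ['status', '--porcelain'],
    workingDirectory: flutterRoot,
  );
  // Empty porcelain output means the working tree is clean.
  return result.exitCode == 0 && (result.stdout as String).trim().isEmpty;
}
```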
c: new feature,tool,t: flutter doctor,P2,team-tool,triaged-tool
low
Major
389,931,100
pytorch
In-place operations on `.data` or `.detach()` of sparse tensor doesn't update the original tensor
Previously, if we change the tensor metadata (e.g. sizes / strides / storage / storage_offset) of a derived tensor (i.e. tensors created from Python `tensor.data` or Python/C++ `tensor.detach()`), those metadata in the original tensor will also be updated. However, after https://github.com/pytorch/pytorch/pull/13827 is merged, the new behavior is that those metadata changes to a derived tensor will not update the original tensor anymore, and we created the `allow_tensor_metadata_change_` flag to make such changes explicitly illegal, to prevent users from changing metadata of the derived tensor and expecting the original tensor to also be updated. In particular, this affects in-place operations (e.g. `zero_()` / `copy_()` / `add_()`) on `.data` or `.detach()` of sparse tensors, because for sparse tensors, `indices` and `values` are also considered as a tensor's metadata, and changing those two fields on a derived tensor will not update the original tensor. Examples: `zero_()` ```python >>> import torch >>> i = torch.tensor([[0, 1, 1], [2, 0, 2]]) >>> v = torch.tensor([3., 4., 5.]) >>> st = torch.sparse.FloatTensor(i, v) >>> dst = st.data # Without the allow_tensor_metadata_change_ flag: >>> dst.zero_() # Note that `dst` and `st` have different values now >>> dst tensor(indices=tensor([], size=(2, 0)), values=tensor([], size=(0,)), size=(2, 3), nnz=0, layout=torch.sparse_coo) >>> st tensor(indices=tensor([[0, 1, 1], [2, 0, 2]]), values=tensor([3., 4., 5.]), size=(2, 3), nnz=3, layout=torch.sparse_coo) # With the allow_tensor_metadata_change_ flag: >>> dst.zero_() # This operation is disallowed, because it breaks consistency between `dst` and `st` Traceback (most recent call last): File "<stdin>", line 1, in <module> RuntimeError: resize_and_clear_ is not allowed on Tensor created from .data or .detach() ``` `copy_()` ```python >>> import torch >>> i = torch.tensor([[0, 1, 1], [2, 0, 2]]) >>> v = torch.tensor([3., 4., 5.]) >>> st = torch.sparse.FloatTensor(i, v) >>> dst = st.data # Without the allow_tensor_metadata_change_ flag: >>> i2 = torch.tensor([[0, 1, 1], [2, 0, 2]]) >>> v2 = torch.tensor([10., 10., 10.]) >>> st2 = torch.sparse.FloatTensor(i2, v2) >>> dst.copy_(st2) # Note that `dst` and `st` have different values now >>> dst tensor(indices=tensor([[0, 1, 1], [2, 0, 2]]), values=tensor([10., 10., 10.]), size=(2, 3), nnz=3, layout=torch.sparse_coo) >>> st tensor(indices=tensor([[0, 1, 1], [2, 0, 2]]), values=tensor([3., 4., 5.]), size=(2, 3), nnz=3, layout=torch.sparse_coo) # With the allow_tensor_metadata_change_ flag: >>> dst.copy_(st2) # This operation is disallowed, because it breaks consistency between `dst` and `st` Traceback (most recent call last): File "<stdin>", line 1, in <module> RuntimeError: resize_ is not allowed on Tensor created from .data or .detach() ``` `add_()` ```python >>> import torch >>> i = torch.tensor([[0, 1, 1], [2, 0, 2]]) >>> v = torch.tensor([3., 4., 5.]) >>> st = torch.sparse.FloatTensor(i, v) >>> dst = st.data >>> st2 = torch.sparse.FloatTensor(torch.tensor([[0, 1, 1], [2, 0, 2]]), torch.tensor([10., 10., 10.])) # Without the allow_tensor_metadata_change_ flag: >>> dst.add_(st2) # Note that `dst` and `st` have different values now >>> dst tensor(indices=tensor([[0, 1, 1], [2, 0, 2]]), values=tensor([13., 14., 15.]), size=(2, 3), nnz=3, layout=torch.sparse_coo) >>> st tensor(indices=tensor([[0, 1, 1], [2, 0, 2]]), values=tensor([3., 4., 5.]), size=(2, 3), nnz=3, layout=torch.sparse_coo) # With the allow_tensor_metadata_change_ flag: >>> dst.add_(st2) # 
This operation is disallowed, because it breaks consistency between `dst` and `st` Traceback (most recent call last): File "<stdin>", line 1, in <module> RuntimeError: set_indices_and_values_unsafe is not allowed on Tensor created from .data or .detach() ``` The current solution to this problem is to always perform in-place operations on the original sparse tensor and not the derived tensor (i.e. tensor obtained from `.data` or `.detach()`). However, if in the future we do want to have `zero_()` / `copy_()` / `add_()` on `.data` or `.detach()` of sparse tensor work, we can consider adding one more level of indirection to how we store the `indices` and `values` of a sparse tensor (e.g. by having a `SparseStorageImpl` that stores those two fields), and we make the derived tensor from `.data` and `.detach()` share the same storage pointer with the original tensor, so that any changes to `indices` and `values` will always be reflected back to the original tensor.
module: sparse,triaged
low
Critical
389,957,581
You-Dont-Know-JS
this & Object Prototypes - Chapter 3: Symbol.iterator enumerability
In the end of the chapter there's an example and statement: ```js var myObject = { a: 2, b: 3 }; Object.defineProperty( myObject, Symbol.iterator, { enumerable: false, writable: false, configurable: true, value: function() { var o = this; var idx = 0; var ks = Object.keys( o ); return { next: function() { return { value: o[ks[idx++]], done: (idx > ks.length) }; } }; } } ); // iterate `myObject` manually var it = myObject[Symbol.iterator](); it.next(); // { value:2, done:false } it.next(); // { value:3, done:false } it.next(); // { value:undefined, done:true } // iterate `myObject` with `for..of` for (var v of myObject) { console.log( v ); } // 2 // 3 ``` > **Note:** We used `Object.defineProperty(..)` to define our custom `@@iterator` (mostly so we could make it non-enumerable), but using the `Symbol` as a *computed property name* (covered earlier in this chapter), we could have declared it directly, like `var myObject = { a:2, b:3, [Symbol.iterator]: function(){ /* .. */ } }`. I'm somewhat confused by the part: > (mostly so we could make it non-enumerable) [MDN states](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Symbol/iterator) that `Symbol.iterator` is non-enumerable, my own experimentation is a little bit confusing... ```js const iterable = { a: 1, [Symbol.iterator]: function() {} }; for (x in iterable) console.log(x); // a console.log(iterable.propertyIsEnumerable(Symbol.iterator)); // true ...WTF?! ``` Is there something else we should know?
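A small sketch (my own, not from the book) that may help separate the two things going on: `for..in` and `Object.keys` never visit symbol keys at all, while `propertyIsEnumerable` reports the property's own enumerable flag, which is `true` for a symbol key created via an object literal:

```js
const obj = { a: 1, [Symbol.iterator]: function () {} };

// Symbol keys are invisible to for..in and Object.keys regardless of the flag:
console.log(Object.keys(obj));                  // ["a"]
console.log(Object.getOwnPropertySymbols(obj)); // [Symbol(Symbol.iterator)]

// ...but the literal-defined symbol property is still an enumerable own property:
console.log(obj.propertyIsEnumerable(Symbol.iterator)); // true

// Object.defineProperty is what lets you turn that flag off:
Object.defineProperty(obj, Symbol.iterator, { enumerable: false });
console.log(obj.propertyIsEnumerable(Symbol.iterator)); // false
```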
for second edition
low
Minor
389,958,495
create-react-app
Add a way to specify an alternate tsconfig for production builds
### Is this a bug report? No. This is a feature request. Our team is now using CRA with TS and it is great! One thing we can't seem to find is any information on is having different TS compiler settings when doing a production build. Simply adding a `tsconfig.prod.json` appears to have no effect. I can't seem to find a documented way to add this and it would be super useful! ### Did you try recovering your dependencies? N/A ### Which terms did you search for in User Guide? TypeScript https://reactjs.org/docs/static-type-checking.html#typescript ### Environment ``` Environment Info: System: OS: macOS 10.14.1 CPU: x64 Intel(R) Core(TM) i7-7920HQ CPU @ 3.10GHz Binaries: Node: 10.14.0 - /usr/local/bin/node Yarn: 1.12.3 - /usr/local/bin/yarn npm: 6.4.1 - ~/Projects/cirrus-admin-webapp/node_modules/.bin/npm Browsers: Chrome: 70.0.3538.110 Firefox: 63.0.3 Safari: 12.0.1 npmPackages: @types/react: 16.7.7 => 16.7.7 @types/react-dom: 16.0.11 => 16.0.11 react: 16.6.3 => 16.6.3 react-dom: 16.6.3 => 16.6.3 react-scripts: 2.1.1 => 2.1.1 npmGlobalPackages: create-react-app: Not Found ``` ### Steps to Reproduce 1. Add TS to your project as outlined in the docs. 2. Attempt to add a `tsconfig.prod.json` with different settings than `tsconfig.json`: ```json { "extends": "./tsconfig.json", "compilerOptions": { "noUnusedLocals": true } } ``` 3. Run `yarn build` or whatever you have configured to do a production build with `react-scripts` 4. Notice the settings aren't applied. ### Expected Behavior It would honor the `tsconfig.prod.json` ### Actual Behavior It honors the original `tsconfig.json`. ### Reproducible Demo N/A. I can add one later if people have any issues reproducing.
issue: proposal
medium
Critical
389,980,541
TypeScript
PluginModule.onConfigurationChanged not called when plugin config in tsconfig.json changes
**TypeScript Version:** 3.3.0-dev.20181208 **Search Terms:** plugin configuration change onConfigurationChanged **Expected behavior:** `PluginModule.onConfigurationChanged` is called when the configuration of the plugin in `tsconfig.json` changes. The Project already watches the tsconfig.json and therefore knows when the config of a plugin changes. **Actual behavior:** `PluginModule.onConfigurationChanged` is only called when the client sends a `ConfigurePlugin` command. **Related Issues:** #15915 is basically the same issue from 1.5 years ago. Back then it was decided this is working as intended and a plugin is responsible for watching the config file. In the meantime `onConfigurationChanged` was introduced in #28106. Since the `ConfigurePlugin` command is supposed to override any config in tsconfig.json, a plugin that watches for config file changes would need special handling to account for that fact.
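For context, a minimal sketch of the plugin shape involved (assuming the standard tsserver plugin entry point; the function bodies here are illustrative, not a real plugin):

```ts
import * as ts from "typescript/lib/tsserverlibrary";

function init(_modules: { typescript: typeof ts }) {
  let currentConfig: unknown;

  function create(info: ts.server.PluginCreateInfo): ts.LanguageService {
    // Config as written under "plugins" in tsconfig.json.
    currentConfig = info.config;
    return info.languageService;
  }

  // Today this only fires for the ConfigurePlugin command; the request is to
  // also fire it when the "plugins" entry in tsconfig.json itself changes.
  function onConfigurationChanged(config: unknown) {
    currentConfig = config;
  }

  return { create, onConfigurationChanged };
}

export = init;
```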
Suggestion,Domain: TSServer,Experience Enhancement
low
Critical
390,027,253
angular
Support ::slotted() in ViewEncapsulation.Emulated
# πŸš€ feature request This was discussed previously in #11595 and (incorrectly) closed as complete by @robwormald. ### Relevant Package @angular/core ### Description ::slotted() is part of the Shadow DOM [working draft](https://drafts.csswg.org/css-scoping/#slotted-pseudo) (see also [MDN](https://developer.mozilla.org/en-US/docs/Web/CSS/::slotted)), with significant vendor support. ### Describe the solution you'd like ViewEncapsulation.Emulated supports ::slotted(), following the behavior in the Shadow DOM specification. ### Describe alternatives you've considered ViewEncapsulation.Native is a viable alternative for some browsers. However, neither IE nor current Edge supports ::slotted, and Safari support is buggy (https://caniuse.com/#feat=shadowdomv1). ::ng-deep is currently the only viable alternative, but Angular has deprecated it.
feature,action: discuss,area: core,state: has PR,core: CSS encapsulation,feature: under consideration
medium
Critical
390,027,426
go
cmd/go: 'go mod download' of a replaced module downloads the go.mod for its replacement
`go mod download` of a specific version of a module should download *only* that module. Today, if that version happens to have a replacement, `go mod download` also downloads the `go.mod` file (but not the `.zip` file) for its replacement. <details> ``` $ go mod init golang.org/issue/scratch go: creating new go.mod: module golang.org/issue/scratch $ go mod edit -require golang.org/x/[email protected] $ go mod edit -replace golang.org/x/[email protected]=golang.org/x/[email protected] $ go mod download -json golang.org/x/[email protected] go: finding golang.org/x/text v0.3.0 go: finding golang.org/x/text v0.2.0 { "Path": "golang.org/x/text", "Version": "v0.2.0", "Info": "/tmp/tmp.3uwW75Tyk7/_gopath/pkg/mod/cache/download/golang.org/x/text/@v/v0.2.0.info", "GoMod": "/tmp/tmp.3uwW75Tyk7/_gopath/pkg/mod/cache/download/golang.org/x/text/@v/v0.2.0.mod", "Zip": "/tmp/tmp.3uwW75Tyk7/_gopath/pkg/mod/cache/download/golang.org/x/text/@v/v0.2.0.zip", "Dir": "/tmp/tmp.3uwW75Tyk7/_gopath/pkg/mod/golang.org/x/[email protected]", "Sum": "h1:WtDSLEtcB5GqbjSlyn8XcYtxjw+SgFMc2RILOvq7CuE=", "GoModSum": "h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=" } $ find $GOPATH -name '*.mod' /tmp/tmp.3uwW75Tyk7/_gopath/pkg/mod/cache/download/golang.org/x/text/@v/v0.2.0.mod /tmp/tmp.3uwW75Tyk7/_gopath/pkg/mod/cache/download/golang.org/x/text/@v/v0.3.0.mod $ find $GOPATH -name '*.zip' /tmp/tmp.3uwW75Tyk7/_gopath/pkg/mod/cache/download/golang.org/x/text/@v/v0.2.0.zip ``` </details>
NeedsFix,GoCommand,modules
low
Minor
390,102,220
node
http2: streams can emit 'close' before 'end' and/or 'finish'
Based on https://github.com/nodejs/node/commit/83ec33b9335a7140c1f8b46357303ff7a8122a0d > My understanding is that ideally, streams > should not emit `'close'` before `'end'` and/or `'finished'`, so this > might be another bug, but changing this would require modifying tests > and almost certainly be a breaking change.
http2
low
Critical
390,127,329
pytorch
torch.save does not work if nn.Module has partial JIT.
## πŸ› Bug ## To Reproduce Steps to reproduce the behavior: ```python import torch import torch.nn as nn class Sub(torch.jit.ScriptModule): def __init__(self): super(Sub, self).__init__() self.weight = nn.Parameter(torch.randn(2)) @torch.jit.script_method def forward(self, thing): return self.weight + thing class M(torch.jit.ScriptModule): __constants__ = ['mods'] def __init__(self): super(M, self).__init__() self.mods = nn.ModuleList([Sub() for i in range(10)]) @torch.jit.script_method def forward(self, v): for m in self.mods: v = m(v) return v class Wrap(torch.nn.Module): def __init__(self): super(Wrap, self).__init__() self.m = M() def forward(self, x): return self.m(x) w = Wrap() torch.save(w, './test') ``` ``` TypeError: can't pickle M objects ``` ## Expected behavior Successfully saving the model. ## Environment ``` PyTorch version: 1.0.0a0+db5d313 Is debug build: No CUDA used to build PyTorch: 9.2.148 OS: Ubuntu 18.04.1 LTS GCC version: (Ubuntu 7.3.0-27ubuntu1~18.04) 7.3.0 CMake version: version 3.12.2 Python version: 3.7 Is CUDA available: Yes CUDA runtime version: 9.2.148 GPU models and configuration: GPU 0: GeForce GTX 1080 Ti Nvidia driver version: 410.73 cuDNN version: Probably one of the following: /usr/local/cuda-9.2/lib64/libcudnn.so /usr/local/cuda-9.2/lib64/libcudnn.so.7 /usr/local/cuda-9.2/lib64/libcudnn.so.7.0.3 /usr/local/cuda-9.2/lib64/libcudnn_static.a Versions of relevant libraries: [pip] Could not collect [conda] blas 1.0 mkl [conda] magma-cuda92 2.4.0 1 pytorch [conda] mkl 2018.0.3 1 [conda] mkl-include 2019.1 144 [conda] mkl_fft 1.0.6 py37h7dd41cf_0 [conda] mkl_random 1.0.1 py37h4414c95_1 [conda] mkldnn 0.14.0 0 mingfeima [conda] torch 1.0.0a0+db5d313 <pip> ``` ## Additional context While `model.save(f)` works fine for standalone jit.ScriptModule, I cannot find a way to save the entire `torch.nn.Module` containing `torch.jit.ScriptModule`, and it seems like a bug that I cannot use `torch.save` for the case.
oncall: jit
low
Critical
390,154,905
TypeScript
Docs: Advanced Types section "Interfaces vs. Type Aliases" is contradictory
**TypeScript Version:** N/A **Search Terms:** label: Docs "Advanced Types" "Interfaces vs Type Aliases" I've been trying to understand the differences between interfaces and type aliases, and I've found contradictory information on the ["Advanced Types"](https://www.typescriptlang.org/docs/handbook/advanced-types.html) page of the docs. The "Type Aliases" section begins with "Type aliases create a new name for a type." Just a few paragraphs later though in the "Interfaces vs. Type Aliases" subsection, it says: "Type aliases **don’t** create a new name". (Emphasis is mine). See also [this blog post](https://medium.com/@martin_hotell/interface-vs-type-alias-in-typescript-2-7-2a8f1777af4c), which enumerates some other aspects of this section that would benefit from clarification/update.
Docs
low
Minor
390,175,244
You-Dont-Know-JS
ch6: Benchmarking | Use console.time()
https://github.com/getify/You-Dont-Know-JS/blob/master/async%20%26%20performance/ch6.md#benchmarking You could use the built-in `console.time()` and `console.timeEnd()` in this benchmarking example, as the `console` offers many more options than just `log()`. https://developer.mozilla.org/en-US/docs/Web/API/Console/time ``` console.time() // do some operation console.timeEnd() // default: 4886.283ms ```
for second edition
low
Major
390,231,186
pytorch
Different implementations of upsampleBilinear between pytorch and caffe2
## πŸ› Bug <!-- A clear and concise description of what the bug is. --> First, this is not actually a bug, but when one uses pytorch to train a model and then convert the model to onnx which later will be used to do inference by Caffe2, then different result just happened! ## To Reproduce Let's take a look at what bilinear upsample implementation in aten/THNN/generic/SpatialUpsamplingBilinear.cc and caffe2/operators/upsamle_op.cc. 1. In pytorch, upsampleBilinear operator is done through 2 functions, i.e. linear_upsampling_compute_scale and linear_upsampling_compute_source_index. Specifically, by invoking linear_upsampling_compute_scale, pytorch first calculates something simliar to scale, then it will call linear_upsampling_compute_source_index to calculate source_index from dst_index and scale previously computed. 2. In Caffe2, things become a little different. Specifically, in upsample_op.cc, const float rheight = (input_height > 1) ? (float)(output_height - 1) / (input_height - 1) : 0.f; this variable has the similar meaning to scale, then on line 125, by rheight * h2 , it calculates src_index.; <!-- If you have a code sample, error messages, stack traces, please provide it here as well --> A simple example that can reproduce the difference is as follows: `import torch import torch.nn as nn import torch.nn.functional as F import onnx import caffe2.python.onnx.backend as bk h = 3 w = 3 class TestModel(nn.Module): def __init__(self): super(TestModel, self).__init__() def forward(self, x): x = F.interpolate(x, (h + 1, w + 1), mode='bilinear') return x torch_model = TestModel() dummy_input = torch.randn(1, 3, h, w) print(dummy_input[0, 0]) torch_model.eval() torch.onnx.export(torch_model, dummy_input, 'tmp_model.onnx', export_params=True, verbose=True) with torch.no_grad(): torch_out = torch_model(dummy_input).numpy() print('Now the result infered by pytorch is :') print('*' * 50) print('pytorch result is ') print(torch_out[0, 0]) print('-' * 50) onnx_model = onnx.load('tmp_model.onnx') onnx.checker.check_model(onnx_model) preprared_backend = bk.prepare(onnx_model) c2_out = preprared_backend.run(dummy_input.numpy())[0] print('caffe2 result is ') print(c2_out[0, 0]) ` ## Expected behavior <!-- A clear and concise description of what you expected to happen. --> ## Environment Pytorch 1.0 Centos7 Please copy and paste the output from our [environment collection script](https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py) (or fill out the checklist below manually). You can get the script and run it with: ``` wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py # For security purposes, please check the contents of collect_env.py before running it. python collect_env.py ``` - PyTorch Version (e.g., 1.0): PyTorch1.0 - OS (e.g., Linux): Centos7 - How you installed PyTorch (`conda`, `pip`, source): source - Build command you used (if compiling from source): python setup.py install - Python version: Python3.6.4 - CUDA/cuDNN version: CUDA9.0, cuDNN 7.3.1 - GPU models and configuration: V100 - Any other relevant information: ## Additional context <!-- Add any other context about the problem here. -->
caffe2
low
Critical
390,312,140
rust
False dead_code warning on struct pattern match
The following code will emit a `dead_code` warning ([playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=5c94d6fd6a0bf1512d68495aedbd1b82)): ```rust use std::mem; fn main() { struct Value { a: i32, b: i32, }; let Value { a, b } = unsafe { mem::zeroed() }; println!("{} {}", a, b); } ``` It states that `Value` is never constructed when in fact it has been within the `let Value { a, b }` pattern match. ``` Compiling playground v0.0.1 (/playground) warning: struct is never constructed: `Value` --> src/main.rs:4:5 | 4 | struct Value { | ^^^^^^^^^^^^ | = note: #[warn(dead_code)] on by default Finished dev [unoptimized + debuginfo] target(s) in 0.90s Running `target/debug/playground` ```
A-lints,T-compiler,C-bug
low
Critical
390,340,911
TypeScript
Remote declaration file
## Search Terms - Remote Reference - Remote Declaration - Declaration URL ## Suggestion Being able to reference .d.ts files from a remote location ## Use Cases Reference .d.ts files which can match with import "http://example.com/Component1" ## Examples ```typescript /// <reference path="http://example.com/Definitions/Component1" /> import "http://example.com/Component1" ``` ## Checklist My suggestion meets these guidelines: * [ ] This wouldn't be a breaking change in existing TypeScript/JavaScript code * [x] This wouldn't change the runtime behavior of existing JavaScript code * [x] This could be implemented without emitting different JS based on the types of the expressions * [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.) * [ ] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
Suggestion,Awaiting More Feedback
medium
Critical
390,347,313
flutter
Flutter Text widgets should have multiple wrapping modes
See issue #24399 The Unicode spec (http://unicode.org/reports/tr14/#QU) describes a line wrapping method that attempts to accommodate the broadest language requirements possible; however, the wrapping at times makes no sense for English punctuation. For example, the Unicode standard specifies that a single whitespace character following a quotation mark and preceding another punctuation mark should be treated as "non-breaking." This particular method might make sense in many languages, but in English academic papers, where citations in parentheses often follow quotation marks, this line-breaking method is unacceptable. Flutter currently follows the Unicode spec as it should. However, I contend that Flutter should implement a second wrapping method in addition to the Unicode specification for the following reasons. A suggested workaround is to add an additional "zero width space" after each quotation mark that precedes another punctuation mark. This would work, but for any English app that pulls text from a web API and displays it in a text-wrapping widget, the developer cannot guarantee that such additional whitespace exists and therefore must run all text through a potentially expensive search and replace operation (depending on the length of text involved). Furthermore, text wrapping is already a fairly expensive operation, and honoring all the edge cases of the Unicode spec, though certainly desirable for a global platform such as Flutter, makes an already expensive operation even moreso. Finally, at least regarding the aforementioned wrapping behavior, the Chrome browser does not strictly follow the Unicode spec. As a result, developers who come from the web world will be unaware of this strict adherence to the Unicode standard and will be surprised by the Flutter method of line wrapping. As proof of my point, the Github textbox in which I am currently writing does not honor the Unicode spec: "This is a quote terminating in a long word followed by a parenthetical comment incididunteisumod" (Parenthetical comment is here) On my browser (Chrome) supporting UTF-8, the Unicode wrapping spec is not honored, and the line breaks after the quotation mark in both the text edit box and also on the screen. Therefore, I am not proposing that Flutter violate or ignore the Unicode standard. But, I am proposing that Flutter Text Widgets support at least two wrapping methods: a "strict" method and a "loose" method. The strict method would follow the Unicode spec exactly, while the loose method would wrap on any whitespace. Loose wrapping would work for the majority of cases, would replicate default browser wrapping, and would be faster to render while strict wrapping would exist for any developer who needs to hold to the Unicode spec.
c: new feature,engine,a: typography,P2,team-engine,triaged-engine
low
Major
390,381,814
flutter
Reset CupertinoTimerPicker duration
I would like to reset the current duration in CupertinoTimerPicker by clicking on a button, but it seems that there is no way to do that. Are you going to add this possibility, or is there any workaround?
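A possible workaround sketch (mine, not from the issue; the widget names are illustrative): give the picker a new `Key` when resetting, so it rebuilds and re-reads `initialTimerDuration`.

```dart
import 'package:flutter/cupertino.dart';

class ResettableTimerPicker extends StatefulWidget {
  const ResettableTimerPicker({super.key});

  @override
  State<ResettableTimerPicker> createState() => _ResettableTimerPickerState();
}

class _ResettableTimerPickerState extends State<ResettableTimerPicker> {
  int _generation = 0; // bumping this swaps the picker's Key, resetting it
  Duration _duration = Duration.zero;

  @override
  Widget build(BuildContext context) {
    return Column(children: [
      SizedBox(
        height: 200,
        child: CupertinoTimerPicker(
          key: ValueKey(_generation),
          initialTimerDuration: _duration,
          onTimerDurationChanged: (d) => _duration = d,
        ),
      ),
      CupertinoButton(
        child: const Text('Reset'),
        onPressed: () => setState(() {
          _duration = Duration.zero;
          _generation++;
        }),
      ),
    ]);
  }
}
```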
c: new feature,framework,f: date/time picker,f: cupertino,P2,team-design,triaged-design
low
Minor
390,435,174
flutter
[google_maps_flutter] Cannot set map bounds (cameraTargetBounds).
Setting the map by providing bounds does not work. It is always zoomed out. It should automatically adjust zoom according to the map bounds. Currently, in the code, the bounds are set to new york. ![map](https://user-images.githubusercontent.com/14096113/49902698-ada6a680-fe32-11e8-819f-420405e20818.png) ### Code ```dart @override Widget build(BuildContext context) { return GoogleMap( onMapCreated: _onMapCreated, options: GoogleMapOptions( cameraTargetBounds: new CameraTargetBounds(new LatLngBounds( northeast: LatLng(40.73215972821489, -73.980936957489), southwest: LatLng(40.7152797683329, -74.01919598687743) ) ), zoomGesturesEnabled: true, mapType: MapType.normal, trackCameraPosition: true, )); } ``` ### Flutter Doctor ``` [βœ“] Flutter (Channel beta, v1.0.0, on Linux, locale en_US.UTF-8) [βœ“] Android toolchain - develop for Android devices (Android SDK 28.0.3) [βœ“] Android Studio (version 3.2) βœ— Flutter plugin not installed; this adds Flutter specific functionality. βœ— Dart plugin not installed; this adds Dart specific functionality. [!] IntelliJ IDEA Ultimate Edition (version 2018.2) βœ— Flutter plugin not installed; this adds Flutter specific functionality. βœ— Dart plugin not installed; this adds Dart specific functionality. [βœ“] VS Code (version 1.29.1) [βœ“] Connected device (1 available) ! Doctor found issues in 1 category. ```
customer: crowd,p: maps,package,team-ecosystem,has reproducible steps,P2,found in release: 2.0,found in release: 2.2,triaged-ecosystem
low
Critical
390,437,328
go
cmd/doc: show which version of Go introduced symbols
#5778 introduced this feature in x/tools/cmd/godoc, but it is missing in cmd/doc, which replaces the godoc command line mode. /cc @dmitshur
help wanted,NeedsFix
low
Minor
390,444,532
pytorch
Update third_party/googletest - Ability to skip tests in GTEST
At the moment, all test skipping in open source in gtest is done as some variation of: ``` if (skipCondition) return; ``` This is bad, because it means we report skipped tests as "passed". This makes it harder to tell if a test ran or not. Inside fbcode, we have the convention that: ``` GTEST_FATAL_FAILURE_("Test skipped by client") ``` is not a real fatal failure, but instead means that the test is skipped. But this is just a convention; the TestPilot runner is responsible for reinterpreting this to mean skip, and simply doing this in fbcode would not work. Newer versions of gtest support skipping but we cannot easily upgrade the version of gtest used in fbcode. cc @mruberry @VitalyFedyunin @walterddr
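For reference, a sketch of what the newer googletest skip support looks like once third_party is updated (the test name and condition below are illustrative, not actual PyTorch tests):

```cpp
#include <gtest/gtest.h>

TEST(ExampleTest, NeedsGpu) {
  const bool has_gpu = false;  // illustrative skip condition
  if (!has_gpu) {
    // Reported as SKIPPED rather than PASSED by newer gtest.
    GTEST_SKIP() << "no GPU available";
  }
  EXPECT_EQ(1 + 1, 2);
}
```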
module: tests,triaged,module: third_party
low
Critical
390,466,579
pytorch
Use std::variant to represent C++ side enumerations (with binding support)
Today, we have an awkward problem in PyTorch, where we have no good way of representing enumerations. At the Python level, we prefer to represent enumeration options as strings (as they are convenient to type, and Python has no typing guarantees anyway). However, in C++, it is not that great to represent enumerations as strings, because (1) they are not well-typed (you can easily mash in a string for an invalid option in a function) and (2) they require an O(n)-in-string-size equality test, which is totally pointless and unnecessary. The compromise we have right now is that most enumerations are represented as ints in the C++ API, which is not a good user experience. We don't really want to create a separate enum type for every function in C++ either, because you end up with an enum per function. We can solve the API design problem in C++ using std::variant. std::variant is a C++17 class (with polyfills for C++11 available) that represents a type-safe union between different types. So, instead of defining an `enum class` for any given enum we need, we instead define the enum on the fly with std::variant from a vocabulary of pre-defined "keywords". So, for example, consider the following two functions which take enumerations: * MSELoss argument 'reduction' takes 'none' | 'mean' | 'sum' * KLDivLoss argument 'reduction' takes 'none' | 'batchmean' | 'sum' | 'mean' Both of these enumerations are quite similar, but because KLDivLoss also supports batchmean, an ordinary approach using 'enum class' would require two separate enums. With std::variant, we define it this way: ``` namespace enum { struct None {} struct Mean {} struct Sum {} struct BatchMean {} } enum::None kNone; enum::Mean kMean; enum::Sum kSum; enum::BatchMean kBatchMean; Tensor MSELoss(std::variant<enum::None, enum::Mean, enum::Sum> reduction, ...); Tensor KLDivLoss(std::variant<enum::None, enum::Mean, enum::Sum, enum::BatchMean> reduction, ...) // Invoke as: MSELoss(/*reduction=*/ kNone) ``` CC @tugrulates @gchanan @zou3519 @goldsborough cc @yf225
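A compilable sketch of the proposal (identifiers here are illustrative, not an actual torch API; note that `enum` itself is a keyword, so the namespace needs another name, e.g. `enumtype`):

```cpp
#include <iostream>
#include <type_traits>
#include <variant>

namespace enumtype {
struct None {};
struct Mean {};
struct Sum {};
struct BatchMean {};
}  // namespace enumtype

constexpr enumtype::None kNone{};
constexpr enumtype::Mean kMean{};
constexpr enumtype::Sum kSum{};
constexpr enumtype::BatchMean kBatchMean{};

using MSELossReduction = std::variant<enumtype::None, enumtype::Mean, enumtype::Sum>;

// Dispatch is a compile-time-checked visit instead of an O(n) string comparison.
int reduction_code(MSELossReduction reduction) {
  return std::visit(
      [](auto r) -> int {
        if constexpr (std::is_same_v<decltype(r), enumtype::None>) return 0;
        else if constexpr (std::is_same_v<decltype(r), enumtype::Mean>) return 1;
        else return 2;
      },
      reduction);
}

int main() {
  std::cout << reduction_code(kMean) << "\n";  // prints 1
  // reduction_code(kBatchMean);               // would not compile: not part of the variant
}
```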
module: cpp,triaged
low
Major
390,466,767
TypeScript
allow wildcards in getSupportedCodeFixes
I'm developing a LanguageService plugin for a linter that creates fixable Diagnostics. Unfortunately I don't know the codes, or whether they are fixable, before they actually occur. AFAICT VSCode handles the case where a Diagnostic's code is supposed to be fixable and `getCodeFixes` returns no fix. I'm proposing some kind of wildcard matching, or a way to specify that all Diagnostics with a matching `Diagnostic.source` are considered fixable.
Suggestion,In Discussion,API
low
Minor
390,466,824
TypeScript
--noImplicitAny codefixes infer anonymous object types despite appropriate interfaces in (or out of) scope
**TypeScript Version:** 3.3.0-dev.20181212 <!-- Search terms you tried before logging this (so others can find this issue more easily) --> **Search Terms:** noimplicitany infer code fix codefix quick suggested **Code** ```ts interface IBox { x: number; y: number; } const shiftBox = (box) => { box.x += 1; box.y += 1; } const box = { x: 0, y: 0, }; shiftBox(box); ``` **Expected behavior:** `--noImplicitAny` suggested fix on the `box` parameter: ```typescript const shiftBox = (box: IBox) => ``` **Actual behavior:** ```typescript const shiftBox = (box: { x: any; y: any; }) => { ``` **Related Issues:** #13243 _(parent tracking quick fixes)_ and #15114 _(discussion on inference difficulties)_ If there is an interface available that can satisfy an inferred type in a `--noImplicitAny` code fix, can we use that? Perhaps with an ordering of possibilities like: 1. Interfaces already available in the file, by how small they are 2. User-defined interfaces that could be imported, by how distant the file is 3. Module types already imported in a user file, by how distant the nearest import is ...where, if multiple interfaces could satisfy the best possibility of those three, we choose the one with the fewest fields? In code bases that don't explicitly type when unnecessary (e.g. `: IBox` for variables), I'm finding the `--noImplicitAny` fixes to be a bit useless for anything other than primitives.
Suggestion,Awaiting More Feedback
low
Minor
390,474,048
TypeScript
Warn when JSDoc type cast misses parentheses
## Search Terms jsdoc, type cast ## Suggestion JSDoc type cast currently requires parentheses for technical reasons (#18212), but it does not warn when they are missing. ## Use Cases ```js // This doesn't work, the JSDoc comment is silently ignored // A warning message will help. const img = /** @type {HTMLImageElement} */ document.getElementById('#cat') img.src = 'cat.gif' ``` ## Examples ```js const img = /** @type {HTMLImageElement} */ document.getElementById('#cat') ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^ Warning: The JSDoc type cast misses parentheses and is being ignored img.src = 'cat.gif' ``` ## Checklist My suggestion meets these guidelines: * [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code * [x] This wouldn't change the runtime behavior of existing JavaScript code * [x] This could be implemented without emitting different JS based on the types of the expressions * [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.) * [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
Suggestion,In Discussion,Domain: JSDoc
low
Critical
390,476,054
TypeScript
Low readability when type casting by JSDoc
## Search Terms jsdoc, type cast ## Suggestion Currently, type casting by JSDoc is not pretty, 1) because the comment must be placed after the variable declaration and 2) because it requires parentheses (#18212). ```js // @ts-check const img = /** @type {HTMLImageElement} */ (document.getElementById('#cat')) img.src = 'cat.gif' ``` How about having the JSDoc type declaration automatically cast the assigned value's type, but only when the variable declaration has an initializer? ## Use Cases The code will be prettier: https://github.com/w3c/respec/pull/1949#discussion_r240971984 ## Examples Current: ```js // @ts-check /** @type {HTMLImageElement} */ const img = document.getElementById('#cat') // Error: Type 'Element' is not assignable to type 'HTMLImageElement' img.src = 'cat.gif' ``` Suggestion: ```js // @ts-check /** @type {HTMLImageElement} */ const img = document.getElementById('#cat') // Element being cast as HTMLImageElement img.src = 'cat.gif' /** @type {HTMLImageElement} */ let img2; img2 = document.getElementById('#cat') // Still an error ``` ## Checklist My suggestion meets these guidelines: * [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code * [x] This wouldn't change the runtime behavior of existing JavaScript code * [x] This could be implemented without emitting different JS based on the types of the expressions * [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.) * [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
Suggestion,In Discussion,Domain: JSDoc
low
Critical
390,487,913
rust
Vec::append should swap if the lhs is empty
Here's a reasonable pattern: ``` let mut results = Vec::new(); for ... { results.append(&mut new_results); } ``` On the first append, we can just swap ourselves with the rhs, avoiding an allocation and copy. This is annoying for our users to remember to do, and is basically free to support given how much work this method otherwise does. It should perhaps only be done if lhs.capacity is 0, just because we don't want to mess up a user who has preallocated a nice big buffer. Maybe lhs.capacity < rhs.capacity?
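A rough sketch (not actual libstd code; the free function exists only to illustrate the proposed behavior) of the swap-when-empty fast path:

```rust
// Hypothetical helper showing the proposed semantics for Vec::append.
fn append_or_swap<T>(dst: &mut Vec<T>, src: &mut Vec<T>) {
    if dst.capacity() == 0 {
        // dst owns no buffer, so taking src's allocation is free and avoids
        // the copy that Vec::append would otherwise perform.
        std::mem::swap(dst, src);
    } else {
        dst.append(src);
    }
}

fn main() {
    let mut results: Vec<i32> = Vec::new();
    let mut batch = vec![1, 2, 3];
    append_or_swap(&mut results, &mut batch);
    assert_eq!(results, [1, 2, 3]);
    assert!(batch.is_empty());
}
```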
I-slow,T-libs-api
low
Minor
390,505,460
go
testing: t.FailNow() in a deferred function masks test panic
When a test panics, any deferred functions will still run, allowing the test to properly clean up. If a deferred function executes `t.FailNow()`, the test output will not include any information on the panic or the panic's goroutine trace. An example is shown here: https://play.golang.org/p/gWPDVluKaST Despite that test clearly panicking with a `t.FailNow()` in the deferred function, the test output only shows: ``` --- FAIL: TestPanic (0.00s) FAIL ``` The desired behavior would be along the lines of this example: https://play.golang.org/p/qrLtmYnkqAh. (Where the deferred function calls `runtime.Goexit()` instead) This produces the following output: ``` --- FAIL: TestPanic (0.00s) panic: Test Panic panic: test executed panic(nil) or runtime.Goexit goroutine 5 [running]: testing.tRunner.func1(0x4560b0, 0xf) /usr/local/go/src/testing/testing.go:792 +0x460 runtime.Goexit() /usr/local/go/src/runtime/panic.go:397 +0x140 main.TestPanic.func1() /tmp/sandbox284554843/main.go:18 +0x20 panic(0x121940, 0x165a58) /usr/local/go/src/runtime/panic.go:513 +0x240 main.TestPanic(0x4560b0, 0xbab699fc) /tmp/sandbox284554843/main.go:20 +0x60 testing.tRunner(0x4560b0, 0x1558f8) /usr/local/go/src/testing/testing.go:827 +0x140 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:878 +0x3c0 ``` I believe this is the result of this line: https://github.com/golang/go/blob/bbae8d55083d14c414f32af638d5a5174b8027cc/src/testing/testing.go#L813 `!t.finished` will return false because `t.FailNow()` marks the test finished, and err will be nil because the `runtime.Goexit()` in `t.FailNow()` is what is being recovered (not the original panic). This prevents the expected panic on this line: https://github.com/golang/go/blob/bbae8d55083d14c414f32af638d5a5174b8027cc/src/testing/testing.go#L827
NeedsInvestigation
low
Major
390,510,958
vscode
Eager evaluation feature like Chrome debugger
One amazing feature provided by the Chrome debugger is eager evaluation; see this for more details: https://developers.google.com/web/updates/2018/05/devtools#eagerevaluation I am not sure if it is easy to support, but it is the main feature I miss from the Chrome debugger.
feature-request,debug
low
Critical
390,535,257
go
testing: Logf not reported outside of inner benchmark run
Applying the following patch illustrates the problem: ```Diff diff --git a/src/go/format/benchmark_test.go b/src/go/format/benchmark_test.go index 7bd45c0e95..62131e0977 100644 --- a/src/go/format/benchmark_test.go +++ b/src/go/format/benchmark_test.go @@ -58,6 +58,7 @@ var tests = []struct { } func BenchmarkFormat(b *testing.B) { + b.Logf("foo") // <<< not reported var src bytes.Buffer for _, t := range tests { src.Reset() @@ -74,6 +75,7 @@ func BenchmarkFormat(b *testing.B) { } b.Run(fmt.Sprintf("%s-%d", t.name, t.n), func(b *testing.B) { + b.Logf("foo") // <<< reported b.SetBytes(int64(len(data))) b.ReportAllocs() b.ResetTimer() ``` When running this benchmark, the outer b.Logf call doesn't get reported in the output. I expected to see both.
NeedsInvestigation
low
Minor
390,559,167
pytorch
Maybe a bug when using DataParallel
## πŸ› Bug There is (maybe) a bug when using DataParallel which will lead to exception. If the sample count is not divisible by batch_size, the last batch (sample count is less than batch_size) will have some interesting behaviours. For example, If I have 4 GPUs and the batch_size is 32. When the input data in the shape of (32, x, y), after the `DataParallel.scatter`, the inputs will be splited into [(4, x, y), (4, x, y), (4, x, y), (4, x, y)]. If the dim of input data is (10, x, y), the `DataParallel.scatter` will generates [(3, x, y), (3, x, y), (3, x, y), (1, x, y)] But when I input a (9, x, y) data, the `DataParallel.scatter` will gives me [(3, x, y), (3, x, y), (3, x, y)]. Only the first 3 GPUs will have valid data as input. The last GPU will not get input, which leads to exception `TypeError: forward() missing 1 required positional argument:` However, when I input a (4, x, y) data, the `DataParallel.scatter` will split the data into [(1, x, y), (1, x, y), (1, x, y), (1, x, y)]. I think the data distribution for each GPU is `ceil(batch_size/gpu_count)` and then dispatcher data to each GPUs one by one. After data scattered to first `x` GPUs, the last `gpu_count-x` GPU will have no input data. Is this a bug or a special design? ## Expected behavior for 4 GPUs and batch_size=32 real input data | data scattered to 4 GPUs (32, x, y) ---scatter---> [(4, x, y), (4, x, y), (4, x, y), (4, x, y)] (10, x, y) ---scatter---> [(3, x, y), (3, x, y), (2, x, y), (2, x, y)] (9, x, y) ---scatter---> [(3, x, y), (2, x, y), (2, x, y), (2, x, y)] (4, x, y) ---scatter---> [(1, x, y), (1, x, y), (1, x, y), (1, x, y)] ## Environment - PyTorch Version (e.g., 1.0): 1.0 - OS (e.g., Linux): Ubunto - How you installed PyTorch (`conda`, `pip`, source): conda - Build command you used (if compiling from source): - Python version: 3.7 - CUDA/cuDNN version: 9.2 - GPU models and configuration: Tesla V100 - Any other relevant information:
triaged,module: data parallel
low
Critical
390,574,985
godot
Strange script editor "jumps" when scrolling after changing position and size of editor
**Godot version:** 3.1 c7cef29 **OS/device including version:** Windows 10 **Issue description:** When I place the window at the left side of the monitor and scroll the page, everything works smoothly, but when I change the editor position (and/or its size), then scrolling, even a very small amount, causes the script editor to jump strangely. **Steps to reproduce:** ![jfile](https://user-images.githubusercontent.com/41945903/70704463-53a12100-1cd2-11ea-93fb-1f02c144ab57.gif)
enhancement,topic:editor
low
Minor
390,623,295
vscode
Antimalware Service Executable is still spiking when some project is loaded
The problem is still alive -> #63070 When I'm opening some project under git, the Antimalware Service Executable still spikes :( http://files.rjwebdesign.cz/i2/20181213-115016.png And yes, I updated to yesterday's release (1.30) @roblourens
bug,upstream,search,confirmed
medium
Critical
390,642,346
pytorch
cudnn not found
When I build from source on tag v1.0.0 it doesn't detect cudnn. I use arch linux and in the latest release arch has moved cudnn to standard locations (/usr/lib and /usr/include). When I run the following, cudnn is not detected: ``` python setup.py install ``` But when I run it with the following cudnn is detected: ``` CUDNN_INCLUDE_DIR=/usr/include CUDNN_LIBRARY=/usr/lib python setup.py install ``` - PyTorch Version : 1.0.0 - OS (e.g., Linux): Arch linux - How you installed PyTorch : source - Python version: 3.7 - CUDA: 10 - cuDNN version: 7.4.1 - GPU models and configuration: Nvidia GTX 1050 log when cudnn not detected: ``` Building wheel torch-1.0.0a0+db5d313 running install setup.py::run() running build_deps setup.py::build_deps::run() + SYNC_COMMAND=cp ++ command -v rsync + '[' -x '' ']' + CMAKE_COMMAND=cmake ++ command -v cmake3 + [[ -x '' ]] + USE_CUDA=0 + USE_FBGEMM=0 + USE_ROCM=0 + USE_NNPACK=0 + USE_MKLDNN=0 + USE_QNNPACK=0 + USE_GLOO_IBVERBS=0 + CAFFE2_STATIC_LINK_CUDA=0 + RERUN_CMAKE=1 + [[ 5 -gt 0 ]] + case "$1" in + USE_CUDA=1 + shift + [[ 4 -gt 0 ]] + case "$1" in + USE_NNPACK=1 + shift + [[ 3 -gt 0 ]] + case "$1" in + USE_MKLDNN=1 + shift + [[ 2 -gt 0 ]] + case "$1" in + USE_QNNPACK=1 + shift + [[ 1 -gt 0 ]] + case "$1" in + break + CMAKE_INSTALL='make install' + BUILD_SHARED_LIBS=ON + USER_CFLAGS= + USER_LDFLAGS= + [[ -n '' ]] + [[ -n '' ]] + [[ -n '' ]] ++ uname + '[' Linux == Darwin ']' +++ dirname ../tools/build_pytorch_libs.sh ++ cd ../tools/.. +++ pwd ++ printf '%q\n' /home/amsha/builds/pytorch + BASE_DIR=/home/amsha/builds/pytorch + TORCH_LIB_DIR=/home/amsha/builds/pytorch/torch/lib + INSTALL_DIR=/home/amsha/builds/pytorch/torch/lib/tmp_install + THIRD_PARTY_DIR=/home/amsha/builds/pytorch/third_party + C_FLAGS= + C_FLAGS=' -DOMPI_SKIP_MPICXX=1' + LDFLAGS= + LD_POSTFIX=.so ++ uname + [[ Linux == \D\a\r\w\i\n ]] + [[ 0 -eq 1 ]] + LDFLAGS=' -Wl,-rpath,$ORIGIN' + CPP_FLAGS=' -std=c++11 ' + THD_FLAGS= + [[ 0 -eq 1 ]] + CUDA_NVCC_FLAGS=' -DOMPI_SKIP_MPICXX=1' + [[ -z '' ]] + CUDA_DEVICE_DEBUG=0 + '[' -z '' ']' ++ getconf _NPROCESSORS_ONLN + MAX_JOBS=8 + BUILD_TYPE=Release + [[ -n '' ]] + [[ -n '' ]] + echo 'Building in Release mode' Building in Release mode + mkdir -p /home/amsha/builds/pytorch/torch/lib/tmp_install + for arg in "$@" + [[ caffe2 == \c\a\f\f\e\2 ]] + build_caffe2 + [[ -z '' ]] + EXTRA_CAFFE2_CMAKE_FLAGS=() + [[ -n '' ]] + [[ -n /home/amsha/virtualenv/torch-1.0-20181213/lib/python3.7/site-packages ]] + EXTRA_CAFFE2_CMAKE_FLAGS+=("-DCMAKE_PREFIX_PATH=$CMAKE_PREFIX_PATH") + [[ 1 -eq 1 ]] + cmake /home/amsha/builds/pytorch -DPYTHON_EXECUTABLE=/home/amsha/virtualenv/torch-1.0-20181213/bin/python -DPYTHON_LIBRARY=/usr/lib/libpython3.7m.so.1.0 -DPYTHON_INCLUDE_DIR=/usr/include/python3.7m -DBUILDING_WITH_TORCH_LIBS=ON -DTORCH_BUILD_VERSION=1.0.0a0+db5d313 -DCMAKE_BUILD_TYPE=Release -DBUILD_TORCH=ON -DBUILD_PYTHON=ON -DBUILD_SHARED_LIBS=ON -DBUILD_BINARY=OFF -DBUILD_TEST=ON -DINSTALL_TEST=ON -DBUILD_CAFFE2_OPS=ON -DONNX_NAMESPACE=onnx_torch -DUSE_CUDA=1 -DUSE_DISTRIBUTED=ON -DUSE_FBGEMM=0 -DUSE_NUMPY= -DNUMPY_INCLUDE_DIR=/usr/lib/python3.7/site-packages/numpy/core/include -DUSE_SYSTEM_NCCL=OFF -DNCCL_INCLUDE_DIR= -DNCCL_ROOT_DIR= -DNCCL_SYSTEM_LIB= -DCAFFE2_STATIC_LINK_CUDA=0 -DUSE_ROCM=0 -DUSE_NNPACK=1 -DUSE_LEVELDB=OFF -DUSE_LMDB=OFF -DUSE_OPENCV=OFF -DUSE_QNNPACK=1 -DUSE_FFMPEG=OFF -DUSE_GLOG=OFF -DUSE_GFLAGS=OFF -DUSE_SYSTEM_EIGEN_INSTALL=OFF -DCUDNN_INCLUDE_DIR=/usr/include -DCUDNN_LIB_DIR=/usr -DCUDNN_LIBRARY=/usr/lib -DUSE_MKLDNN=1 -DNCCL_EXTERNAL=1 
-DCMAKE_INSTALL_PREFIX=/home/amsha/builds/pytorch/torch/lib/tmp_install -DCMAKE_C_FLAGS= -DCMAKE_CXX_FLAGS= '-DCMAKE_EXE_LINKER_FLAGS= -Wl,-rpath,$ORIGIN ' '-DCMAKE_SHARED_LINKER_FLAGS= -Wl,-rpath,$ORIGIN ' -DTHD_SO_VERSION=1 -DCMAKE_PREFIX_PATH=/home/amsha/virtualenv/torch-1.0-20181213/lib/python3.7/site-packages -- std::exception_ptr is supported. -- NUMA is available -- Turning off deprecation warning due to glog. -- Current compiler supports avx2 extension. Will build perfkernels. -- Current compiler supports avx512f extension. Will build fbgemm. -- Building using own protobuf under third_party per request. -- Use custom protobuf build. -- Caffe2 protobuf include directory: $<BUILD_INTERFACE:/home/amsha/builds/pytorch/third_party/protobuf/src>$<INSTALL_INTERFACE:include> -- The BLAS backend of choice:MKL -- Checking for [mkl_intel_lp64 - mkl_gnu_thread - mkl_core - gomp - pthread - m - dl] -- Library mkl_intel_lp64: /opt/intel/mkl/lib/intel64/libmkl_intel_lp64.so -- Library mkl_gnu_thread: /opt/intel/mkl/lib/intel64/libmkl_gnu_thread.so -- Library mkl_core: /opt/intel/mkl/lib/intel64/libmkl_core.so -- Found OpenMP_C: -fopenmp -- Found OpenMP_CXX: -fopenmp -- Found OpenMP: TRUE -- Library gomp: -fopenmp -- Library pthread: /usr/lib/libpthread.so -- Library m: /usr/lib/libm.so -- Library dl: /usr/lib/libdl.so -- Brace yourself, we are building NNPACK -- Found PythonInterp: /home/amsha/virtualenv/torch-1.0-20181213/bin/python (found version "3.7.1") -- LLVM FileCheck Found: /usr/bin/FileCheck -- git Version: v1.4.0-505be96a -- Version: 1.4.0 -- Performing Test HAVE_STD_REGEX -- success -- Performing Test HAVE_GNU_POSIX_REGEX -- failed to compile -- Performing Test HAVE_POSIX_REGEX -- success -- Performing Test HAVE_STEADY_CLOCK -- success -- Found Numa (include: /usr/include, library: /usr/lib/libnuma.so) -- Using third party subdirectory Eigen. Python 3.7.1 -- Found PythonInterp: /home/amsha/virtualenv/torch-1.0-20181213/bin/python (found suitable version "3.7.1", minimum required is "2.7") -- Could NOT find pybind11 (missing: pybind11_DIR) -- Could NOT find pybind11 (missing: pybind11_INCLUDE_DIR) -- Using third_party/pybind11. -- MPI support found -- MPI compile flags: -- MPI include path: /usr/include -- MPI LINK flags path: -Wl,-rpath -Wl,/usr/lib/openmpi -Wl,--enable-new-dtags -pthread -- MPI libraries: /usr/lib/openmpi/libmpi_cxx.so/usr/lib/openmpi/libmpi.so [amsha-arch:11971] mca_base_component_repository_open: unable to open mca_fcoll_dynamic: /usr/lib/openmpi/openmpi/mca_fcoll_dynamic.so: undefined symbol: mca_common_ompio_register_print_entry (ignored) [amsha-arch:11971] mca_base_component_repository_open: unable to open mca_fcoll_individual: /usr/lib/openmpi/openmpi/mca_fcoll_individual.so: undefined symbol: mca_common_ompio_file_write (ignored) [amsha-arch:11971] mca_base_component_repository_open: unable to open mca_fcoll_two_phase: /usr/lib/openmpi/openmpi/mca_fcoll_two_phase.so: undefined symbol: mca_common_ompio_register_print_entry (ignored) [amsha-arch:11971] mca_base_component_repository_open: unable to open mca_fcoll_dynamic_gen2: /usr/lib/openmpi/openmpi/mca_fcoll_dynamic_gen2.so: undefined symbol: mca_common_ompio_register_print_entry (ignored) [amsha-arch:11971] mca_base_component_repository_open: unable to open mca_fcoll_static: /usr/lib/openmpi/openmpi/mca_fcoll_static.so: undefined symbol: mca_common_ompio_register_print_entry (ignored) CMake Warning at cmake/Dependencies.cmake:619 (message): OpenMPI found, but it is not built with CUDA support. 
Call Stack (most recent call first): CMakeLists.txt:201 (include) -- Found CUDA: /opt/cuda (found suitable version "10.0", minimum required is "7.0") -- Caffe2: CUDA detected: 10.0 -- Caffe2: CUDA nvcc is: /opt/cuda/bin/nvcc -- Caffe2: CUDA toolkit directory: /opt/cuda -- Caffe2: Header version is: 10.0 -- Found CUDNN: /usr/include -- Found cuDNN: v7.4.1 (include: /usr/include, library: /usr/lib) -- Automatic GPU detection failed. Building for common architectures. -- Autodetected CUDA architecture(s): 3.0;3.5;5.0;5.2;6.0;6.1;7.0;7.0+PTX -- Added CUDA NVCC flags for: -gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_70,code=compute_70 -- Could NOT find CUB (missing: CUB_INCLUDE_DIR) -- MPI include path: /usr/include -- MPI libraries: /usr/lib/openmpi/libmpi_cxx.so/usr/lib/openmpi/libmpi.so -- CUDA detected: 10.0 -- -- ******** Summary ******** -- CMake version : 3.13.1 -- CMake command : /usr/bin/cmake -- System : Linux -- C++ compiler : /usr/bin/c++ -- C++ compiler version : 8.2.1 -- CXX flags : -Wno-deprecated -fvisibility-inlines-hidden -Wnon-virtual-dtor -- Build type : Release -- Compile definitions : TH_BLAS_MKL -- CMAKE_PREFIX_PATH : /home/amsha/virtualenv/torch-1.0-20181213/lib/python3.7/site-packages -- CMAKE_INSTALL_PREFIX : /home/amsha/builds/pytorch/torch/lib/tmp_install -- CMAKE_MODULE_PATH : /home/amsha/builds/pytorch/cmake/Modules;/home/amsha/builds/pytorch/cmake/public/../Modules_CUDA_fix -- -- ONNX version : 1.3.0 -- ONNX NAMESPACE : onnx_torch -- ONNX_BUILD_TESTS : OFF -- ONNX_BUILD_BENCHMARKS : OFF -- ONNX_USE_LITE_PROTO : OFF -- ONNXIFI_DUMMY_BACKEND : OFF -- -- Protobuf compiler : -- Protobuf includes : -- Protobuf libraries : -- BUILD_ONNX_PYTHON : OFF -- Found CUDA with FP16 support, compiling with torch.cuda.HalfTensor -- Removing -DNDEBUG from compile flags -- MAGMA not found. Compiling without MAGMA support -- Could not find hardware support for NEON on this machine. -- No OMAP3 processor on this machine. -- No OMAP4 processor on this machine. -- AVX compiler support found -- AVX2 compiler support found -- Atomics: using C11 intrinsics -- Found a library with BLAS API (mkl). -- Found a library with LAPACK API. (mkl) -- Found CUDA: /opt/cuda (found suitable version "10.0", minimum required is "5.5") disabling ROCM because NOT USE_ROCM is set -- MIOpen not found. Compiling without MIOpen support CMake Deprecation Warning at third_party/ideep/mkl-dnn/CMakeLists.txt:21 (cmake_policy): The OLD behavior for policy CMP0048 will be removed from a future version of CMake. The cmake-policies(7) manual explains that the OLD behaviors of all policies are deprecated and that a policy should be set to OLD only under specific short-term circumstances. Projects should be ported to the NEW behavior and not rely on setting a policy to OLD. CMake Deprecation Warning at third_party/ideep/mkl-dnn/CMakeLists.txt:22 (cmake_policy): The OLD behavior for policy CMP0054 will be removed from a future version of CMake. The cmake-policies(7) manual explains that the OLD behaviors of all policies are deprecated and that a policy should be set to OLD only under specific short-term circumstances. Projects should be ported to the NEW behavior and not rely on setting a policy to OLD. 
-- Detecting Intel(R) MKL: trying mklml_intel -- Intel(R) MKL: include /opt/intel/mkl/include -- Intel(R) MKL: lib /opt/intel/mkl/lib/intel64/libmkl_rt.so -- Intel(R) MKL: OpenMP lib -fopenmp -- Found OpenMP_C: -fopenmp -- Found OpenMP_CXX: -fopenmp -- VTune profiling environment is unset CMake Warning (dev) at cmake/public/mkldnn.cmake:1 (find_package): Policy CMP0074 is not set: find_package uses <PackageName>_ROOT variables. Run "cmake --help-policy CMP0074" for policy details. Use the cmake_policy command to set the policy and suppress this warning. CMake variable MKLDNN_ROOT is set to: /home/amsha/builds/pytorch/third_party/ideep/mkl-dnn For compatibility, CMake is ignoring the variable. Call Stack (most recent call first): cmake/Dependencies.cmake:1309 (INCLUDE) CMakeLists.txt:201 (include) This warning is for project developers. Use -Wno-dev to suppress it. -- GCC 8.2.1: Adding gcc and gcc_s libs to link line -- Using python found in /home/amsha/virtualenv/torch-1.0-20181213/bin/python -- Configuring build for SLEEF-v3.2 Target system: Linux-4.18.16-arch1-1-ARCH Target processor: x86_64 Host system: Linux-4.18.16-arch1-1-ARCH Host processor: x86_64 Detected C compiler: GNU @ /usr/bin/cc -- Using option `-Wall -Wno-unused -Wno-attributes -Wno-unused-result -Wno-psabi -ffp-contract=off -fno-math-errno -fno-trapping-math` to compile libsleef -- Building shared libs : OFF -- MPFR : /usr/lib/libmpfr.so -- MPFR header file in /usr/include -- GMP : /usr/lib/libgmp.so -- RUNNING_ON_TRAVIS : 0 -- COMPILER_SUPPORTS_OPENMP : 1 -- Using python found in /home/amsha/virtualenv/torch-1.0-20181213/bin/python -- /usr/bin/c++ /home/amsha/builds/pytorch/torch/abi-check.cpp -o /home/amsha/builds/pytorch/build/abi-check -- Determined _GLIBCXX_USE_CXX11_ABI=1 -- Found CUDA: /opt/cuda (found suitable version "10.0", minimum required is "7.5") -- MPI_LIBRARIES: /usr/lib/openmpi/libmpi_cxx.so;/usr/lib/openmpi/libmpi.so -- Building the gloo backend with TCP support only -- MPI_LINK_FLAGS: -Wl,-rpath -Wl,/usr/lib/openmpi -Wl,--enable-new-dtags -pthread -- Found CUDA: /opt/cuda (found version "10.0") -- Building C10D with CUDA support -- MPI_INCLUDE_PATH: /usr/include -- MPI_LIBRARIES: /usr/lib/openmpi/libmpi_cxx.so;/usr/lib/openmpi/libmpi.so -- MPIEXEC: /usr/bin/mpiexec -- Include NCCL operators -- Including IDEEP operators -- Excluding image processing operators due to no opencv -- Excluding video processing operators due to no opencv -- Include Observer library -- Using lib/python3.7/site-packages as python relative installation path -- Automatically generating missing __init__.py files. -- A previous caffe2 cmake run already created the __init__.py files. CMake Warning at CMakeLists.txt:389 (message): Generated cmake files are only fully tested if one builds with system glog, gflags, and protobuf. Other settings may generate files that are not well tested. 
-- -- ******** Summary ******** -- General: -- CMake version : 3.13.1 -- CMake command : /usr/bin/cmake -- System : Linux -- C++ compiler : /usr/bin/c++ -- C++ compiler version : 8.2.1 -- BLAS : MKL -- CXX flags : -Wno-deprecated -fvisibility-inlines-hidden -O2 -fPIC -Wno-narrowing -Wall -Wextra -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -Wno-stringop-overflow -- Build type : Release -- Compile definitions : TH_BLAS_MKL;ONNX_NAMESPACE=onnx_torch;USE_C11_ATOMICS=1;HAVE_MMAP=1;_FILE_OFFSET_BITS=64;HAVE_SHM_OPEN=1;HAVE_SHM_UNLINK=1;HAVE_MALLOC_USABLE_SIZE=1 -- CMAKE_PREFIX_PATH : /home/amsha/virtualenv/torch-1.0-20181213/lib/python3.7/site-packages -- CMAKE_INSTALL_PREFIX : /home/amsha/builds/pytorch/torch/lib/tmp_install -- -- TORCH_VERSION : 1.0.0 -- CAFFE2_VERSION : 1.0.0 -- BUILD_ATEN_MOBILE : OFF -- BUILD_ATEN_ONLY : OFF -- BUILD_BINARY : OFF -- BUILD_CUSTOM_PROTOBUF : ON -- Link local protobuf : ON -- BUILD_DOCS : OFF -- BUILD_PYTHON : ON -- Python version : 3.7.1 -- Python executable : /home/amsha/virtualenv/torch-1.0-20181213/bin/python -- Pythonlibs version : 3.7.1 -- Python library : /usr/lib/libpython3.7m.so.1.0 -- Python includes : /usr/include/python3.7m -- Python site-packages: lib/python3.7/site-packages -- BUILD_CAFFE2_OPS : ON -- BUILD_SHARED_LIBS : ON -- BUILD_TEST : ON -- USE_ASAN : OFF -- USE_CUDA : 1 -- CUDA static link : 0 -- USE_CUDNN : OFF -- CUDA version : 10.0 -- CUDA root directory : /opt/cuda -- CUDA library : /usr/lib/libcuda.so -- cudart library : /opt/cuda/lib64/libcudart_static.a;-pthread;dl;/usr/lib/librt.so -- cublas library : /opt/cuda/lib64/libcublas.so -- cufft library : /opt/cuda/lib64/libcufft.so -- curand library : /opt/cuda/lib64/libcurand.so -- nvrtc : /opt/cuda/lib64/libnvrtc.so -- CUDA include path : /opt/cuda/include -- NVCC executable : /opt/cuda/bin/nvcc -- CUDA host compiler : /usr/bin/cc -- USE_TENSORRT : OFF -- USE_ROCM : 0 -- USE_EIGEN_FOR_BLAS : -- USE_FBGEMM : 0 -- USE_FFMPEG : OFF -- USE_GFLAGS : OFF -- USE_GLOG : OFF -- USE_LEVELDB : OFF -- USE_LITE_PROTO : OFF -- USE_LMDB : OFF -- USE_METAL : OFF -- USE_MKL : ON -- USE_MKLDNN : ON -- USE_MOBILE_OPENGL : OFF -- USE_NCCL : ON -- USE_SYSTEM_NCCL : OFF -- USE_NNPACK : 1 -- USE_NUMPY : ON -- USE_OBSERVERS : ON -- USE_OPENCL : OFF -- USE_OPENCV : OFF -- USE_OPENMP : OFF -- USE_PROF : OFF -- USE_QNNPACK : 1 -- USE_REDIS : OFF -- USE_ROCKSDB : OFF -- USE_ZMQ : OFF -- USE_DISTRIBUTED : ON -- USE_MPI : ON -- USE_GLOO : ON -- USE_GLOO_IBVERBS : OFF -- Public Dependencies : Threads::Threads;caffe2::mkl;caffe2::mkldnn -- Private Dependencies : qnnpack;nnpack;cpuinfo;/usr/lib/libnuma.so;fp16;/usr/lib/openmpi/libmpi_cxx.so;/usr/lib/openmpi/libmpi.so;gloo;aten_op_header_gen;onnxifi_loader;rt;gcc_s;gcc;dl -- Configuring done -- Generating done ``` When run with the CUDNN paths set: ``` Building wheel torch-1.0.0a0+db5d313 running install setup.py::run() running build_deps setup.py::build_deps::run() + SYNC_COMMAND=cp ++ command -v rsync + '[' -x '' ']' + CMAKE_COMMAND=cmake ++ command -v cmake3 + [[ -x '' ]] + USE_CUDA=0 + USE_FBGEMM=0 + USE_ROCM=0 + USE_NNPACK=0 + USE_MKLDNN=0 + USE_QNNPACK=0 + USE_GLOO_IBVERBS=0 + 
CAFFE2_STATIC_LINK_CUDA=0 + RERUN_CMAKE=1 + [[ 5 -gt 0 ]] + case "$1" in + USE_CUDA=1 + shift + [[ 4 -gt 0 ]] + case "$1" in + USE_NNPACK=1 + shift + [[ 3 -gt 0 ]] + case "$1" in + USE_MKLDNN=1 + shift + [[ 2 -gt 0 ]] + case "$1" in + USE_QNNPACK=1 + shift + [[ 1 -gt 0 ]] + case "$1" in + break + CMAKE_INSTALL='make install' + BUILD_SHARED_LIBS=ON + USER_CFLAGS= + USER_LDFLAGS= + [[ -n '' ]] + [[ -n '' ]] + [[ -n '' ]] ++ uname + '[' Linux == Darwin ']' +++ dirname ../tools/build_pytorch_libs.sh ++ cd ../tools/.. +++ pwd ++ printf '%q\n' /home/amsha/builds/pytorch + BASE_DIR=/home/amsha/builds/pytorch + TORCH_LIB_DIR=/home/amsha/builds/pytorch/torch/lib + INSTALL_DIR=/home/amsha/builds/pytorch/torch/lib/tmp_install + THIRD_PARTY_DIR=/home/amsha/builds/pytorch/third_party + C_FLAGS= + C_FLAGS=' -DOMPI_SKIP_MPICXX=1' + LDFLAGS= + LD_POSTFIX=.so ++ uname + [[ Linux == \D\a\r\w\i\n ]] + [[ 0 -eq 1 ]] + LDFLAGS=' -Wl,-rpath,$ORIGIN' + CPP_FLAGS=' -std=c++11 ' + THD_FLAGS= + [[ 0 -eq 1 ]] + CUDA_NVCC_FLAGS=' -DOMPI_SKIP_MPICXX=1' + [[ -z '' ]] + CUDA_DEVICE_DEBUG=0 + '[' -z '' ']' ++ getconf _NPROCESSORS_ONLN + MAX_JOBS=8 + BUILD_TYPE=Release + [[ -n '' ]] + [[ -n '' ]] + echo 'Building in Release mode' Building in Release mode + mkdir -p /home/amsha/builds/pytorch/torch/lib/tmp_install + for arg in "$@" + [[ caffe2 == \c\a\f\f\e\2 ]] + build_caffe2 + [[ -z '' ]] + EXTRA_CAFFE2_CMAKE_FLAGS=() + [[ -n '' ]] + [[ -n /home/amsha/virtualenv/torch-1.0-20181213/lib/python3.7/site-packages ]] + EXTRA_CAFFE2_CMAKE_FLAGS+=("-DCMAKE_PREFIX_PATH=$CMAKE_PREFIX_PATH") + [[ 1 -eq 1 ]] + cmake /home/amsha/builds/pytorch -DPYTHON_EXECUTABLE=/home/amsha/virtualenv/torch-1.0-20181213/bin/python -DPYTHON_LIBRARY=/usr/lib/libpython3.7m.so.1.0 -DPYTHON_INCLUDE_DIR=/usr/include/python3.7m -DBUILDING_WITH_TORCH_LIBS=ON -DTORCH_BUILD_VERSION=1.0.0a0+db5d313 -DCMAKE_BUILD_TYPE=Release -DBUILD_TORCH=ON -DBUILD_PYTHON=ON -DBUILD_SHARED_LIBS=ON -DBUILD_BINARY=OFF -DBUILD_TEST=ON -DINSTALL_TEST=ON -DBUILD_CAFFE2_OPS=ON -DONNX_NAMESPACE=onnx_torch -DUSE_CUDA=1 -DUSE_DISTRIBUTED=ON -DUSE_FBGEMM=0 -DUSE_NUMPY= -DNUMPY_INCLUDE_DIR=/usr/lib/python3.7/site-packages/numpy/core/include -DUSE_SYSTEM_NCCL=OFF -DNCCL_INCLUDE_DIR= -DNCCL_ROOT_DIR= -DNCCL_SYSTEM_LIB= -DCAFFE2_STATIC_LINK_CUDA=0 -DUSE_ROCM=0 -DUSE_NNPACK=1 -DUSE_LEVELDB=OFF -DUSE_LMDB=OFF -DUSE_OPENCV=OFF -DUSE_QNNPACK=1 -DUSE_FFMPEG=OFF -DUSE_GLOG=OFF -DUSE_GFLAGS=OFF -DUSE_SYSTEM_EIGEN_INSTALL=OFF -DCUDNN_INCLUDE_DIR=/usr/include -DCUDNN_LIB_DIR=/usr -DCUDNN_LIBRARY=/usr/lib -DUSE_MKLDNN=1 -DNCCL_EXTERNAL=1 -DCMAKE_INSTALL_PREFIX=/home/amsha/builds/pytorch/torch/lib/tmp_install -DCMAKE_C_FLAGS= -DCMAKE_CXX_FLAGS= '-DCMAKE_EXE_LINKER_FLAGS= -Wl,-rpath,$ORIGIN ' '-DCMAKE_SHARED_LINKER_FLAGS= -Wl,-rpath,$ORIGIN ' -DTHD_SO_VERSION=1 -DCMAKE_PREFIX_PATH=/home/amsha/virtualenv/torch-1.0-20181213/lib/python3.7/site-packages -- The CXX compiler identification is GNU 8.2.1 -- The C compiler identification is GNU 8.2.1 -- Check for working CXX compiler: /usr/bin/c++ -- Check for working CXX compiler: /usr/bin/c++ -- works -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Detecting CXX compile features -- Detecting CXX compile features - done -- Check for working C compiler: /usr/bin/cc -- Check for working C compiler: /usr/bin/cc -- works -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Detecting C compile features -- Detecting C compile features - done -- Not forcing any particular BLAS to be found -- Performing Test 
COMPILER_WORKS -- Performing Test COMPILER_WORKS - Success -- Performing Test SUPPORT_GLIBCXX_USE_C99 -- Performing Test SUPPORT_GLIBCXX_USE_C99 - Success -- Performing Test CAFFE2_EXCEPTION_PTR_SUPPORTED -- Performing Test CAFFE2_EXCEPTION_PTR_SUPPORTED - Success -- std::exception_ptr is supported. -- Performing Test CAFFE2_IS_NUMA_AVAILABLE -- Performing Test CAFFE2_IS_NUMA_AVAILABLE - Success -- NUMA is available -- Performing Test CAFFE2_NEED_TO_TURN_OFF_DEPRECATION_WARNING -- Performing Test CAFFE2_NEED_TO_TURN_OFF_DEPRECATION_WARNING - Failed -- Turning off deprecation warning due to glog. -- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX2_EXTENSIONS -- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX2_EXTENSIONS - Success -- Current compiler supports avx2 extension. Will build perfkernels. -- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX512F_EXTENSIONS -- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX512F_EXTENSIONS - Success -- Current compiler supports avx512f extension. Will build fbgemm. -- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY -- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY - Success -- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY -- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY - Success -- Performing Test COMPILER_SUPPORTS_RDYNAMIC -- Performing Test COMPILER_SUPPORTS_RDYNAMIC - Success -- Building using own protobuf under third_party per request. -- Use custom protobuf build. -- Looking for pthread.h -- Looking for pthread.h - found -- Looking for pthread_create -- Looking for pthread_create - not found -- Looking for pthread_create in pthreads -- Looking for pthread_create in pthreads - not found -- Looking for pthread_create in pthread -- Looking for pthread_create in pthread - found -- Found Threads: TRUE -- Caffe2 protobuf include directory: $<BUILD_INTERFACE:/home/amsha/builds/pytorch/third_party/protobuf/src>$<INSTALL_INTERFACE:include> -- The BLAS backend of choice:MKL -- Looking for sys/types.h -- Looking for sys/types.h - found -- Looking for stdint.h -- Looking for stdint.h - found -- Looking for stddef.h -- Looking for stddef.h - found -- Check size of void* -- Check size of void* - done -- Checking for [mkl_intel_lp64 - mkl_gnu_thread - mkl_core - gomp - pthread - m - dl] -- Library mkl_intel_lp64: /opt/intel/mkl/lib/intel64/libmkl_intel_lp64.so -- Library mkl_gnu_thread: /opt/intel/mkl/lib/intel64/libmkl_gnu_thread.so -- Library mkl_core: /opt/intel/mkl/lib/intel64/libmkl_core.so -- Found OpenMP_C: -fopenmp (found version "4.5") -- Found OpenMP_CXX: -fopenmp (found version "4.5") -- Found OpenMP: TRUE (found version "4.5") -- Library gomp: -fopenmp -- Library pthread: /usr/lib/libpthread.so -- Library m: /usr/lib/libm.so -- Library dl: /usr/lib/libdl.so -- Looking for cblas_sgemm -- Looking for cblas_sgemm - found -- The ASM compiler identification is GNU -- Found assembler: /usr/bin/cc -- Check if compiler accepts -pthread -- Check if compiler accepts -pthread - yes -- Brace yourself, we are building NNPACK -- Found PythonInterp: /home/amsha/virtualenv/torch-1.0-20181213/bin/python (found version "3.7.1") -- LLVM FileCheck Found: /usr/bin/FileCheck -- Found Git: /usr/bin/git (found version "2.20.0") -- git Version: v1.4.0-505be96a -- Version: 1.4.0 -- Performing Test HAVE_CXX_FLAG_STD_CXX11 -- Performing Test HAVE_CXX_FLAG_STD_CXX11 - Success -- Performing Test HAVE_CXX_FLAG_WALL -- Performing Test HAVE_CXX_FLAG_WALL - Success -- Performing Test HAVE_CXX_FLAG_WEXTRA -- Performing Test HAVE_CXX_FLAG_WEXTRA - Success -- 
Performing Test HAVE_CXX_FLAG_WSHADOW -- Performing Test HAVE_CXX_FLAG_WSHADOW - Success -- Performing Test HAVE_CXX_FLAG_WERROR -- Performing Test HAVE_CXX_FLAG_WERROR - Success -- Performing Test HAVE_CXX_FLAG_PEDANTIC -- Performing Test HAVE_CXX_FLAG_PEDANTIC - Success -- Performing Test HAVE_CXX_FLAG_PEDANTIC_ERRORS -- Performing Test HAVE_CXX_FLAG_PEDANTIC_ERRORS - Success -- Performing Test HAVE_CXX_FLAG_WSHORTEN_64_TO_32 -- Performing Test HAVE_CXX_FLAG_WSHORTEN_64_TO_32 - Failed -- Performing Test HAVE_CXX_FLAG_WFLOAT_EQUAL -- Performing Test HAVE_CXX_FLAG_WFLOAT_EQUAL - Success -- Performing Test HAVE_CXX_FLAG_FSTRICT_ALIASING -- Performing Test HAVE_CXX_FLAG_FSTRICT_ALIASING - Success -- Performing Test HAVE_CXX_FLAG_WNO_DEPRECATED_DECLARATIONS -- Performing Test HAVE_CXX_FLAG_WNO_DEPRECATED_DECLARATIONS - Success -- Performing Test HAVE_CXX_FLAG_WSTRICT_ALIASING -- Performing Test HAVE_CXX_FLAG_WSTRICT_ALIASING - Success -- Performing Test HAVE_CXX_FLAG_WD654 -- Performing Test HAVE_CXX_FLAG_WD654 - Failed -- Performing Test HAVE_CXX_FLAG_WTHREAD_SAFETY -- Performing Test HAVE_CXX_FLAG_WTHREAD_SAFETY - Failed -- Performing Test HAVE_CXX_FLAG_COVERAGE -- Performing Test HAVE_CXX_FLAG_COVERAGE - Success -- Performing Test HAVE_STD_REGEX -- Performing Test HAVE_STD_REGEX -- Performing Test HAVE_STD_REGEX -- success -- Performing Test HAVE_GNU_POSIX_REGEX -- Performing Test HAVE_GNU_POSIX_REGEX -- Performing Test HAVE_GNU_POSIX_REGEX -- failed to compile -- Performing Test HAVE_POSIX_REGEX -- Performing Test HAVE_POSIX_REGEX -- Performing Test HAVE_POSIX_REGEX -- success -- Performing Test HAVE_STEADY_CLOCK -- Performing Test HAVE_STEADY_CLOCK -- Performing Test HAVE_STEADY_CLOCK -- success -- Found Numa: /usr/include -- Found Numa (include: /usr/include, library: /usr/lib/libnuma.so) -- Using third party subdirectory Eigen. Python 3.7.1 -- Found PythonInterp: /home/amsha/virtualenv/torch-1.0-20181213/bin/python (found suitable version "3.7.1", minimum required is "2.7") -- Found PythonLibs: /usr/lib/libpython3.7m.so.1.0 (found suitable version "3.7.1", minimum required is "2.7") -- Could NOT find pybind11 (missing: pybind11_DIR) -- Could NOT find pybind11 (missing: pybind11_INCLUDE_DIR) -- Using third_party/pybind11. 
-- Found MPI_C: /usr/lib/openmpi/libmpi.so (found version "3.1") -- Found MPI_CXX: /usr/lib/openmpi/libmpi_cxx.so (found version "3.1") -- Found MPI: TRUE (found version "3.1") -- MPI support found -- MPI compile flags: -- MPI include path: /usr/include -- MPI LINK flags path: -Wl,-rpath -Wl,/usr/lib/openmpi -Wl,--enable-new-dtags -pthread -- MPI libraries: /usr/lib/openmpi/libmpi_cxx.so/usr/lib/openmpi/libmpi.so [amsha-arch:13270] mca_base_component_repository_open: unable to open mca_fcoll_dynamic: /usr/lib/openmpi/openmpi/mca_fcoll_dynamic.so: undefined symbol: mca_common_ompio_register_print_entry (ignored) [amsha-arch:13270] mca_base_component_repository_open: unable to open mca_fcoll_individual: /usr/lib/openmpi/openmpi/mca_fcoll_individual.so: undefined symbol: mca_common_ompio_file_write (ignored) [amsha-arch:13270] mca_base_component_repository_open: unable to open mca_fcoll_two_phase: /usr/lib/openmpi/openmpi/mca_fcoll_two_phase.so: undefined symbol: mca_common_ompio_register_print_entry (ignored) [amsha-arch:13270] mca_base_component_repository_open: unable to open mca_fcoll_dynamic_gen2: /usr/lib/openmpi/openmpi/mca_fcoll_dynamic_gen2.so: undefined symbol: mca_common_ompio_register_print_entry (ignored) [amsha-arch:13270] mca_base_component_repository_open: unable to open mca_fcoll_static: /usr/lib/openmpi/openmpi/mca_fcoll_static.so: undefined symbol: mca_common_ompio_register_print_entry (ignored) CMake Warning at cmake/Dependencies.cmake:619 (message): OpenMPI found, but it is not built with CUDA support. Call Stack (most recent call first): CMakeLists.txt:201 (include) -- Found CUDA: /opt/cuda (found suitable version "10.0", minimum required is "7.0") -- Caffe2: CUDA detected: 10.0 -- Caffe2: CUDA nvcc is: /opt/cuda/bin/nvcc -- Caffe2: CUDA toolkit directory: /opt/cuda -- Caffe2: Header version is: 10.0 -- Found CUDNN: /usr/include -- Found cuDNN: v7.4.1 (include: /usr/include, library: /usr/lib) -- Automatic GPU detection failed. Building for common architectures. -- Autodetected CUDA architecture(s): 3.0;3.5;5.0;5.2;6.0;6.1;7.0;7.0+PTX -- Added CUDA NVCC flags for: -gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_70,code=compute_70 -- Could NOT find CUB (missing: CUB_INCLUDE_DIR) CMake Warning (dev) at third_party/gloo/CMakeLists.txt:21 (option): Policy CMP0077 is not set: option() honors normal variables. Run "cmake --help-policy CMP0077" for policy details. Use the cmake_policy command to set the policy and suppress this warning. For compatibility with older versions of CMake, option is clearing the normal variable 'BUILD_BENCHMARK'. This warning is for project developers. Use -Wno-dev to suppress it. -- MPI include path: /usr/include -- MPI libraries: /usr/lib/openmpi/libmpi_cxx.so/usr/lib/openmpi/libmpi.so -- CUDA detected: 10.0 CMake Warning at cmake/Dependencies.cmake:873 (message): mobile opengl is only used in android or ios builds. Call Stack (most recent call first): CMakeLists.txt:201 (include) CMake Warning at cmake/Dependencies.cmake:949 (message): Metal is only used in ios builds. 
Call Stack (most recent call first): CMakeLists.txt:201 (include) -- -- ******** Summary ******** -- CMake version : 3.13.1 -- CMake command : /usr/bin/cmake -- System : Linux -- C++ compiler : /usr/bin/c++ -- C++ compiler version : 8.2.1 -- CXX flags : -Wno-deprecated -fvisibility-inlines-hidden -Wnon-virtual-dtor -- Build type : Release -- Compile definitions : TH_BLAS_MKL -- CMAKE_PREFIX_PATH : /home/amsha/virtualenv/torch-1.0-20181213/lib/python3.7/site-packages -- CMAKE_INSTALL_PREFIX : /home/amsha/builds/pytorch/torch/lib/tmp_install -- CMAKE_MODULE_PATH : /home/amsha/builds/pytorch/cmake/Modules;/home/amsha/builds/pytorch/cmake/public/../Modules_CUDA_fix -- -- ONNX version : 1.3.0 -- ONNX NAMESPACE : onnx_torch -- ONNX_BUILD_TESTS : OFF -- ONNX_BUILD_BENCHMARKS : OFF -- ONNX_USE_LITE_PROTO : OFF -- ONNXIFI_DUMMY_BACKEND : OFF -- -- Protobuf compiler : -- Protobuf includes : -- Protobuf libraries : -- BUILD_ONNX_PYTHON : OFF -- Found CUDA with FP16 support, compiling with torch.cuda.HalfTensor -- Removing -DNDEBUG from compile flags -- MAGMA not found. Compiling without MAGMA support -- Could not find hardware support for NEON on this machine. -- No OMAP3 processor on this machine. -- No OMAP4 processor on this machine. -- Looking for cpuid.h -- Looking for cpuid.h - found -- Performing Test HAVE_GCC_GET_CPUID -- Performing Test HAVE_GCC_GET_CPUID - Success -- Performing Test NO_GCC_EBX_FPIC_BUG -- Performing Test NO_GCC_EBX_FPIC_BUG - Success -- Performing Test C_HAS_AVX_1 -- Performing Test C_HAS_AVX_1 - Failed -- Performing Test C_HAS_AVX_2 -- Performing Test C_HAS_AVX_2 - Success -- Performing Test C_HAS_AVX2_1 -- Performing Test C_HAS_AVX2_1 - Failed -- Performing Test C_HAS_AVX2_2 -- Performing Test C_HAS_AVX2_2 - Success -- Performing Test CXX_HAS_AVX_1 -- Performing Test CXX_HAS_AVX_1 - Failed -- Performing Test CXX_HAS_AVX_2 -- Performing Test CXX_HAS_AVX_2 - Success -- Performing Test CXX_HAS_AVX2_1 -- Performing Test CXX_HAS_AVX2_1 - Failed -- Performing Test CXX_HAS_AVX2_2 -- Performing Test CXX_HAS_AVX2_2 - Success -- AVX compiler support found -- AVX2 compiler support found -- Performing Test HAS_C11_ATOMICS -- Performing Test HAS_C11_ATOMICS - Success -- Atomics: using C11 intrinsics -- Performing Test BLAS_F2C_DOUBLE_WORKS -- Performing Test BLAS_F2C_DOUBLE_WORKS - Failed -- Performing Test BLAS_F2C_FLOAT_WORKS -- Performing Test BLAS_F2C_FLOAT_WORKS - Success -- Performing Test BLAS_USE_CBLAS_DOT -- Performing Test BLAS_USE_CBLAS_DOT - Success -- Found a library with BLAS API (mkl). -- Found a library with LAPACK API. (mkl) -- Found CUDA: /opt/cuda (found suitable version "10.0", minimum required is "5.5") disabling ROCM because NOT USE_ROCM is set -- MIOpen not found. Compiling without MIOpen support -- Found MKLDNN: /opt/intel/mkl/lib/intel64/libmkl_intel_lp64.so;/opt/intel/mkl/lib/intel64/libmkl_gnu_thread.so;/opt/intel/mkl/lib/intel64/libmkl_core.so;-fopenmp;/usr/lib/libpthread.so;/usr/lib/libm.so;/usr/lib/libdl.so CMake Deprecation Warning at third_party/ideep/mkl-dnn/CMakeLists.txt:21 (cmake_policy): The OLD behavior for policy CMP0048 will be removed from a future version of CMake. The cmake-policies(7) manual explains that the OLD behaviors of all policies are deprecated and that a policy should be set to OLD only under specific short-term circumstances. Projects should be ported to the NEW behavior and not rely on setting a policy to OLD. 
CMake Deprecation Warning at third_party/ideep/mkl-dnn/CMakeLists.txt:22 (cmake_policy): The OLD behavior for policy CMP0054 will be removed from a future version of CMake. The cmake-policies(7) manual explains that the OLD behaviors of all policies are deprecated and that a policy should be set to OLD only under specific short-term circumstances. Projects should be ported to the NEW behavior and not rely on setting a policy to OLD. -- Detecting Intel(R) MKL: trying mklml_intel -- Detecting Intel(R) MKL: trying mklml -- Detecting Intel(R) MKL: trying mkl_rt -- Intel(R) MKL: include /opt/intel/mkl/include -- Intel(R) MKL: lib /opt/intel/mkl/lib/intel64/libmkl_rt.so -- Intel(R) MKL: OpenMP lib -fopenmp -- Found OpenMP_C: -fopenmp (found version "4.5") -- Found OpenMP_CXX: -fopenmp (found version "4.5") -- Found Doxygen: /usr/bin/doxygen (found version "1.8.14") found components: doxygen missing components: dot -- VTune profiling environment is unset CMake Warning (dev) at cmake/public/mkldnn.cmake:1 (find_package): Policy CMP0074 is not set: find_package uses <PackageName>_ROOT variables. Run "cmake --help-policy CMP0074" for policy details. Use the cmake_policy command to set the policy and suppress this warning. CMake variable MKLDNN_ROOT is set to: /home/amsha/builds/pytorch/third_party/ideep/mkl-dnn For compatibility, CMake is ignoring the variable. Call Stack (most recent call first): cmake/Dependencies.cmake:1309 (INCLUDE) CMakeLists.txt:201 (include) This warning is for project developers. Use -Wno-dev to suppress it. -- Looking for clock_gettime in rt -- Looking for clock_gettime in rt - found -- Looking for mmap -- Looking for mmap - found -- Looking for shm_open -- Looking for shm_open - found -- Looking for shm_unlink -- Looking for shm_unlink - found -- Looking for malloc_usable_size -- Looking for malloc_usable_size - found -- Performing Test C_HAS_THREAD -- Performing Test C_HAS_THREAD - Success -- GCC 8.2.1: Adding gcc and gcc_s libs to link line -- Using python found in /home/amsha/virtualenv/torch-1.0-20181213/bin/python -- Check size of long double -- Check size of long double - done -- Performing Test COMPILER_SUPPORTS_LONG_DOUBLE -- Performing Test COMPILER_SUPPORTS_LONG_DOUBLE - Success -- Performing Test COMPILER_SUPPORTS_FLOAT128 -- Performing Test COMPILER_SUPPORTS_FLOAT128 - Success -- Performing Test COMPILER_SUPPORTS_SSE2 -- Performing Test COMPILER_SUPPORTS_SSE2 - Success -- Performing Test COMPILER_SUPPORTS_SSE4 -- Performing Test COMPILER_SUPPORTS_SSE4 - Success -- Performing Test COMPILER_SUPPORTS_AVX -- Performing Test COMPILER_SUPPORTS_AVX - Success -- Performing Test COMPILER_SUPPORTS_FMA4 -- Performing Test COMPILER_SUPPORTS_FMA4 - Success -- Performing Test COMPILER_SUPPORTS_AVX2 -- Performing Test COMPILER_SUPPORTS_AVX2 - Success -- Performing Test COMPILER_SUPPORTS_SVE -- Performing Test COMPILER_SUPPORTS_SVE - Failed -- Performing Test COMPILER_SUPPORTS_AVX512F -- Performing Test COMPILER_SUPPORTS_AVX512F - Success -- Performing Test COMPILER_SUPPORTS_OPENMP -- Performing Test COMPILER_SUPPORTS_OPENMP - Success -- Performing Test COMPILER_SUPPORTS_WEAK_ALIASES -- Performing Test COMPILER_SUPPORTS_WEAK_ALIASES - Success -- Performing Test COMPILER_SUPPORTS_BUILTIN_MATH -- Performing Test COMPILER_SUPPORTS_BUILTIN_MATH - Success -- Configuring build for SLEEF-v3.2 Target system: Linux-4.18.16-arch1-1-ARCH Target processor: x86_64 Host system: Linux-4.18.16-arch1-1-ARCH Host processor: x86_64 Detected C compiler: GNU @ /usr/bin/cc -- Using option `-Wall 
-Wno-unused -Wno-attributes -Wno-unused-result -Wno-psabi -ffp-contract=off -fno-math-errno -fno-trapping-math` to compile libsleef -- Building shared libs : OFF -- MPFR : /usr/lib/libmpfr.so -- MPFR header file in /usr/include -- GMP : /usr/lib/libgmp.so -- RUNNING_ON_TRAVIS : 0 -- COMPILER_SUPPORTS_OPENMP : 1 -- Using python found in /home/amsha/virtualenv/torch-1.0-20181213/bin/python -- /usr/bin/c++ /home/amsha/builds/pytorch/torch/abi-check.cpp -o /home/amsha/builds/pytorch/build/abi-check -- Determined _GLIBCXX_USE_CXX11_ABI=1 -- Performing Test HAS_THREAD_LOCAL -- Performing Test HAS_THREAD_LOCAL - Success -- Found CUDA: /opt/cuda (found suitable version "10.0", minimum required is "7.5") -- MPI_LIBRARIES: /usr/lib/openmpi/libmpi_cxx.so;/usr/lib/openmpi/libmpi.so -- Building the gloo backend with TCP support only -- MPI_LINK_FLAGS: -Wl,-rpath -Wl,/usr/lib/openmpi -Wl,--enable-new-dtags -pthread -- Found CUDA: /opt/cuda (found version "10.0") -- Building C10D with CUDA support -- MPI_INCLUDE_PATH: /usr/include -- MPI_LIBRARIES: /usr/lib/openmpi/libmpi_cxx.so;/usr/lib/openmpi/libmpi.so -- MPIEXEC: /usr/bin/mpiexec -- Include NCCL operators -- Including IDEEP operators -- Excluding image processing operators due to no opencv -- Excluding video processing operators due to no opencv -- Include Observer library -- Using lib/python3.7/site-packages as python relative installation path -- Automatically generating missing __init__.py files. CMake Warning at CMakeLists.txt:389 (message): Generated cmake files are only fully tested if one builds with system glog, gflags, and protobuf. Other settings may generate files that are not well tested. -- -- ******** Summary ******** -- General: -- CMake version : 3.13.1 -- CMake command : /usr/bin/cmake -- System : Linux -- C++ compiler : /usr/bin/c++ -- C++ compiler version : 8.2.1 -- BLAS : MKL -- CXX flags : -Wno-deprecated -fvisibility-inlines-hidden -O2 -fPIC -Wno-narrowing -Wall -Wextra -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -Wno-stringop-overflow -- Build type : Release -- Compile definitions : TH_BLAS_MKL;ONNX_NAMESPACE=onnx_torch;USE_C11_ATOMICS=1;HAVE_MMAP=1;_FILE_OFFSET_BITS=64;HAVE_SHM_OPEN=1;HAVE_SHM_UNLINK=1;HAVE_MALLOC_USABLE_SIZE=1 -- CMAKE_PREFIX_PATH : /home/amsha/virtualenv/torch-1.0-20181213/lib/python3.7/site-packages -- CMAKE_INSTALL_PREFIX : /home/amsha/builds/pytorch/torch/lib/tmp_install -- -- TORCH_VERSION : 1.0.0 -- CAFFE2_VERSION : 1.0.0 -- BUILD_ATEN_MOBILE : OFF -- BUILD_ATEN_ONLY : OFF -- BUILD_BINARY : OFF -- BUILD_CUSTOM_PROTOBUF : ON -- Link local protobuf : ON -- BUILD_DOCS : OFF -- BUILD_PYTHON : ON -- Python version : 3.7.1 -- Python executable : /home/amsha/virtualenv/torch-1.0-20181213/bin/python -- Pythonlibs version : 3.7.1 -- Python library : /usr/lib/libpython3.7m.so.1.0 -- Python includes : /usr/include/python3.7m -- Python site-packages: lib/python3.7/site-packages -- BUILD_CAFFE2_OPS : ON -- BUILD_SHARED_LIBS : ON -- BUILD_TEST : ON -- USE_ASAN : OFF -- USE_CUDA : 1 -- CUDA static link : 0 -- USE_CUDNN : ON -- CUDA version : 10.0 -- cuDNN version : 7.4.1 -- CUDA root directory : /opt/cuda -- CUDA library : 
/usr/lib/libcuda.so -- cudart library : /opt/cuda/lib64/libcudart_static.a;-pthread;dl;/usr/lib/librt.so -- cublas library : /opt/cuda/lib64/libcublas.so -- cufft library : /opt/cuda/lib64/libcufft.so -- curand library : /opt/cuda/lib64/libcurand.so -- cuDNN library : /usr/lib -- nvrtc : /opt/cuda/lib64/libnvrtc.so -- CUDA include path : /opt/cuda/include -- NVCC executable : /opt/cuda/bin/nvcc -- CUDA host compiler : /usr/bin/cc -- USE_TENSORRT : OFF -- USE_ROCM : 0 -- USE_EIGEN_FOR_BLAS : -- USE_FBGEMM : 0 -- USE_FFMPEG : OFF -- USE_GFLAGS : OFF -- USE_GLOG : OFF -- USE_LEVELDB : OFF -- USE_LITE_PROTO : OFF -- USE_LMDB : OFF -- USE_METAL : OFF -- USE_MKL : ON -- USE_MKLDNN : ON -- USE_MOBILE_OPENGL : OFF -- USE_NCCL : ON -- USE_SYSTEM_NCCL : OFF -- USE_NNPACK : 1 -- USE_NUMPY : ON -- USE_OBSERVERS : ON -- USE_OPENCL : OFF -- USE_OPENCV : OFF -- USE_OPENMP : OFF -- USE_PROF : OFF -- USE_QNNPACK : 1 -- USE_REDIS : OFF -- USE_ROCKSDB : OFF -- USE_ZMQ : OFF -- USE_DISTRIBUTED : ON -- USE_MPI : ON -- USE_GLOO : ON -- USE_GLOO_IBVERBS : OFF -- Public Dependencies : Threads::Threads;caffe2::mkl;caffe2::mkldnn -- Private Dependencies : qnnpack;nnpack;cpuinfo;/usr/lib/libnuma.so;fp16;/usr/lib/openmpi/libmpi_cxx.so;/usr/lib/openmpi/libmpi.so;gloo;aten_op_header_gen;onnxifi_loader;rt;gcc_s;gcc;dl -- Configuring done -- Generating done ``` cc @malfet @seemethere @walterddr @csarofeen @ptrblck @xwang233
module: build,module: cudnn,triaged,actionable
low
Critical
390,686,994
rust
Implement AsRawFd and FromRawFd for ReadDir
Most unixes provide `dirfd()`, which can be used to get a file descriptor from `struct DIR *`. The two errors are `EINVAL` (`dirfd` called on invalid pointer), which can't occur in correct code, and `ENOTSUP` (not supported by implementation), which doesn't occur on most implementations. Most unixes also provide `fdopendir()`, which opens a file descriptor as `struct DIR *`. It fails if the file descriptor is invalid or if it fails to allocate memory.
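For illustration, here is a rough sketch of the libc-level calls such impls would presumably wrap (this assumes the `libc` crate and is not a proposed std implementation; error handling is reduced to asserts, and the fd/stream ownership question is only noted, not solved):

```rust
use std::ffi::CString;
use std::os::unix::io::RawFd;

// Roughly what `AsRawFd for ReadDir` would need: a directory stream's fd via dirfd().
fn dir_fd_of(path: &str) -> RawFd {
    let c_path = CString::new(path).expect("path contains NUL");
    unsafe {
        let dir = libc::opendir(c_path.as_ptr());
        assert!(!dir.is_null(), "opendir failed");
        let fd = libc::dirfd(dir); // EINVAL/ENOTSUP are the only documented failures
        assert!(fd >= 0, "dirfd failed");
        // NOTE: closedir() would also close this fd; a real impl must manage ownership.
        fd
    }
}

// Roughly what `FromRawFd for ReadDir` would build on: fdopendir().
unsafe fn dir_from_fd(fd: RawFd) -> *mut libc::DIR {
    let dir = libc::fdopendir(fd);
    assert!(!dir.is_null(), "fdopendir failed");
    dir
}
```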
T-libs-api,C-feature-request
low
Critical
390,742,402
go
x/text/cmd/gotext: gotext extract fails on type incompatibility issues
<!-- Please answer these questions before submitting your issue. Thanks! --> ### What version of Go are you using (`go version`)? <pre> $ go version go version go1.11.2 darwin/amd64 </pre> ### Does this issue reproduce with the latest release? This is the latest release today. ### What operating system and processor architecture are you using (`go env`)? <details><summary><code>go env</code> Output</summary><br><pre> $ go env GOARCH="amd64" GOBIN="" GOCACHE="/Users/fgm/Library/Caches/go-build" GOEXE="" GOFLAGS="" GOHOSTARCH="amd64" GOHOSTOS="darwin" GOOS="darwin" GOPATH="/Users/fgm/src/go" GOPROXY="" GORACE="" GOROOT="/usr/local/Cellar/go/1.11.2/libexec" GOTMPDIR="" GOTOOLDIR="/usr/local/Cellar/go/1.11.2/libexec/pkg/tool/darwin_amd64" GCCGO="gccgo" CC="clang" CXX="clang++" CGO_ENABLED="1" GOMOD="/Users/fgm/src/OSInet/net/plusvite/kurz/src/code.osinet.fr/fgm/kurz/go.mod" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/0g/p780bc554njc4qj110_8rmbr0000gn/T/go-build677218822=/tmp/go-build -gno-record-gcc-switches -fno-common" </pre></details> ### What did you do? <!-- If possible, provide a recipe for reproducing the error. A complete runnable program is good. A link on play.golang.org is best. --> - Store https://play.golang.org/p/7y5rpr2MpL6 as a simple go app in main.go - run `gotext -srclang=en extract -lang=fr,en .` ### What did you expect to see? - Nothing on the output - Directories `locales/fr` and `locales/en` created with an `out.gotext.json` file in each, ideally containing message definitions for NoError and Unimplemented strings. ### What did you see instead? - On the first invocation: - errors on the output: ``` /Users/fgm/src/go/src/golang.org/x/text/internal/number/number.go:150:42: cannot convert t (variable of type golang.org/x/text/language.Tag) to golang.org/x/text/internal/language/compact.Tag /Users/fgm/src/go/src/golang.org/x/text/feature/plural/plural.go:259:42: cannot convert t (variable of type golang.org/x/text/language.Tag) to golang.org/x/text/internal/language/compact.Tag gotext: extract failed: : : couldn't load packages due to errors: golang.org/x/text/feature/plural, golang.org/x/text/internal/number ``` - a `locales/en-US` directory created with an `out.gotext.json` file containing only this: ```json { "language": "en-US", "messages": null } ``` - On subsequent invocations: - nothing on the output - same locale/en-US directory with the same file in it
NeedsInvestigation
low
Critical
390,775,896
material-ui
[TextField][InputAdornment] InputLabel should not start shrunken if TextField has an InputAdornment
<!--- Provide a general summary of the issue in the Title above --> <!-- Thank you very much for contributing to Material-UI by creating an issue! ❀️ To avoid duplicate issues we ask you to check off the following list. --> <!-- Checked checkbox should look like this: [x] --> - [x] This is not a v0.x issue. <!-- (v0.x is no longer maintained) --> - [x] I have searched the [issues](https://github.com/mui-org/material-ui/issues) of this repository and believe that this is not a duplicate. ## Expected Behavior <!--- Describe what should happen. --> Input label should start on its normal position, as seen here: https://material-components.github.io/material-components-web-catalog/#/component/text-field ## Current Behavior <!--- Describe what happens instead of the expected behavior. --> Input label starts shrunken ## Steps to Reproduce https://material-ui.com/demos/text-fields/#outlined-input-adornments ## Your Environment <!--- Include as many relevant details about the environment with which you experienced the bug. If you encounter issues with typescript please include version and tsconfig. --> | Tech | Version | |--------------|---------| | Material-UI | 3.6.1 | | Material-UI styles | 3.0.0-alpha.2 | | React | 16.7.0-alpha.2 | | Browser | Chrome 71.0.3578.98 | | TypeScript | 3.2.1 |
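A minimal snippet that should reproduce the report (prop names follow the Material-UI v3 API as I understand it; the component name is made up): with a start adornment present, the label renders shrunken even though the field is empty and unfocused.

```jsx
import React from "react";
import TextField from "@material-ui/core/TextField";
import InputAdornment from "@material-ui/core/InputAdornment";

// Expected: the label rests over the empty input until focus, as in the MDC demo.
// Observed: it starts in the shrunken (floating) position because of the adornment.
export default function AdornedField() {
  return (
    <TextField
      label="Weight"
      variant="outlined"
      InputProps={{
        startAdornment: <InputAdornment position="start">kg</InputAdornment>,
      }}
    />
  );
}
```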
component: text field,design: material you
high
Critical
390,793,457
kubernetes
External provisioning problems
<!-- Please only use this template for submitting enhancement requests --> I did more investigation in issue https://github.com/kubernetes/kubernetes/issues/71928#issuecomment-447065491, and I think there are two problems: - the scheduler has no way to notify the external provisioner to retry provisioning when related objects are updated (e.g. an invalid storage class is fixed) - the external provisioner has no way to notify the scheduler to reschedule a PVC when it encounters unrecoverable situations (e.g. selected-node is wrong) Here are two scenarios: #### Scenario 1) The selected node is right, but the external provisioner needs to be notified to provision because non-PVC objects (e.g. the storage class) changed Possible ways to do this: 1.1) Update the [external provisioner lib](https://github.com/kubernetes-sigs/sig-storage-lib-external-provisioner) to retry the claim on storage class events. This requires updating the external provisioner. 1.2) Always schedule the PVC when necessary, e.g. when the storage class or PVC is updated, whether or not a node has already been selected. But we must skip all nodes except the current `selected-node`, because it is not safe for the scheduler to change the selected node: a volume may be being created on the old node, so we must rely on the external provisioner to reject the node for safety. Using this method, we need to find a way (e.g. add another annotation) to trigger a PVC update event without changing the current `selected-node`. This requires updating the external provisioner to provision again immediately on the PVC update event. 1.3) Remove `selected-node` on bind timeout. We assume it is safe to remove `selected-node` after the bind timeout. It may take some time (depending on the bind timeout), but it will eventually notify the external provisioner to provision again, even if the external provisioner has exceeded its retry limits. This requires updating the external provisioner to provision again immediately on the PVC update event. #### Scenario 2) The selected node is wrong, and we need to notify the scheduler to find a feasible node Possible ways to do this: 2.1) Wait for the provisioner to reject the PVC from the given node by removing the `selected-node` annotation; then the scheduler can reset it to notify the provisioner to retry again. This works like pod scheduling: when a Pod is assigned to a node but the kubelet finds it is not correct (e.g. the node selector no longer matches), the Pod is rejected. We need to distinguish two kinds of failures in the external provisioner: - unrecoverable failures, e.g. capacity/volume count hard limits on this node - the scheduler should retry on these failures, and it will and must select a new node for the given PVC (otherwise it may enter an infinite loop) - recoverable failures, e.g. network errors, provisioner bugs etc. - provisioners should retry on these failures and wait for the network to recover or the bugs to be fixed. A more brutal way is to remove the `selected-node` annotation on all failures, like the internal provisioner does. Either way, it requires the external provisioner to cooperate with the scheduler. 2.2) Another way is to always schedule the PVC when necessary, and assign the PVC to a new node to notify the provisioner to provision again. But it is not safe for the scheduler to change the selected node, see method 1.2). And the current external provisioner will not retry provisioning immediately when the `selected-node` field changes; we would need to update the external provisioner too. 2.3) Same as 1.3) ### My suggestion For now, we can do 1.1); this method has no drawbacks IMO and can fix part of the problems. <!-- DO NOT EDIT BELOW THIS LINE --> /kind feature /sig storage
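For concreteness, a hedged example of inspecting and resetting the hand-off annotation discussed above ("my-claim" is a placeholder PVC name; the annotation key is the one I believe the volume scheduler uses, so double-check it in your cluster):

```sh
# Show the node the scheduler picked for a pending claim
kubectl get pvc my-claim -o yaml | grep selected-node

# The 1.3 / 2.1 style reset: drop the annotation so scheduling/provisioning is retried
kubectl annotate pvc my-claim volume.kubernetes.io/selected-node-
```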
sig/storage,kind/feature,lifecycle/frozen
low
Critical
390,822,031
go
net: TestUDPZeroBytePayload is potentially flaky on iOS
Examples: https://build.golang.org/log/4801b71522ac597735f5dfc9e3de0a5177f1b014 https://build.golang.org/log/ac45684f4eeb9337972d8489ff01b93bf69adef2v https://build.golang.org/log/bef89502b0ebe850c741f3e69ed9e01d7b497c5a ``` --- FAIL: TestUDPZeroBytePayload (0.11s) udpsock_test.go:369: read udp4 127.0.0.1:49432: i/o timeout FAIL FAIL net 9.988s ``` CC @mikioh @bradfitz @ianlancetaylor
Testing,OS-Darwin,NeedsInvestigation,mobile
low
Major
390,845,955
go
runtime/debug: document BuildInfo.Main.Version == "(devel)"
<!-- Please answer these questions before submitting your issue. Thanks! --> ### What version of Go are you using (`go version`)? <pre> $ go version go version devel +571d93e977 Thu Dec 13 15:08:48 2018 +0000 darwin/amd64 </pre> ### What operating system and processor architecture are you using (`go env`)? <details><summary><code>go env</code> Output</summary><br><pre> $ go env GOARCH="amd64" GOBIN="" GOCACHE="/Users/mr/Library/Caches/go-build" GOEXE="" GOFLAGS="" GOHOSTARCH="amd64" GOHOSTOS="darwin" GOOS="darwin" GOPATH="/Users/mr/go" GOPROXY="" GORACE="" GOROOT="/Users/mr/gotip/src/github.com/golang/go" GOTMPDIR="" GOTOOLDIR="/Users/mr/gotip/src/github.com/golang/go/pkg/tool/darwin_amd64" GCCGO="gccgo" CC="clang" CXX="clang++" CGO_ENABLED="1" GOMOD="/Users/mr/gomod/debug-module-version-demo/go.mod" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/ct/bl4_z3g51ks8239_r2k07v_40000gn/T/go-build638578617=/tmp/go-build -gno-record-gcc-switches -fno-common" </pre></details> ### What did you do? <!-- If possible, provide a recipe for reproducing the error. A complete runnable program is good. A link on play.golang.org is best. --> Repro case is https://github.com/mark-rushakoff/debug-module-version-demo. It's a module, and its main.go contents are a simple use of `debug.ReadBuildInfo`: ```go package main import ( "fmt" "runtime/debug" _ "rsc.io/quote" ) func main() { bi, ok := debug.ReadBuildInfo() if !ok { panic("couldn't read build info") } fmt.Printf("%s version %s\n", bi.Path, bi.Main.Version) for _, d := range bi.Deps { fmt.Printf("\tbuilt with %s version %s\n", d.Path, d.Version) } } ``` ### What did you expect to see? I expected to see the first line print `github.com/mark-rushakoff/debug-module-version-demo version v0.0.0-20181213...` when checked out at an arbitrary commit, or `github.com/mark-rushakoff/debug-module-version-demo version v0.0.1` when checked out at tag v0.0.1. I tried both `go run .` and `go build . && ./debug-module-version-demo` but both cases printed `(devel)`. ### What did you see instead? ``` github.com/mark-rushakoff/debug-module-version-demo version (devel) built with golang.org/x/text version v0.0.0-20170915032832-14c0d48ead0c built with rsc.io/quote version v1.5.2 built with rsc.io/sampler version v1.3.0 ``` Based on the behavior I've observed, it looks as though the main module returned by `debug.ReadBuildInfo` is hardcoded to `(devel)` for the main module, which I assume is intended behavior. If so, that's unfortunate for use cases like `mycmd version` to easily print the module version of the software being built; but it should be documented. The current documentation at https://tip.golang.org/pkg/runtime/debug/#ReadBuildInfo does not mention `(devel)` in any way, nor does it mention any special behavior of the `Main` module. /cc @hyangah since you're on git blame for src/runtime/debug/mod.go.
Documentation,NeedsInvestigation,modules
high
Critical
390,860,655
puppeteer
Intercept target creation
In many cases, users want to intercept targets as they are created, so they can attach to them and set them up. Use cases: - when a popup is being opened, attach to it and enable request interception - when a link click opens a new page, set proper device emulation before the website is loaded We might be able to do this with CDP using `Target.setAutoAttach` and the `waitForDebugger` option. I'd like this to be scoped to the browser context, though, so that there's better flexibility. The API might look like this: ```js await browserContext.setTargetInterception(true); browserContext.on('targetcreated', async target => { if (target.type() !== 'page') { await target.resumeLoading(); return; } const page = await target.page(); await page.setViewport({width: 400, height: 400}); await target.resumeLoading(); }); ``` Related issues: #1378, #3648
feature,chromium,chrome
high
Critical
390,873,460
go
wiki: CodeReviewComments change
Is it okay to add an ImportsBlank section to the CodeReviewComments with the following text? ## ImportBlank Packages that are imported only for their side effects (using the syntax `import _ "pkg"`) should only be imported in the main package of a program, or in tests that require them.
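To make the proposed guideline concrete, a small illustrative example (the pprof import is just the usual stock example, not part of the proposed wiki text):

```go
// Good: the blank import lives in package main, where enabling the side effect
// (registering the /debug/pprof handlers) is an application-level decision.
package main

import (
	"log"
	"net/http"

	_ "net/http/pprof"
)

func main() {
	log.Fatal(http.ListenAndServe("localhost:6060", nil))
}
```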
Documentation,NeedsDecision
low
Major
390,873,584
godot
Physics is not deterministic when using time scale.
Win10 64bit d030c17 Demo Project (timescale exposed as an export var in the scene root): [3.1 Timescale Determinism Issue.zip](https://github.com/godotengine/godot/files/2677839/3.1.Timescale.Determinism.Issue.zip) I have been experimenting with Engine.time_scale and noticed that the results in the built-in physics can vary wildly. It makes me concerned about using time-slowing effects in games, because the way the game plays will change. There might be points in the game where the character can now just barely never reach a platform if they're using a time-slowing powerup. It would be very hard to identify problems like that and hack in workarounds in script. I made a small test project: RigidBody2Ds with a KinematicBody2D arm, animated with an AnimationPlayer on the physics process. They stay deterministic from play to play until the timescale is modified. Below are some animations showing these results at different time scales. It's not clear to me whether this is a bug, a limitation, or something that requires a totally different approach. ## 1X Timescale ![1x timescale](https://user-images.githubusercontent.com/13004169/49969316-f0e73f00-ff30-11e8-8296-28fc31f68aec.gif) ## 4X Timescale ![4x timescale](https://user-images.githubusercontent.com/13004169/49969318-f47ac600-ff30-11e8-9a17-489e1994d99f.gif)
discussion,topic:physics
low
Critical
390,874,600
rust
mem::size_of::<T> not const - use of type variable from outer function
The following CDR-serde implementation defines functions to serialize data types such as u16, u32, etc. It uses a helper function to align the write position to a multiple of mem::size_of::<T>(), where T is a primitive int or float type. As the byte size is known at compile time, I would like to declare the value as const, using the following patch: https://github.com/frehberg/cdr-rs/pull/1 ```rust impl<W, E> Serializer<W, E> where W: Write, E: ByteOrder, { .... fn write_padding_of<T>(&mut self) -> Result<()> { // Calculate the required padding to align with 1-byte, 2-byte, 4-byte, 8-byte boundaries // Instead of using the slow modulo operation '%', the faster bit-masking is used const PADDING: [u8; 8] = [0; 8]; const ALIGNMENT: usize = std::mem::size_of::<T>(); const MASK = ALIGNMENT - 1; // mask like 0x0, 0x1, 0x3, 0x7 match (self.pos as usize) & MASK { 0 => Ok(()), n @ 1...7 => { let amt = ALIGNMENT - n; self.pos += amt as u64; self.writer.write_all(&PADDING[..amt]).map_err(Into::into) } _ => unreachable!(), } } ``` but the compiler yields the following error ```sh error: Could not compile `cdr`. warning: build failed, waiting for other jobs to finish... error[E0401]: can't use type parameters from outer function --> src/ser.rs:51:54 47 | fn write_padding_of<T>(&mut self) -> Result<()> { | ---------------- - type variable from outer function | | | try adding a local type parameter in this method instead ... 51 | const ALIGNMENT: usize = std::mem::size_of::<T>(); | ^ use of type variable from outer function For more information about this error, try `rustc --explain E0401`. error: Could not compile `cdr`. ``` As the generic function is not exported by the lib, and it is instantiated only from inside the crate, I am wondering why the compiler is not able to derive the type of parameter T. Any idea? *EDIT* the explanation of E0401 does not fit this code
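As an aside, a hedged workaround sketch (a simplified stand-in, not the original crate's types: it uses `std::io::Result` instead of the crate's own `Result`): since `size_of::<T>()` is evaluated per monomorphization anyway, a plain `let` binding compiles where the `const` item does not.

```rust
use std::io::{Result, Write};

// Minimal stand-in for the serializer in the report; only the padding logic is shown.
struct Serializer<W> {
    pos: u64,
    writer: W,
}

impl<W: Write> Serializer<W> {
    fn write_padding_of<T>(&mut self) -> Result<()> {
        const PADDING: [u8; 8] = [0; 8];
        // `const` items cannot refer to the outer `T` (E0401); a `let` works and the
        // value is still known at compile time for each monomorphized T.
        let alignment = std::mem::size_of::<T>();
        let mask = alignment - 1; // alignment is a power of two for primitive types
        match (self.pos as usize) & mask {
            0 => Ok(()),
            n => {
                let amt = alignment - n;
                self.pos += amt as u64;
                self.writer.write_all(&PADDING[..amt])
            }
        }
    }
}
```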
A-diagnostics,T-compiler,C-bug,A-const-eval
low
Critical
390,876,075
godot
Kinematic bodies having collision normal problems against other Kinematic bodies.
<!-- Please search existing issues for potential duplicates before filing yours: https://github.com/godotengine/godot/issues?q=is%3Aissue --> **Godot version:** <!-- Specify commit hash if using non-official build. --> 9c3293b844fd5fa524778b519c2e0ce6ff495c19 **OS/device including version:** <!-- Specify GPU model, drivers, and the backend (GLES2, GLES3, Vulkan) if graphics-related. --> Win 10 **Issue description:** <!-- What happened, and what was expected. --> The issue is that when trying to reflect kinematic bodies on collision, they sometimes stick to each other. This also results in sticking to walls. ![sticking](https://user-images.githubusercontent.com/13004169/49969750-0315ad00-ff32-11e8-8c16-5a545556c1a5.gif) It can be alleviated somewhat by temporarily adding a collision exception between the bodies for a few frames, which has the potential for worse side effects. ```GDscript var ball = col.collider if(ball is KinematicBody2D): add_collision_exception_with(ball) temp_excl[ball] = delta * EXCLUSION_FRAMES ``` **Steps to reproduce:** Use the MRP below or the following code. ```GDscript var col = move_and_collide(heading * speed * delta ) if(col): heading = heading.bounce(col.normal) ``` **Minimal reproduction project:** <!-- A small Godot project which reproduces the issue. Drag and drop a zip archive to upload it. --> [Kinematic Bodies Sticking.zip](https://github.com/godotengine/godot/files/5732950/Kinematic.Bodies.Sticking.zip)
bug,confirmed,topic:physics
low
Major
390,888,473
go
x/build: trybots should include all platforms that can contribute release-blockers
If a build failure for a given platform P can be considered a release-blocker (such as #29221), then trybots should also run on P. Otherwise we have to rely on the actual build failure before we can fix the issue. Example: In #29221, some newly added math tests failed on s390x, yet the trybots didn't notice. If it's too expensive (too slow) to run the trybots for some platforms all the time, maybe we could consider doing it at least some time before any of the imminent release stages. For instance, if we are planning to cut a Beta or RC, we might want to consider starting to run the trybots a week before on all platforms where failures might block the release. cc: @golang/osp-team cc: @bradfitz cc: @dmitshur cc: @andybons
Builders,NeedsFix
low
Critical
390,905,267
TypeScript
Errors on non-callable unions should be more specific
While working on #29011 I realized that the error we produce for unions of things which look callable but for which we cannot synthesize a signature for are lackluster. We report an error like: ``` Cannot use 'new' with an expression whose type lacks a call or construct signature. ``` when we could do better and say something like ``` Cannot use 'new' with an expression whose type lacks a call or construct signature. All members of union type {0} have call or construct signatures, but none are similar enough to resolve a call in a typesafe way. ``` which would make it far more clear what's going on in these scenarios. Once #29011 is merged, the conditions can be even more specific, such as ``` Cannot use 'new' with an expression whose type lacks a call or construct signature. All members of union type {0} have call or construct signatures, but more than one has overloads, which prevents this call from being resolved in a typesafe way. ``` and ``` Cannot use 'new' with an expression whose type lacks a call or construct signature. All members of union type {0} have call or construct signatures, but more than one has type parameters, which prevents this call from being resolved in a typesafe way. ```
Suggestion,Domain: Error Messages,Experience Enhancement
low
Critical
390,920,598
rust
u128 atomic compare_and_set (cxchg) emits linker error
On Rust 1.32.0 nightly, for both x86_64-pc-linux-gnu and x86_64-pc-windows-gnu, a u128 atomic compare-and-swap fails at link time with `undefined reference to '__sync_val_compare_and_swap_16'`, and it also fails to link against libatomic if provided. Example: https://play.rust-lang.org/?version=nightly&mode=debug&edition=2018&gist=9aeb9e2d8ce7e5b2cac9cdeaadd03ee2
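The playground link above is the actual repro; as a rough sketch of the kind of code involved (assuming the nightly `integer_atomics` feature that gated `AtomicU128` at the time), something like this emits the 16-byte compare-and-swap that fails to link:

```rust
#![feature(integer_atomics)] // nightly-only at the time of this report

use std::sync::atomic::{AtomicU128, Ordering};

fn main() {
    let a = AtomicU128::new(1);
    // Lowers to a 128-bit (16-byte) compare-and-swap, which is what ends up
    // referencing __sync_val_compare_and_swap_16 at link time.
    let prev = a.compare_exchange(1, 2, Ordering::SeqCst, Ordering::SeqCst);
    println!("{:?}", prev);
}
```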
A-linkage,T-compiler,C-bug,A-intrinsics,requires-nightly,A-atomic
low
Critical
391,004,241
rust
libcompiler_builtins and libprofiler_builtins for aarch64-pc-windows-msvc contain x86 objects
I was trying to look at the code generated by nightly rust for arm64 windows, which turned up to not be supported by my binutils, but in the middle of the otherwise empty `objdump -d` output, I got the surprise to see x86 assembly. As it turns out, several object files embedded in libcompiler_builtins and libprofiler_builtins (but not all of them) are pe-i386 objects. ``` C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/ucmpti2.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/ucmpdi2.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/trunctfsf2.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/trunctfdf2.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/truncsfhf2.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/truncdfsf2.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/truncdfhf2.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/subvti3.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/subvsi3.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/subvdi3.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/powixf2.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/popcountti2.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/popcountsi2.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/popcountdi2.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/parityti2.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/paritysi2.o: file format pe-i386 
C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/paritydi2.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/negvti2.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/negvsi2.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/negvdi2.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/negti2.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/negsf2.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/negdi2.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/negdf2.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/mulxc3.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/mulvti3.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/mulvsi3.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/mulvdi3.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/mulsc3.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/muldc3.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/int_util.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/floatunsitf.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/floatunditf.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/floatsitf.o: file format pe-i386 
C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/floatditf.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/fixunstfti.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/fixunstfsi.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/fixunstfdi.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/fixtfti.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/fixtfsi.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/fixtfdi.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/ffsti2.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/extendsftf2.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/extendhfsf2.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/extenddftf2.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/divxc3.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/divsc3.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/divdc3.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/ctzti2.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/ctzsi2.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/ctzdi2.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/comparetf2.o: file format pe-i386 
C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/cmpti2.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/cmpdi2.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/clzti2.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/clzsi2.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/clzdi2.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/apple_versioning.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/addvti3.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/addvsi3.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/addvdi3.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/absvti2.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/absvsi2.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/compiler_builtins-ab033e49e368917e/out/./compiler-rt/lib/builtins/absvdi2.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/profiler_builtins-d4c98d7f03f32e3d/out/WindowsMMap.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/profiler_builtins-d4c98d7f03f32e3d/out/InstrProfilingWriter.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/profiler_builtins-d4c98d7f03f32e3d/out/InstrProfilingValue.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/profiler_builtins-d4c98d7f03f32e3d/out/InstrProfilingUtil.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/profiler_builtins-d4c98d7f03f32e3d/out/InstrProfilingRuntime.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/profiler_builtins-d4c98d7f03f32e3d/out/InstrProfilingPlatformOther.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/profiler_builtins-d4c98d7f03f32e3d/out/InstrProfilingPlatformLinux.o: file format 
pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/profiler_builtins-d4c98d7f03f32e3d/out/InstrProfilingPlatformDarwin.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/profiler_builtins-d4c98d7f03f32e3d/out/InstrProfilingNameVar.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/profiler_builtins-d4c98d7f03f32e3d/out/InstrProfilingMergeFile.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/profiler_builtins-d4c98d7f03f32e3d/out/InstrProfilingMerge.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/profiler_builtins-d4c98d7f03f32e3d/out/InstrProfilingFile.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/profiler_builtins-d4c98d7f03f32e3d/out/InstrProfilingBuffer.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/profiler_builtins-d4c98d7f03f32e3d/out/InstrProfiling.o: file format pe-i386 C:/projects/rust/build/x86_64-pc-windows-msvc/stage2-std/aarch64-pc-windows-msvc/release/build/profiler_builtins-d4c98d7f03f32e3d/out/GCDAProfiling.o: file format pe-i386 ``` Cc: @froydnj, @alexcrichton
O-windows-msvc,O-AArch64
low
Major
391,023,032
go
proposal: cmd/go/v2: prohibit in-package tests from extending a package's API
Currently in-package tests are free to add exported identifiers to the package that they are testing. Specifically an in-package test: - can define new variables - can define new types - can implement methods on non-test types This makes tooling harder, because any given package can have several possible variants, depending on which package's tests are being run. Consider the following example: -- go.mod -- module example -- a/a.go -- package a import ( "fmt" "example/b" ) var A fmt.Stringer = b.B{} -- b/b.go -- package b type B struct{} -- b/b1_test.go -- package b func (B) String() string { return "b" } -- b/b2_test.go -- package b_test import ( "testing" "example/a" ) func Test(t *testing.T) { if got, want := a.A.String(), "b"; got != want { t.Fatalf("got %q want %q", got, want) } } Running `go test example/b` succeeds because when b is being tested, the internal tests extended the B type so it implements fmt.Stringer. But `example/a` won't actually compile on its own because B doesn't implement fmt.Stringer without the test code added by the internal tests in `example/b`! It's as if tests exist in a closely related but different version of the universe in which every package might not be *quite the same* as its usual version. And that version is potentially different for every package. This "parallel universe" property of testing makes Go types harder to reason about and the Go tooling less efficient, because it's not possible to type-check a package once and for all in the general case. ## Where does this problem come from? Currently in Go, test code can be internal to a package, extending that package for the duration of the test and allowing the test code access to unexported identifiers within the package, or it can be external, with access to exported identifiers only. A hybrid approach is also possible, where tests are largely external, but some package-internal test code, often in an `export_test.go` file, is used to provide access to selected parts of the internal API. It's the final case that is problematic, because external test code is free to depend on other packages which themselves depend on the package being tested which has been extended with extra test code. All three approaches are common in existing Go code. I believe that any solution to this issue should continue to support almost all existing code (including the possibility of automatic code migration with `gofix`), otherwise substantial parts of the ecosystem will not be able to move forward with new Go features. ## Proposed solution I propose that it should not be possible to write tests in the same package as that being tested. That is, all tests must be in the external test package (e.g. `foo_test` to test package `foo`). A test file can use a special form of import statement to request access to unexported identifiers from the package being tested. If they use this special form, code within that file (only) may access unexported identifiers within the package without compiler errors. A possible (straw man) spelling for the syntax is: import "." That is, import the package in the current directory. This form is currently invalid because relative import paths are not allowed, and "." is not a valid package name by itself, so wouldn't overlap with existing import path syntax. As there is only one package that can be imported in this way, there is no ambiguity problem. 
It also satisfies [issue 29036](https://golang.org/issue/29036), as the imported package name can be automatically derived from the current file's package identifier by stripping the `_test` suffix. Another possibility might be to add an extra keyword, recognized only within an import block; for example: import "mypackagepath" internal Whatever the form for the special import syntax, this solution seems to solve the initial problem. It allows both all-internal tests (always use `import "."`), all-external tests (never use `import "."`) and hybrid tests ("internal" test code can define selected functionality to be used by external tests, similarly to how `export_test.go` files are used today). It also reduces the overall complexity of the way that tests work in Go. I believe that it should also be possible to automatically convert almost all existing code to the new rules. Some care would need to be taken to rename symbols that appear in both internal and external test code, but that's not an hard issue to solve. Perhaps that issue is rare enough that manual intervention could be required. More investigation is needed to see how common it is. As an example, some test code that exposes selected functionality to external tests might look like this: -- b/b.go -- package b type B struct { internalField string } func GetBValue() B { return B{ internalField: "b", } } -- b/b1_test.go -- package b_test import "." // allow access to unexported identifiers within this file. func BInternalField(x b.B) string { return x.internalField } -- b/b2_test.go -- package b_test import ( "testing" "example/b" ) func Test(t *testing.T) { x := b.GetBValue() if got, want := BInternalField(x), "b"; got != want { t.Fatalf("got %q want %q", got, want) } } ## Other possible solutions We could disallow all tests that rely on direct access to unexported identifiers i.e. allow external tests only. This is an attractive option for some, but the change would be very disruptive for much code. I do not believe that it would be possible to migrate existing internal tests automatically, so I think this possibility is excluded. We could continue to allow both internal and external tests, but treat internal test code as being in its own package, with access allowed to the tested package's unexported identifiers (and all symbols from the package available in global scope), but otherwise separate from it. External tests could use some kind of special import syntax (for example `import "thispackage@test"`) to access symbols defined in the internal tests. This was my original thought, but the model seems more complex than necessary - we have an extra pseudo-package for tests *and* a special import path syntax. We could prohibit internal test code from defining methods on types defined by the testing package. This solves some of the more obvious problems with the current situation, but the "parallel universe" issue is still present, and tooling probably would not become significantly simpler.
v2,Proposal
medium
Critical
391,052,673
flutter
Programmatically show keyboard when scanner is connected
Hi. I want to use a Bluetooth barcode scanner to automatically fill the fields in my Flutter app. The problem is that the system keyboard is prevented from appearing while the scanner is paired with the device. I have two fields: one to scan the barcode and a second to input some text, and I cannot type anything in the second field because the keyboard isn't showing. Is there a way to force the system keyboard to appear even with an HID device connected?
a: text input,c: new feature,framework,engine,P2,team-engine,triaged-engine
low
Minor
391,061,197
angular
Provide ControlValueAccessors for libraries other than @angular/forms
# 🚀 feature request ### Relevant Package This feature request is for @angular/forms ### Description The infrastructure surrounding `ControlValueAccessor` is widely adopted by component libraries, but currently the built-in ones are hardcoded to only be provided for `ngModel`, `formControl` and `formControlName`, preventing external libraries from benefiting from the existing implementations when implementing a new directive that wants to make use of the accessors. ### Describe the solution you'd like I would like some way to also provide these accessors when my own directive is present (instead of ngModel, formControl, formControlName). I don't have a concrete suggestion of what that would look like. ### Describe alternatives you've considered A workaround is to tell users to add `ngModel` as well, but this is not only awkward, it also interferes, as the entire forms infrastructure then kicks in and calls `writeValue` on its own, etc.
feature,state: blocked,effort2: days,state: Needs Design,freq2: medium,workaround2: non-obvious,area: forms,risk: low,feature: under consideration,core: host directives
medium
Major
391,116,073
go
net/url: misleading error message when url has a leading space
<!-- Please answer these questions before submitting your issue. Thanks! --> ### What version of Go are you using (`go version`)? <pre> $ go version go version go1.11.3 darwin/amd64 </pre> ### Does this issue reproduce with the latest release? Yes ### What operating system and processor architecture are you using (`go env`)? <details><summary><code>go env</code> Output</summary><br><pre> $ go env GOARCH="amd64" GOBIN="" GOCACHE="/Users/stefanb/Library/Caches/go-build" GOEXE="" GOFLAGS="" GOHOSTARCH="amd64" GOHOSTOS="darwin" GOOS="darwin" GOPATH="/Users/stefanb/go" GOPROXY="" GORACE="" GOROOT="/usr/local/Cellar/go/1.11.3/libexec" GOTMPDIR="" GOTOOLDIR="/usr/local/Cellar/go/1.11.3/libexec/pkg/tool/darwin_amd64" GCCGO="gccgo" CC="clang" CXX="clang++" CGO_ENABLED="1" GOMOD="" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/z2/3bdl__bs0xxd3kmf5c1pj0gm0000gn/T/go-build212726590=/tmp/go-build -gno-record-gcc-switches -fno-common" </pre></details> ### What did you do? ```go package main import ( "net/http" ) func main() { _, err := http.Get(" http://example.org") if err != nil { panic(err) } } ``` https://play.golang.org/p/jkcYSD6ZRcO ### What did you expect to see? Either * a generic error message of invalid URL or * a detailed error of invalid URL scheme or * a detailed error of leading space in URL ### What did you see instead? A *misleading detailed* error message: ``` parse http://example.org: first path segment in URL cannot contain colon ``` In #24246 the proposed solution was to trim the URLs before using them, but from the given error message it is very hard to see what the real problem is. I propose to either: * adjust the error message to eg: ``` parse http://example.org: URL scheme cannot contain spaces ``` or * quote the problematic URL in the error message, so that it is more obvious even if the message itself remains misleading: ``` parse " http://example.org": first path segment in URL cannot contain colon ``` or * ideally both adjust the error message + quote the URL: ``` parse " http://example.org": URL scheme cannot contain spaces ``` The `cannot contain spaces` in error message can be changed for `cannot contain whitespace` or even `contains invalid characters`, depending how the check is implemented. Current implementation: https://github.com/golang/go/blob/b50210f5719c15cd512857e2e29e1de152155b35/src/net/url/url.go#L540-L562
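For reference, the workaround suggested in #24246 (trim before parsing) looks roughly like this; it sidesteps the failure but does not make the message itself any clearer:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

func main() {
	raw := " http://example.org" // note the leading space
	u, err := url.Parse(strings.TrimSpace(raw))
	if err != nil {
		// Without TrimSpace this reports the misleading
		// "first path segment in URL cannot contain colon" error.
		fmt.Println(err)
		return
	}
	fmt.Println(u.String())
}
```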
NeedsFix
medium
Critical
391,189,157
flutter
Provide an API in the Flutter tool to make it extensible
Edit by @christopherfujino a design doc for this proposal is at https://flutter.dev/go/flutter-tool-extension-api ---- Right now, the Flutter tool is somewhat extensible by wrapping `main()` in a project-specific `main()` method, that uses combinations of `AppContext.run()` and the top-level `run()` function to override certain behaviors of the tool: https://github.com/flutter/flutter/blob/d675bbbd3a0f1e0992961e52161227c6ee37d0e8/packages/flutter_tools/lib/runner.dart#L29 for instance, the code would look something like this: ```dart import 'package:flutter_tools/runner.dart' as runner; import 'package:flutter_tools/src/base/context.dart'; import 'package:flutter_tools/src/commands/config.dart'; import 'package:flutter_tools/src/commands/daemon.dart'; import 'package:flutter_tools/src/commands/devices.dart'; import 'package:flutter_tools/src/commands/logs.dart'; import 'package:flutter_tools/src/runner/flutter_command.dart'; Future<Null> main(List<String> args) async { final Map<Type, Generator> overrides = await _getProjectSpecificOverrides(); await context.run( overrides: overrides, body: () { return runner.run(args, <FlutterCommand>[ new _ProjectSpecificAttachCommand(), new ConfigCommand(), new DaemonCommand(), new DevicesCommand(), new _ProjectSpecificDoctorCommand(), new LogsCommand(), new _ProjectSpecificRunCommand(), ]); }, ); } ``` The problem with this is that in order to accomplish this behavior, the project has to reach into implementation files in `flutter_tools`, as well as the fact that this is all undocumented because it's not an official API. There are a growing number of use cases for making the Flutter tool extensible, such as allowing project to: * wire up code generation as part of their build (and likewise for hot reload) * wire up C/C++ compilation as part of their build (moreso when https://github.com/dart-lang/sdk/issues/34452 and https://github.com/flutter/flutter/issues/7053 are implemented) * Support a different enumeration of device types (e.g. desktop embedders) * Expose a Flutter tool "plugin", such that other Flutter projects can customize their developer experience just by installing a set of plugins For a targeted set of use cases, we should expose an API in the Flutter tool for how such projects could import & extend the tool.
c: new feature,tool,customer: crowd,P2,team-tool,triaged-tool
low
Critical
391,209,453
flutter
update_dart_sdk.ps1 should be more graceful about download failure (and mention China)
`update_dart_sdk.ps1` should do the same thing as `update_dart_sdk.sh` in terms of this part of the latter script: ```sh curl --continue-at - --location --output "$DART_SDK_ZIP" "$DART_SDK_URL" 2>&1 || { echo echo "Failed to retrieve the Dart SDK from: $DART_SDK_URL" echo "If you're located in China, please see this page:" echo " https://flutter.io/community/china" echo rm -f -- "$DART_SDK_ZIP" exit 1 } unzip -o -q "$DART_SDK_ZIP" -d "$FLUTTER_ROOT/bin/cache" || { echo echo "It appears that the downloaded file is corrupt; please try again." echo "If this problem persists, please report the problem at:" echo " https://github.com/flutter/flutter/issues/new?template=ACTIVATION.md" echo rm -f -- "$DART_SDK_ZIP" exit 1 } ``` As far as I can tell the PowerShell version just doesn't do any error checking here, so those error messages don't appear at all in the PowerShell version.
tool,platform-windows,a: china,P2,team-tool,triaged-tool
low
Critical
391,218,542
react
Warning should appear when versions of react and react-dom do not match
**Do you want to request a *feature* or report a *bug*?** Request a feature **What is the current behavior?** If the versions of react and react-dom do not match, some features fail silently. See this issue for example: https://github.com/reduxjs/react-redux/issues/1125 In this issue, the new Context API wasn't working as intended, but no errors or warnings were visible. Components simply did not update. It turned out that this happened because I had updated react to version 16.6.3 but still had react-dom at version 16.5. **What is the expected behavior?** I would like to see some sort of warning message in the console in development mode when the versions of react and react-dom do not match.
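As a stopgap while no built-in warning exists, an app can do a coarse check itself; a minimal sketch (both packages expose a `version` string):

```js
import React from 'react';
import ReactDOM from 'react-dom';

if (process.env.NODE_ENV !== 'production' && React.version !== ReactDOM.version) {
  // A mismatch like react@16.6.3 with react-dom@16.5.x is exactly the
  // silent-failure case described above.
  console.warn(
    `react (${React.version}) and react-dom (${ReactDOM.version}) versions do not match.`
  );
}
```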
Type: Discussion
low
Critical
391,235,741
TypeScript
Rename a symbol of a named export: Do not create export alias
- VSCode Version: 1.30 - OS Version: Win 10 When I rename a symbol which is a named export in a module, there is just an alias created in the module `export` (VS Code 1.30). What I would expect is that the symbol is renamed in the module export AND in all modules that are importing it (that is the behavior of 1.29.1). Or an option by which I can specify the old behavior. This feature is so useful for renaming a bunch of modules. Release notes of 1.30 only include some [rename changes for destructuring ](https://code.visualstudio.com/updates/v1_30#_renames-handle-jsts-destructuring-properly), but I did not have any destructuring in the code. Old (1.29.1): ![vscode_1 29 1](https://user-images.githubusercontent.com/16099458/50003130-e82f5100-ffa2-11e8-8f9e-eb7de99e3beb.gif) New (1.30): ![vscode_1 30](https://user-images.githubusercontent.com/16099458/50003146-fc734e00-ffa2-11e8-8d7c-b1cbfa39380f.gif) Does this issue occur when all extensions are disabled?: Yes, running clean versions of 1.30, 1.29.1 ~~PS: Typescript 3.2.2 in both tests.~~ // EDIT: Actually, VS Code 1.29.1 did use TypeScript *3.1.4* in the first test after all, not 3.2.2. Switching to TS 3.2.2 with the old 1.29.1 version, the same behavior (just aliasing the export instead of renaming in the whole workspace) reoccurs. I am not sure whether there are interdependencies between VS Code and TS, or whether I should therefore leave the issue open.
Suggestion,In Discussion
medium
Critical
391,253,801
go
html/template: ambiguous errors when style tag is not closed
<!-- Please answer these questions before submitting your issue. Thanks! --> ### What version of Go are you using (`go version`)? <pre> $ go version go version go1.11.3 linux/amd64 </pre> ### Does this issue reproduce with the latest release? Yes ### What operating system and processor architecture are you using (`go env`)? <details><summary><code>go env</code> Output</summary><br><pre> $ go env GOARCH="amd64" GOBIN="" GOCACHE="/home/george/.cache/go-build" GOEXE="" GOFLAGS="" GOHOSTARCH="amd64" GOHOSTOS="linux" GOOS="linux" GOPATH="/home/george/.go" GOPROXY="" GORACE="" GOROOT="/usr/local/go" GOTMPDIR="" GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64" GCCGO="gccgo" CC="gcc" CXX="g++" CGO_ENABLED="1" GOMOD="/home/george/tmp/resume/go.mod" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build336466026=/tmp/go-build -gno-record-gcc-switches" </pre></details> ### What did you do? Ran the following program: https://play.golang.org/p/BtMWAAxnb3o ### What did you expect to see? Program not returning an error, or at least an error informing that the style tag was not closed. ### What did you see instead? ``` html/template:test:11:17: {{.}} appears in an ambiguous context within a URL ```
NeedsInvestigation
low
Critical
391,294,501
rust
For *-apple-ios targets, if license agreement is not agreed to (yet), better behaviour is necessary
Currently if one attempts to use `*-apple-ios` targets right after Xcode is installed, it will appear that the target does not exist. This happens because `xcrun` says something about the license, sudo, and whatnot. We should have better behaviour here. ![bad behaviour](https://user-images.githubusercontent.com/679122/50029796-b054f780-fffc-11e8-90fa-805a16dd9bef.png)
O-ios,T-compiler,A-licensing
low
Minor
391,299,016
go
cmd/compile: improve interface dispatch performance
Reviewing the following code: ![image](https://user-images.githubusercontent.com/4958833/50028158-5597ad80-ffb4-11e8-97f3-8223c4bb1e9a.png) and the generated assembly: <details><summary> <pre> varint_bench_test.go:14 0x10f07ee 488b4c2430 MOVQ 0x30(SP), CX varint_bench_test.go:14 0x10f07f3 488b5118 MOVQ 0x18(CX), DX ... varint_bench_test.go:14 0x10f0807 ffd2 CALL DX </pre> </summary> <pre> TEXT command-line-arguments.WriteUvarint(SB) /Users/robertengels/gotest/gotest/varint_bench_test.go varint_bench_test.go:12 0x10f07b0 65488b0c2530000000 MOVQ GS:0x30, CX varint_bench_test.go:12 0x10f07b9 483b6110 CMPQ 0x10(CX), SP varint_bench_test.go:12 0x10f07bd 0f869f000000 JBE 0x10f0862 varint_bench_test.go:12 0x10f07c3 4883ec28 SUBQ $0x28, SP varint_bench_test.go:12 0x10f07c7 48896c2420 MOVQ BP, 0x20(SP) varint_bench_test.go:12 0x10f07cc 488d6c2420 LEAQ 0x20(SP), BP varint_bench_test.go:13 0x10f07d1 488b442440 MOVQ 0x40(SP), AX varint_bench_test.go:13 0x10f07d6 eb09 JMP 0x10f07e1 varint_bench_test.go:18 0x10f07d8 488b442440 MOVQ 0x40(SP), AX varint_bench_test.go:18 0x10f07dd 48c1e807 SHRQ $0x7, AX varint_bench_test.go:13 0x10f07e1 483d80000000 CMPQ $0x80, AX varint_bench_test.go:13 0x10f07e7 7243 JB 0x10f082c varint_bench_test.go:13 0x10f07e9 4889442440 MOVQ AX, 0x40(SP) varint_bench_test.go:14 0x10f07ee 488b4c2430 MOVQ 0x30(SP), CX varint_bench_test.go:14 0x10f07f3 488b5118 MOVQ 0x18(CX), DX varint_bench_test.go:14 0x10f07f7 83c880 ORL $-0x80, AX varint_bench_test.go:14 0x10f07fa 88442408 MOVB AL, 0x8(SP) varint_bench_test.go:14 0x10f07fe 488b442438 MOVQ 0x38(SP), AX varint_bench_test.go:14 0x10f0803 48890424 MOVQ AX, 0(SP) varint_bench_test.go:14 0x10f0807 ffd2 CALL DX varint_bench_test.go:14 0x10f0809 488b442418 MOVQ 0x18(SP), AX varint_bench_test.go:14 0x10f080e 488b4c2410 MOVQ 0x10(SP), CX varint_bench_test.go:15 0x10f0813 4885c9 TESTQ CX, CX varint_bench_test.go:15 0x10f0816 74c0 JE 0x10f07d8 varint_bench_test.go:16 0x10f0818 48894c2448 MOVQ CX, 0x48(SP) varint_bench_test.go:16 0x10f081d 4889442450 MOVQ AX, 0x50(SP) varint_bench_test.go:16 0x10f0822 488b6c2420 MOVQ 0x20(SP), BP varint_bench_test.go:16 0x10f0827 4883c428 ADDQ $0x28, SP varint_bench_test.go:16 0x10f082b c3 RET varint_bench_test.go:20 0x10f082c 488b4c2430 MOVQ 0x30(SP), CX varint_bench_test.go:20 0x10f0831 488b4918 MOVQ 0x18(CX), CX varint_bench_test.go:20 0x10f0835 88442408 MOVB AL, 0x8(SP) varint_bench_test.go:20 0x10f0839 488b442438 MOVQ 0x38(SP), AX varint_bench_test.go:20 0x10f083e 48890424 MOVQ AX, 0(SP) varint_bench_test.go:20 0x10f0842 ffd1 CALL CX varint_bench_test.go:20 0x10f0844 488b442418 MOVQ 0x18(SP), AX varint_bench_test.go:20 0x10f0849 488b4c2410 MOVQ 0x10(SP), CX varint_bench_test.go:20 0x10f084e 48894c2448 MOVQ CX, 0x48(SP) varint_bench_test.go:20 0x10f0853 4889442450 MOVQ AX, 0x50(SP) varint_bench_test.go:20 0x10f0858 488b6c2420 MOVQ 0x20(SP), BP varint_bench_test.go:20 0x10f085d 4883c428 ADDQ $0x28, SP varint_bench_test.go:20 0x10f0861 c3 RET varint_bench_test.go:12 0x10f0862 e89948f6ff CALL runtime.morestack_noctxt(SB) varint_bench_test.go:12 0x10f0867 e944ffffff JMP command-line-arguments.WriteUvarint(SB) :-1 0x10f086c cc INT $0x3 :-1 0x10f086d cc INT $0x3 :-1 0x10f086e cc INT $0x3 :-1 0x10f086f cc INT $0x3 </pre> </details> The generated code loads the interface address using double indirection in every loop invocation, and every call (line 14 & 20). The compiler could easily generate optimized code where the DX is loaded once and used for every interface call, as w is constant in the method. 
It is my opinion that loops like this are very common in typical Go code and deserve more optimization attention. As an example, issue #29010 makes specific reference to not using interfaces as call sites due to their inefficiency. At a minimum the call address could be placed in a stack local avoiding one indirection. A more advanced change might be to reserve a couple of general purpose registers for the hot interface call address and object reference (r10/r11) and so push/pop r10/r11 on entry/exit for those routines using the optimization. Issue #18597 has some overlap with this.
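Since the source only appears in the screenshot above, here is a hypothetical reconstruction of a loop with the same shape (an `io.ByteWriter`-style interface called at two sites, matching the CALLs at lines 14 and 20 of the assembly); it is meant only to illustrate where the repeated itab load happens, not to reproduce the benchmark exactly:

```go
package varint

import "io"

// WriteUvarint is a sketch of the varint-style loop being discussed.
func WriteUvarint(w io.ByteWriter, x uint64) error {
	for x >= 0x80 {
		// The compiled code re-loads w's method pointer from the itab on
		// every iteration before the indirect CALL, even though w never
		// changes inside this function.
		if err := w.WriteByte(byte(x) | 0x80); err != nil {
			return err
		}
		x >>= 7
	}
	return w.WriteByte(byte(x))
}
```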
Performance,NeedsInvestigation,compiler/runtime
low
Major
391,307,900
go
net: UnixListener blocks forever in Close() if File() is used to get the file descriptor
### What version of Go are you using (`go version`)? <pre> $ go version go version go1.11.3 linux/amd64 </pre> ### Does this issue reproduce with the latest release? Yes, it also reproduces with the latest tip. ### What operating system and processor architecture are you using (`go env`)? <details><summary><code>go env</code> Output</summary><br><pre> $ go env GOARCH="amd64" GOBIN="" GOCACHE="/home/prashant/.cache/go-build" GOEXE="" GOFLAGS="" GOHOSTARCH="amd64" GOHOSTOS="linux" GOOS="linux" GOPATH="/home/prashant/go" GOPROXY="" GORACE="" GOROOT="/home/prashant/.gimme/versions/go1.11.3.linux.amd64" GOTMPDIR="" GOTOOLDIR="/home/prashant/.gimme/versions/go1.11.3.linux.amd64/pkg/tool/linux_amd64" GCCGO="gccgo" CC="gcc" CXX="g++" CGO_ENABLED="1" GOMOD="" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build250167096=/tmp/go-build -gno-record-gcc-switches" </pre></details> ### What did you do? Created a unix listener, called `File()` to get the underlying file descriptor, then ran `Accept` in a goroutine. After some while, tried to call `Close` on the listener, but the `Close` blocks indefinitely. Repro: https://github.com/prashantv/unix-close-race/blob/master/main.go ### What did you expect to see? `Close` to return, and cause the `Accept` (in a separate goroutine) to unblock, and return an error. ### What did you see instead? In Go 1.11.3, the `Close` blocks indefinitely: ``` goroutine 1 [semacquire]: internal/poll.runtime_Semacquire(0xc0000a4028) /home/prashant/.gimme/versions/go1.11.3.linux.amd64/src/runtime/sema.go:61 +0x39 internal/poll.(*FD).Close(0xc0000a4000, 0xc0000a4000, 0x0) /home/prashant/.gimme/versions/go1.11.3.linux.amd64/src/internal/poll/fd_unix.go:109 +0xdb net.(*netFD).Close(0xc0000a4000, 0xc0000b5e58, 0xc00008a030) /home/prashant/.gimme/versions/go1.11.3.linux.amd64/src/net/fd_unix.go:184 +0x4f net.(*UnixListener).close(0xc000080540, 0xc0000d0020, 0x5) /home/prashant/.gimme/versions/go1.11.3.linux.amd64/src/net/unixsock_posix.go:186 +0x65 net.(*UnixListener).Close(0xc000080540, 0x1, 0x1) /home/prashant/.gimme/versions/go1.11.3.linux.amd64/src/net/unixsock.go:270 +0x47 main.testCloseAfterAccept(0x511ec0, 0xc000080540) /home/prashant/go/src/github.com/prashantv/unix-close-race/main.go:66 +0x111 main.main() /home/prashant/go/src/github.com/prashantv/unix-close-race/main.go:43 +0x1ce ``` In Go 1.10.6, the `Close` returns, but the `Accept` goroutine is not unblocked, ``` 2018/12/14 14:58:02 Got fd from unix ln5 2018/12/14 14:58:02 wait for accept 2018/12/14 14:58:02 about to accept 2018/12/14 14:58:02 close 2018/12/14 14:58:02 wait for accept to end <blocks> ``` ### Other Notes It looks like the behaviour change between 1.10 and 1.11 may be caused by https://go-review.googlesource.com/c/go/+/119955/ (fix for #24942) I added a flag to the repro, if you pass `--use-new-fd`, instead of calling `Accept` and `Close` on the original unix listener (which we called `File()` on), it uses a new listener from the copied file descriptor. This mitigates the issue (both in Go 1.10 and Go 1.11). It seems like calling `File()` duplicates the file descriptor, but somehow affects the original file descriptor causing issues with `Accept` + `Close`. cc @witriew who originally ran into this issue.
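The `--use-new-fd` mitigation mentioned above amounts to accepting and closing on a listener rebuilt from the duplicated descriptor rather than on the original one; roughly (a sketch of my understanding, not the exact repro code):

```go
package main

import (
	"log"
	"net"
)

func main() {
	ln, err := net.ListenUnix("unix", &net.UnixAddr{Name: "/tmp/example.sock", Net: "unix"})
	if err != nil {
		log.Fatal(err)
	}
	f, err := ln.File() // File() duplicates the descriptor
	if err != nil {
		log.Fatal(err)
	}
	ln2, err := net.FileListener(f) // fresh listener on the duplicated fd
	if err != nil {
		log.Fatal(err)
	}
	// Calling Accept and Close on ln2 instead of ln avoids the hang described above.
	defer ln2.Close()
	_ = ln2
}
```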
help wanted,NeedsInvestigation
low
Critical
391,318,976
TypeScript
TSServer navtree - Way to differentiate function declarations from function expressions
<!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. --> **TypeScript Version:** 3.3.0-dev.20181214 <!-- Search terms you tried before logging this (so others can find this issue more easily) --> **Search Terms:** - tsserver - navtree **Problem** For https://github.com/Microsoft/vscode/issues/22981, we'd like a way to differentiate function delcarations from function expressions. Consider a file: ```ts function foo(f) { } foo(function bar() { }) const car = function dar(f) { } ``` VS Code uses the `navtree` to get possible code lens locations. We would only like to show references code lenses on `function foo` and perhaps on `const car =`, not on the expression `foo(function bar() {})`. Using the `navtree` request against the TS Server, we cannot tell function declarations like `foo` from function expressions like `bar` : ``` { "text": "bar", "kind": "function", "kindModifiers": "", "spans": [ { "start": { "line": 3, "offset": 5 }, "end": { "line": 3, "offset": 23 } } ], "nameSpan": { "start": { "line": 3, "offset": 14 }, "end": { "line": 3, "offset": 17 } } }, { "text": "dar", "kind": "function", "kindModifiers": "", "spans": [ { "start": { "line": 5, "offset": 13 }, "end": { "line": 5, "offset": 32 } } ], "nameSpan": { "start": { "line": 5, "offset": 22 }, "end": { "line": 5, "offset": 25 } } }, { "text": "foo", "kind": "function", "kindModifiers": "", "spans": [ { "start": { "line": 1, "offset": 1 }, "end": { "line": 1, "offset": 20 } } ], "nameSpan": { "start": { "line": 1, "offset": 10 }, "end": { "line": 1, "offset": 13 } } } ] ``` **Proposal** Differentiate: ```js function foo(f) { } ``` by using a kindModifier of `"definition"` or a similar name (or do the inverse and mark `function bar() { }` with a kindModifier of `"expression"`) Also, we correctly differentiate `car` using ` "kind": "const"` in the case of: ```js const car = function(f) { } ``` However only `dar` is returned for for: ```js const car = function dar(f) { } ```
Suggestion,In Discussion,API,Domain: TSServer
low
Minor
391,319,488
pytorch
Gather backward is faster than integer indexing on GPU
## πŸš€ Feature When x is a tensor of shape (N, D) and idx is a tensor of indices of shape K, the backward pass of x[idx] is much slower than the equivalent operation implemented using gather. Here is a benchmarking script: https://gist.github.com/jcjohnson/b03a0275e64681bb7587bbc7399a645a On a P100 GPU with PyTorch 1.0 stable, across a variety of problem shapes I get the following results: ``` Forward gather speedup: Min: 0.7549055905220289 Max: 5.590410529614541 Mean: 0.9328673787035276 Median: 0.880012936610608 Backward gather speedup: Min: 1.6313537996980372 Max: 23.95120218579235 Mean: 3.340551245050125 Median: 1.8802246977054176 ``` Basically this says that on the forward pass index is sometimes faster and gather is sometimes faster. However on the backward pass, gather is always faster than integer indexing. This is surprising, and suggests that although the two operations perform the same computation their implementations have very different performance characteristics. Integer indexing is much more intuitive than gather, so I suspect that many users are unknowingly leaving a lot of performance on the table by choosing integer indexing over gather. In one of my own applications, replacing integer indexing with gather resulted in a more than 2x speedup on my overall training iteration times! Would it be possible to somehow unify the implementation of the two operations, or otherwise ensure that integer indexing always performs at least as well as gather?
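For readers comparing the two ops, a small sketch of the equivalence being benchmarked (shapes are arbitrary placeholders, not the gist's exact settings):

```python
import torch

N, D, K = 1000, 64, 128
x = torch.randn(N, D, device='cuda', requires_grad=True)
idx = torch.randint(0, N, (K,), device='cuda')

# Integer (advanced) indexing: select K rows of x.
out_index = x[idx]

# Equivalent gather: expand the row indices to (K, D), then gather along dim 0.
out_gather = x.gather(0, idx.unsqueeze(1).expand(K, D))

assert torch.equal(out_index, out_gather)

# The forward results are identical; the reported gap is in the backward
# kernels, e.g. out_gather.sum().backward() vs. out_index.sum().backward().
```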
module: performance,triaged,module: determinism
medium
Major
391,323,612
TypeScript
querySelector return type could be more specific for compound selectors
## Search Terms querySelector, return, type, selector ## Suggestion This issue closely follows #8114, which applies only to **type selectors** ("single-element selectors"). Related to #12568. The return type of [ParentNode#querySelector](https://developer.mozilla.org/en-US/docs/Web/API/ParentNode/querySelector) is `Element` by default, but when the string argument matches exactly a lower-case element name, the return type is the interface of that element. For example, `.querySelector('#hero.wide')` returns an `Element` type, but `.querySelector('img')` returns an `HTMLImageElement` type. This helpful behavior fails beyond **simple type selectors** (a selector containing only an element name). When the argument becomes a [compound selector](https://www.w3.org/TR/selectors-4/#structure), such as `.querySelector('img#hero.wide')`, the return type is the more generic `Element`. This is unhelpful when the element name, `img`, remains in the selector. My suggestion is to improve parsing of the string argument, so that when it is a **compound selector that contains a type selector**, the return type can still be a specific interface. Obviously, this would not apply to selectors *not* containing a type selector, e.g. there is no way to know for sure that `.querySelector('#hero.wide')` is indeed an `HTMLImageElement`. ## Use Cases ```ts document.querySelector('#hero.wide').src = '//example.com/hero.png' // Element // Error: Property 'src' does not exist on type 'Element' document.querySelector('img').src = '//example.com/hero.png' // HTMLImageElement // no error, but cannot specify *which* image document.querySelector('img#hero.wide').src = '//example.com/hero.png' // Element // Error: Property 'src' does not exist on type 'Element' (document.querySelector('img#hero.wide') as HTMLImageElement).src = '//example.com/hero.png' // no error, and can specify which image, but must assert type manually ``` **Summary: It would be nice for TS to infer that `img#hero.wide` selects an `HTMLImageElement`, based on the tag name `img` in the selector. This would eliminate the need to assert the type manually.** ## Checklist My suggestion meets these guidelines: * [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code * [x] This wouldn't change the runtime behavior of existing JavaScript code * [x] This could be implemented without emitting different JS based on the types of the expressions * [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.) * [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals). *(Specifically: Goals 5, 6, and 9)*
Suggestion,In Discussion,Domain: lib.d.ts
medium
Critical
391,330,514
pytorch
[caffe2] controlled forward pass
## ❓[caffe2] I am in need of running a controlled forward pass for a model I am working on. As in execute a layer collect its output do some manipulation on it and feed the data to the next layer. How do we do this in caffe2 ? caffe does allow caffe.forward(start=<>, end=<>) python API. Can we do the same in caffe2 - [Discussion Forum](https://discuss.pytorch.org/)
caffe2
low
Minor
391,331,330
TypeScript
Improve signature help when completing object argument
From https://github.com/Microsoft/vscode/issues/56270 **TypeScript Version:** 3.3.0-dev.20181214 **Search Terms:** **Code** For a simple TS File: ```js function foo(x: number, i: { ab: number, cd: number }) { } foo(1, { ``` Trigger signature help on last line. **Problem:** We currently just display the entire type in signature help: ![screen shot 2018-12-14 at 5 40 27 pm](https://user-images.githubusercontent.com/12821956/50037548-630a6300-ffc7-11e8-871b-64c9602f7dd8.png) Some possible quality of life improvements: * Underline which parameter you are currently typing and show its documentation * Show which parameters have already been completed vs which ones are missing This is likely not supported by the current VS Code api. Opening this issue for further discussion on how this could be improved /cc @DanielRosenwasser
Suggestion,Needs Proposal,Domain: Signature Help
low
Critical
391,344,700
pytorch
[Feature Request] cdist: pairwise distances between two sets of tensors with batch mode
We already have [F.pdist](https://pytorch.org/docs/stable/nn.html?highlight=pdist#torch.nn.functional.pdist), which computes pairwise distances between each pair in **a single set** of vectors. However, in retrieval problems, we often need to compute the pairwise distances between each pair consisting of one sample from a probe/query set and another sample from a gallery/database set, in order to evaluate the performance of a retrieval model. Specifically, if the database tensor has size N-by-D and the query M-by-D, the tensor returned by a *cdist* function should have size N-by-M, where the (i, j)-th element is the distance between the i-th sample from the database and the j-th sample from the query. Currently, a plausible way (I use this method because I don't know enough GPU programming to do better) to do this evaluation is: 1. transfer the tensors to numpy arrays: query_np = query.cpu().numpy(), database_np = database.cpu().numpy() 2. use [cdist](https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.cdist.html) provided by scipy: dist_matrix = cdist(query_np, database_np) which is far from elegant, cannot utilize the GPU, and is thus inefficient. So, could we develop an efficient cdist function that can use the GPU and hopefully behaves like [cdist in scipy](https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.cdist.html)?
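Until a native `cdist` exists, a common GPU-friendly workaround is the squared-norm expansion trick; a minimal sketch (less numerically robust near zero distances than a dedicated kernel would be):

```python
import torch

def pairwise_dist(database, query):
    """database: (N, D), query: (M, D) -> (N, M) Euclidean distance matrix."""
    # ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b, so the whole computation
    # stays on the GPU and reduces to one matmul plus broadcasting.
    d2 = (database.pow(2).sum(dim=1, keepdim=True)   # (N, 1)
          + query.pow(2).sum(dim=1)                  # (M,), broadcasts to (N, M)
          - 2.0 * database @ query.t())              # (N, M)
    return d2.clamp_(min=0).sqrt_()

database = torch.randn(10000, 256, device='cuda')
query = torch.randn(512, 256, device='cuda')
dist = pairwise_dist(database, query)                # shape (10000, 512)
```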
triaged,module: batching,function request,module: distance functions
medium
Major
391,353,578
pytorch
err:torch.nn.CrossEntropyLoss
PyTorch 1.0 doc error: torch.nn.CrossEntropyLoss

If weight = None, loss(x, class) = -x[class] + log(βˆ‘_j exp(x[j])); the result is correct.

If weight != None, loss(x, class) = weight[class] * (-x[class] + log(βˆ‘_j exp(x[j]))); the result is wrong?

```python
import torch
import torch.nn.functional as F

a = [[-5., -6.], [3., 2.], [3., 2.]]
c = [1, 0, 1]
b = [3., 2.]
input = torch.tensor(a, requires_grad=True)
target = torch.tensor(c)
weight = torch.tensor(b)
output = F.cross_entropy(input, target, weight, reduce=True, size_average=True)
z = output.sum()
z.backward()
print(output, input.grad)

--------------------------
tensor(0.8847, grad_fn=)
tensor([[ 0.2089, -0.2089],
        [-0.1153,  0.1153],
        [ 0.2089, -0.2089]])
```

cc @brianjo @mruberry
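For reference, the reported value 0.8847 is what one gets if the weighted per-sample losses are averaged by the sum of the per-target weights rather than by the batch size. A small sketch reproducing both numbers by hand (this is my reading of the averaging behaviour, not a statement of what the docs should say):

```python
import torch

x = torch.tensor([[-5., -6.], [3., 2.], [3., 2.]])
y = torch.tensor([1, 0, 1])
w = torch.tensor([3., 2.])

# per-sample loss: -x[class] + log(sum_j exp(x[j]))
per_sample = -x[torch.arange(3), y] + torch.logsumexp(x, dim=1)
weighted = w[y] * per_sample

print(weighted.sum() / w[y].sum())  # ~0.8847 (average by the sum of target weights)
print(weighted.sum() / 3)           # ~2.064  (average by the batch size)
```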
module: docs,triaged
low
Minor
391,355,402
pytorch
MultiGPU for gru
## πŸ› Bug <!-- A clear and concise description of what the bug is. --> During runtime of GRU under multi-GPU environment, there is a RuntimeError: Expected hidden size (3, 64, 12), got (3, 16, 12) where the first, second, and third arguments are the number of GRU layers, batch size and number of hidden units respectively. My model run well under single-GPU environment. In Pytorch Forum, a question "DataParallel LSTM/GRU wrong hidden batch size (8 GPUs)" has been asked. One solution is to set batch_first to True for gru. But the error was still there after the setting. The correct solution is to store the hidden state inside the model rather than return it like the below code block(from user AnodyneCodeAsher Newcomer): ```python class LSTM(nn.Module): def __init__(self, initial_state): super(LSTM, self).__init__() self.lstm = nn.LSTM( ... batch_first=True) self.hn = initial_state def forward(self, input): output, hn = self.lstm(input, self.hn) self.hn = hn return output ``` <!-- If you have a code sample, error messages, stack traces, please provide it here as well --> My wrong code sample: ```python class LSTM(nn.Module): def __init__(self, initial_state): super(LSTM, self).__init__() self.lstm = nn.LSTM( ... batch_first=True) def forward(self, input, hidden): output, hn = self.lstm(input, hidden) return output, hn ``` Please help to fix this bug. Thanks a lot. ## Environment - PyTorch Version 1,0: - OS Ubuntu 16.04: - How you installed PyTorch`pip`: - Python version: 3.6 - CUDA/cuDNN version: 9.0 cc @zou3519
module: rnn,triaged,module: data parallel
low
Critical
391,359,779
TypeScript
Separate type application from function application
As discussed in #28931, type application for a function with generic parameters is currently tied to the function application itself. Specializing the value of a generic function is only possible during the call. So the following compiles fine:

```ts
function id<V> (v : V) { return v }

const some : Date = id<Date>(new Date())
```

But this does not:

```ts
function id<V> (v : V) { return v }

type IdDate = typeof id<Date>   // TS1005: ';' expected

const dateId = id<Date>         // TS1109: Expression expected.
```

It would be very beneficial to separate type application from function application. For example (the original reason for this request), it would allow mixins with generic parameters:

```ts
export type Constructable<T extends any> = new (...args : any[]) => T
export type AnyFunction = (...input: any[]) => any
export type Mixin<T extends AnyFunction> = InstanceType<ReturnType<T>>

export const Atom = <V, T extends Constructable<Object>>(base : T) =>
    class Atom extends base {
        value : V

        hasValue () : boolean {
            return this.hasOwnProperty('value')
        }
    }

export type Atom = Mixin<typeof Atom>       // this currently works, but does not have a generic argument
export type Atom<V> = Mixin<typeof Atom<V>> // this is the goal
```
Suggestion,Awaiting More Feedback
medium
Major
391,366,235
TypeScript
Vscode suggests wrong ts auto import path with js extension after having a json import
**TypeScript Version:** 3.1.6, but also tried with 3.3.0-dev.201xxxxx

**Search Terms:** auto import json wrong module path js extension issue bug

**Code**

https://stackblitz.com/edit/typescript-json-import-issue

**I am not using any vscode extension for managing imports and, as far as I know, vscode depends on typescript to show the module import suggestions, so I hope this is the right place for this issue.**

tsconfig.json

```json
{
  "compilerOptions": {
    "module": "es2015",
    "moduleResolution": "node",
    "resolveJsonModule": true,
    "esModuleInterop": true
  }
}
```

**Expected behavior:**

When we have `import packageJson from './package.json';`, the VSCode auto import autocomplete should suggest:

`import { testing } from './test';`

**Actual behavior:**

The VSCode auto import autocomplete suggests:

`import { testing } from './test.js';`
Bug,Domain: TSServer
medium
Critical
391,389,271
go
runtime: exceeded thread wakeup limitation on iOS
We found that my gomobile iOS application crashed with the message below:

```
Event:          wakeups
Action taken:   none
Wakeups:        45001 wakeups over the last 64 seconds (704 wakeups per second average), exceeding limit of 150 wakeups per second over 300 seconds
Wakeups limit:  45000
Limit duration: 300s
Wakeups caused: 45001
Duration:       17.04s
Steps:          18

Hardware model: iPhone8,1
Active cpus:    2
Boot args:
```

I guess the Go runtime switched threads so many times that iOS complained, but I am not sure. It is hard to create minimal code to reproduce this problem, but I'd like to report this (potential) problem in gomobile.
mobile,compiler/runtime
medium
Critical
391,393,835
TypeScript
Document quick fix (or whatever it's officially called)
**TypeScript Version:** 3.3.0-dev.201xxxxx

**Search Terms:** code fix quick fix suggestion

**Code**

Not entirely applicable, but this is what I had which caused me to get confused:

```ts
const fs = require("fs");
```

**Expected behavior:**

Commandline `tsc` should behave the same as editor integrations.

**Actual behavior:**

The "quick fix" feature, which appears to be called "code fixes" in the codebase (and googling the messages led me to a json file which refers to them as "suggestions"), only appears to exist in the typescript server. As a newcomer this had me extremely confused. I was running `tsc` on the commandline and getting no errors, but seeing squiggly lines and messages in my editor (atom, in this case). I was unable to find any documentation about where the message `'require' call may be converted to an import` was coming from, or that this was a hint that typescript was able to fix my code for me.

It would also be helpful if the `tsc` client was able to list the code fix suggestions, even if giving it a way to apply them would be too complex.

**Playground Link:** n/a

**Related Issues:** none that I could find
Suggestion,In Discussion
low
Critical
391,406,484
rust
rustc should output a warning when it encounters problems locating cross-crate sources for error messages
See #53081 - currently it is hard to diagnose what path it is actually looking for. Problems might include: - can't find the file - the file has the wrong hash.
C-enhancement,A-diagnostics,T-compiler
low
Critical
391,410,049
go
x/crypto/ssh: client requires first hostkey to match, knownhosts doesn't expose available key types
<!-- Please answer these questions before submitting your issue. Thanks! --> ### What version of Go are you using (`go version`)? <pre> $ go version go version go1.11.3 linux/amd64 </pre> ### Does this issue reproduce with the latest release? This is the latest docker release (yes) ### What operating system and processor architecture are you using (`go env`)? <details><summary><code>go env</code> Output</summary><br><pre> $ go env GOARCH="amd64" GOBIN="" GOCACHE="/Users/fsalwin/.cache/go-build" GOEXE="" GOFLAGS="" GOHOSTARCH="amd64" GOHOSTOS="linux" GOOS="linux" GOPATH="/Users/fsalwin/go" GOPROXY="" GORACE="" GOROOT="/usr/local/go" GOTMPDIR="" GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64" GCCGO="gccgo" CC="gcc" CXX="g++" CGO_ENABLED="1" GOMOD="" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build745071247=/tmp/go-build -gno-record-gcc-switches" </pre></details> ### What did you do? <pre> config := &ssh.ClientConfig{ User: GetRemoteUsername(host), HostKeyCallback: knownHostsCallback, Timeout: 30 * time.Second, Auth: []ssh.AuthMethod{}, } </pre> knownHostsCallback is an ssh.HostKeysCallback. 2 machines: machine A has ecdsa-sha2-nistp256 key for remote host in known hosts file machine B has ssh-ed25519 key for remote host in known hosts file If machine A connects to the remote host, the remote host sends the ecdsa-sha2-nistp256 key and is allowed to continue the handshake If machine B connects to the remote host, the remote host sends the ecdsa-sha2-nistp256 key and is rejected because the callback returns an error. If I set the host key list to prefer ["ssh-ed25519","ecdsa-sha2-nistp256"...] Machine A fails Machine B succeeds ### What did you expect to see? Since there is no mechanism for discovering the available hostkeys for a host published in ssh/knownhosts , I expected it to try each hostkey in the preference list in turn, failing only if all the keys had been tried. ### What did you see instead? SSH Client ceases handshake after receiving the first error response from the HostKeysCallback. Since only the method on the database is returned by knownhosts.New() it is not easy to add another method accessing the same hostKeyDB instance. I propose adding an initializing function for the database from knownhosts which returns the database - NewDB to complement New which would return a hostkeys interface. There would be published methods on the receiver: <pre> type KnownHostDB interface { // HostKeyCallback is knownhosts.New without the DB initialization. HostKeyCallback() ssh.HostKeyCallback // HostKeyAlgorithms takes an address and returns a list of matching key types. HostKeyAlgorithms(address string) ([]string, error) } // NewDB is knownhosts.New without the callback code func NewDB(filename string) (KnownHostDB, error) </pre> That way you can just use: <pre> knownHosts := knownhosts.NewDB(knownhostfile) // just to cut down on example code, the error is ignored. algos, _ := knownHosts.HostKeyAlgorithms(host) config := &ssh.ClientConfig{ User: GetRemoteUsername(host), HostKeyCallback: knownHosts.HostKeysCallback(), Timeout: 30 * time.Second, Auth: []ssh.AuthMethod{}, HostKeyAlgorithms: algos, } </pre> Alternatively if the protocol allows for it, the host key algorithms could be tried in turn until you run out of algorithms (fail) or you have a match (success)
NeedsFix
low
Critical
391,411,541
godot
[Bullet] Rigidbody doesn't appear to apply rotation
Godot 3.1 alpha calinou 6f9aa87 (downloaded 12/4/2018) Windows 10 64-bit I am applying a rotation on my rigidbody. It appears correct in the viewport. When I actually play my project, the rotation seems to only apply to the collisionshape, but not the mesh. I am attaching a sample project. You will see the viewport has a different rotation than the actual gameplay. If you select the floor object and rotate it back to 0, everything works as expected again. potentially related to this ticket... https://github.com/godotengine/godot/issues/22904 [rigidbody-rotate-test.zip](https://github.com/godotengine/godot/files/2683063/rigidbody-rotate-test.zip)
bug,confirmed,topic:physics
low
Minor
391,412,363
rust
prelude path should not appear in error messages
(Split off from discussion on #56188.) There are a few places where the path `std::prelude::v1` appears in diagnostic messages. ``` zmd@ReflectiveCoherence:~/Code/rust/src/test/ui$ ag std::prelude [...] 27 matches 9 files contained matches ``` I argue that this is bad because the point of the prelude is to relieve the ordinary programmer from having to think about where necessities like `Some` and `Vec` live; the module `std::prelude::v1` is an implementation detail that should be mentioned in places like the Book, but which I don't want to see cluttering up error messages. We are manually stripping off the `std::prelude::v1::` in a couple places ([one](https://github.com/rust-lang/rust/commit/80c603fc6589aaf70df7c142723eef9a1d28aec5), another proposed in open PR #56188), but we should: - [ ] strip `std::prelude::v1::` for _all_ diagnostic messages - [ ] do so in a unified [DRY](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself) way, rather than having that hardcoded string littering the codebase in several places
C-enhancement,A-diagnostics,T-compiler,D-papercut
low
Critical
391,413,355
godot
EditorImporterTexturePlugin to tweak how textures are imported
As discussed with @reduz on IRC, there could be an EditorImporterTexturePlugin API to allow tweaking the texture import process and doing things like changing how mipmaps are computed, before compression kicks in.

The initial use case I have is a splatmap with a repeating texture. Over distance, it looks really bad:

![image](https://user-images.githubusercontent.com/1311555/50047572-d1c2eb80-00af-11e9-910a-6b978ebf2ab0.png)

I tried changing the bias of mipmaps, but it also looks bad, and for some reason mipmaps don't converge to a single color:

![image](https://user-images.githubusercontent.com/1311555/50047582-f74ff500-00af-11e9-83fd-64338ceccb36.png)

So instead, I decided to blend towards a solid color, taken from the center of the highest mip. This looks a lot better:

![image](https://user-images.githubusercontent.com/1311555/50047624-fcfa0a80-00b0-11e9-949e-2122ed48b377.png)

Unfortunately, doing this has a pretty high cost, because it doubles the required texture fetches in a shader that already has a lot to do. So I figured a very nice alternative would be to bake that color blend into the mipmaps themselves, making the fix free and nicely integrated. Doing that requires altering the texture importing process, and that's where such an API would be needed.

Also, as @Calinou mentioned, there are other effects depending on custom mipmaps, mentioned here: https://www.youtube.com/watch?v=exp1Yrxt50A (8:00)

And this could even make channel packing easier, which I currently achieve manually with this plugin: https://godotengine.org/asset-library/asset/230
enhancement,discussion,topic:plugin,topic:import
low
Minor
391,414,878
godot
Changes to shader parameters are not taken into account with Sync Scene Changes
**Godot version:** official Godot 3.1 alpha3, Windows 10 x64

**OS/device including version:** Acer Aspire 114-31, Intel HD 500, driver 25.20.100.6326

**Issue description:** I'm not sure if this is a bug, but changes to shader parameters in the Inspector are not applied to the running game while "Sync Scene Changes" under the Debug menu is enabled, although the tooltip says "any scene changes".

**Steps to reproduce:** select the "Sprite" node, then in the Inspector: Material -> Shader -> Shader Param

**Minimal reproduction project:** [Shaders Basics 2D.zip](https://github.com/godotengine/godot/files/2683092/Shaders.Basics.2D.zip)
discussion,topic:editor
low
Critical
391,417,307
TypeScript
tsserver: deprecate getSupportedCodeFixes, add fixable property to Diagnostic
This proposes an alternative to #29010. Making `getSupportedCodeFixes` proxy-able by plugins means the client has to request the list of fixable error codes for each file - and theoretically every time the program is updated (because the configuration of a plugin could have changed). Could we instead just add a new property `fixable` to `Diagnostic`? That would make `getSupportedCodeFixes` obsolete. Fixable error codes would no longer need to be known in advance, which would also resolve #28990.
Suggestion,In Discussion,API,Domain: Quick Fixes
low
Critical
391,439,514
go
cmd/compile: use bit tests for binary search in type switches
Consider a large type switch with constant (non-interface) cases, like:

```go
package p

func f(e interface{}) int {
	switch e.(type) {
	case int:
		return 0
	case bool:
		return 1
	case byte:
		return 2
	case uint16:
		return 3
	case string:
		return 4
	}
	return 5
}
```

The generated code does a binary search using `<` over hashes for the type cases. However, the actual values, being hashes, are large, so the generated code contains large constants for doing the comparisons. Using 1.11 for the above code, here's an excerpt from the middle:

```
0x000a 00010 (type_switch_opt.go:7) MOVL 16(AX), CX
0x000d 00013 (type_switch_opt.go:7) CMPL CX, $1715356255
0x0013 00019 (type_switch_opt.go:7) JHI 99
0x0015 00021 (type_switch_opt.go:7) CMPL CX, $335480517
0x001b 00027 (type_switch_opt.go:7) JNE 91
```

But since the constants are known to be hashes, we could do our binary search using bit tests instead: first split on whether hash&1 != 0, then on the second bit, and so on as needed. This should afford a shorter instruction encoding and perhaps even faster execution. (This might need to apply to only some architectures; investigation required.)

It would be a good idea to confirm at compile time that the bit test will do a good job of splitting the inputs (almost) in half, moving on to higher bits if the candidate bit doesn't work well.
Performance,compiler/runtime
low
Minor
391,440,595
godot
Export browse can open in non-existent folder
Godot 3.1 alpha 3 Windows 10 x64 If you export a project into a folder, and then delete that folder, then try to export the project again, then browse within the export dialog will open into the folder that had been deleted. If you then try to export the project within this non-existent folder, Godot fails out with a warning message that it couldn't write to the folder. Maybe best to have Godot export browse dialog open to the next closest existing parent directory, or default to the root of the project folder if the last export folder is non-existent.
enhancement,topic:editor,usability
low
Minor
391,443,595
neovim
Messages related to `Ctrl-X` are displayed on stderr
Messages related to `Ctrl-X` are displayed on stderr always. Given: ```vim new normal! iword1 normal! oword2 function! s:close_pum(...) call feedkeys("\<c-e>\<esc>") endfunction call timer_start(10, 's:close_pum') call feedkeys("oword\<C-p>", 'x!') ``` ``` % nvim -u t-nvim-displays-ctrlx-msg.vim -ccq Scanning tags.-- Keyword completion (^N^P) match 1 of 2Press ENTER or type command to continue % nvim -u t-nvim-displays-ctrlx-msg.vim -ccq --headless Scanning tags.-- Keyword completion (^N^P) match 1 of 2% ``` With `noshowmode` it still displays `Scanning tags.match 1 of 2`. `silent` can be used with `call feedkeys(…)` to work around this, but those messages should not be displayed on stderr in the first place, especially with `--headless`. NVIM v0.3.2-1015-gaec096fc5
io,system,display
low
Major
391,445,213
godot
Godot crashes when trying to create an import file whose name would be longer than the filesystem would allow
Godot 3.1 alpha 3 Windows 10 x64 Windows apparently won't allow file names to be longer than a set length. Godot automatically creates import files in the .import directory when an image is found in the project directory. If the file that would automatically be created would have a file name longer than Windows would allow, then Godot crashes. Steps to reproduce: In the project folder, add an image file. Change the file name to be as long as Windows will allow (put a bunch of dummy text in the file name until Windows forces no more characters from being added to the name). As soon as Windows accepts the new name, Godot crashes when trying to import the file.
bug,platform:windows,discussion,confirmed,topic:import
low
Critical
391,448,924
godot
Code completion in get_node or $ doesn't work for sibling or higher nodes
Godot 3.1 alpha 3 Windows 10 x64 When using the `get_node` function, the code editor will offer autocomplete for all the nodes descending from the node that possesses the script in the scene. This is nice, but no code completion is offered for sibling nodes or further up the ancestry. For example `get_node("../siblingNode")` won't offer autocomplete for "siblingNode" after "../" has been written. Is this because there are possibly many ancestors, siblings, and so on, and so the list of strings would often be very long right out the gate with `get_node(`? Can instead the list of siblings appear only once this much has been written: `get_node("../` , or would that require too much rework of the code completion system? It would be a nice QOL improvement to be able to more quickly reference these nodes with get_node and $, not worry about spelling them incorrectly, and not feel like I had already misspelled something just because the autocomplete turned off.
enhancement,discussion,topic:gdscript,topic:editor,usability
low
Minor
391,450,565
flutter
Async task is skipped when the assert fails
I use [Firebase Core](https://pub.dartlang.org/packages/firebase_core#-readme-tab-) and spot one potential issue (maybe) about [FutureBuilder](https://docs.flutter.io/flutter/widgets/FutureBuilder-class.html) and [Firebase Core](https://pub.dartlang.org/packages/firebase_core#-readme-tab-) itself. When async is combined with assert and it's used inside FutureBuilder, if the assert is not true, somehow the rest of the async task is skipped without notifying, and snapshot's connection state will be "done", here is my scripts **Async with assert** ```dart _test() async { assert(1 == 3); await Future.delayed(Duration(milliseconds: 1000)); return "test"; } getPrefs() async { print("before"); print("_test => ${await _test()}"); print("after"); } @override Widget build(BuildContext context) { print("main executed!"); return FutureBuilder( future: getPrefs(), builder: (BuildContext context, AsyncSnapshot snapshot) { switch (snapshot.connectionState) { case ConnectionState.active: print("Active!"); break; case ConnectionState.none: print("None!"); break; case ConnectionState.waiting: print("Waiting!"); break; case ConnectionState.done: print("Done!"); break; default: print("Default!"); } return MaterialApp( home: Scaffold(body: Center(child: CircularProgressIndicator())), ); }, ); } ``` will print ``` flutter: main executed! flutter: before flutter: Waiting! flutter: Done! ``` **Async without assert** ```dart _test() async { await Future.delayed(Duration(milliseconds: 1000)); return "test"; } getPrefs() async { print("before"); print("_test => ${await _test()}"); print("after"); } @override Widget build(BuildContext context) { print("main executed!"); return FutureBuilder( future: getPrefs(), builder: (BuildContext context, AsyncSnapshot snapshot) { switch (snapshot.connectionState) { case ConnectionState.active: print("Active!"); break; case ConnectionState.none: print("None!"); break; case ConnectionState.waiting: print("Waiting!"); break; case ConnectionState.done: print("Done!"); break; default: print("Default!"); } return MaterialApp( home: Scaffold(body: Center(child: CircularProgressIndicator())), ); }, ); } ``` will print ``` flutter: main executed! flutter: before flutter: Waiting! flutter: _test => test flutter: after flutter: Done! ``` **Note** - I called this a potential issue for `FutureBuilder`, because sure it is programmer's fault that the assert is occurred, but maybe in the future, it would be nice to have a log to notify us..
framework,has reproducible steps,P2,found in release: 3.3,found in release: 3.7,team-framework,triaged-framework
low
Major
391,461,282
go
os: consider syscall.EEXIST in os.IsExist
Note that I'm not very categorical in my issue title, but this debugging of a [subtle Afero file system bug](https://github.com/spf13/afero/pull/190) has given me enough gray hairs to at least deserve an issue/discussion. The program below runs fine on *nix but fails on Windows (it compiles fine): ```go package main import ( "log" "os" "syscall" ) func main() { if !os.IsExist(syscall.EEXIST) { log.Fatal("failed") } } ``` I assume that `syscall.EEXIST` will never happen on Windows, and then it should probably also not be defined for Windows. I guess that ship has sailed, but you should then consider to expand `os.IsExist` on Windows to also include this error. Because there may be other people creating file system abstractions that can be bitten by this. /cc @spf13
OS-Windows,NeedsInvestigation
low
Critical
391,467,578
TypeScript
The type of Document.documentElement could be SVGSVGElement
Currently, the type of `Document.documentElement` is `HTMLElement`: https://github.com/Microsoft/TypeScript/blob/4d74f67325d305f52a2b00b4f30b5a4f3210c649/lib/lib.dom.d.ts#L4124 But it is possible for a `Document`’s `documentElement` property to be of type `SVGSVGElement` which is not a subtype of `HTMLElement`: Try to run the following code: ```js const d = (new DOMParser()).parseFromString('<svg xmlns="http://www.w3.org/2000/svg"></svg>', 'image/svg+xml'); const k = (d instanceof Document) && (d.documentElement instanceof SVGSVGElement); ``` the value of `k` should be `true`. Also, why isn’t there a `SVGDocument` in TypeScript?
Suggestion,Revisit
low
Major
391,479,960
pytorch
index_add_ with scalar values instead of tensors
Hi, I was wondering whether we could update functions like `index_add_(dim, index, tensor)` so that the third argument (`tensor`) can also be a scalar value?
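Until such an overload exists, a common workaround is to materialize the scalar as a source tensor of the right length before calling `index_add_`. A small illustrative sketch:

```python
import torch

t = torch.zeros(5)
idx = torch.tensor([0, 2, 2])
val = 3.0

# index_add_ currently wants a tensor whose size along `dim` matches len(idx),
# so expand the scalar by hand; duplicate indices accumulate as usual.
t.index_add_(0, idx, torch.full((idx.numel(),), val))
print(t)  # tensor([3., 0., 6., 0., 0.])
```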
triaged,enhancement,module: advanced indexing
low
Minor
391,488,678
angular
Expose matched component inside EmptyOutletComponent
# πŸš€ feature request

### Relevant Package

This feature request is for @angular/router

### Description

First, big thanks for https://github.com/angular/angular/pull/23459 and fixing the issue of supporting lazy loaded aux routes. The introduction of the EmptyOutletComponent indeed fixes the issue; however, I noticed a missing capability, which is that we can no longer get a reference to the *true* activated component. i.e. When using the (activate) event on router-outlet to get a reference to the component instance in the outlet, these named outlets now return an EmptyOutletComponent (correctly), and there is no way to easily get the instance of the actual underlying activated component. It would be nice if we could additionally have a way to get a reference to the router-matched component instead of, or in addition to, the empty outlet component.

### Describe the solution you'd like

Simply emit the underlying router-matched component INSTEAD of the EmptyOutletComponent so that developers using the (activate) event on their named outlet will get the expected component instead of an EmptyOutletComponent reference.

### Describe alternatives you've considered

An alternative solution would be to simply merge activate events from the router outlet inside the empty outlet component into the main named outlet's activate events and have developers filter for the one that they want (e.g. type of component != EmptyOutletComponent)
type: bug/fix,freq3: high,area: router,state: has PR,state: confirmed,P3
medium
Major
391,497,942
rust
OOM on MIPS 32-bit
We previous successfully did a native-compilation of mips rustc 1.30.0 here: https://buildd.debian.org/status/fetch.php?pkg=rustc&arch=mips&ver=1.30.0%2Bdfsg1-2%2Bb1&stamp=1541331217&raw=0 It was compiling itself using a previously-cross-compiled 1.30.0 rustc mips, cross-compiled from amd64. Only 14 tests failed out of the whole test suite, so we're now shipping it in Debian. However the build for 1.31.0 is running out of memory in LLVM on stage 1: https://buildd.debian.org/status/fetch.php?pkg=rustc&arch=mips&ver=1.31.0%2Bdfsg1-1%7Eexp2&stamp=1544918153&raw=0
C-enhancement,O-MIPS,T-compiler,I-compilemem
low
Critical
391,506,336
TypeScript
Type JS's `arguments` object based on function parameters
## Search Terms type "arguments" object parameters ## Suggestion Within a function `x`, the type of the `arguments` object should be `Parameters<typeof x> & IArguments`, with `undefined` removed from the type of any entries in `Parameters<typeof x>` if that parameter has a default value. ## Use Cases ``` class A { fn(it: number) { return true; } } class B extends A { fn(it: number) { // this shouldn't error, because arguments must be at least length 1. // instead we get ERROR: Expected 1 arguments, but got 0 or more. return super.fn(...arguments); } } ``` ## Checklist My suggestion meets these guidelines: * [?] This wouldn't be a breaking change in existing TypeScript/JavaScript code * [x] This wouldn't change the runtime behavior of existing JavaScript code * [x] This could be implemented without emitting different JS based on the types of the expressions * [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.) * [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
Suggestion,In Discussion
low
Critical
391,509,933
pytorch
possible unsafety in torch.distributions.kl_divergence for Bernoullis
## Note This might be my first pytorch issue. ## πŸ› Bug torch.distributions.kl_divergence seems numerically unsafe for Bernoullis. In the following script, I compare with a hand-written divergence between Bernoullis that makes sure to add epsilon before log()'ing. The torch KL and the handwritten version compute the same number down to the fourth decimal, but torch's KL causes a nan grad while mine does not. I found the nan using the torch anomaly catcher (so useful), whose trace I will include below the example code. ## To Reproduce For simplicity, the following code seeks to learn a vector of Bernoulli logits for a distribution q_z to minimize the KL betweeen q_z and a prior p_z. In the following code, switch between loss=kl_pytorch and loss=kl_custom to observe the difference in behavior: ``` import torch import torch.nn as nn from torch.distributions import Bernoulli from torch.distributions import kl_divergence EPS = 1e-16 torch.set_anomaly_enabled(True) def custom_bernoulli_kl(q_logits,p_probs): q1_probs = nn.Sigmoid()(q_logits) q1 = q1_probs q0 = 1 - q1 p1 = p_probs p0 = 1 - p1 logq1 = (q1 + EPS).log() logq0 = (q0 + EPS).log() logp1 = (p1 + EPS).log() logp0 = (p0 + EPS).log() kldiv_1 = q1*(logq1 - logp1) kldiv_0 = q0*(logq0 - logp0) return (kldiv_1 + kldiv_0).sum() q_logits = torch.tensor([10.3,-6.0,30.0],requires_grad=True) optimizer = torch.optim.Adam([q_logits],lr=0.001) for i in range(10): p_probs = torch.tensor([0.5,0.5,0.5]) q_z = Bernoulli(logits=q_logits) p_z = Bernoulli(probs=p_probs) kl_pytorch = torch.distributions.kl_divergence(q_z,p_z).sum() kl_custom = custom_bernoulli_kl(q_logits,p_probs) print("---") print("KL Pytorch:",kl_pytorch) print("KL Custom:",kl_custom) print("---") loss = kl_pytorch #loss = kl_custom # doesnt break loss.backward() optimizer.step() ``` ## torch anomaly detector traceback: The following shows the line of torch.distributions.kl_divergence for Bernoullis that is causing the nan: ``` sys:1: RuntimeWarning: Traceback of forward call that caused the error: File "kl_test.py", line 38, in <module> kl_pytorch = torch.distributions.kl_divergence(q_z,p_z).sum() File "/Users/torch/anaconda3/envs/python3/lib/python3.6/site-packages/torch/distributions/kl.py", line 166, in kl_divergence return fun(p, q) File "/Users/torch/anaconda3/envs/python3/lib/python3.6/site-packages/torch/distributions/kl.py", line 183, in _kl_bernoulli_bernoulli t2 = (1 - p.probs) * ((1 - p.probs) / (1 - q.probs)).log() Traceback (most recent call last): File "kl_test.py", line 48, in <module> loss.backward() File "/Users/torch/anaconda3/envs/python3/lib/python3.6/site-packages/torch/tensor.py", line 102, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph) File "/Users/torch/anaconda3/envs/python3/lib/python3.6/site-packages/torch/autograd/__init__.py", line 90, in backward allow_unreachable=True) # allow_unreachable flag RuntimeError: Function 'MulBackward0' returned nan values in its 0th output. ``` ## Expected behavior I would expect not to have nan gradients, even when the logits for the Bernoulli are around magnitude 40. This might arise e.g. in a deep generative model, where a matrix transformation yields a layer of real numbers to be interpreted as Bernoulli logits. Please let me know if one should instead reduce logit magnitudes before initializing a Bernoulli distribution. 
If something should be fixed, not sure which of the following is better: - "distributions should always have probs that are safe for KL calculation" - "KL should perform additional numerical safety routines" ## Environment PyTorch version: 1.0.0 Is debug build: No CUDA used to build PyTorch: None OS: Mac OSX 10.11.6 GCC version: Could not collect CMake version: version 3.11.4 Python version: 3.6 Is CUDA available: No CUDA runtime version: No CUDA GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA Versions of relevant libraries: [pip] Could not collect [conda] blas 1.0 mkl [conda] mkl 2018.0.3 1 [conda] mkl_fft 1.0.6 py36hb8a8100_0 [conda] mkl_random 1.0.1 py36h5d10147_1 [conda] pytorch 1.0.0 py3.6_1 pytorch [conda] torchvision 0.2.1 py_2 pytorch ## Additional context Thank you! cc @fritzo @neerajprad @alicanb @nikitaved
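One possible mitigation along the lines of "probs that are safe for KL calculation" is to clamp the probabilities away from exactly 0/1 before constructing the distribution. A hedged sketch (the epsilon and the clamp-before-construct approach are my own suggestion, not an official fix):

```python
import torch
from torch.distributions import Bernoulli, kl_divergence

EPS = 1e-6
q_logits = torch.tensor([10.3, -6.0, 30.0], requires_grad=True)

# Clamp probabilities into (EPS, 1 - EPS) so the log terms inside the KL stay finite.
q_probs = torch.sigmoid(q_logits).clamp(EPS, 1 - EPS)
q_z = Bernoulli(probs=q_probs)
p_z = Bernoulli(probs=torch.tensor([0.5, 0.5, 0.5]))

kl = kl_divergence(q_z, p_z).sum()
kl.backward()
print(q_logits.grad)  # finite (zero where the clamp saturates), no NaNs
```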
module: distributions,triaged,module: NaNs and Infs
low
Critical
391,525,217
youtube-dl
request: daricbennett.com
### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.12.17*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.

- [x] I've **verified** and **I assure** that I'm running youtube-dl **2018.12.17**

### Before submitting an *issue* make sure you have:

- [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
- [x] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser

### What is the purpose of your *issue*?

- [ ] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [x] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other

---

### Example URLs

- Single video: https://daricbennett.com/courses/slap-course/#slapping-picking-transition
- Playlist: https://daricbennett.com/courses/slap-course/

---

### Description of your *issue*, suggested solution and other information

Add support for downloading videos from daricbennett.com bass lessons
account-needed
low
Critical
391,533,167
pytorch
[caffe2] Installation problem on OSX
## πŸ“š Documentation

First I tried installing using pip into a Python 3.7 environment. That mostly worked, except the Loading_Pretrained_Models tutorial failed; apparently it uses a Python 2 module. Maybe put a disclaimer in the installation guide that Python 2 (2.7??) is the supported target platform. Since there was no such restriction, I went with the latest.

Next I tried using conda (miniconda) and that failed even sooner:

    $ conda install pytorch-nightly-cpu -c pytorch
    Solving environment: failed
    PackagesNotFoundError: The following packages are not available from current channels:
      - pytorch-nightly-cpu

This is the kind of poorly tested functionality I have come to expect from TensorFlow. Since Caffe2 claims to be easy and production-ready, the documentation should clearly address all required steps to get to at least an initial working setup.
caffe2
low
Critical
391,534,993
pytorch
linked error of Pytorch 1.0 release
## πŸ› Bug ## To Reproduce Steps to reproduce the behavior: 1. download pytorch to local 2. Disable USE_DISTRIBUTE=NO 3. MACOSX_DEPLOYMENT_TARGET=10.13 CC=clang CXX=clang++ python setup.py bdist_wheel 4. install compiled library under dist/ folder ## Environment - PyTorch Version (e.g., 1.0): 1.0 - OS (e.g., Linux): macOS - How you installed PyTorch (`conda`, `pip`, source): conda - Build command you used (if compiling from source): MACOSX_DEPLOYMENT_TARGET=10.13 CC=clang CXX=clang++ python setup.py bdist_wheel - Python version: 3.6.7 - CUDA/cuDNN version: 9.2 - GPU models and configuration: Nvidia 1080T - Xcode 8.3.2 with command line ## Error as follow: ```python In [1]: import torch --------------------------------------------------------------------------- ImportError Traceback (most recent call last) <ipython-input-1-eb42ca6e4af3> in <module> ----> 1 import torch ~/miniconda3/lib/python3.6/site-packages/torch/__init__.py in <module> 82 pass 83 ---> 84 from torch._C import * 85 86 __all__ += [name for name in dir(_C) ImportError: dlopen(/Users/llv23/miniconda3/lib/python3.6/site-packages/torch/_C.cpython-36m-darwin.so, 9): Symbol not found: _ompi_mpi_char Referenced from: /Users/llv23/miniconda3/lib/python3.6/site-packages/torch/lib/libtorch_python.dylib Expected in: flat namespace in /Users/llv23/miniconda3/lib/python3.6/site-packages/torch/lib/libtorch_python.dylib ```
module: build,triaged,module: macos
medium
Critical
391,541,848
flutter
analyze --watch output emphasizes warns and deemphasizes errors via the choice of indent
The wrapping makes the analyzer output hard to read with a narrow(ish) window size: ``` Analyzing /home/ianh/dev/cruisemonkey... 155ms warning β€’ This function has a return type of 'Future<void>', but doesn't end with a return statement β€’ test/logic/cruise_model_test.dart:171:3 β€’ missing_return error β€’ Undefined class 'Seamail' β€’ test/logic/cruise_model_test.dart:173:5 β€’ undefined_class error β€’ 'TestTwitarr.newSeamail' ('(Credentials, dynamic, PhotoManager, Set<User>, String, String) β†’ Progress<dynamic>') isn't a valid override of 'Twitarr.newSeamail' ('(Credentials, Seamail, PhotoManager, Set<User>, String, String) β†’ Progress<SeamailThread>') β€’ test/logic/cruise_model_test.dart:180:3 β€’ invalid_override warning β€’ This function has a return type of 'Progress', but doesn't end with a return statement β€’ test/logic/cruise_model_test.dart:181:3 β€’ missing_return error β€’ The name 'SeamailThread' isn't a type so it can't be used as a type argument β€’ test/logic/cruise_model_test.dart:181:12 β€’ non_type_as_type_argument error β€’ Undefined class 'Seamail' β€’ test/logic/cruise_model_test.dart:183:5 β€’ undefined_class error β€’ 'TestTwitarr.fetchProfilePicture' ('(String) β†’ Progress<dynamic>') isn't a valid override of 'Twitarr.fetchProfilePicture' ('(String) β†’ Progress<Uint8List>') β€’ test/logic/cruise_model_test.dart:192:3 β€’ invalid_override warning β€’ This function has a return type of 'Progress', but doesn't end with a return statement β€’ test/logic/cruise_model_test.dart:193:3 β€’ missing_return error β€’ The name 'Uint8List' isn't a type so it can't be used as a type argument β€’ test/logic/cruise_model_test.dart:193:12 β€’ non_type_as_type_argument warning β€’ This function has a return type of 'Progress<List<User>>', but doesn't end with a return statement β€’ test/logic/cruise_model_test.dart:198:3 β€’ missing_return error β€’ Missing concrete implementations of CruiseModel.avatarFor, CruiseModel.getUserList, CruiseModel.heardAboutUserPhoto, CruiseModel.newSeamail and 5 more β€’ test/mocks.dart:124:7 β€’ non_abstract_class_inherits_abstract_member_five_plus error β€’ Missing concrete implementations of CruiseModel.avatarFor, CruiseModel.getUserList, CruiseModel.heardAboutUserPhoto, CruiseModel.newSeamail and 5 more β€’ test/views/settings_test.dart:44:7 β€’ non_abstract_class_inherits_abstract_member_five_plus 12 issues found (5 fixed) β€’ analyzed 1 file in 0.19 seconds ``` In particular, the way that error lines wrap differently than warning lines is especially egregious, I think. cc @gspencergoog, @devoncarew
tool,a: first hour,P2,team-tool,triaged-tool
low
Critical
391,573,539
pytorch
manylinux2014 compatible wheels
Because of hard-constraints around requiring CUDA, we didn't ever correctly produce manylinux1 compatible wheels (manylinux1 = CentOS5 environment, but CUDA requires atleast CentOS6). We also had issues wrt statically linking stdc++ correctly. We've had several 10s of bugs filed because of doing these hacks wrong: https://github.com/pytorch/pytorch/issues/5400 https://github.com/pytorch/pytorch/issues/3111 https://github.com/pytorch/pytorch/issues/5876 https://github.com/pytorch/pytorch/issues/926 https://github.com/pytorch/pytorch/issues/595 https://github.com/pytorch/pytorch/issues/4101 Revisit producing standards-compatible wheels based on manylinux2010, which is based on CentOS6 (which CUDA supports for now): https://www.python.org/dev/peps/pep-0571/ To correctly statically link to stdc++, see if some of the things that Apache Arrow does would fix the underlying issues for us: https://github.com/apache/arrow/blob/master/cpp/src/arrow/symbols.map Do check that the issues I described in https://github.com/pytorch/pytorch/issues/5400#issuecomment-369428125 are resolved
module: build,triaged
low
Critical
391,574,189
material-ui
The type of styled-component does not match for "strict: true"
When we use styled-components in TypeScript to inherit a Material-UI component, the props type does not match. How do I overwrite a style with styled-components?

There is a sample on how to inherit a component with styled-components at the following link, but this sample generates an error when `strict: true` is set in tsconfig:

https://material-ui.com/guides/interoperability/#styled-components

```jsx
import React from 'react';
import styled from 'styled-components';
import Button from '@material-ui/core/Button';

const StyledButton = styled(Button)`
  background: linear-gradient(45deg, #fe6b8b 30%, #ff8e53 90%);
  border-radius: 3px;
  border: 0;
  color: white;
  height: 48px;
  padding: 0 30px;
  box-shadow: 0 3px 5px 2px rgba(255, 105, 135, .3);
`;

function StyledComponentsButton() {
  return (
    <div>
      <Button>
        Material-UI
      </Button>
      <StyledButton>
        Styled Components
      </StyledButton>
    </div>
  );
}

export default StyledComponentsButton;
```

The way I tried is to cast the component with React.SFC and then pass the props type of the Material-UI component to its generics:

```jsx
import TextField, { TextFieldProps } from "@material-ui/core/TextField";

export const StyledTextField = styled(TextField as React.SFC<TextFieldProps>)`
  // some style
`;
```

In this case, the following error will be displayed :(

```
Type '{ disabled: true; label: string; defaultValue: string; variant: "outlined"; }' is missing the following properties from type 'Pick<Pick<TextFieldProps.....
```

I was able to solve it with the following method:

```jsx
interface TextFieldProps {
  disabled: boolean;
  label: string;
  defaultValue: string;
  variant: string;
}

export const StyledTextField = styled(TextField as React.SFC<TextFieldProps>)`
  // some style
`;
```

However, I do not think this is a nice implementation, because you have to re-implement the interface every time the required props change.

---

@material-ui/core version: 3.6.2
@material-ui/icons version: 3.0.1
styled-components version: 4.1.2
typescript version: 3.2.2
macOS: 10.13.6
typescript
medium
Critical