id | repo | title | body | labels | priority | severity |
---|---|---|---|---|---|---|
411,622,919 | go | x/crypto/ed25519: Use sync.Pool to reuse hash.Hash | In the [Ed25519](https://github.com/golang/crypto/tree/master/ed25519) package, every [signing](https://github.com/golang/crypto/blob/master/ed25519/ed25519.go#L130) and [verifying](https://github.com/golang/crypto/blob/master/ed25519/ed25519.go#L192) call creates a new SHA-512 `hash.Hash` by calling `sha512.New`.
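A minimal sketch of the idea (a hypothetical helper, not the package's actual code; names are illustrative):
```go
package ed25519pool

import (
	"crypto/sha512"
	"hash"
	"sync"
)

// sha512Pool keeps SHA-512 states around so Sign/Verify do not have to
// allocate a fresh one via sha512.New on every call.
var sha512Pool = sync.Pool{
	New: func() interface{} { return sha512.New() },
}

// withSHA512 borrows a hash from the pool, hands a reset instance to f,
// and returns it to the pool afterwards.
func withSHA512(f func(h hash.Hash)) {
	h := sha512Pool.Get().(hash.Hash)
	h.Reset()
	f(h)
	sha512Pool.Put(h)
}
```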
In scenarios where those functions are called many times (e.g., in an HTTP handler), using a `sync.Pool` along these lines to store and reuse the `hash.Hash` instances would improve both performance and memory usage. | Performance,NeedsInvestigation | low | Major |
411,668,140 | pytorch | [feature request] build and move distributions w/ device and/or dtype |
cc @malfet @seemethere @walterddr @fritzo @neerajprad @alicanb @nikitaved | module: build,module: distributions,triaged | low | Minor |
411,703,846 | rust | rust-1.32.0: Could not compile `core` when using system llvm-8.0.0 RC2 | In Gentoo Linux, we started to test llvm-8.0.0 RC2. We noticed that rust-1.32 won't build against llvm-8.0.0 RC2. After backporting commit df0466d0bb807a7266cc8ac9931cd43b3e84b62e, at least rustc_llvm and rustc_codegen_llvm passed, but now `core` is failing:
```
INFO 2019-02-18T03:12:02Z: rustc_mir::build: fn_id DefId(0/0:5977 ~ core[faab]::coresimd[0]::x86_64[0]::rdrand[0]::_rdseed64_step[0]) has attrs Borrowed([Attribute { id: AttrId(42618), style: Outer, path: path(doc), tokens: TokenStream { kind: Stream([TokenStream { kind: Tree(Token(src/libcore/../stdsimd/coresimd/x86_64/rdrand.rs:28:1: 28:80, Eq)) }, TokenStream { kind: Tree(Token(src/libcore/../stdsimd/coresimd/x86_64/rdrand.rs:28:1: 28:80, Literal(Str_(/// Read a 64-bit NIST SP800-90B and SP800-90C compliant random value and store), None))) }]) }, is_sugared_doc: true, span: src/libcore/../stdsimd/coresimd/x86_64/rdrand.rs:28:1: 28:80 }, Attribute { id: AttrId(42619), style: Outer, path: path(doc), tokens: TokenStream { kind: Stream([TokenStream { kind: Tree(Token(src/libcore/../stdsimd/coresimd/x86_64/rdrand.rs:29:1: 29:71, Eq)) }, TokenStream { kind: Tree(Token(src/libcore/../stdsimd/coresimd/x86_64/rdrand.rs:29:1: 29:71, Literal(Str_(/// in val. Return 1 if a random value was generated, and 0 otherwise.), None))) }]) }, is_sugared_doc: true, span: src/libcore/../stdsimd/coresimd/x86_64/rdrand.rs:29:1: 29:71 }, Attribute { id: AttrId(42620), style: Outer, path: path(doc), tokens: TokenStream { kind: Stream([TokenStream { kind: Tree(Token(src/libcore/../stdsimd/coresimd/x86_64/rdrand.rs:30:1: 30:4, Eq)) }, TokenStream { kind: Tree(Token(src/libcore/../stdsimd/coresimd/x86_64/rdrand.rs:30:1: 30:4, Literal(Str_(///), None))) }]) }, is_sugared_doc: true, span: src/libcore/../stdsimd/coresimd/x86_64/rdrand.rs:30:1: 30:4 }, Attribute { id: AttrId(42621), style: Outer, path: path(doc), tokens: TokenStream { kind: Stream([TokenStream { kind: Tree(Token(src/libcore/../stdsimd/coresimd/x86_64/rdrand.rs:31:1: 31:111, Eq)) }, TokenStream { kind: Tree(Token(src/libcore/../stdsimd/coresimd/x86_64/rdrand.rs:31:1: 31:111, Literal(Str_(/// [Intel\'s documentation](https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_rdseed64_step)), None))) }]) }, is_sugared_doc: true, span: src/libcore/../stdsimd/coresimd/x86_64/rdrand.rs:31:1: 31:111 }, Attribute { id: AttrId(42622), style: Outer, path: path(inline), tokens: TokenStream { kind: Empty }, is_sugared_doc: false, span: src/libcore/../stdsimd/coresimd/x86_64/rdrand.rs:32:1: 32:10 }, Attribute { id: AttrId(42623), style: Outer, path: path(target_feature), tokens: TokenStream { kind: Tree(Delimited(DelimSpan { open: src/libcore/../stdsimd/coresimd/x86_64/rdrand.rs:33:17: 33:18, close: src/libcore/../stdsimd/coresimd/x86_64/rdrand.rs:33:35: 33:36 }, Delimited { delim: Paren, tts: ThinTokenStream(Some([TokenStream { kind: Tree(Token(src/libcore/../stdsimd/coresimd/x86_64/rdrand.rs:33:18: 33:24, Ident(enable#0, false))) }, TokenStream { kind: Tree(Token(src/libcore/../stdsimd/coresimd/x86_64/rdrand.rs:33:25: 33:26, Eq)) }, TokenStream { kind: Tree(Token(src/libcore/../stdsimd/coresimd/x86_64/rdrand.rs:33:27: 33:35, Literal(Str_(rdseed), None))) }])) })) }, is_sugared_doc: false, span: src/libcore/../stdsimd/coresimd/x86_64/rdrand.rs:33:1: 33:37 }, Attribute { id: AttrId(42625), style: Outer, path: path(stable), tokens: TokenStream { kind: Tree(Delimited(DelimSpan { open: src/libcore/../stdsimd/coresimd/x86_64/rdrand.rs:35:9: 35:10, close: src/libcore/../stdsimd/coresimd/x86_64/rdrand.rs:35:48: 35:49 }, Delimited { delim: Paren, tts: ThinTokenStream(Some([TokenStream { kind: Tree(Token(src/libcore/../stdsimd/coresimd/x86_64/rdrand.rs:35:10: 35:17, Ident(feature#0, false))) }, TokenStream { kind: 
Tree(Token(src/libcore/../stdsimd/coresimd/x86_64/rdrand.rs:35:18: 35:19, Eq)) }, TokenStream { kind: Tree(Token(src/libcore/../stdsimd/coresimd/x86_64/rdrand.rs:35:20: 35:30, Literal(Str_(simd_x86), None))) }, TokenStream { kind: Tree(Token(src/libcore/../stdsimd/coresimd/x86_64/rdrand.rs:35:30: 35:31, Comma)) }, TokenStream { kind: Tree(Token(src/libcore/../stdsimd/coresimd/x86_64/rdrand.rs:35:32: 35:37, Ident(since#0, false))) }, TokenStream { kind: Tree(Token(src/libcore/../stdsimd/coresimd/x86_64/rdrand.rs:35:38: 35:39, Eq)) }, TokenStream { kind: Tree(Token(src/libcore/../stdsimd/coresimd/x86_64/rdrand.rs:35:40: 35:48, Literal(Str_(1.27.0), None))) }])) })) }, is_sugared_doc: false, span: src/libcore/../stdsimd/coresimd/x86_64/rdrand.rs:35:1: 35:50 }])
INFO 2019-02-18T03:12:03Z: rustc_save_analysis: Dumping crate core
INFO 2019-02-18T03:12:03Z: rustc_save_analysis: Writing output to /var/tmp/portage/dev-lang/rust-1.32.0/work/rustc-1.32.0-src/build/x86_64-unknown-linux-gnu/stage1-std/x86_64-unknown-linux-gnu/release/deps/save-analysis/libcore-a33876dd01369f78.json
INFO 2019-02-18T03:12:09Z: cargo::core::compiler::job_queue: end: core v0.0.0 (/var/tmp/portage/dev-lang/rust-1.32.0/work/rustc-1.32.0-src/src/libcore) => Target(lib)/Profile(release) => Target
DEBUG 2019-02-18T03:12:09Z: cargo: exit_with_error; err=CliError { error: Some(ProcessError { desc: "process didn\'t exit successfully: `/var/tmp/portage/dev-lang/rust-1.32.0/work/rustc-1.32.0-src/build/bootstrap/debug/rustc --crate-name core src/libcore/lib.rs --color always --error-format json --crate-type lib --emit=dep-info,link -C opt-level=2 -C metadata=a33876dd01369f78 -C extra-filename=-a33876dd01369f78 --out-dir /var/tmp/portage/dev-lang/rust-1.32.0/work/rustc-1.32.0-src/build/x86_64-unknown-linux-gnu/stage1-std/x86_64-unknown-linux-gnu/release/deps --target x86_64-unknown-linux-gnu -L dependency=/var/tmp/portage/dev-lang/rust-1.32.0/work/rustc-1.32.0-src/build/x86_64-unknown-linux-gnu/stage1-std/x86_64-unknown-linux-gnu/release/deps -L dependency=/var/tmp/portage/dev-lang/rust-1.32.0/work/rustc-1.32.0-src/build/x86_64-unknown-linux-gnu/stage1-std/release/deps` (signal: 11, SIGSEGV: invalid memory reference)", exit: Some(ExitStatus(ExitStatus(139))), output: None }
stack backtrace:
0: 0x564f44cc0fbd - backtrace::backtrace::trace::h7c9932eb13d3dfdc
1: 0x564f44cbf982 - backtrace::capture::Backtrace::new_unresolved::h78000998b5605dc1
2: 0x564f44cbf5b5 - failure::backtrace::internal::InternalBacktrace::new::hc2a294bb7e196ce2
3: 0x564f44cbf1d1 - <failure::backtrace::Backtrace as core::default::Default>::default::h8191b8bf5187674a
4: 0x564f44cbf209 - failure::backtrace::Backtrace::new::h0c102655b6a92d06
5: 0x564f448f2b0c - cargo::util::process_builder::ProcessBuilder::exec_with_streaming::h8f9b643f5a3963e1
6: 0x564f44a15b42 - cargo::core::compiler::Executor::exec_json::h3681d0a586c5b617
7: 0x564f44a1293a - <F as cargo::core::compiler::job::FnBox<A, R>>::call_box::h3c61d508c5b85b14
8: 0x564f448a2d9e - <F as cargo::core::compiler::job::FnBox<A, R>>::call_box::ha0055cfba8e2fd59
9: 0x564f448a2d9e - <F as cargo::core::compiler::job::FnBox<A, R>>::call_box::ha0055cfba8e2fd59
10: 0x564f448a2e6b - cargo::core::compiler::job::Job::run::hcbfbbbf689ae72ea
11: 0x564f4480620b - <F as crossbeam_utils::thread::FnBox<T>>::call_box::h3c70c36fd9b941e0
12: 0x564f44f2ff49 - __rust_maybe_catch_panic
at libpanic_unwind/lib.rs:102
13: 0x564f4480586c - <F as alloc::boxed::FnBox<A>>::call_box::h5a84c77f8bb490fe
14: 0x564f44f19ded - <alloc::boxed::Box<(dyn alloc::boxed::FnBox<A, Output=R> + 'a)> as core::ops::function::FnOnce<A>>::call_once::h9d902c911a417e39
at liballoc/boxed.rs:682
- std::sys_common::thread::start_thread::h44127e03e78ca137
at libstd/sys_common/thread.rs:24
15: 0x564f44f06515 - std::sys::unix::thread::Thread::new::thread_start::h8f17b97f2223146c
at libstd/sys/unix/thread.rs:90
16: 0x7f4ac215f3f2 - start_thread
at /var/tmp/portage/sys-libs/glibc-2.28-r5/work/glibc-2.28/nptl/pthread_create.c:486
17: 0x7f4ac20704ae - __clone
18: 0x0 - <unknown>
stack backtrace:
0: 0x564f44cc0fbd - backtrace::backtrace::trace::h7c9932eb13d3dfdc
1: 0x564f44cbf982 - backtrace::capture::Backtrace::new_unresolved::h78000998b5605dc1
2: 0x564f44cbf5b5 - failure::backtrace::internal::InternalBacktrace::new::hc2a294bb7e196ce2
3: 0x564f44cbf1d1 - <failure::backtrace::Backtrace as core::default::Default>::default::h8191b8bf5187674a
4: 0x564f44cbf209 - failure::backtrace::Backtrace::new::h0c102655b6a92d06
5: 0x564f448e2495 - <core::result::Result<T, E> as cargo::util::errors::CargoResultExt<T, E>>::chain_err::h54dd35d3e7c5d3b5
6: 0x564f44a1295a - <F as cargo::core::compiler::job::FnBox<A, R>>::call_box::h3c61d508c5b85b14
7: 0x564f448a2d9e - <F as cargo::core::compiler::job::FnBox<A, R>>::call_box::ha0055cfba8e2fd59
8: 0x564f448a2d9e - <F as cargo::core::compiler::job::FnBox<A, R>>::call_box::ha0055cfba8e2fd59
9: 0x564f448a2e6b - cargo::core::compiler::job::Job::run::hcbfbbbf689ae72ea
10: 0x564f4480620b - <F as crossbeam_utils::thread::FnBox<T>>::call_box::h3c70c36fd9b941e0
11: 0x564f44f2ff49 - __rust_maybe_catch_panic
at libpanic_unwind/lib.rs:102
12: 0x564f4480586c - <F as alloc::boxed::FnBox<A>>::call_box::h5a84c77f8bb490fe
13: 0x564f44f19ded - <alloc::boxed::Box<(dyn alloc::boxed::FnBox<A, Output=R> + 'a)> as core::ops::function::FnOnce<A>>::call_once::h9d902c911a417e39
at liballoc/boxed.rs:682
- std::sys_common::thread::start_thread::h44127e03e78ca137
at libstd/sys_common/thread.rs:24
14: 0x564f44f06515 - std::sys::unix::thread::Thread::new::thread_start::h8f17b97f2223146c
at libstd/sys/unix/thread.rs:90
15: 0x7f4ac215f3f2 - start_thread
at /var/tmp/portage/sys-libs/glibc-2.28-r5/work/glibc-2.28/nptl/pthread_create.c:486
16: 0x7f4ac20704ae - __clone
17: 0x0 - <unknown>
Could not compile `core`.), unknown: false, exit_code: 101 }
error: Could not compile `core`.
To learn more, run the command again with --verbose.
command did not execute successfully: "/var/tmp/portage/dev-lang/rust-1.32.0/work/rust-stage0/bin/cargo" "build" "--target" "x86_64-unknown-linux-gnu" "-j" "6" "--release" "--locked" "--frozen" "--features" "panic-unwind backtrace" "--manifest-path" "/var/tmp/portage/dev-lang/rust-1.32.0/work/rustc-1.32.0-src/src/libstd/Cargo.toml" "--message-format" "json"
expected success, got: exit code: 101
Traceback (most recent call last):
File "./x.py", line 20, in <module>
bootstrap.main()
File "/var/tmp/portage/dev-lang/rust-1.32.0/work/rustc-1.32.0-src/src/bootstrap/bootstrap.py", line 853, in main
bootstrap(help_triggered)
File "/var/tmp/portage/dev-lang/rust-1.32.0/work/rustc-1.32.0-src/src/bootstrap/bootstrap.py", line 839, in bootstrap
run(args, env=env, verbose=build.verbose)
File "/var/tmp/portage/dev-lang/rust-1.32.0/work/rustc-1.32.0-src/src/bootstrap/bootstrap.py", line 151, in run
raise RuntimeError(err)
RuntimeError: failed to run: /var/tmp/portage/dev-lang/rust-1.32.0/work/rustc-1.32.0-src/build/bootstrap/debug/bootstrap build --verbose --config=/var/tmp/portage/dev-lang/rust-1.32.0/work/rustc-1.32.0-src/config.toml -j6 --exclude src/tools/miri
```
Are we missing another patch?
| I-crash,A-LLVM,O-linux,T-compiler | low | Critical |
411,712,629 | flutter | Cannot change the binding address of the Observatory (custom embedding) | The binding address for the observatory appears to be hard-coded to use localhost. There also does not appear to be a way to force binding to a different address.
As I understand it, this issue is normally solved with port forwarding when working with Android/iOS devices.
This will not work on a device where forwarding is not an option. | engine,dependency: dart,P2,team-engine,triaged-engine | low | Major |
411,733,524 | create-react-app | Rust (wasm) support | I've just started to play around with this https://github.com/thomashorrobin/create-react-app-rust `create-react-app` fork that adds rust support. I had some problems with typescript using it and decide to `rebase` on top of the latest `create-react-app` to not waste time debugging for something that's already fixed here. While doing this I realised that the actual changes for rust seem minimal. I'm not sure if there was ever a discussion about folding this fork into this project. I couldn't find anything in the issues to that extent.
My question is: would it be of interest to include Rust (via WASM) support in this package directly, or maintain the separate fork? If there's interest I can `rebase` again and prepare the fork to be submitted as a PR. Thanks! | issue: proposal | medium | Critical |
411,734,927 | TypeScript | Docs: Function Parameter Bivariance | The [official documentation about bivariance](https://www.typescriptlang.org/docs/handbook/type-compatibility.html#function-parameter-bivariance) is too terse for newcomers to follow.
```
enum EventType { Mouse, Keyboard }
interface Event { timestamp: number; }
interface MouseEvent extends Event { x: number; y: number }
interface KeyEvent extends Event { keyCode: number }
function listenEvent(eventType: EventType, handler: (n: Event) => void) {
/* ... */
}
// Unsound, but useful and common
listenEvent(EventType.Mouse, (e: MouseEvent) => console.log(e.x + "," + e.y));
// Undesirable alternatives in presence of soundness
listenEvent(EventType.Mouse, (e: Event) => console.log((<MouseEvent>e).x + "," + (<MouseEvent>e).y));
listenEvent(EventType.Mouse, <(e: Event) => void>((e: MouseEvent) => console.log(e.x + "," + e.y)));
// Still disallowed (clear error). Type safety enforced for wholly incompatible types
listenEvent(EventType.Mouse, (e: number) => console.log(e));
```
The sample code is also confusing; after some pain I found these clearer examples:
```ts
// Contravariance
class Example {
foo(maybe: number | undefined) { }
str(str: string) { }
compare(ex: Example) { }
}
class Override extends Example {
foo(maybe: number) { } // Bad: should have error.
str(str: 'override') { } // Bad: should have error.
compare(ex: Override) { } // Bad: should have error.
}
```
and
```ts
// Bivariance
interface Comparer<T> {
compare(a: T, b: T): number;
}
// Assumed declarations (not in the original snippet) so the example stands alone:
interface Animal { name: string }
interface Dog extends Animal { breed: string }
declare let animalComparer: Comparer<Animal>;
declare let dogComparer: Comparer<Dog>;
animalComparer = dogComparer; // Ok because of bivariance
dogComparer = animalComparer; // Ok
```
Related references:
- [Strict function types](https://github.com/Microsoft/TypeScript/pull/18654)
- [Overridden method parameters are not checked for parameter contravariance](https://github.com/Microsoft/TypeScript/issues/22156)
The links above are really helpful and clearly illustrate **bivariance**. | Docs | low | Critical |
411,747,247 | javascript-algorithms | Could we have just one way of doing the swapping to have better consistency across the code | Like how we have here https://github.com/trekhleb/javascript-algorithms/blob/master/src/algorithms/sorting/bubble-sort/BubbleSort.js#L22 instead of
https://github.com/trekhleb/javascript-algorithms/blob/master/src/algorithms/sorting/shell-sort/ShellSort.js#L24 | enhancement | low | Minor |
411,776,494 | angular | Option to leave comments in templates |
# 🚀 feature request
### Relevant Package
This feature request is for @angular/core
### Description
Comments are removed from the rendered Angular view. We are using another framework alongside Angular whose bindings live inside HTML comments. Our whole code base is written with that framework, and we cannot afford to stop development to completely migrate all our code to Angular, which is why we want to do it in small iterations.
Stackblitz sample: https://stackblitz.com/edit/angular-zhw3az?file=src%2Fapp%2Fapp.component.ts

### Describe the solution you'd like
An option within the Component decorator to preserve comments, for example:
@Component({
selector: 'my-app',
templateUrl: './app.component.html',
styleUrls: [ './app.component.css' ],
**HTMLComments: true**
})
### Describe alternatives you've considered
The alternative would be to completely rewrite all our views, changing those bindings. | feature,area: core,core: basic template syntax,feature: under consideration | low | Major |
411,791,607 | TypeScript | unknown types should not be able to concat with strings |
**TypeScript Version:** 3.3.0
**Code**
```ts
declare const a: unknown;
const b = a + 123;
const c = a + '123';
const d = `${a}123`;
```
**Expected behavior:**
All the declarations of variables `b` `c` `d` should have errors.
**Actual behavior:**
Only declaration of variable `b` have error.
**Playground Link:**
https://www.typescriptlang.org/play/index.html#src=declare%20const%20a%3A%20unknown%3B%0Aconst%20b%20%3D%20a%20%2B%20123%3B%0Aconst%20c%20%3D%20a%20%2B%20'123'%3B%0Aconst%20d%20%3D%20%60%24%7Ba%7D123%60%3B%0A
Although we won't get runtime errors in this case, it may produce unpredictable results, e.g. "[[Object]]123", which is unexpected, so I think TypeScript should not allow unknown types to be concatenated with strings. | Suggestion,In Discussion | low | Critical |
411,793,259 | TypeScript | How to import generic function type declaration from one javascript file to another? |
**TypeScript Version:** 3.3.3
**Search Terms:** import, typedef, callback jsCheck
I have an issue with importing a generic function type declared with `@callback` from another file.
Here is example:
In file1.js I've defined a generic callback type with template parameters:
```js
// file1.js
/**
* @template T
* @template {Error} E
* @callback CallbackWithResult
* @param {E|null} error
* @param {T} [result]
*/
```
And in another file I've imported that declaration
```js
// file2.js
/** @typedef {import('./file1').CallbackWithResult} CallbackWithResult */
/**
* @param {CallbackWithResult<number>} callback
*/
function doSomething(callback) {
callback(null, 42);
}
```
Attempting to check file2.js gives the following errors:
```bash
file2.js:4:15 - error TS2314: Generic type 'CallbackWithResult' requires 2 type argument(s).
4 /** @typedef {import('./file1').CallbackWithResult} CallbackWithResult */
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
file2.js:7:12 - error TS2315: Type 'CallbackWithResult' is not generic.
7 * @param {CallbackWithResult<number>} callback
~~~~~~~~~~~~~~~~~~~~~~~~~~
```
The question is: how does one properly import generic declarations without defining another set of template variables? | Bug,Help Wanted | low | Critical |
411,819,945 | pytorch | Proposal: Add __tensor_wrap__ method similar to numpy __array_wrap__ | ## 🚀 Feature
Add the ability to subclass torch.Tensor so that you can use `isinstance(tensor, CustomTensor)` to determine the "context" of the data stored in the subclass object. I think #9515 describes the need for **dtype** promotion logic, but this proposal is for **python type** promotion logic.
The `__array_wrap__` method already exists in NumPy and ensures that ufuncs like the following preserve the custom tensor type:
```python
custom_tensor = CustomTensor(torch.tensor([1, 2, 3, 4]))
new_tensor = np.add(custom_tensor, 1.0)  # -> CustomTensor
new_tensor = np.add(1.0, custom_tensor)  # -> CustomTensor (Hmmm...)
isinstance(new_tensor, CustomTensor) # returns True
```
**The __tensor_wrap__ method does not exist**. As a result, the custom type information is easily lost:
custom_tensor = CustomTensor(torch.tensor([1, 2, 3, 4]))
new_tensor = custom_tensor.add(1.0)  # -> Tensor
isinstance(new_tensor, CustomTensor) # returns False
## Motivation
The torch.Tensor class provides a dynamically typed wrapper of statically typed Torch Classes (FloatTensor, DoubleTensor, HalfTensor, etc) using the numpy dtype. While this should save us from ever wanting to subclass the Tensor class, it would be nice to use the subclass of a Tensor to provide custom methods specific to the context of the data.
For example:
```python
class TimeDomainTensor(torch.Tensor):
    def __repr__(self):
        return 'Time Information:\n' + super(TimeDomainTensor, self).__repr__()
```
```python
class FrequencyDomainTensor(torch.Tensor):
    def __repr__(self):
        return 'Frequency Information:\n' + super(FrequencyDomainTensor, self).__repr__()
```
```python
class SurfaceTensor(torch.Tensor):
    def __repr__(self):
        return 'Surface Information:\n' + super(SurfaceTensor, self).__repr__()
```
## Pitch
Custom subclassing is already utilized by the torch.nn.Parameter Class, so most of the heavy lifting is already done. Note that performing a math operation on a Parameter also downcasts the Parameter to a Tensor.
## Alternatives
Numpy provides the following methods for customizing the output behaviour of a ufunc.
1. def __array_wrap__(self, array):
A subclass can override what happens when executing numpy ufuncs **after** the ufunc is called.
2. def __array_ufunc__(ufunc, method, *inputs, **kwargs):
A subclass can override what happens when executing numpy ufuncs **before and after** the ufunc is called. New in version 1.13.
3. def __array_finalize__(self, obj):
The ability to insert any kind of custom attributes, events, io, etc. This may not work on GPU because this would open up the ability to define highly customized data containers.
a. explicit constructor call (obj = MySubClass(params)).
b. View casting
c. Creating new from template
4. Do Nothing:
Revert to functional implementation. Simply replace all custom_tensor.method(*args, **kwargs) with method(custom_tensor, *args, **kwargs).
The `__array_wrap__` method is sufficient for this problem because we want to cast the tensor **after** the function has been performed. The `__array_ufunc__` method would also work but could result in compatibility problems with older versions of NumPy. The `__array_finalize__` method may add too much customization, which could create a wide range of gotchas and incompatibility between CPU and GPU.
## Example
Sample code that partially works for add(custom_tensor, other):
```python
from collections import OrderedDict
import torch as th
def _rebuild_custom_tensor(data, requires_grad, backward_hooks):
    param = CustomTensor(data, requires_grad)
    # NB: This line exists only for backwards compatibility; the
    # general expectation is that backward_hooks is an empty
    # OrderedDict. See Note [Don't serialize hooks]
    param._backward_hooks = backward_hooks
    return param

class CustomTensor(th.Tensor):
    r"""A kind of Tensor that has time, freq as its inner-most dimensions.
    Arguments:
        data (Tensor): parameter tensor.
        requires_grad (bool, optional): if the parameter requires gradient. See
            :ref:`excluding-subgraphs` for more details. Default: `True`
    """
    def __new__(cls, data=None, requires_grad=False):
        if data is None:
            data = th.Tensor()
        return th.Tensor._make_subclass(cls, data, requires_grad)

    def __deepcopy__(self, memo):
        if id(self) in memo:
            return memo[id(self)]
        else:
            result = type(self)(self.data.clone(), self.requires_grad)
            memo[id(self)] = result
            return result

    def __repr__(self):
        return 'Custom Message:\n' + super(CustomTensor, self).__repr__()

    def __reduce_ex__(self, proto):
        # See Note [Don't serialize hooks]
        return (
            _rebuild_custom_tensor,
            (self.data, self.requires_grad, OrderedDict())
        )

    def add(self, value):
        tensor = super().add(value)
        return self.__tensor_wrap__(tensor)

    def __add__(self, other):
        tensor = super().__add__(other)
        return self.__tensor_wrap__(tensor)

    # Wrap Numpy array again in a suitable tensor when done, to support e.g.
    # `numpy.sin(tensor) -> tensor` or `numpy.greater(tensor, 0) -> ByteTensor`
    def __array_wrap__(self, array):
        if array.dtype == bool:
            # Workaround, torch has no built-in bool tensor
            array = array.astype('uint8')
            return th.from_numpy(array)
        else:
            return th.Tensor._make_subclass(CustomTensor, th.from_numpy(array), self.requires_grad)

    # Same as __array_wrap__ but for Tensors
    def __tensor_wrap__(self, tensor):
        return th.Tensor._make_subclass(CustomTensor, tensor, self.requires_grad)
```
Sample Results:
```python
import torch
from custom import CustomTensor
x = torch.tensor([1.0, 2.0, 3.0], dtype=torch.double)
custom_x = CustomTensor(x)
custom_x + 1 # -> CustomTensor
1 + custom_x # -> **Tensor** :(
torch.add(1, custom_x)  # -> **Tensor** :(
torch.add(custom_x, 1)  # -> **Tensor** :(
```
## Conclusions
The above example demonstrates how numpy goes beyond binary operator overloading to ensure that all operations return the highest derived subclass of the ndarray. This is not how pytorch is organized. I think #9515 describes the need for dtype promotion logic, but this proposal is for python type promotion logic.
| module: internals,triaged,module: numpy | medium | Critical |
411,890,803 | rust | "Conflicting crates" error should show the filenames of the conflicting crates | I just got this error when compiling rustc:
```
error[E0523]: found two different crates with name `libc` that are not distinguished by differing `-C metadata`. This will result in symbol conflicts between the two.
--> /home/r/.cargo/registry/src/github.com-1ecc6299db9ec823/rand-0.6.1/src/rngs/adapter/reseeding.rs:287:5
|
287 | extern crate libc;
| ^^^^^^^^^^^^^^^^^^
error: aborting due to previous error
For more information about this error, try `rustc --explain E0523`.
```
Usually when I get similar errors, it shows some filenames, and deleting some of the files involved unblocks the situation. However, for this particular error, for some reason it does not tell me in which files it found these conflicting `libc` crates. | C-enhancement,A-diagnostics,T-compiler | low | Critical |
411,932,684 | create-react-app | Do not rename existing README, rename the CRA README | I would like to propose that instead of renaming existing `README.md` to `README.old.md` if any, CRA should instead leave that file alone and drop its readme under the name `README-CRA.md` and not `README.md`.
The rationale is that if a readme already exists, it is most likely more valuable than the CRA readme, and that's what people should keep seeing after CRA is scaffolded into the repository. It's still useful to have the CRA readme around, but no one is going to rebuild their existing readme on top of the CRA one; they will just reference bits of the CRA one in their own.
So instead of forcing people to rename the CRA readme and to fix the incorrect rename of their already existing readme, let's create the CRA readme under a new name and respect the existing readme. | issue: proposal | low | Major |
411,940,625 | flutter | Get camera FOV | Being able to retrieve the horizontal and vertical FOV of a camera would be great. | c: new feature,p: camera,package,team-ecosystem,P3,triaged-ecosystem | low | Minor |
412,028,189 | pytorch | Generated `__init__.pyi` contains invalid default values | ## 🐛 Bug
Generated `__init__.pyi` contains default values, some of which are invalid; per typeshed [conventions](https://github.com/python/typeshed/blob/master/CONTRIBUTING.md#conventions) they should be replaced with `...`.
## To Reproduce
Install `torch` and navigate to `site-packages/torch/__init__.pyi`, e.g. search for `c10::nullopt` in it.
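A hypothetical before/after illustration of what the convention asks for (the signature below is invented for illustration, not copied from the generated stub):
```python
from typing import Optional

# Per typeshed conventions, defaults in a .pyi stub are written as a literal
# ellipsis rather than a concrete (or non-Python) expression:
def clamp(min: Optional[float] = ..., max: Optional[float] = ...) -> "Tensor": ...

# The generated stub instead reportedly leaks C++ expressions such as
# `c10::nullopt` into default-value position, which is not valid Python.
```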
## Expected behavior
All default values are specified as `...`
## Environment
PyTorch version: 1.0.1.post2
Is debug build: No
CUDA used to build PyTorch: 9.0.176
OS: Linux Mint 18.3 Sylvia
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609
CMake version: version 3.5.1
Python version: 3.7
Is CUDA available: No
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Versions of relevant libraries:
[pip] Could not collect
[conda] Could not collect
@t-vi and #12500
| triaged,module: codegen | low | Critical |
412,040,995 | TypeScript | Type declaration of CanvasImageSource in lib.webworker.d.ts is incomplete. |
The type declaration of CanvasImageSource in lib.webworker.d.ts is currently only `ImageBitmap`.
But that should be `HTMLOrSVGImageElement | HTMLVideoElement | HTMLCanvasElement | ImageBitmap`
https://developer.mozilla.org/en-US/docs/Web/API/CanvasImageSource
**TypeScript Version:** 3.4.0-dev.201190216
**Search Terms:**
CanvasImageSource
**Code**
```ts
let ctx = document.createElement("canvas").getContext("2d")
let img = new Image()
ctx.drawImage(img, 0, 0)
```
tsconfig.json
```json
{
"compilerOptions": {
"target": "es2018",
"module": "commonjs",
"lib": [
"webworker",
"es2018",
"dom",
]
}
}
```
whole project in https://github.com/Muratam/yabai-sample.ts
**Expected behavior:**
No error occurs when running `webpack` or `webpack --watch`.
**Actual behavior:**
The following error occurs during an incremental build with `webpack --watch`:
"TS2345: Argument of type 'HTMLImageElement' is not assignable to parameter of type 'ImageBitmap'.
Property 'close' is missing in type 'HTMLImageElement' but required in type 'ImageBitmap'"
**Playground Link:**
**Related Issues:**
#25468
In that issue, only TypeScript/src/lib/dom.generated.d.ts is referenced, but lib.webworker.d.ts also needs to be fixed. Please fix the type declaration of CanvasImageSource in lib.webworker.d.ts.
| Bug,Help Wanted,Domain: lib.d.ts | low | Critical |
412,064,347 | flutter | [flutter_tools] flutter channel should check out latest release on channel | So a weird thing happened. I've been on the `master` channel on my Windows machine, but wanted to check something out about the latest `dev` channel release.
I last ran `flutter upgrade` on this machine last Friday (so the bits were _fresh_), but when I ran `flutter channel dev`, it took me back to v0.5.7. I was expecting it to take me to the last dev channel version released.
As a result of it pushing me back to 0.5.7, `flutter upgrade` failed because of bug https://github.com/flutter/flutter/issues/14578.
I'd love to understand why switching channel could throw me back eight months in time? This might perhaps also explain why a good number of users show up in the beta channel running 0.9.4 when all other pre-1.0 releases have negligible usage.
```
[C:\Users\timsneath] flutter channel dev
╔════════════════════════════════════════════════════════════════════════════╗
║ A new version of Flutter is available! ║
║ ║
║ To update to the latest version, run "flutter upgrade". ║
╚════════════════════════════════════════════════════════════════════════════╝
Switching to flutter channel 'dev'...
git: From https://github.com/flutter/flutter
git: 7390cc5cd..035e0765c master -> origin/master
git: Your branch is behind 'origin/dev' by 1934 commits, and can be fast-forwarded.
git: (use "git pull" to update your local branch)
git: Switched to branch 'dev'
[C:\Users\timsneath] flutter upgrade
Checking Dart SDK version...
Downloading Dart SDK from Flutter engine 6fe748490d1772d72274bf8b9efb72c5c2160237...
Unzipping Dart SDK...
Updating flutter tool...
Upgrading Flutter from c:\git\flutter...
Checking out files: 21% (508/2404)
Checking out files: 22% (529/2404)
Checking out files: 23% (553/2404)
Checking out files: 24% (577/2404)
Checking out files: 25% (601/2404)
Checking out files: 26% (626/2404)
Checking out files: 27% (650/2404)
Checking out files: 28% (674/2404)
Checking out files: 29% (698/2404)
Checking out files: 30% (722/2404)
Checking out files: 31% (746/2404)
Checking out files: 32% (770/2404)
Checking out files: 33% (794/2404)
Checking out files: 34% (818/2404)
Checking out files: 35% (842/2404)
Checking out files: 36% (866/2404)
Checking out files: 37% (890/2404)
Checking out files: 38% (914/2404)
Checking out files: 38% (935/2404)
Checking out files: 39% (938/2404)
Checking out files: 40% (962/2404)
Checking out files: 41% (986/2404)
Checking out files: 42% (1010/2404)
Checking out files: 43% (1034/2404)
Checking out files: 44% (1058/2404)
Checking out files: 45% (1082/2404)
Checking out files: 46% (1106/2404)
Checking out files: 47% (1130/2404)
Checking out files: 48% (1154/2404)
Checking out files: 49% (1178/2404)
Checking out files: 50% (1202/2404)
Checking out files: 51% (1227/2404)
Checking out files: 52% (1251/2404)
Checking out files: 53% (1275/2404)
Checking out files: 54% (1299/2404)
Checking out files: 54% (1322/2404)
Checking out files: 55% (1323/2404)
Checking out files: 56% (1347/2404)
Checking out files: 57% (1371/2404)
Checking out files: 58% (1395/2404)
Checking out files: 59% (1419/2404)
Checking out files: 60% (1443/2404)
Checking out files: 61% (1467/2404)
Checking out files: 62% (1491/2404)
Checking out files: 63% (1515/2404)
Checking out files: 64% (1539/2404)
Checking out files: 65% (1563/2404)
Checking out files: 66% (1587/2404)
Checking out files: 67% (1611/2404)
Checking out files: 68% (1635/2404)
Checking out files: 69% (1659/2404)
Checking out files: 70% (1683/2404)
Checking out files: 71% (1707/2404)
Checking out files: 72% (1731/2404)
Checking out files: 72% (1737/2404)
Checking out files: 73% (1755/2404)
Checking out files: 74% (1779/2404)
Checking out files: 75% (1803/2404)
Checking out files: 76% (1828/2404)
Checking out files: 77% (1852/2404)
Checking out files: 78% (1876/2404)
Checking out files: 79% (1900/2404)
Checking out files: 80% (1924/2404)
Checking out files: 81% (1948/2404)
Checking out files: 82% (1972/2404)
Checking out files: 83% (1996/2404)
Checking out files: 84% (2020/2404)
Checking out files: 85% (2044/2404)
Checking out files: 86% (2068/2404)
Checking out files: 87% (2092/2404)
Checking out files: 88% (2116/2404)
Checking out files: 89% (2140/2404)
Checking out files: 90% (2164/2404)
Checking out files: 90% (2172/2404)
Checking out files: 91% (2188/2404)
Checking out files: 92% (2212/2404)
Checking out files: 93% (2236/2404)
Checking out files: 94% (2260/2404)
Checking out files: 95% (2284/2404)
Checking out files: 96% (2308/2404)
Checking out files: 97% (2332/2404)
Checking out files: 98% (2356/2404)
Checking out files: 99% (2380/2404)
Checking out files: 100% (2404/2404)
Checking out files: 100% (2404/2404), done.
Updating 66091f969..8661d8aec
.../src/main/res/drawable/launch_background.xml | 0
.../app/src/main/res/mipmap-hdpi/ic_launcher.png | Bin
.../app/src/main/res/mipmap-mdpi/ic_launcher.png | Bin
.../app/src/main/res/mipmap-xhdpi/ic_launcher.png | Bin
.../app/src/main/res/mipmap-xxhdpi/ic_launcher.png | Bin
.../src/main/res/mipmap-xxxhdpi/ic_launcher.png | Bin
.../android}/app/src/main/res/values/styles.xml | 0
.../macrobenchmarks/android}/gradle.properties | 0
.../macrobenchmarks/android}/settings.gradle | 0
.../ios}/Flutter/AppFrameworkInfo.plist | 0
.../macrobenchmarks/ios}/Flutter/Debug.xcconfig | 0
.../macrobenchmarks/ios}/Flutter/Release.xcconfig | 0
.../project.xcworkspace/contents.xcworkspacedata | 0
.../Runner.xcworkspace/contents.xcworkspacedata | 0
.../macrobenchmarks/ios}/Runner/AppDelegate.h | 0
.../macrobenchmarks/ios}/Runner/AppDelegate.m | 0
.../AppIcon.appiconset/Contents.json | 0
.../AppIcon.appiconset/[email protected] | Bin
.../AppIcon.appiconset/[email protected] | Bin
.../AppIcon.appiconset/[email protected] | Bin
.../AppIcon.appiconset/[email protected] | Bin
.../AppIcon.appiconset/[email protected] | Bin
.../AppIcon.appiconset/[email protected] | Bin
.../AppIcon.appiconset/[email protected] | Bin
.../AppIcon.appiconset/[email protected] | Bin
.../AppIcon.appiconset/[email protected] | Bin
.../AppIcon.appiconset/[email protected] | Bin
.../AppIcon.appiconset/[email protected] | Bin
.../AppIcon.appiconset/[email protected] | Bin
.../AppIcon.appiconset/[email protected] | Bin
.../AppIcon.appiconset/[email protected] | Bin
.../AppIcon.appiconset/[email protected] | Bin
.../LaunchImage.imageset/Contents.json | 0
.../LaunchImage.imageset/LaunchImage.png | Bin
.../LaunchImage.imageset/[email protected] | Bin
.../LaunchImage.imageset/[email protected] | Bin
.../Assets.xcassets/LaunchImage.imageset/README.md | 0
.../ios}/Runner/Base.lproj/LaunchScreen.storyboard | 0
.../ios}/Runner/Base.lproj/Main.storyboard | 0
.../benchmarks/macrobenchmarks/ios}/Runner/main.m | 0
.../app/src/main/res/mipmap-hdpi/ic_launcher.png | Bin 0 -> 544 bytes
.../app/src/main/res/mipmap-mdpi/ic_launcher.png | Bin 0 -> 442 bytes
.../app/src/main/res/mipmap-xhdpi/ic_launcher.png | Bin 0 -> 721 bytes
.../app/src/main/res/mipmap-xxhdpi/ic_launcher.png | Bin 0 -> 1031 bytes
.../src/main/res/mipmap-xxxhdpi/ic_launcher.png | Bin 0 -> 1443 bytes
.../android/gradle.properties | 0
.../android_views/ios}/.gitignore | 0
.../xcshareddata/xcschemes/Runner.xcscheme | 0
.../ios_host_app/Flutter/.gitkeep | 0
packages/flutter/test/widgets/inherited_model.dart | 0
.../.idea/libraries/Dart_SDK.xml.tmpl | 0
.../.idea/libraries/Flutter_for_Android.xml.tmpl | 0
.../.idea/runConfigurations/main_dart.xml.tmpl | 0
.../{create => app}/.idea/workspace.xml.tmpl | 0
.../java/androidIdentifier/MainActivity.java.tmpl | 0
.../app/src/main/res/mipmap-hdpi/ic_launcher.png | Bin 0 -> 544 bytes
.../app/src/main/res/mipmap-mdpi/ic_launcher.png | Bin 0 -> 442 bytes
.../app/src/main/res/mipmap-xhdpi/ic_launcher.png | Bin 0 -> 721 bytes
.../app/src/main/res/mipmap-xxhdpi/ic_launcher.png | Bin 0 -> 1031 bytes
.../src/main/res/mipmap-xxxhdpi/ic_launcher.png | Bin 0 -> 1443 bytes
.../ios-swift.tmpl/Runner/AppDelegate.swift | 0
.../ios-swift.tmpl/Runner/Runner-Bridging-Header.h | 0
.../ios.tmpl/Flutter/AppFrameworkInfo.plist} | 0
.../AppIcon.appiconset/[email protected] | Bin 0 -> 11112 bytes
.../AppIcon.appiconset/[email protected] | Bin 0 -> 564 bytes
.../AppIcon.appiconset/[email protected] | Bin 0 -> 1283 bytes
.../AppIcon.appiconset/[email protected] | Bin 0 -> 1588 bytes
.../AppIcon.appiconset/[email protected] | Bin 0 -> 1025 bytes
.../AppIcon.appiconset/[email protected] | Bin 0 -> 1716 bytes
.../AppIcon.appiconset/[email protected] | Bin 0 -> 1920 bytes
.../AppIcon.appiconset/[email protected] | Bin 0 -> 1283 bytes
.../AppIcon.appiconset/[email protected] | Bin 0 -> 1895 bytes
.../AppIcon.appiconset/[email protected] | Bin 0 -> 2665 bytes
.../AppIcon.appiconset/[email protected] | Bin 0 -> 2665 bytes
.../AppIcon.appiconset/[email protected] | Bin 0 -> 3831 bytes
.../AppIcon.appiconset/[email protected] | Bin 0 -> 1888 bytes
.../AppIcon.appiconset/[email protected] | Bin 0 -> 3294 bytes
.../AppIcon.appiconset/[email protected] | Bin 0 -> 3612 bytes
.../LaunchImage.imageset/LaunchImage.png | Bin 0 -> 68 bytes
.../LaunchImage.imageset/[email protected] | Bin 0 -> 68 bytes
.../LaunchImage.imageset/[email protected] | Bin 0 -> 68 bytes
.../ios.tmpl/Runner/Info.plist.tmpl | 0
.../templates/{create => app}/projectName.iml.tmpl | 0
.../src/main/res/mipmap-hdpi/ic_launcher.png | Bin 0 -> 544 bytes
.../Flutter.tmpl/src/main/AndroidManifest.xml.tmpl | 0
.../settings.gradle.copy.tmpl} | 0
.../common}/.idea/modules.xml.tmpl | 0
.../AppIcon.appiconset/[email protected] | Bin 0 -> 11112 bytes
.../AppIcon.appiconset/[email protected] | Bin 0 -> 564 bytes
.../AppIcon.appiconset/[email protected] | Bin 0 -> 1283 bytes
.../AppIcon.appiconset/[email protected] | Bin 0 -> 1588 bytes
.../AppIcon.appiconset/[email protected] | Bin 0 -> 1025 bytes
.../AppIcon.appiconset/[email protected] | Bin 0 -> 1716 bytes
.../AppIcon.appiconset/[email protected] | Bin 0 -> 1920 bytes
.../AppIcon.appiconset/[email protected] | Bin 0 -> 1283 bytes
.../AppIcon.appiconset/[email protected] | Bin 0 -> 1895 bytes
.../AppIcon.appiconset/[email protected] | Bin 0 -> 2665 bytes
.../AppIcon.appiconset/[email protected] | Bin 0 -> 2665 bytes
.../AppIcon.appiconset/[email protected] | Bin 0 -> 3831 bytes
.../AppIcon.appiconset/[email protected] | Bin 0 -> 1888 bytes
.../AppIcon.appiconset/[email protected] | Bin 0 -> 3294 bytes
.../AppIcon.appiconset/[email protected] | Bin 0 -> 3612 bytes
.../LaunchImage.imageset/LaunchImage.png | Bin 0 -> 68 bytes
.../LaunchImage.imageset/[email protected] | Bin 0 -> 68 bytes
.../LaunchImage.imageset/[email protected] | Bin 0 -> 68 bytes
.../Flutter.tmpl/README.md} | 0
2323 files changed, 240859 insertions(+), 100260 deletions(-)
mode change 100644 => 100755
Upgrading engine...
Checking Dart SDK version...
Downloading Dart SDK from Flutter engine 3757390fa4b00d2d261bfdf5182d2e87c9113ff9...
Unzipping Dart SDK...
Building flutter tool...
Running pub upgrade...
Downloading package sky_engine... 0.5s
Downloading common tools... 0.4s
Downloading windows-x64 tools... 0.6s
Downloading android-arm-profile/windows-x64 tools... 0.2s
Downloading android-arm-release/windows-x64 tools... 0.3s
Downloading android-arm64-profile/windows-x64 tools... 0.3s
Downloading android-arm64-release/windows-x64 tools... 0.3s
Downloading android-arm-dynamic-profile/windows-x64 tools... 0.3s
Downloading android-arm-dynamic-release/windows-x64 tools... 0.3s
Downloading android-arm64-dynamic-profile/windows-x64 tools... 0.5s
Downloading android-arm64-dynamic-release/windows-x64 tools... 0.3s
Downloading android-x86 tools... 0.5s
Downloading android-x64 tools... 0.5s
Downloading android-arm tools... 0.3s
Downloading android-arm-profile tools... 0.4s
Downloading android-arm-release tools... 0.3s
Downloading android-arm64 tools... 0.3s
Downloading android-arm64-profile tools... 0.3s
Downloading android-arm64-release tools... 0.4s
Downloading android-arm-dynamic-profile tools... 0.3s
Downloading android-arm-dynamic-release tools... 0.5s
Downloading android-arm64-dynamic-profile tools... 0.5s
Downloading android-arm64-dynamic-release tools... 0.4s
Flutter 0.5.7 • channel dev • https://github.com/flutter/flutter.git
Framework • revision 66091f9696 (8 months ago) • 2019-02-14 19:19:53 -0800
Engine • revision 3757390fa4
Tools • Dart 2.0.0-dev.63.0.flutter-4c9689c1d2
Running flutter doctor...
Doctor summary (to see all details, run flutter doctor -v):
[√] Flutter (Channel dev, v1.2.1, on Microsoft Windows [Version 10.0.16299.904], locale en-US)
[√] Android toolchain - develop for Android devices (Android SDK version 28.0.3)
[√] Android Studio (version 3.3)
[√] VS Code (version 1.31.1)
[!] Connected device
! No devices available
! Doctor found issues in 1 category.
'napshot_pathflutter_tools_dirscript_path"' is not recognized as an internal or external command,
operable program or batch file.
Error: Unable to create dart snapshot for flutter tool.
``` | tool,a: quality,P3,team-tool,triaged-tool | low | Critical |
412,075,252 | youtube-dl | Requesting new site support Stormwind | ## Please follow the guide below
- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)
- Use the *Preview* tab to see what your issue will actually look like
---
### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2019.02.18*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [ ] I've **verified** and **I assure** that I'm running youtube-dl **2019.02.18**
### Before submitting an *issue* make sure you have:
- [ ] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [ ] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
- [ ] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser
### What is the purpose of your *issue*?
- [ ] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
---
### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your *issue*
---
### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows:
Add the `-v` flag to **your command line** you run youtube-dl with (`youtube-dl -v <your command line>`), copy the **whole** output and insert it here. It should look similar to one below (replace it with **your** log inserted between triple ```):
```
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']
[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
[debug] youtube-dl version 2019.02.18
[debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
[debug] Proxy map: {}
...
<end of log>
```
---
### If the purpose of this *issue* is a *site support request* please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**):
https://campus.stormwindstudios.com/site/path/741#tab/path/activity/12235
If I open F12 (the debug console in Chrome) I can see the MP4 file, which is this link, after the video starts to play - https://embedwistia-a.akamaihd.net/deliveries/633490c1f0ba6e8b3d39d9ec899ff450bc9eb413/file.mp4
I could supply some credentials to log in if you would like to send me an email or private message.
---
### Description of your *issue*, suggested solution and other information
Explanation of your *issue* in arbitrary form goes here. Please make sure the [description is worded well enough to be understood](https://github.com/rg3/youtube-dl#is-the-description-of-the-issue-itself-sufficient). Provide as much context and examples as possible.
If work on your *issue* requires account credentials please provide them or explain how one can obtain them.
| account-needed | low | Critical |
412,075,434 | pytorch | Generic object to tensor dispatching | ## 🚀 Feature
It is a common issue when extending PyTorch for general usage that its functions allow only tensor inputs. However, some objects that represent a tensor are better stored not as a tensor, but as a custom Python object.
## Motivation
### Riemannian Optimization

Here is a brief motivating example. Consider a `UdV=X` matrix decomposition. It has two matrices on the Stiefel manifold and a vector with positive entries as a minimal description. One would like to work with this representation, as it is a nice way to represent low-rank matrices. It is currently impossible to pass custom objects to PyTorch functions.
```python
>>> torch.svd(myobject)
Traceback (most recent call last):
File "<input>", line 1, in <module>
TypeError: svd(): argument 'input' (position 1) must be Tensor, not object
```
The use case in the above example is interesting for the [geoopt](https://github.com/ferrine/geoopt) project, where I plan to support such manifolds in the future. I have no idea how to do it without effort from the PyTorch dev side.
### Probabilistic Programming
Another interesting example comes from probabilistic programming languages (like PyMC3, Edward, Pyro). Distribution objects describe the belief over all possible values, and a good user API assumes some abstraction there. In PyMC3 (with the help of Theano) we mixed tensors and distributions together, producing things like
```python
import pymc3 as pm
X, y = linear_training_data()
with pm.Model() as linear_model:
    weights = pm.Normal('weights', mu=0, sd=1)
    noise = pm.Gamma('noise', alpha=2, beta=1)
    y_observed = pm.Normal('y_observed',
                           mu=X.dot(weights),
                           sd=noise,
                           observed=y)
    prior = pm.sample_prior_predictive()
    posterior = pm.sample()
    posterior_pred = pm.sample_posterior_predictive(posterior)
```
You see, calling a distribution object yields a tensor-compatible object that can be used downstream. To be honest, we used a lot of hacks with `__new__` there due to the absence of an API like the one TensorFlow provides:
https://www.tensorflow.org/api_docs/python/tf/register_tensor_conversion_function
We also used this particular one in pymc4 design as following. In a different `context` that might be inference or mcmc sampling it changes the behaviour producing either conditional or unconditional distribution.
https://github.com/pymc-devs/pymc4/blob/master/pymc4/random_variables/random_variable.py#L119
Pyro, as far as I know, still relies on explicit `.sample(...)` calls:
```python
def scale(guess):
    weight = pyro.sample("weight", dist.Normal(guess, 1.0))
    return pyro.sample("measurement", dist.Normal(weight, 0.75))
```
And I expect Pyro developers would be happy to join this discussion.
----
So you can see how much benefit tensor conversion functions can bring: they allow third-party developers to build very neat PyTorch-compatible APIs, improving the user experience.
## Pitch
There should be a convention for a developer API to register tensor conversion functions, like TensorFlow does (it has a nice API for this at this point).
This is the TensorFlow convention:
```python
tf.register_tensor_conversion_function(
    base_type,
    conversion_func,
    priority=100
)
def conversion_func(value, dtype=None, name=None, as_ref=False):
    ...
```
where
* base_type: The base type or tuple of base types for all objects that conversion_func accepts.
* conversion_func: A function that converts instances of base_type to Tensor.
* priority: Optional integer that indicates the priority for applying this conversion function. Conversion functions with smaller priority values run earlier than conversion functions with larger priority values. Defaults to 100.
(full [page](https://www.tensorflow.org/api_docs/python/tf/register_tensor_conversion_function))
PyTorch does not use names, so it may omit some of this. The `as_ref` argument is about whether a view or a copy is returned. An object may own the tensor and may optionally provide itself to perform in-place operations (if passed to `out=myobject`, for example).
PyTorch also has so-called tensor options that include dtype, device, stride, etc. Thus the conventions from the [Tensor creation guide](https://pytorch.org/cppdocs/notes/tensor_creation.html) may make for a better API.
So, finally, this may look like:
```python
torch.register_tensor_conversion_function(
    base_type,
    conversion_func,
    priority=100
)
def conversion_func(
    value,
    # should either inherit from value variable created, or set its own option
    dtype=None,
    device=None,
    stride=None,
    requires_grad=None
):
    ...
```
Shape inference might be nice as well (if tensor creation is a heavy operation) to check if a resulting tensor is compatible with other arguments. This implies
```python
torch.register_tensor_shape_conversion_function(
    base_type,
    conversion_func,
    priority=100
)
def conversion_func(
    value
):
    ...
```
Shape conversion does not seem to be compulsory, as it can be inferred via `register_tensor_conversion_function`, but it is a good way to debug faster.
CC @soumith, @fritzo | feature,triaged | low | Critical |
412,078,034 | TypeScript | Missing comments after applying refactor/codefix |
**TypeScript Version:** 3.4.0-dev.201xxxxx
**Search Terms:** missing comment
Several refactors and codefixes cause comments in some positions to disappear after being applied.
- Refactor: Remove braces from arrow function
**Code**
```ts
const a = (a: number) => {/* missing */ return a; /* missing */};
const b = (a: number) => { /* missing */
return a; /* missing */
}
```
- Refactor: Move to a new file
**Code**
```ts
class A { // missing
a(s: string) { // missing
return s;
}
}
```
- Refactor: extract to function in global scope/extract to function in namespace 'Extract'
**Code**
```ts
namespace Extract {
class B { // missing
a(s: string) { // missing
return s;
}
}
}
```
- Codefix: Convert 'SomeType' to mapped object type
**Code**
```ts
type K = "foo" | "bar";
interface SomeType { /*missing*/
a: string;
/*missing*/ [prop: K] /*missing*/: any;
}
```
- Codefix: Change '/*missing*/ ?number' to 'number | null' (fix JSDoc Types)
**Code**
```ts
function fn(a: /*missing*/ ?number) {
return a;
}
```
- Codefix: Convert function to an ES2015 class
**Code**
```js
function fun(b) { // missing
this.b = b;
}
```
- Codefix: Convert to ES6 module
**Code**
```js
exports.f /*missing*/ = /*missing*/ function() { };
```
- Codefix: Initialize property 'foo' in the constructor
**Code**
```js
// @ts-check
class C {
constructor() {/*missing*/
//missing
}
prop = ()=>{ this.foo === 10 };
}
```
**Expected behavior:**
Comments should be present after applying refactor/codefix.
**Actual behavior:**
Comments are missing after refactor/codefix. | Bug,Help Wanted,Effort: Moderate | low | Critical |
412,085,958 | TypeScript | Boolean intellisense |
Intellisense is great for things like `typeof`, so why not make it great for boolean types too!
I think adding intellisense for booleans saves a bit of typing, and it still hasn't been implemented.
Look at this:
```ts
const a = (): boolean => true;
a() === // [true, false] intellisense prompt
const b = a();
if (b === /* [true, false] intellisense prompt*/)
```
I think this would be a great addition to Intellisense. Any feedback is appreciated. | Suggestion,Awaiting More Feedback | low | Major |
412,098,238 | TypeScript | Suggestion: support Node `require` in ES Modules |
## Search Terms
node require commonjs webpack conditional dynamic import
## Suggestion
Node style `require`'s are often used *inside ES Modules* for conditional imports. For example:
``` ts
const getSentry = () => {
if (__CLIENT__) {
return require('./client-sentry').default;
} else {
return require('./server-sentry').default;
}
};
export default getSentry();
```
In this example, `require` returns `any` as per the typings provide by `@types/node`.
https://github.com/DefinitelyTyped/DefinitelyTyped/blob/5f59f3dc7a0965fb1767cf13c670c633bbbbf0a8/types/node/globals.d.ts#L190-L193
In order to acquire types, we have to use `import` types:
``` ts
type ClientSentry = typeof import('./client-sentry');
type ServerSentry = typeof import('./server-sentry');
const getSentry = () => {
  if (__CLIENT__) {
    return (require('./client-sentry') as ClientSentry).default;
  } else {
    return (require('./server-sentry') as ServerSentry).default;
  }
};
export default getSentry();
```
I would like to suggest that TypeScript supports `require` calls inside of ES Modules (in Node envs). This way the workaround would not be necessary. Alternatively, perhaps TypeScript could make it possible for `@types/node` to improve the return type of `require`, e.g.
``` ts
interface NodeRequireFunction {
<Id extends string>(id: Id): typeof import(Id);
}
```
Note that `require` is desirable over dynamic imports because it is synchronous.
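For contrast, the typed alternative using dynamic `import()` forces the factory to become asynchronous (a sketch; `__CLIENT__` is the same build-time flag as above):
``` ts
const getSentry = async () => {
  if (__CLIENT__) {
    return (await import('./client-sentry')).default;
  }
  return (await import('./server-sentry')).default;
};

export default getSentry(); // now a Promise, which is the drawback
```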
## Checklist
My suggestion meets these guidelines:
* [ ] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,In Discussion | low | Critical |
412,109,984 | go | x/build: make GOROOT read-only before running tests | When #28387 is fixed, we should make sure that it doesn't regress.
Can we configure the builders to make GOROOT read-only before they invoke `run.bash`, and perhaps restore write permissions after the tests have finished?
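For example, a builder-side wrapper could look roughly like this (the paths and the restore step are illustrative, not an actual builder change):
```
chmod -R a-w "$GOROOT"
(cd "$GOROOT/src" && ./run.bash)
status=$?
chmod -R u+w "$GOROOT"
exit $status
```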
(CC @bradfitz @dmitshur) | Builders,NeedsFix | high | Critical |
412,113,113 | flutter | Support for AssetBundle to get each platforms bundle path | This is related to issue: https://github.com/flutter/flutter/issues/11892
But in particular, it would be reasonable if, from Dart (and perhaps via the AssetBundle API itself), we could get the path to each platform's bundle (NSBundle or AssetManager).
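Something along these lines, purely as a hypothetical sketch (neither `platformBundlePath` nor this API shape exists in Flutter today):
```dart
import 'dart:io';
import 'package:flutter/services.dart' show rootBundle;

// Hypothetical: ask the asset bundle where the platform bundle lives on disk.
Future<File> openBundledDatabase() async {
  final String bundlePath = await rootBundle.platformBundlePath; // does not exist today
  return File('$bundlePath/databases/app.db');
}
```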
This would be useful for packing other types of files (e.g. databases, PDFs) into the app and reading them easily. Right now, the only solution I'm aware of is copying each file manually from assets into each platform's documents directory and then using that as our "file repository". | c: new feature,framework,engine,a: assets,customer: crowd,P2,team-engine,triaged-engine | low | Critical |
412,118,335 | react-native | Support styles (borderX) for <Text /> on IOS | ## 🐛 Bug Report
iOS doesn't support the following styles for the `<Text/>` component:
```js
borderTopWidth\borderTopColor
borderBottomWidth\borderBottomColor
borderLeftWidth\borderLeftColor
borderRightWidth\borderRightColor
```
## To Reproduce
```js
<Text style={{borderWidth: 1}}>borderWidth</Text>
<Text style={{borderLeftWidth: 1}}>borderLeftWidth</Text>
<Text style={{borderRightWidth: 1}}>borderRightWidth</Text>
<Text style={{borderTopWidth: 1}}>borderTopWidth</Text>
<Text style={{borderBottomWidth: 1}}>borderBottomWidth</Text>
<Text style={{borderWidth: 1, borderBottomWidth: 0,}}>borderWidth\borderBottomWidth</Text>
```
## Expected Behavior
Full support for the style props (`borderLeftX`, `borderTopX`, ...), as on Android.
## Code Example
Example: https://snack.expo.io/@retyui/test-borders
### IOS example:

### Android

## Environment
```sh
React Native Environment Info:
System:
OS: Linux 4.15 Linux Mint 18.3 (Sylvia)
CPU: (8) x64 Intel(R) Core(TM) i5-8250U CPU @ 1.60GHz
Memory: 9.63 GB / 15.55 GB
Shell: 2.7.1 - /usr/bin/fish
Binaries:
Node: 10.15.1 - /usr/bin/node
Yarn: 1.13.0 - ~/.yarn/bin/yarn
npm: 6.7.0 - /usr/bin/npm
Watchman: 4.9.0 - /usr/local/bin/watchman
SDKs:
Android SDK:
API Levels: 23, 25, 26, 27, 28
Build Tools: 23.0.1, 26.0.2, 26.0.3, 27.0.3, 28.0.2, 28.0.3
System Images: android-23 | Intel x86 Atom_64, android-23 | Google APIs Intel x86 Atom_64, android-26 | Google APIs Intel x86 Atom, android-28 | Google APIs Intel x86 Atom, android-28 | Google APIs Intel x86 Atom_64
npmPackages:
react: 16.8.1 => 16.8.1
react-native: 0.59.0-rc.2 => 0.59.0-rc.2
npmGlobalPackages:
react-native-cli: 2.0.1
react-native-create-library: 3.1.2
```
| Platform: iOS,Issue: Author Provided Repro,Component: Text,Impact: Platform Disparity,Priority: Low,Bug | low | Critical |
412,153,328 | go | proposal: spec: let := support any l-value that = supports | (Pulling this specifically out of #377, the general `:=` bug)
This proposal is about permitting a struct field (and other such l-values) on the left side of `:=`, as long as there's a new variable being created (the usual `:=` rule).
That is, permit the `t.i` here:
```go
func foo() {
	var t struct { i int }
	t.i, x := 1, 2
	...
}
```
This should be backwards compatible with Go 1.
**Edit:** clarification: any l-value that `=` supports, not just struct fields.
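To make the scope concrete, a few examples that are not valid Go today but would be accepted under this proposal (as long as at least one new variable is introduced):
```go
func bar(p *int, s []int, m map[string]int) {
	*p, a := 1, 2   // pointer dereference on the left of :=
	s[0], b := 3, 4 // slice element on the left of :=
	m["k"], c := 5, 6 // map index on the left of :=
	_, _, _ = a, b, c
}
```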
/cc @griesemer @ianlancetaylor | LanguageChange,Proposal,LanguageChangeReview | high | Critical |
412,159,156 | flutter | Document which listener will win with HitTestBehavior.translucent | In the HitTestBehavior.translucent doc, document who will typically win if both the translucent parent and its descendent listen to the same event.
https://github.com/flutter/flutter/issues/18450 related | framework,d: api docs,f: gestures,P2,team-framework,triaged-framework | low | Minor |
412,164,163 | flutter | Local Android Profile & Release builds fail due to missing symbols referenced by the Dart VM. | Build flags are: `flutter/tools/gn --unopt --runtime-mode release --no-lto && flutter/tools/gn --unopt --runtime-mode release --no-lto --android && ninja -C out/android_release_unopt && ninja -C out/host_release_unopt`
There seem to be two issues:
1: `native_symbol_android.cc` seems to reference the missing native symbol `__cxa_demangle`
```
ninja: Entering directory `out/android_release_unopt'
[3090/3176] SOLINK ./libflutter.so
FAILED: libflutter.so libflutter.so.TOC lib.stripped/libflutter.so
/Users/chinmaygarde/goma/gomacc ../../buildtools/mac-x64/clang/bin/clang++ -shared -Wl,--version-script=/Users/chinmaygarde/VersionControlled/engine/src/flutter/shell/platform/android/android_exports.lst -Wl,--fatal-warnings -fPIC -Wl,-z,noexecstack -Wl,-z,now -Wl,-z,relro -Wl,-z,defs --gcc-toolchain=../../third_party/android_tools/ndk/toolchains/arm-linux-androideabi-4.9/prebuilt/darwin-x86_64 -Wl,--no-undefined -Wl,--exclude-libs,ALL -fuse-ld=lld -Wl,--icf=all --target=arm-linux-androideabi -Wl,--warn-shared-textrel -nostdlib --sysroot=/Users/chinmaygarde/VersionControlled/engine/src/third_party/android_tools/ndk/platforms/android-16/arch-arm -L../../third_party/android_tools/ndk/sources/cxx-stl/llvm-libc++/libs/armeabi-v7a -o ./libflutter.so -Wl,--build-id -Wl,-soname=libflutter.so @./libflutter.so.rsp && { /Users/chinmaygarde/goma/gomacc ../../third_party/android_tools/ndk/toolchains/arm-linux-androideabi-4.9/prebuilt/darwin-x86_64/bin/arm-linux-androideabi-readelf -d ./libflutter.so | grep SONAME ; /Users/chinmaygarde/goma/gomacc ../../third_party/android_tools/ndk/toolchains/arm-linux-androideabi-4.9/prebuilt/darwin-x86_64/bin/arm-linux-androideabi-nm -gD -f p ./libflutter.so | cut -f1-2 -d' '; } > ./libflutter.so.tmp && if ! cmp -s ./libflutter.so.tmp ./libflutter.so.TOC; then mv ./libflutter.so.tmp ./libflutter.so.TOC; fi && ../../third_party/android_tools/ndk/toolchains/arm-linux-androideabi-4.9/prebuilt/darwin-x86_64/bin/arm-linux-androideabi-strip --strip-unneeded -o lib.stripped/libflutter.so.tmp libflutter.so && if ! cmp -s lib.stripped/libflutter.so.tmp lib.stripped/libflutter.so; then mv lib.stripped/libflutter.so.tmp lib.stripped/libflutter.so; fi
ld.lld: error: undefined symbol: __cxa_demangle
>>> referenced by native_symbol_android.cc:33 (../../third_party/dart/runtime/vm/native_symbol_android.cc:33)
>>> obj/third_party/dart/runtime/vm/libdart_vm_precompiled_runtime.native_symbol_android.o:(dart::NativeSymbolResolver::LookupSymbolName(unsigned int, unsigned int*))
clang-8: error: linker command failed with exit code 1 (use -v to see invocation)
[3161/3176] CXX clang_x86/obj/third_party/dart/runtime/vm/compiler/aot/libdart_vm_nosnapshot_with_precompiler.precompiler.o^C
ninja: build stopped: interrupted by user.
```
2: `build/gn_run_binary.py` seems to be passed in the unrecognized flag `enable_asserts`
```
python ../../build/gn_run_binary.py clang_x86/gen_snapshot --snapshot_kind=core --enable_mirrors=false --vm_snapshot_data=/Users/chinmaygarde/VersionControlled/engine/src/out/android_release_unopt/gen/flutter/lib/snapshot/vm_isolate_snapshot.bin --vm_snapshot_instructions=/Users/chinmaygarde/VersionControlled/engine/src/out/android_release_unopt/gen/flutter/lib/snapshot/vm_snapshot_instructions.bin --isolate_snapshot_data=/Users/chinmaygarde/VersionControlled/engine/src/out/android_release_unopt/gen/flutter/lib/snapshot/isolate_snapshot.bin --isolate_snapshot_instructions=/Users/chinmaygarde/VersionControlled/engine/src/out/android_release_unopt/gen/flutter/lib/snapshot/isolate_snapshot_instructions.bin --enable_asserts /Users/chinmaygarde/VersionControlled/engine/src/out/android_release_unopt/flutter_patched_sdk/platform_strong.dill
Setting VM flags failed: Unrecognized flags: enable_asserts
```
| platform-android,engine,dependency: dart,P2,team-android,triaged-android | low | Critical |
412,188,044 | TypeScript | Incorrect typing for iterables in lib | **Search Terms:**
iterable return type
asynciterable return type
**Related Issues:**
https://github.com/Microsoft/TypeScript/issues/2983 - describes how the iterables types are not inferred correctly for generators. Fixing this current issue would be a step towards making that issue resolvable as well.
**The Problem**
The type definitions for iterable and asynciterable do not match the specification.
According to the specification, when `IteratorResult` has `done: true`, the `value` is the return value of the iterator. This is not the same type as an element of the iterator. When implementing an iterator or async iterator "by hand", you get spurious type errors. For example:
```
const counter = (offset = 0, limit = 10) => {
  let x = 0;
  const a: AsyncIterator<number> = {
    next: async () => {
      if (x < 5) {
        x++;
        return { x, done: false };
      }
      return { done: true };
    },
  };
};
export default counter;
```
Gives errors:
```
src/testing/temp.ts:4:5 - error TS2322: Type '() => Promise<{ x: number; done: false; } | { done: true; x?: undefined; }>' is not assignable to type '(value?: any) => Promise<IteratorResult<number>>'.
Type 'Promise<{ x: number; done: false; } | { done: true; x?: undefined; }>' is not assignable to type 'Promise<IteratorResult<number>>'.
Type '{ x: number; done: false; } | { done: true; x?: undefined; }' is not assignable to type 'IteratorResult<number>'.
Property 'value' is missing in type '{ x: number; done: false; }' but required in type 'IteratorResult<number>'.
4 next: async () => {
~~~~
```
Thus, the types for iterables and async iterables should look more like this:
```
interface IteratorValueResult<T> {
  done?: false;
  value: T;
}
interface IteratorReturnResult<T = undefined> {
  done: true;
  value: T;
}
type IteratorResult<T, U = undefined> = IteratorValueResult<T> | IteratorReturnResult<U>;
interface Iterator<T, U = undefined> {
  next(value?: any): IteratorResult<T, U>;
  return?(value?: any): IteratorResult<T, U>;
  throw?(e?: any): IteratorResult<T, U>;
}
interface Iterable<T, U = undefined> {
  [Symbol.iterator](): Iterator<T, U>;
}
interface IterableIterator<T, U = undefined> extends Iterator<T, U> {
  [Symbol.iterator](): IterableIterator<T, U>;
}
```
and
```
interface AsyncIterator<T, U = undefined, NextArg = undefined> {
  // the type of the argument to next (if any) depends on the specific async iterator
  next(value?: NextArg): Promise<IteratorResult<T, U>>;
  // return should give an iterator result with `done: true` and the passed value (if any) as the value
  return?<R = undefined>(value: R): Promise<IteratorReturnResult<R>>;
  // throw should return a rejected promise with the given argument as the error, or return an iterator result with done: true
  throw?(e: any): Promise<IteratorReturnResult<undefined>>;
}
interface AsyncIterable<T, U = undefined, NextArg = undefined> {
  [Symbol.asyncIterator](): AsyncIterator<T, U, NextArg>;
}
interface AsyncIterableIterator<T, U = undefined, NextArg = undefined> extends AsyncIterator<T, U> {
  [Symbol.asyncIterator](): AsyncIterableIterator<T, U, NextArg>;
}
```
This should allow typescript to correctly infer that `value` is not required when `done: true`. For generators that do have a distinct return type from their yield type, they will be able to typecheck without spurious errors.
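For illustration, here is roughly how the hand-written async iterator from the repro would look and type-check under these sketched definitions (my example, not tested against a real patch):
```
const counterFixed = (): AsyncIterator<number> => {
  let x = 0;
  return {
    next: async () => {
      if (x < 5) {
        x++;
        return { value: x, done: false };
      }
      // `value` is still spelled out, but its type is the return type, not the element type
      return { value: undefined, done: true };
    },
  };
};
```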
If the above definitions are acceptable to the maintainers I can prepare a PR to update the type definitions.
**References**
- [Iterator Interface](https://www.ecma-international.org/ecma-262/9.0/index.html#sec-iterator-interface)
- [AsyncIterator Interface](https://www.ecma-international.org/ecma-262/9.0/index.html#sec-asynciterator-interface)
- [IteratorResult Interface](https://www.ecma-international.org/ecma-262/9.0/index.html#sec-iteratorresult-interface)
| Needs Investigation | low | Critical |
412,191,314 | rust | Clarify that Rust std library `unsafe` tricks can't always be used by others | The union RFC [says](https://github.com/rust-lang/rfcs/blob/master/text/1444-union.md#unions-and-undefined-behavior):
> In addition, since a union declared without `#[repr(C)]` uses an unspecified binary layout, code reading fields of such a union or pattern-matching such a union must not read from a field other than the one written to.
However, there are several cases which explicitly read from a field different from the one written to, for example, [`str.as_bytes()`](https://github.com/rust-lang/rust/blob/74e35d270067afff72034312065c48e6d8cfba67/src/libcore/str/mod.rs#L2131-L2137) which was introduced in #50863 to make much of that code `const`. | C-enhancement,A-docs,T-libs | medium | Major |
412,198,799 | pytorch | Use standard docker image for XLA build | Currently the "xla" build uses a different docker image version number than the other builds. Try to unify them.
cc @bdhirsh @gmagogsfm | oncall: releng,triaged,module: xla,module: docker | low | Minor |
412,256,034 | go | all: updating existing errors to work with the Go 1.13 error values | This issue is for discussing how to update errors in the standard library and golang.org/x packages to work with the Go 2 error values changes proposed in #29934.
That issue is for discussing the proposal itself. This is for discussing how best to update existing error types to make the most use of the new additions to the errors package, assuming that it is accepted.
All calls to `errors.New` will be updated automatically.
All calls to `fmt.Errorf` that do not wrap errors will be updated automatically. Those that wrap errors will not be.
No custom types, outside of the errors package itself, will be updated automatically.
How will the errors that would require manual updating be updated?
For `fmt.Errorf` the only question is whether to turn a `%s`/`%v` to a `%w`.
For custom types, the questions are
- should they have an `Unwrap` method?
- should they collect stack frames?
- should they have a `FormatError` method?
- Do any require an `Is` method?
- An `As` Method?
If one of the above obviates an older approach (like an `Err error` field on a struct) should the older approach be marked deprecated?
Even with a general policy, there will likely be exceptions.
`net.Error`'s extra methods vs `os.IsTimeout` and co. is a particular wrinkle.
The `os.IsX` predicates test the underlying error of a finite set of error types for certain (platform specific) error values. This does not work with arbitrarily wrapped errors. @neild's https://golang.org/cl/163058 contains a proof-of-concept for changing it so that one can write the more general `errors.Is(err, os.ErrTimeout)` instead of `os.IsTimeout(err)`.
The predicate methods defined in `net.Error` offer a similar approach. In the case of `IsTimeout`, overlapping. To test a general error for this method you can write
```
var timeout interface { IsTimeout() bool }
if errors.As(err, &timeout) && timeout.IsTimeout() { //etc.
```
This is slightly more verbose than the `Is` construct.
Adding an `Is` method to `net.Error`s that respond to `ErrTimeout` and an `ErrTemporary` could make it more like `os`. Adding predicate methods to the `os` errors could make them more like `net`. Ideally they would be the same because if they're wrapped in further errors the code inspecting them may not know which package the original comes from and would have to handle both.
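A rough sketch of the `net`-side option, assuming the `os.ErrTimeout` sentinel from the proof-of-concept CL (it is not part of the current API):
```
type timeoutError struct{ msg string }

func (e *timeoutError) Error() string { return e.msg }
func (e *timeoutError) Timeout() bool { return true }

// Is makes errors.Is(err, os.ErrTimeout) report true, even when this error is wrapped.
func (e *timeoutError) Is(target error) bool { return target == os.ErrTimeout }
```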
The approaches are not equivalent when there are multiple errors in the chain that can have the same property. Say error `A` wraps error `B` and either may be temporary but only `B` is. The `As` construct sets `timeout` to `A` whose method returns false (unless it inspects the chain on its own) but the `Is` construct returns true. If the intent is to override the temporary-ness of `B` an `Is` method on `A` that returns false for `ErrTemporary` is easy enough. It seems both more flexible and concise. | NeedsInvestigation | low | Critical |
412,264,447 | flutter | Please add directions to maps plugin |
| c: new feature,p: maps,package,team-ecosystem,P3,triaged-ecosystem | low | Critical |
412,290,945 | flutter | Could SliverList have header |
I'm trying to add header text above each `SliverList` or `SliverGrid`. The way I tried it is adding a `SliverList` with a `Text` widget child.
```dart
SliverList(
  delegate: SliverChildBuilderDelegate(
    (BuildContext context, int index) {
      return Padding(
        padding: const EdgeInsets.all(8.0),
        child: Text(
          "Header 1"
        ),
      );
    },
    childCount: 1
  ),
),
```
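For reference, the same idea written with `SliverToBoxAdapter` (which wraps a single box widget as a sliver) — a sketch of the workaround, not a proposal for a new API:
```dart
CustomScrollView(
  slivers: <Widget>[
    SliverToBoxAdapter(
      child: Padding(
        padding: const EdgeInsets.all(8.0),
        child: Text("Header 1"),
      ),
    ),
    SliverList(
      delegate: SliverChildBuilderDelegate(
        (BuildContext context, int index) => Text("Item $index"),
        childCount: 10,
      ),
    ),
  ],
)
```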
Another way could be having an outer `SliverList` which holds the header and the contents list.
Is there any sliver wrapper widget to handle a `Text` header? Or I wonder if `SliverList` and `SliverGrid` could have a header property. | c: new feature,framework,f: scrolling,would be a good package,P3,team-framework,triaged-framework | low | Critical |
412,294,591 | flutter | How to use local aar inside flutter plugin? | I'm trying to use a local .aar file inside a flutter plugin, but so far I couldn't figure out how to achieve this.
Steps I did:
1) Created a flutter plugin (named `flutter_plugin_with_aar`) project with Android Studio using the flutter plugin template
2) Modified the `<Plugin folder>/flutter_plugin_with_aar/android/build.gradle`
3) When I try to run the example app included into the plugin I get the following error:
```
FAILURE: Build completed with 2 failures.
1: Task failed with an exception.
-----------
* What went wrong:
Could not resolve all files for configuration ':flutter_plugin_with_aar:debugCompileClasspath'.
> Could not find :java_lib_name:.
Searched in the following locations: file:<Pluginfolder>/flutter_plugin_with_aar/android/libs/java_lib_name.aar
Required by:
project :flutter_plugin_with_aar
```
See the modified `<Plugin folder>/flutter_plugin_with_aar/android/build.gradle` below:
```gradle
group 'com.company.flutterpluginwithaar'
version '1.0-SNAPSHOT'
buildscript {
ext.kotlin_version = '1.2.71'
repositories {
google()
jcenter()
}
dependencies {
classpath 'com.android.tools.build:gradle:3.2.1'
classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
}
}
rootProject.allprojects {
repositories {
google()
jcenter()
// added `libs` as dependency location
flatDir {
dirs 'libs'
}
}
}
apply plugin: 'com.android.library'
apply plugin: 'kotlin-android'
android {
compileSdkVersion 27
sourceSets {
main.java.srcDirs += 'src/main/kotlin'
}
defaultConfig {
minSdkVersion 16
testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
}
lintOptions {
disable 'InvalidPackage'
}
}
dependencies {
implementation "org.jetbrains.kotlin:kotlin-stdlib-jdk7:$kotlin_version"
// this .aar below I'm trying to use
implementation (name:"java_lib_name",ext:"aar")
}
```
Any thoughts on what is the easiest way to add a local .aar to the flutter plugin?
| platform-android,d: api docs,t: gradle,P2,a: plugins,team-android,triaged-android | low | Critical |
412,320,757 | opencv | Misleading Net::getPerfProfile for OpenCL target | ##### System information (version)
- OpenCV => 4.0.1
- Operating System / Platform => Intel Core i5-7400, compute-runtime 19.05.12254
- Compiler => gcc 8.2.1
##### Detailed description
When running dnn on the OpenCL target, `Net::getPerfProfile()` returns the time it took to enqueue the kernels, not the time it took to actually compute them.
##### Steps to reproduce
E.g. use [openpose.py](https://github.com/opencv/opencv/blob/master/samples/dnn/openpose.py) on any model and add line `net.setPreferableTarget(cv.dnn.DNN_TARGET_OPENCL)` after net creation.
It will draw ~10ms on screen, while the actual call to `net.forward()` takes ~1.4s.
| category: documentation,category: dnn | low | Minor |
412,329,481 | rust | Fuchsia's target_family should be None, not "unix" | [Currently](https://github.com/rust-lang/rust/blob/74e35d270067afff72034312065c48e6d8cfba67/src/librustc_target/spec/fuchsia_base.rs#L18), the Fuchsia target is part of the unix target_family. I'd like to suggest that this should not be the case, and that `#cfg[unix]` should be false for Fuchsia.
First, I want to capture what I think the _opposite_ case is. Fuchsia does support some amount of posix functionality. That subset is currently informal and pragmatic. It has made starting to port existing software somewhat easier.
"unix" connotes a lot of things. I've singled out some big ones that do not apply to Fuchsia. I think that in aggregate, these outweigh the benefits mentioned above.
Process model: Fuchsia does not have fork and exec, and does not have a process hierarchy. There's no `wait(2)` or `waitpid(2)` on Fuchsia.
Signals: Fuchsia does not have unix signals.
Filesystems and users: Fuchsia does not have unix users or groups, and does not implement unix filesystem permissions. Fuchsia does not have a global filesystem.
FDs: Files and file descriptors are central, primitive concepts for unix: "everything is a file". In Fuchsia, they are an abstraction built out of other primitives, and working with those primitives directly is often preferred.
C and ABI: Unix system ABIs are typically deeply intertwined with their C standard library, and C is a de facto standard for specifying ABIs (witness `repr(C)`). On Fuchsia, almost all ABIs are specified in a language-agnostic IDL. The biggest exception, the system ABI, we have been careful to describe in terms of ELF dynamic linkage, rather than C per se.
IO: Portable unix IO boils down to synchronous read(2) and write(2). Abstractions for event-driven programming exist, but are not portable. Fuchsia has limited emulation for some of them, preferring instead to use native constructs more directly.
In aggregate, I think defining `#cfg[unix]` to be true for Fuchsia is tempting, yet a trap. An existing small program which just wants to synchronously manipulate stdin and stdout seems to benefit, for example, until they want to handle ^C with their same unix code.
I'd love for rust programs, existing or not, to be as good as possible as easily as possible when built for Fuchsia. I think not setting `#cfg(unix)` will help with that.
For some background: I work on Fuchsia. Among other things, I am one of the maintainers for our libc.
cc @cramertj | T-compiler,O-fuchsia | medium | Critical |
412,405,868 | rust | type alias incorrectly flagged as unused when solely used in impl header | Consider the following code ([play](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2018&gist=e36a771655d356700b8fcf9330292edd)):
```rust
type T = ();
struct S<X>(X);
impl Clone for S<T> { fn clone(&self) -> Self { S(()) } }
fn main() {
let s = S(());
drop(s.clone());
}
```
Today this emits the following warning diagnostic:
```
warning: type alias is never used: `T`
--> src/main.rs:1:1
|
1 | type T = ();
| ^^^^^^^^^^^^
|
= note: #[warn(dead_code)] on by default
```
But that type alias is not dead code. It is used in the `impl Clone for S<T> { ... }`, as one can see by trying to recompile the code after commenting out the type alias. | A-lints,E-needs-test,T-compiler | low | Critical |
412,422,301 | flutter | CustomPainter.hitTest missing size argument | `CustomPainter.hitTest` is missing the size of the widget, making it impossible to specify custom hit test behaviour when `paint` uses its `size` argument.
Simplistic example:
```dart
class CirclePainter extends CustomPainter {
  @override
  void paint(Canvas canvas, Size size) {
    final center = size.center(Offset.zero);
    final borderPaint = Paint()
      ..color = const Color(0xffffffff)
      ..style = PaintingStyle.stroke
      ..strokeWidth = 2;
    canvas.drawCircle(center, size.width/2, borderPaint);
  }
  @override
  bool hitTest(Offset position) {
    final size = ???;
    final center = size.center(Offset.zero);
    return (position - center).distance < 0.5;
  }
  @override
  bool shouldRepaint(CustomPainter oldDelegate) => false;
}
```
A workaround would be to have `paint` save the size in a field, but that only works if the painter is used for a single widget only.
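For reference, that field-based workaround looks roughly like this (a sketch based on the `CirclePainter` above; it is only safe while the painter instance is used by a single widget):
```dart
class CirclePainter extends CustomPainter {
  Size _lastSize = Size.zero; // remembered from the most recent paint() call

  @override
  void paint(Canvas canvas, Size size) {
    _lastSize = size;
    // ... same drawing code as above ...
  }

  @override
  bool hitTest(Offset position) {
    final center = _lastSize.center(Offset.zero);
    return (position - center).distance < _lastSize.width / 2;
  }

  @override
  bool shouldRepaint(CustomPainter oldDelegate) => false;
}
```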
Is it acceptable to add a size argument to `hitTest`?
```
[✓] Flutter (Channel unknown, v1.2.2-pre.22, on Mac OS X 10.14 18A391, locale en-NL)
• Flutter version 1.2.2-pre.22 at /Users/sander/Development/flutter
• Framework revision dd23be3936 (9 hours ago), 2019-02-19 21:35:31 -0800
• Engine revision f45572e95f
• Dart version 2.1.2 (build 2.1.2-dev.0.0 c92d5ca288)
``` | framework,f: gestures,customer: solaris,c: proposal,P2,team-framework,triaged-framework | low | Critical |
412,452,956 | go | cmd/gofmt: inconsistent indentation for comments inside switch-case | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.11.4 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/local/ZOHOCORP/mani-pt2396/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/local/ZOHOCORP/mani-pt2396/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/go"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build688297931=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
Commented out the last case of switch with line or block comment(s).
[https://play.golang.org/p/fkseigRrxYV](https://play.golang.org/p/fkseigRrxYV)
### What did you expect to see?
The last case(s) to be in their place and not go inside the scope of the previous case.
```
switch val {
case foo1:
	foo()
case foo2:
	foo()
// case bar:
// bar()
}
switch val {
case foo1:
	foo()
case foo2:
	foo()
/* case bar:
// bar()*/
}
```
### What did you see instead?
The last case(s) is aligned as if it is inside the scope of the previous case.
```
switch val {
case foo1:
	foo()
case foo2:
	foo()
	// case bar:
	// bar()
}
switch val {
case foo1:
	foo()
case foo2:
	foo()
	/* case bar:
	// bar()*/
}
```
| NeedsInvestigation | low | Critical |
412,519,388 | rust | Add LLVM atomic memcpy intrinsics, expose in core/std | Expose LLVM's ["element wise atomic memory intrinsics"](https://llvm.org/docs/LangRef.html#element-wise-atomic-memory-intrinsics). In particular:
- Expose `llvm.memcpy.element.unordered.atomic` as `atomic_element_unordered_copy_memory_nonoverlapping`
- Expose `llvm.memmove.element.unordered.atomic` as `
atomic_element_unordered_copy_memory`
- Expose `llvm.memset.element.unordered.atomic` as `atomic_element_unordered_set_memory`
Expose these through functions in the `std::ptr` module. Each function is implemented by the equivalent intrinsic or by a loop of relaxed atomic operations if the intrinsic is not available on the target platform (TODO: Given this platform-specific behavior, can this also be exposed in `core::ptr`?)
- `copy_nonoverlapping_atomic_unordered`, backed by `atomic_element_unordered_copy_memory_nonoverlapping`
- `copy_atomic_unordered`, backed by `atomic_element_unordered_copy_memory`
- `write_atomic_unordered`, backed by `atomic_element_unordered_set_memory`
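The relaxed-atomic fallback mentioned above could look roughly like this, specialized to bytes (a sketch only; the proposed functions would be generic and use the LLVM intrinsics where available):
```rust
use std::sync::atomic::{AtomicU8, Ordering};

// Element-wise unordered copy emulated with relaxed loads/stores.
// The real proposal would dispatch to llvm.memcpy.element.unordered.atomic when supported.
unsafe fn copy_nonoverlapping_atomic_unordered_u8(src: *const AtomicU8, dst: *mut AtomicU8, len: usize) {
    for i in 0..len {
        let v = (*src.add(i)).load(Ordering::Relaxed);
        (*dst.add(i)).store(v, Ordering::Relaxed);
    }
}
```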
Previously discussed on the internals forum [here](https://internals.rust-lang.org/t/expose-llvm-atomic-memcpy-in-intrinsics/9466).
Folks with the authority to approve this: Let me know if this is OK to move forward; I'd like to post it to This Week in Rust's CFP. | A-LLVM,T-lang,T-libs-api,C-feature-request,A-atomic | medium | Critical |
412,558,295 | create-react-app | Feature Request: Fine-tuning Terser |
### Is this a bug report?
No
**Proposal:**
Support a new environment variable `TERSER_CONFIG`. A dev could set it to a JSON with [config values supported by Terser](https://github.com/webpack-contrib/terser-webpack-plugin#terseroptions) and CRA would merge default this JSON with its default Terser config.
**Example:**
1. Set `TERSER_CONFIG` to `{ "keep_classnames": true }`
1. `keep_classnames` set to `true` added to [Terser options](https://github.com/facebook/create-react-app/blob/6a5b3cdaaa7e9ee2cae358a50cf005af9d1408bd/packages/react-scripts/config/webpack.config.js#L187)
**Reasoning:**
Certain libraries rely on class names which are minified by default by Terser. It would be nice to change that without maintaining a fork of CRA for one line of code only.
I'm pretty sure there're other cases as well. Anyway it's pretty simple to add this, it would give developers more freedom, so what's the harm, right? :)
**Implementation considerations:**
If `TERSER_CONFIG` environment variable is present, do `JSON.parse` of its value and add it to Terser options [here](https://github.com/facebook/create-react-app/blob/6a5b3cdaaa7e9ee2cae358a50cf005af9d1408bd/packages/react-scripts/config/webpack.config.js#L187). Add `TERSER_CONFIG` to [this list](https://facebook.github.io/create-react-app/docs/advanced-configuration).
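A sketch of that merge inside `webpack.config.js` (variable names here are illustrative; `defaultTerserOptions` stands in for the options CRA already sets):
```js
const userTerserOptions = process.env.TERSER_CONFIG
  ? JSON.parse(process.env.TERSER_CONFIG)
  : {};

new TerserPlugin({
  terserOptions: { ...defaultTerserOptions, ...userTerserOptions },
});
```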
*Would you be willing to accept a PR for this functionality?* | issue: proposal | low | Critical |
412,619,700 | pytorch | Implement Adaptive Input Representations for Neural Language Modeling | ## 🚀 Feature
It would be great if you could implement [Adaptive Input Representations for Neural Language Modeling](https://arxiv.org/abs/1809.10853). This is, essentially, the same trick that PyTorch currently uses for adaptive softmax outputs, but applied to the input embeddings as well. In addition, it would be helpful to provide optional support for adaptive input and output weight tying.
## Motivation
PyTorch has already implemented adaptive representations for output. The paper shows that these representations are also useful for input. They allow faster training, and provide better accuracy. In addition, in some situations the paper shows it can be helpful to use weight-tying, which would allow this feature to be integrated nicely with the existing adaptive output functionality.
At the time the paper was written, adaptive input representations were the state of the art in language modeling.
## Alternatives
An alternative would be for this functionality to be moved in to higher level libs like fastai, but that would mean to get weight tying to work, either the existing pytorch adaptive output would have to be recreated in that library, or else that library would have to rely on using the internal data structures of the pytorch module - neither of these would be ideal.
cc @albanD @mruberry | feature,module: nn,triaged | medium | Major |
412,635,254 | go | x/tools/astutil: use goimports import path to import name heuristic in UsesImport | goimports has a sophisticated heuristic to guess a package import name from its import path. We might even eventually enshrine that heuristic in the language (https://github.com/golang/go/issues/29036).
astutil.UsesImport uses a much less sophisticated heuristic. We should move the goimports heuristic to somewhere exported (maybe in astutil?) and use it in astutil.UsesImport.
(astutil.UsesImport could also use some expanded docs, but that is another matter.)
cc @heschik
| NeedsInvestigation,Tools | low | Minor |
412,666,372 | vue | make serverPrefetch() rejection trappable | ### What problem does this feature solve?
Currently, rejections in `serverPrefetch` cannot be handled
eg.
```
serverPrefetch() {
return Promise.reject('myError')
}
```
### What does the proposed API look like?
Maybe send `serverPrefetch` rejections as `renderStream` error event ?
eg.
```
const renderStream = renderer.renderToStream(context)
renderStream.on('error', err => { ... })
```
<!-- generated by vue-issues. DO NOT REMOVE --> | improvement | low | Critical |
412,670,278 | create-react-app | Prototype warning when webpack config is modified by another package | issue: proposal | low | Minor |
|
412,682,722 | puppeteer | Requests are not intercepted when cached by Service Worker | Not sure if this is by design but requests that were previously cached by service worker are not intercepted after `page.setRequestInterception(true)`.
Simplified code to reproduce:
```js
const hostname = 'http://localhost:';
const {port, proc} = await startServer(); // local implementation
const browser = await puppeteer.launch({
  args: [
    '--no-sandbox',
    '--disable-setuid-sandbox',
    '--enable-features=NetworkService',
  ],
  ignoreHTTPSErrors: true,
});
const page = await browser.newPage();
await page.goto(`${hostname}${port}`);
await page.evaluate('navigator.serviceWorker.ready');
await page.setRequestInterception(true); //toggle this
page.on('request', req => {
  console.log('requesting', req.url());
  req.continue().catch(e => e /* not intercepting */);
});
await page.reload({waitUntil: 'domcontentloaded'});
await browser.close();
```
1. Set `await page.setRequestInterception(true)`
2. Run test, only URLs not previously cached by service worker are logged
3. Set `await page.setRequestInterception(false)` (or remove Service Worker registration)
4. Run test, all URLs are logged
Are requests intercepted only when requests are sent through to the network, or is this a bug?
| bug,upstream,chromium,confirmed,P3 | medium | Critical |
412,682,796 | godot | Automatic conversion from C# array to GDScript array | **Godot version:**
Godot 3.1 Beta 5
**Issue description:**
You can't pass arrays with types to GDScript, but you can pass object arrays. Wouldn't there be a way to automatically convert C# array to objects and pass them to GDScript, instead of doing it by hand? | enhancement,topic:dotnet | low | Minor |
412,683,510 | electron | Add Support of Windows 10 + 'WinRT MIDI API' via commandLine.appendSwitch | **Is your feature request related to a problem? Please describe.**
Prior to Windows 10, it wasn't possible to use more than one piece of software simultaneously connected to a MIDI device, for example a DAW and a MIDI device editor program. Windows 10 has a new MIDI API that developers can use which allows multiple pieces of software to simultaneously connect to a MIDI device, macOS has always allowed this...The only snag in this new Windows API is that all software needs to be using the new API for it to work, so both DAW and Editor need to use it.
This can be enabled in a browser that supports the Web MIDI API via chrome://flags.
**Describe the solution you'd like**
Since this has been supported in Chrome for a few years now, it would be great to add support for this flag as well in Electron.
**Describe alternatives you've considered**
There doesn't appear to be an alternative solution using Electron
**Additional context**
The flag is: `use-winrt-midi-api`
set to enabled:
`app.commandLine.appendSwitch('use-winrt-midi-api', 'enabled')` | enhancement :sparkles: | low | Major |
412,704,558 | TypeScript | Give 'this' keyword suggestion the same `sortText` as class properties if inside a class | From https://github.com/Microsoft/vscode/issues/66868
<!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. -->
**TypeScript Version:** 3.4.0-dev.20190220
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:**
- suggestions
- completions
- sortText
**Code**
For the javascript:
```js
class Foo {
  constructor() {
    this.prop = 1;
  }
  render() {
    th
  }
}
```
* Trigger intellisense after `th` in `render`
**Expected behavior:**
`this` should be the first suggestion in VS Code
**Actual behavior:**
`render` and then `prop` are the first suggestions
The root cause of this is that the suggestion for `this` has a `sortText` of `"0"` while the ones for `prop` and `render` have a `sortText` of `"1"`. This causes VS Code to sort `prop` and `render` before `this`.
@amcasey @minestarks I'm not sure if the current sorting issue would also affect VS
**Playground Link:** <!-- A link to a TypeScript Playground "Share" link which demonstrates this behavior -->
**Related Issues:** <!-- Did you find other bugs that looked similar? -->
| Suggestion,Experience Enhancement | low | Critical |
412,904,482 | opencv | Add AVX512 support | The discussion thread about https://github.com/opencv/opencv/wiki/OE-29.-Adding-AVX512-Support | optimization,evolution | low | Minor |
412,907,116 | pytorch | torch.nn.CrossEntropyLoss with "reduction" sum/mean is not deterministic on segmentation outputs / labels | ## 🐛 Bug
torch.nn.CrossEntropyLoss doesn't output deterministic results on segmentation outputs / labels, when using reduction other than 'none'.
Happens only on GPU. CPU does give a consistent behavior.
## To Reproduce
```
import numpy as np
import torch
outputs = np.random.rand(16, 1, 256, 256)
outputs = np.hstack((outputs, 1.0 - outputs))
targets = np.random.randint(2, size=(16, 256, 256))
seed = 0
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
for reduction in ['none', 'sum', 'mean']:
    print(reduction)
    for i in range(10):
        torch.manual_seed(seed)
        np.random.seed(seed)
        outputs_t, targets_t = torch.from_numpy(outputs), torch.from_numpy(targets)
        outputs_t, targets_t = outputs_t.cuda(0), targets_t.cuda(0)
        loss_fn = torch.nn.CrossEntropyLoss(reduction=reduction)
        loss_fn = loss_fn.cuda(0)
        loss = loss_fn(outputs_t, targets_t)
        loss = loss.detach().cpu().numpy()
        print(i, outputs.sum(), targets.sum(), outputs.mean(), targets.mean(), loss.sum(), loss.mean())
```
## Output
```
none
0 1048576.0 524341 0.5 0.5000505447387695 769533.4950007759 0.7338843297965774
1 1048576.0 524341 0.5 0.5000505447387695 769533.4950007759 0.7338843297965774
2 1048576.0 524341 0.5 0.5000505447387695 769533.4950007759 0.7338843297965774
3 1048576.0 524341 0.5 0.5000505447387695 769533.4950007759 0.7338843297965774
4 1048576.0 524341 0.5 0.5000505447387695 769533.4950007759 0.7338843297965774
5 1048576.0 524341 0.5 0.5000505447387695 769533.4950007759 0.7338843297965774
6 1048576.0 524341 0.5 0.5000505447387695 769533.4950007759 0.7338843297965774
7 1048576.0 524341 0.5 0.5000505447387695 769533.4950007759 0.7338843297965774
8 1048576.0 524341 0.5 0.5000505447387695 769533.4950007759 0.7338843297965774
9 1048576.0 524341 0.5 0.5000505447387695 769533.4950007759 0.7338843297965774
sum
0 1048576.0 524341 0.5 0.5000505447387695 769533.4950007756 769533.4950007756
1 1048576.0 524341 0.5 0.5000505447387695 769533.4950007756 769533.4950007756
2 1048576.0 524341 0.5 0.5000505447387695 769533.4950007757 769533.4950007757
3 1048576.0 524341 0.5 0.5000505447387695 769533.4950007756 769533.4950007756
4 1048576.0 524341 0.5 0.5000505447387695 769533.4950007756 769533.4950007756
5 1048576.0 524341 0.5 0.5000505447387695 769533.4950007756 769533.4950007756
6 1048576.0 524341 0.5 0.5000505447387695 769533.4950007756 769533.4950007756
7 1048576.0 524341 0.5 0.5000505447387695 769533.4950007754 769533.4950007754
8 1048576.0 524341 0.5 0.5000505447387695 769533.4950007756 769533.4950007756
9 1048576.0 524341 0.5 0.5000505447387695 769533.4950007756 769533.4950007756
mean
0 1048576.0 524341 0.5 0.5000505447387695 0.733884329796577 0.733884329796577
1 1048576.0 524341 0.5 0.5000505447387695 0.733884329796577 0.733884329796577
2 1048576.0 524341 0.5 0.5000505447387695 0.733884329796577 0.733884329796577
3 1048576.0 524341 0.5 0.5000505447387695 0.733884329796577 0.733884329796577
4 1048576.0 524341 0.5 0.5000505447387695 0.733884329796577 0.733884329796577
5 1048576.0 524341 0.5 0.5000505447387695 0.7338843297965769 0.7338843297965769
6 1048576.0 524341 0.5 0.5000505447387695 0.733884329796577 0.733884329796577
7 1048576.0 524341 0.5 0.5000505447387695 0.733884329796577 0.733884329796577
8 1048576.0 524341 0.5 0.5000505447387695 0.733884329796577 0.733884329796577
9 1048576.0 524341 0.5 0.5000505447387695 0.733884329796577 0.733884329796577
```
## Expected behavior
I believe the expected behavior of the reduction='sum' and 'mean' should be as consistent as the 'none' option (where I use numpy for reduction).
## Environment
```
PyTorch version: 1.0.1.post2
Is debug build: No
CUDA used to build PyTorch: 9.0.176
OS: Ubuntu 16.04.4 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609
CMake version: version 3.5.1
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 9.0.176
GPU models and configuration: GPU 0: Tesla K80
Nvidia driver version: 384.111
cuDNN version: Could not collect
Versions of relevant libraries:
[pip3] numpy==1.16.1
[pip3] torch==1.0.1.post2
[conda] mkl 2018.0.0 hb491cac_4
[conda] mkl-service 1.1.2 py36h17a0993_4
[conda] torch 0.4.0 <pip>
```
## Additional context
Ran on amazon K80 instance (p2.xlarge).
I know it does seem like a very tiny error, but the result of this is that two training sessions of my segmentation network (with identical parameters, initialization, order of image-batches, random seed, etc.) don't produce identical results. That is problematic when I want to investigate a specific training session.
| module: cuda,triaged | low | Critical |
412,971,865 | pytorch | testConvnetBenchmarks intermittently segfaults | Sample: https://ci.pytorch.org/jenkins/job/caffe2-builds/job/py2-clang7-rocmdeb-ubuntu16.04-test/10928/console
```
23:22:36 ============================= test session starts ==============================
23:22:36 platform linux2 -- Python 2.7.12, pytest-4.2.0, py-1.7.0, pluggy-0.8.1 -- /usr/bin/python2
23:22:36 cachedir: .pytest_cache
23:22:36 rootdir: /var/lib/jenkins, inifile:
23:22:36 plugins: sugar-0.9.2, hypothesis-3.59.0
23:22:47 collecting ... INFO:caffe2.python.net_drawer:Cannot import pydot, which is required for drawing a network. This can usually be installed in python with "pip install pydot". Also, pydot requires graphviz to convert dot files to pdf: in ubuntu, this can usually be installed with "sudo apt-get install graphviz".
23:22:53 net_drawer will not run correctly. Please install the correct dependencies.
23:22:53 collected 2395 items / 1 skipped / 2394 selected
23:22:53
23:22:54 ../.local/lib/python2.7/site-packages/caffe2/python/allcompare_test.py::TestAllCompare::test_allcompare PASSED [ 0%]
23:22:54 ../.local/lib/python2.7/site-packages/caffe2/python/brew_test.py::BrewTest::test_arg_scope PASSED [ 0%]
23:22:54 ../.local/lib/python2.7/site-packages/caffe2/python/brew_test.py::BrewTest::test_arg_scope_nested PASSED [ 0%]
23:22:54 ../.local/lib/python2.7/site-packages/caffe2/python/brew_test.py::BrewTest::test_arg_scope_single PASSED [ 0%]
23:22:54 ../.local/lib/python2.7/site-packages/caffe2/python/brew_test.py::BrewTest::test_cnn_model_helper_deprecated PASSED [ 0%]
23:22:54 ../.local/lib/python2.7/site-packages/caffe2/python/brew_test.py::BrewTest::test_cond PASSED [ 0%]
23:22:54 ../.local/lib/python2.7/site-packages/caffe2/python/brew_test.py::BrewTest::test_double_register PASSED [ 0%]
23:22:55 ../.local/lib/python2.7/site-packages/caffe2/python/brew_test.py::BrewTest::test_dropout PASSED [ 0%]
23:22:55 ../.local/lib/python2.7/site-packages/caffe2/python/brew_test.py::BrewTest::test_fc PASSED [ 0%]
23:22:55 ../.local/lib/python2.7/site-packages/caffe2/python/brew_test.py::BrewTest::test_get_params PASSED [ 0%]
23:22:55 ../.local/lib/python2.7/site-packages/caffe2/python/brew_test.py::BrewTest::test_has_helper PASSED [ 0%]
23:22:55 ../.local/lib/python2.7/site-packages/caffe2/python/brew_test.py::BrewTest::test_loop PASSED [ 0%]
23:22:55 ../.local/lib/python2.7/site-packages/caffe2/python/brew_test.py::BrewTest::test_model_helper PASSED [ 0%]
23:22:55 ../.local/lib/python2.7/site-packages/caffe2/python/brew_test.py::BrewTest::test_param_consistence PASSED [ 0%]
23:22:55 ../.local/lib/python2.7/site-packages/caffe2/python/brew_test.py::BrewTest::test_relu PASSED [ 0%]
23:22:55 ../.local/lib/python2.7/site-packages/caffe2/python/brew_test.py::BrewTest::test_tanh PASSED [ 0%]
23:22:55 ../.local/lib/python2.7/site-packages/caffe2/python/brew_test.py::BrewTest::test_validate PASSED [ 0%]
23:22:55 ../.local/lib/python2.7/site-packages/caffe2/python/brew_test.py::BrewGPUTest::test_relu PASSED [ 0%]
23:22:55 ../.local/lib/python2.7/site-packages/caffe2/python/brew_test.py::BrewGPUTest::test_tanh PASSED [ 0%]
23:22:55 ../.local/lib/python2.7/site-packages/caffe2/python/caffe_translator_test.py::TestNumericalEquivalence::testBlobs SKIPPED [ 0%]
23:22:55 ../.local/lib/python2.7/site-packages/caffe2/python/checkpoint_test.py::TestCheckpoint::test_ckpt_name_and_load_model_from_ckpts PASSED [ 0%]
23:22:55 ../.local/lib/python2.7/site-packages/caffe2/python/checkpoint_test.py::TestCheckpoint::test_ckpt_save_failure PASSED [ 0%]
23:22:55 ../.local/lib/python2.7/site-packages/caffe2/python/checkpoint_test.py::TestCheckpoint::test_download_group_simple PASSED [ 0%]
23:22:55 ../.local/lib/python2.7/site-packages/caffe2/python/checkpoint_test.py::TestCheckpoint::test_reuse_checkpoint_manager PASSED [ 1%]
23:22:55 ../.local/lib/python2.7/site-packages/caffe2/python/checkpoint_test.py::TestCheckpoint::test_single_checkpoint PASSED [ 1%]
23:22:55 ../.local/lib/python2.7/site-packages/caffe2/python/checkpoint_test.py::TestCheckpoint::test_upload_checkpoint PASSED [ 1%]
23:22:56 ../.local/lib/python2.7/site-packages/caffe2/python/context_test.py::TestContext::testDecorator PASSED [ 1%]
23:22:56 ../.local/lib/python2.7/site-packages/caffe2/python/context_test.py::TestContext::testMultiThreaded PASSED [ 1%]
23:22:56 ../.local/lib/python2.7/site-packages/caffe2/python/control_ops_grad_test.py::TestControl::test_disambiguate_grad_if_op_output PASSED [ 1%]
23:22:56 ../.local/lib/python2.7/site-packages/caffe2/python/control_test.py::TestControl::testBoolNet PASSED [ 1%]
23:22:56 ../.local/lib/python2.7/site-packages/caffe2/python/control_test.py::TestControl::testCombineConditions PASSED [ 1%]
23:22:56 ../.local/lib/python2.7/site-packages/caffe2/python/control_test.py::TestControl::testDoUntilLoopWithNet PASSED [ 1%]
23:22:56 ../.local/lib/python2.7/site-packages/caffe2/python/control_test.py::TestControl::testDoUntilLoopWithStep PASSED [ 1%]
23:22:56 ../.local/lib/python2.7/site-packages/caffe2/python/control_test.py::TestControl::testDoWhileLoopWithNet PASSED [ 1%]
23:22:56 ../.local/lib/python2.7/site-packages/caffe2/python/control_test.py::TestControl::testDoWhileLoopWithStep PASSED [ 1%]
23:22:56 ../.local/lib/python2.7/site-packages/caffe2/python/control_test.py::TestControl::testForLoopWithNets PASSED [ 1%]
23:22:56 ../.local/lib/python2.7/site-packages/caffe2/python/control_test.py::TestControl::testForLoopWithStep PASSED [ 1%]
23:22:56 ../.local/lib/python2.7/site-packages/caffe2/python/control_test.py::TestControl::testIfCondFalseOnBlob PASSED [ 1%]
23:22:56 ../.local/lib/python2.7/site-packages/caffe2/python/control_test.py::TestControl::testIfCondFalseOnNet PASSED [ 1%]
23:22:56 ../.local/lib/python2.7/site-packages/caffe2/python/control_test.py::TestControl::testIfCondTrueOnBlob PASSED [ 1%]
23:22:56 ../.local/lib/python2.7/site-packages/caffe2/python/control_test.py::TestControl::testIfCondTrueOnNet PASSED [ 1%]
23:22:56 ../.local/lib/python2.7/site-packages/caffe2/python/control_test.py::TestControl::testIfElseCondFalseOnBlob PASSED [ 1%]
23:22:56 ../.local/lib/python2.7/site-packages/caffe2/python/control_test.py::TestControl::testIfElseCondFalseOnNet PASSED [ 1%]
23:22:56 ../.local/lib/python2.7/site-packages/caffe2/python/control_test.py::TestControl::testIfElseCondTrueOnBlob PASSED [ 1%]
23:22:56 ../.local/lib/python2.7/site-packages/caffe2/python/control_test.py::TestControl::testIfElseCondTrueOnNet PASSED [ 1%]
23:22:56 ../.local/lib/python2.7/site-packages/caffe2/python/control_test.py::TestControl::testIfNotCondFalseOnBlob PASSED [ 1%]
23:22:56 ../.local/lib/python2.7/site-packages/caffe2/python/control_test.py::TestControl::testIfNotCondFalseOnNet PASSED [ 1%]
23:22:56 ../.local/lib/python2.7/site-packages/caffe2/python/control_test.py::TestControl::testIfNotCondTrueOnBlob PASSED [ 2%]
23:22:56 ../.local/lib/python2.7/site-packages/caffe2/python/control_test.py::TestControl::testIfNotCondTrueOnNet PASSED [ 2%]
23:22:56 ../.local/lib/python2.7/site-packages/caffe2/python/control_test.py::TestControl::testIfNotElseCondFalseOnBlob PASSED [ 2%]
23:22:56 ../.local/lib/python2.7/site-packages/caffe2/python/control_test.py::TestControl::testIfNotElseCondFalseOnNet PASSED [ 2%]
23:22:56 ../.local/lib/python2.7/site-packages/caffe2/python/control_test.py::TestControl::testIfNotElseCondTrueOnBlob PASSED [ 2%]
23:22:56 ../.local/lib/python2.7/site-packages/caffe2/python/control_test.py::TestControl::testIfNotElseCondTrueOnNet PASSED [ 2%]
23:22:56 ../.local/lib/python2.7/site-packages/caffe2/python/control_test.py::TestControl::testMergeConditionNets PASSED [ 2%]
23:22:56 ../.local/lib/python2.7/site-packages/caffe2/python/control_test.py::TestControl::testSwitch PASSED [ 2%]
23:22:56 ../.local/lib/python2.7/site-packages/caffe2/python/control_test.py::TestControl::testSwitchNot PASSED [ 2%]
23:22:56 ../.local/lib/python2.7/site-packages/caffe2/python/control_test.py::TestControl::testUntilLoopWithNet PASSED [ 2%]
23:22:56 ../.local/lib/python2.7/site-packages/caffe2/python/control_test.py::TestControl::testUntilLoopWithStep PASSED [ 2%]
23:22:56 ../.local/lib/python2.7/site-packages/caffe2/python/control_test.py::TestControl::testWhileLoopWithNet PASSED [ 2%]
23:22:56 ../.local/lib/python2.7/site-packages/caffe2/python/control_test.py::TestControl::testWhileLoopWithStep PASSED [ 2%]
23:24:56 ../.local/lib/python2.7/site-packages/caffe2/python/convnet_benchmarks_test.py::TestConvnetBenchmarks::testConvnetBenchmarks ./.jenkins/caffe2/test.sh: line 118: 14948 Aborted (core dumped) "$PYTHON" -m pytest -x -v --disable-warnings --junit-xml="$pytest_reports_dir/result.xml" --ignore "$caffe2_pypath/python/test/executor_test.py" --ignore "$caffe2_pypath/python/operator_test/matmul_op_test.py" --ignore "$caffe2_pypath/python/operator_test/pack_ops_test.py" --ignore
```
cc @bddppq @iotamudelta | triaged,module: flaky-tests,better-engineering | low | Critical |
413,021,024 | godot | Highlight group titles in inspector if there any not default value | ## Case
If there are any changed fields in inspector and group is collapsed - you can't see it quickly, without careful inspection, opening collapsed groups, etc.
## Proposal:
Highlight with color titles of groups that have changed values, like on this draft:

For example, how it's done in Defold:

And I think Unity now has something similar also for prefabs. | enhancement,topic:editor,usability | low | Minor |
413,027,949 | flutter | Engine license script should work on Mac/Windows | Currently the engine licence script produces different output on Linux, Mac, and Windows, but we treat the Linux output as canonical.
The reason for the differences has to do with the differing paths of third-party dependencies depending on host OS (e.g. on Linux, we install the Android NDK, on Mac, we install a Mac NDK, with platform-specific differences in contents).
Downloading the Linux contents on Mac/Windows would work, but isn't a reasonable solution. It would be useful to catalogue the differences to see how extensive they are and whether they could be abstracted out without risking correctness. | team,engine,P3,team-engine,triaged-engine | low | Minor |
413,053,297 | TypeScript | DOM lib: Add support for Trusted Types API | ## Search Terms
Trusted Types, DOM
## Suggestion
[Trusted Types](https://github.com/WICG/trusted-types) ([spec](https://wicg.github.io/trusted-types/dist/spec/), [introductory article](https://developers.google.com/web/updates/2019/02/trusted-types)) is a new experimental DOM API implemented within the WICG , with a working [Chrome implementation](https://www.chromestatus.com/feature/5650088592408576).
The API creates a few new objects available on the global object in the browser, like most other web APIs ([impl in TS](https://github.com/DefinitelyTyped/DefinitelyTyped/blob/master/types/trusted-types/index.d.ts) and [in Closure compiler](https://github.com/WICG/trusted-types/blob/master/externs/externs.js)).
Under certain conditions, controlled by a HTTP header (analogous to Content-Security-Policy behavior), the API can enable the *enforcement* - then it changes the signature of several DOM API functions and property setters, such that they accept specific object types, and reject strings. Colloquially, DOM API becomes strongly typed.
For example, with Trusted Types `Element.innerHTML` property setter accepts a `TrustedHTML` object.
Trusted Type objects stringify to their inner value. This API shape is a deliberate choice that enables existing web applications and libraries to gradually migrate from strings to Trusted Types without breaking functionality. In our example, it makes it possible to write the following:
```js
const policy = TrustedTypes.createPolicy('foo', {
createHTML: (s) => { /* some validation*/; return s}
});
const trustedHTML = policy.createHTML('bar');
anElement.innerHTML = trustedHTML
anElement.innerHTML === 'bar'
```
The above code works regardless of whether Trusted Types *enforcement* is enabled.
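With enforcement turned on via the header, the difference would look roughly like this (illustrative; the exact error thrown depends on the implementation):
```js
anElement.innerHTML = '<img src=x onerror=alert(1)>'; // plain string: rejected at the sink
anElement.innerHTML = policy.createHTML('<b>ok</b>'); // TrustedHTML object: accepted
```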
Reading from the DOM is unaffected, so `Element.innerHTML` getter returns a string. That's for practical reasons -- web applications read from DOM more often than they write to it, and only writing exposes the application to DOM XSS risks. Typing only the setters allows us to secure web applications with minimal code changes.
It's difficult to map that API using TS types (due to #2521). The only way is to change the DOM sink functions to have the `any` type, which is obviously suboptimal.
## Use Cases
Writing a TS application that uses the Trusted Types API.
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Revisit | medium | Critical |
413,082,246 | react-native | Can't modify text in TextInput onChangeText callback on Android | ## 🐛 Bug Report
On Android, modifying the text within the onChange (or onChangeText) callback causes corruption of the text in the TextInput. (Not tested on iOS.)
For example, I'm trying to force all caps in my TextInput field. (This is to work around the react native autoCapitalize issue described here: https://github.com/facebook/react-native/issues/8932). So if a lowercase letter is entered, I change it to uppercase in the callback. Unfortunately, *alternate* keystrokes cause the entire previous text to be duplicated, but only if the entered keystroke was lowercase.
So, when forcing all caps, entering `1234` results in `1234` showing up; entering `ABCD` results in `ABCD` showing up; but entering `abcd` results in `AABCAABCD`.
This issue disappears if assigning a Math.random() key to the TextInput; but then of course so does the keyboard focus, making this an unacceptable workaround.
## To Reproduce
See "Bug Report" and "Code Example" sections.
## Expected Behavior
One should be able to modify the value inside TextInput's change callbacks, without the text becoming corrupted on the subsequent redisplay.
## Code Example
```jsx
import React, { Component } from 'react';
import { ScrollView, TextInput, View } from 'react-native';

export default class TestScr extends Component {
  constructor(props) {
    super(props);
    this.state = { s6: '' };
  }

  textchg(event) {
    const { eventCount, target, text } = event.nativeEvent;
    // one would expect the contents of s6 to display after the redraw
    this.setState({ s6: text.toUpperCase() });
  }

  render() {
    // [same behavior if using onChangeText instead of onChange]
    let jsx0 = (
      <View style={{ flexDirection: 'row' }} key={'hi'}>
        <TextInput
          placeholder={'hello'}
          value={this.state.s6}
          onChange={(evt) => this.textchg(evt)}
          keyboardType={'default'}
        />
      </View>
    );
    return (
      <View style={{ backgroundColor: '#ffffff', padding: 10 }}>
        <ScrollView style={{ backgroundColor: '#ffffff' }}>
          {jsx0}
        </ScrollView>
      </View>
    );
  }
}
```
## Environment
React Native Environment Info:
System:
OS: Linux 3.19 Ubuntu 14.04.3 LTS, Trusty Tahr
CPU: (4) x64 Intel(R) Core(TM) i7-5500U CPU @ 2.40GHz
Memory: 626.14 MB / 15.38 GB
Shell: 6.18.01 - /bin/tcsh
Binaries:
Node: 8.11.3 - /usr/bin/node
npm: 5.6.0 - /usr/bin/npm
SDKs:
Android SDK:
API Levels: 10, 16, 23, 26, 27, 28
Build Tools: 19.1.0, 20.0.0, 21.1.2, 22.0.1, 23.0.1, 23.0.2, 26.0.3, 27.0.3, 28.0.2, 28.0.3
System Images: android-16 | ARM EABI v7a, android-23 | Intel x86 Atom_64, android-23 | Google APIs Intel x86 Atom_64, android-28 | Google APIs Intel x86 Atom
npmPackages:
react: 16.6.3 => 16.6.3
react-native: 0.58.6 => 0.58.6
npmGlobalPackages:
create-react-native-app: 1.0.0
react-native-cli: 2.0.1
| Component: TextInput,Platform: Android,Priority: Mid,Bug | high | Critical |
413,083,651 | godot | Unhandled division / modulo quotient overflow causes crash. | **Godot version:**
3.1 (master), and probably any other
**OS/device including version:**
Any
**Issue description:**
As mentioned in https://github.com/godotengine/godot/pull/26113#issuecomment-465965854, executing GDScript with a division / modulo operation on `LLONG_MIN` and `-1`, or simply entering it into the script editor (the line is executed by autocomplete), causes a quotient overflow and a crash with `SIGFPE` (Floating point exception: divide by zero). The division-by-0 case is handled correctly.
```gdscript
-9223372036854775808 / -1
-9223372036854775808 % -1
```
[IDIV instruction reference](https://www.felixcloutier.com/x86/idiv)
> Overflow is indicated with the #DE (divide error) exception rather than with the CF flag.
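For reference, a minimal standalone C++ sketch of the missing guard (parameter names follow Godot conventions, but this is not the actual evaluator code):
```cpp
#include <cstdint>

// Returns false instead of executing a division that would trap.
bool safe_int_div(int64_t p_a, int64_t p_b, int64_t &r_result) {
	if (p_b == 0) {
		return false; // division by zero, already handled by Godot
	}
	if (p_a == INT64_MIN && p_b == -1) {
		return false; // quotient overflow: -INT64_MIN is not representable, IDIV raises #DE -> SIGFPE
	}
	r_result = p_a / p_b;
	return true;
}
```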
| bug,topic:gdscript,confirmed | low | Critical |
413,084,207 | rust | Reference interferes with optimization | (This comes from a [Reddit thread](https://www.reddit.com/r/rust/comments/at6793/why_is_the_iter_version_performance_so_much/).)
In the code below, `slow` takes double the time `fast` takes. If `print` is changed to take a value rather than reference, the difference goes away.
```rust
fn print(s: &u64) {
println!("{}", s);
}
fn fast() {
let mut s: u64 = 0;
for x in 0..10000000000 {
if x % 16 < 4 {
s += x;
}
}
let s = s;
print(&s);
}
fn slow() {
let mut s: u64 = 0;
for x in 0..10000000000 {
if x % 16 < 4 {
s += x;
}
}
print(&s);
}
fn main() {
if std::env::var("FAST").is_ok() {
fast();
} else {
slow();
}
}
```
Timings:
```
$ rustc -O main.rs
$ time FAST=1 ./main
12499999983750000000
real 0m4.334s
user 0m4.328s
sys 0m0.001s
$ time ./main
12499999983750000000
real 0m8.788s
user 0m8.776s
sys 0m0.002s
``` | I-slow,T-compiler | low | Major |
413,087,163 | go | x/tools/cmd/present: add support for quote | This proposal is about to add support for quote into the present tool.
An implementation is available here: golang/tools/pull/30
It introduces a "quote" function into the template to inject quotes and citations.
The text after ".quote" is embedded in a blockquote element after
processing styling. The text after the optional "//CITATION:" is treated
as a citation and embedded in a cite element after processing styling.
Example:
.quote Never memorize something that you can look up //CITATION: Albert Einstein | Proposal,Proposal-Accepted,NeedsInvestigation | low | Major |
413,143,587 | rust | Nested array initialization is not optimized | See the following code:
```rust
pub fn foo() -> [[i32; 1000]; 1000] {
[[0; 1000]; 1000]
}
pub fn bar() -> [i32; 1_000_000] {
[0; 1_000_000]
}
```
Ideally they should generate the same code, as `[[i32; 1000]; 1000]` is essentially just `[i32; 1_000_000]`. However, the first function does one `memset` to set a `[i32; 1000]` on the stack, then uses `memcpy` to generate the big array, while the second just uses a single `memset`.
The code generated for the first is way larger than for the second, and I would also expect it to be much slower given that it repeatedly invokes `memcpy`. | I-slow,WG-llvm,A-mir-opt | low | Major |
413,189,916 | go | net: *Conn.ReadMsgXXX documentation could be improved | Unfortunately, this is a documentation/API request more than a 'bug'. I started this needing to know which interface a UDP message was received on, and immediately ran into an interface limitation.
Summary: ReadMsgXXX is impossible to use without _a priori_ knowledge of the underlying network API that Go uses. Furthermore, the documentation for said methods refers the user to another package, without explaining what to look for in said package.*
The documentation for UdpConn.ReadMsgUDP, and arguably the function's mere existence in its current form, is at _best_ useless (I understand how frustrating this statement is, but unfortunately this is the case). Additionally, while this message is primarily about UdpConn.ReadMsgUDP, it is likely equally applicable to all the net/x ReadMsgXXX (and possibly the WriteMsgXXX) functions as well.
The fundamental problem, which I'm certain you are aware of, is that every OS stack behaves differently and reports different pieces of information in different ways. However, as written, the documentation does not explain _anything_ about what the user needs to do to access the out-of-band data. It only says:
> The packages golang.org/x/net/ipv4 and golang.org/x/net/ipv6 can be used to manipulate IP-level socket options in oob.
But not anything about *how* to do that.
The user has to already understand the underlying network stack enough to connect the 'out-of-band' comment in the documentation for UdpConn.ReadMsgUDP to 'control message' in the overview examples of x/net/ipv4:
> The application might set per packet control message transmissions between the protocol stack within the kernel. When the application needs a destination address on an incoming packet, SetControlMessage of PacketConn is used to enable control message transmissions.
Finally, after determining that these are related, the user is still required to manually call Parse on the out-of-band data array to convert it into the ControlMessage that they require.
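For context, this is roughly what a user has to piece together today (a sketch with minimal error handling; it relies on the golang.org/x/net/ipv4 API described above):
```go
package main

import (
	"log"
	"net"

	"golang.org/x/net/ipv4"
)

func main() {
	conn, err := net.ListenUDP("udp4", &net.UDPAddr{Port: 9999})
	if err != nil {
		log.Fatal(err)
	}
	// Step the net docs never spell out: wrap the conn and enable control messages.
	p := ipv4.NewPacketConn(conn)
	if err := p.SetControlMessage(ipv4.FlagInterface|ipv4.FlagDst, true); err != nil {
		log.Fatal(err)
	}

	buf := make([]byte, 1500)
	oob := make([]byte, 512)
	n, oobn, _, src, err := conn.ReadMsgUDP(buf, oob)
	if err != nil {
		log.Fatal(err)
	}

	// Second undocumented step: parse the raw oob bytes into an ipv4.ControlMessage.
	var cm ipv4.ControlMessage
	if err := cm.Parse(oob[:oobn]); err != nil {
		log.Fatal(err)
	}
	log.Printf("%d bytes from %v, received on interface index %d", n, src, cm.IfIndex)
}
```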
(As an aside, I'm fairly certain @bradfitz is correct here: https://github.com/golang/go/issues/28900#issuecomment-440464403)
*I believe this redirection is referring to SetControlMessage in x/net/ipvX. However, there is no indication that this will do anything to configure the 'out-of-band' data. | Documentation,NeedsFix | low | Critical |
413,194,888 | angular | HttpClient sends the wrong Accept header | # 🐞 bug report
### Affected Package
@angular/common/http
### Is this a regression?
Not sure
### Description
The `get<T>` method on an HttpClient will send the following Accept header with a request if you do not override it:
`Accept: application/json, text/plain, */*`
This suggests that it is OK if the server responds with text/plain (i.e., not JSON).
However, if the server responds with something similar to this (truncated), you will get an error.
<pre><code>Content-Type: text/plain; charset=utf-8
59fb693e0302a519e0974379
</code></pre>
Clearly text/plain is not a valid response type, and the default Accept header should be:
`Accept: application/json`
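For reference, the per-call workarounds available today look roughly like this (URLs and response types below are placeholders):
```ts
import { HttpClient, HttpHeaders } from '@angular/common/http';
import { Injectable } from '@angular/core';

@Injectable({ providedIn: 'root' })
export class ThingService {
  constructor(private http: HttpClient) {}

  // Workaround 1: only advertise JSON instead of the default "application/json, text/plain, */*".
  getAsJson() {
    return this.http.get<{ id: string }>('/api/thing', {
      headers: new HttpHeaders({ Accept: 'application/json' }),
    });
  }

  // Workaround 2: if the endpoint really answers with text/plain, ask for text explicitly
  // so HttpClient does not try to JSON.parse the body.
  getAsText() {
    return this.http.get('/api/thing', { responseType: 'text' });
  }
}
```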
## 🔬 Minimal Reproduction
I will add this later if deemed necessary.
## 🔥 Exception or Error
<pre><code>"error": {
"name": "SyntaxError",
"message": "Unexpected token f in JSON at position 2"
}
</code></pre>
## 🌍 Your Environment
**Angular Version:**
<pre><code> _ _ ____ _ ___
/ \ _ __ __ _ _ _| | __ _ _ __ / ___| | |_ _|
/ △ \ | '_ \ / _` | | | | |/ _` | '__| | | | | | |
/ ___ \| | | | (_| | |_| | | (_| | | | |___| |___ | |
/_/ \_\_| |_|\__, |\__,_|_|\__,_|_| \____|_____|___|
|___/
Angular CLI: 7.3.2
Node: 10.14.1
OS: win32 x64
Angular: 7.2.5
... animations, common, compiler, compiler-cli, core, forms
... http, language-service, platform-browser
... platform-browser-dynamic, platform-server, router
Package Version
-----------------------------------------------------------
@angular-devkit/architect 0.13.2
@angular-devkit/build-angular 0.13.2
@angular-devkit/build-optimizer 0.13.2
@angular-devkit/build-webpack 0.13.2
@angular-devkit/core 7.3.2
@angular-devkit/schematics 7.3.2
@angular/cdk 7.3.2
@angular/cli 7.3.2
@angular/material 7.3.2
@ngtools/webpack 7.3.2
@schematics/angular 7.3.2
@schematics/update 0.13.2
rxjs 6.4.0
typescript 3.2.4
webpack 4.29.5
</code></pre> | type: bug/fix,breaking changes,freq1: low,area: common/http,state: confirmed,design complexity: low-hanging,P4 | medium | Critical |
413,205,212 | godot | AnimationPlayer - Nodepath in a track in a animation stored in a PackedScene get their instance name automatically when other instance of that packed scene is reparented in the container scene. | <!-- Please search existing issues for potential duplicates before filing yours:
https://github.com/godotengine/godot/issues?q=is%3Aissue
-->
**Godot version:**
<!-- Specify commit hash if non-official. -->
5e837b3f13ab1e3b31bb8d705e87820fa4eff21e
**OS/device including version:**
<!-- Specify GPU model and drivers if graphics-related. -->
Win7
**Issue description:**
<!-- What happened, and what was expected. -->
An Animation in PackedScene with an AnimationPlayer with a track called:
`EnemyPath2D/PathFollow2D:unit_offset`
is automatically renamed to:
`"../**InstancedPackedSceneName**/EnemyPath2D/PathFollow2D:unit_offset"`
when another copy of the PackedScene is reparented in the scene that contains that PackedScene.
This change means that **only the first packed scene** loaded in the **current scene** can play the animation (because the **"InstancedPackedSceneName"** is stored in the animation), and only this instance can follow that track. The packed scene becomes unusable by itself (that packed scene **doesn't have that track name outside the scene**). This behaviour happens both with the unsaved resource and with the animation stored in an independent tres file.
Note that InstancedPackedSceneName is unique per node (Godot doesn't allow instances with the same name) and relative to every scene (you can change it manually), so **this name should never be added to the tracks of the original PackedScene** or the original resource (Godot should never change animation track names; this is not good behaviour and can break too many things). If you change that track name manually, the animation plays on all nodes, but on the next save Godot destroys the reference again.
**Minimal reproduction project:**
<!-- Recommended as it greatly speeds up debugging. Drag and drop a zip archive to upload it. -->
Will try... | bug,topic:editor,confirmed,topic:animation | low | Critical |
413,208,445 | TypeScript | Go to definition should still work for shorthand property declaration with no local binding | ```ts
declare function foo(option: { [|abcd|]: number }): void;
foo({
abcd/*use*/
});
```
Go to definition at `/*use*/`.
**Expected**: Language service gracefully finds `abcd`.
**Actual**: No definition is available.
This is frustrating because much of the time as I'm actually writing out the type, I want to ***know*** what `abcd` is so I'll jump to its definition. Instead, I can't get any functionality. | Suggestion,Experience Enhancement,Domain: Symbol Navigation | low | Minor |
413,221,129 | angular | router.navigate option to disable URI encoding | <!--🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅
Oh hi there! 😄
To expedite issue processing please search open and closed issues before submitting a new one.
Existing issues often contain information about workarounds, resolution, or progress updates.
🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅-->
# 🚀 feature request
### Relevant Package
This feature request is for @angular/router
### Description
The problem is that router.navigate automatically encodes a URL (using encodeURIComponent(..), I think), but this encoding is not idempotent.
For example, repeated encodes of a path with parameter segments will duplicate the encoding of a '%'.
Consequently, navigating to a path constructed from a prior navigation will cause disaster when
decodeURIComponent(..) is used to recover the parameter object.
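A quick illustration of the non-idempotence in plain JavaScript:
```js
const once = encodeURIComponent('50%');  // "50%25"
const twice = encodeURIComponent(once);  // "50%2525"
decodeURIComponent(twice);               // "50%25", not the original "50%"
```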
### Describe the solution you'd like
An optional parameter to disable URL encoding.
Then the user can avoid this problem by encoding each parameter just once.
This parameter could be an option in router.navigate (easy for a user to notice).
Alternatively, a module option in:
```ts
RouterModule.forRoot(routes, {
  enableTracing: true,
  initialNavigation: true,
  relativeLinkResolution: 'corrected', // Fix Router BUG
  disableUrlEncoding: true // Suggest something like this: but would need documentation
})
```
### Describe alternatives you've considered
1. navigateByUrl: but I would prefer not to construct absolute URL's ( with possibly unexpected limitations)
2. Custom UrlSerializer: but this seems unreasonably difficult for an average user to investigate and implement.
| feature,freq2: medium,area: router,feature: under consideration | medium | Critical |
413,228,519 | TypeScript | Refactor rename file + rename function | ## Search Terms
Rename refactor move file filename and function name matching
## Suggestion
We have a really common pattern: A file of name `MyFile.tsx` and a matching function of the types:
```ts
export default function MyFile() {}
export function MyFile() {}
export const MyFile = () => {}
```
## Use Cases
I find myself constantly doing a two part refactor. First refactor the function, then refactor the filename. Combining the two would save quite a lot of time and encourage better naming everywhere.
| Suggestion,Awaiting More Feedback,feature-request | low | Major |
413,258,155 | flutter | Need a way to make synchronous calls from plugin platform code to Dart code | This came up as part of https://github.com/flutter/flutter/issues/25329:
The Android WebView API allows passing in a delegate that is invoked on the platform thread before a page is about to be loaded, the delegate needs to synchronously return true or false to determine whether to allow the navigation or to block it.
We needed to expose the same functionality in the webview_flutter plugin, ideally by delegating the decision to the Flutter application's Dart code. But this is currently impossible due to the combination of:
1. Communication over platform channels is asynchronous.
2. Messages sent from Dart to the platform channels are only delivered on the platform thread (by posting a task to the message loop).
Even if we were to block the platform thread waiting for the Dart code to make a decision, we would not be able to receive the message back from Dart (as the platform thread is blocked), creating a deadlock.
One potential approach for working around this is to allow "platform channels" that deliver messages to platform code on a separate thread. We could then:
1. Send a message to Dart from the platform thread.
2. Block the platform thread with some thread synchronization primitive.
3. Dart code will send the response to a dedicated thread.
4. The dedicated thread will release the latch that the platform thread is waiting on.
If we do this we should probably require that the "special thread" is always different from the platform thread.
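To make the steps above concrete, a hypothetical Android-side sketch; none of this exists today, and it assumes `backgroundChannel` is a channel whose reply callbacks run on a dedicated non-platform thread (which is exactly the missing capability):
```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicBoolean;
import io.flutter.plugin.common.MethodChannel;

class NavigationDelegateBridge {
  private final MethodChannel backgroundChannel; // hypothetical: replies arrive off the platform thread

  NavigationDelegateBridge(MethodChannel backgroundChannel) {
    this.backgroundChannel = backgroundChannel;
  }

  // Called synchronously on the platform thread by the WebView delegate.
  boolean shouldAllowNavigation(String url) throws InterruptedException {
    final CountDownLatch latch = new CountDownLatch(1);
    final AtomicBoolean allow = new AtomicBoolean(false);
    backgroundChannel.invokeMethod("shouldAllowNavigation", url, new MethodChannel.Result() {
      @Override public void success(Object result) {
        allow.set(Boolean.TRUE.equals(result));
        latch.countDown();
      }
      @Override public void error(String code, String message, Object details) { latch.countDown(); }
      @Override public void notImplemented() { latch.countDown(); }
    });
    // Blocks the platform thread (step 2); only safe if the Dart reply is delivered
    // on the dedicated thread (steps 3 and 4), otherwise this deadlocks.
    latch.await();
    return allow.get();
  }
}
```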
For https://github.com/flutter/flutter/issues/25329 we're currently proceeding with a workaround that makes some security compromises(which means when an app sets a navigation delegate it should only load trusted content). But we should remove that workaround as soon as this issue is resolved.
cc @cbracken @jason-simmons | c: new feature,engine,p: webview,P3,a: plugins,team-engine,triaged-engine | medium | Critical |
413,295,722 | TypeScript | Emit Numerical Literal with different TokenFlags | <!-- 🚨 STOP 🚨 𝗦𝗧𝗢𝗣 🚨 𝑺𝑻𝑶𝑷 🚨
Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Please read the FAQ first, especially the "Common Feature Requests" section.
-->
## Search Terms
emitter, numerical
<!-- List of keywords you searched for before creating this issue. Write them down here so that others can find this suggestion more easily -->
## Suggestion
Currently, all Numerical Literals are emitted as decimal numbers.
It would be nice to allow emitting Hex Literals or Binary Literals, etc.
<!-- A summary of what you'd like to see added or changed -->
## Use Cases
```ts
ts.createLiteral('2', TokenFlags.BinarySpecifier) // 0b10
```
<!--
What do you want to use this for?
What shortcomings exist with current approaches?
-->
That is useful for building a codegen
## Examples
<!-- Show how this would be used and what the behavior would be -->
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [ ] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
Related: https://github.com/Microsoft/TypeScript/pull/29897 | Suggestion,Awaiting More Feedback | low | Critical |
413,295,888 | go | cmd/compile: '_rt0_amd64_windows_lib' is not called when linking with VS2015 | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
go version go1.11.5 windows/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes (go1.11.5)
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
set GOARCH=amd64
set GOBIN=
set GOCACHE=C:\Users\Andreas Jonsson\AppData\Local\go-build
set GOEXE=.exe
set GOFLAGS=
set GOHOSTARCH=amd64
set GOHOSTOS=windows
set GOOS=windows
set GOPATH=C:\Users\Andreas Jonsson\go
set GOPROXY=
set GORACE=
set GOROOT=C:\Go
set GOTMPDIR=
set GOTOOLDIR=C:\Go\pkg\tool\windows_amd64
set GCCGO=gccgo
set CC=gcc
set CXX=g++
set CGO_ENABLED=1
set GOMOD=
set CGO_CFLAGS=-g -O2
set CGO_CPPFLAGS=
set CGO_CXXFLAGS=-g -O2
set CGO_FFLAGS=-g -O2
set CGO_LDFLAGS=-g -O2
set PKG_CONFIG=pkg-config
set GOGCCFLAGS=-m64 -mthreads -fmessage-length=0 -fdebug-prefix-map=C:\Users\ANDREA~1\AppData\Local\Temp\go-build314745786=/tmp/go-build -gno-record-gcc-switches
</pre></details>
### What did you do?
Compile any package with -buildmode=c-archive on Windows using Go1.11.5 and TDM-GCC-64 (mingw64). Link the library with a small application using VisualStudio 2015 and call any C exported Go function in the library.
### What did you expect to see?
Function should be executed just like when the application was built with Mingw.
### What did you see instead?
The application deadlocks when calling the function because runtime waits for the 'runtime_init_wait' object in src\runtime\cgo\gcc_libinit_windows.c:77. It looks like _rt0_amd64_windows_lib is never called if you link with VisualStudio 2015 (I suspect later versions have the same problem)
I also noted a warning during build "main.a(go.o) : warning LNK4078: multiple '.text' sections found with different attributes (60600060)" I'm not sure if this is the cause of the issue...?
[Go_minimal_c-archive.zip](https://github.com/golang/go/files/2893013/Go_minimal_c-archive.zip) | NeedsInvestigation,compiler/runtime | low | Critical |
413,316,297 | kubernetes | Tight retry loops should not cause cascading failure of the cluster | <!-- Please only use this template for submitting enhancement requests -->
**What happened**:
Problem found by accident during cluster stability testing. The application consumes nearly 7MB of memory per pod.
However, the Helm chart was written in such a way that the deployment sets limits that are too low; see the snippet:
```yaml
containers:
  ......
  resources:
    limits:
      cpu: 200m
      memory: 4Mi
    requests:
      cpu: 100m
      memory: 4Mi
```
_A pod was started; once initialized, it started consuming more memory than the limit allowed (7MB > 4MB) and the pod was deleted by the system.
This was happening fast enough to crash the hosting VM, and with many replicas all the worker nodes crashed within a few minutes._
**Impact**:
It will lead to a crash of the whole cluster.
**What needs to be enhanced in K8s**
And here's the problem: we have no control over the chart contents in this area, and a logic error may lead to a crash of the whole cluster. There must be some mechanism introduced which prevents the system from entering such a tight loop of killing and restarting pods whose memory limits are set incorrectly.
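(For completeness, the immediate chart-side mitigation is simply to raise the limit above the application's real footprint; the values below are illustrative only.)
```yaml
resources:
  requests:
    cpu: 100m
    memory: 16Mi   # comfortably above the ~7MB the pod actually uses
  limits:
    cpu: 200m
    memory: 16Mi
```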
**Environment**:
Kubernetes version (use `kubectl version`): v1.12.2 and v1.12.2.
It should be a common issue regardless of the release.
/sig node
/kind feature
/sig architecture
/sig scheduling
/sig cluster-lifecycle | priority/important-soon,sig/scheduling,area/reliability,sig/api-machinery,kind/feature,sig/apps,sig/architecture,lifecycle/frozen | medium | Critical |
413,361,365 | flutter | flutter analyze should handle analyzer plugins | I created an [analyzer_plugin to enforce Flutter Style Guide](https://github.com/a14n/flutter_style_guide_analyzer_plugin) and [registered it in flutter's packages](https://github.com/a14n/flutter/commit/558ed5910cfe32b2c2457153794a9926dcd77a89).
VSCode displays correctly the errors triggered by my plugin but `flutter analyze --flutter-repo` don't show any errors. (`flutter analyze --flutter-repo --watch` has the same behaviour)
Digging into the code, `flutter analyze` stops once it receives an `{"isAnalyzing": false}` event. But this event is sent before the plugin responds.
I'm not sure what the best way to fix that is. Advice would be very welcome.
/cc @devoncarew @scheglov @bwilkerson | c: new feature,tool,dependency: dart,P3,team-tool,triaged-tool | medium | Critical |
413,428,074 | rust | Does the MIR generated for match still build in order-deps visible to MIR-borrowck? | Spawned off of PR #57609
The above PR has an interesting unit test that checks that we aren't assuming too much order dependence; the test (from file named "match-cfg-fake-edges.rs") looks like this ([play](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2018&gist=f48fdf189e92c85e03ff3142f2aaa73c)):
```rust
fn all_previous_tests_may_be_done(y: &mut (bool, bool)) {
let r = &mut y.1;
// We don't actually test y.1 to select the second arm, but we don't want
// borrowck results to be based on the order we match patterns.
match y {
(false, true) => 1, //~ ERROR cannot use `y.1` because it was mutably borrowed
(true, _) => { r; 2 } // (use of "above" borrow)
(false, _) => 3,
};
}
```
while reviewing that test case, I wondered whether order-dependence, as implicitly defined by the above test, would also include this variant:
```rust
fn any_tests_may_be_done(y: &mut (bool, bool)) {
let r = &mut y.1;
// We don't test y.1 to select the first arm. Does this present
// an instance where borrowck results are based on the order we
// match patterns?
match y {
(true, _) => { r; 2 } // (use of "below" borrow)
(false, true) => 1, //~ ERROR cannot use `y.1` because it was mutably borrowed
(false, _) => 3,
};
}
```
Today, `all_previous_tests_may_be_done` is rejected (with a warning under NLL migration mode, and a hard error if you opt into `#![feature(nll)]`), but `any_tests_may_be_done` is accepted.
Is this an internally consistent policy?
----
Maybe my question is most simply put like this:
* since we are adding some "fake edges" from some match-arms to the other arms that follow (to try to avoid prematurely tying ourselves to a particular code-generation strategy for MIR, in terms of what inputs MIR-borrowck will accept), ...
* shouldn't we be adding even more such "fake edges" that go from the bottom arms *back up* to the ones before them?
That would increase our flexibility for MIR code-generation, in theory, even further, right? | P-medium,T-compiler,A-NLL,NLL-complete | low | Critical |
413,443,522 | TypeScript | A behavior of conditional types is changed (regression) | @RyanCavanaugh @weswigham @ahejlsberg Caused by #27697
**TypeScript Version:** 3.4.0-dev.20190222
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:**
**Code**
```ts
type Attrs = Record<string, string | EventListener | null | undefined>;
type Children =
| Children.Void
| Children.Text
| Children.Collection
| Children.Record;
namespace Children {
export type Void = undefined;
export type Text = string;
export interface Collection extends ReadonlyArray<El> { }
export type Record = { [field: string]: El; };
}
interface El<
T extends string = string,
E extends Element = Element,
C extends Children = Children,
> {
readonly tag?: T;
readonly element: E;
children: Relax<C>;
}
type Relax<C extends Children> = C extends Children.Text ? Children.Text : C;
declare function f<C extends Children>( children: C): C;
declare function f (attrs: Attrs, ): void;
declare function f<C extends Children>(attrs: Attrs, children: C): C;
f({ a: 0 as any as El<'a', HTMLAnchorElement> }); // ok
f({ a: 0 as any as El<'a', HTMLAnchorElement, Children> }); // ok
f({ a: 0 as any as El<'a', HTMLAnchorElement, Children.Void> }); // error since 3.4.0-dev.20190222
f({ a: 0 as any as El<'a', HTMLAnchorElement, Children.Text> }); // error since 3.4.0-dev.20190222
```
**Expected behavior:**
pass
**Actual behavior:**
error
**Playground Link:** http://www.typescriptlang.org/play/index.html#src=type%20Attrs%20%3D%20Record%3Cstring%2C%20string%20%7C%20EventListener%20%7C%20null%20%7C%20undefined%3E%3B%0D%0Atype%20Children%20%3D%0D%0A%20%20%7C%20Children.Void%0D%0A%20%20%7C%20Children.Text%0D%0A%20%20%7C%20Children.Collection%0D%0A%20%20%7C%20Children.Record%3B%0D%0Anamespace%20Children%20%7B%0D%0A%20%20export%20type%20Void%20%3D%20undefined%3B%0D%0A%20%20export%20type%20Text%20%3D%20string%3B%0D%0A%20%20export%20interface%20Collection%20extends%20ReadonlyArray%3CEl%3E%20%7B%20%7D%0D%0A%20%20export%20type%20Record%20%3D%20%7B%20%5Bfield%3A%20string%5D%3A%20El%3B%20%7D%3B%0D%0A%7D%0D%0Ainterface%20El%3C%0D%0A%20%20T%20extends%20string%20%3D%20string%2C%0D%0A%20%20E%20extends%20Element%20%3D%20Element%2C%0D%0A%20%20C%20extends%20Children%20%3D%20Children%2C%0D%0A%20%20%3E%20%7B%0D%0A%20%20readonly%20tag%3F%3A%20T%3B%0D%0A%20%20readonly%20element%3A%20E%3B%0D%0A%20%20children%3A%20Relax%3CC%3E%3B%0D%0A%7D%0D%0Atype%20Relax%3CC%20extends%20Children%3E%20%3D%20C%20extends%20Children.Text%20%3F%20Children.Text%20%3A%20C%3B%0D%0A%0D%0Adeclare%20function%20f%3CC%20extends%20Children%3E(%20%20%20%20%20%20%20%20%20%20%20%20%20%20children%3A%20C)%3A%20C%3B%0D%0Adeclare%20function%20f%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20(attrs%3A%20Attrs%2C%20%20%20%20%20%20%20%20%20%20%20%20)%3A%20void%3B%0D%0Adeclare%20function%20f%3CC%20extends%20Children%3E(attrs%3A%20Attrs%2C%20children%3A%20C)%3A%20C%3B%0D%0Af(%7B%20a%3A%200%20as%20any%20as%20El%3C'a'%2C%20HTMLAnchorElement%3E%20%7D)%3B%0D%0Af(%7B%20a%3A%200%20as%20any%20as%20El%3C'a'%2C%20HTMLAnchorElement%2C%20Children%3E%20%7D)%3B%0D%0Af(%7B%20a%3A%200%20as%20any%20as%20El%3C'a'%2C%20HTMLAnchorElement%2C%20Children.Void%3E%20%7D)%3B
**Related Issues:** <!-- Did you find other bugs that looked similar? -->
| Needs Investigation | low | Critical |
413,494,435 | TypeScript | Results of conditional types are not comparable (regression) | @RyanCavanaugh @weswigham @ahejlsberg Caused by #27697
**TypeScript Version:** 3.4.0-dev.20190222
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:**
**Code**
```ts
type Falsy = undefined | false | 0 | '' | null | void;
type TEq<T, U> = [T] extends [U] ? [U] extends [T] ? true : false : false;
type If<S, T, U> = S extends Falsy ? U : T;
type StrictExtract<T, U> = T extends U ? U extends T ? T : never : never;
type StrictExclude<T, U> = T extends StrictExtract<T, U> ? never : T;
type ExcludeProp<T, V> =
{ [Q in { [P in keyof T]: If<TEq<V, never>, T[P] extends never ? never : P, If<Includes<T[P], V>, never, P>>; }[keyof T]]: T[Q]; };
type DeepExcludeProp<T, V, E extends object | undefined | null = never> =
T extends E ? T :
T extends V ? never :
T extends readonly any[] | Function ? T :
T extends object ? ExcludeProp<{ [Q in { [P in keyof T]: If<TEq<V, never>, T[P] extends never ? P : P, If<Includes<T[P], V>, T[P] extends E ? P : never, P>>; }[keyof T]]: StrictExclude<DeepExcludeProp<T[Q], V, E>, {}>; }, never> :
T;
type Includes<T, U> = true extends (T extends U ? true : never) ? true : false;
type AD = { a: boolean; b: { a: boolean; b: boolean[]; c: undefined; d: undefined[]; e: boolean | undefined; f: Array<boolean | undefined>; }; c: { a: undefined; }; };
type a = DeepExcludeProp<AD, undefined, AD['b']>;
type b = ExcludeProp<AD, AD['c']>;
type c = {
a: boolean;
b: {
a: boolean;
b: boolean[];
c: undefined;
d: undefined[];
e: boolean | undefined;
f: Array<boolean | undefined>;
};
};
type d = TEq<a, c>;
type e = TEq<b, c>;
type f = TEq<a, b>;
```
**Expected behavior:**
```ts
type d = TEq<a, c>; // true
type e = TEq<b, c>; // true
type f = TEq<a, b>; // true
```
**Actual behavior:**
```ts
type d = TEq<a, c>; // true
type e = TEq<b, c>; // true
type f = TEq<a, b>; // false
```
**Playground Link:** <!-- A link to a TypeScript Playground "Share" link which demonstrates this behavior -->
**Related Issues:** #30047
| Needs Investigation | low | Minor |
413,501,219 | flutter | CupertinoTabView is missing initialRoute | The `CupertinoTabView` doesn't have an `initialRoute` the way `CupertinoApp` has.
I don't see a way to implement this behaviour properly without a property like this.
```
[✓] Flutter (Channel master, v1.2.2-pre.22, on Mac OS X 10.14.2 18C54, locale en-AT)
• Flutter version 1.2.2-pre.22 at /Users/enyo/local/flutter
• Framework revision dd23be3936 (3 days ago), 2019-02-19 21:35:31 -0800
• Engine revision f45572e95f
• Dart version 2.1.2 (build 2.1.2-dev.0.0 c92d5ca288)
[✓] Android toolchain - develop for Android devices (Android SDK version 28.0.3)
• Android SDK at /Users/enyo/Library/Android/sdk
• Android NDK location not configured (optional; useful for native profiling support)
• Platform android-28, build-tools 28.0.3
• Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1024-b01)
• All Android licenses accepted.
[!] iOS toolchain - develop for iOS devices (Xcode 10.1)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Xcode 10.1, Build version 10B61
✗ Verify that all connected devices have been paired with this computer in Xcode.
If all devices have been paired, libimobiledevice and ideviceinstaller may require updating.
To update with Brew, run:
brew update
brew uninstall --ignore-dependencies libimobiledevice
brew uninstall --ignore-dependencies usbmuxd
brew install --HEAD usbmuxd
brew unlink usbmuxd
brew link usbmuxd
brew install --HEAD libimobiledevice
brew install ideviceinstaller
• ios-deploy 2.0.0
• CocoaPods version 1.6.0.beta.2
[✓] Android Studio (version 3.1)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin version 29.0.1
• Dart plugin version 173.4700
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1024-b01)
[✓] VS Code (version 1.31.1)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 2.23.0
[✓] Connected device (1 available)
• Destroyer • 60ca238f17cb69a3d45f4708dc55c47e4e8b532e • ios • iOS 12.1.4
! Doctor found issues in 1 category.
```
| c: new feature,framework,f: cupertino,f: routes,P2,team-design,triaged-design | low | Major |
413,510,071 | vscode | Workspace-level environment variable *definitions* | Often I find myself wanting to associate a workspace with an environment variable (e.g. server domain name, development/production tag) but do not want to create a global system environment variable or pollute ```~/.bash_profile``` with such a variable. I would like such an environment variable to be associated only with the workspace "namespace" and be used for tasks, launch configs and integrated terminals started in that workspace.
I would like to suggest adding an env section to the .code-workspace file format similar to that shown below. It is crucial that a ```.env``` file argument be supported to allow environment variables to be excluded from source control (e.g. non-source controlled variables could be placed in ```.env``` file which is specified in ```.gitignore```).
Then different "flavours" of the same workspace (each using different clones of the same repos) could be used on the same computer - great for side-by-side comparison debugging or temporarily experimenting with a different setup etc.
```json
{
"folders": [
{
"name": "Api",
"path": ".."
},
{
"name": "Processing",
"path": "../../client/Processing"
}
],
"env": {
"NODE_ENV": "development",
"DOMAIN": "backend.example.com"
},
"launch": {
"configurations": [],
"compounds": [
{
"name": "Api + Ws",
"configurations": [
"Api",
"Ws",
]
}
]
}
}
```
| feature-request,workbench-multiroot | medium | Critical |
413,520,097 | flutter | Add a test that tests going from native view controller to flutter view controller to native etc. | From comment on https://github.com/flutter/flutter/pull/27712
> adding FlutterViewController pushing a native view controller pushing a FlutterViewController and the second FlutterViewController doesn't affect the first one.
We should generally test some more navigation scenarios like this and monitor memory usage, state, etc. | a: tests,team,platform-ios,engine,a: existing-apps,P2,team-ios,triaged-ios | low | Minor |
413,536,149 | flutter | AutomaticKeepAliveClientMixin does not work with BottomNavigationBar | I have a `BottomNavigationBar` with 3 tabs, one of which is a stateful widget with the `AutomaticKeepAliveClientMixin`, `wantKeepAlive` getter returning `true` and the `build` method calling `super.build(context)` at the beginning.
However, every time I switch tabs, the Widget that is supposed to be kept alive is always re-created.
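For reference, a minimal sketch of the setup described above (widget names are placeholders):
```dart
import 'package:flutter/material.dart';

class KeepAliveTab extends StatefulWidget {
  @override
  _KeepAliveTabState createState() => _KeepAliveTabState();
}

class _KeepAliveTabState extends State<KeepAliveTab>
    with AutomaticKeepAliveClientMixin<KeepAliveTab> {
  @override
  bool get wantKeepAlive => true;

  @override
  Widget build(BuildContext context) {
    super.build(context); // required by the mixin
    return const Text('Tab content that should be kept alive');
  }
}
```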
[Others have the same issue too](https://stackoverflow.com/questions/49439047/how-to-preserve-widget-states-in-flutter-when-navigating-using-bottomnavigation). Does anyone have a solution to this problem? | framework,f: material design,has reproducible steps,P2,found in release: 3.3,found in release: 3.5,team-design,triaged-design | low | Critical |
413,551,491 | godot | StyleBoxTexture Tile and Tile Fit modes not working correctly on GLES2 | **Godot version:**
3.1 Beta 6
**OS/device:** Dell Inspiron 620, Windows 7 Home Premium 64-bits
**GPU:** AMD Radeon HD 6450
**Issue description:**
Tiling doesn't seem to work on custom StyleBoxTexture on GLES2
**Steps to reproduce:**
Create a new project on GLES2. Create a scene with a Control node and save. Create a panel node. Go to custom styles and create a new StyleBoxTexture and set the default godot icon as a texture. Change the axis stretch to Tile or TileFit.
**Minimal reproduction project:**
[TestStyleBoxTexture.zip](https://github.com/godotengine/godot/files/2895495/TestStyleBoxTexture.zip) | bug,topic:rendering,confirmed,topic:gui,topic:2d | medium | Major |
413,578,697 | rust | Document expected relationships between FromIterator, Default, and Extend | Somebody who creates a type that implements both `Default` and `Extend` should be aware that their type can be created through `Iterator::unzip`. In fact, any such type should probably also implement `FromIterator`, with the semantics that:
```rust
let mut a = TheType::default()
a.extend(iter);
```
is identical to `iter.collect()`.
(I wonder why `unzip` didn't decide to use `FromIterator + Extend` instead?) | C-enhancement,A-collections,T-libs-api,A-docs,A-iterators | low | Minor |
413,581,648 | TypeScript | Intellisense sometimes fails to show anything for Node modules | <!-- Please search existing issues to avoid creating duplicates. -->
<!-- Also please test using the latest insiders build to make sure your issue has not already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- Use Help > Report Issue to prefill these. -->
- VSCode Version: 1.32.0-insider
- OS Version: Windows 7 64 bit
Steps to Reproduce:
1. In a new Node project, type `const http = require('http');` on line 1
2. Copy and paste that line to line 2 (must copy and paste to reproduce most of the time)
3. Edit line 2's `http` to `fs` (doesn't have to be http or fs, just any two modules)
4. Now on line 3 type `http.` and you will see that Intellisense is not working.
Here is a video of the entire bug:
[https://www.youtube.com/watch?v=kIshFgrrYyw](https://www.youtube.com/watch?v=kIshFgrrYyw)
<!-- Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Pretty sure extensions need to be on for it to work properly in the first place right?
| Bug | low | Critical |
413,582,194 | go | net/http/httputil: document that ReverseProxy doesn't remove Alt-Svc header | ### What version of Go are you using (`go version`)?
<pre>
go version devel +27b9571de8 Fri Feb 22 19:30:43 2019 +0000 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes.
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/ej/Library/Caches/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/ej/gopath"
GOPROXY=""
GORACE=""
GOROOT="/Users/ej/go"
GOTMPDIR=""
GOTOOLDIR="/Users/ej/go/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/xh/6w5vj0s12y97yt81179m2ddm0000gq/T/go-build981855942=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>
### What did you do?
When using `httputil.ReverseProxy`, the backend's `Alt-Svc` header is passed through to the original client. I do not believe it should be. Passing this header back to the original client can cause the client to connect to some OTHER server, and not the reverse proxy on a future request. This is generally not the purpose.
The `httputil.ReverseProxy` attempts to remove all hop-by-hop headers. I *think* `Alt-Svc` header should be treated as a hop-by-hop header. From [RFC7838 Section 4](https://tools.ietf.org/html/rfc7838#section-4): "The ALTSVC frame is processed hop-by-hop". Note this is about the *HTTP/2 ALTSVC* frame, and not the *HTTP/1.1 Alt-Svc* header. However, I believe it should be applied there as well.
Fixing this requires a one line addition to the `hopHeaders` slice: https://github.com/golang/go/blob/a14ed2a82a1563ba89e1f22ab517bf3c9abe416f/src/net/http/httputil/reverseproxy.go#L145
I'm happy to submit this change and a test if this is a good idea.
As an example, run this example proxy:
```go
package main
import (
"flag"
"log"
"net/http"
"net/http/httputil"
"net/url"
)
func main() {
addr := flag.String("addr", "[::1]:8080", "listen addr")
target := flag.String("target", "https://www.google.com/", "target backend")
flag.Parse()
parsedTarget, err := url.Parse(*target)
if err != nil {
panic(err)
}
// start a proxy that replaces everything before the path
proxy := &httputil.ReverseProxy{
Director: func(r *http.Request) {
r.Host = ""
r.URL.Scheme = parsedTarget.Scheme
r.URL.Host = parsedTarget.Host
},
}
log.Printf("listening on %s; proxying to %s\n", *addr, *target)
err = http.ListenAndServe(*addr, proxy)
if err != nil {
panic(err)
}
}
```
Then run `curl --verbose http://localhost:8080/` to make a request.
### What did you expect to see?
I would expect NOT to see the `Alt-Svc` header returned by the response.
### What did you see instead?
The original `Alt-Svc` header from Google's servers:
```
< HTTP/1.1 200 OK
< Alt-Svc: quic=":443"; ma=2592000; v="44,43,39"
...
```
### Related things
* Issue #18341 is discussing supporting HTTP Status 421 Misdirected Request. Issue #25804 is a timed out issue asking for http2 to support handling Alt-Svc. Both of these would allow ReverseProxy to handle this header internally, and are not related to removing it here.
* Caddy had an issue about this, and decided to strip it: https://github.com/mholt/caddy/issues/1051 | Documentation,NeedsInvestigation | low | Critical |
413,599,836 | flutter | Code reuse from Cookbooks for API Docs | The [cookbook](https://flutter.dev/docs/cookbook) pages from the website contain a lot of snippets and sample apps for individual widgets. There are also assets like screenshots and animations that can be re-used for API docs.
Related to #21136 | team,framework,d: api docs,P2,team-framework,triaged-framework | low | Minor |
413,625,363 | pytorch | improved assert message in the case of "CUDA error: device-side assert triggered" | When pytorch spits out `CUDA error: device-side assert triggered` would it be possible to add a brief note, such as: "It's not possible to recover. Program/kernel restart is required to continue"
So that users won't waste time trying to recover or re-run their code, when we know ahead of time they won't be able to.
And a bonus feature is to add a recommendation to re-run with `CUDA_LAUNCH_BLOCKING=1` env var for debugging the issue.
Thank you.
I have just added in my code (in addition to the normal exception):
```
if "device-side assert triggered" in str(e):
warn("""When 'device-side assert triggered' error happens, it's not possible to recover and you must restart the kernel to continue. Use os.environ['CUDA_LAUNCH_BLOCKING']="1" before restarting to debug""")
```
but ideally it should be part of the pytorch exception in first place.
cc @ngimel | module: bootcamp,module: cuda,module: molly-guard,triaged | low | Critical |
413,668,147 | go | cmd/vet: Printf format %.T has unrecognized flag | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.11.2 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/demouser/Library/Caches/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/demouser/workspace/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/Cellar/go/1.11.2/libexec"
GOTMPDIR=""
GOTOOLDIR="/usr/local/Cellar/go/1.11.2/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/rf/xcy7qhgd7nldd0txw__11gpw0000gn/T/go-build280768088=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>
### What did you do?
<!--
If possible, provide a recipe for reproducing the error.
A complete runnable program is good.
A link on play.golang.org is best.
-->
https://play.golang.org/p/m0ahZ7ofAH_f
```
package main
import "fmt"
func main() {
fmt.Printf("arg1=%s, arg2=%.T, arg3=%.T, arg4=%.T, arg=%s", "arg1", 2.123, true, "arg4", "arg5")
}
```
### What did you expect to see?
```
arg1=arg1, arg2=, arg3=, arg4=, arg=arg5
Program exited.
```
### What did you see instead?
```
prog.go:6: Printf format %.T has unrecognized flag .
Go vet exited.
arg1=arg1, arg2=, arg3=, arg4=, arg=arg5
Program exited.
``` | NeedsInvestigation,Analysis | low | Critical |
413,678,320 | vue-element-admin | An approach for scenarios where caching does not fit | Problem description:
The current caching solution is not suitable for some business cases, for example article detail pages like /article/1 and /article/2: their routes are different but they map to the same component, so the component name is the same. As mentioned before, keep-alive's include can only cache by component name, so this breaks down. (The above is taken from the help documentation.)
Goal: let keep-alive and include coexist, cache the individual instances of the same component for cases like /article/1 and /article/2, and **force a refresh** when /article/1 or /article/2 is closed and opened again.
Main idea: currently the route key in vue-element-admin's AppMain.vue file is
this.$route.fullPath, so for a page like /article/1 the key is the same every time it is closed and reopened, and therefore it cannot be refreshed.
AppMain.vue is modified as follows:
```js
key() {
  let marker = window.sessionStorage.getItem(this.$route.fullPath)
  if (!marker) {
    window.sessionStorage.setItem(this.$route.fullPath, new Date())
    marker = window.sessionStorage.getItem(this.$route.fullPath)
  }
  return this.$route.fullPath + marker
}
```
Then add the following to the DEL_VISITED_VIEW method in tagsView.js:
```js
if (!!view.fullPath) {
  window.sessionStorage.removeItem(view.fullPath)
}
```
That is, when a top tag is closed, the corresponding mapping is cleared.
In my project this mapping is simply maintained in sessionStorage; it could also be managed with Vuex.
My project currently solves the problem this way and I have not found any obvious issues. The inelegant part is that cachedViews in
`<keep-alive :include="cachedViews">` contains duplicate component names, but this does not affect functionality: include is only used to decide whether a component is cached at all, while component instances are still distinguished by key. Can this approach solve the problem? Is there a better solution? Please give it a try, thanks. | enhancement :star:,feature | low | Minor |
413,682,106 | tensorflow | [TF2.0] Change default types globally | Hello everyone,
I made the same request a while ago at [tensorflow/community](https://github.com/tensorflow/community/issues/23). The similar question was raised before at [tensorflow/tensorflow#9781](https://github.com/tensorflow/tensorflow/issues/9781), where maintainers argued that GPU is much faster on float32, default type cannot (should not) be changed because of backwards compatibility reasons and cetera.
The thing is that precision is very important for algorithms that use operations like Cholesky decompositions, linear solvers, etc. It becomes very tedious to specify the type everywhere, and it gets even worse when you start using other frameworks or small libraries that follow the standard type settings; sometimes they become useless just because of type incompatibilities. The policy of **"changing types locally solves your problems"** becomes cumbersome.
It would be great to have methods `tf.set_default_float` and `tf.set_default_int` in TensorFlow 2.0 and I believe that such a small change will make TensorFlow more user friendly.
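To make the request concrete, the usage I have in mind would look roughly like this (the functions below are the proposed ones and do not exist in TensorFlow today):
```python
import tensorflow as tf

# Proposed API, not existing today:
tf.set_default_float(tf.float64)
tf.set_default_int(tf.int64)

x = tf.constant(1.0)  # would now be created as float64 instead of float32
n = tf.constant(1)    # would now be created as int64 instead of int32
```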
Kind regards,
Artem Artemev | stat:awaiting tensorflower,type:feature,comp:apis | medium | Critical |
413,701,857 | godot | Not possible to set TabContainer title in the editor without changing node names | Godot 3.0.6
Godot 3.1 beta6
Currently, `TabContainer` and `Tabs` don't expose any option to choose the title of their tabs in the editor. The only way is to rename child nodes. This might work out somehow, but I find it inconvenient because nodes have naming restrictions, and their names are more like identifiers than user-facing strings. As such, these are different use cases which I don't want to mix.
For example, I make an app which has a `ScriptEditor` scene and `CharacterEditor` scene as child of a tab container. This causes tab titles to be "ScriptEditor" and "CharacterEditor", but I want them to be "Script" and "Character" instead.
My only options for now are to set this by script with `set_tab_title()`, or to somehow abuse the localization system so that it translates "ScriptEditor" into "Script", which is not what I want either. | topic:editor,usability,topic:gui | low | Major |
413,721,433 | rust | Closure/Pin: &mut captured as value | rustc -vV = `1.34.0-nightly (146aa60f3 2019-02-18)`
MRE:
```
fn test() {
use core::pin::Pin;
fn take_fn_mut<F>(f: F)
where F: FnMut() -> ()
{}
let mut x = 0i32;
let reference = &mut x;
let fn_mut = || { Pin::new(reference); };
take_fn_mut(fn_mut);
}
error[E0525]: expected a closure that implements the `FnMut` trait, but this closure only implements `FnOnce`
--> src/lib.rs:10:18
|
10 | let fn_mut = || { Pin::new(reference); };
| ^^ --------- closure is `FnOnce` because it moves the variable `reference` out of its environment
| |
| this closure implements `FnOnce`, not `FnMut`
11 | take_fn_mut(fn_mut);
| ----------- the requirement to implement `FnMut` derives from here
```
Playground: https://play.rust-lang.org/?version=nightly&mode=debug&edition=2018&gist=b21dc69d7b5e018992620d82995ccd43
3 local fixes:
```diff
- let fn_mut = || { Pin::new(reference); };
+ let fn_mut = || { Pin::new(&mut *reference); };
```
```diff
- let fn_mut = || { Pin::new(reference); };
+ let fn_mut = || { Pin::new(reference as &_); };
```
```diff
- let fn_mut = || { Pin::new(reference); };
+ let fn_mut = || { Pin::<&mut i32>::new(reference); };
```
These "fixes" are strange because [Pin::new](https://doc.rust-lang.org/nightly/std/pin/struct.Pin.html#method.new) accepts `pointer: P` whereas in our case `P` should be `&mut i32`, but specializing `P` manually to `Pin::<&mut i32>::new` solves the issue. Why?
Referenced article: https://bluss.github.io/rust/fun/2015/10/11/stuff-the-identity-function-does/ | C-enhancement,A-closures,T-compiler | low | Critical |
413,754,947 | godot | controls grab focus even when clipped (and other things) | Godot version: 3.0.6
Window 7 Home 64 bit;
Controls seem to be grabbing focus even if they are clipped by their parent control.
This shouldn't happen, since clipped content isn't going to be seen, which makes focusing it useless.
To reproduce..
1. Download and open project
2. Run list scene
3. Navigate to the down arrow and press enter until you're at 7
4. Navigate past the up and down arrows and see that it's not being clipped
[Clipped Control Focus Bug.zip](https://github.com/godotengine/godot/files/2897373/Clipped.Control.Focus.Bug.zip)
here is a example of it doing the same thing without clip

| bug,confirmed,topic:gui | low | Critical |
413,767,341 | pytorch | Automatic aggregation of a mix of sparse and dense gradients is not supported yet | The gradient of [BatchGather](https://caffe2.ai/docs/operators-catalogue.html#batchgather) can't be mixed with gradients from other paths.
Any way to work around this? https://github.com/pytorch/pytorch/blob/0799a81cb78da35b897136e6a2a2fb3ea3f75ee7/caffe2/python/core.py#L851
```
INFO train.py: 144: Building model: generalized_rcnn
WARNING cnn.py: 25: [====DEPRECATE WARNING====]: you are creating an object from CNNModelHelper class which will be deprecated soon. Please use ModelHelper object with brew module. For more information, plea
se refer to caffe2.ai and python/brew.py, python/brew_test.py for more information.
Traceback (most recent call last):
File "tools/train_net.py", line 132, in <module>
main()
File "tools/train_net.py", line 114, in main
checkpoints = detectron.utils.train.train_model()
File "/home/yihuihe/detectron/detectron/utils/train.py", line 53, in train_model
model, weights_file, start_iter, checkpoints, output_dir = create_model()
File "/home/yihuihe/detectron/detectron/utils/train.py", line 145, in create_model
model = model_builder.create(cfg.MODEL.TYPE, train=True)
File "/home/yihuihe/detectron/detectron/modeling/model_builder.py", line 124, in create
return get_func(model_type_func)(model)
File "/home/yihuihe/detectron/detectron/modeling/model_builder.py", line 89, in generalized_rcnn
freeze_conv_body=cfg.TRAIN.FREEZE_CONV_BODY
File "/home/yihuihe/detectron/detectron/modeling/model_builder.py", line 229, in build_generic_detection_model
optim.build_data_parallel_model(model, _single_gpu_build_func)
File "/home/yihuihe/detectron/detectron/modeling/optimizer.py", line 42, in build_data_parallel_model
model.AddGradientOperators(all_loss_gradients)
File "/usr/local/lib/python2.7/dist-packages/caffe2/python/model_helper.py", line 335, in AddGradientOperators
self.grad_map = self.net.AddGradientOperators(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/caffe2/python/core.py", line 1840, in AddGradientOperators
self._net.op[skip:], ys)
File "/usr/local/lib/python2.7/dist-packages/caffe2/python/core.py", line 1107, in GetBackwardPass
return ir.GetBackwardPass(ys)
File "/usr/local/lib/python2.7/dist-packages/caffe2/python/core.py", line 988, in GetBackwardPass
forward_op_idx)
File "/usr/local/lib/python2.7/dist-packages/caffe2/python/core.py", line 876, in DoGradientAccumulation
err
RuntimeError: Gradients for param ''gpu_3/U'' failed to verify: Automatic aggregation of a mix of sparse and dense gradients is not supported yet
``` | caffe2 | low | Critical |
413,775,819 | godot | Multidimensional arrays from GDScript can't be passed to C# | **Godot 3.1 Beta 5**
If you pass an array to C#, it must be received as an object array. Passing a 2D array means it needs to be received as object[][]. But you simply can't access it.
```
object[][] map = array from GDScript;
map.Length; // returns proper length (width)
map[0].Length; // returns 0, always (instead of height)
```
If you
```
GD.Print(map);
```
It'll print the entire 2D array correctly. But
```
GD.Print(map[0])
```
Won't print anything. You also can't access any value inside it. | topic:dotnet | low | Minor |
413,792,455 | TypeScript | Document.onmousewheel event was removed from typescript | <!-- 🚨 STOP 🚨 𝗦𝗧𝗢𝗣 🚨 𝑺𝑻𝑶𝑷 🚨
Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Even if you think you've found a *bug*, please read the FAQ first, especially the Common "Bugs" That Aren't Bugs section!
Please help us by doing the following steps before logging an issue:
* Search: https://github.com/Microsoft/TypeScript/search?type=Issues
* Read the FAQ: https://github.com/Microsoft/TypeScript/wiki/FAQ
Please fill in the *entire* template below.
-->
<!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. -->
**TypeScript Version:** 3.4.0-dev.201xxxxx
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:**
**Code**
```ts
// A *self-contained* demonstration of the problem follows...
// Test this by running `tsc` on the command-line, rather than through another build tool such as Gulp, Webpack, etc.
```
**Expected behavior:**
document.onmousewheel should be existed.
**Actual behavior:**
Intellisense & on build getting an error: Property 'onmousewheel' does not exist on type 'Document'.
**Playground Link:** <!-- A link to a TypeScript Playground "Share" link which demonstrates this behavior -->
**Related Issues:** <!-- Did you find other bugs that looked similar? -->
I've noticed that this event was removed on the following commit from lib.dom.d.ts:
https://github.com/Microsoft/TypeScript/commit/79e5e79ef7724066889fbce97e9e6d6db4e746b5#diff-46fd87925e4552c166ec188712741c3f
I was wondering if it's a mistake or if there was a reason for the removal. | Bug,Domain: lib.d.ts | low | Critical |
413,806,781 | flutter | BitmapRegionDecoder equivalent | Is there a flutter equivalent to Android's `BitmapRegionDecoder`? I have very large images and I don't want to load the entire image until it is needed. | c: new feature,framework,a: images,P2,team-framework,triaged-framework | low | Minor |
413,812,646 | TypeScript | Omit producing "JavaScript heap out of memory" with v3.3 |
**TypeScript Version:**
- 3.3.0-rc
- 3.3.3333
- 3.4.0-dev.20190223
**Search Terms:**
Omit
**Code**
```ts
// using "@types/react": "^16.8.4"
type Omit<T, K extends keyof T> = Pick<T, Exclude<keyof T, K>>;
type XXX = Omit<React.HTMLAttributes<HTMLElement>, 'title'>; // <= this alias is not working
```
**Expected behavior:**
It should work as in earlier versions; last tested working with v3.2.4.
**Actual behavior:**
The compiler throws "JavaScript heap out of memory", and the same happens with VS Code IntelliSense (it gets stuck for minutes without a proper answer).
**Playground Link:**
Tested in the 3.3 playground (by pasting @types/react at the bottom), but it's too big to share.
**Related Issues:**
| Suggestion,Needs Proposal | low | Critical |
413,833,005 | godot | Inconsistent behaviour of setup methods between GDScript and GDNative (C++) - values and default values of properties |
**Godot version:** v3.1.beta6.official 64b
**OS/device including version:** Linux 64b
---
**Issue description:**
By *changed value* and *default value* I mean the values in the text fields under *Script Variables* in the editor.
GDNative: In `_init`, property values are not initialized at all. In `_ready`, properties that should have default values are still not initialized to those defaults, while properties with changed values are handled properly.
GDScript: In `_init`, property values are initialized (to their default values). In `_ready`, changed properties are initialized to their current (changed) values.
I hit this issue many times even in a trivial benchmark project and was for some time really confused about why setting properties in the editor works only in some cases.
In my opinion both of these should behave exactly the same. I am guessing the GDScript behaviour is the desired one, since it makes more sense.
---
**Steps to reproduce:**
```cpp
...
register_property<GDExample, float>("defaultValue", &GDExample::defaultValue, 5);
register_property<GDExample, float>("changedValue", &GDExample::changedValue, 10);
...
void GDExample::_init() {
Godot::print("[C++] init, defaultValue = {0}", defaultValue);
Godot::print("[C++] init, changedValue = {0}", changedValue);
}
...
void GDExample::_ready() {
Godot::print("[C++] ready, defaultValue = {0}", defaultValue);
Godot::print("[C++] ready, changedValue = {0}", changedValue);
}
```
```python
extends Node
export var defaultValue = 5;
export var changedValue = 10;
func _init():
print("[GDScript] init, defaultValue = ", defaultValue)
print("[GDScript] init, changedValue = ", changedValue)
func _ready():
print("[GDScript] ready, defaultValue = ", defaultValue)
print("[GDScript] ready, changedValue = ", changedValue)
```
Output:
```
[C++] init, defaultValue = 0
[C++] init, changedValue = 0
[GDScript] init, defaultValue = 5
[GDScript] init, changedValue = 10
[C++] ready, defaultValue = 0
[C++] ready, changedValue = 11
[GDScript] ready, defaultValue = 5
[GDScript] ready, changedValue = 11
```
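A possible mitigation sketch on the GDNative side (my assumption, not part of the report): give the members in-class default values so they already hold the intended defaults in `_init`, regardless of when the values registered through `register_property` get applied.
```cpp
#include <Godot.hpp>
#include <Node.hpp>

using namespace godot;

// Sketch only: the in-class initializers mirror the defaults passed to
// register_property, so _init already sees 5 and 10 instead of 0.
class GDExample : public Node {
    GODOT_CLASS(GDExample, Node)

public:
    float defaultValue = 5;
    float changedValue = 10;

    static void _register_methods() {
        register_property<GDExample, float>("defaultValue", &GDExample::defaultValue, 5);
        register_property<GDExample, float>("changedValue", &GDExample::changedValue, 10);
    }

    void _init() {
        Godot::print("[C++] init, defaultValue = {0}", defaultValue); // prints 5 with the initializers above
    }
};
```
This does not change when Godot applies the editor-set values, but it at least makes the defaults consistent between `_init` and `_ready`.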
| documentation,topic:gdextension | low | Major |
413,836,252 | rust | libcore can use unstable library features without a feature flag | I've been assuming that feature flags can also be used "within" the standard library to control which features are already being used. That is helpful when stabilizing a part of the API, to test whether that API is viable enough for whatever the standard library does with it.
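For comparison, outside the standard library an unstable API normally has to be enabled explicitly on nightly before it can be used at all; a minimal sketch (the exact feature name covering `MaybeUninit::get_mut` at the time is my assumption):
```rust
// Illustration only: this is what user code normally has to write on nightly.
// The feature name is an assumption for the MaybeUninit APIs of this era.
#![feature(maybe_uninit)]

use core::mem::MaybeUninit;

fn main() {
    let mut x = MaybeUninit::new(0u32);
    // Without the feature gate above, calling the unstable get_mut would be rejected.
    unsafe { *x.get_mut() = 42; }
}
```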
But it turns out that, e.g., [`core::fmt::float` can call `MaybeUninit::get_mut`](https://github.com/rust-lang/rust/blob/e17c48e2f21eefd59748e364234efc7037a3ec96/src/libcore/fmt/float.rs#L20) without enabling the relevant feature flag. That seems like a bug to me. | A-stability,T-compiler,C-bug | low | Critical |
413,862,564 | godot | Gdscript -> Assign a boolean value to a typed float produces no error. |
**Godot version:**
5e837b3f13ab1e3b31bb8d705e87820fa4eff21e
**OS/device including version:**
win 7
**Issue description:**
This expression:
`export (bool) var UseAceleration : float = false`
Produces no error, neither in the declaration nor in the logic: Godot ignores the value entered in the inspector and casts false to 0.
The expected result is that Godot warns you that the entered value doesn't match the declared type.
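For comparison, a sketch (my own illustration) of a declaration where the export hint, the type annotation, and the default value all agree:
```gdscript
# Hint, annotation, and default are consistent, so the inspector value
# and the script value cannot silently disagree.
export (float) var use_acceleration: float = 0.0
```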
| enhancement,discussion,topic:gdscript,topic:editor,confirmed | low | Critical |
413,863,078 | godot | Add godot version # to windows firewall entries | **Godot version:**
Every version that exists lol
**OS/device including version:**
Windows 10 64bit
**Issue description:**
Basically, any time you run Godot for the first time, Windows asks whether you want to add it to the firewall (most likely for networking stuff?).
You can see a video here of all the entries I have with the same name https://streamable.com/1qk26
Would be cool if it added the version to the entry so it's easier to clean up later. | enhancement,platform:windows,topic:editor,topic:network | low | Major |