id | repo | title | body | labels | priority | severity
---|---|---|---|---|---|---|
494,957,901 | TypeScript | Expose inferred type for use in type annotations | <!-- 🚨 STOP 🚨 STOP 🚨 STOP 🚨
Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker.
Please help us by doing the following steps before logging an issue:
* Search: https://github.com/Microsoft/TypeScript/search?type=Issues
* Read the FAQ, especially the "Common Feature Requests" section: https://github.com/Microsoft/TypeScript/wiki/FAQ
-->
## Search Terms
infer inferred type annotation
## Suggestion
When annotating the type of a variable, allow the variable to have access to the type that would've been inferred for the RHS, so that the author can build a type annotation derived from that inferred value. This value could be exposed as the `inferred` keyword or similar.
## Use Cases/Examples
```ts
// All properties in `deletable` are subject to being deleted, so we want
// the type to be a Partial of the would-have-been-inferred type.
// I don't think there's a great way to write this at the moment.
const deletable: Partial<inferred> = { a: true, b: false, c: "xyz" };
// The `mutable` property might be reassigned to, but all the other properties won't.
// So combine `Readonly` and `inferred`, with an exception for `mutable`.
type Legal = "initial" | "middle" | "final"
const partiallyMutable: Readonly<Omit<inferred, "mutable">> & { mutable: Legal } = {
x: "literally",
y: true,
mutable: "initial",
lotsOfOther: 47,
literalPropsHere: "name"
}
// We want to verify assignability to the mapped type, to make sure that, as new
// required keys are added, this object literal is updated. However, we also want the
// values at each key to be inferred narrowly as a literal type for use in the code that follows.
type RequiredKeys = "A" | "B" | "C"
const mustHaveAllKeys: inferred & { [K in RequiredKeys]: any } = {
"A": true,
"B": false,
"C": "hi!"
}
// typeof mustHaveAllKeys.A should be true!
```
## Related issues
- https://github.com/Microsoft/TypeScript/issues/24375
- https://github.com/microsoft/TypeScript/issues/17574
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback | low | Critical |
495,034,463 | opencv | VIDEOIO ERROR: V4L2: Pixel format of incoming image is unsupported by OpenCV | 
/VIDEOIO ERROR: V4L2: Pixel format of incoming image is unsupported by OpenCV/
I am trying to use darknet with input from Intel Realsense D435 RGBD camera on ubuntu 18
Please advise! | feature,category: videoio(camera) | low | Critical |
495,050,941 | pytorch | NNPACK condition should be changed (ARM processors) | ## 🚀 Feature
Remove the condition for usage of NNPACK for conv2d which requires the batchsize >= 16. Otherwise THNN is used on ARM processors
## Motivation
I tested NNPACK vs. THNN and I think that NNPACK is mostly faster than THNN for batchsize = 1 and also > 1. NNPACK is still excluded in special cases even when the condition batchsize >= 16 is removed; for example, THNN is used when stride > 1.
The condition now makes little sense because it abolishes NNPACK for inference (where batchsize = 1), which is where the speedup from NNPACK is most significant. In particular, the speed-up matters most because in the batchsize = 1 case NNPACK uses a specific inference forward function, which gives an additional huge advantage in speed, while for batchsize > 1 NNPACK uses a training forward function that is advantageous for backprop (I think, at least).
To summarize: in the best case scenario for inference (usually batchsize = 1), the condition on NNPACK is removed and, if the other conditions allow NNPACK usage, NNPACK is used, goes into the sped-up inference function (only for batchsize = 1), and the forward pass is super fast.
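As a rough illustration of the kind of comparison behind this claim, here is a minimal timing sketch in Python; the tensor shapes and iteration counts are my own assumptions and not taken from the original benchmarks:

```python
import time

import torch
import torch.nn.functional as F

x = torch.randn(1, 64, 56, 56)   # batch size 1, i.e. the typical inference case
w = torch.randn(64, 64, 3, 3)

# Warm-up so one-time allocation costs are not included in the measurement.
for _ in range(10):
    F.conv2d(x, w, padding=1)

start = time.perf_counter()
for _ in range(100):
    F.conv2d(x, w, padding=1)
elapsed = time.perf_counter() - start
print(f"conv2d, batch=1: {elapsed:.4f}s for 100 iterations")
```

Running this on a build with and without the batchsize condition would make the claimed gap measurable.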
## Pitch
Remove condition in
https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/Convolution.cpp#L173
input.size(0) >= 16
## Alternatives
Allow for use of Eigen Tensor Library and/or ARM Compute Library
| module: convolution,triaged,enhancement,module: nnpack | low | Minor |
495,089,800 | pytorch | RuntimeError: tensor.ndimension() == static_cast<int64_t>(expected_size.size()) INTERNAL ASSERT FAILED | ## 🐛 Bug
Hey
I am trying to train my model. I am using Multi-GPU training.
While training I get the following runtime error:
```
RuntimeError: tensor.ndimension() == static_cast<int64_t>(expected_size.size()) INTERNAL ASSERT FAILED at /opt/conda/conda-bld/pytorch_1565272279342/work/torch/csrc/cuda/comm.cpp:220, please report a bug to PyTorch. (gather at /opt/conda/conda-bld/pytorch_1565272279342/work/torch/csrc/cuda/comm.cpp:220)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x47 (0x7f75fc2dfe37 in /home/nlp/avivwn/miniconda3/envs/nomlex/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #1: torch::cuda::gather(c10::ArrayRef<at::Tensor>, long, c10::optional<int>) + 0x6b6 (0x7f75708af616 in /home/nlp/avivwn/miniconda3/envs/nomlex/lib/python3.6/site-packages/torch/lib/libtorch.so)
frame #2: <unknown function> + 0x5fa742 (0x7f75fd9c7742 in /home/nlp/avivwn/miniconda3/envs/nomlex/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #3: <unknown function> + 0x1c8316 (0x7f75fd595316 in /home/nlp/avivwn/miniconda3/envs/nomlex/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
<omitting python frames>
frame #14: THPFunction_apply(_object*, _object*) + 0x98f (0x7f75fd7ba4bf in /home/nlp/avivwn/miniconda3/envs/nomlex/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #63: __libc_start_main + 0xf5 (0x7f760725c495 in /lib64/libc.so.6)
```
The error occurs at a different point in each training run, sometimes at the start and sometimes later, but always when calling the forward method of the model:
`outputs = model(padded_batch_indexed_tokens, padded_batch_tags, sents_lengths)`
## Environment
Collecting environment information...
PyTorch version: 1.2.0
Is debug build: No
CUDA used to build PyTorch: 10.0.130
OS: CentOS Linux 7 (Core)
GCC version: (GCC) 4.8.5 20150623 (Red Hat 4.8.5-36)
CMake version: version 2.8.12.2
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: GeForce GTX 1080 Ti
GPU 1: GeForce GTX 1080 Ti
GPU 2: GeForce GTX 1080 Ti
GPU 3: GeForce GTX 1080 Ti
Nvidia driver version: 418.87.00
cuDNN version: Could not collect
Versions of relevant libraries:
[pip] msgpack-numpy==0.4.3.2
[pip] numpy==1.16.4
[pip] numpydoc==0.8.0
[pip] pytorch-transformers==1.1.0
[pip] torch==1.2.0
[conda] cuda80 1.0 h205658b_0 pytorch
[conda] mkl 2019.3 199
[conda] pytorch 1.2.0 py3.6_cuda10.0.130_cudnn7.6.2_0 pytorch
[conda] pytorch-transformers 1.1.0 pypi_0 pypi
[conda] torch 1.2.0 pypi_0 pypi
## Additional context
Currently I am using try/except around the forward call, but I want to know what this bug means and why it reproduces only at some training points.
Thanks in advance
cc @ezyang @gchanan @zou3519 | needs reproduction,module: multi-gpu,module: cuda,triaged | low | Critical |
495,121,715 | TypeScript | Missing references for Lodash named imports | <!-- 🚨 STOP 🚨 STOP 🚨 STOP 🚨
Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Even if you think you've found a *bug*, please read the FAQ first, especially the Common "Bugs" That Aren't Bugs section!
Please help us by doing the following steps before logging an issue:
* Search: https://github.com/Microsoft/TypeScript/search?type=Issues
* Read the FAQ: https://github.com/Microsoft/TypeScript/wiki/FAQ
Please fill in the *entire* template below.
-->
<!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. -->
**TypeScript Version:** 3.6.3
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:**
**Code**
```ts
// a.ts
import { negate } from 'lodash';
negate(() => true);
```
```ts
// b.ts
import { negate } from 'lodash';
negate(() => true);
```
Activate "find references" on `negate` in either file.
**Expected behavior:**
References in both `a.ts` and `b.ts` + definition are found.
**Actual behavior:**
References only in the current file + definition are found

**Playground Link:** <!-- A link to a TypeScript Playground "Share" link which demonstrates this behavior -->
**Related Issues:** https://github.com/microsoft/TypeScript/issues/33017
| Bug,Domain: Refactorings | low | Critical |
495,182,524 | storybook | Addon-docs: Resize iframe to fit content | **Is your feature request related to a problem? Please describe.**
Sometimes the component being displayed in Docs is only a few px tall, yet the iframe defaults to 500px. When you have a lot of these stories stacked, there is a lot of empty space.
**Describe the solution you'd like**
After iframe loads, have it resize to fit the content
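A minimal JavaScript sketch of that idea, assuming the preview iframe is same-origin so its document height can be read from the parent page (the `docs-story-preview` selector is invented, not Storybook's actual class name):

```js
// Resize a same-origin iframe to the height of its content once it has loaded.
function fitIframeToContent(iframe) {
  iframe.addEventListener('load', () => {
    const doc = iframe.contentWindow.document;
    iframe.style.height = `${doc.documentElement.scrollHeight}px`;
  });
}

document.querySelectorAll('iframe.docs-story-preview').forEach(fitIframeToContent);
```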
**Describe alternatives you've considered**
Can set styles to manually override the iframe height per-story, but this is difficult/tedious to maintain
| feature request,help wanted,addon: docs,block: other | medium | Critical |
495,186,552 | svelte | A lot of empty text nodes - I mean A LOT - performance issue | **Describe the bug**
Just check this repl https://svelte.dev/repl/902d4ccfcc5745d0b927b6db6be4f441?version=3.12.1
There are 22 000 empty text nodes that are really slowing down the entire component on update.

**Expected behavior**
I know that those nodes are there for a reason, but maybe some virtual approach would be better?
Each node must be created, updated and destroyed, which takes a lot of time and memory when something changes.
**Severity**
This behavior can lead users to leave svelte because they might not figure out what is wrong - code seems to be ok, but performance is really bad (22 000 extra nodes for 2 500 real ones :O ).
For me this is annoying because I must be careful and watch for those "extra" nodes that are appearing randomly in some circumstances. | feature request,awaiting submitter,perf,stale-bot | medium | Critical |
495,193,494 | kubernetes | Migrate kube-controller-manager to use v1beta1 Events | Now that new Events API is ready, we can migrate the kube-controller-manager to emit events against the new API.
This is part of #76674 | kind/feature,sig/apps,sig/instrumentation,do-not-merge/hold,lifecycle/frozen | low | Major |
495,267,297 | create-react-app | webpack library configuration | ### Is your proposal related to a problem?
We want to use a monorepo project with create-react-app, but we can't create a library with the help of create-react-app.
### Describe the solution you'd like
In the `.env` file we could add `LIBRARY_NAME` and `LIBRARY_TARGET` configuration parameters, and these would then be used in `webpack.config.js`
Example:
```
// file: create-react-app/packages/react-scripts/config/webpack.config.js
//...
output: {
  library: env.LIBRARY_NAME,
  libraryTarget: env.LIBRARY_TARGET,
  //...
```
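For completeness, the matching `.env` entries could look like this; the values are only examples of what a user might put there:

```
# .env
LIBRARY_NAME=MyComponentLibrary
LIBRARY_TARGET=umd
```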
| issue: proposal,needs triage | low | Minor |
495,298,796 | rust | Suggest adding a generic lifetime parameter | ```rust
fn foo(
    mut x: &(),
    y: &(),
) {
    x = y;
}
```
gives the error:
```
error[E0623]: lifetime mismatch
--> src/lib.rs:5:9
|
2 | mut x: &(),
| ---
3 | y: &(),
| --- these two types are declared with different lifetimes...
4 | ) {
5 | x = y;
| ^ ...but data from `y` flows into `x` here
```
but does not suggest the solution: adding a new lifetime parameter `'a` to `foo` and giving both variables the same lifetime. | C-enhancement,A-diagnostics,T-compiler,A-suggestion-diagnostics,D-papercut | low | Critical |
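For reference, a sketch of the fix that such a suggestion would point to for the `foo` example above — the same code with an explicit, shared lifetime added (it compiles, modulo an unused-assignment warning):

```rust
fn foo<'a>(
    mut x: &'a (),
    y: &'a (),
) {
    x = y;
}
```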
495,317,764 | go | cmd/link: linker should fail quickly when external link mode is not supported | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
HEAD
</pre>
### Does this issue reproduce with the latest release?
yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
darwin, 386
</pre></details>
### What did you do?
While investigating Issue 33808 I discovered that external linking doesn't work in 32bit darwin.
### What did you expect to see?
Linker should fail quickly when trying to do an external link.
### What did you see instead?
Linker calls external linker, external linker fails.
CC @cherrymui | NeedsFix,compiler/runtime | low | Minor |
495,336,224 | pytorch | Generated file not getting cleaned up by clean | When making changes that add new torch functions, and which should result in a change to the generated file `aten/src/ATen/core/OpsAlreadyMovedToC10.cpp`, we must build with GEN_TO_SOURCE=1 to regenerate the file with updated interfaces. `python setup.py clean` followed by `python setup.py install` does not cause the file to be regenerated.
Expected behavior is that clean should be enough to correctly fully re-build.
| module: build,triaged | low | Minor |
495,351,913 | go | go/doc: lines ending with parenthesis cannot be headings | What did you do?
Add a package doc like:
// ...
// Create a snapshot from a source table (snapshots alpha)
//
// cbt createsnapshot \<cluster\> \<snapshot\> \<table\> [ttl=\<d\>]
// ...
What did you expect to see?
The string "Create a snapshot..." should be a title since it is delimited by blank lines, starts with and uppercase letter and does not end with an invalid punctuation.
What did you see instead?
It shows up as a regular text paragraph.
Can be seen here: https://godoc.org/google.golang.org/cloud/bigtable/cmd/cbt#hdr-Block_until_all_the_completed_writes_have_been_replicated_to_all_the_clusters
cc: @dmitshur | Thinking,NeedsDecision | low | Minor |
495,409,445 | go | cmd/compile: output a DW_LNE_end_sequence instruction at the end of every function's line table | As we move DWARF generation out of the linker and into the compiler, we've taken some short cuts when emitting the line table. Specifically, rather than outputting a DW_LNE_end_sequence at the end of every function's debug_lines table, we reset the state machine. See discussion [HERE](https://go-review.googlesource.com/c/go/+/191968/6/src/cmd/link/internal/ld/dwarf.go#1112).
We are blocked on Delve supporting multiple DW_LNE_end_sequences per compilation unit. See Delve's issue [HERE](https://github.com/go-delve/delve/issues/1694).
In addition to making the state machine mechanics simpler, we can remove the assert in the linker that PCs are monotonically increasing per functions in a compilation unit. Again, see the [discussion](https://go-review.googlesource.com/c/go/+/191968/6/src/cmd/link/internal/ld/dwarf.go#1112). | NeedsInvestigation,compiler/runtime | low | Critical |
495,410,657 | opencv | MSER keypoints should be ellipses (which should be taken by descriptors like SIFT) | The invariance against perspective transformations is only valid when MSER keypoints are extracted and described as ellipses. Internally they are also extracted as ellipses.
I see there is such an ellipse-keypoint class in xfeatures2d; that or another solution should be used by MSER features and SIFT descriptors, and should be scaled carefully (different subsampling along the ellipse axes) when used as descriptors. Perhaps it would be easier to make keypoints ellipses by default. | category: features2d,RFC | low | Minor |
495,416,705 | rust | Suggest recommending a substitution based on naming conventions when available | New rustacean here. In writing simple programs to get a feel for the language, I tried testing if a string slice is empty as the conditional for an if/else statement. Something like the following:
    if mystr.empty() {
        // ...
    } else {
        // ...
    }
Of course the correct method is `.is_empty()`. I would suppose that this is a common mistake for people coming from languages like C++. It would be friendly of the compiler to recognize that in this boolean context there *is* an `.is_empty()` method that has the same name but with `is_` prepended, takes the provided (no) parameters, and returns the required boolean, and suggest that alternative to the user in the error output. | C-enhancement,A-diagnostics,T-compiler,A-suggestion-diagnostics | low | Critical |
495,425,741 | rust | Match ergonomics means bindings have different types in patterns and match arm; cannot deref references in pattern | Consider the following:
```rust
struct S(u32);
fn foo(x: &S) {
let _: &u32 = match x {
S(y) => y,
_ => &0,
};
}
```
Here, the variable binding `y` has type `&u32` outside the pattern. However, inside the pattern it has type `u32`. The fact that these types do not align is confusing, as it means you can't dereference `y` inside the pattern like this:
```rust
fn foo(x: &S) {
let _: u32 = match x {
S(&y) => y,
_ => 0,
};
}
```
and instead have to dereference it outside the pattern:
```rust
fn foo(x: &S) {
let _: u32 = match x {
S(y) => *y,
_ => 0,
};
}
```
Is there any reason this was chosen to be the case, or is required to be the case? As it stands, this behaviour is very counter-intuitive. | C-enhancement,T-lang,A-patterns | high | Critical |
495,451,262 | angular | [Ivy] ASSERTION ERROR: Should be run in update mode | <!--
Oh hi there! 😄
To expedite issue processing please search open and closed issues before submitting a new one.
Existing issues often contain information about workarounds, resolution, or progress updates.
-->
# 🐞 bug report
### Affected Package
<!-- Can you pin-point one or more @angular/* packages as the source of the bug? -->
<!-- ✍️edit: --> The issue is caused by package @angular/core (Ivy)
### Is this a regression?
<!-- Did this behavior use to work in the previous version? -->
<!-- ✍️-->
Yes, works in non-Ivy.
### Description
I suspect this happens as a result of having an Input property that is set directly (`isExpanded="true"`) instead of using a binding (`[isExpanded]="true"`), and having the property setter call `changeDetectorRef.detectChanges()`. It should be noted that this works in non-ivy.
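A hedged reduction of the pattern being described — this is not the repro repo's actual code; the selector, template, and field names are invented for illustration:

```ts
import { ChangeDetectorRef, Component, Input } from '@angular/core';

@Component({
  selector: 'app-expandable-if',
  template: `<div *ngIf="isExpanded"><ng-content></ng-content></div>`,
})
export class ExpandableIfComponent {
  private _isExpanded = false;

  constructor(private cdr: ChangeDetectorRef) {}

  @Input()
  set isExpanded(value: boolean) {
    this._isExpanded = value;
    // With the static form isExpanded="true", Ivy invokes this setter while
    // the view is still being created, and the detectChanges() call below
    // appears to trip the "Should be run in update mode" assertion.
    this.cdr.detectChanges();
  }
  get isExpanded(): boolean {
    return this._isExpanded;
  }
}
```

Used as `<app-expandable-if isExpanded="true">…</app-expandable-if>` — i.e. the static attribute form rather than the `[isExpanded]` binding — this is the shape that seems to hit the assertion.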
## 🔬 Minimal Reproduction
https://github.com/fr0/angular-test/tree/ivy
1. Clone repo
2. `git checkout ivy`
3. `npm install`
4. `npm start`
5. Open http://localhost:4200 in Chrome
6. View browser console log
## 🔥 Exception or Error
<pre><code>
core.js:5829 ERROR Error: ASSERTION ERROR: Should be run in update mode
at throwError (core.js:1164)
at assertEqual (core.js:1123)
at refreshView (core.js:12425)
at detectChangesInternal (core.js:13907)
at ViewRef.detectChanges (core.js:15373)
at ExpandableIfComponent.set isExpanded [as isExpanded] (expandable-if.component.ts:60)
at setInputsFromAttrs (core.js:13579)
at postProcessDirective (core.js:13335)
at instantiateAllDirectives (core.js:13234)
at createDirectivesInstances (core.js:12618)
</code></pre>
## 🌍 Your Environment
**Angular Version:**
<pre><code>
<!-- run `ng version` and paste output below -->
<!-- ✍️-->
Angular CLI: 9.0.0-next.5
Node: 12.6.0
OS: darwin x64
Angular: 9.0.0-next.7
... animations, common, compiler, compiler-cli, core, forms
... language-service, platform-browser, platform-browser-dynamic
... router
Package Version
-----------------------------------------------------------
@angular-devkit/architect 0.900.0-next.5
@angular-devkit/build-angular 0.900.0-next.5
@angular-devkit/build-optimizer 0.900.0-next.5
@angular-devkit/build-webpack 0.900.0-next.5
@angular-devkit/core 9.0.0-next.5
@angular-devkit/schematics 9.0.0-next.5
@angular/cli 9.0.0-next.5
@ngtools/webpack 9.0.0-next.5
@schematics/angular 9.0.0-next.5
@schematics/update 0.900.0-next.5
ng-packagr 5.5.1
rxjs 6.5.3
typescript 3.5.3
webpack 4.40.2
</code></pre>
| freq2: medium,area: core,core: change detection,P3 | medium | Critical |
495,451,885 | pytorch | Remove TensorOptions logic from generated code | Scripts that are generating code are getting more and more complex and require a lot of domain knowledge in order to make any substantial changes. This is mostly due to logic around TensorOptions. We should eliminate it in order to significantly decrease complexity of the code.
Objectives:
- Make it easy to add and remove parameters from Tensor Options. Ideally we want to have one place where we would add a new parameter with a default value, if any, and the rest of the code generation code should be able to take it from there.
- Clean up the dead code.
- Add documentation which explains the current flow of the code generation process. Explain what each generated file is responsible for and why we need Tensor Options in particular places.
- Remove all hardcoded hacks. For example, handling triu/tril.
- Fix default value processing.
- Fix discrepancy between some declarations and definitions.
| module: internals,triaged | medium | Major |
495,474,600 | kubernetes | migrate the leader election to use v1beta1 Events | the leader election code should be emitting events against the v1beta1 Events API. This change also concerns any kubernetes component (such as kube-scheduler or the controller-manager) that uses the `leaderelection` package.
Part of #76674
/kind feature
/sig scalability
/sig instrumentation
/priority important-soon | priority/important-soon,sig/scalability,kind/feature,sig/instrumentation,do-not-merge/hold,lifecycle/frozen | low | Major |
495,482,960 | go | cmd/compile: use hashing for switch statements | Currently we implement switch statements with binary search. This works well for values that fit into a register, but for strings it might be better to compute a hash function.
Some thoughts:
1. Even in the simple case of using a full hash function, exactly two passes over the switched value is probably better than O(log N) possible passes over it.
2. Since we have the expected keys statically, we can use a simple, cheap universal hash (e.g., FNV? djb2?) and keep trying different seeds until we get one without any collisions.
3. Instead of hashing all of the input bytes, we can hash just enough of them to avoid collisions (a rough sketch of points 2 and 3 follows this list).
4. We can try using a *minimal* (or near minimal) perfect hash to emit a jump table instead of binary search. (This might even be useful for sparse integer values too.)
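As a rough illustration of points 2 and 3, here is a sketch in ordinary Go (not compiler code) of searching for a seed that makes a cheap FNV-style hash collision-free over the static case strings; key set and bucket count are arbitrary examples:

```go
package main

import "fmt"

// fnv1a hashes s starting from the given seed; a real implementation might
// hash only a prefix of s (point 3) once that is enough to distinguish keys.
func fnv1a(seed uint32, s string) uint32 {
	h := seed
	for i := 0; i < len(s); i++ {
		h ^= uint32(s[i])
		h *= 16777619
	}
	return h
}

// findSeed tries seeds until every key lands in a distinct bucket.
func findSeed(keys []string, buckets uint32) (uint32, bool) {
	for seed := uint32(0); seed < 1<<16; seed++ {
		seen := make(map[uint32]bool, len(keys))
		ok := true
		for _, k := range keys {
			b := fnv1a(seed, k) % buckets
			if seen[b] {
				ok = false
				break
			}
			seen[b] = true
		}
		if ok {
			return seed, true
		}
	}
	return 0, false
}

func main() {
	keys := []string{"alpha", "beta", "gamma", "delta"}
	if seed, ok := findSeed(keys, 8); ok {
		fmt.Println("collision-free seed:", seed)
	}
}
```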
If someone can help figure out code that can take a `[]string` and figures out an appropriate and cheap hash function to use, I can help wire it into swt.go. | help wanted,NeedsInvestigation,compiler/runtime | medium | Critical |
495,497,111 | go | cmd/go: 'go mod edit -replace' fails if the package's version in go.mod is not in semver format | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.12 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code>
</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/rafael/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/rafael/Work/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/lib/go-1.12"
GOTMPDIR=""
GOTOOLDIR="/usr/lib/go-1.12/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD="/home/rafael/Work/go/src/github.com/rafaelvanoni/kube-controllers-private/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build260011560=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
A simplified example suggested by @heschik is to run the following command twice:
<pre>go mod edit -replace golang.org/x/tools@latest=github.com/golang/tools@latest </pre>
<!--
If possible, provide a recipe for reproducing the error.
A complete runnable program is good.
A link on play.golang.org is best.
-->
### What did you expect to see?
### What did you see instead?
A "version must be of the form v1.2.3" error.
| NeedsInvestigation,GoCommand,modules | low | Critical |
495,511,464 | godot | [Bullet] Setting 3D gravity vector during runtime disables it instead | **Godot version:**
3.1.1
**OS/device including version:**
Ubuntu 18.04
**Issue description:**
Calling `PhysicsServer.area_set_param` to update a world's gravity vector seems to disregard the provided value, instead setting it to 0. After some digging, I found this behaviour consistent with commit https://github.com/godotengine/godot/commit/d250ade37b1e74dd11c870be279318ba01a8ba70 (link kindly provided by a user in IRC), and I can confirm that the gravity vector behaves as expected after reverting the commit. However I am not sure if doing so reintroduced whatever bug it was fixing, thus this is not a pull request.
**Steps to reproduce:**
1. Add Camera and RigidBody (any shape works, needs some visible geometry as child node).
2. Extend RigidBody node with a script, and set world gravity vector within `_process` (to anything other than 0).
3. Play the project.
**Minimal reproduction project:**
[NoGravity.zip](https://github.com/godotengine/godot/files/3628811/NoGravity.zip) | bug,topic:physics,topic:3d | low | Critical |
495,513,307 | pytorch | Avoid sending zero grads over the wire in distributed autograd backward pass | For the distributed backward pass, we pass zero grads over the wire for certain tensors which were not used in the forward pass (when we execute RecvRpcBackward). This is done for simplicity currently, but a potential performance enhancement is to send only gradients that we need over the wire. For more context see: https://github.com/pytorch/pytorch/pull/25527#discussion_r321509898
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini | triaged,module: rpc | low | Major |
495,514,629 | TypeScript | Serialization/Deserialization API for AST | ## Search Terms
serialize deserialize ast node
Ref #26871
Ref #28365
## Suggestion
Implement/expose an API that can serialize/deserialize AST nodes, in particular a way to serialize a source file so it can be easily transferred or stored, potentially manipulated, and hydrated back.
Somewhat related is #28365 which would allow ingestion of AST directly for certain use cases. It was suggested before in #26871 for performance reasons, but not for externalising the AST in a form that can be persisted easily.
## Use Cases
Specifically in Deno, we are interested in doing some AST generation or manipulation external to the JavaScript runtime. Doing some computationally intense functions not in JavaScript can have significant improvements in speed. For example, we originally did our source map remappings for errors in JavaScript using the `source-map` package from Mozilla. In version 0.7+ though, Mozilla had written the mappings part of it in Rust transpiled to WASM. We were able to use that directly in the privileged side of Deno, in its native Rust and saw something like a 100x increase in performance.
At this point we don't specifically know how we would use it, though there can potentially be a need to procedurally generate some aspects like our runtime type library or do some AST transformations for bundling. We have experimented with doing this manipulation in a JavaScript runtime and wonder if we can get performance improvements by doing some stuff in Rust.
Right now, a full parse, transform and emit is the only way to "persist" code. If you want to make further changes, it is a reparse. It seems logical that this transitory state could be utilised for a lot of different advanced use cases. It also would in a way decouple the parser and the transformer externally.
There are a good amount of advanced use cases that people build on top of AST generators like acorn and estree, so it is logical that other advanced use cases could be built if serialisation/deserialisation is available.
Two big challenges are likely that there are a lot of circular references, which makes JSON a difficult serialisation format, and that there is a lot of hidden state in a SourceFile that isn't "owned" by the surface object. That would have to be exposed as part of a serialisation, since it isn't present as properties in the existing nodes.
I started to look at `tsbuildinfo` to see if some sort of compiler or AST state is persisted somewhere that could be used to rehydrate the state of the compiler, but really haven't looked to see if internally TypeScript can serialise a program state.
## Examples
```ts
const sourceFile = ts.createSourceFile("mymodule.ts", "console.log(\"hello\");", ts.ScriptTarget.ESNext, true);
// Returns a POJO of ts.SerializedNode?
const serializedSourceFile = ts.serialize(sourceFile);
// Because it is a POJO, it can be easily serialized
console.log(JSON.stringify(serializedSourceFile));
// Accepts a ts.SerializedNode?
const sourceFile2 = ts.deserialize(serializedSourceFile);
```
## Checklist
My suggestion meets these guidelines:
* [X] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [X] This wouldn't change the runtime behavior of existing JavaScript code
* [X] This could be implemented without emitting different JS based on the types of the expressions
* [X] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [X] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Needs Proposal | low | Critical |
495,525,703 | pytorch | parallel_for may hang when called in main process and then on daemon process | ## 🐛 Bug
```torch.Tensor()``` will get stuck or deadlock in the child process if the main process called ```torch.Tensor()``` before the child process's start().
Please see the code below to reproduce the bug.
If the size of dummy_im is reduced to (3, 3), it's OK. Therefore, it might be a shared memory problem.
The bug appears in the following environments:
PyTorch 1.1.0+ Python version:3.7.4 created by conda
PyTorch 1.2.0+ Python version:3.7.4 created by conda
PyTorch 1.2.0+ Python version:3.6.8 created by conda
PyTorch 1.2.0+ Python version:3.5.2 in Docker
But it's ok in the following environments:
PyTorch 1.1.0+ Python version:3.7.3 created by conda
PyTorch 1.2.0+ Python version:3.7.3 created by conda
<!-- A clear and concise description of what the bug is. -->
## To Reproduce
```python
import numpy as np
import torch
import sys
import torch.multiprocessing as mp
if __name__ == '__main__':
print("Python version=", sys.version) # Python3.4.4
print("torch version=", torch.__version__) # torch 1.2.0
image_queue = mp.Queue(50)
#########################################
#comment two lines below and no deadlock, otherwise deadlock in line 23
dummy_im = np.random.randn(300, 400)
dummy_im = torch.Tensor(dummy_im)
#########################################
def _read_worker():
im = image_queue.get(block=True)
print(type(im))
print(im.shape)
# stuck here: deadlock? line 23
im = torch.Tensor(im)
print(im)
w = mp.Process(target=_read_worker)
w.daemon = True # ensure that the worker exits on process exit
w.start()
im = np.random.randn(300, 400)
image_queue.put(im, block=True)
w.join()
print("done")
```
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
- stuck in the line ```im = torch.Tensor(im)``` with no exception error message.
- Reducing dummy_im to size (3, 3) makes it work.
- if move the ```dummy_im = torch.Tensor(dummy_im)``` after ```w.start()```, it's ok.
- My Python version is 3.7.4 but if I downgrade to 3.7.3, everything is ok.
## Expected behavior
The normal expected result in torch 1.1.0:
```
Python version= 3.7.3 (default, Mar 27 2019, 22:11:17)
[GCC 7.3.0]
torch version= 1.1.0
<class 'numpy.ndarray'>
(300, 400)
tensor([[-0.8363, 1.3835, -0.7603, ..., -0.0222, -0.4916, -1.2512],
[-0.2285, -0.6330, 0.4084, ..., 0.9458, 1.0684, 0.3801],
[-0.5260, 1.4498, -0.8997, ..., 0.2333, -0.8163, 1.1754],
...,
[-0.2710, 0.7194, -0.2842, ..., 0.5777, 1.1365, -0.1534],
[-0.6067, 0.1561, -1.2539, ..., 1.6007, 0.4443, -2.6501],
[ 0.1211, -1.1745, -0.0238, ..., 1.0563, 1.2978, -2.9615]])
done
```
The stuck output in torch 1.2.0:
```
Python version= 3.7.4 (default, Aug 13 2019, 20:35:49)
[GCC 7.3.0]
torch version= 1.2.0
<class 'numpy.ndarray'>
(300, 400)
```
<!-- A clear and concise description of what you expected to happen. -->
## Environment
- PyTorch Version (e.g., 1.0): 1.2.0
- OS (e.g., Linux): Ubuntu 16.04 LTS
- How you installed PyTorch (`conda`, `pip`, source): pip
- Build command you used (if compiling from source):
- Python version:3.7.4
- CUDA/cuDNN version:10.0
- GPU models and configuration:Geforce GTX 1080Ti
- Any other relevant information:
| module: multiprocessing,triaged,module: deadlock | low | Critical |
495,542,991 | material-ui | Verify correctness of theme during `createMuiTheme` | <!-- Provide a general summary of the feature in the Title above -->
<!--
Thank you very much for contributing to Material-UI by creating an issue! ❤️
To avoid duplicate issues we ask you to check off the following list.
-->
<!-- Checked checkbox should look like this: [x] -->
- [x] I have searched the [issues](https://github.com/mui-org/material-ui/issues) of this repository and believe that this is not a duplicate.
## Summary 💡
This feature would ensure the correctness of a theme is verified during `createMuiTheme`.
Alternatively, a function to verify correctness of the theme could be helpful too.
## Examples 🌈
```javascript
// This should throw an error during initialization
const theme = createMuiTheme({
palette: {
primary: 'random string'
}
})
```
<!--
Provide a link to the Material design specification, other implementations,
or screenshots of the expected behavior.
-->
## Motivation 🔦
At the moment, TypeScript can enforce the correctness of the theme structure, but it can't verify the content. The content of the theme seems to get verified at runtime, when a component consumes a part of it.
This causes an issue where the front-end page crashes when you navigate to a MUI component that consumes an invalid part of the theme via the `decomposeColor` function. For the above example, you would see `Material-UI: unsupported XXXX color` and the page crashes.
At this moment, to ensure the theme is valid, we will have to either:
1. unit-test all components's behavior under custom theme (`renderWithTheme` etc..)
2. make a dedicated test scaffolding for theme to verify the content
3. manually check all components on UI
It'd be great if `createMuiTheme` can create a safety net for us.
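One possible shape for that safety net, as a user-land sketch; the `decomposeColor` import path and the set of palette keys checked here are assumptions rather than a documented API:

```js
import { createMuiTheme } from '@material-ui/core/styles';
import { decomposeColor } from '@material-ui/core/styles/colorManipulator';

// Wraps createMuiTheme and eagerly runs the main palette colors through
// decomposeColor, so an invalid value throws at theme creation time instead
// of when a component first consumes it.
export function createVerifiedTheme(options) {
  const theme = createMuiTheme(options);
  ['primary', 'secondary', 'error'].forEach((intent) => {
    ['light', 'main', 'dark'].forEach((shade) => {
      decomposeColor(theme.palette[intent][shade]);
    });
  });
  return theme;
}
```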
<!--
What are you trying to accomplish? How has the lack of this feature affected you?
Providing context helps us come up with a solution that is most useful in the real world.
-->
## Alternative
Handle the error thrown by `decomposeColor` gracefully | new feature,core | low | Critical |
495,558,393 | youtube-dl | Could you add an extractor for Amazon Live: https://www.amazon.com/live | Amazon Live is a new live-streaming site: https://www.amazon.com/live
| request | low | Minor |
495,612,920 | pytorch | [libtorch] Same model on CUDA and CPU gives different results? | ## 🐛 Bug
I have already asked questions in the forum, the link is as follows,
https://discuss.pytorch.org/t/why-same-model-in-cuda-and-cpu-got-different-result/56241
It happened in the same program, using the same model, switching between the GPU and the CPU, and the results were different.
My libtorch version is libtorch1.2.0_GPU.
## To Reproduce
Steps to reproduce the behavior:
1.load model:
```c++
torch::NoGradGuard no_grad_guard;
if (isCUDA) {
module = torch::jit::load(model_path, torch::kCUDA);
}
else {
module = torch::jit::load(model_path);
}
```
2.forward:
```c++
if (isCUDA) {
output = module.forward({ tensor_image.to(torch::kCUDA)});
}
else {
output = module.forward({ tensor_image });
}
```
3.treat output:
```c++
}else if (output.isTuple()) {
cout << "output isTuple!" << '\n';
auto ouputTuple=output.toTuple();
cout << "tuple_size:" << ouputTuple.use_count()<< '\n';
torch::Tensor out_tensor = ouputTuple->elements()[0].toTensor();
torch::Tensor out_tensor1 = ouputTuple->elements()[1].toTensor();
cout << "out_tensor:" << out_tensor.sizes() << '\n';//out_tensor:[1, 288, 384, 2] 576,768
cout << "out_tensor1:" << out_tensor1.sizes() << '\n';//out_tensor1:[1, 32, 288, 384]
torch::Tensor score;
if (isCUDA) {
torch::Tensor new_out_tensor = out_tensor.cpu();// .to(torch::kCPU).detach();
score = new_out_tensor.squeeze(0);// [288, 384, 2]
}
else {
score = out_tensor.squeeze(0);// [288, 384, 2]
}
```
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
## Expected behavior
GPU and CPU should have the same output.
## Environment
Collecting environment information...
PyTorch version: 1.2.0
Is debug build: No
CUDA used to build PyTorch: 10.0
OS: Microsoft Windows 10
GCC version: Could not collect
CMake version: version 3.14.4
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 10.0.130
GPU models and configuration:
GPU 0: GeForce RTX 2080 Ti
GPU 1: GeForce RTX 2080 Ti
Nvidia driver version: 426.00
cuDNN version: Could not collect
Versions of relevant libraries:
[pip3] numpy==1.16.4+mkl
[pip3] numpydoc==0.8.0
[pip3] torch==1.2.0
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.4.0
[pip3] torchviz==0.0.1
[conda] blas 1.0 mkl
[conda] mkl 2019.1 144
[conda] mkl-service 1.1.2 py36hb782905_5
[conda] mkl_fft 1.0.6 py36h6288b17_0
[conda] mkl_random 1.0.2 py36h343c172_0
[conda] numpy 1.16.4+mkl pypi_0 pypi
[conda] torch 1.1.0 pypi_0 pypi
[conda] torchsummary 1.5.1 pypi_0 pypi
[conda] torchvision 0.4.0 pypi_0 pypi
[conda] torchviz 0.0.1 pypi_0 pypi
cc @suo @yf225 | module: cpp,module: nn,triaged | medium | Critical |
495,663,437 | flutter | TextEditingController documentation should mention that it shouldn't be created in build | If you use Pinyin to input Chinese characters, the intermediate letters are also treated as "changed" text by the "onChanged" callback if you set the text on the controller in the build function.
This also affects Japanese Input method too.
If you want to reproduce this issue, please clone this repo:
https://github.com/rockingdice/test_multistep_input.git
Tap the Text Field at the bottom of the screen, use any multi-step input method will reproduce this issue.
I could put the controller outside the build as a workaround, but I don't think this behavior is correct.
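For reference, a minimal Dart sketch of that workaround — keeping the controller in State rather than creating it in build (widget names are made up for illustration):

```dart
import 'package:flutter/material.dart';

class PinyinField extends StatefulWidget {
  @override
  _PinyinFieldState createState() => _PinyinFieldState();
}

class _PinyinFieldState extends State<PinyinField> {
  // Created once for the lifetime of the State, never inside build().
  final TextEditingController _controller = TextEditingController();

  @override
  void dispose() {
    _controller.dispose();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return TextField(
      controller: _controller,
      onChanged: (text) => print('onChanged: $text'),
    );
  }
}
```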
@cbracken, maybe you know about this issue?
This image demos the issue:
I inputted "nihongo" but it gives out all the intermediate letters after each inputted letter.

| a: text input,framework,f: material design,a: internationalization,d: api docs,d: examples,P3,team-text-input,triaged-text-input | low | Major |
495,668,693 | terminal | Feature Request: global hotkey to re-run last command | # Description of the new feature/enhancement
I would really like to see the ability to bind a global hotkey to repeat the last command in the visible terminal. In essence all it does is: `up arrow` and `enter` on the terminal that is visible. Should work even when terminal has no focus since I'm pressing this global hotkey when I'm not in the terminal.
Global hotkeys should be bindable to any key, including special keys such as Scroll Lock, Pause/Break and the Context Menu (the button next to the right CTRL).
iTerm2 has this functionality and no other terminal emulator I've tried has this, on Linux or Windows. I'm sure this is achievable with AutoHotkey but I'd prefer native functionality.
Use case for me:
When testing or running experimental scripts, it is common to rerun your script every time you make a change. It is very productive for me if I don't need to take my hands off the keyboard for this. Personally I would bind this functionality to the "context menu" key (between CTRL and Windows), since I never use that button anyway.
| Issue-Feature,Help Wanted,Area-UserInterface,Product-Terminal | low | Major |
495,728,013 | create-react-app | Service worker: delete index.html from cache in onupdatefound event | ### Is your proposal related to a problem?
<!--
Provide a clear and concise description of what the problem is.
For example, "I'm always frustrated when..."
-->
When the service worker in `registerServiceWorker.(js|ts )` is enabled it provides very useful PWA benefits.
However, in a production build it also caches the `index.html` file which references the names & hashes of the outputted JS chunks by the webpack build. When a new build is created, new chunks and hashes are generated, however, the service worker at the client keeps the old `index.html` and looks for the old chunks, which completely breaks the app as no JS can be obtained.
### Describe the solution you'd like
The registration of the service worker in `registerServiceWorker.ts` already detects new content available from the server and logs a prompt to refresh in the console. In the `onupdatefound` event [here](https://github.com/facebook/create-react-app/blob/a1afaa6376590f732d3f32e6ab9a6dcf8c8aafbc/packages/react-scripts/template-typescript/src/serviceWorker.ts#L77) there is an assumption that old content has been purged, when in fact even after closing all tabs and hard reloading I keep seeing this message and when I check the cache in DevTools and the network I see it's coming from the service worker. Other issues can be found here #7121 #4674 #5316 which can't get past this issue as well.
At the event handler as well as the console log of `New content is available; please refresh.`, it would be better if `index.html` is manually deleted from the cache using the `Cache` API, the cache key seems to be in the form of `workbox-precache-https://mysite.io/`.
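A rough JavaScript sketch of that idea as it might slot into the existing `onupdatefound` handler, assuming `registration` is in scope and the `workbox-precache-…` cache naming mentioned above (the name prefix and URL matching are assumptions):

```js
registration.onupdatefound = () => {
  const installingWorker = registration.installing;
  if (installingWorker == null) {
    return;
  }
  installingWorker.onstatechange = async () => {
    if (installingWorker.state === 'installed' && navigator.serviceWorker.controller) {
      // New content is available: drop the stale index.html from the precache
      // so the next load fetches the new chunk manifest from the network.
      const cacheNames = await caches.keys();
      for (const name of cacheNames.filter((n) => n.startsWith('workbox-precache'))) {
        const cache = await caches.open(name);
        for (const request of await cache.keys()) {
          const path = new URL(request.url).pathname;
          if (path === '/' || path.endsWith('/index.html')) {
            await cache.delete(request);
          }
        }
      }
      console.log('New content is available; please refresh.');
    }
  };
};
```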
### Describe alternatives you've considered
Turning the service worker off takes away a lot of benefits and PWA functionality, so it's not really an option as proposed in other issues related to this.
Many thanks :) | issue: proposal,needs triage | medium | Critical |
495,761,805 | rust | Unit struct appears to be a NoType in pdb files | A unit struct has the following debug metadata:
"'()", DW_ATE_unsigned
and when reading the type data from generated pdb file I get a <no type> for it.
```rust
#[inline(never)]
fn foo(_: ()) { }
fn main() {
let x = ();
foo(x);
}
```
Compile it, read pdb data from the pdb file in target/debug :
```bash
llvm-pdbutil dump -symbols mycrate.pdb | rg 'mycrate::foo' -A2
```
and you'll get something like:
```bash
212 | S_LPROC32 [size = 52] `plop::foo`
parent = 0, end = 324, addr = 0001:0352, code size = 3
type = `0x101D (void (<no type>))`, debug start = 0, debug end = 0, flags = none
```
It's probably because the size of an unit struct is zero (am I correct ?) and then when dumping into pdb then it is converted into no type.
I think it would be better to have an empty struct instead of an unsigned.
| A-debuginfo,T-compiler,O-windows-msvc,C-bug | low | Critical |
495,840,716 | rust | -C target-feature/-C target-cpu are unsound | In a nutshell, target-features are part of the call ABI, but Rust does not take that into account, and that's the underlying issue causing #63466, #53346, and probably others (feel free to refer them here).
For example, for an x86 target without SSE linking these two crates shows the issue:
```rust
// crate A
pub fn foo(x: f32) -> f32 { x }
// crate B
extern "Rust" {
#[target_feature(enable = "sse")] fn foo(x: f32) -> f32;
}
unsafe { assert_eq!(foo(42.0), 42.0) }; // UB
```
The ABI of `B::foo` is ` "sse" "Rust"` but the ABI defined in `A::foo` is just `"Rust"`, no SSE, since the SSE feature is not enabled globally. That's an ABI mismatch and results in UB of the form ["calling a function with an incompatible call ABI"](https://github.com/rust-lang-nursery/reference/blob/master/src/behavior-considered-undefined.md). For this particular case, `B::foo` expects the `f32` in an xmm register, while `A::foo` expects it in an x87 register, so the roundtrip of `42.0` via `foo` will return garbage.
Now, this example is not unsound, because it requires an incompatible function declaration, and unsafe code to call it - and arguably, the `unsafe` asserts that the declaration is, at least correct (note: we forbid assigning `#[target_feature]` functions to function pointers and only allow enabling features and using white-listed features because of this).
However, you can cause the exact same issue, but on a larger scale, by using `-C target-feature`/`-C target-cpu` to change the features that a Rust crate assumes the `"Rust"` ABI has, without any unsafe code:
```rust
// crate A: compiled without -C target-feature
pub fn foo(x: f32) -> f32 { x }
// crate B: compiled with -C target-feature=+sse
// this is now safe Rust code:
assert_eq!(A::foo(42.0), 42.0) }; // UB
```
So that's the soundness bug. Safe Rust can exhibit [undefined behavior of the form "calling a function with an incompatible call ABI"](https://github.com/rust-lang-nursery/reference/blob/master/src/behavior-considered-undefined.md).
This is an issue, because when `RUSTFLAGS="-C target-cpu=native"` is used, not all crates in the dependency graph are compiled with those features. In particular, `libstd` and its dependencies are not recompiled at all, so their "Rust" ABI might be different than what the rest of the dependency graph uses. `-C target-feature` also allows disabling features, `-C target-feature/target-cpu` allow enabling/disabling features that are not white-listed (e.g. avx512f if your CPU supports it can be enabled using `-C target-feature` and will be enabled using `-C target-cpu=skylake` or `=native`even though the `avx512f` feature is unstable).
How important is fixing this ? I'd say moderately important, because many features are compatible. For example, the `"sse2" "Rust"` ABI has the same calling convention as the `"sse3" "Rust"`, so even though the call ABIs aren't identical, they are compatible. That is, this bug is unlikely to cause issues in practice, unless you happen to end up with two crates where the difference in target features changes the ABI.
I think that rustc should definitely detect that the ABIs are incompatible, and at least, error at compile-time when this happen, explaining what went wrong, which target-features differed in an incompatible way, etc.
We could make such code work, e.g., by just treating target-features as part of the calling convention, and following the calling convention properly. I don't think fixing this would be useful for people doing `-C target-feature` globally, because when that happens, chances are that they wanted to compile their whole binary in a specific way, as opposed to having different parts of the binary somehow interoperate.
It would however be useful for improving how we currently handle SIMD vectors. For example, a vector types like f64x4 will be passed in different registers (2x 128-bit or 1x 256-bit) depending on whether sse or avx are available. We currently "solve" this problem by always passing vectors to functions indirectly by memory. That is, on the caller side we spill the vector registers into the stack, create a pointer to it, pass it to the callee, and then the callee loads the content into the appropriate vectors. Since target-features are not part of the calling convention, we always do this, as opposed to, only when the calling convention is incompatible. So having target-features be part of the calling convention might be a way to improve that. | A-codegen,P-medium,T-compiler,I-unsound,C-bug,A-ABI,A-target-feature | medium | Critical |
495,856,167 | flutter | Staggered GridView in Flutter | <!-- Thank you for using Flutter!
If you are looking for support, please check out our documentation
or consider asking a question on Stack Overflow:
* https://flutter.dev/
* https://api.flutter.dev/
* https://stackoverflow.com/questions/tagged/flutter?sort=frequent
If you have found a bug or if our documentation doesn't have an answer
to what you're looking for, then fill our the template below. Please read
our guide to filing a bug first: https://flutter.dev/docs/resources/bug-reports
-->
## Use case
Why isn't there in-built support for a staggered grid view in Flutter?
This is a basic use case that is covered by almost any mobile framework.
For example, Android has a `StaggeredGridLayoutManager` for `RecyclerView`.
I don't want to rely on a 3rd party library for this type of UI constructs.
<!--
Please tell us the problem you are running into that led to you wanting
a new feature.
Is your feature request related to a problem? Please give a clear and
concise description of what the problem is.
Describe alternative solutions you've considered. Is there a package
on pub.dev/flutter that already solves this?
--> | c: new feature,framework,a: fidelity,f: scrolling,would be a good package,c: proposal,P3,team-framework,triaged-framework | medium | Critical |
495,882,224 | flutter | Hot restart/reload for web should allow controlling invalidation of resources. | We should know in the tool what resources have been invalidated when performing a hot restart. It should be possible to, via some debugger connection, provide the list of invalidated resources and preserve the rest.
This would remove the loading jank for applications with lots of fonts. | tool,c: performance,t: hot reload,platform-web,P2,team-web,triaged-web | low | Critical |
495,896,083 | go | x/text/language: language Matcher does not properly support zh_HK (Hong Kong) | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
1.13
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
Reproduces on all processors and architectures
### What did you do?
Tried to match the following language header from Google Chrome:
<pre>zh-HK,zh;q=0.9,zh;q=0.8,en-GB;q=0.7,en;q=0.6,ja;q=0.5,th;q=0.4,en-US;q=0.3</pre>
https://play.golang.org/p/hQ_FUehpgoZ
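For reference, a self-contained Go sketch of the same kind of match outside the playground; the list of supported tags is an assumption and not taken from the report:

```go
package main

import (
	"fmt"

	"golang.org/x/text/language"
)

func main() {
	supported := []language.Tag{
		language.English,            // en: fallback
		language.SimplifiedChinese,  // zh-Hans
		language.TraditionalChinese, // zh-Hant
	}
	matcher := language.NewMatcher(supported)

	accept := "zh-HK,zh;q=0.9,zh;q=0.8,en-GB;q=0.7,en;q=0.6,ja;q=0.5,th;q=0.4,en-US;q=0.3"
	tag, index, confidence := language.MatchStrings(matcher, accept)
	// The report expects a zh-Hant-based match for zh-HK; the observed
	// behaviour falls back to another language instead.
	fmt.Println(tag, index, confidence)
}
```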
### What did you expect to see?
Language matcher returns the Hong Kong region and Traditional Chinese character set
### What did you see instead?
The language matcher is unaware of the Hong Kong region and falls back to the next language (Chinese or Taiwanese). It does not return the Traditional Chinese character set unless zh-Hant or zh-TW is explicitly set in the language header.
| NeedsInvestigation | low | Minor |
495,902,225 | go | proposal: cmd/doc: add "// Unstable:" prefix convention | ### What did you see today?
Many projects have different stability guarantees for the exported symbols in a package. Others rely on generated code that cannot, or will not, give stability guarantees. As such, most authors will document this pre-release or instability in doc strings, but the syntax and conventions are all over the place.
The standard library likes to reference the `Go compatibility promise` or `Go 1 compatibility guidelines`, since it is bound by them, however these do not work well for community packages. Other terms like `non-portable` and `EXPERIMENTAL` are descriptive and well explained in `unsafe` and `syscall/js` respectively.
Some community libraries have used terms like `// Alpha: ...`, `// Beta: ...`, and `// Unstable: ...`, which work as well. There could be an argument for not releasing pre-release features on a stable branch, but other times like with the [proto client/server `interfaces`](https://github.com/grpc/grpc-go/issues/2318), instability is a guarantee that must be [worked around](https://github.com/golang/protobuf/pull/785).
### What would you like to see?
Similar to [`// Deprecated: ...`](https://github.com/golang/go/issues/10909), I would like to see the stabilization of supported comment tags for unstable symbols.
They would support the same [three formats](https://github.com/golang/go/issues/10909#issuecomment-136492606) that deprecation has.
These tags should also allow such symbols to be excluded from the [`gorelease`](https://github.com/golang/go/issues/26420) tool.
A single tag should be sufficient:
```go
// Unstable: ...
```
When interacting with released, finalized symbol that cannot or will not be stabilized, the description can provide stability workarounds, alternatives, or what guarantees should be expected.
When interacting with pre-release features, a proposed timeline can be given or alternatives for customers requiring stability guarantees.
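To make the shape concrete, a hypothetical package using the proposed tag (names and wording invented, not part of any real API):

```go
// Package stream is a hypothetical example of the proposed convention.
package stream

// Token identifies a streaming session.
//
// Unstable: the fields of Token may change before the next minor release;
// treat it as an opaque value and do not construct it directly.
type Token struct {
	id string
}

// Open starts a session for the given id.
//
// Unstable: the signature may gain an options parameter; pin an exact
// module version if you need a stability guarantee.
func Open(id string) Token {
	return Token{id: id}
}
```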
### What did not work?
The `// Alpha: ...` and `// Beta: ...` options looked promising, since they would only be used for temporary instability as part of the release process. The two terms overload one another (what is the difference between alpha, beta, and `// PreRelease: ...`?), leading to confusion. Furthermore, the programmatic benefits of knowing an API will stabilize in a future release is not that useful, "is it unstable now?" is more important.
The `// Experimental: ...` syntax used by the standard library implies the notion that the feature will eventually be stabilized. This further overloads it with alpha, beta, etc. and does not fit the needs of the above gRPC interfaces.
The `// NonPortable: ...` syntax is too domain specific to `unsafe` to make sense for purely semantic changes to packages. It makes sense for `unsafe`, but does not generalize | Documentation,Proposal,Proposal-Hold | high | Critical |
495,936,490 | flutter | Do not create an onscreen surface when foregrounding if no FlutterViewController is visible | Spin off from https://github.com/flutter/flutter/issues/38736 and https://github.com/flutter/engine/pull/12277#issuecomment-531987029
Right now when coming back into the foreground on iOS, the PlatformViewIOS will recreate the IOSSurface. This isn't necessary if the FlutterViewController isn't visible. | platform-ios,engine,a: platform-views,c: proposal,P2,team-ios,triaged-ios | low | Minor |
495,975,643 | go | net/http/httputil: ReverseProxy doesn't handle connection/stream reset properly | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
net/http/httputil: ReverseProxy
</pre>
### Does this issue reproduce with the latest release?
yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/jzhan/Library/Caches/go-build"
GOENV="/Users/jzhan/Library/Application Support/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GONOPROXY=""
GONOSUMDB=""
GOOS="darwin"
GOPATH="/Users/jzhan/gocode"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/local/Cellar/go/1.13/libexec"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/local/Cellar/go/1.13/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
AR="ar"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/4p/ch6dsl490v9227wwhd9rnb7w0000gn/T/go-build248622450=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>
### What did you do?
<!--
If possible, provide a recipe for reproducing the error.
A complete runnable program is good.
A link on play.golang.org is best.
-->
I am trying to test the behavior of HTTP forwarding using `httputil.Reverseproxy`. Especially the behavior when the backend server closes the connection (for HTTP/1) or resets the stream (for HTTP/2). To test the stream reset behavior, the server uses `panic(http.ErrAbortHandler)` in the handler.
Wrote some tests for these.
For HTTP1: https://github.com/jaricftw/go-http-reverseproxy/blob/master/http1_test.go#L20, which tests:
1). http1 client -> http1 server
2). http1 client -> httputil.Reverseproxy -> http1 server
For HTTP2: https://github.com/jaricftw/go-http-reverseproxy/blob/master/http2_test.go#L25, similarly:
1). h2c client -> h2c server
2). h2c client -> httputil.Reverseproxy (with h2c setup) -> h2c server
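A condensed Go sketch of the HTTP/1 case for readers who don't want to open the repo; it uses httptest servers and the default transport, which is an assumption about how the linked tests are wired up:

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Backend that aborts the response mid-flight.
	backend := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		panic(http.ErrAbortHandler)
	}))
	defer backend.Close()

	backendURL, _ := url.Parse(backend.URL)
	proxy := httptest.NewServer(httputil.NewSingleHostReverseProxy(backendURL))
	defer proxy.Close()

	// Direct request: the client sees an error (EOF).
	_, directErr := http.Get(backend.URL)
	fmt.Println("direct:", directErr)

	// Via the proxy: the client instead gets a 502 response and no error.
	resp, proxyErr := http.Get(proxy.URL)
	if proxyErr == nil {
		fmt.Println("proxied status:", resp.StatusCode)
	} else {
		fmt.Println("proxied err:", proxyErr)
	}
}
```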
### What did you expect to see?
I would expect all tests to pass, i.e. the responses with and without the reverse proxy should be the same in the stream reset scenario.
### What did you see instead?
In the happy case, w/ and w/o reverseproxy tests all passed, for both HTTP/1 and HTTP/2.
However, I see different responses W/ and W/O reverseproxy for the connection/stream reset test cases:
**For HTTP/1**:
W/O reverseproxy:
```
PASS: TestHTTP1/direct,_server_panic (0.00s)
http1_test.go:62: got err: Get http://127.0.0.1:6666: EOF
http1_test.go:63: got resp: <nil>
```
W/ reverseproxy:
```
FAIL: TestHTTP1/via_proxy,_server_panic (0.00s)
http1_test.go:62: got err: <nil>
http1_test.go:63: got resp: &{502 Bad Gateway 502 HTTP/1.1 1 1 map[Content-Length:[0] Date:[Thu, 19 Sep 2019 20:52:56 GMT]] {} 0 [] false false map[] 0xc0001da300 <nil>}
require.go:248:
Error Trace: http1_test.go:65
Error: An error is expected but got nil.
Test: TestHTTP1/via_proxy,_server_panic
```
**For HTTP/2:**
W/O reverseproxy:
```
PASS: TestHTTP2/direct,_server_panic (0.00s)
http2_test.go:78: got err: Get http://127.0.0.1:7777: stream error: stream ID 3; INTERNAL_ERROR
http2_test.go:79: got resp: <nil>
```
W/ reverseproxy:
```
FAIL: TestHTTP2/via_proxy,_server_panic (0.00s)
http2_test.go:78: got err: <nil>
http2_test.go:79: got resp: &{502 Bad Gateway 502 HTTP/2.0 2 0 map[Content-Length:[0] Date:[Thu, 19 Sep 2019 19:01:15 GMT]] 0xc00022e700 0 [] false false map[] 0xc000250800 <nil>}
require.go:248:
Error Trace: http2_test.go:81
Error: An error is expected but got nil.
Test: TestHTTP2/via_proxy,_server_panic
FAIL
```
| NeedsInvestigation | low | Critical |
496,022,384 | go | cmd/go: do not allow the main module to replace (to or from) itself | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.13 darwin/amd64
</pre>
This problem is also seen on Linux using go1.12.3
<pre>
$ go version
go version go1.12.3 linux/amd64
</pre>
And on Mac with the tip branch
<pre>
$ gotip version
go version devel +fe2ed50 Thu Sep 19 16:26:58 2019 +0000 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes.
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/rselph/Library/Caches/go-build"
GOENV="/Users/rselph/Library/Application Support/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GONOPROXY=""
GONOSUMDB=""
GOOS="darwin"
GOPATH="/Users/rselph/go"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/opt/local/lib/go"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/opt/local/lib/go/pkg/tool/darwin_amd64"
GCCGO="gccgo"
AR="ar"
CC="/usr/bin/clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/4w/6dmz_gmd6fj_qy_nwdwclg8c0000gp/T/go-build675848548=/tmp/go-build -gno-record-gcc-switches -fno-common"
GOROOT/bin/go version: go version go1.13 darwin/amd64
GOROOT/bin/go tool compile -V: compile version go1.13
uname -v: Darwin Kernel Version 18.7.0: Tue Aug 20 16:57:14 PDT 2019; root:xnu-4903.271.2~2/RELEASE_X86_64
ProductName: Mac OS X
ProductVersion: 10.14.6
BuildVersion: 18G95
lldb --version: lldb-1001.0.13.3
Swift-5.0
</pre></details>
### What did you do?
Option A:
```
$ git clone https://github.com/rselph-tibco/go-unstable-mods.git
$ cd go-unstable-mods
$ git checkout start_here
$ git switch -c new_branch
$ ./run.sh
```
The git repository will be updated with the results of each step. Optionally, comment out the git
commands to simply produce the error without recording results along the way.
This is equivalent to:
1. Start with the contents of the attached file
[go-unstable-mods-start_here.tar.gz](https://github.com/golang/go/files/3633168/go-unstable-mods-start_here.tar.gz)
1. Set GOPATH and GOCACHE to point at empty directories
1. From the sample1 directory run `go mod tidy`
1. From the sample2 directory run `go mod tidy`
1. From the sample2 directory run `go install ./...`
1. From the sample1 directory run `go install ./...`
1. Repeat the last step indefinitely
At this point, sample1/go.mod will never stabilize.
### What did you expect to see?
go.mod should stabilize when the build is given the same inputs over and over.
### What did you see instead?
go.mod eventually oscillates between two states, preventing `-mod readonly` from *ever* working, and wreaking
havoc with source control.
| help wanted,NeedsFix,GoCommand,modules | medium | Critical |
496,030,508 | kubernetes | Cache CRD conversion results | **What would you like to be added**:
A cache of CRD conversion results.
Cache entries should be keyed by UID (namespace, name), GVK and resource version, I believe.
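To illustrate the shape of what I mean, a naive sketch of such a cache follows; the type and field names are purely illustrative and this is not actual apiextensions-apiserver code:
```go
// Illustrative only; not the actual apiextensions-apiserver code.
package main

import (
	"fmt"
	"sync"
)

// cacheKey identifies one conversion result: which object (UID) at which
// resourceVersion, converted to which target group/version/kind.
type cacheKey struct {
	uid             string
	resourceVersion string
	targetGVK       string // e.g. "example.com/v2, Kind=Widget"
}

type conversionCache struct {
	mu      sync.RWMutex
	entries map[cacheKey][]byte // serialized converted object
}

func newConversionCache() *conversionCache {
	return &conversionCache{entries: map[cacheKey][]byte{}}
}

func (c *conversionCache) get(k cacheKey) ([]byte, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	v, ok := c.entries[k]
	return v, ok
}

func (c *conversionCache) put(k cacheKey, converted []byte) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.entries[k] = converted
}

func main() {
	cache := newConversionCache()
	k := cacheKey{uid: "1234", resourceVersion: "42", targetGVK: "example.com/v2, Kind=Widget"}
	if _, hit := cache.get(k); !hit {
		// Cache miss: this is where the conversion webhook would actually be called.
		cache.put(k, []byte(`{"apiVersion":"example.com/v2","kind":"Widget"}`))
	}
	converted, _ := cache.get(k)
	fmt.Println(string(converted))
}
```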
**Why is this needed**:
Based on the benchmark data we gathered for the [CRD scalability targets](https://github.com/kubernetes/enhancements/blob/master/keps/sig-api-machinery/20180415-crds-to-ga.md#scale-targets-for-ga), this should significantly reduce the volume of CRD conversion webhook requests, improving scalability and reducing latency.
/sig api-machinery
/area custom-resources
/priority important-longterm
/cc @roycaihw @liggitt | sig/api-machinery,kind/feature,priority/important-longterm,area/custom-resources,lifecycle/frozen | low | Major |
496,055,201 | flutter | Embedded platform views don't support Switch Access | /cc @amirh @goderbauer
Switch Access appears to work fine with Flutter Gallery. | platform-android,framework,a: accessibility,from: a11y review,a: platform-views,P2,team-android,triaged-android | low | Minor |
496,056,757 | flutter | Voice Access bugs | /cc @amirh @goderbauer
We appear to support basic navigation, but from some quick testing with the Gallery I'm seeing some issues.
- [ ] Text fields don't support input.
- [ ] Not all UI elements are correctly discovered and annotated. I'm not sure if this is an issue with the Gallery itself or an underlying semantics issue.
- [ ] After selecting all text, the text editing toolbar does not appear.
- [ ] Deleting selected text shows a "this item does not support spoken..." response. | platform-android,engine,a: accessibility,from: a11y review,P2,team-android,triaged-android | low | Critical |
496,057,734 | flutter | Embedded platform views don't support Voice Access | Embedded platform views appear to lack basic functionality with Voice Access. I was able to open the hamburger menu inside of our webview app, but nothing else in the embedded UI had controls.
This is likely blocked on #40912.
/cc @amirh @goderbauer | platform-android,engine,a: accessibility,from: a11y review,a: platform-views,P2,team-android,triaged-android | low | Minor |
496,089,270 | flutter | [video_player] add buffering controls | I'd like the ability to pause/resume buffering on `video_player`, why?
We've got a situation where we have multiple videos loading offscreen, and we'd like video_player to buffer only enough to keep up once the user navigates to view that video. As far as I know, video_player currently buffers 100% of the video as quickly as possible, which is normally appropriate, but not when you have 6-12 videos buffering slightly off screen.
| c: new feature,p: video_player,package,c: proposal,team-ecosystem,P3,triaged-ecosystem | high | Critical |
496,120,301 | flutter | TextSpan backgroundColor with special Arabic character are not displayed correctly | When using many TextSpans with a background color set, a transparent border appears around the spans, and sometimes it's gone depending on the font size. It is even worse when using Arabic text that contains "Tashkeel" (special Arabic diacritic characters): there will be more gaps between the spans, especially if the alignment is not set to left. As I have noticed, when a span is split across more than one line, things become worse.
**Steps to Reproduce**
1. To test for Arabic texts, first download an Arabic font.
http://fonts.cooltext.com/Downloader.aspx?ID=11182
2. Create "fonts" folder in the application and move the font file there:
3. Define the font in pubspec.yaml
```
fonts:
- family: Andalus
fonts:
- asset: fonts/andlso.ttf
```
- Note: using the default font with Arabic texts will mess up the background color. That's why you need to download the font.
4. Use this code in main.dart:
```dart
import 'package:flutter/material.dart';
void main() => runApp(MainApp());
class MainApp extends StatefulWidget {
@override
_MainAppState createState() => _MainAppState();
}
class _MainAppState extends State<MainApp> {
final _arrText = [
/* English text */
// 'this is sentence one. ',
// 'sentence 2 is here. ',
// 'and here is sentence 3'
/* Arabic text without Tashkeel */
// 'البرمجة هواية جميلة جدا ',
// 'إلا أن الكثيرين يستصعبونها بشكل كبير',
// 'وإني لأراها شيقة سهلة',
/* Arabic text with Tashkeel */
'البَرْمَجَةُ هِوَايَةٌ جَمِيلَةٌ جِدًّا ',
'إِلَّا أَنَّ الكَثِيرِينَ يَسْتَصْعِبُونَهَا بِشَكْلٍ كَبِيرٍ ',
'وَإِنِّي لَأَرَاهَا شَيِّقَةً سَهْلَةً'
];
var _fontSize = 58.0;
@override
Widget build(BuildContext context) {
return MaterialApp(
home: Scaffold(
appBar: AppBar(
title: Text('TextSpan bg Bug'),
),
body: Container(
// color: Colors.brown,
padding: EdgeInsets.all(7), // span background color at the end of the sentence is not respecting this padding
child: Column(
children: <Widget>[
Row(
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
RaisedButton(
child: Text('-'),
onPressed: () => setState(() => _fontSize--),
),
RaisedButton(
child: Text('+'),
onPressed: () => setState(() => _fontSize++),
)
],
),
RichText(
textDirection: TextDirection.rtl,
// for Arabic texts, background color is better when aligned left
textAlign: TextAlign.center,
text: TextSpan(
style: TextStyle(
color: Colors.black,
fontSize: _fontSize,
fontFamily: 'Andalus', // comment this line when using english texts
),
children: [
..._arrText
.map(
(span) => TextSpan(
text: span,
style: TextStyle(
backgroundColor: Colors.lightBlue,
fontFamily: 'Andalus', // comment this line when using english texts
),
),
)
.toList()
]),
),
],
),
),
),
);
}
}
```
**Screenshots**
English Text:

Arabic Text with no "Tashkeel":

Arabic Text with "Tashkeel":

```
[✓] Flutter (Channel stable, v1.9.1+hotfix.2, on Microsoft Windows [Version 10.0.17763.678], locale en-US)
• Dart version 2.5.0
[✓] Android toolchain - develop for Android devices (Android SDK version 29.0.0)
[✓] Android Studio (version 3.4)
• Flutter plugin version 36.0.1
• Dart plugin version 183.6270
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1343-b01)
[✓] VS Code (version 1.38.1)
• Flutter extension version 3.4.1
[✓] Connected device (1 available)
• No issues found!
``` | framework,engine,a: internationalization,a: quality,a: typography,has reproducible steps,P2,found in release: 3.7,found in release: 3.9,team-engine,triaged-engine | low | Critical |
496,124,563 | youtube-dl | Support of Le Monde |
## Checklist
- [X] I'm reporting a new site support request
- [X] I've verified that I'm running youtube-dl version **2019.09.12.1**
- [X] I've checked that all provided URLs are alive and playable in a browser
- [X] I've checked that none of provided URLs violate any copyrights
- [X] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
- Single video: https://www.lemonde.fr/festival/video/2017/08/30/que-se-passerait-il-si-tout-le-monde-etait-vegan_5178386_4415198.html
- Single video: https://www.lemonde.fr/sciences/video/2019/09/12/correlation-et-causalite-peut-on-decrocher-un-prix-nobel-en-mangeant-du-chocolat_5509656_1650684.html
## Description
Le Monde videos are actually on YouTube or Dailymotion but youtube-dl doesn't catch them. | site-support-request | low | Critical |
496,130,457 | flutter | dispose() of Stateful widgets is not called when the App is exited by pressing the backbutton | Why is dispose() executed for some widget states when the page is closed, while other widget states never execute dispose()? It's really annoying, I want to cry, and releasing resources becomes a problem. | platform-android,framework,dependency: android,has reproducible steps,P3,found in release: 3.3,found in release: 3.7,team-android,triaged-android | high | Critical |
496,132,848 | go | go/types: what is the (types.Object) identity of embedded methods? | Up to Go 1.13, embedded interfaces could not have overlapping methods. The corresponding `go/types` implementation, when computing method sets of interfaces, reused the original methods in embedding methods. For instance, given:
```Go
type A interface {
m()
}
type B interface {
A
}
```
the `go/types` method `Func` `Object` for `m` in `B` was the same as in `A`. We know that some (external) tests depended on this property. A consequence of this is that the source position of `B.m` is the same as the one of `A.m`; it is not the position of the embedded `A` in `B`.
With Go 1.14, embedded interfaces may overlap. The same method `m` (with identical signature), may be embedded in an interface through multiple embedded interfaces:
```Go
type C interface {
A
A2 // A2 also defines m()
A3 // A3 also defines m()
}
```
Now, it is not clear which "original" method `m` should be used in the method set for `C`. It could be the "first" one (as in the one embedded via the earliest embedded interface providing `m` in the source, so `A` in this example), or it could be undefined. Or it could be a new method `Func` `Object`. Note that the latter choice wouldn't be of much help when deciding which position should be assigned to `m` as it could be any of the embedded interfaces that provide `m`.
This boils down to what kind of API guarantee should be provided by `go/types`. | NeedsDecision | low | Minor |
496,169,652 | vue | v-slot to be used in case a slot prop is undefined error | ### Version
2.6.10
### Reproduction link
[https://codepen.io/gongph/pen/bGbQLGE?editors=1010](https://codepen.io/gongph/pen/bGbQLGE?editors=1010)
Reproduction code at below:
Javascript:
```js
Vue.component('current-user', {
data () {
return {
item: {
user: {
name: 'defualt name'
}
}
}
},
template: `
<div>
<slot v-bind="item"></slot>
</div>
`
})
new Vue({
el: '#app'
})
```
Html:
```html
<div id="app">
<current-user v-slot="{ user }">
<!-- page print: `default name` -->
{{ user.name }}
</current-user>
<current-user v-slot="{ user = { name: 'gongph' } }">
<!-- page print: { 'name': 'default name' }-->
{{ user }}
</current-user>
</div>
```
### Steps to reproduce
1. Open the browser console
2. The console prints this [Vue warn]:
```bash
[Vue warn]: Error compiling template:
invalid expression: Invalid shorthand property initializer in
v-slot="{ user = { name: 'gongph' } }"
```
### What is expected?
If the prop is undefined, show the `gongph` value. For example:
```js
Vue.component('current-user', {
data () {
return {
item: '' // item is undefined
}
},
template: `
<div>
<slot v-bind="item"></slot>
</div>
`
})
```
```html
<current-user v-slot="{ user = { name: 'gongph' } }"
<!-- expected output `gongph` -->
{{ user.name }}
</current-user>
```
### What is actually happening?
The `gongph` value renders normally, but the console shows a warning message:
```bash
[Vue warn]: Error compiling template:
invalid expression: Invalid shorthand property initializer in
v-slot="{ user = { name: 'gongph' } }"
```
<!-- generated by vue-issues. DO NOT REMOVE --> | improvement | low | Critical |
496,171,425 | pytorch | Multilinear map | ## 🚀 Feature
Multilinear layer: a generalization of the Linear and Bilinear layers/functions to any configurable number of variables. This should be a learnable multilinear map that does the equivalent of this function, plus a bias (from the [Multilinear map Wikipedia page](https://en.wikipedia.org/wiki/Multilinear_map)):

## Details
Implement the following:
`torch.nn.functional.multilinear((input1, ..., inputN), weight, bias=None)`
`torch.nn.Multilinear((in1_features, ..., inN_features), out_features, bias=True)`
I think the rank of the weight tensor would be the number of input features plus one.
## Motivation
This is useful for situations where you want to combine more than two vectors using a learnable function that is linear in all variables. A good use case would be tensor fusion with more than two variables.
I plan to implement this for use in a project. It should be fairly easy to implement using something like `torch.einsum`, but a native C++ implementation could speed things up significantly.
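For illustration, a minimal `torch.einsum`-based sketch of what the three-input case could compute; the function name and the weight layout here are my assumptions, not a settled API:
```python
import torch

def multilinear3(x1, x2, x3, weight, bias=None):
    # Illustrative sketch only.
    # weight: (out_features, in1_features, in2_features, in3_features)
    # x_k:    (batch, ink_features)
    out = torch.einsum('bi,bj,bk,oijk->bo', x1, x2, x3, weight)
    if bias is not None:
        out = out + bias
    return out

x1, x2, x3 = torch.randn(8, 4), torch.randn(8, 5), torch.randn(8, 6)
weight = torch.randn(3, 4, 5, 6)
bias = torch.randn(3)
print(multilinear3(x1, x2, x3, weight, bias).shape)  # torch.Size([8, 3])
```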
Should I submit a PR when this is done?
cc @albanD @mruberry | module: nn,triaged,function request | low | Major |
496,183,236 | TypeScript | Abstract classes will still import the value for a computed property at runtime |
**TypeScript Version:** 3.7.0-dev.20190920
abstract class import computed property
abstract class computed property imports for runtime
**Code**
sharedSymbol.ts
```ts
export const sharedSymbol: unique symbol = Symbol();
```
AbstractClass.ts
```ts
import { sharedSymbol } from "./sharedSymbol";
abstract class AbstractClass {
public abstract [sharedSymbol]: string;
}
```
**Expected behavior:**
sharedSymbol.js is not imported by AbstractClass.js
**Actual behavior:**
The symbol is imported by AbstractClass.js despite it's use being omitted due to it's abstract nature
**Playground Link:** Unavailable due to the bug being related to imported values.
**Related Issues:** No, I did not find any
| Bug | low | Critical |
496,205,813 | flutter | First line Indentation For Text | I am developing a Reader with Flutter. When it comes to text rendering, I have not found the API for setting the first line indentation. Could you add this feature?
Flutter version : v1.9.1+hotfix.2
| c: new feature,framework,a: typography,c: proposal,P3,team-framework,triaged-framework | low | Major |
496,213,437 | storybook | TypeScript Props table limitations (MDX) <Props of={Component}/> | I'm not sure if this also the case stories written in CSF.
I've noticed a couple of limitations when trying to auto generating prop tables in typescript.
**1. It looks like only Interfaces are supported, using a Type seems to break it.**
**2. using generics (in the case of react) seem to break it too, example below:**
This works:
```
export const Text = ({
padding = '0',
margin,
}: Props) => ()
```
This doesn't
```
export const Text: React.FC<Props> = ({
padding = '0',
margin,
}) => ()
```
**3. Using imported types like the example below are always inferred as `any`**
```
export interface Props {
foo: SomeImportedType['property']
}
```
**4. Doesn't work with React.forwardRef** | bug,typescript,addon: docs,block: props | medium | Major |
496,224,022 | go | cmd/gofmt: inconsistent formatting of spacing around operators | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.12.9 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes, tested with `format` button on https://play.golang.org/p/Ei6Hu4QqAHa
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/{username}/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/{username}/go"
GOPROXY=""
GORACE=""
GOROOT="/snap/go/4301"
GOTMPDIR=""
GOTOOLDIR="/snap/go/4301/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD="/dev/null"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build430600410=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
Operators inside function calls are inconsistently formatted depending on whether they are inside `append` or not.
### What did you expect to see?
```go
bar := []int{
baz(1 * 2),
}
bar = append(bar,
baz(1 * 2),
)
```
### What did you see instead?
```go
bar := []int{
baz(1 * 2),
}
bar = append(bar,
baz(1*2),
)
``` | NeedsInvestigation | low | Critical |
496,233,771 | pytorch | MKLDNN+AMD BLIS path for PyTorch | ## 🚀 Feature
We have added an MKLDNN+AMD BLIS path for PyTorch and want to upstream our changes to the master branch.
## Motivation
PyTorch with MKLDNN+AMD BLIS would give higher performance for DNN applications on AMD Platforms
## Pitch
We want to upstream our changes to the master branch. We have added a new CMake file for taking the AMD BLIS path.
## Alternatives
The BLIS path is an alternative to the OpenBLAS path, which already exists.
## Additional context
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh | feature,module: cpu,triaged,module: mkldnn | low | Major |
496,236,732 | godot | Handling signals which were emitted in a thread unsafe? | **Godot version:**
3.1.1
**OS/device including version:**
Ubuntu 19.04, Windows 10
**Issue description:**
I want to emit a signal in a thread to notify that something new is available, so that it can be handled safely in the main thread. However, it seems that when I emit a signal in a thread, the function connected to it is not run in a thread-safe way either.
Since the errors are very hard to interpret (usually something like 10054 socket error), it's hard to know where exactly something unsafe is happening. I think the documentation needs to be extended with how signals and threads interact.
If this is really currently not the case, it would be very helpful for signal handling to always be thread safe, no matter where it came from. Until then, I see this mainly as a documentation issue (there is no mention of signals in the 'Thread safe APIs' page). | discussion,topic:core | low | Critical |
496,247,618 | flutter | A command line option to flutter drive and flutter test to fail when exceptions are detected | When `flutter drive` executes, it will log exceptions caught by the rendering library (and others, like the widget library). For example:
```
I/flutter (20477): ══╡ EXCEPTION CAUGHT BY RENDERING LIBRARY ╞════════════════════════════════════════════════════
I/flutter (20477): The following assertion was thrown during layout:
I/flutter (20477): A RenderFlex overflowed by 1.00 pixels on the bottom.
```
This does not cause a `flutter drive` integration test to fail.
The use case: Build new interfaces, or use plugins that do not format well on some devices. Have emulators running as part of a CI build to execute `flutter drive` in parallel and fail if a supported device cannot render part of the UI.
That `flutter drive` can detect and log it to stdout means that `flutter drive` can detect it. The exception text even explains why it is an error, but `flutter drive` does not fail a test when this condition is detected.
Desired behavior: A command line option that will cause `flutter drive` to exit with a non-zero return code so that CI builds fail when an exception is detected. I believe this should be the default behavior, but there may be plenty of "passing" tests in the wild already that would break with this change.
Of course it is very difficult to surface this in the context of a test where any user code is on the stack, but `flutter drive` is a testing tool, and should absolutely have an option to fail the test under conditions where code intends for something to be seen, but is not rendered due to device constraints.
If this is already a feature, but not well known, perhaps the documentation for "testing with flutter" would be a good place to surface that information.
| a: tests,c: new feature,tool,t: flutter driver,c: proposal,P3,team-tool,triaged-tool | low | Critical |
496,252,792 | kubernetes | Disruption controller support configure workers' number | <!-- Please only use this template for submitting enhancement requests -->
**What would you like to be added**:
Nowadays in the disruption controller we only run a single worker and a recheck worker to sync the PDBs, but in most controllers the number of workers is configurable. Should we make it configurable here as well?
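For illustration, a generic sketch of the kind of change I mean - running a configurable number of workers off one queue instead of exactly one. This is plain Go with placeholder names, not the actual disruption controller code:
```go
// Illustrative only; this is not the actual disruption controller code.
package main

import (
	"flag"
	"fmt"
	"sync"
)

func main() {
	workers := flag.Int("workers", 4, "number of concurrent PDB sync workers")
	flag.Parse()

	queue := make(chan string, 16)
	var wg sync.WaitGroup

	// Instead of exactly one worker goroutine, start a configurable number.
	for i := 0; i < *workers; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for key := range queue {
				// Placeholder for the real sync logic, e.g. syncPDB(key).
				fmt.Printf("worker %d synced %s\n", id, key)
			}
		}(i)
	}

	for _, pdb := range []string{"default/pdb-a", "default/pdb-b", "default/pdb-c"} {
		queue <- pdb
	}
	close(queue)
	wg.Wait()
}
```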
**Why is this needed**:
When we have too many PDBs, the worker will be overloaded and cannot handle the syncing in time. This makes the PDB status lag behind the real state, and the eviction handler cannot work properly. | priority/backlog,sig/scalability,kind/feature,sig/apps,lifecycle/frozen,needs-triage | low | Major |
496,332,761 | godot | Child node doesn't initially detect the correct parent type when it is overriden in an instanced scene |
**Godot version:**
3.1.1.
**OS/device including version:**
Windows 10 64 bit
**Issue description:**
I have two C# scripts, Parent.cs and Child.cs, and a scene (let's call it SceneA) as follows:
- Parent
- Child (Child.cs)
Then I instance that scene in another one (SceneB), and I add a script to the Parent node:
- Parent (Parent.cs) -> Instance of SceneA
- Child (Child.cs)
```
public class Parent : Node { }
```
```
public class Child : Node {
public override void _Notification(int what) {
switch(what) {
case NotificationParented:
if(GetParent() is Parent){
// Code
}
break;
}
}
}
```
When I run SceneA, the code inside this if statement:
```
if(GetParent() is Parent){
// Never executed
}
```
is not executed, and that's expected, but **the same happens even if I run SceneB, where the Parent node should be an instance of Parent.cs class.**
The code runs correctly if the Parent node inside SceneA has Parent.cs attached in the first place.
Seems like at the time NotificationParented is emitted the instanced scene is yet to be updated with overridden values? | bug,topic:core,topic:dotnet | low | Minor |
496,341,971 | godot | Tilemap in a nested viewport in a project with scaling enabled, offsets the position of the mouse | **Godot version:**
3.1.1
**OS/device including version:**
PC/Fedora Linux 30 (but seems to affect everything)
**Issue description:**
When you have a project where scaling is enabled and you have a tilemap within a viewport (i.e. not in the root viewport), the methods to return the mouse position start getting offset by a variable amount. The offset increases the further the mouse travels to the bottom-right. The offset also changes depending on the viewport zoom level.
**Steps to reproduce:**
* Project where scaling is enabled
* Create a 'Node2d > Viewport Container > Viewport > Tilemap' node tree
* Create code to display the position of the mouse using get_local_mouse_position() as well as the current cell position.
You will see that the location of the mouse cursor, will not correspond to the cell your mouse is actually over. It's easier to see if you put a sprite to display the cell.
If you move the tilemap to have Node2D as the parent, the position will start working correctly. If you disable scaling in the project properties, it will also fix this issue.
I have also tried get_global_mouse_position() and others positioning options.
I also tried multiplying with various Transform2D objects during testing with no success.
**Minimal reproduction project:**
https://drive.google.com/file/d/1zUo4lsBz5Bn7MsgRhJcuCBg26qHxMJur/view?usp=sharing | bug,topic:input | low | Major |
496,368,237 | vscode | Remote terminals don't resolve variables when restored on startup |
Issue Type: <b>Bug</b>
I changed my Terminal->Integrated: Cwd setting to "$(workspaceFolder)". Now, when Code starts up, the Terminal does not open, and I get a notification: "The terminal shell CWD "/home/gpudb/gpudb-dev-7.0/gpudb-core/gaiadb-cluster/${workspaceFolder}" does not exist". This happens every time I launch Code.
However, once Code has loaded, I can then do Ctrl-` to open a Terminal window, and it works as expected (also every time). Note that this is doing Remote development (so the Terminal is opening a session on my remote machine).
I suspect the issue is that Code is trying to start the Terminal before $workspaceFolder is set, but it is set later, when I manually open the Terminal. It is a fairly easy work-around, but am reporting it in case this is not the expected behavior.
VS Code version: Code 1.38.1 (b37e54c98e1a74ba89e03073e5a3761284e3ffb0, 2019-09-11T13:35:15.005Z)
OS version: Windows_NT x64 10.0.18362
Remote OS version: Linux x64 4.15.0-62-generic
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz (8 x 2592)|
|GPU Status|2d_canvas: enabled<br>flash_3d: enabled<br>flash_stage3d: enabled<br>flash_stage3d_baseline: enabled<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>native_gpu_memory_buffers: disabled_software<br>oop_rasterization: disabled_off<br>protected_video_decode: unavailable_off<br>rasterization: enabled<br>skia_deferred_display_list: disabled_off<br>skia_renderer: disabled_off<br>surface_synchronization: enabled_on<br>video_decode: enabled<br>viz_display_compositor: disabled_off<br>webgl: enabled<br>webgl2: enabled|
|Load (avg)|undefined|
|Memory (System)|15.76GB (7.51GB free)|
|Process Argv||
|Screen Reader|no|
|VM|0%|
|Item|Value|
|---|---|
|Remote|SSH: andy-linux|
|OS|Linux x64 4.15.0-62-generic|
|CPUs|Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz (8 x 900)|
|Memory (System)|62.86GB (6.62GB free)|
|VM|0%|
</details><details><summary>Extensions (20)</summary>
Extension|Author (truncated)|Version
---|---|---
vscode-cudacpp|kri|0.1.1
remote-containers|ms-|0.74.0
remote-ssh|ms-|0.46.1
remote-ssh-edit|ms-|0.46.1
remote-ssh-explorer|ms-|0.46.1
remote-wsl|ms-|0.39.5
vscode-remote-extensionpack|ms-|0.17.0
vs-keybindings|ms-|0.2.0
code-gnu-global|aus|0.2.2
githistory|don|0.4.6
mssql|ms-|1.6.0
cpptools|ms-|0.25.1
java|red|0.49.0
vscodeintellicode|Vis|1.1.9
vscode-java-debug|vsc|0.21.0
vscode-java-dependency|vsc|0.5.1
vscode-java-pack|vsc|0.8.0
vscode-java-test|vsc|0.19.0
vscode-maven|vsc|0.19.0
gitblame|wad|3.0.1
</details>
<!-- generated by issue reporter --> | bug,help wanted,remote,terminal-process | low | Critical |
496,376,090 | PowerToys | WinRoll 2.0 type functionality | # Summary of the new feature/enhancement
WinRoll 2.0 is a great tool that I've used since Windows XP days, worked great on 98 SE, ME, XP, Vista and even 7. But only for 32bit programs (even on 64bit OSes) It even still works on Windows 10, but again, only for 32bit programs - not 64 bit, and not for modern apps.
What WinRoll 2.0 does is to allow the user to right click the title bar of the window, to have the window minimized to just the title bar, and back again on a second right click.
Here's a screen cap showing Chrome and HexChat being rolled up:
https://i.imgur.com/YN7txgx.png
Screen clutter is something I think most power users experience. Rolling up windows to their title bars provides a few use cases:
- Quickly checking between large-area windows on single-monitor systems (laptops), such as a reference web page being open behind an IDE
- Users can look at a window behind the window they're using without that window losing focus
- Provides the ability to access desktop icons without minimizing the window (which takes additional time)

Rolling up the window, at least with WinRoll 2.0, is extremely quick - both from a user-action perspective and from a functional / process standpoint.
If my reasons are questioned, I implore you to find and use WinRoll 2.0 for a few days before dropping this request. WindowBlinds is another third-party program that does something similar; however, it uses a button on the title bar to perform the action, which is less than ideal in many situations.
This is a feature present in most GNU/Linux DEs as well, and is something I've always thought Windows should have, built-in.
# Proposed technical implementation details (optional)
I would love to see PowerToys have the ability to roll up windows to just the title bar, with ***just*** a right click of the mouse.
The user right clicks the title bar:
The window is minimized to just the title bar
The user has access to everything behind the space the window previous occupied
The user right clicks the title bar again and the window is returned to it's previous size
NOTE:
I understand WinRoll 2.0 is written in ASM, and that PowerToys is not. I don't know if this is even possible outside of using ASM. Wil Palma originally had WinRol 2.0 open sourced, however the original site is no longer serving the program, source or info. I have found https://github.com/saccohuo/winroll which appears to contain the original source for WinRoll 2.0 though. Hopefully this is helpful! Thank You! | Idea-New PowerToy,Product-Window Manager | low | Major |
496,398,655 | go | cmd/go: prefer to report mismatched module paths over mismatched major versions | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.13 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/brenol/.cache/go-build"
GOENV="/home/brenol/.config/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GONOPROXY="bitbucket.org"
GONOSUMDB="bitbucket.org"
GOOS="linux"
GOPATH="/home/brenol/goworkspace"
GOPRIVATE="bitbucket.org"
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/local/go"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
AR="ar"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD="/home/brenol/goworkspace/src/github.com/twitchscience/kinsumer/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build321165660=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
Tried to run go mod tidy in github.com/twitchscience/kinsumer
### What did you expect to see?
I expected it to report correctly that the project github.com/myesui/uuid is the one that uses gopkg.in/stretchr/testify.v1. However, it incorrectly reports that twinj/uuid is the one using it.
### What did you see instead?
```
go: finding gopkg.in/stretchr/testify.v1 v1.4.0
github.com/twitchscience/kinsumer imports
github.com/twinj/uuid tested by
github.com/twinj/uuid.test imports
gopkg.in/stretchr/testify.v1/assert: cannot find module providing package gopkg.in/stretchr/testify.v1/assert
```
How the module looks like:
```
module github.com/twitchscience/kinsumer
go 1.13
require (
github.com/aws/aws-sdk-go v1.24.2
github.com/cactus/go-statsd-client/statsd v0.0.0-20190906215803-47b6058c80f5
github.com/myesui/uuid v1.0.0 // indirect
github.com/stretchr/testify v1.4.0
github.com/twinj/uuid v1.0.0
golang.org/x/net v0.0.0-20190918130420-a8b05e9114ab // indirect
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e
)
```
The strangest thing is that I see no mention of myesui/uuid in the vendored directory. | help wanted,NeedsFix,modules | low | Critical |
496,440,235 | rust | Tracking issue for `rustc_reservation_impl` attribute | ## Background
The `#[rustc_reservation_impl]` attribute was added as part of the effort to stabilize `!`. Its goal is to make it possible to add a `impl<T> From<!> for T` impl in the future by disallowing downstream crates to do negative reasoning that might conflict with that.
**Warning:** the effect of this attribute is quite subtle, and it should be used with caution! In particular, adding a "reservation impl" alone does **not** guarantee that one can add the impl in the future, as described below.
## History
- Initial implementation in https://github.com/rust-lang/rust/pull/62661
## Current blockers before this attribute can be used to affect end-users
- [ ] How should "reserved" impls show up in rustdoc? (https://github.com/rust-lang/rust/issues/64633)
- [ ] What should its error message be? (https://github.com/rust-lang/rust/issues/64633)
## Usage
You can use the `#[rustc_reservation_impl]` attribute as follows:
```rust
#[rustc_reservation_impl]
impl<T> From<!> for T { .. }
```
For the most part, rustc will act as though this impl **does not exist**. For example, users can add overlapping impls if they prefer:
```rust
struct LocalType1<T>(T);
impl<T> From<!> for LocalType1<T> { }
struct LocalType2<T>(T);
impl<T> From<T> for LocalType2<T> { }
```
**Note that this implies that the `#[rustc_reservation_impl]` attribute alone does not guarantee that you can add the reservation impl in a future compatible way.** Adding the reserved impl in the future may still cause coherence overlap (e.g., the impl for `LocalType<T>` for the impl for all `T` would overlap here). This will typically result in errors.
In order to add the impl, you must be able to ensure that coherence will allow the overlapping impls. This can be done in two ways, both of which are presently unstable:
- Specialization: but be careful! e.g., the current specialization rules do not suffice in the `LocalType2` example above, since neither impl is a subset of one another.
- Marker traits: we have a notion of marker traits that are allowed to overlap. Marker traits currently must have no items, which excludes e.g. `From`, but we could grow this definition.
## What *does* the attribute do?
The attribute prevents negative reasoning. In particular, it forbids impls like this:
```rust
trait LocalTrait;
struct LocalType3;
impl<T> LocalTrait for T where T: From<!> { }
impl<T> LocalTrait for LocalType3 { }
```
Without the reservation impl, this would be legal, because the crate may assume that `LocalType: From<!>` does not hold (since `LocalType` is local to the crate). *With* the reservation impl, however, code like this will get an error.
| A-trait-system,T-lang,T-compiler,C-tracking-issue,requires-nightly,F-rustc_attrs,S-tracking-perma-unstable | low | Critical |
496,444,767 | create-react-app | Add a branding page with new logo resources | Add a page to the docs with links to download the new logo as well as colour codes, etc. | tag: documentation | low | Minor |
496,447,869 | go | x/tools/gopls: handle line directives to .y files | ### What version of Go are you using (`go version`)?
go version go1.13 linux/amd64
And gopls...
golang.org/x/tools/gopls v0.1.7
golang.org/x/tools/[email protected] h1:YwKf8t9h69++qCtVmc2q6fVuetFXmmu9LKoPMYLZid4=
### Does this issue reproduce with the latest release?
Yes
### What did you do?
Open https://play.golang.org/p/-QfVqDKxsFI in VSCode.
### What did you expect to see?
No crashing.
### What did you see instead?
"The gopls server crashed 5 times in the last 3 minutes. The server will not be restarted."
| gopls,Tools | low | Critical |
496,469,896 | pytorch | No way to disable mse_loss broadcasting warning | ## π Bug
Hi,
torch.nn.functional.mse_loss always throws a warning when using it with two tensors of different shapes.
There are legitimate reasons for wanting to use differently shaped tensors and take advantage of the standard broadcasting behaviour of PyTorch, so there needs to be a way to disable that warning.
## To Reproduce
```
import torch
import torch.nn.functional as F
a = torch.randn(5, 4)
b = torch.randn(5, 1)
F.mse_loss(a, b)
```
## Expected behavior
There should be an optional parameter to disable the warning, like `warn_broadcasting=True`, `error_broadcasting=False`.
As discussed here: https://github.com/pytorch/pytorch/issues/16045#issuecomment-476266780
## Additional context
There are legitimate cases for wanting to calculate the MSE between differently shaped tensors.
For example I need to calculate the difference between each sample and a subset of other samples. So I need to calculate the MSE between tensors shaped like:
(number of samples, dimension of a sample, 1) and (number of samples, dimension of a sample, number of samples in the subset).
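To make that concrete, here is a small sketch with illustrative shapes; the explicit `expand_as` call is the workaround one currently has to write to avoid the warning:
```python
import torch
import torch.nn.functional as F

n_samples, dim, subset = 5, 3, 4
a = torch.randn(n_samples, dim, 1)       # one value per sample
b = torch.randn(n_samples, dim, subset)  # the subset each sample is compared against

# Explicit workaround one has to write today to avoid the warning:
loss = F.mse_loss(a.expand_as(b), b)
# What the broadcasted MSE means, written out by hand:
loss_manual = ((a - b) ** 2).mean()
print(torch.allclose(loss, loss_manual))  # True
```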
I understand that safeguards need to be put in-place to avoid misleading people, as discussed in the original issue (https://github.com/pytorch/pytorch/issues/16045) but broadcasting is a Pytorch staple and users shouldn't have to re-implement basic functions like the MSE to make use of its full power.
Cheers
| module: nn,triaged,enhancement | low | Critical |
496,482,179 | terminal | Feature Request: More integration with Windows features. |
# More integration with Windows features
Add commands to interact with standard Windows features. Metro Notifications, systray messages, Message boxes, and maybe even window actions. (close/minimize other windows)
Moved here from [uservoice](https://wpdev.uservoice.com/forums/266908-command-prompt-console-windows-subsystem-for-l/suggestions/6559018-more-integration-with-windows-features?tracking_code=67966467aa3e6163542338d28206225a) to @bitcrazed's request | Issue-Feature,Product-Cmd.exe,Product-Powershell,Area-UserInterface | low | Critical |
496,551,309 | pytorch | Make `torch.save` serialize a zip file | ## Why
Right now `torch.save` saves a series of 5 pickle binaries followed by raw tensor data that looks something like
```python
torch.save([torch.ones(2, 2), torch.randn(5, 5)], "my_value.pt")
```
`my_value.pt`
```
[ magic_number | version number | system metadata | actual pickle ([ tensor shape, type, etc.. 1, tensor shape, type, etc... 2]) | list of tensor storage keys | raw tensor data ]
```
This has the problem that tensors must be lazily loaded since, when reading the `actual pickle`, the location of each tensor in `raw tensor data` is not known (it depends on other tensors saved, the sizes of which are at random points in `actual pickle`).
TorchScript [serializes a zip folder](https://github.com/pytorch/pytorch/blob/master/torch/csrc/jit/docs/serialization.md) that looks like
```
model.pt
|-- model.json
|-- code/
|-- __torch__.py
|-- __torch__.py.debug_pkl
|-- foo/
|-- bar.py
|-- bar.py.debug_pkl
|-- data.pkl
|-- tensors/
|-- 0
|-- 1
```
Where tensors can be eagerly loaded since they are files in the zip archive.
## The Change
`torch.save` should be changed to save a zip file that looks like
```
my_value.pt
|-- metadata (version number, magic number, system metadata)
|-- data.pkl (`actual pickle` from earlier)
|-- tensors/
|-- 0
|-- 1
```
This lets us load tensors eagerly which would fix #24045 without the hacky #24794 and make #25109 much simpler.
This keeps all the functionality of the old format and makes `torch.save` compatible with the TorchScript format (i.e. `code` could be added to `my_value.pt` and the file could be loaded in the JIT). The old `torch.load` format can be kept backwards compatible by checking the first 2 bytes of the file (`0x8002` represent a pickle archive start).
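As a rough sketch (not the actual `torch.load` implementation), a loader could distinguish the two layouts up front by sniffing the leading bytes, since every zip archive starts with `PK\x03\x04`:
```python
def sniff_format(path):
    """Return which torch.save layout a file uses, judged by its leading bytes (sketch only)."""
    with open(path, 'rb') as f:
        prefix = f.read(4)
    if prefix[:2] == b'\x80\x02':   # pickle protocol 2 header -> legacy format
        return 'legacy'
    if prefix == b'PK\x03\x04':     # zip local-file-header magic -> zip format
        return 'zip'
    return 'unknown'

# The zip container itself is readable with the standard zipfile module, e.g.
# zipfile.ZipFile(path).read('data.pkl') or .read('tensors/0'),
# assuming the layout proposed above.
```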
The serialization code is much shorter as a result: #26573
cc @zdevito @suo @ezyang | module: serialization,triaged,enhancement | low | Critical |
496,620,171 | rust | Unhelpful error "one type is more general than the other" in async code | The following code does not compile:
```rust
// Crate versions:
// futures-preview v0.3.0-alpha.18
// runtime v0.3.0-alpha.7
use futures::{prelude::*, stream::FuturesUnordered};
#[runtime::main]
async fn main() {
let entries: Vec<_> = vec![()]
.into_iter()
.map(|x| async { vec![()].into_iter().map(|_| x) })
.collect::<FuturesUnordered<_>>()
.map(stream::iter)
.flatten()
.collect()
.await;
}
```
It produces this error message:
```
error[E0308]: mismatched types
--> src\main.rs:3:1
|
3 | #[runtime::main]
| ^^^^^^^^^^^^^^^^ one type is more general than the other
|
= note: expected type `std::ops::FnOnce<(std::iter::Map<std::vec::IntoIter<()>, [closure@src\main.rs:7:51: 7:56 x:&()]>,)>`
found type `std::ops::FnOnce<(std::iter::Map<std::vec::IntoIter<()>, [closure@src\main.rs:7:51: 7:56 x:&()]>,)>`
```
In this case, the code can be made to compile by adding `move` in two places on this line:
```rust
// ...
.map(|x| async move { vec![()].into_iter().map(move |_| x) })
// ...
```
Some possibly related issues: #57362, #60658
**rustc version:** `1.39.0-nightly (97e58c0d3 2019-09-20)` | C-enhancement,A-diagnostics,T-compiler,A-async-await,AsyncAwait-Triaged,D-confusing,D-newcomer-roadblock | medium | Critical |
496,623,630 | terminal | Feature Request: Show terminal size when resizing the window |
# Description of the new feature/enhancement
There was this cool feature in PuTTY, that when you resize the window, it also tells you the size of the window in COLUMNS and ROWS in a small tooltip. This would make it easier for users to set the terminal to their appropriate size.
I know there is a setting for the initial size, but sometimes you want to resize your terminal to an exact size and this feature would help.
# Proposed technical implementation details (optional)
Whenever the terminal window is resized by the user using either the mouse or other hotkeys, then a little tooltip will appear and show the current size (COLUMNS x ROWS) of the terminal.
| Help Wanted,Area-UserInterface,Product-Terminal,Issue-Task | medium | Critical |
496,637,507 | PowerToys | Command-Line Based interactive process viewer | # Summary of the new feature/enhancement
Most Windows users can't properly live in the terminal, since alternatives to the Linux tools have not been provided.
One of the most commonly used tools is `top` (and/or `htop`). What if we made a similar port of this for Windows? This would allow us to do tasks like killing a process or changing its priority value through a terminal-based program.
# Proposed Features
The program does everything that the task manager does:
- Killing Processes
- Shows system usage of CPU, Memory, etc
- Allows us to change the Scheduling Priority of an application
| Idea-New PowerToy | low | Minor |
496,662,153 | neovim | Composed characters don't work with `f` and `F` | <!-- Before reporting: search existing issues and check the FAQ. -->
- `nvim --version`: NVIM v0.3.8
Build type: Release
LuaJIT 2.0.5
Compiled by [email protected]
- `vim -u DEFAULTS` (version: 8.0) behaves differently? yes
- Operating system/version: macOS 10.14.6
- Terminal name/version: iTerm2 3.3.2
- `$TERM`: xterm-256color
### Steps to reproduce using `nvim -u NORC`
Go to insert mode, type in a sequence of characters that contains a character that your keyboard layout produces through composition (compose key or similar). Go back to normal mode and search for that character with either `f` or `F`. For example, input `Hallöchen` and then search for `ö` with `Fö`, but use composition for the `ö`.
I've only just found this on my Mac so far, so I'm not sure whether Linux's compose key causes the same bug. To reproduce with a Mac, set the keyboard layout to US and use ALT+U, O to type the `ö` in the example above.
### Actual behaviour
`f` and `F` don't find characters when using composition in the search.
### Expected behaviour
`f` and `F` find characters when using composition in the search. This is already the case with my version of vim.
| bug-vim,encoding,unicode π© | low | Critical |
496,670,331 | PowerToys | Add option to remove all title bars. (like i3) | # Summary of the new feature/enhancement
Make users able to remove title bars in the FancyZones options, like you would do in a window manager such as i3.
# Proposed technical implementation details (optional)
toggle option - no title bars.
| Idea-Enhancement,Product-Window Manager,Product-FancyZones | low | Major |
496,695,702 | godot | Autotile ignore separation | **Godot version:** 3.1.1
**OS/device including version:** windows 10
**Issue description:** When I set separation, it doesn't work.
**Steps to reproduce:** see the screenshot example below

| bug,topic:core,confirmed | low | Minor |
496,696,085 | youtube-dl | requesting new site mtv.fi | - [x ] I'm reporting a new site support request
- [x ] I've verified that I'm running youtube-dl version **2019.09.12.1**
- [x ] I've checked that all provided URLs are alive and playable in a browser
- [x ] I've checked that none of provided URLs violate any copyrights
- [x ] I've searched the bugtracker for similar site support requests including closed ones
- example1 video: https://www.mtv.fi/sarja/aaveiden-jaljilla-33011225
- example2 video: https://www.mtv.fi/sarja/-112-33011107005/jakso-10-sisa-suomen-poliisi-jahtaa-polkupyoravarasta-908881
- Playlist from example 2 (.mpd): https://mtvdashvodcloud-a.akamaized.net/video/won/ismvol1/2018-04-29/112(908881_ISMUSPWV).ism/112(908881_ISMUSPWV).mpd
Description:
Sometimes this site tries to force you to log in. Accounts are free; here is one I made that can be used:
username: [email protected]
Password: password
Some videos are only viewable in Finland. This is a Finnish Channel 3 website where you can watch their shows. I don't think this violates copyrights; if I'm wrong, I'm sorry. I looked through your supported sites and one was Ruutu, which is a similar site but for Channel 4.
First time using GitHub, hopefully this is okay :) | site-support-request | low | Critical |
496,701,194 | rust | Tracking issue for dual-proc-macros | Implemented in #58013
Cargo side is https://github.com/rust-lang/cargo/pull/6547
This feature causes Cargo to build a proc-macro crate for both the target and the host, and for rustc to load both.
I'm not sure if this is a perma-unstable feature for the compiler-only, or if there is any need to use it elsewhere. But a tracking issue may be helpful for anyone looking for more information.
| A-macros,T-compiler,B-unstable,C-tracking-issue,requires-nightly,A-proc-macros,S-tracking-design-concerns,S-tracking-perma-unstable | low | Minor |
496,719,159 | flutter | Prefetch image sizes when loading images from the network | ## Use case
The `FadeInImage` widget provides a convenient way to show a placeholder image while loading an image from the network.
When the size of the image that's being loaded is not known beforehand, though, the layout changes suddenly. Unpredictable layout changes should generally be avoided, as they disturb users in their current interactions (reading content, for example), or even worse, they may click the wrong button because its location suddenly changes.
This can already be improved by wrapping the `FadeInImage` with an `AnimatedSize` widget, but I believe that there is an even better solution.
## Proposal
For the most common image formats, metadata such as the size is stated in the first few bytes of a file.
If `FadeInImage` were to figure out the size of the downloading image as soon as the relevant bytes are available, the final layout could be determined much earlier.
For reference, look at this Medium article where someone implemented a similar system in Swift.
[Prefetching images size without downloading them [entirely] in Swift](https://medium.com/ios-os-x-development/prefetching-images-size-without-downloading-them-entirely-in-swift-5c2f8a6f82e9)
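As a rough illustration of how cheap this can be: for PNG, the width and height sit at fixed offsets in the IHDR chunk right after the 8-byte signature, so only the first ~24 bytes are needed. This is a sketch only (no validation), and other formats such as JPEG need real parsing, which is what the linked article does:
```dart
import 'dart:typed_data';
import 'dart:ui';

/// Reads the intrinsic size of a PNG from its first 24 bytes:
/// 8-byte signature, 4-byte chunk length, 4-byte "IHDR", then
/// big-endian width and height at byte offsets 16 and 20.
/// Sketch only: no validation, and other formats need real parsing.
Size pngSizeFromHeader(Uint8List bytes) {
  final data = ByteData.view(bytes.buffer, bytes.offsetInBytes);
  return Size(
    data.getUint32(16).toDouble(),
    data.getUint32(20).toDouble(),
  );
}
```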
| c: new feature,framework,a: images,P3,team-framework,triaged-framework | low | Minor |
496,719,159 | thefuck | Suggest correct filename for editing |
The output of `thefuck --version`: `The Fuck 3.29 using Python 3.6.8 and ZSH 5.4.2`
Your system (Debian 7, ArchLinux, Windows, etc.): `Ubuntu 18.04.3 LTS`
How to reproduce the bug:
```
$ ls
bot.php
$ vim bot
<vim session with new file>
:q!
$ fuck
No fucks given
$
```
The output of The Fuck with `THEFUCK_DEBUG=true` exported (typically execute `export THEFUCK_DEBUG=true` in your shell before The Fuck):
```
fuck
DEBUG: Run with settings: {'alter_history': True,
'debug': True,
'env': {'GIT_TRACE': '1', 'LANG': 'C', 'LC_ALL': 'C'},
'exclude_rules': [],
'history_limit': None,
'instant_mode': False,
'no_colors': False,
'num_close_matches': 3,
'priority': {},
'repeat': False,
'require_confirmation': True,
'rules': [<const: All rules enabled>],
'slow_commands': ['lein', 'react-native', 'gradle', './gradlew', 'vagrant'],
'user_dir': PosixPath('/root/.config/thefuck'),
'wait_command': 3,
'wait_slow_command': 15}
DEBUG: Execution timed out!
DEBUG: Call: vim bot; with env: {'LC_TERMINAL_VERSION': '3.3.5beta1', 'LANG': 'C', 'LC_TERMINAL': 'iTerm2', 'LC_CTYPE': 'en_US.UTF-8', 'USER': 'root', 'LOGNAME': 'root', 'HOME': '/root', 'PATH': '/root/.autojump/bin:/root/github/node-v10.15.3-linux-x64/bin:/root/bin:/home/linuxbrew/.linuxbrew/bin:/root/.autojump/bin:/root/.autojump/bin:/root/scripts:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games', 'MAIL': '/var/mail/root', 'SHELL': '/usr/bin/zsh', 'SSH_CLIENT': '24.5.18.26 40624 22', 'SSH_CONNECTION': '24.5.18.26 40624 172.26.45.171 22', 'SSH_TTY': '/dev/pts/0', 'TERM': 'xterm-256color', 'XDG_SESSION_ID': '204', 'XDG_RUNTIME_DIR': '/run/user/0', 'SHLVL': '1', 'PWD': '/home/she/http/swo.re/includes', 'OLDPWD': '/root', 'AUTOJUMP_SOURCED': '1', 'AUTOJUMP_ERROR_PATH': '/root/.local/share/autojump/errors.log', 'PAGER': 'less', 'LESS': '-R', 'LSCOLORS': 'Gxfxcxdxbxegedabagacad', 'LS_COLORS': 'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:', 'SSH_AUTH_SOCK': '/tmp/ssh-OtM7w1NqCnQm/agent.11268', 'SSH_AGENT_PID': '11270', 'EDITOR': 'vim', 'GIT_EDITOR': 'vim', 'GITHUB_USER': 'marv3lls', 'GITHUB_PASSWORD': 'XzAbJjJsdhm9prBzQegA', 'LESSOPEN': '| /usr/bin/lesspipe %s', 'LESSCLOSE': '/usr/bin/lesspipe %s %s', 'LE_WORKING_DIR': '/root/.acme.sh', 'CERBOT_IP_LOGGING': 'true', 'THEFUCK_DEBUG': 'true', 'TF_SHELL': 'zsh', 'TF_ALIAS': 'fuck', 'TF_SHELL_ALIASES': '-=\'cd -\'\n...=../..\n....=../../..\n.....=../../../..\n......=../../../../..\n1=\'cd -\'\n2=\'cd -2\'\n3=\'cd -3\'\n4=\'cd -4\'\n5=\'cd -5\'\n6=\'cd -6\'\n7=\'cd -7\'\n8=\'cd -8\'\n9=\'cd -9\'\n_=sudo\na=\'echo >>\'\na:=\'cat >>\'\nacme.sh=/root/.acme.sh/acme.sh\nafind=\'ack -il\'\nalert=\'notify-send --urgency=low -i "$([ $? 
= 0 ] && echo terminal || echo error)" "$(history|tail -n1|sed -e \'\\\'\'s/^\\s*[0-9]\\+\\s*//;s/[;&|]\\s*alert$//\'\\\'\')"\'\nat=atom\naudio=\'youtube-dl -xo \'\\\'\'%(title)s.%(ext)s\'\\\'\' -f \'\\\'\'bestvideo[ext=mp4]+bestaudio[ext=m4a]/m4a\'\\\'\nc=cat\nccat=colorize_via_pygmentize\ncless=colorize_via_pygmentize_less\nd=\'"$WGET"\'\nd64=decode64\ne64=encode64\negrep=\'egrep --color=auto\'\nf=\'"$GREP" -Rli\'\nf.=\'find . | "$GREP"\'\nf:=find\nfgrep=\'fgrep --color=auto\'\nfn=\'"$GREP" -Rlvi\'\ng=git\nga=\'git add\'\ngaa=\'git add --all\'\ngap=\'git apply\'\ngapa=\'git add --patch\'\ngau=\'git add --update\'\ngav=\'git add --verbose\'\ngb=\'git branch\'\ngbD=\'git branch -D\'\ngba=\'git branch -a\'\ngbd=\'git branch -d\'\ngbda=\'git branch --no-color --merged | command grep -vE "^(\\+|\\*|\\s*(master|develop|dev)\\s*$)" | command xargs -n 1 git branch -d\'\ngbl=\'git blame -b -w\'\ngbnm=\'git branch --no-merged\'\ngbr=\'git branch --remote\'\ngbs=\'git bisect\'\ngbsb=\'git bisect bad\'\ngbsg=\'git bisect good\'\ngbsr=\'git bisect reset\'\ngbss=\'git bisect start\'\ngc=\'git commit -v\'\n\'gc!\'=\'git commit -v --amend\'\ngca=\'git commit -v -a\'\n\'gca!\'=\'git commit -v -a --amend\'\ngcam=\'git commit -a -m\'\n\'gcan!\'=\'git commit -v -a --no-edit --amend\'\n\'gcans!\'=\'git commit -v -a -s --no-edit --amend\'\ngcb=\'git checkout -b\'\ngcd=\'git checkout develop\'\ngcf=\'git config --list\'\ngcl=\'git clone --recurse-submodules\'\ngclean=\'git clean -id\'\ngcm=\'git checkout master\'\ngcmsg=\'git commit -m\'\n\'gcn!\'=\'git commit -v --no-edit --amend\'\ngco=\'git checkout\'\ngcount=\'git shortlog -sn\'\ngcp=\'git cherry-pick\'\ngcpa=\'git cherry-pick --abort\'\ngcpc=\'git cherry-pick --continue\'\ngcs=\'git commit -S\'\ngcsm=\'git commit -s -m\'\ngd=\'git diff\'\ngdca=\'git diff --cached\'\ngdct=\'git describe --tags $(git rev-list --tags --max-count=1)\'\ngdcw=\'git diff --cached --word-diff\'\ngds=\'git diff --staged\'\ngdt=\'git diff-tree --no-commit-id --name-only -r\'\ngdw=\'git diff --word-diff\'\ngemb=\'gem build *.gemspec\'\ngemp=\'gem push *.gem\'\ngf=\'git fetch\'\ngfa=\'git fetch --all --prune\'\ngfg=\'git ls-files | grep\'\ngfo=\'git fetch origin\'\ngg=\'git gui citool\'\ngga=\'git gui citool --amend\'\nggpull=\'git pull origin "$(git_current_branch)"\'\nggpur=ggu\nggpush=\'git push origin "$(git_current_branch)"\'\nggsup=\'git branch --set-upstream-to=origin/$(git_current_branch)\'\nghh=\'git help\'\ngignore=\'git update-index --assume-unchanged\'\ngignored=\'git ls-files -v | grep "^[[:lower:]]"\'\ngit-svn-dcommit-push=\'git svn dcommit && git push github master:svntrunk\'\ngk=\'\\gitk --all --branches\'\ngke=\'\\gitk --all $(git log -g --pretty=%h)\'\ngl=\'git pull\'\nglg=\'git log --stat\'\nglgg=\'git log --graph\'\nglgga=\'git log --graph --decorate --all\'\nglgm=\'git log --graph --max-count=10\'\nglgp=\'git log --stat -p\'\nglo=\'git log --oneline --decorate\'\ngloburl=\'noglob urlglobber \'\nglod=\'git log --graph --pretty=\'\\\'\'%Cred%h%Creset -%C(auto)%d%Creset %s %Cgreen(%ad) %C(bold blue)<%an>%Creset\'\\\'\nglods=\'git log --graph --pretty=\'\\\'\'%Cred%h%Creset -%C(auto)%d%Creset %s %Cgreen(%ad) %C(bold blue)<%an>%Creset\'\\\'\' --date=short\'\nglog=\'git log --oneline --decorate --graph\'\ngloga=\'git log --oneline --decorate --graph --all\'\nglol=\'git log --graph --pretty=\'\\\'\'%Cred%h%Creset -%C(auto)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset\'\\\'\nglola=\'git log --graph --pretty=\'\\\'\'%Cred%h%Creset -%C(auto)%d%Creset %s %Cgreen(%cr) 
%C(bold blue)<%an>%Creset\'\\\'\' --all\'\nglols=\'git log --graph --pretty=\'\\\'\'%Cred%h%Creset -%C(auto)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset\'\\\'\' --stat\'\nglp=_git_log_prettily\nglum=\'git pull upstream master\'\ngm=\'git merge\'\ngma=\'git merge --abort\'\ngmom=\'git merge origin/master\'\ngmt=\'git mergetool --no-prompt\'\ngmtvim=\'git mergetool --no-prompt --tool=vimdiff\'\ngmum=\'git merge upstream/master\'\ngp=\'git push\'\ngpd=\'git push --dry-run\'\ngpf=\'git push --force-with-lease\'\n\'gpf!\'=\'git push --force\'\ngpoat=\'git push origin --all && git push origin --tags\'\ngpristine=\'git reset --hard && git clean -dfx\'\ngpsup=\'git push --set-upstream origin $(git_current_branch)\'\ngpu=\'git push upstream\'\ngpv=\'git push -v\'\ngr=\'git remote\'\ngra=\'git remote add\'\ngrb=\'git rebase\'\ngrba=\'git rebase --abort\'\ngrbc=\'git rebase --continue\'\ngrbd=\'git rebase develop\'\ngrbi=\'git rebase -i\'\ngrbm=\'git rebase master\'\ngrbs=\'git rebase --skip\'\ngrep=\'grep --color=auto\'\ngrev=\'git revert\'\ngrh=\'git reset\'\ngrhh=\'git reset --hard\'\ngrm=\'git rm\'\ngrmc=\'git rm --cached\'\ngrmv=\'git remote rename\'\ngroh=\'git reset origin/$(git_current_branch) --hard\'\ngrrm=\'git remote remove\'\ngrs=\'git restore\'\ngrset=\'git remote set-url\'\ngrss=\'git restore --source\'\ngrt=\'cd "$(git rev-parse --show-toplevel || echo .)"\'\ngru=\'git reset --\'\ngrup=\'git remote update\'\ngrv=\'git remote -v\'\ngsb=\'git status -sb\'\ngsd=\'git svn dcommit\'\ngsh=\'git show\'\ngsi=\'git submodule init\'\ngsps=\'git show --pretty=short --show-signature\'\ngsr=\'git svn rebase\'\ngss=\'git status -s\'\ngst=\'git status\'\ngsta=\'git stash push\'\ngstaa=\'git stash apply\'\ngstall=\'git stash --all\'\ngstc=\'git stash clear\'\ngstd=\'git stash drop\'\ngstl=\'git stash list\'\ngstp=\'git stash pop\'\ngsts=\'git stash show --text\'\ngsu=\'git submodule update\'\ngsw=\'git switch\'\ngswc=\'git switch -c\'\ngtl=\'gtl(){ git tag --sort=-v:refname -n -l ${1}* }; noglob gtl\'\ngts=\'git tag -s\'\ngtv=\'git tag | sort -V\'\ngunignore=\'git update-index --no-assume-unchanged\'\ngunwip=\'git log -n 1 | grep -q -c "\\-\\-wip\\-\\-" && git reset HEAD~1\'\ngup=\'git pull --rebase\'\ngupa=\'git pull --rebase --autostash\'\ngupav=\'git pull --rebase --autostash -v\'\ngupv=\'git pull --rebase -v\'\ngwch=\'git whatchanged -p --abbrev-commit --pretty=medium\'\ngwip=\'git add -A; git rm $(git ls-files --deleted) 2> /dev/null; git commit --no-verify --no-gpg-sign -m "--wip-- [skip ci]"\'\nh=history\nhistory=omz_history\nhsi=\'hs -i\'\nimgcat=/root/.iterm2/imgcat\nimgls=/root/.iterm2/imgls\nit2attention=/root/.iterm2/it2attention\nit2check=/root/.iterm2/it2check\nit2copy=/root/.iterm2/it2copy\nit2dl=/root/.iterm2/it2dl\nit2getvar=/root/.iterm2/it2getvar\nit2setcolor=/root/.iterm2/it2setcolor\nit2setkeylabel=/root/.iterm2/it2setkeylabel\nit2ul=/root/.iterm2/it2ul\nit2universion=/root/.iterm2/it2universion\nl=\'ls -CF\'\nla=\'ls -A\'\nll=\'ls -alF\'\nls=\'ls --color=auto\'\nlsa=\'ls -lah\'\nm=man\nmd=\'mkdir -p\'\nn=\'"$GREP" -Rvi\'\np=\'"$PAGER"\'\npls=ls\nrd=rmdir\nrsync-copy=\'rsync -avz --progress -h\'\nrsync-move=\'rsync -avz --progress -h --remove-source-files\'\nrsync-synchronize=\'rsync -avzu --delete --progress -h\'\nrsync-update=\'rsync -avzu --progress -h\'\ns=\'"$ROOT"\'\nsa=\'"$ROOT" echo >>\'\nsa:=\'"$ROOT" cat >>\'\nsc=\'"$ROOT" cat\'\nsd=\'"$ROOT" "$WGET"\'\nsf=\'"$ROOT" "$GREP" -Rli\'\nsf.=\'"$ROOT" find . 
| "$GREP"\'\nsf:=\'"$ROOT" find\'\nsfn=\'"$ROOT" "$GREP" -Rlvi\'\nsm=\'"$ROOT" man\'\nsn=\'"$ROOT" "$GREP" -Riv\'\nsp=\'"$ROOT" "$PAGER"\'\nsw=\'"$ROOT" echo >\'\nsw:=\'"$ROOT" cat >\'\nsx=\'"$ROOT" xargs\'\nsxa=\'"$ROOT" xargs echo >>\'\nsxa:=\'"$ROOT" xargs cat >>\'\nsxc=\'"$ROOT" xargs cat\'\nsxd=\'"$ROOT" xargs "$WGET"\'\nsxf=\'"$ROOT" xargs "$GREP" -li\'\nsxf.=\'"$ROOT" xargs find | "$GREP"\'\nsxf:=\'"$ROOT" xargs find\'\nsxfn=\'"$ROOT" xargs "$GREP" -lvi\'\nsxm=\'"$ROOT" xargs man\'\nsxn=\'"$ROOT" xargs "$GREP" -Riv\'\nsxp=\'"$ROOT" xargs "$PAGER"\'\nsxu=\'"$ROOT" xargs "$CURL"\'\nsxw=\'"$ROOT" xargs echo >\'\nsxw:=\'"$ROOT" xargs cat >\'\nsxy=\'"$ROOT" xargs "$GREP" -Ri\'\nsy=\'"$ROOT" "$GREP" -Ri\'\nu=\'"$CURL"\'\nvs=/var/www/she/swo.re/vidl.sh\nw=\'bash -c w\'\nw:=\'cat >\'\nwhich-command=whence\nwitch=which\nx=xargs\nxa=\'xargs echo >>\'\nxa:=\'xargs >>\'\nxc=\'xargs cat\'\nxd=\'xargs "$WGET"\'\nxf=\'xargs "$GREP" -Rli\'\nxf.=\'xargs find | "$GREP"\'\nxf:=\'xargs find\'\nxfn=\'xargs "$GREP" -Rlvi\'\nxm=\'xargs man\'\nxn=\'xargs "$GREP" -Riv\'\nxp=\'xargs "$PAGER"\'\nxu=\'xargs "$CURL"\'\nxw=\'xargs echo >\'\nxw:=\'xargs cat >\'\nxy=\'xargs "$GREP" -Ri\'\ny=\'"$GREP" -Ri\'\nyoutube-dl=\'youtube-dl -o \'\\\'\'%(title)s.%(ext)s\'\\\'\' -f \'\\\'\'bestvideo[ext=mp4]+bestaudio[ext=m4a]/m4a\'\\\'\nyt=youtube-dl', 'TF_HISTORY': 'fuck\nscript ~/script.txt\nfuck\nvim ~/.cust.zsh ; . ~/.cust.zsh \nfuck\nexit\nmv ~/script.txt ../vid/\nexport THEFUCK_DEBUG=true\neval $(thefuck --alias)\nvim bot', 'PYTHONIOENCODING': 'utf-8', '_': '/usr/local/bin/thefuck', 'LC_ALL': 'C', 'GIT_TRACE': '1'}; is slow: took: 0:00:03.029428
DEBUG: Importing rule: adb_unknown_command; took: 0:00:00.000320
DEBUG: Importing rule: ag_literal; took: 0:00:00.000530
DEBUG: Importing rule: apt_get; took: 0:00:00.001031
DEBUG: Importing rule: apt_get_search; took: 0:00:00.000360
DEBUG: Importing rule: apt_invalid_operation; took: 0:00:00.000929
DEBUG: Importing rule: apt_list_upgradable; took: 0:00:00.000520
DEBUG: Importing rule: apt_upgrade; took: 0:00:00.000636
DEBUG: Importing rule: aws_cli; took: 0:00:00.000406
DEBUG: Importing rule: az_cli; took: 0:00:00.000350
DEBUG: Importing rule: brew_cask_dependency; took: 0:00:00.000689
DEBUG: Importing rule: brew_install; took: 0:00:00.000145
DEBUG: Importing rule: brew_link; took: 0:00:00.000347
DEBUG: Importing rule: brew_reinstall; took: 0:00:00.000691
DEBUG: Importing rule: brew_uninstall; took: 0:00:00.000351
DEBUG: Importing rule: brew_unknown_command; took: 0:00:00.000167
DEBUG: Importing rule: brew_update_formula; took: 0:00:00.000395
DEBUG: Importing rule: brew_upgrade; took: 0:00:00.000130
DEBUG: Importing rule: cargo; took: 0:00:00.000114
DEBUG: Importing rule: cargo_no_command; took: 0:00:00.000347
DEBUG: Importing rule: cat_dir; took: 0:00:00.000359
DEBUG: Importing rule: cd_correction; took: 0:00:00.001320
DEBUG: Importing rule: cd_mkdir; took: 0:00:00.000519
DEBUG: Importing rule: cd_parent; took: 0:00:00.000127
DEBUG: Importing rule: chmod_x; took: 0:00:00.000146
DEBUG: Importing rule: composer_not_command; took: 0:00:00.000419
DEBUG: Importing rule: cp_omitting_directory; took: 0:00:00.000512
DEBUG: Importing rule: cpp11; took: 0:00:00.000341
DEBUG: Importing rule: dirty_untar; took: 0:00:00.001543
DEBUG: Importing rule: dirty_unzip; took: 0:00:00.001257
DEBUG: Importing rule: django_south_ghost; took: 0:00:00.000135
DEBUG: Importing rule: django_south_merge; took: 0:00:00.000117
DEBUG: Importing rule: dnf_no_such_command; took: 0:00:00.001287
DEBUG: Importing rule: docker_login; took: 0:00:00.000356
DEBUG: Importing rule: docker_not_command; took: 0:00:00.000739
DEBUG: Importing rule: dry; took: 0:00:00.000163
DEBUG: Importing rule: fab_command_not_found; took: 0:00:00.000511
DEBUG: Importing rule: fix_alt_space; took: 0:00:00.000347
DEBUG: Importing rule: fix_file; took: 0:00:00.003155
DEBUG: Importing rule: gem_unknown_command; took: 0:00:00.000566
DEBUG: Importing rule: git_add; took: 0:00:00.000611
DEBUG: Importing rule: git_add_force; took: 0:00:00.000344
DEBUG: Importing rule: git_bisect_usage; took: 0:00:00.000344
DEBUG: Importing rule: git_branch_delete; took: 0:00:00.000346
DEBUG: Importing rule: git_branch_exists; took: 0:00:00.000439
DEBUG: Importing rule: git_branch_list; took: 0:00:00.000364
DEBUG: Importing rule: git_checkout; took: 0:00:00.000360
DEBUG: Importing rule: git_commit_amend; took: 0:00:00.000335
DEBUG: Importing rule: git_commit_reset; took: 0:00:00.000395
DEBUG: Importing rule: git_diff_no_index; took: 0:00:00.000348
DEBUG: Importing rule: git_diff_staged; took: 0:00:00.000334
DEBUG: Importing rule: git_fix_stash; took: 0:00:00.000343
DEBUG: Importing rule: git_flag_after_filename; took: 0:00:00.000350
DEBUG: Importing rule: git_help_aliased; took: 0:00:00.000334
DEBUG: Importing rule: git_merge; took: 0:00:00.000378
DEBUG: Importing rule: git_merge_unrelated; took: 0:00:00.000354
DEBUG: Importing rule: git_not_command; took: 0:00:00.000357
DEBUG: Importing rule: git_pull; took: 0:00:00.000348
DEBUG: Importing rule: git_pull_clone; took: 0:00:00.000336
DEBUG: Importing rule: git_pull_uncommitted_changes; took: 0:00:00.000343
DEBUG: Importing rule: git_push; took: 0:00:00.000349
DEBUG: Importing rule: git_push_different_branch_names; took: 0:00:00.000335
DEBUG: Importing rule: git_push_force; took: 0:00:00.000347
DEBUG: Importing rule: git_push_pull; took: 0:00:00.000382
DEBUG: Importing rule: git_push_without_commits; took: 0:00:00.000388
DEBUG: Importing rule: git_rebase_merge_dir; took: 0:00:00.000347
DEBUG: Importing rule: git_rebase_no_changes; took: 0:00:00.000255
DEBUG: Importing rule: git_remote_delete; took: 0:00:00.000338
DEBUG: Importing rule: git_remote_seturl_add; took: 0:00:00.000251
DEBUG: Importing rule: git_rm_local_modifications; took: 0:00:00.000376
DEBUG: Importing rule: git_rm_recursive; took: 0:00:00.000337
DEBUG: Importing rule: git_rm_staged; took: 0:00:00.000392
DEBUG: Importing rule: git_stash; took: 0:00:00.000365
DEBUG: Importing rule: git_stash_pop; took: 0:00:00.000352
DEBUG: Importing rule: git_tag_force; took: 0:00:00.000357
DEBUG: Importing rule: git_two_dashes; took: 0:00:00.000341
DEBUG: Importing rule: go_run; took: 0:00:00.000345
DEBUG: Importing rule: gradle_no_task; took: 0:00:00.000615
DEBUG: Importing rule: gradle_wrapper; took: 0:00:00.000348
DEBUG: Importing rule: grep_arguments_order; took: 0:00:00.000353
DEBUG: Importing rule: grep_recursive; took: 0:00:00.000386
DEBUG: Importing rule: grunt_task_not_found; took: 0:00:00.000585
DEBUG: Importing rule: gulp_not_task; took: 0:00:00.000381
DEBUG: Importing rule: has_exists_script; took: 0:00:00.000346
DEBUG: Importing rule: heroku_multiple_apps; took: 0:00:00.000363
DEBUG: Importing rule: heroku_not_command; took: 0:00:00.000374
DEBUG: Importing rule: history; took: 0:00:00.000161
DEBUG: Importing rule: hostscli; took: 0:00:00.000510
DEBUG: Importing rule: ifconfig_device_not_found; took: 0:00:00.001046
DEBUG: Importing rule: java; took: 0:00:00.000342
DEBUG: Importing rule: javac; took: 0:00:00.000342
DEBUG: Importing rule: lein_not_task; took: 0:00:00.000511
DEBUG: Importing rule: ln_no_hard_link; took: 0:00:00.000338
DEBUG: Importing rule: ln_s_order; took: 0:00:00.000344
DEBUG: Importing rule: long_form_help; took: 0:00:00.000146
DEBUG: Importing rule: ls_all; took: 0:00:00.000337
DEBUG: Importing rule: ls_lah; took: 0:00:00.000408
DEBUG: Importing rule: man; took: 0:00:00.000373
DEBUG: Importing rule: man_no_space; took: 0:00:00.000137
DEBUG: Importing rule: mercurial; took: 0:00:00.000371
DEBUG: Importing rule: missing_space_before_subcommand; took: 0:00:00.000141
DEBUG: Importing rule: mkdir_p; took: 0:00:00.000437
DEBUG: Importing rule: mvn_no_command; took: 0:00:00.000362
DEBUG: Importing rule: mvn_unknown_lifecycle_phase; took: 0:00:00.000358
DEBUG: Importing rule: no_command; took: 0:00:00.000343
DEBUG: Importing rule: no_such_file; took: 0:00:00.000139
DEBUG: Importing rule: npm_missing_script; took: 0:00:00.000717
DEBUG: Importing rule: npm_run_script; took: 0:00:00.000382
DEBUG: Importing rule: npm_wrong_command; took: 0:00:00.000618
DEBUG: Importing rule: open; took: 0:00:00.000446
DEBUG: Importing rule: pacman; took: 0:00:00.000651
DEBUG: Importing rule: pacman_not_found; took: 0:00:00.000143
DEBUG: Importing rule: path_from_history; took: 0:00:00.000155
DEBUG: Importing rule: php_s; took: 0:00:00.000411
DEBUG: Importing rule: pip_install; took: 0:00:00.000444
DEBUG: Importing rule: pip_unknown_command; took: 0:00:00.000441
DEBUG: Importing rule: port_already_in_use; took: 0:00:00.000282
DEBUG: Importing rule: prove_recursively; took: 0:00:00.000361
DEBUG: Importing rule: pyenv_no_such_command; took: 0:00:00.000705
DEBUG: Importing rule: python_command; took: 0:00:00.000336
DEBUG: Importing rule: python_execute; took: 0:00:00.000338
DEBUG: Importing rule: quotation_marks; took: 0:00:00.000152
DEBUG: Importing rule: react_native_command_unrecognized; took: 0:00:00.000446
DEBUG: Importing rule: remove_trailing_cedilla; took: 0:00:00.000131
DEBUG: Importing rule: rm_dir; took: 0:00:00.000340
DEBUG: Importing rule: rm_root; took: 0:00:00.000388
DEBUG: Importing rule: scm_correction; took: 0:00:00.000465
DEBUG: Importing rule: sed_unterminated_s; took: 0:00:00.000342
DEBUG: Importing rule: sl_ls; took: 0:00:00.000137
DEBUG: Importing rule: ssh_known_hosts; took: 0:00:00.000358
DEBUG: Importing rule: sudo; took: 0:00:00.000143
DEBUG: Importing rule: sudo_command_from_user_path; took: 0:00:00.000383
DEBUG: Importing rule: switch_lang; took: 0:00:00.000189
DEBUG: Importing rule: systemctl; took: 0:00:00.000538
DEBUG: Importing rule: test.py; took: 0:00:00.000127
DEBUG: Importing rule: tmux; took: 0:00:00.000348
DEBUG: Importing rule: touch; took: 0:00:00.000356
DEBUG: Importing rule: tsuru_login; took: 0:00:00.000336
DEBUG: Importing rule: tsuru_not_command; took: 0:00:00.000340
DEBUG: Importing rule: unknown_command; took: 0:00:00.000130
DEBUG: Importing rule: unsudo; took: 0:00:00.000118
DEBUG: Importing rule: vagrant_up; took: 0:00:00.000375
DEBUG: Importing rule: whois; took: 0:00:00.000540
DEBUG: Importing rule: workon_doesnt_exists; took: 0:00:00.000448
DEBUG: Importing rule: yarn_alias; took: 0:00:00.000335
DEBUG: Importing rule: yarn_command_not_found; took: 0:00:00.000782
DEBUG: Importing rule: yarn_command_replaced; took: 0:00:00.000463
DEBUG: Importing rule: yarn_help; took: 0:00:00.000350
DEBUG: Trying rule: dirty_unzip; took: 0:00:00.000100
No fucks given
DEBUG: Total took: 0:00:03.131388
```
If the bug only appears with a specific application, the output of that application and its version:
Not relevant
Anything else you think is relevant:
- Expected result:
```
$ ls
bot.php
$ vim bot
<vim session with new file>
:q!
$ fuck
vim bot.php [enter/β/β/ctrl+c]
<goes into vim with relevant filename>
```
- Also, with other editors? nano, emacs, &c.
<!-- It's only with enough information that we can do something to fix the problem. -->
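For what it's worth, a custom rule along these lines looks like it could produce the expected suggestion (just a sketch against the documented rule interface; the file name and matching logic are hypothetical, not an existing thefuck rule):
```python
# Hypothetical custom rule, e.g. ~/.config/thefuck/rules/vim_missing_extension.py
import os

requires_output = False  # `vim bot` exits silently, so don't require any output

def match(command):
    # Trigger when an editor is invoked on a file that doesn't exist
    parts = command.script_parts
    return (len(parts) == 2
            and parts[0] in ('vim', 'nano', 'emacs')
            and not os.path.exists(parts[1]))

def get_new_command(command):
    editor, name = command.script_parts
    # Suggest existing files in the current directory that start with the typed name
    return ['{} {}'.format(editor, candidate)
            for candidate in sorted(os.listdir('.'))
            if candidate.startswith(name + '.')]
```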
| enhancement,help wanted,hacktoberfest | low | Critical |
496,749,794 | storybook | width/maxWidth prop on Story/Preview component | It would be awesome to have a width or maxWidth prop on the Story components so that we can avoid writing decorators to restrict the width of 100%-width components. | addon: docs | low | Major |
496,752,660 | rust | Wrong inferred type when an additional constraint is added | In the function `f` I have to specify `bool` explicitly, while `f` differs from `g` only in its constraints:
```rust
use rand::Rng;
use rand::distributions::{Distribution, Standard};
fn f<S, R>(rng: &mut R) -> bool
where
R: Rng,
Standard: Distribution<S>,
{
rng.gen::<bool>()
}
fn g<R>(rng: &mut R) -> bool
where
R: Rng
{
rng.gen()
}
```
[Playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=f3bfa6c5cea44b4239677476b7e03cf4)
Not sure if it's an actual bug, but it looks like type inference should work the same way in both cases. | A-trait-system,T-compiler,A-inference,C-bug | low | Critical |
496,758,542 | rust | Misleading error message, privacy error reported as type error | ```rust
mod foo {
pub struct Foo {
private: i32,
}
}
use foo::Foo;
fn main() {
let mut foo = Foo { private: 0_usize };
}
```
([Playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=db546b8ab3361a008f7cd421658e15aa))
Errors:
```
Compiling playground v0.0.1 (/playground)
error[E0308]: mismatched types
--> src/main.rs:11:34
|
11 | let mut foo = Foo { private: 0_usize } ;
| ^^^^^^^ expected i32, found usize
error: aborting due to previous error
For more information about this error, try `rustc --explain E0308`.
error: Could not compile `playground`.
To learn more, run the command again with --verbose.
```
The error message here is incorrect. Whether the types match or not is irrelevant, because the field is private. The error should say something like this instead ([playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=c2a74939542ed8ce8535b779a2688c1c)):
```
error[E0451]: field `private` of struct `foo::Foo` is private
--> src/main.rs:10:25
|
10 | let mut foo = Foo { private: 0_i32 };
| ^^^^^^^^^^^^^^ field `private` is private
error: aborting due to previous error
``` | A-diagnostics,A-visibility,T-compiler,C-bug | low | Critical |
496,777,715 | go | runtime: maybe allgs should shrink after peak load | ### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.12.4 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Y
### What operating system and processor architecture are you using (`go env`)?
any
### What did you do?
When serving a peak load, the system creates a lot of goroutines, and afterwards the leftover goroutine structures cause extra CPU consumption.
This can be reproduced by:
```go
package main
import (
"log"
"net/http"
_ "net/http/pprof"
"time"
)
func sayhello(wr http.ResponseWriter, r *http.Request) {}
func main() {
for i := 0; i < 1000000; i++ {
go func() {
time.Sleep(time.Second * 10)
}()
}
http.HandleFunc("/", sayhello)
err := http.ListenAndServe(":9090", nil)
if err != nil {
log.Fatal("ListenAndServe:", err)
}
}
```
After 10 seconds, the in-use objects still remain the same.

### What did you expect to see?
The global list of goroutine structures (`allgs`) shrinks to a proper size.
### What did you see instead?
Many in-use objects created by `malg`. | NeedsInvestigation,compiler/runtime | medium | Major |
496,785,988 | rust | Linker error because `-Wl,--as-needed` conflicts with `-C link-dead-code` | When a project has the `[lib]` section in its `Cargo.toml`, the `-Wl,--as-needed` flag is added to the project's linker flags. However, if a project uses `-C link-dead-code`, the two flags conflict, causing an `undefined reference` linker error for any functions in the lib.
This was discovered because link-dead-code is used for code coverage in tarpaulin. I spent some time seeing if there was a way around it as a user, but I didn't manage to solve it myself. The issue that led me to this, for reference: https://github.com/xd009642/tarpaulin/issues/126
Any solutions I could currently implement would be appreciated; if this requires a PR I'm also happy to contribute (though I may need some mentorship). | A-linkage,T-compiler,C-bug,link-dead-code | low | Critical |
496,796,554 | godot | Import plugin does not support extensions with multiple periods | **Godot version:**
3.1
**OS/device including version:**
Windows
**Issue description:**
I am writing an import plugin for Yarn Spinner dialogue, which uses a `.yarn.txt` extension. The import plugin must be registered to just `.txt` in order to work, but it then causes all sorts of issues because it imports other `.txt` files in the filesystem.
It would be great if:
1. You could specify `.yarn.txt`
2. OR you could tell the import plugin to skip a file if it's not the right format. I would still register `.txt`, but could then just return an error that tells Godot to skip importing the file. Let me know if I should create a separate issue for this. It would also be useful for other import plugins (like the Tiled importer) that handle `.json` but end up causing a lot of headaches when they try to import other JSON files in the project. | bug,topic:plugin,topic:import | low | Critical |
496,818,313 | godot | Arrays/Dictionaries allow modifying contents during iteration | **Godot version:**
2e065d8
**Issue description:**
While debugging my game, I bumped into this piece of code:
```gdscript
for id in shared_objects:
    if not is_instance_valid(shared_objects[id]):
        shared_objects.erase(id)
```
Godot doesn't see any problem here, even though doing this will totally break iteration and skip some elements. IMO, modifying a collection during iteration should raise an error, because it is never intended behavior and yields unexpected results. | bug,topic:gdscript,confirmed | low | Critical |
496,824,620 | rust | No error for private type in public async fn | [Playground](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2018&gist=4cfeed4e56df13be164fe7e46608e39a)
```rust
mod foo {
struct Bar;
pub async fn bar() -> Bar {
Bar
}
}
```
This compiles just fine, despite the async fn returning a private type. There is a warning, but that's about it.
```
warning: private type `foo::Bar` in public interface (error E0446)
--> src/lib.rs:4:5
|
4 | / pub async fn bar() -> Bar {
5 | | Bar
6 | | }
| |_____^
|
= note: `#[warn(private_in_public)]` on by default
= warning: this was previously accepted by the compiler but is being phased out; it will become a hard error in a future release!
= note: for more information, see issue #34537 <https://github.com/rust-lang/rust/issues/34537>
```
I don't think this warning-only behavior was intended for async fns; it seems like this case slipped through for async fn.
| A-visibility,T-compiler,C-bug,A-async-await,AsyncAwait-Triaged | low | Critical |
496,826,462 | vue | Infinite loop in vue-template-compiler | ### Version
2.6.10
### Reproduction link
[https://github.com/oguimbal/vuebug](https://github.com/oguimbal/vuebug)
### Steps to reproduce
```
git clone [email protected]:oguimbal/vuebug.git
npm i
npm start
```
Wait a couple of seconds, and your compilation process will be frozen.
If you attach a debugger to the node process, you will see the infinite loop in the `generateCodeFrame()` method of vue-template-compiler:

### What is expected?
I would expect the compiler not to freeze
### What is actually happening?
The compiler is freezing
<!-- generated by vue-issues. DO NOT REMOVE --> | bug,contribution welcome,has PR | medium | Critical |
496,852,944 | rust | Type parameters of associated consts have unexpected lifetime constraints | This simple trait fails to compile:
```rust
pub trait Foo<T> {
const FOO: &'static fn(T);
}
```
```
error[E0310]: the parameter type `T` may not live long enough
--> src/lib.rs:3:5
|
2 | pub trait Foo<T> {
| - help: consider adding an explicit lifetime bound `T: 'static`...
3 | const FOO: &'static fn(T);
| ^^^^^^^^^^^^^^^^^^^^^^^^^^
|
note: ...so that the reference type `&'static fn(T)` does not outlive the data it points at
--> src/lib.rs:3:5
|
3 | const FOO: &'static fn(T);
| ^^^^^^^^^^^^^^^^^^^^^^^^^^
error: aborting due to previous error
For more information about this error, try `rustc --explain E0310`.
```
However, `T`'s lifetime obviously has nothing to do with the lifetime of a pointer to a function that includes `T`. | A-lifetimes,A-borrow-checker,T-lang | low | Critical |
496,853,316 | TypeScript | Provide a `this` type for `get`/`set` methods in `Reflect.defineProperty` options |
**Code**
```ts
var A: {a?: number, b: number} = {b:2}
Reflect.defineProperty(A, 'a', {
// set(this: any, value: any) { // to fix
set(value: any) {
console.log(this.b); // Property 'b' does not exist on type 'PropertyDescriptor'.
},
enumerable: true,
configurable: true
});
A.a = 1;
// expect the context of setter is A
```
**Expected behavior:**
I expect the context (`this`) of the setter to be `A`.
**Actual behavior:**
Property 'b' does not exist on type 'PropertyDescriptor'.
**Playground Link:** <!-- A link to a TypeScript Playground "Share" link which demonstrates this behavior -->
https://www.typescriptlang.org/play/index.html?strictPropertyInitialization=false&alwaysStrict=false&experimentalDecorators=true&emitDecoratorMetadata=true#code/LAKAbghgTgBAggLhgbwgfiQOwK4FsBGAplADQz5Z5FQC+MAvChQEw2gBKhAZgDaEDGAFwB0AE24BLTIQAKUAPYAHYoICeACjhkA5BG1lkoGDADOhQesg9shJBEyqAlCiPGY-eZhPy+wnvIBzdUEACwkTYXxHAG4YAHo4mDklFVUYbXxtGFF5QhMYTHlBGEIAD3Dizxg1ZXTk5Sg1ABE8-igJRUF5KG1hVxoSV0IcXGIIfD4kQSgbQZBjD0wuCQDsKHHJ6pnCUBoY0DhhCAYYAEZo0ASS0uUhapDCd09BMsquU3MX2HD4IA
| Suggestion,Experience Enhancement | low | Minor |
496,854,973 | godot | Disabled modules shouldn't call pkg-config if they are set to be non-builtin | **Godot version:** Git https://github.com/godotengine/godot/commit/72d87cfbce137b8012e86f678c27f0f19a9771cf
**OS/device including version:** Fedora 30
**Issue description:** Someone reported on IRC that non-builtin dependency checking is done even if the module that requires the dependency is disabled. This causes a build failure if the dependency in question isn't installed on the system.
For instance, `builtin_libtheora=no` will always call pkg-config, even if `module_theora_enabled=no` is also passed to the SCons command line:
https://github.com/godotengine/godot/blob/72d87cfbce137b8012e86f678c27f0f19a9771cf/platform/x11/detect.py#L241-L247
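Something along these lines in the platform detect script would skip the probe (only a sketch, not a tested patch; it assumes the `module_theora_enabled` option is already readable from the environment at this point):
```python
# Hypothetical guard in platform/x11/detect.py: only probe the system
# libtheora via pkg-config when the theora module is actually being built.
if not env["builtin_libtheora"] and env.get("module_theora_enabled", True):
    env.ParseConfig("pkg-config theora theoradec --cflags --libs")
```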
**Steps to reproduce:** Build Godot with `scons platform=x11 builtin_libtheora=no module_theora_enabled=no`. | bug,platform:linuxbsd,topic:buildsystem | low | Critical |
496,855,258 | TypeScript | Documentation for --declaration and -d incorrect, missing --dry | When I read the [Compiler Options docs](https://www.typescriptlang.org/docs/handbook/compiler-options.html) ([source permalink](https://github.com/microsoft/TypeScript-Handbook/blame/ee314075d47dbab49a6b631cc45a38d6c555b324/pages/Compiler%20Options.md#L16)), I see:
> Option | Type | Default | Description
> --- | --- | --- | ---
> `--declaration`<br/> `-d` | `boolean` | `false` | Generates corresponding `.d.ts` file.
When I try to specify `--declaration` as a compiler option on the command line, it gives an error. When I try to specify `-d` as a compiler option on the command line, it seems to activate an [undocumented `--dry` option](https://github.com/microsoft/TypeScript/blob/5d09688c1ef3eaf4e48fc570cc3e233e78222f9c/src/compiler/commandLineParser.ts#L920).
I think `tsc` needs to be fixed to follow the behavior specified in the documentation or the documentation needs to be updated to describe the `--dry` compiler option.
Furthermore, by reading the source, I can see that there seem to be config-only compiler options and CLI-only compiler options. The documentation probably needs to be updated to describe how those work/interact and describe when they are different. When a compiler option is config-only, it should never be documented as `--optionName` because a config-only option will never ever be typed as `--optionName`.
**TypeScript Version:** 3.7.0-dev.20190922
**Search Terms:**
* is:issue is:open declaration dry
**Code**
```
$ ./node_modules/.bin/tsc -b --declaration
```
```
$ ./node_modules/.bin/tsc -b -d
```
**Expected behavior:**
Build with `.d.ts` files emitted.
**Actual behavior:**
```
ohnobinki@gibby /tmp/.private/ohnobinki/typescript-next-play $ ./node_modules/.bin/tsc -b --declaration
error TS5072: Unknown build option '--declaration'.
```
```
ohnobinki@gibby /tmp/.private/ohnobinki/typescript-next-play $ ./node_modules/.bin/tsc -b -d
[00:31:11 GMT+0000 (UTC)] A non-dry build would build project '/tmp/.private/ohnobinki/typescript-next-play/tsconfig.json'
```
**Playground Link:** n/a
**Related Issues:**
| Docs | low | Critical |
496,905,694 | opencv | OpenCV DNN throws exception with DeepLab MobileNet v2 frozen graph | <!--
If you have a question rather than reporting a bug please go to http://answers.opencv.org where you get much faster responses.
If you need further assistance please read [How To Contribute](https://github.com/opencv/opencv/wiki/How_to_contribute).
Please:
* Read the documentation to test with the latest developer build.
* Check if other person has already created the same issue to avoid duplicates. You can comment on it if there already is an issue.
* Try to be as detailed as possible in your report.
* Report only one problem per created issue.
This is a template helping you to create an issue which can be processed as quickly as possible. This is the bug reporting section for the OpenCV library.
-->
##### System information (version)
<!-- Example
- OpenCV => 3.1
- Operating System / Platform => Windows 64 Bit
- Compiler => Visual Studio 2015
-->
- OpenCV => 4.1.1(python wrapper)
- Operating System / Platform => Ubuntu 18.04 LTS
- tensorflow => 1.14.0
##### Detailed description
<!-- your description -->
I fine-tuned the mobilenetv2 backbone version of deeplabv3, and export the frozen graph provided by [export_model.py](https://github.com/tensorflow/models/blob/master/research/deeplab/export_model.py) with no error.
I can use the frozen graph for inference in tensorflow, but throws an error when call `cv2.dnn.readNet('frozen_graph.pb', 'inference_graph.pbtxt')`:
>**error: OpenCV(4.1.1) /io/opencv/modules/dnn/src/tensorflow/tf_importer.cpp:663: error: (-215:Assertion failed) const_layers.insert(std::make_pair(name, li)).second in function 'addConstNodes'**
##### Steps to reproduce
I can reproduce this error with the [deeplab official mobilenetv2 checkpoint](http://download.tensorflow.org/models/deeplabv3_mnv2_pascal_train_aug_2018_01_29.tar.gz)
1. Download the checkpoints and extract.
2. export using(in path/to/models/research/):
```
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
python deeplab/export_model.py \
--logtostderr \
--checkpoint_path=path/to/checkpoint/model.ckpt-30000 \
--export_path=path/to/frozen_graph/frozen_inference_graph.pb \
--model_variant="mobilenet_v2" \
--num_classes=21 \
--crop_size=513 \
--crop_size=513 \
--inference_scales=1.0 \
--save_inference_graph=True
```
3. readNet() using generated '.pb' and '.pbtxt' file
I have read many related issues, but they do not solve my problem.
I have also tested the [xception_65](http://download.tensorflow.org/models/deeplabv3_pascal_train_aug_2018_01_04.tar.gz) checkpoint, which also throws an error.
<!-- to add code example fence it with triple backticks and optional file extension
-->
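For convenience, the failing call as a self-contained snippet (the file names are placeholders for the exported frozen graph and the `.pbtxt` saved alongside it):
```python
import cv2

# Minimal reproduction sketch; the exception is raised inside readNet(),
# during graph import, before any forward() call.
net = cv2.dnn.readNet("frozen_inference_graph.pb", "frozen_inference_graph.pbtxt")
```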
| category: dnn | low | Critical |
496,977,365 | youtube-dl | Hide cmd window on Windows 10 | When youtube_dl starts to convert a video file via ffmpeg, it shows a cmd window.
Does anyone know how to hide this cmd window?
After googling, I read this: https://trac.ffmpeg.org/wiki/DirectShow#Runningffmpeg.exewithoutopeningaconsolewindow
1. I tried to rename `*.py` to `*.pyw`.
2. I made an exe with pyinstaller with the `--noconsole` flag.
The console window still appears. I have no idea how to disable the console when ffmpeg starts.
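For reference, this is the underlying Windows mechanism that would have to be applied wherever ffmpeg is spawned (a sketch of the general technique only, not something youtube-dl exposes today):
```python
import subprocess
import sys

# Sketch only: youtube_dl spawns ffmpeg itself, so this is not a drop-in fix.
# On Windows, a console child process can be started without its own window
# via the CREATE_NO_WINDOW creation flag (Python 3.7+).
flags = subprocess.CREATE_NO_WINDOW if sys.platform == "win32" else 0
subprocess.run(["ffmpeg", "-version"], creationflags=flags,
               stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
```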
I also found that this question was closed twice before without any help. | question | low | Minor |
497,035,023 | angular | Animation doesn't work properly in nested component inside ng-content when both parent and child components use animations | # 🐞 bug report
### Affected Package
@angular/animations
### Is this a regression?
No.
It worked differently in versions < 5.0.0, but there were other problems with this case.
### Description
A parent component has a ng-content which is located inside element with ngIf and animation.
parent: `<div *ngIf="parentVisible" @animation><ng-content></ng-content></div>`
Nested child component has additional ngIf with animation.
child: `<div *ngIf="childVisible" @animation>Child content</div>`
Usage: `<parent><child></child></parent>`
When parent component hides and shows back ng-content, the child's content inside ngIf element stays hidden.
This problem does not exist when animation is not used in parent or child component.
## π¬ Minimal Reproduction
https://stackblitz.com/edit/angular-bugqv5
## π Your Environment
**Angular Version:**
<pre><code>
8.2.7
</code></pre>
**Anything else relevant?**
Tested in Chrome 77.0.3865.90 (64-bit), Firefox 69.0.1 (64-bit) | type: bug/fix,area: animations,freq3: high,state: needs more investigation,P3 | medium | Critical |
497,059,217 | opencv | GTK buildable error in OpenCV | I got these errors when I run OpenCV:
Starting /home/pi/build-QcOpenCv-Desktop-Debug/QcOpenCv...
qt5ct: using qt5ct plugin
(QcOpenCv:2806): GLib-GObject-WARNING **: cannot register existing type 'GtkWidget'
(QcOpenCv:2806): GLib-GObject-CRITICAL **: g_type_add_interface_static: assertion 'G_TYPE_IS_INSTANTIATABLE (instance_type)' failed
(QcOpenCv:2806): GLib-GObject-WARNING **: cannot register existing type 'GtkBuildable'
(QcOpenCv:2806): GLib-GObject-CRITICAL **: g_type_interface_add_prerequisite: assertion 'G_TYPE_IS_INTERFACE (interface_type)' failed
(QcOpenCv:2806): GLib-CRITICAL **: g_once_init_leave: assertion 'result != 0' failed
(QcOpenCv:2806): GLib-GObject-CRITICAL **: g_type_add_interface_static: assertion 'G_TYPE_IS_INSTANTIATABLE (instance_type)' failed
(QcOpenCv:2806): GLib-GObject-CRITICAL **: g_type_register_static: assertion 'parent_type > 0' failed
| priority: low,incomplete,needs investigation | low | Critical |
497,097,099 | go | encoding/json: json.Number accepts quoted values by default | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.13 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/viacheslav.poturaev/Library/Caches/go-build"
GOENV="/Users/viacheslav.poturaev/Library/Application Support/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GONOPROXY="github.com/hellofresh"
GONOSUMDB="github.com/hellofresh"
GOOS="darwin"
GOPATH="/Users/viacheslav.poturaev/go"
GOPRIVATE="github.com/hellofresh"
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/local/Cellar/go/1.13/libexec"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/local/Cellar/go/1.13/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
AR="ar"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/h7/7qg3nbt91bb6mgk2xtqqpwc40000gp/T/go-build850653985=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>
### What did you do?
<!--
If possible, provide a recipe for reproducing the error.
A complete runnable program is good.
A link on play.golang.org is best.
-->
I've decoded a JSON value of `"123"` into `json.Number`.
https://play.golang.org/p/9Nwjn3rBFxI
### What did you expect to see?
I expected to see an error.
### What did you see instead?
I saw a successful result and an int64 value of `123` accessible via `.Int64()`.
I could not find any documentation that `json.Number` accepts quoted string values by default and was expecting the opposite (as per "being precise" motivation expressed in https://github.com/golang/go/issues/22463#issuecomment-340567609).
Not sure if this is a documentation or implementation issue. | NeedsDecision | medium | Critical |
497,111,590 | youtube-dl | Change container of opus audio | <!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2019.09.12.1. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Search the bugtracker for similar feature requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Finally, put x into all relevant boxes (like this [x])
-->
- [x ] I'm reporting a feature request
- [ x] I've verified that I'm running youtube-dl version **2019.09.12.1**
- [x ] I've searched the bugtracker for similar feature requests including closed ones
## Description
<!--
Provide an explanation of your issue in an arbitrary form. Please make sure the description is worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. Provide any additional information, suggested solution and as much context and examples as possible.
-->
Hi, with yt-dl I can download the audio from any video, which is great. The thing is, YouTube stores Opus audio in the WebM container.
Is there any function to change the container within the yt-dl process? I tried:
`youtube-dl.exe -f "bestaudio[ext=webm]" --batch-file "youtube-dl.txt" --merge-output-format ogg -v`
but it does nothing; it'll still give you the WebM container file.
I believe it's not implemented or something; is there any known workaround?
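For context, the remux itself is just a stream copy; this is the kind of post-processing step I'm after, run manually after the download (the file name is a placeholder, and it assumes ffmpeg is on PATH):
```python
import subprocess

# Copy the Opus stream unchanged from the WebM container into an Ogg container.
subprocess.run(["ffmpeg", "-i", "audio.webm", "-c:a", "copy", "audio.ogg"],
               check=True)
```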
thanks
| request | medium | Critical |