id | repo | title | body | labels | priority | severity |
---|---|---|---|---|---|---|
462,353,416 | terminal | Add support for a separate color / image for padding area background | # Summary of the new feature/enhancement
I'd like to be able to specify a separate background color, image, etc. for the area in the margin (padding) in the terminal window.
This uses a background image and padding value, but fails when resized to anything but the golden value of both.

In straight-up XAML, I'd do this by having a window-wide background color/image, and then specifying something different for the area inside the padding.
| Help Wanted,Area-TerminalControl,Area-Settings,Product-Terminal,Issue-Task | low | Major |
462,371,788 | pytorch | scatter_ supporting different reduction modes | ## 🚀 Feature
Currently, we have `scatter_` and `scatter_add_`.
However, in use-cases such as Graph Neural Networks, it's common to want to do reductions other than add.
As a case in point, @rusty1s has written https://github.com/rusty1s/pytorch_scatter which has `add`, `sub`, `mean`, `max`, `min`, `std`, `mul`, `div`.
The kernels are there, and they are minor variations of the existing scatter kernel, [see here](https://github.com/rusty1s/pytorch_scatter/blob/master/cuda/scatter_kernel.cu).
We should provide this in PyTorch natively, ideally as a `reduce='add'` keyword argument on `scatter_`, or, if one wants it to be more explicit, as a separate `scatter_reduce_(..., mode='mean')`.
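As a rough sketch of what is being asked for: today a `mean` reduction has to be emulated with `scatter_add_` plus a count tensor (the snippet below uses only existing PyTorch APIs):
```python
import torch

src = torch.tensor([1.0, 2.0, 3.0, 4.0])
index = torch.tensor([0, 0, 1, 1])

out = torch.zeros(2)
count = torch.zeros(2)
out.scatter_add_(0, index, src)                      # per-bucket sums
count.scatter_add_(0, index, torch.ones_like(src))   # per-bucket counts
mean = out / count.clamp(min=1)                      # tensor([1.5000, 3.5000])
```
A native `reduce='mean'` argument would collapse this into a single call, and reductions such as `max` or `min` cannot be emulated this way at all.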
## Motivation
Graph Neural Nets are becoming pretty common.
cc @ezyang @gchanan @zou3519 @bdhirsh @jbschlosser @anjali411 @bhosmer @smessmer @ljk53 @ailzhang @nikitaved @pearu @cpuhrsch @IvanYashchuk | high priority,module: sparse,module: internals,triaged,enhancement,module: scatter & gather ops | medium | Minor |
462,374,215 | TypeScript | code example is not correct in Intersection Types section |
**Code**
```ts
function extend<First, Second>(first: First, second: Second): First & Second {
    const result: Partial<First & Second> = {};
    for (const prop in first) {
        if (first.hasOwnProperty(prop)) {
            (result as First)[prop] = first[prop];
        }
    }
    for (const prop in second) {
        if (second.hasOwnProperty(prop)) {
            (result as Second)[prop] = second[prop];
        }
    }
    return result as First & Second;
}

class Person {
    constructor(public name: string) { }
}

interface Loggable {
    log(name: string): void;
}

class ConsoleLogger implements Loggable {
    log(name) {
        console.log(`Hello, I'm ${name}.`);
    }
}

const jim = extend(new Person('Jim'), ConsoleLogger.prototype);
jim.log(jim.name);
```
Class methods are non-enumerable, so the second for...in loop will not add the `log` method to the result object. | Docs | low | Minor |
462,407,720 | godot | 2D Texture Bleed when game is scaled |
**Godot version:**
3.1.1.stable.official
**OS/device including version:**
Linux Mint 19.1
**Issue description:**
When drawing a rotated sprite that is part of a sprite sheet (using the region) it will bleed colours in two directions.
The colors will bleed in the down and right direction relative to the sprite, so if it is rotated 90 degrees clockwise it will bleed down and left.
**Steps to reproduce:**
Set up a sprite with a region_rect and rotate it. Set scaling to viewport and scale it by a few factors.
**Minimal reproduction project:**
[RotatedTextureBleed.zip](https://github.com/godotengine/godot/files/3342950/RotatedTextureBleed.zip)
If it is hard to see the orange pixels please try zooming in. Screenshots taken at 640x640 but game resolution was 32x32.

 | bug,topic:rendering | low | Minor |
462,441,384 | godot | Impossible to select multiple entries in orphan resource explorer at once | **Issue description:**
I can't select more than one entry in the orphan resource explorer.
It is very frustrating when I am working on a big project.

| enhancement,topic:editor,usability | low | Minor |
462,449,197 | rust | compiler_fence may emit machine code | As discovered [there](https://github.com/rust-embedded/wg/issues/361#issuecomment-505146286) `compiler_fence` produces `atomic_fence(ordering, SingleThread)` construction, which in turn can produce a non-empty code sequence. In fact, LLVM backends for AVR, PowerPC, RISC-V and Spark do not treat SingleThread fence as something special. At the same time Rust [docs](https://doc.rust-lang.org/std/sync/atomic/fn.compiler_fence.html) tell that "compiler_fence does not emit any machine code".
Seems like Rust misuses this "SingleThread means CompilerBarrier" semantics, but I could be wrong. | A-LLVM,A-codegen,T-compiler,A-docs,C-bug,I-heavy,O-riscv,A-atomic | low | Major |
462,452,470 | godot | [Bullet] Bullet physics: scaled collision shapes return incorrect normals | **Godot version:**
Godot 3.1.1 and Godot 3.2.dev.b4aba3ae7
**OS/device including version:**
Windows 10 64bit, Nvidia 660 TI
**Issue description:**
Bullet physics doesn't seem to take the scale of the collision shape into account when returning the normal vector from a Raycast, though the position returned is accurate. GodotPhysics does not have this issue.
While working on my painting system, I discovered that angled surfaces, like ramps, were not returning the correct normal vectors when using a raycast.
After many hours of debugging, I found that for a 45 degree surface, the normal returned by the raycast was off by roughly 4 degrees. For other angles the offset is smaller, but still there. The raycast would not return accurate normal vectors for angled surfaces; it was always off by at least a couple of degrees. For my paint system, the decal-like shader I am using requires fairly precise angles, so this issue made it nearly impossible to paint on angled surfaces.
I should note that the issue is only present in angled collision meshes. A rotated cube, for example, does not have this issue.
After lots of investigating, trying to write code to compensate, and more, I discovered the source of the issue: the MeshInstance nodes I was using for the ramp were scaled. For whatever reason, the scale of the MeshInstance node was messing up the normal vector returned by the raycast, even though *everything* else physics-related, including the returned raycast position, works fine.
On a whim, I decided to see if the issue is present in both Bullet and GodotPhysics. Surprisingly, GodotPhysics does not have this issue, it returns the correct normal vector regardless of the scale, which is what I would expect and my project works like normal.
This leads me to believe the issue is somewhere in Bullet. I would expect the normal vector returned by the raycast to account for the scale of the collision shape.
I can attempt to help fix this issue, but I know little about Bullet/Game-Physics, so I have no idea where to look or what in the code base could be causing the issue.
**Steps to reproduce:**
For the sample project, try the following with *both* Bullet physics and Godot physics:
* Run the project and face one of the colored ramps.
* Hold the left mouse button with the center of the screen over a ramp.
* Observe the angle of the light blue quad.
* With Bullet physics, the light blue quad will show the normal returned as expected with the green colored ramps, however the purple colored ramps will return an incorrect normal, with the angle several degrees off.
* With Godot physics, the light blue quad will show the normal returned is as expected regardless of the color of the ramp in use.
**Minimal reproduction project:**
[Godot_Scaled_Physics_Shape_Example.zip](https://github.com/godotengine/godot/files/3343393/Godot_Scaled_Physics_Shape_Example.zip)
| topic:physics | low | Critical |
462,453,633 | pytorch | Storage operation failing on second GPU | ## 🐛 Bug
Certain operations on the storage of CUDA tensors cause a crash when using the non-default GPU device.
## To Reproduce
1. Find a multi-GPU machine.
2. Run a Python interpreter with `CUDA_LAUNCH_BLOCKING=1 python` (the bug will occur without setting the environment variable too, but it will just hang instead of erroring).
3. Run the following:
```python
>>> import torch
>>> torch.empty(1, device='cuda:0').storage().fill_(1)
1.0
[torch.cuda.FloatStorage of size 1]
>>> torch.cuda.set_device(1)
>>> torch.empty(1, device='cuda:0').storage().fill_(1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: parallel_for failed: an illegal memory access was encountered
```
## Expected behavior
All storage operations should work regardless of which CUDA device is the default.
## Environment
```
PyTorch version: 1.1.0
Is debug build: No
CUDA used to build PyTorch: 10.0.130
OS: Ubuntu 18.04.2 LTS
GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
CMake version: Could not collect
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.1.105
GPU models and configuration:
GPU 0: GeForce GTX TITAN X
GPU 1: GeForce GTX TITAN X
Nvidia driver version: 418.39
cuDNN version: Could not collect
Versions of relevant libraries:
[pip] numpy==1.16.3
[pip] torch==1.1.0
[pip] torchvision==0.2.2
[conda] blas 1.0 mkl
[conda] mkl 2019.3 199
[conda] mkl_fft 1.0.10 py37ha843d7b_0
[conda] mkl_random 1.0.2 py37hd81dba3_0
[conda] pytorch 1.1.0 py3.7_cuda10.0.130_cudnn7.5.1_0 pytorch
[conda] torchvision 0.2.2 py_3 pytorch
``` | module: cuda,triaged | low | Critical |
462,460,181 | rust | TokenStream::to_string doesn't respect Joint in all cases | Example:
```rust
#[proc_macro]
pub fn m(input: proc_macro::TokenStream) -> proc_macro::TokenStream {
    let s = input.to_string();
    proc_macro::TokenStream::from(proc_macro::TokenTree::Literal(proc_macro::Literal::string(&s)))
}

fn main() {
    dbg!(m!(<=>));
}
```
Output: `m!(<= >) = "<= >"`; Expected: `m!(<= >) = "<=>"`, as the TokenStream does contain the correct joint-ness:
```rust
TokenStream [
    Punct { ch: '<', spacing: Joint, .. },
    Punct { ch: '=', spacing: Joint, .. },
    Punct { ch: '>', spacing: Alone, .. },
]
``` | A-macros,T-compiler,C-bug | low | Minor |
462,463,544 | TypeScript | setTimeout defined in lib.dom.d.ts is not type safe |
**TypeScript Version:** 3.5.2
**Search Terms:**
settimeout type safe
**Code**
```ts
// A *self-contained* demonstration of the problem follows...
// Test this by running `tsc` on the command-line, rather than through another build tool such as Gulp, Webpack, etc.
const f = (foo: number) => console.log(foo + 1);
setTimeout(f, 0, 'a');
```
**Expected behavior:**
It should not compile, thanks to the type error.
**Actual behavior:**
It passes.
**Playground Link:**
https://www.typescriptlang.org/play/#code/MYewdgzgLgBAZjAvDAFHEIBcMwFcC2ARgKYBOAlEgHwyiQgA2xAdAyAOZoYwDUMAjOQDcAKAjEoAFQCW+YiFxQ0AGhgAGVQHIAhoULBgAE0PFNWwpuFA
**Related Issues:**
emm, maybe not. | Bug,Domain: lib.d.ts | low | Critical |
462,519,580 | TypeScript | int32 type returned by bitwise operators | ## Search Terms
bitwise operators, integer type, int32
## Suggestion
Add an `int32` subtype for `number` returned by TypeScript from bitwise operators.
```ts
function coerce(n: number): int32 {
return n | 0;
}
```
JavaScript bitwise operators [coerce arguments to int32](https://tc39.es/ecma262/#sec-binary-bitwise-operators-runtime-semantics-evaluation) and the return type of those operators will always be an int32.
This would not be a breaking change since `int32` would be a subtype of `number`. Only bitwise operations would change to return `int32`s.
**EDIT:** I now recommend calling the type `BitwiseInt32` so that users won't see the type as a generic integer type.
## Use Cases
Helps applications which are trying to take advantage of JavaScript implementations' `int32` optimizations. An application could ask for an `int32` parameter to force coercion with `n | 0`.
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback | low | Major |
462,538,353 | neovim | TermClose autocmd can free `curbuf` | - `nvim --version`: NVIM v0.4.0-1191-g740fb337d
- `vim -u DEFAULTS` (version: 8.1) behaves differently? It doesn't have a `TermClose` event
- Operating system/version: Fedora Linux 30
- Terminal name/version: gnome-terminal
- `$TERM`: xterm-256color
### Steps to reproduce using `nvim -u NORC`
```
nvim -u NORC
:autocmd TermClose * bd!
:term
:bd!
```
### Actual behaviour
Memory is corrupted; nvim crashes immediately or soon after
### Expected behaviour
Does something reasonable
### Notes
A few minutes in gdb suggests the problem comes down to `terminal_close()` running autocmds and reentering `do_bufdel`, which is presumably not safe to reenter. | terminal,bug-crash | medium | Critical |
462,554,672 | terminal | Unit test framework for terminal emulation | # Summary of the new feature/enhancement
Currently, terminal emulation for unix is extremely incomplete.
- The parser fails in many circumstances and can't cope with what would be considered backwards references in a regular expression engine. It is missing support for many different modes of operation...
- The terminal escape sequence interpreter is minimal. It's at approximately the same state that my terminal emulator [VtNetCore.UWP](https://github.com/darrenstarr/VtNetCore.UWP) was at on the second or third day of development. It has barely the minimal implementation required to run a few sample cases. Things like midnight commander barely function at all (though I'm REALLY impressed by how much does function). vttest considers Microsoft Terminal more or less unusable... look at [basic terminal comparison on windows](https://medium.com/@ITGuyGoneBad/vttest-on-different-terminals-4235d4d7aee6)
- Unit tests for the parser and the command sequence interpreter are minimal at best, incorrect in some cases, and very coarse grained.
I don't believe that the current state of the terminal emulator's unit testing framework is suitable for permitting clear pull requests or filing clear issues with test cases.
Anyone who creates a terminal needs a proper testing framework to test it for correctness, and it would be impossible for any of us to make a terminal work without one.
# Proposed technical implementation details (optional)
I recommend writing a unit test framework, as part of the CI pipeline, that supports describing and writing unit tests in the following fashion:
- Name of test
- Input data
- Compliance level (minimum viable product/ECMA-48/xterm/academic)
- Expected cursor state?
- Expected window characteristics?
- Expected window text and attributes?
- Query text characteristics (character width, character height, etc...)?
- Query window as pixels?
- Query window as text?
- Query current buffer?
- Query mouse pointer?
- Query mouse position?
- Query mouse colors?
- Query protected areas?
- Query current character set?
- Query state of sequence parser?
- Query state of sequence buffer?
I recommend supporting a "function as a service" implementation that would allow describing the test as a JSON file and a Javascript or Typescript file for the test itself.
I intend to work towards this goal with my VtNetCore engine as well. I have about 200 unit tests at this time and have planned another 1000+ for the additional features I currently support. If we do this together, we can standardize a virtual terminal test suite.
If you look at VtNetCore, you can see how to start a legitimate test suite (in this case using XUnit) for compliance. I highly recommend also looking at [libvterm](https://github.com/neovim/libvterm), they have done an excellent job of setting an example for all of us. But they have some very serious limitations that can't be solved by writing such a simple query language. We should use something more robust.
If Microsoft is going to make a real attempt at a real terminal emulator and they're going to claim compliance, you absolutely must provide a better means for providing tests. At this time, the test suite (see OutputEngineTest) is just too complex and it's not possible for people like me, or like Thomas Dickey or anyone else to provide a meaningful bug report. | Issue-Feature,Area-VT,Product-Terminal | low | Critical |
462,615,553 | go | x/net/websocket: add runnable versions of examples mentioned in Server.Handshake documentation | At https://godoc.org/golang.org/x/net/websocket there is no example of redefining your own check-origin callback, or even of disabling the existing one.
| Documentation,help wanted,NeedsFix | low | Minor |
462,643,605 | youtube-dl | Mikan School |
## Checklist
- [x] I'm reporting a new site support request
- [x] I've verified that I'm running youtube-dl version **2019.06.27**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that none of provided URLs violate any copyrights
- [x] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
- Single video: https://www.mikanschool.org/
- Playlist: https://www.mikanschool.org/class_syllabus_hls_play.php?class=CLS594a18b2baf0e
## Description
The videos are hosted on the Amazon AWS cloud. The playlist has one free video, but when I try to download it, youtube-dl fails with an "unsupported URL" error; the single video on the homepage does download, however.
| site-support-request | low | Critical |
462,655,911 | TypeScript | `strictPropertyInitialization` should allow private initialization helpers |
## Search Terms
- strictPropertyInitialization
- class initialization helper
## Suggestion
It's a common pattern to delegate class initialization to helper methods to reduce the size of the constructor. This makes it more maintainable. However, when using `--strictNullChecks` and `--strictPropertyInitialization`, TypeScript complains because it's unable to infer the initialization since it's in a helper function. The argument in https://github.com/microsoft/TypeScript/issues/21132 is that it's because of inheritance, but I think it should work if the helper method is marked as `private` (or a private class field `#`), since a subclass would not be able to access it.
Not only is it annoying to have to do `private foo!: string;`, but it's also hard to know to do that, so most just end up either removing the helper methods or [using `// @ts-ignore`](https://github.com/sindresorhus/p-queue/pull/52/commits/0c6382390dec345dd56a70705d34be74e817138f#diff-21e9ddd93162651bd36f6e5bbfca8460R43). It's also TS's goal to support common JS patterns as well as possible.
## Use Cases
Explained above.
## Examples
```ts
class A {
    private foo: string; // <== TypeScript complains

    constructor() {
        this.initStuff();
    }

    private initStuff() {
        this.foo = 'foo';
    }
}
```
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,In Discussion | low | Critical |
462,684,074 | java-design-patterns | Kappa Architecture | **Description:**
The Kappa Architecture is a data processing architecture that provides a simplified approach to handling both real-time and batch data processing. Unlike the Lambda Architecture, which requires separate paths for batch and real-time processing, the Kappa Architecture uses a single stream processing engine for both real-time and historical data processing.
Main elements of the Kappa Design Pattern include (a small sketch follows the list):
- **Single Data Pipeline:** A unified data processing stream that handles both real-time and historical data.
- **Stream Processing Engine:** A core component that processes incoming data in real-time.
- **Immutable Data Store:** All data is stored in an immutable, append-only log, ensuring data integrity and enabling easy reprocessing.
- **Reprocessing Capability:** The ability to reprocess historical data by replaying the log.
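A minimal, illustrative sketch of how these elements fit together (the class and method names below are invented for this sketch; a production system would use a durable log such as Kafka and a real stream processing engine):
```python
class ImmutableLog:
    """Append-only event log; events are never mutated or deleted."""
    def __init__(self):
        self._events = []

    def append(self, event):
        self._events.append(event)

    def replay(self):
        # Reprocessing means replaying the full log through the same pipeline.
        return iter(self._events)


class StreamProcessor:
    """Single pipeline used for both real-time and historical (replayed) data."""
    def __init__(self):
        self.view = {}  # materialized view built from the stream

    def process(self, event):
        self.view[event["key"]] = self.view.get(event["key"], 0) + event["value"]


log = ImmutableLog()
live = StreamProcessor()
for event in [{"key": "a", "value": 1}, {"key": "a", "value": 2}]:
    log.append(event)        # real-time path: append to the immutable log...
    live.process(event)      # ...and process the event immediately

rebuilt = StreamProcessor()  # reprocessing path: rebuild a view by replaying the log
for event in log.replay():
    rebuilt.process(event)

assert rebuilt.view == live.view
```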
**References:**
1. [Kappa Architecture by Milinda Pathirage](http://milinda.pathirage.org/kappa-architecture.com/)
2. [Kappa Architecture GitHub Repository](https://github.com/milinda/kappa-architecture.com)
3. [Java Design Patterns Contribution Guidelines](https://github.com/iluwatar/java-design-patterns/wiki)
**Acceptance Criteria:**
1. Implement a single data pipeline that processes both real-time and historical data.
2. Integrate a stream processing engine to handle incoming data in real-time.
3. Ensure that the data store is immutable and supports reprocessing of historical data by replaying the log. | info: help wanted,epic: pattern,type: feature | medium | Major |
462,706,769 | node | Restart frame is broken |
* **Version**: 11.15.0, 12.5.0
* **Platform**: Linux, macOS, didn't check on Windows
* **Subsystem**: Inspector
**Restart frame** action doesn't work since node 11:

| inspector | low | Critical |
462,734,093 | opencv | [ JavaScript ] [ Feature Request ] xfeatures2d.SIFT_create and similar Feature Detection alg. | Or maybe I am just missing something: what's the point of the algorithm implementation in JS without any further possibility to work with the results?
| incomplete,category: javascript (js) | low | Minor |
462,739,896 | go | cmd/go: GOPROXY default can make 'get -u' lag upstream repository |
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version devel +623d653db7 Sat Jun 29 13:17:15 2019 +0000 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
No, because Go 1.12 doesn't have a GOPROXY set up by default.
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GO111MODULE="on"
GOARCH="amd64"
GOBIN="/home/mvdan/go/bin"
GOCACHE="/home/mvdan/go/cache"
GOENV="/home/mvdan/.config/go/env"
GOEXE=""
GOFLAGS="-ldflags=-w"
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GONOPROXY=""
GONOSUMDB=""
GOOS="linux"
GOPATH="/home/mvdan/go"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org"
GOROOT="/home/mvdan/tip"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/home/mvdan/tip/pkg/tool/linux_amd64"
GCCGO="gccgo"
AR="ar"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD="/dev/null"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build558320792=/tmp/go-build -gno-record-gcc-switches"
GOROOT/bin/go version: go version devel +623d653db7 Sat Jun 29 13:17:15 2019 +0000 linux/amd64
GOROOT/bin/go tool compile -V: compile version devel +623d653db7 Sat Jun 29 13:17:15 2019 +0000
uname -sr: Linux 5.1.15-arch1-1-ARCH
/usr/lib/libc.so.6: GNU C Library (GNU libc) stable release version 2.29.
gdb --version: GNU gdb (GDB) 8.3
</pre></details>
### What did you do?
Modules A and B live in separate repositories, and they are both v0. A depends on B.
I push a commit to B, and then run `go get -u` in A to update the dependency to that new master commit. Alternatively, I tried `go get B@master`.
### What did you expect to see?
A's `go.mod` updated to show B's latest pseudo-version with the master commit just pushed.
### What did you see instead?
B's version staying at an older version, pushed hours or days before. Presumably because `proxy.golang.org` caches the version for `@latest` and `@master`.
This went unnoticed for a while and someone was scratching their head until they realised `go get -u` hadn't done what they were used to.
`GOPROXY=direct go get -u` had to be used to work around this issue in the end. The other option was to manually copy the commit hash to force the proxy to fetch that git ref, which is a bit cumbersome.
In the end, I worry that the new `GOPROXY` default in 1.13 is going to confuse people who are used to pushing to multiple repos that depend on each other. I understand why the proxy server can't fetch from git repeatedly every single time, but I wonder if there's something we can do to not silently break this.
/cc @bcmills @heschik @jayconrod | NeedsDecision,modules | medium | Critical |
462,802,658 | vscode | Allow for configuration files in .devcontainer | Currently I don't seem to know a good way to have container-specific settings, tasks, and launch configurations.
The closest thing to it is to set up settings.json, tasks.json, and launch.json in the workspace .vscode folder, but then it applies to the workspace when you open it normally.
It would be ideal if we could have those same files in the .devcontainer folder and have vscode pick them up from there when the folder is opened in the container.
Currently the closest thing to it is that in .devcontainer/devcontainer.json there is a settings key, which is similar to having settings.json, but I don't see anything for tasks or launch. | feature-request,debug,tasks | low | Major |
462,808,657 | TypeScript | Generic type can't be assigned to the same DeepReadonly type |
Assigning a generic type `T` to its deep readonly `DeepReadonly<T>` causes an error, however the shallow `Readonly<T>` works fine. Also, I found that concrete (known) types do work well with `DeepReadonly`.
**TypeScript Version:** 3.4.0-dev.201xxxxx
**Search Terms:**
DeepReadonly assign return generic nested readonly
**Code**
```ts
// Simple deep readonly
export type DRO<T> = { readonly [P in keyof T]: DR<T[P]> };
type DR<T> = T extends object ? DRO<T> : T;

// Sample type
interface TMP {
    a: number;
    b: {
        c: boolean;
    };
}

// Works fine
const a: TMP = { a: 10, b: { c: false } };
const b: DRO<TMP> = a;

// Doesn't work
class X<T> {
    constructor(private readonly t: T) {}

    foo(): DRO<T> {
        // Error here
        return this.t;
    }

    bar(): Readonly<T> {
        // Works fine
        return this.t;
    }
}
```
**Expected behavior:**
Method `foo` shouldn't have error
**Actual behavior:**
Error: Type 'T' is not assignable to type 'DRO<T>'.
**Playground Link:**
https://www.typescriptlang.org/play/#code/PTAEGUEsFsAcBsCmoAmjG1AJ0QQxQPYB28AngFCIAesBWALqPabMgCIBKA8gDwAqAPlABeUAG9seQiVKgA2gAVQkIqADWiUgQBmoPgF0AXKE79F+oQF8A3OWasTHfkNF9Q1eoiIoAzqAIARgBWiADGjAD8jryCoMZ8tuQgELhwSEwsiOQqnljauKHIfACySmLkoKC4xkQArtABiFi2lQHG5ZWVocYBBARIuEQtoDbkluRJYADqdGp+2ipZocQ+jNV6pSLiVcYAjAAMADSgbdvdoPnwPsiWI7bLRKsnxpwxpS5ViclsBIg+RAByRgAd1m5FC8FwPj8AA1nOIKqAHqssLVwnQABSwLCQABuuE8knwxDITHiAEpxONEdo+hjyS9uPCOp1kgBRLBYOigAAWTSynUk9FqWFU9B5kB8ADp6NZQIjqa1cFh6cYOFISaRmYjKskZlg5hdFjqhSKxRLpbKFWMgA
**Related Issues:**
Maybe: #12826 and #21919
| Bug | low | Critical |
462,835,601 | pytorch | Sparse allreduce for ProcessGroupNCCL | In #22036 we added sparse allreduce for ProcessGroupGloo. It works for sparse CUDA tensors, but doesn't leverage InfiniBand like NCCL does. Therefore, we should have a sparse allreduce implementation for ProcessGroupNCCL as well. | oncall: distributed,feature,triaged | low | Major |
462,839,881 | TypeScript | Extra slashes allowed in import paths |
**TypeScript Version:** 3.6.0-dev.20190628
**Search Terms:**
- import
- require
- slash / separator
**Code**
Create an import that uses multiple `/` instead of a single one:
```ts
import { abc } from ".///////folder///////y";
console.log(abc);
```
**Expected behavior:**
An error is produced. This is not a valid import in many module loaders.
**Actual behavior:**
No error
**Playground Link:**
**Related Issues:**
| Bug | low | Critical |
462,844,458 | go | proposal: crypto/x509: ability to add custom curve when parsing X509 certificate | Per https://github.com/golang/go/issues/26776, using a third party library for custom curve is advised.
However, when parsing an x509 certificate (`x509.ParseCertificate()`), it is not possible to supply a custom curve.
My proposal is to offer a configuration that can be used to supply a function that returns an `elliptic.Curve` from an `asn1.ObjectIdentifier`, to complement the default `namedCurveFromOID()`. | Proposal,Proposal-Hold,Proposal-Crypto | medium | Major |
462,852,206 | rust | Move `compile-pass` tests to `check-pass` or `build-pass` | `compile-pass` was the old way to assert that UI tests were able to successfully build. However, it would do a full build of the code, including codegen and linking. Many of our tests don't need this, however, and should instead use the new `check-pass`, introduced in https://github.com/rust-lang/rust/pull/61778. For tests that exercise codegen and linking, `build-pass` can be used instead.
As a first step, it would be good to remove `compile-pass` to prevent users adding new tests that unconsciously take a dependency on codegen/linking, and push them to use `check/build-pass` to make the distinction explicit.
To do this, we'd like to start with a mass migration of `compile-pass` tests to `build-pass` (rather than `check-pass` because we don't want to accidentally stop testing codegen/linking functionality). However, we don't want to lose the distinction between tests which are intentionally `build-pass` and those that have been automatically migrated, so automatically-migrated tests should include a note like `// build-pass (FIXME(this-issue-#): could be check-pass?)` or similar.
Once that's done, we can remove `compile-pass` and work on moving over the `FIXME`-tagged tests to either `check-pass` or remove the `FIXME`. | C-cleanup,A-testsuite,E-mentor,T-compiler,E-medium,A-compiletest | low | Major |
462,852,648 | kubernetes | Dynamic informers do not stop when custom resource definition is removed | **What happened**:
Once started, dynamic informers for custom resources are not stopped.
**What you expected to happen**:
After the dynamic informers resynced, they would stop informers belong to no-longer-existing resources.
**How to reproduce it (as minimally and precisely as possible)**:
1. Start a cluster with garbage collection and quota enabled with high enough verbosity to see API requests logged
2. Create a custom resource definition
3. Observe list/watch requests from the dynamic informers:
```
I0701 14:31:54.604559 77309 wrap.go:47] GET /apis/mygroup.example.com/v1/myresources?limit=500&resourceVersion=0: (382.601Β΅s) 200 [hyperkube/v1.16.0 (darwin/amd64) kubernetes/40038d5/dynamic-informers 127.0.0.1:62616]
```
4. Delete the custom resource definition
5. Observe continuous list requests from the dynamic informers receiving a 404:
```
I0701 14:33:05.857207 77309 wrap.go:47] GET /apis/mygroup.example.com/v1/myresources?limit=500&resourceVersion=0: (346.987Β΅s) 404 [hyperkube/v1.16.0 (darwin/amd64) kubernetes/40038d5/dynamic-informers 127.0.0.1:62616]
I0701 14:33:06.861218 77309 wrap.go:47] GET /apis/mygroup.example.com/v1/myresources?limit=500&resourceVersion=0: (349.903Β΅s) 404 [hyperkube/v1.16.0 (darwin/amd64) kubernetes/40038d5/dynamic-informers 127.0.0.1:62616]
...
```
After the switch by the [garbage collector and quota controllers to use the generic metadata client](https://github.com/kubernetes/kubernetes/pull/78742), the controller manager logs around this are even harder to understand:
```
2019-11-16T01:18:20.15247609Z stderr F E1116 01:18:20.152307 1 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
2019-11-16T01:18:20.450468558Z stderr F E1116 01:18:20.450095 1 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
2019-11-16T01:18:20.632847838Z stderr F E1116 01:18:20.625706 1 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
2019-11-16T01:18:20.693940889Z stderr F E1116 01:18:20.693774 1 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
2019-11-16T01:18:20.740153328Z stderr F E1116 01:18:20.739963 1 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
2019-11-16T01:18:20.740400594Z stderr F E1116 01:18:20.740276 1 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
2019-11-16T01:18:20.820833937Z stderr F E1116 01:18:20.820695 1 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
2019-11-16T01:18:20.848726246Z stderr F E1116 01:18:20.848605 1 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
2019-11-16T01:18:20.856013314Z stderr F E1116 01:18:20.855816 1 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
2019-11-16T01:18:20.950015373Z stderr F E1116 01:18:20.949837 1 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
2019-11-16T01:18:20.998885723Z stderr F E1116 01:18:20.998732 1 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
2019-11-16T01:18:21.001894579Z stderr F E1116 01:18:21.001726 1 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
```
**Anything else we need to know?**:
**Environment**:
- Kubernetes version (use `kubectl version`):
- Cloud provider or hardware configuration:
- OS (e.g: `cat /etc/os-release`):
- Kernel (e.g. `uname -a`):
- Install tools:
- Network plugin and version (if this is a network-related bug):
- Others:
/sig api-machinery
/priority important-soon
/cc @sttts @jpbetz @deads2k | kind/bug,sig/api-machinery,priority/important-longterm,lifecycle/frozen | high | Critical |
462,873,617 | pytorch | PyTorch Tensor subclasses and protocols for NumPy interoperability | ## 🚀 Feature
This is a description of several related features that are best considered together.
1. Allow subclassing `Tensor` and propagating subclass instances correctly with `torch` functions, operators, using views/slices/etc.
2. Support the NumPy array protocols
3. Allow other libraries to reuse the PyTorch API via a similar method as NumPy uses
## Motivation
This issue/document is motivated by the attempts in [PyTorch PR 17249](https://github.com/pytorch/pytorch/issues/17249) and follow-up PRs [18610](https://github.com/pytorch/pytorch/issues/18610), [22235](https://github.com/pytorch/pytorch/pull/22235) and [22247](https://github.com/pytorch/pytorch/issues/22247) to make `torch.Tensor` subclasses interact better with NumPy and Torch functions. Currently (June 2019), `Tensor` subclassing is not yet supported and while PyTorch in many cases follows the NumPy API, direct interoperability is limited (instead one needs to explicitly convert between `torch.Tensor` and `numpy.ndarray`).
## Potential goals
These are _potential_ goals that have been collected from the above referenced PRs, other PyTorch issues (referenced in the relevant sections), as well as from discussions with mainly Edward Yang, and also other PyTorch and NumPy maintainers:
1. Support subclassing `torch.Tensor` in Python
2. Preserve `Tensor` subclasses when calling `torch` functions on them
3. Preserve `Tensor` subclasses when calling `numpy` functions on them
4. Use the NumPy API with PyTorch tensors (i.e. NumPy API calls dispatch to `torch` functions)
5. Use the PyTorch API with `torch.Tensor`-like objects that are _not_ `Tensor` subclasses
6. Reuse NumPy ufunc implementations directly from PyTorch
7. Allow operations on mixed array types, e.g. `tensor + ndarray`
Important to keep in mind when implementing features that achieve any of the above goals:
- The PyTorch team is [planning](https://github.com/pytorch/pytorch/issues/17249#issuecomment-505968610) more complex `Tensor` wrappers, an effort that should not be made significantly more difficult.
- The PyTorch team may want to provide a (g)ufunc-like mechanism in PyTorch in the future. Also that should not be made unnecessarily complex.
### Support subclassing `torch.Tensor` in Python
Note that `Tensor` seems to have been designed with subclassing in mind, at least that's what the comments at https://pytorch.org/docs/stable/_modules/torch/tensor.html (_NB: If you subclass Tensor ..._) indicate. Support seems incomplete though. The most basic way of subclassing just adds some new attributes, e.g. to carry around specific metadata like "this tensor represents voltages".
```
class AddedAttributeTensor(torch.Tensor):
    data_info = 'voltage'

t1 = torch.Tensor([1, 2])
t2 = AddedAttributeTensor([3, 6])
print("Extra attribute of subclass: ", t2.data_info)
print("Tensor + subclass gives back a Tensor instance: ", t1 + t2)
print("Is subclass preserved for operators? ", isinstance(t2 + t2, AddedAttributeTensor))
print("Does slicing preserve subclass? ", isinstance(t2[:1], AddedAttributeTensor))
print("Does taking a view preserve subclass? ", isinstance(t2.view((2, 1)), AddedAttributeTensor))
```
Running this code shows that for a regular subclass, the subclass doesn't propagate (we always get a plain `Tensor` instance back):
```
Extra attribute of subclass: voltage
Tensor + subclass gives back a Tensor instance: tensor([4., 8.])
Is subclass preserved for operators? False
Does slicing preserve subclass? False
Does taking a view preserve subclass? False
```
The [NumPy subclassing docs](https://www.numpy.org/devdocs/user/basics.subclassing.html#ndarrays-and-object-creation) discuss the ways in which new class instances can be created; the same will apply to `Tensor` instances. To deal with that, two methods are needed:
1. A `__new__` method for initialization in case of an explicit constructor call.
2. A method to deal with creation in other ways, like slicing or taking a view (which will bypass `__new__`).
For (2) NumPy uses `__array_finalize__`, however in [gh-22247](https://github.com/pytorch/pytorch/pull/22247) the claim was that this method is very expensive (because it gets called too often - this doesn't seem the case though, see the _"Performance considerations"_ section further down). Instead it introduced a `THPVariable_result_ptype`, which achieves the same thing (although it was confusingly mixed up with an incorrect use of `__array_ufunc__` there).
Also note that `torch.Tensor._make_subclass` already exists (defined in `torch/csrc/autograd/python_variable.cpp`, according to the comment specifically for use with `torch.nn.Parameter`). It's unclear whether that or the code in [gh-22235](https://github.com/pytorch/pytorch/pull/22235) works for views and slicing; it's not tested.
### Preserve `Tensor` subclasses when calling `torch` functions on them
_Note that this was the goal of [gh-22235](https://github.com/pytorch/pytorch/pull/22235), which is useful as a reference._
For `Tensor` subclasses that _do not_ implement `__torch_function__` (assuming that gets implemented, see goal 5), this will work if the `__array_finalize__` equivalent gets implemented (see previous section). For subclasses that _do_ implement `__torch_function__`, all `torch` functions get overridden by the subclass, so it has more control over this (although in many cases it will still make use of the `__array_finalize__` equivalent).
### Preserve `Tensor` subclasses when calling `numpy` functions on them
_Note that this was the goal of [gh-22247](https://github.com/pytorch/pytorch/pull/22247), which is useful as a reference._
This should be done via implementation of `__array_ufunc__` and `__array_function__` on the `Tensor` class. At that point, all the NumPy functions that have a PyTorch equivalent will work (including subclass propagation if that's implemented for `torch` functions and operators), and other NumPy functions will error.
### Use the NumPy API with PyTorch tensors (i.e. NumPy API calls dispatch to `torch` functions)
NumPy provides two protocols that ndarray-like objects (like `torch.Tensor`) can implement to make NumPy API calls dispatch to their own implementations. Those protocols are `__array_ufunc__` (available since NumPy 1.13) and `__array_function__` (available since NumPy 1.17 by default; in 1.16 one can enable it via an environment variable). These two protocols work in the same way:
1. Pass `Tensor` instance to a numpy function (e.g. `numpy.abs`)
2. NumPy detects the presence of `Tensor.__array_ufunc__` (or `Tensor.__array_function__`) and delegates execution to it.
3. The `Tensor.__array_ufunc__` implementation then can forward that function call to the right implementation (`torch.abs` or `Tensor.abs`).
The main benefit of implementing these functions is that users can prototype new code with NumPy, or reuse their existing code, and that code will then work unchanged when passing in PyTorch tensors (even if they live on a GPU). For more context, see e.g. [NEP 18](https://www.numpy.org/neps/nep-0018-array-function-protocol.html). Also note that CuPy, Dask and pydata/sparse already implement these protocols.
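To make the dispatch concrete, here is a toy sketch from the library-author side (the `WrappedTensor` class and the `HANDLED_FUNCTIONS` table are invented for illustration and are not proposed API; this needs NumPy >= 1.17, or 1.16 with the environment variable):
```python
import numpy as np
import torch

HANDLED_FUNCTIONS = {np.sum: torch.sum, np.mean: torch.mean}

class WrappedTensor:
    """Toy tensor-like object (not a torch.Tensor subclass)."""
    def __init__(self, t):
        self.t = t

    def __array_function__(self, func, types, args, kwargs):
        if func not in HANDLED_FUNCTIONS:
            return NotImplemented  # NumPy then raises TypeError
        # unwrap WrappedTensor arguments and forward to the torch equivalent
        args = tuple(a.t if isinstance(a, WrappedTensor) else a for a in args)
        return HANDLED_FUNCTIONS[func](*args, **kwargs)

x = WrappedTensor(torch.tensor([1.0, -2.0]))
print(np.sum(x))  # dispatches to torch.sum -> tensor(-1.)
```
An implementation on `Tensor` itself would do essentially the same, mapping each supported NumPy function to its `torch` counterpart.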
Note that there's a related discussion at [gh-2228](https://github.com/pytorch/pytorch/issues/2228), "PyTorch with NumPy syntax?", with a fairly detailed plan to provide a new `torch.np` API. That is very much related. That plan does seem less desirable than using `__array_function__` - why create a whole new API in a `torch.np` submodule when it's now possible to use the NumPy API itself?
There may be a **backwards compatibility** issue here. Because `torch.Tensor` already implements
`__array__` and `__array_wrap__`, many (but not all) NumPy functions will already work with Tensor:
```
In [1]: import torch
In [2]: t = torch.Tensor([1, -2])
In [3]: np.abs(t)
Out[3]: tensor([1., 2.])
In [4]: np.sin(t)
Out[4]: tensor([ 0.8415, -0.9093])
In [5]: np.dot(t, t)
Out[5]: 5.0
In [6]: torch.dot(t, t) # would be called if t had __array_function__
Out[6]: tensor(5.)
In [7]: np.mean(t) # not all functions work ....
...
TypeError: mean() missing 3 required positional argument: "dim", "keepdim", "dtype"
```
So here the return from `np.dot(t, t)` would change from `5.0` to `tensor(5.)`. For functions in NumPy that _don't_ have a PyTorch equivalent, the PyTorch `__array_function__` implementation should explicitly convert to ndarray with `np.asarray` and then call the NumPy functions (this preserves current behavior). Returning `NotImplemented` will cause NumPy to raise a `TypeError` while those functions worked previously via `__array__`.
Likely this is not a major issue (Dask, CuPy and pydata/sparse all didn't consider this problematic), but it's good to explicitly think about this. The [Partial implementation of NumPy's API](https://www.numpy.org/neps/nep-0018-array-function-protocol.html#partial-implementation-of-numpy-s-api) section of NEP 18 provides a detailed discussion on this point.
A **note of caution** is probably warranted here: while `__array_ufunc__` has been around for over 2 years and has generally worked very well, `__array_function__` (which does work very similarly but has to deal with more flexible function signatures) is brand new. An alternative discussed in NEP 18 is to use [multiple dispatch](https://www.numpy.org/neps/nep-0018-array-function-protocol.html#multiple-dispatch). That would be a more comprehensive solution (one can override anything, see e.g. [uarray](http://uarray.readthedocs.io/)), however it's a more invasive change with likely larger overhead (3-5 function calls rather than 1). Adding a protocol now would not preclude adding a multiple dispatch layer on top later. If the larger overhead is acceptable though, the PyTorch team could also decide that a more complete multiple dispatch layer (perhaps in a separate namespace or project) would be the better solution.
### Use the PyTorch API with `torch.Tensor`-like objects that are _not_ `Tensor` subclasses
This would allow users to write their own tensor implementations and have users use the familiar PyTorch API with it. It can be implemented with a `__torch_function__` protocol, which would work analogously to the NumPy `__array_function__` protocol.
Providing such a `__torch_function__` protocol will also help `Tensor` subclasses to modify the behavior of individual `torch` functions, while forwarding directly to the `torch` functions that the subclass does not want to modify (for an example of how this can work see the `__array_ufunc__` section of the [NumPy subclassing docs](https://www.numpy.org/devdocs/user/basics.subclassing.html#array-ufunc-for-ufuncs)).
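For reference, a small sketch in the spirit of those NumPy docs (the `ClampedArray` class is invented for this sketch; the proposed `__torch_function__` would enable the same "override a few, forward the rest" style for `torch` functions):
```python
import numpy as np

class ClampedArray(np.ndarray):
    """Customizes np.add, forwards every other ufunc to the default implementation."""
    def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
        # strip the subclass from the inputs to avoid re-dispatching to this method
        inputs = tuple(i.view(np.ndarray) if isinstance(i, ClampedArray) else i
                       for i in inputs)
        result = getattr(ufunc, method)(*inputs, **kwargs)
        if ufunc is np.add:              # the one function we modify
            result = np.clip(result, 0, 1)
        return np.asarray(result).view(ClampedArray)

x = np.array([0.5, 0.9]).view(ClampedArray)
print(x + x)  # add is clipped      -> [1. 1.]
print(x * 2)  # multiply forwarded  -> [1.  1.8]
```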
In [issue 17268](https://github.com/pytorch/pytorch/issues/17268), "Generic object to tensor dispatching", there's both a proposal to API to register tensor conversion functions and a response from the Pyro developers that they'd prefer support for `__array_function__`.
As an alternative, the `__array_function__` protocol could be used directly on `torch` functions. The way to reuse `__array_function__` would be to decorate functions in PyTorch with `@array_function_dispatch` (from `numpy.core.overrides`, which is currently still private). The upsides of that are that the mechanism exists already, and is already supported by other array libraries. The potential downsides are that:
- the mechanism is still very new and marked in the NumPy docs as "may still change" (although changes other than bug fixes are unlikely). Therefore, vendoring the decorator would be the way to go.
- it puts a more stringent requirement on functions in the `torch` namespace to be compatible in signature with the `numpy` ones - this is desirable, but may not be achievable due to backwards compatibility reasons.
- it's still not clear if `__array_function__` is actually supposed to be used like this (likely yes, but still under discussion in [NumPy gh-13872](https://github.com/numpy/numpy/issues/13872))
In summary: it's probably better to go with `__torch_function__` that works the same way as `__array_function__` but has its own "domain". That requires other libraries that want to reuse the PyTorch API to implement `__torch_function__` to explicitly opt in.
### Reuse NumPy ufunc implementations directly from PyTorch
What the rationale for this goal would be is a little unclear. The number of NumPy functions that are ufuncs and are not already covered by equivalent PyTorch functionality is not that large (see a list of NumPy ufuncs [here](https://docs.scipy.org/doc/numpy/reference/ufuncs.html#available-ufuncs)). The implementation of this feature in [gh-22247](https://github.com/pytorch/pytorch/pull/22247) was partially motivated by the goal of using `Tensor` subclasses with NumPy functions; that is best done differently though.
In case users want to use NumPy functions (not just ufuncs) with `Tensor`s today, this either may already work (it does for many functions) or can be done with explicit conversion:
```
x = tensor.numpy() # to a numpy array (no-copy if tensor lives in CPU memory)
y = np.somefunction(x) # use with NumPy
tensor2 = torch.from_numpy(y) # convert back to a tensor
```
Adding extra design complexity to avoid these explicit casts does not seem worthwhile. In case there are NumPy functions that are popular, adding those functions to PyTorch itself seems like a better option, especially because that would work for tensors that live in GPU memory as well and be more performant.
The explicit casts would be a little more cumbersome for subclasses, but that's probably not a good enough reason to add design complexity.
### Allow operations on mixed array types
In [gh-22247](https://github.com/pytorch/pytorch/pull/22247#issuecomment-506123115) it was suggested that a good goal could be to make operators on mixed array/tensor types work better. Currently `torch.Tensor + numpy.ndarray` will call the PyTorch implementation of the `+` operator, while `numpy.ndarray + torch.Tensor` will call the NumPy implementation of `+`. This is simply the way Python operator support works, and is unaffected by any of the array protocols like `__array_function__`. The general advice should be "don't do that" - it's better to be explicit there and convert both left-hand and right-hand side of any expression to either `Tensor`s or `ndarray`s.
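A tiny illustration of that asymmetry (behavior as described above, for the PyTorch/NumPy versions current at the time of writing; `t` and `a` are just example names):
```python
import numpy as np
import torch

t = torch.tensor([1.0, 2.0])
a = np.array([1.0, 2.0])

print(type(t + a))  # torch.Tensor:  PyTorch's __add__ handles the mixed operands
print(type(a + t))  # numpy.ndarray: NumPy's __add__ handles it (via Tensor.__array__)
```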
## Performance considerations
The extra overhead of the `__array_function__` and `__array_ufunc__` protocols is that of a single Python function call. Typically that is 300-400 ns (see e.g. https://github.com/numpy/numpy/pull/12830#issuecomment-506893356 for benchmarks). Adding a `__torch_function__` protocol should give a similar extra overhead for calling `torch.somefunction` _if_ a new check needs to be added. However it's likely (says @ezyang) that a check and fast path for `torch.Tensor` input already exists - in that case there will be no extra overhead for `Tensor` input (and 300 ns for non-`Tensor` input seems less of an issue). _needs investigating_
For comparison, the current overhead a NumPy ufunc including `__array_ufunc__` is of the same order (~400 ns) while the overhead of `torch` functions is significantly larger, ~3 us:
```
In [1]: import torch
In [2]: t = torch.Tensor([1, -2])
In [3]: %timeit torch.abs(t)
3.2 Β΅s Β± 30.5 ns per loop (mean Β± std. dev. of 7 runs, 100000 loops each)
In [4]: x = t.numpy()
In [5]: %timeit np.abs(x)
392 ns Β± 5.83 ns per loop (mean Β± std. dev. of 7 runs, 1000000 loops each)
```
That implies that this overhead should be acceptable.
In the PRs that triggered this subclassing discussion, it was said that `__array_finalize__` has too much overhead so cannot be used (hence the alternative in gh-22247). To check this, let's implement a similar small array (taken from [here](https://www.numpy.org/devdocs/user/basics.subclassing.html#simple-example-adding-an-extra-attribute-to-ndarray), see that description for more extensive comments):
```
class InfoArray(np.ndarray):
    def __new__(subtype, shape, dtype=float, buffer=None, offset=0,
                strides=None, order=None, info=None):
        # Create the ndarray instance of our type, given the usual
        # ndarray input arguments. This will call the standard
        # ndarray constructor, but return an object of our type.
        # It also triggers a call to InfoArray.__array_finalize__
        obj = super(InfoArray, subtype).__new__(subtype, shape, dtype,
                                                buffer, offset, strides,
                                                order)
        obj.info = info
        return obj

    def __array_finalize__(self, obj):
        # ``self`` is a new object resulting from
        # ndarray.__new__(InfoArray, ...), therefore it only has
        # attributes that the ndarray.__new__ constructor gave it -
        # i.e. those of a standard ndarray.
        if obj is None: return
        # Note that it is here, rather than in the __new__ method,
        # that we set the default value for 'info', because this
        # method sees all creation of default objects
        self.info = getattr(obj, 'info', None)


n = 3
x = np.arange(n)
i = InfoArray(shape=(n,), dtype=np.int64, buffer=x)
```
Now to test the performance:
```
In [2]: %timeit x + x
436 ns Β± 0.616 ns per loop (mean Β± std. dev. of 7 runs, 1000000 loops each)
In [3]: %timeit x + i
1.43 Β΅s Β± 10.8 ns per loop (mean Β± std. dev. of 7 runs, 1000000 loops each)
In [4]: %timeit i + i
2.4 Β΅s Β± 20.3 ns per loop (mean Β± std. dev. of 7 runs, 100000 loops each)
```
And for `n = 30000`:
```
In [6]: %timeit x + x
10.9 Β΅s Β± 27.4 ns per loop (mean Β± std. dev. of 7 runs, 100000 loops each)
In [7]: %timeit x + i
12.5 Β΅s Β± 29.9 ns per loop (mean Β± std. dev. of 7 runs, 100000 loops each)
In [8]: %timeit i + i
13.8 Β΅s Β± 439 ns per loop (mean Β± std. dev. of 7 runs, 100000 loops each)
```
So the extra overhead of `__array_finalize__` is ~2-3 us when implemented in pure Python (and a subclass author could decide to implement the method in C if that's a problem). There does seem to be a small design issue in NumPy: `x + i` and `i + i` both require a single new subclass instance to be created, so `i + i` should not be more expensive than `x + i` (but this is a minor detail).
## Some comments on the NumPy & PyTorch APIs
- It's not desirable to copy all of the NumPy API; that API is way too large and many functions are of limited interest or have better alternatives.
- See [RNumPy](https://github.com/Quansight-Labs/rnumpy) for a work-in-progress attempt to define a sensible subset of the full NumPy API that other array libraries can target.
- The PyTorch maintainers have expressed interest/willingness in adding functions to the `torch` namespace. Focusing first on matching the signatures of functions in the `torch` namespace with the methods in the `torch.Tensor` namespace would be nice. Right now the functions that are there often have different signatures (e.g. compare `torch.sum` and `Tensor.sum`).
## Pitch / Possible plan forward
Implement the following (these don't depend on each other, no order implied):
1. `__array_ufunc__` and `__array_function__` (small backwards compat impact, no performance impact)
2. `__torch_function__` (no backwards compat impact, likely no performance impact for `Tensor` input and 300-400 ns for non-`Tensor` input, needs investigating)
3. A subclass finalization method (as in [gh-22235](https://github.com/pytorch/pytorch/pull/22235) or `__array_finalize__`) (no backwards compat impact, 300 ns - 3 us performance impact for subclasses only)
This would close:
- [gh-2228](https://github.com/pytorch/pytorch/issues/2228), "PyTorch with numpy syntax?"
- [gh-20073](https://github.com/pytorch/pytorch/pull/20073), "numpy arg translation proof of concept"
- [gh-17249](https://github.com/pytorch/pytorch/issues/17249): "Proposal: Add `__tensor_wrap__` method similar to numpy `__array_wrap__`"
- [gh-22235](https://github.com/pytorch/pytorch/pull/22235): "ptype propagation on torch functions"
- [gh-22247](https://github.com/pytorch/pytorch/pull/22247): "ptype propagation on numpy functions"
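For reference, item 1 of the plan above is the mechanism that lets a non-ndarray type intercept NumPy functions, and a `__torch_function__` protocol would presumably mirror it. A minimal sketch of an override (assuming NumPy >= 1.17, where the protocol is enabled by default; a real implementation would also handle nested sequences of arrays):
```python
import numpy as np

class WrappedArray:
    def __init__(self, data):
        self.data = np.asarray(data)

    def __array_function__(self, func, types, args, kwargs):
        # Unwrap our type, dispatch to the plain NumPy implementation, re-wrap arrays.
        unwrapped = [a.data if isinstance(a, WrappedArray) else a for a in args]
        result = func(*unwrapped, **kwargs)
        return WrappedArray(result) if isinstance(result, np.ndarray) else result

x = WrappedArray([1.0, 2.0, 3.0])
print(np.mean(x))        # dispatches through WrappedArray.__array_function__
print(np.flip(x).data)   # returns a WrappedArray wrapping the flipped data
```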
| high priority,feature,triaged,module: numpy | medium | Critical |
462,900,219 | youtube-dl | [dailytelegraph] site support | <!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2019.07.02. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that site you are requesting is not dedicated to copyright infringement, see https://yt-dl.org/copyright-infringement. youtube-dl does not support such sites. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
- Search the bugtracker for similar site support requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Finally, put x into all relevant boxes (like this [x])
-->
- [x] I'm reporting a new site support request
- [x] I've verified that I'm running youtube-dl version **2019.07.02**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that none of provided URLs violate any copyrights
- [x] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
<!--
Provide all kinds of example URLs support for which should be included. Replace following example URLs by yours.
-->
- Single video: http://www.dailytelegraph.com.au/news/nsw/theo-hayezs-godfather-and-cousin-speak-about-his-disappearance/video/5db7f7025b04cbea5ad016653551221c
## Description
<!--
Provide any additional information.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->
It should be very straightforward to add support for this site - once the web page is opened in a browser with cookies enabled, the URL for the video is found to be
https://ssaiplayback.eu-west-1.prod.boltdns.net/playback/once/v1/hls/v4/clear/5348771529001/2cb8109a-39f2-4e4f-a4ce-ec3941116954/647fdd6e-daf8-469f-947a-ed30c7deedfc/a4243f31-acfa-437d-a0b9-731d832cd315/default_video1200_5_960x540/media.m3u8
Note that several other newspaper websites owned by the same company replicate the same article and video - for instance, http://www.heraldsun.com.au/news/theo-hayezs-godfather-and-cousin-speak-about-his-disappearance/video/5db7f7025b04cbea5ad016653551221c
Verbose log
youtube-dl -o "C:/Video/%(title)s.%(ext)s" "http://www.dailytelegraph.com.au/news/nsw/theo-hayezs-godfather-and-cousin-speak-about-his-disappearance/video/5db7f7025b04cbea5ad016653551221c" --verbose
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['-o', 'C:/Video/%(title)s.%(ext)s', 'http://www.dailytelegraph.com.au/news/nsw/theo-hayezs-godfather-and-cousin-speak-about-his-disappearance/video/5db7f7025b04cbea5ad016653551221c', '--verbose']
[debug] Encodings: locale cp1252, fs mbcs, out cp850, pref cp1252
[debug] youtube-dl version 2019.07.02
[debug] Python version 3.4.4 (CPython) - Windows-10-10.0.10586
[debug] exe versions: phantomjs 2.1.1
[debug] Proxy map: {}
[generic] 5db7f7025b04cbea5ad016653551221c: Requesting header
[redirect] Following redirect to https://www.dailytelegraph.com.au/news/nsw/theo-hayezs-godfather-and-cousin-speak-about-his-disappearance/video/5db7f7025b04cbea5ad016653551221c?nk=43ce73c73eef6705e9dff93f21fdac6b-1562012992
[generic] 5db7f7025b04cbea5ad016653551221c?nk=43ce73c73eef6705e9dff93f21fdac6b-1562012992: Requesting header
WARNING: Falling back on generic information extractor.
[generic] 5db7f7025b04cbea5ad016653551221c?nk=43ce73c73eef6705e9dff93f21fdac6b-1562012992: Downloading webpage
[generic] 5db7f7025b04cbea5ad016653551221c?nk=43ce73c73eef6705e9dff93f21fdac6b-1562012992: Extracting information
ERROR: Unsupported URL: https://www.dailytelegraph.com.au/news/nsw/theo-hayezs-godfather-and-cousin-speak-about-his-disappearance/video/5db7f7025b04cbea5ad016653551221c?nk=43ce73c73eef6705e9dff93f21fdac6b-1562012992
Traceback (most recent call last):
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmphnjfkuku\build\youtube_dl\YoutubeDL.py", line 796, in extract_info
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmphnjfkuku\build\youtube_dl\extractor\common.py", line 530, in extract
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmphnjfkuku\build\youtube_dl\extractor\generic.py", line 3333, in _real_extract
youtube_dl.utils.UnsupportedError: Unsupported URL: https://www.dailytelegraph.com.au/news/nsw/theo-hayezs-godfather-and-cousin-speak-about-his-disappearance/video/5db7f7025b04cbea5ad016653551221c?nk=43ce73c73eef6705e9dff93f21fdac6b-1562012992 | site-support-request | low | Critical |
462,901,884 | pytorch | Build failure with setup.py | Hi. I was building pytorch with setup.py and after
```
[100%] Built target torch_python
Install the project...
-- Install configuration: "Release"
```
And some output about copying files I got the following error:
```
running build_ext
-- Building with NumPy bindings
Traceback (most recent call last):
File "setup.py", line 856, in <module>
'python/serialized_test/data/operator_test/*.zip',
File "/usr/lib/python3.7/site-packages/setuptools/__init__.py", line 145, in setup
return distutils.core.setup(**attrs)
File "/usr/lib/python3.7/distutils/core.py", line 148, in setup
dist.run_commands()
File "/usr/lib/python3.7/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/usr/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/usr/lib/python3.7/distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/usr/lib/python3.7/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "setup.py", line 378, in run
report('-- Detected cuDNN at ' + CUDNN_LIBRARY + ', ' + CUDNN_INCLUDE_DIR)
TypeError: can only concatenate str (not "NoneType") to str
```
I did `print(CUDNN_LIBRARY, CUDNN_INCLUDE_DIR)` and the output is `None None`, but cuDNN was found:
```
-- Found CUDNN: /usr/include
-- Found cuDNN: v7.6.1 (include: /usr/include, library: /usr/lib/libcudnn.so)
```
I always build pytorch with just cmake and I wanted to use setup.py instead.
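For what it's worth, a defensive patch around the failing line would avoid the crash (this is only a sketch of a possible fix, not a verified one - the real question is why the two variables end up as `None` even though CMake found cuDNN):
```python
# Hypothetical guard around the failing concatenation in setup.py
if CUDNN_LIBRARY and CUDNN_INCLUDE_DIR:
    report('-- Detected cuDNN at ' + CUDNN_LIBRARY + ', ' + CUDNN_INCLUDE_DIR)
else:
    report('-- cuDNN paths not set in the environment; relying on CMake detection')
```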
os: arch linux
cudnn version: 7.6.1.34
cuda version: 10.1.168 | module: build,triaged | low | Critical |
462,904,089 | terminal | ConPTY sends two `WINDOW_BUFFER_SIZE_EVENT` messages when the window is restored from maximize | I think I found one resize bug. A ConPTY sends two `WINDOW_BUFFER_SIZE_EVENT` messages when the window is restored from maximize, but the first one has the wrong size (not sure of the internals of this, but it appears in conhost/tmux/pwsh, alacritty/pwsh, and Windows Terminal).
Run [this program](https://github.com/parkovski/conutils/blob/master/conevents.cpp) as `conevents -es` in conhost and a different terminal to see the difference; maximize and restore the window. For example, restoring the WinTerm window reports `119 x 46` in the `WINDOW_BUFFER_SIZE_EVENT` message, but `GetConsoleScreenBufferInfo` returns the correct value `119 x 32`.
_Originally posted by @parkovski in https://github.com/microsoft/terminal/issues/1465#issuecomment-506924361_ | Product-Conpty,Area-Server,Issue-Bug,Priority-3 | low | Critical |
462,909,995 | flutter | Flutter run with local engine in profile mode now requires arm64 on 64 bit devices | Previously, I was able to do:
```
flutter run --profile --local-engine=android_profile
```
On an arm64 device and it worked. Now this results in a not found for `libflutter.so`, which it can't find in a `lib/arm64` directory.
If I do
```
flutter run --profile --local-engine=android_profile_arm64
```
It works.
Maybe we should consider defaulting the Android CPU to arm64 at the least if this is WAI.
/cc @chinmaygarde @blasten @jason-simmons | team,tool,a: quality,P2,team-tool,triaged-tool | low | Minor |
462,910,178 | rust | Tracking issue for `slice_take` | Feature gate: `#![feature(slice_take)]`
### Public API
```rust
impl<T> [T] {
fn take<'a, R: OneSidedRange<usize>>(self: &mut &'a Self, range: R) -> Option<&'a Self>;
fn take_mut<'a, R: OneSidedRange<usize>>(self: &mut &'a mut Self, range: R) -> Option<&'a mut Self>;
fn take_first<'a>(self: &mut &'a Self) -> Option<&'a T>;
fn take_first_mut<'a>(self: &mut &'a mut Self) -> Option<&'a mut T>;
fn take_last<'a>(self: &mut &'a Self) -> Option<&'a T>;
fn take_last_mut<'a>(self: &mut &'a mut Self) -> Option<&'a mut T>;
}
// core::ops
trait OneSidedRange<T: ?Sized>: RangeBounds<T> {}
impl<T> OneSidedRange<T> for RangeTo<T> where Self: RangeBounds<T>;
impl<T> OneSidedRange<T> for RangeFrom<T> where Self: RangeBounds<T>;
impl<T> OneSidedRange<T> for RangeToInclusive<T> where Self: RangeBounds<T>;
```
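A usage sketch on nightly (behind the feature gate), showing how the receiver slice shrinks as elements are taken:
```rust
#![feature(slice_take)]

fn main() {
    let mut s: &[i32] = &[1, 2, 3, 4, 5];

    let front = s.take(..2).unwrap();    // split off a prefix
    assert_eq!(front, &[1, 2]);
    assert_eq!(s, &[3, 4, 5]);

    let first = s.take_first().unwrap(); // split off a single element
    assert_eq!(*first, 3);
    assert_eq!(s, &[4, 5]);
}
```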
### Steps / History
- [ ] #49173
- [ ] #62282
- [ ] #77065
- [x] #88502
- [ ] Final comment period (FCP)
- [ ] Stabilization PR
<!--
Once the feature has gone through a few release cycles and there are no
unresolved questions left, the feature might be ready for stabilization.
If this feature didn't go through the RFC process, a final comment period
(FCP) is always needed before stabilization. This works as follows:
A library API team member can kick off the stabilization process, at which point
the rfcbot will ask all the team members to verify they agree with
stabilization. Once enough members agree and there are no concerns, the final
comment period begins: this issue will be marked as such and will be listed
in the next This Week in Rust newsletter. If no blocking concerns are raised in
that period of 10 days, a stabilzation PR can be opened by anyone.
-->
### Unresolved Questions
<!--
Include any open questions that need to be answered before the feature can be
stabilised. If multiple (unrelated) big questions come up, it can be a good idea
to open a separate issue for each, to make it easier to keep track of the
discussions.
It's useful to link any relevant discussions and conclusions (whether on GitHub,
Zulip, or the internals forum) here.
-->
- | T-libs-api,C-tracking-issue,A-slice,I-libs-api-nominated | medium | Critical |
462,924,779 | pytorch | CPU random number generator is slow | ## π Bug
<!-- A clear and concise description of what the bug is. -->
## To Reproduce
```
import torch
from time import perf_counter
def run():
ys = 0
with torch.no_grad():
for _ in range(100):
ys = ys + torch.rand(256, 20).mm(torch.rand(20, 20)).mean()
# ys = ys + np.random.rand(256, 20).dot(np.random.rand(20, 20)).mean()
def main():
for _ in range(100):
run()
t0 = 0
t0 -= perf_counter()
for _ in range(1000):
run()
t0 += perf_counter()
print(f"time = {t0 / 1000:.6f}")
if __name__ == '__main__':
main()
```
From @colesbury's investigation, the random number generator is taking ~40% of the total time.

cc: @ezyang @roosephu | module: performance,triaged,module: random | low | Critical |
462,930,354 | pytorch | Add support for serializing Mkldnn Tensor | ## π Bug
Currently, Mkldnn tensors cannot be serialized with `torch.save`.
## To Reproduce
```python
import torch
dense_tensor = torch.randn(1, dtype=torch.float)
mkldnn_tensor = dense_tensor.to_mkldnn()
torch.save(mkldnn_tensor, 'mkldnn_tensor.pt')
```
Throws error:
```
RuntimeError Traceback (most recent call last)
<ipython-input-6-cca16a2a9070> in <module>
----> 1 torch.save(mkldnn_tensor, 'mkldnn_tensor.pt')
pytorch/torch/serialization.py in save(obj, f, pickle_module, pickle_protocol)
222 >>> torch.save(x, buffer)
223 """
--> 224 return _with_file_like(f, "wb", lambda f: _save(obj, f, pickle_module, pickle_protocol))
225
226
pytorch/torch/serialization.py in _with_file_like(f, mode, body)
147 f = open(f, mode)
148 try:
--> 149 return body(f)
150 finally:
151 if new_fd:
pytorch/torch/serialization.py in <lambda>(f)
222 >>> torch.save(x, buffer)
223 """
--> 224 return _with_file_like(f, "wb", lambda f: _save(obj, f, pickle_module, pickle_protocol))
225
226
pytorch/torch/serialization.py in _save(obj, f, pickle_module, pickle_protocol)
294 pickler = pickle_module.Pickler(f, protocol=pickle_protocol)
295 pickler.persistent_id = persistent_id
--> 296 pickler.dump(obj)
297
298 serialized_storage_keys = sorted(serialized_storages.keys())
pytorch/torch/tensor.py in __reduce_ex__(self, proto)
50 return (torch._utils._rebuild_qtensor, args)
51 else:
---> 52 args = (self.storage(),
53 self.storage_offset(),
54 tuple(self.size()),
```
## Expected behavior
## Expected behavior
Mkldnn tensor can be saved via `torch.save` and later restored via `torch.load` (with layout preserved as torch._mkldnn, but without preserving the mkldnn internal layout) | module: serialization,triaged,module: mkldnn | low | Critical |
462,940,953 | flutter | Semantics Tree API should not allow cycles, multi-parent children | Today the engine API would allow cycles or multi-parent children, which aren't really valid on any of the platforms we support.
We should add asserts to prevent this. | framework,engine,a: accessibility,P2,team-engine,triaged-engine | low | Minor |
462,948,245 | flutter | Platform view accessibility does not respond to taps on Moto G 4 | Running the webview example project on a Moto G 4, tapping on elements within the webview does not focus them. Instead, to give talk back focus to anything within the webview, I have to swipe left/right. Tapping on Flutter widgets, however, does correctly focus them for talk back.
The Moto G 4 that I'm using is running Android 7.0. | e: device-specific,platform-android,engine,a: accessibility,a: platform-views,P2,a: plugins,team-android,triaged-android | low | Major |
462,949,506 | node | SIGTERM handler does not run if there are no tasks running and exit code is always 0 | <!--
Thank you for reporting a possible bug in Node.js.
Please fill in as much of the template below as you can.
Version: output of `node -v`
Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows)
Subsystem: if known, please specify the affected core module name
If possible, please provide code that demonstrates the problem, keeping it as
simple and free of external dependencies as you can.
-->
* **Version**: v11.15.0
* **Platform**: Linux tsundberg-dev 4.18.0-20-generic #21~18.04.1-Ubuntu SMP Wed May 8 08:43:37 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
* **Subsystem**: process
<!-- Please provide more details below this comment. -->
---
To run the following tests paste them into `index.js` and run `node index.js; echo $?`
---
#### Case 1
The following results in an expected output:
```node
process.kill(process.pid, 'SIGTERM')
```
Output:
```
Terminated
143
```
---
#### Case 2
Adding a SIGTERM handler results in an exit code of 0, and the handler never runs:
```node
process.on('SIGTERM', () => {
console.error('Handle ran!')
process.exit(1)
})
process.kill(process.pid, 'SIGTERM')
```
Output:
```
0
```
---
#### Case 3
Adding a timeout or some other task that keeps node alive results in the expected behavior:
```node
process.on('SIGTERM', () => {
console.error('Handle ran!')
process.exit(1)
})
process.kill(process.pid, 'SIGTERM')
setTimeout(() => {}, 100000)
```
Output:
```
Handle ran!
1
``` | confirmed-bug,process | low | Critical |
462,950,222 | godot | Some files and folders are not shown in the editor when a file has this character in its name. | <!-- Please search existing issues for potential duplicates before filing yours:
https://github.com/godotengine/godot/issues?q=is%3Aissue
-->
**Godot version:** v3.1.1.stable.official
<!-- Specify commit hash if non-official. -->
**OS/device including version:** Arch Linux, Dell Inspiron 5000 series
<!-- Specify GPU model and drivers if graphics-related. -->
**Issue description:** When the character `$'\220'` (or U+0090) is used in a filename, files and folders in the same directory are no longer shown in the file tree or in "open file" dialogs. Presumably, other strange unicode characters could cause problems as well.
Without the file:

With the file:

<!-- What happened, and what was expected. -->
**Steps to reproduce:**
1. On a linux computer, create a new file in your project with the command `touch $'\220'`.
2. Observe as part of your project completely disappears from the editor, but is still shown in your normal file manager.
**Minimal reproduction project:** [testProject.tar.gz](https://github.com/godotengine/godot/files/3347612/testProject.tar.gz)
<!-- Recommended as it greatly speeds up debugging. Drag and drop a zip archive to upload it. -->
| bug,platform:linuxbsd,platform:macos,topic:editor | low | Critical |
462,953,414 | rust | Mutually recursive `async fn`s are hard to make `Send` | There are several other related issues to this, but I'm opening this to track this one specifically since it's a pain-- there are workarounds, but it'd be lovely (and should be possible) to make this "just work." The following example compiles just fine without ` + Send`, but adding the `Send` bound causes a cycle error:
```rust
#![feature(async_await)]
use {
std::{
future::Future,
pin::Pin,
},
};
type BoxFuture = Pin<Box<dyn Future<Output = ()> /* + Send */>>; // adding Send causes a cycle error
async fn foo() -> BoxFuture {
Box::pin(bar()) as _
}
async fn bar() {
let _ = foo().await;
}
```
Working around the cycle error is possible, but annoying:
```rust
#![feature(async_await)]
use {
std::{
future::Future,
pin::Pin,
},
};
type BoxFuture = Pin<Box<dyn Future<Output = ()> + Send>>;
async fn foo() -> BoxFuture {
box_bar()
}
fn box_bar() -> BoxFuture {
Box::pin(bar())
}
async fn bar() {
let _ = foo().await;
}
```
Ideally we wouldn't have a cycle error in either case, since it is possible to see that `foo` must be `Send` without ever looking at the body of `bar`, since `bar` is immediately boxed into a `BoxFuture`. | T-compiler,A-async-await,AsyncAwait-Triaged | low | Critical |
462,957,015 | kubernetes | Improve logging when AD controller lets kubelet attach volumes | <!-- Please only use this template for submitting enhancement requests -->
**What would you like to be added**:
Add a lower-level log (2) and/or pod event (info level) whenever the attach detach controller decides to skip processing a volume and let kubelet handle the attaching.
**Why is this needed**:
The typical volume attach handling is done by the attach detach controller running in the master.
It used to be handled by kubelet, but was moved to a single controller in the 1.3 timeframe, and this configuration is not being used or supported in most configurations anymore. I would also like to deprecate this option, but that's [another discussion](https://github.com/kubernetes/kubernetes/issues/55517).
However, the mechanism to transition to controller attach-detach uses Node annotations, which is not a reliable way to communicate configuration between master and nodes, as the Node objects can be modified and annotations changed or dropped. In these situations, it would be useful to surface a more visible message to make it easier for users to debug.
@kubernetes/sig-storage-feature-requests
| priority/backlog,sig/storage,kind/feature,lifecycle/frozen | low | Critical |
462,981,430 | TypeScript | Include sourcemaps in Typescript NPM package. | ## Search Terms
typescript.js.map typescript.js source map
## Suggestion
I would like `typescript.js.map` to be included in the distributed NPM package, along with the original sources. This could be achieved either with `inlineSources: true` or by including the `src` folder in the NPM package.
## Use Cases
I'm working on a compiler transformer and this has resulted in me stepping through `typescript.js`. Since source maps are not included, this results in me stepping through a 125,000 line JavaScript file. Not only does my editor hate me for this and sometimes locks up, the code is significantly harder to read than the original TypeScript source code.
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback | low | Major |
462,988,288 | rust | pthread_cond_signal/pthread_cond_broadcast return EAGAIN after retrying 8192 times in macOS | Since OSX 10.7 (Lion), pthread_cond_signal and pthread_cond_broadcast return EAGAIN after retrying 8192 times.
- 10.7.0 (Lion): http://www.opensource.apple.com/source/Libc/Libc-763.11/pthreads/pthread_cond.c
- 10.8.0 (Mountain Lion): http://www.opensource.apple.com/source/Libc/Libc-825.24/pthreads/pthread_cond.c
- 10.10.0 (Yosemite): http://www.opensource.apple.com/source/libpthread/libpthread-105.1.4/src/pthread_cond.c
- 10.11.0 (El Capitan): http://www.opensource.apple.com/source/libpthread/libpthread-137.1.1/src/pthread_cond.c
- 10.12.0 (Sierra): http://www.opensource.apple.com/source/libpthread/libpthread-218.1.3/src/pthread_cond.c
- 10.13.0 (High Sierra): http://www.opensource.apple.com/source/libpthread/libpthread-301.1.6/src/pthread_cond.c
- 10.14.0 (Mojave): http://www.opensource.apple.com/source/libpthread/libpthread-330.201.1/src/pthread_cond.c
- 10.14.1 (Mojave/latest): http://www.opensource.apple.com/source/libpthread/libpthread-330.220.2/src/pthread_cond.c
I'm not sure whether this behaviour is correct; Ruby [retries until the pthread functions stop returning EAGAIN](https://redmine.ruby-lang.org/issues/5155) (the linked issue is written in Japanese).
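For illustration, that retry approach looks roughly like this in C (a sketch of the workaround, not a claim about what the standard library should do):
```c
#include <errno.h>
#include <pthread.h>
#include <stdio.h>

/* Keep signalling while the macOS implementation reports EAGAIN
   (it gives up internally after 8192 attempts). */
static int cond_signal_retry(pthread_cond_t *cond)
{
    int r;
    do {
        r = pthread_cond_signal(cond);
    } while (r == EAGAIN);
    return r;
}

int main(void)
{
    pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
    printf("pthread_cond_signal returned %d\n", cond_signal_retry(&cond));
    return 0;
}
```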
| O-macos,T-libs-api | low | Minor |
462,990,882 | create-react-app | postcss-modules-values variables isn't working in a non-module css/scss files | <!--
Please note that your issue will be fixed much faster if you spend about
half an hour preparing it, including the exact reproduction steps and a demo.
If you're in a hurry or don't feel confident, it's fine to report bugs with
less details, but this makes it less likely they'll get fixed soon.
In either case, please use this template and fill in as many fields below as you can.
Note that we don't provide help for webpack questions after ejecting.
You can find webpack docs at https://webpack.js.org/.
-->
### Describe the bug
postcss-modules-values variables aren't working on non-module CSS/SCSS files
### Did you try recovering your dependencies?
<!--
Your module tree might be corrupted, and that might be causing the issues.
Let's try to recover it. First, delete these files and folders in your project:
* node_modules
* package-lock.json
* yarn.lock
Then you need to decide which package manager you prefer to use.
We support both npm (https://npmjs.com) and yarn (http://yarnpkg.com/).
However, **they can't be used together in one project** so you need to pick one.
If you decided to use npm, run this in your project directory:
npm install -g npm@latest
npm install
This should fix your project.
If you decided to use yarn, update it first (https://yarnpkg.com/en/docs/install).
Then run in your project directory:
yarn
This should fix your project.
Importantly, **if you decided to use yarn, you should never run `npm install` in the project**.
For example, yarn users should run `yarn add <library>` instead of `npm install <library>`.
Otherwise your project will break again.
Have you done all these steps and still see the issue?
Please paste the output of `npm --version` and/or `yarn --version` to confirm.
-->
Yes. I already tried both css and scss files to see which one works, but no luck - the variables aren't working in non-module css/scss files.
### Which terms did you search for in User Guide?
<!--
There are a few common documented problems, such as watcher not detecting changes, or build failing.
They are described in the Troubleshooting section of the User Guide:
https://facebook.github.io/create-react-app/docs/troubleshooting
Please scan these few sections for common problems.
Additionally, you can search the User Guide itself for something you're having issues with:
https://facebook.github.io/create-react-app/
If you didn't find the solution, please share which words you searched for.
This helps us improve documentation for future readers who might encounter the same problem.
-->
Can't find any solutions, "postcss-modules-values variables isn't working on non-module/ regular css/scss files"
### Environment
<!--
To help identify if a problem is specific to a platform, browser, or module version, information about your environment is required.
This enables the maintainers quickly reproduce the issue and give feedback.
Run the following command in your React app's folder in terminal.
Note: The result is copied to your clipboard directly.
`npx create-react-app --info`
Paste the output of the command in the section below.
-->
System:
OS: Windows 10
CPU: (6) x64 Intel(R) Core(TM) i5-8600K CPU @ 3.60GHz
Binaries:
Node: 10.16.0 - C:\Program Files\nodejs\node.EXE
Yarn: 1.15.2 - C:\Program Files (x86)\Yarn\bin\yarn.CMD
npm: 6.9.0 - C:\Program Files\nodejs\npm.CMD
Browsers:
Edge: 44.17763.1.0
Internet Explorer: 11.0.17763.1
npmPackages:
react: ^16.8.6 => 16.8.6
react-dom: ^16.8.6 => 16.8.6
react-scripts: 3.0.1 => 3.0.1
npmGlobalPackages:
create-react-app: Not Found
### Steps to reproduce
<!--
How would you describe your issue to someone who doesnβt know you or your project?
Try to write a sequence of steps that anybody can repeat to see the issue.
-->
1. I'm using the postcss-modules-values library to create variables for my CSS modules.
2. The variables aren't working in regular / non-module css/scss files. I could use CSS custom properties instead, but I don't want to refactor my code because I'm already using postcss-modules-values; in my previous (Gatsby) project this works fine even with non-module css/scss files.
3. It builds without an error, but the variable values aren't there.
4. As much as possible I don't want to eject my project; it'll be my last resort if ever.
### Expected behavior
<!--
How did you expect the tool to behave?
Itβs fine if youβre not sure your understanding is correct.
Just write down what you thought would happen.
-->
I expect the variables created with postcss-modules-values to be available in both module and non-module css/scss files, because it behaves exactly like this in my Gatsby project.
### Actual behavior
<!--
Did something go wrong?
Is something broken, or not behaving as you expected?
Please attach screenshots if possible! They are extremely helpful for diagnosing issues.
-->
Nothing crashes - it builds without an error - but if I import my variables into a non-module css/scss file the variable values aren't read; the variables simply don't work.
### Reproducible demo
<!--
If you can, please share a project that reproduces the issue.
This is the single most effective way to get an issue fixed soon.
There are two ways to do it:
* Create a new app and try to reproduce the issue in it.
This is useful if you roughly know where the problem is, or canβt share the real code.
* Or, copy your app and remove things until youβre left with the minimal reproducible demo.
This is useful for finding the root cause. You may then optionally create a new project.
This is a good guide to creating bug demos: https://stackoverflow.com/help/mcve
Once youβre done, push the project to GitHub and paste the link to it below:
-->
The bug is understandable even without a demo - it is probably a webpack config issue. I just want you to know that it behaves differently in create-react-app; it works perfectly in other setups like Gatsby.
<!--
What happens if you skip this step?
We will try to help you, but in many cases it is impossible because crucial
information is missing. In that case we'll tag an issue as having a low priority,
and eventually close it if there is no clear direction.
We still appreciate the report though, as eventually somebody else might
create a reproducible example for it.
Thanks for helping us help you!
-->
| issue: bug | low | Critical |
463,008,154 | go | syscall: Stat_t with different fields' name between Darwin and Linux | ### What version of Go are you using (`go version`)?
<pre>
$go version
go version go1.12.6 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
YES!
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/MC/Library/Caches/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/MC/Golang/pkgs"
GOPROXY=""
GORACE=""
GOROOT="/Users/MC/Golang/go"
GOTMPDIR=""
GOTOOLDIR="/Users/MC/Golang/go/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD="/Users/MC/Workspaces/qutoutiao/godis/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/th/22jxv0q12c5c23v02rt67gg00000gp/T/go-build623000608=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>
### What did you do?
Access file `ctime` from `os.FileInfo`;
https://play.golang.org/p/LD80qQy5qyC
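The usual workaround today is to split the accessor into per-OS files; a Linux-side sketch (the Darwin file would read `Ctimespec` instead of `Ctim`):
```go
// +build linux

package main

import (
	"fmt"
	"os"
	"syscall"
)

// ctime returns the change time. On Darwin the field is named Ctimespec,
// so portable code keeps this accessor in ctime_linux.go / ctime_darwin.go.
func ctime(fi os.FileInfo) syscall.Timespec {
	return fi.Sys().(*syscall.Stat_t).Ctim
}

func main() {
	fi, err := os.Stat("/tmp")
	if err != nil {
		panic(err)
	}
	fmt.Println(ctime(fi).Sec)
}
```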
### What did you expect to see?
All `os.FileInfo.Sys()` values should expose the same field names across operating systems.
### What did you see instead?
Within Linux OS, it should call `Stat_t.Ctim.Sec`; But under Darwin OS, you have `Stat_t.Ctimespec.Sec` in hand. | Unfortunate,NeedsInvestigation,compiler/runtime | low | Critical |
463,080,936 | TypeScript | MediaQueryList.prototype.addListener & removeListener are marked as deprecated | <!-- π¨ STOP π¨ π¦π§π’π£ π¨ πΊπ»πΆπ· π¨
Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Even if you think you've found a *bug*, please read the FAQ first, especially the Common "Bugs" That Aren't Bugs section!
Please help us by doing the following steps before logging an issue:
* Search: https://github.com/Microsoft/TypeScript/search?type=Issues
* Read the FAQ: https://github.com/Microsoft/TypeScript/wiki/FAQ
Please fill in the *entire* template below.
-->
<!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. -->
**TypeScript Version:** master
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:**
MediaQueryList addListener
addListener
MediaQueryList
**Code**
```ts
const listener = window.matchMedia("(max-width: 1000px)");
const handler = () => {};
listener.addListener(handler);
listener.removeListener(handler);
```
**Expected behavior:**
`addListener` and `removeListener` are "valid" signatures to invoke
**Actual behavior:**
`addListener` and `removeListener` are deprecated
**Playground Link:** <!-- A link to a TypeScript Playground "Share" link which demonstrates this behavior -->
https://www.typescriptlang.org/play/#code/MYewdgzgLgBANgS2gUzMgTjAvDA7gsAExFwDoBbAQymAAsBZZQhSgCgCJWqAPAWn0JRaALhgBGAAxSADtwCU7OQG4AUKEixalInAzYYrOdgB8MAN4BfVYhRp0pSoUIAZJFFQZWWnRmUqb7nak6MjkIABuyK62nt6EuujKQA
**Related Issues:** <!-- Did you find other bugs that looked similar? -->
None
**Notes:**
This document describes the `MediaQueryList` interface: https://drafts.csswg.org/cssom-view/#dom-mediaquerylist-addlistener
It doesn't list `addListener` and `removeListener` as deprecated, gives no indication that they will be deprecated in the future, and does not say that you should use `addEventListener` and `removeEventListener` instead.
It does say that `addListener` and `removeListener` are direct aliases.
**Source:**
https://github.com/microsoft/TypeScript/blob/master/lib/lib.dom.d.ts#L10018
**Why would I bring this up:**
Because **Safari** and Internet Explorer do not support `addEventListener` and `removeEventListener`, but they do support `addListener` and `removeListener`.
Because `addListener` and `removeListener` are deprecated it means that people are more likely to use the alternatives without looking at the issues with doing so.
A lot of examples with the usage of `MediaQueryList` show `addListener` and `removeListener` usage, even the documentation on MDN: https://developer.mozilla.org/en-US/docs/Web/API/MediaQueryList#Examples
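In practice, code that has to run on those browsers ends up feature-detecting and falling back to the "deprecated" methods anyway; a sketch:
```ts
const mql = window.matchMedia("(max-width: 1000px)");
const handler = () => { /* ... */ };

if (typeof mql.addEventListener === "function") {
  mql.addEventListener("change", handler);
} else {
  // Safari and Internet Explorer only implement the legacy pair.
  mql.addListener(handler);
}
```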
| Bug | medium | Critical |
463,128,227 | flutter | CupertinoSegmented control widget doesn't work as intended with SingleChildScrollView widget | **CupertinoSegmentedControl** doesn't work properly under **SingleChildScrollView** if the child container becomes scrollable (when it exceeds the viewport and needs to scroll to see further). Switch the segments for a few times to understand
[Code to reproduce](https://pastebin.com/PkCiZp5e) | framework,f: scrolling,f: cupertino,has reproducible steps,P2,found in release: 3.3,found in release: 3.6,team-design,triaged-design | low | Major |
463,128,590 | TypeScript | Suggestion: Add built-in vanilla constructor interface | <!-- π¨ STOP π¨ π¦π§π’π£ π¨ πΊπ»πΆπ· π¨
Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Please read the FAQ first, especially the "Common Feature Requests" section.
-->
## Search Terms
1. built-in constructor interface
<!-- List of keywords you searched for before creating this issue. Write them down here so that others can find this suggestion more easily -->
## Suggestion
<!-- A summary of what you'd like to see added or changed -->
Consider the below from the [official TS docs on decorators](https://www.typescriptlang.org/docs/handbook/decorators.html):
```TypeScript
function classDecorator<T extends {new(...args:any[]):{}}>(constructor:T) {
return class extends constructor {
newProperty = "new property";
hello = "override";
}
}
```
vs:
```TypeScript
function classDecorator<T extends Constructor>(constructor:T) {
return class extends constructor {
newProperty = "new property";
hello = "override";
}
}
```
I suggest we have a built-in `Constructor` interface so we can save typing.
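For reference, the alias many codebases declare by hand today looks like this; a built-in could have roughly this shape (a sketch, not a committed signature):
```TypeScript
type Constructor<T = {}> = new (...args: any[]) => T;

function classDecorator<T extends Constructor>(constructor: T) {
    return class extends constructor {
        newProperty = "new property";
        hello = "override";
    }
}
```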
## Use Cases
<!--
What do you want to use this for?
What shortcomings exist with current approaches?
-->
See above.
## Examples
<!-- Show how this would be used and what the behavior would be -->
See above.
## Checklist
My suggestion meets these guidelines:
* [x ] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x ] This wouldn't change the runtime behavior of existing JavaScript code
* [ ] This could be implemented without emitting different JS based on the types of the expressions
* [ x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x ] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback | low | Critical |
463,132,539 | TypeScript | Support Unicode RegExp property escapes | TypeScript currently doesn't transpile Unicode property escapes (of the form `\p{ID_Start}` or `\P{ASCII}`) in regular expressions.
It would be great if it did!
https://www.typescriptlang.org/play/?target=1#code/MYewdgzgLgBATgUwOYIB4wLwwPQB0AOA3gMrBwCW+UGA4oggNYC+2ArgNwBQokIANggB0fEEgAUiFKkFQE0MQHJAA8AKAlKvZA
## Search Terms
regexp, regular expression, Unicode, property escapes, ES2018
## Suggestion
Support transpiling Unicode property escapes in regular expressions. Examples:
```js
/\p{ID_Start}/u;
/\P{ASCII}/u;
/\p{Script_Extensions=Greek}/u;
```
## Use Cases
One particular use case is matching identifier characters in JavaScript parsers. This is currently commonly implemented as [a large script-generated regular expression pattern](https://github.com/jquery/esprima/issues/1979) (like in Esprima) or as [a magical-looking list of code point ranges](https://github.com/microsoft/TypeScript/issues/32213) (like in TypeScript itself). However, it would be [much simpler to use property escapes](https://github.com/tc39/proposal-regexp-unicode-property-escapes#other-examples).
```js
const regexIdentifierStart = /[$_\p{ID_Start}]/u;
const regexIdentifierPart = /[$_\u200C\u200D\p{ID_Continue}]/u;
const regexIdentifierName = /^(?:[$_\p{ID_Start}])(?:[$_\u200C\u200D\p{ID_Continue}])*$/u;
```
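Where the target runtime lacks the syntax, such patterns currently have to be expanded into plain code-point ranges by hand or by a plugin - which is exactly the rewrite a transpile step could automate. A tiny hand-expanded example (sketch):
```js
// /\P{ASCII}/u expressed without property escapes:
const notAscii = /[^\u0000-\u007F]/u;
```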
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,In Discussion | low | Major |
463,167,280 | create-react-app | Clarify polyfilling and browserlist usage documentation | Hi,
The [docs about supported browsers](https://facebook.github.io/create-react-app/docs/supported-browsers-features) is not clear.
After reading it I still don't understand the following:
- If the polyfills are not imported, what is the browserslist field in package.json used for?
- If I need to support IE11 (and Array.includes, for example), what do I need to do exactly?
- Is @babel/preset-env used to automatically require polyfills?
- Moreover the [docs from @babel/preset-env](https://babeljs.io/docs/en/babel-preset-env#usebuiltins-entry) (included with preset react app) indicates to import @babel/polyfill which is now deprecated. `react-app-polyfill/stable.js` requires the correct package `core-js/stable` but is browserlist still used using this file ?
Thanks for your help | issue: proposal,tag: documentation | medium | Major |
463,174,752 | TypeScript | Code completion: suggest tokens from "paths" in tsconfig.json | <!-- π¨ STOP π¨ π¦π§π’π£ π¨ πΊπ»πΆπ· π¨
Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Please read the FAQ first, especially the "Common Feature Requests" section.
-->
## Search Terms
Code completion; module
## Suggestion
## Use Cases
For those paths which contains no wildcard character, can we suggest tokens from module it points?
For example:
```json
{
    "baseUrl": ".",
    "paths": {
        "my-base-module": [
            "../../dest/base"
        ]
    }
}
```
in the example, we can say that bare module specifier `"my-base-module"` is mapped to path `"../../dest/base"`, which is actually a .d.ts.
However, my IDE(Visual Studio Code) do not list available symbols in `"my-base-module"`,
such as, When I type:
```ts
const v: Ve // I 'm going to type Vec3, which is defined in "../../dest/base".
```
I hope the IDE can suggest `Vec3` and auto-complete the import statement as:
```
import { Vec3 } from "my-base-module";
```
You may ask: Why don't you put `"../../dest/base"` in `"types"` option in `tsconfig.json`.
Well, `"../../dest/base"` is just a library of mine. It is actually another TypeScript project's output **bundle** file (bundled with Rollup). It is bundled as a single module, which may later be registered as a SystemJS module named `"my-base-module"`, because I want users of the library to treat it as a single JS module so they can write:
```
import /* xx */ from "my-base-module";
```
## Examples
As stated above, I would upload my example if it's needed.
## Checklist
My suggestion meets these guidelines:
* [-] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [-] This wouldn't change the runtime behavior of existing JavaScript code
* [-] This could be implemented without emitting different JS based on the types of the expressions
* [-] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [-] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,In Discussion | low | Critical |
463,195,272 | pytorch | Weak Symbols Resolution Causes Segmentation Fault in External Libraries | ## π Bug
PyTorch causes a segfault inside PyArrow's internals while reading a Parquet file.
## To Reproduce
In order to reproduce, run the code below.
```python
# filename: script.py
import torch as T
import pyarrow.parquet
tbl = pyarrow.parquet.read_table(tablename)  # tablename: path to any Parquet file
```
During execution of the code above a segmentation fault occurs and core is dumped. In fact, register `rax` points to a wrong place in the instruction `lock xadd DWORD PTR [rax],edx`. The related stack trace is below.
```
(gdb) bt
#0 0x00007f551323a9ce in std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release() + 0x1e ()
at .env/lib/python3.7/site-packages/pyarrow/./libarrow.so.13
#1 0x00007f5513258b85 in std::vector<std::shared_ptr<arrow::Buffer>, std::allocator<std::shared_ptr<arrow::Buffer> > >::~vector() + 0x35 ()
at .env/lib/python3.7/site-packages/pyarrow/./libarrow.so.13
#2 0x00007f5513258fc5 in std::_Sp_counted_ptr_inplace<arrow::ArrayData, std::allocator<arrow::ArrayData>, (__gnu_cxx::_Lock_policy)2>::_M_dispose() + 0x55 ()
at .env/lib/python3.7/site-packages/pyarrow/./libarrow.so.13
#3 0x00007f551323a9e9 in std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release() + 0x39 ()
at .env/lib/python3.7/site-packages/pyarrow/./libarrow.so.13
#4 0x00007f551323a9e9 in std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release() + 0x39 ()
at .env/lib/python3.7/site-packages/pyarrow/./libarrow.so.13
#5 0x00007f55132d1855 in std::_Sp_counted_ptr_inplace<arrow::ChunkedArray, std::allocator<arrow::ChunkedArray>, (__gnu_cxx::_Lock_policy)2>::_M_dispose() + 0x45 ()
at .env/lib/python3.7/site-packages/pyarrow/./libarrow.so.13
#6 0x00007f5513ba9589 in std::_Sp_counted_ptr_inplace<arrow::Column, std::allocator<arrow::Column>, (__gnu_cxx::_Lock_policy)2>::_M_dispose() + 0xc9 ()
at .env/lib/python3.7/site-packages/pyarrow/lib.cpython-37m-x86_64-linux-gnu.so
#7 0x00007f551323a9e9 in std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release() + 0x39 ()
at .env/lib/python3.7/site-packages/pyarrow/./libarrow.so.13
#8 0x00007f55132d18d5 in arrow::SimpleTable::~SimpleTable() + 0x45 ()
at .env/lib/python3.7/site-packages/pyarrow/./libarrow.so.13
#9 0x00007f5513ac706a in __pyx_tp_dealloc_7pyarrow_3lib_Table(_object*) + 0xda ()
at .env/lib/python3.7/site-packages/pyarrow/lib.cpython-37m-x86_64-linux-gnu.so
#10 0x00007f554082ecb8 in No symbol matches 0x00007f554082ecb8. () at /usr/lib/libpython3.7m.so.1.0
```
## Expected behavior
First of all, no segfault should happen. Secondly, PyTorch shared libraries need better isolation of exported symbols. I think this issue is related to PyTorch, not PyArrow, since PyTorch exports some weak symbols with default visibility which replace symbols defined in `libstdc++.so`, as shown in the section _Additional Context_.
## Environment
```bash
pip3 install pyarrow
pip3 install https://download.pytorch.org/whl/cpu/torch-1.1.0-cp37-cp37m-linux_x86_64.whl
```
- PyTorch Version (e.g., 1.0): `1.1.0`
- OS (e.g., Linux): `5.1.15-arch1-1-ARCH`
- How you installed PyTorch (`conda`, `pip`, source): `pip`
- Build command you used (if compiling from source): `none`
- Python version: `3.7.3`
- CUDA/cuDNN version: `none`
- GPU models and configuration: `none`
## Additional context
Preloading of `libstdc++.so.6` solves the issue.
```bash
$ LD_PRELOAD=/usr/lib/libstdc++.so.6.0.26 python script.py
```
So it seems clear that the issue is related to the relocation of weak symbols. I have tried to figure out which symbols are resolved the wrong way.
```bash
$ LD_DEBUG=bindings python script.py 2>&1 | grep pyarrow | grep torch | c++filt | sed 's/.*`//' | sed "s/'.*//" | sort | uniq
...
std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_add_ref_copy()
std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_destroy()
std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release()
std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_weak_release()
typeinfo for std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>
typeinfo name for std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>
vtable for std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>
...
```
One can see that ref counter is resolved to PyTorch libraries. The complete list of [symbols](https://github.com/pytorch/pytorch/files/3349946/symbols.txt) is attached. I hope that it will be helpful.
| triaged | low | Critical |
463,225,567 | flutter | User touch and draw polygon over google map | <!-- Thank you for using Flutter!
If you are looking for support, please check out our documentation
or consider asking a question on Stack Overflow:
* https://flutter.dev/
* https://api.flutter.dev/
* https://stackoverflow.com/questions/tagged/flutter?sort=frequent
If you have found a bug or if our documentation doesn't have an answer
to what you're looking for, then fill our the template below. Please read
our guide to filing a bug first: https://flutter.dev/docs/resources/bug-reports
-->
## Use case
<!--
Please tell us the problem you are running into that led to you wanting
a new feature.
Is your feature request related to a problem? Please give a clear and
concise description of what the problem is.
Describe alternative solutions you've considered. Is there a package
on pub.dev/flutter that already solves this?
-->
## Proposal
<!--
Briefly but precisely describe what you would like Flutter to be able to do.
Consider attaching images showing what you are imagining.
Does this have to be provided by Flutter directly, or can it be provided
by a package on pub.dev/flutter? If so, maybe consider implementing and
publishing such a package rather than filing a bug.
-->
| c: new feature,p: maps,package,c: proposal,team-ecosystem,P3,triaged-ecosystem | low | Critical |
463,325,816 | pytorch | Tracing an RNN does not support torch.nn.utils.rnn.PackedSequence as input | ## π Bug
`torch.jit.trace` over an RNN (GRU, LSTM) does not work when you feed it a PackedSequence (such as the ones created by `torch.nn.utils.rnn.pack_padded_sequence`). However, this works in eager mode.
## To Reproduce
```python
import torch
from torch import nn
torch.jit.trace(nn.LSTM(input_size=300,
hidden_size=32,
batch_first=False,
num_layers=2,
bidirectional=True),
torch.nn.utils.rnn.pack_padded_sequence(torch.rand(10,2,300), torch.tensor([10,3]))
)
```
Returns:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-2-32df167805c9> in <module>
6 num_layers=2,
7 bidirectional=True),
----> 8 (torch.nn.utils.rnn.pack_padded_sequence(torch.rand(10,2,300), torch.tensor([10,3]))
9 )
10 )
~/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/jit/__init__.py in trace(func, example_inputs, optimize, check_trace, check_inputs, check_tolerance, _force_outplace, _module_class)
686 traced = _module_class(func, **executor_options)
687 traced._c._create_method_from_trace('forward', func, example_inputs,
--> 688 var_lookup_fn, _force_outplace)
689 else:
690 name = getattr(func, '__name__', 'forward')
RuntimeError: Only tensors and (possibly nested) tuples of tensors or dicts are supported as inputs or outputs of traced functions, but instead got value of type NoneType.
Value: None
```
## Expected behavior
In order to be able to feed variable-length input to an RNN and export it to ONNX, support for either `torch.nn.utils.rnn.PackedSequence` or a tuple with 4 elements (the 4 attributes a `PackedSequence` has) is needed when tracing RNNs.
## Additional context
ScriptModule is not an option in some cases either; since a RNN layer has to be traced in order to export successfully to ONNX
| oncall: jit,triaged | low | Critical |
463,354,195 | pytorch | Move csrc/distributed/c10d/{comm,reducer} to libtorch.so | These classes should be part of `libtorch.so` but are currently part of `libtorch_python.so`.
cc @mrshenli | oncall: distributed,triaged | low | Minor |
463,383,959 | TypeScript | Offer an auto-import for unresolved shorthand-named object literal properties | <!-- Please search existing issues to avoid creating duplicates. -->
suppose I have exported an object called `Foo` in `foo.js`, and in another file I want to use something like this:
```JavaScript
export default configure({Foo})
```
autocomplete will not show `Foo` and of course, will not auto import from that file.
to use auto import, I have to write the code like this:
```JavaScript
export default configure({Foo:Foo})
```
Actually, autocomplete shows `Foo`, but it doesn't auto-import; it has the `abc` icon.
Could you support object shorthand in autocomplete?
<!-- Describe the feature you'd like. -->
| Suggestion,Awaiting More Feedback | low | Major |
463,388,498 | godot | Tilemap_editor_plugin add tile information to item tooltip | <!-- Please search existing issues for potential duplicates before filing yours:
https://github.com/godotengine/godot/issues?q=is%3Aissue
-->
**Godot version:**
<!-- Specify commit hash if non-official. -->
3.2 or later feature proposal
**OS/device including version:**
<!-- Specify GPU model and drivers if graphics-related. -->
all
**Issue description:**
<!-- What happened, and what was expected. -->
### Summary:
There is no quick way to see information about the tiles when editing a tilemap. The item tooltip is a good place to surface the changed properties of a tile.
### Explanation:
Tileset information is hidden away when editing a tilemap. You need to find the resource in the inspector, click the resource, find the tile by name and unfold the folded properties... You get no feedback about the tile in the tilemap editor (only the image, the name of the tile, and the number) nor in the inspector (once you click the tileset resource, the tilemap editor is closed, and if you click the tilemap node, the tileset resource is closed).
I find myself writing some of this information into the tile names, because there is no direct way to get it (opening the resource every time is slow).
### Objective:
Send useful tile information to the tooltip of the item in the tilemap editor item list.
- Not empty shapes
- Offsets different from 0,0 (if there is a shape)
- Shape One Way
- Z Index
### Alternative:
Instead of having the name and the number of each tile in its item (which takes space), it would be great to have a rich text label at the bottom of the tilemap editor that displays the information of the selected tile... space is freed in the item list and some clarity is gained. | enhancement,topic:editor | low | Major |
463,426,788 | flutter | Analyze generated code in flutter_tools | flutter_tools generates some dart code to run tests, see https://github.com/flutter/flutter/blob/master/packages/flutter_tools/lib/src/test/flutter_platform.dart.
Unfortunately it seems that the generated code is never analyzed and it currently has analyzer issues. We should add some kind of test that runs the output of the code generator through the analyzer to ensure it is error/lint/hint free.
/cc @jonahwilliams | a: tests,c: new feature,team,tool,P3,team-tool,triaged-tool | low | Critical |
463,437,673 | flutter | Implement TransformToSliverAdaptor | <!-- Thank you for using Flutter!
If you are looking for support, please check out our documentation
or consider asking a question on Stack Overflow:
* https://flutter.dev/
* https://api.flutter.dev/
* https://stackoverflow.com/questions/tagged/flutter?sort=frequent
If you have found a bug or if our documentation doesn't have an answer
to what you're looking for, then fill our the template below. Please read
our guide to filing a bug first: https://flutter.dev/docs/resources/bug-reports
-->
## Use case
I'm building an app that would require completely animating `SliverAppBar` out of view (to the top) based on the `TabController.animation` values as shown below:
<img src="https://user-images.githubusercontent.com/15171489/60546724-94117180-9d26-11e9-8cde-b034f7411954.gif" width=250 />
The following is the implementation I would want to use:
```dart
import 'dart:async';
import 'package:flutter/material.dart';
class AppNavigator extends StatefulWidget {
const AppNavigator({
Key key,
@required this.cameraWidget,
@required this.chatsWidget,
}) : super(key: key);
final Widget cameraWidget;
final Widget chatsWidget;
@override
_AppNavigatorState createState() => _AppNavigatorState();
}
class _AppNavigatorState extends State<AppNavigator> with SingleTickerProviderStateMixin {
TabController _tabController;
@override
void initState() {
super.initState();
_tabController = TabController(initialIndex: 1, vsync: this, length: 2);
}
@override
void dispose() {
_tabController.dispose();
super.dispose();
}
@override
Widget build(BuildContext context) {
final tabs = <Tab>[Tab(text: "CAMERA"), Tab(text: "CHATS")];
return Scaffold(
body: NestedScrollView(
headerSliverBuilder: (context, inner) {
return <Widget>[
AnimatedBuilder(
animation: _tabController,
child: SliverAppBar(
title: const Text('Chat App'),
pinned: true,
floating: true,
bottom: TabBar(
tabs: tabs,
controller: _tabController,
),
),
builder: (context, child) {
if (_tabController.animation.value >= 1) { // on the "CHATS" tab
return child;
}
// on the "CAMERA" tab, I'd want to translate the SliverAppBar
return Transform.translate(
child: child,
offset: Offset(0, _tabController.animation.value * -100),
);
},
),
];
},
body: TabBarView(
controller: _tabController,
children: <Widget>[
widget.cameraWidget,
widget.chatsWidget,
],
),
),
);
}
}
```
However, Flutter does not allow a widget of type `RenderTransform` to be used in place of a widget of type `RenderSliver`, so when using `AnimatedBuilder` I run into the following error:
```
Launching lib/main.dart on Android SDK built for x86 in debug mode...
Built build/app/outputs/apk/debug/app-debug.apk.
I/CameraManagerGlobal(31494): Connecting to camera service
D/EGL_emulation(31494): eglMakeCurrent: 0xe45a9b40: ver 3 0 (tinfo 0xd00c8b70)
I/zygote (31494): Do partial code cache collection, code=29KB, data=24KB
I/zygote (31494): After code cache collection, code=29KB, data=24KB
I/zygote (31494): Increasing code cache capacity to 128KB
I/flutter (31494): βββ‘ EXCEPTION CAUGHT BY WIDGETS LIBRARY ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
I/flutter (31494): The following assertion was thrown building AnimatedBuilder(animation: Instance of 'TabController',
I/flutter (31494): state: _AnimatedState#9d909):
I/flutter (31494): A RenderNestedScrollViewViewport expected a child of type RenderSliver but received a child of type
I/flutter (31494): RenderTransform.
I/flutter (31494): RenderObjects expect specific types of children because they coordinate with their children during
I/flutter (31494): layout and paint. For example, a RenderSliver cannot be the child of a RenderBox because a
I/flutter (31494): RenderSliver does not understand the RenderBox layout protocol.
I/flutter (31494):
```
## Expensive Alternative
As of now, I have had to settle for the following implementation which is really expensive as it involves calling `setState` at the root of the app:
```dart
import 'dart:async';
import 'package:flutter/material.dart';
class AppNavigator extends StatefulWidget {
const AppNavigator({
Key key,
@required this.cameraWidget,
    @required this.chatsWidget,
}) : super(key: key);
final Widget chatsWidget;
final Widget cameraWidget;
@override
_AppNavigatorState createState() => _AppNavigatorState();
}
class _AppNavigatorState extends State<AppNavigator> with SingleTickerProviderStateMixin {
bool _pinned = true;
TabController _tabController;
ScrollController _scrollController;
@override
void initState() {
super.initState();
_tabController = TabController(initialIndex: 1, vsync: this, length: 2);
_tabController.animation.addListener(_handleChangeToCameraTab);
_scrollController = ScrollController();
}
void _handleChangeToCameraTab() {
final value = _tabController.animation.value;
setState(() {
if (value < 1) {
_scrollController.jumpTo(
_scrollController.position.maxScrollExtent * (1.0 - value)
);
}
// when animating the SliverAppBar, I don't want the app bar to be pinned
_pinned = value >= 1.0;
});
}
@override
void dispose() {
_tabController.dispose();
_scrollController.dispose();
super.dispose();
}
@override
Widget build(BuildContext context) {
final tabs = <Tab>[Tab(text: "CAMERA"), Tab(text: "CHATS")];
return Scaffold(
body: NestedScrollView(
controller: _scrollController,
headerSliverBuilder: (context, inner) {
return <Widget>[
SliverAppBar(
title: const Text('Chat App'),
pinned: _pinned,
floating: true,
bottom: TabBar(
tabs: tabs,
controller: _tabController,
),
),
];
},
body: TabBarView(
controller: _tabController,
children: <Widget>[
widget.cameraWidget,
widget.chatsWidget,
],
),
),
);
}
}
```
## Proposal
I propose that just as we have `SliverToBoxAdapter`, we also have an implementation of `TransformToSliverAdapter` that would allow the use of animations on Slivers.
| c: new feature,framework,a: animation,f: scrolling,P3,team-framework,triaged-framework | low | Critical |
463,441,057 | flutter | Update attach to not require a device | As we discussed with @jonahwilliams in the document, the device is required only so that we know how to do port forwarding, and for embedding Flutter on Java Swing we don't need this. | tool,c: proposal,P3,team-tool,triaged-tool | low | Minor |
463,455,925 | flutter | Update 'flutter screenshot' to fail better if a device doesn't support screenshots | I'm using Flutter master branch version 1a374d820de32d359c06a8ecf0e1348e3ce69a5a, and the following error happened although `flutter devices` does show a connected Moto G4 device:
```
liyuqian@liyuqian:~/Downloads$ flutter devices
3 connected devices:
Moto G 4 β’ ZY2245H6KS β’ android-arm β’ Android 7.0 (API 24)
Linux β’ Linux β’ linux-x64 β’ Linux
Chrome β’ chrome β’ web-javascript β’ Google Chrome 75.0.3770.100
liyuqian@liyuqian:~/Downloads$ flutter screenshot
No supported devices connected.
Must have a connected device
liyuqian@liyuqian:~/Downloads$ flutter screenshot --type=skia --observatory-uri=http://127.0.0.1:35603/atA841bQWXA=/
No supported devices connected.
Must have a connected device
```
The app that I ran is
```
liyuqian@liyuqian:~/flutter/flutter/dev/benchmarks/macrobenchmarks$ flutter run --profile
Initializing gradle... 0.7s
Resolving dependencies... 1.3s
Launching lib/main.dart on Moto G 4 in profile mode...
Running Gradle task 'assembleProfile'...
Running Gradle task 'assembleProfile'... Done 1.2s
Built build/app/outputs/apk/profile/app-profile.apk (8.0MB).
An Observatory debugger and profiler on Moto G 4 is available at http://127.0.0.1:35603/atA841bQWXA=/
For a more detailed help message, press "h". To quit, press "q".
```
This blocks fixing a P0 bug for one of Google's internal clients. | c: new feature,tool,a: quality,customer: dream (g3),P2,team-tool,triaged-tool | low | Critical |
463,460,477 | youtube-dl | Blogger Video Support | ## Checklist
- [x] I'm reporting a new site support request
- [x] I've verified that I'm running youtube-dl version **2019.07.02**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that none of provided URLs violate any copyrights
- [x] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
- Single video: https://www.blogger.com/video.g?token=AD6v5dz18g_1SzBsAhKc1WrVQTXonlsRYUGqRv3GQ0pRq8zd9j0UTjOofODFRLRzVYrVEkne0gq1XtesVPk37EsSG7AX9qlpHwVvbfnVTTao6gdWnq_EUA-lz2QYAESDovwV0ZdXPeP5
## Description
Link was embedded within a Blogger Blog
Add support for the extraction - it's YouTube/YouTube-like style (uses some youtube embed) so I believe it should be supported and also not too hard to do so probably | site-support-request | low | Critical |
463,476,437 | TypeScript | [Feature request] sort literals in .d.ts |
## Search Terms
<!-- List of keywords you searched for before creating this issue. Write them down here so that others can find this suggestion more easily -->
## Suggestion
<!-- A summary of what you'd like to see added or changed -->
sort string literal in .d.ts
## Use Cases
avoid the string literal union randomly changing order when the source code didn't change,
and make the code history easier to read
in this issue thread, `string literal` means => `"a" | "c" | "b"`
this doesn't only happen with `Record`; it can happen with any type that is automatically created by the TypeScript emit with a `string literal` union
## Examples
<!-- Show how this would be used and what the behavior would be -->
> `"a" | "c" | "b"` is create by typesctipt emit and some time will random change order
- https://github.com/bluelovers/cjk-convert/blob/1a0ba9f85e0bf7fe916158cc14a4ef87507106d9/lib/zh/table/table.ts#L1619-L1635
- https://github.com/microsoft/TypeScript/issues/30328
- https://github.com/bluelovers/novel-travis-test/commit/3b88745e187736797d9a30d4f4bbd927b0c6dd30#diff-3006c96d8657ccca5b9b3618a256314c
- https://github.com/bluelovers/novel-travis-test/commit/d33c4a61602a8536d1414d36e027eb99845104f1#diff-3006c96d8657ccca5b9b3618a256314c
- https://github.com/bluelovers/novel-travis-test/commit/2d749195c39ef386e6c4bad1ae6e19d007c03e93#diff-3006c96d8657ccca5b9b3618a256314c
### current .d.ts output
---
```ts
export declare const table_plus: Record<"a" | "c" | "b">
// "a" | "c" | "b" is random change order
```
### in this request
> sort it (using a simple array.sort()) when emitting the `.d.ts` output
```ts
export declare const table_plus: Record<"a" | "b" | "c">
```
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [ ] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [ ] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
*Maintainer's note* [@sandersn]: To implementers: When you implement this, be sure to sort other literal types too. Consider sorting other things, like symbol, by the order of their string representation. | Suggestion,Effort: Difficult,Experience Enhancement | medium | Critical |
463,477,409 | rust | Coherence violation when inherent method is added to type | It seems that because inherent methods are always called before trait methods, when a dependency adds an inherent method to a struct (which _should_ be a backwards-compatible change to its API requiring no major semver bump) your code that was previously calling a trait method on that struct will (silently) switch to calling the inherent method.
As an example, I could make an extension trait for `Option` defining a method named `foo`, publish `my_cool_crate`, and then if an inherent method named `foo` is added to `Option` in a new version of `core` then my crate can suddenly behave differently despite merely upgrading my Rust compiler. Perhaps it could also fail to compile (like if the signature is "compatible" at the callsite but then a typechecking error happens elsewhere).
I'm not sure if this is a coherence violation or not, or whether this has already been observed/discussed elsewhere. (Had no luck when I searched for github issues about it.) | A-trait-system,T-compiler,T-types | low | Critical |
463,489,524 | rust | `size_of_val` in a generator can make the generator bigger | Consider the following code:
```rust
async fn foo() {
let mut x = get_future();
dbg!(std::mem::size_of_val(&x));
x.await
}
```
Today, having the `dbg!` line _roughly doubles_ the size of the future returned by `foo`. More precisely, it causes us to allocate storage for `x` twice (once for `x`, and once for the `pinned` variable that `x` is moved into inside the `await`).
This unfortunate state of events is caused by the fact that we cannot "peer into" `size_of_val` and see that the address of `x` never escapes the function. Without knowing this, we can't optimize away the storage of `x` once it has been moved.
This was first discussed here: https://github.com/rust-lang/rust/issues/59123#issuecomment-501401232. One promising solution ([suggested](https://github.com/rust-lang/rust/issues/59123#issuecomment-501472425) by @RalfJung) is that we inline all occurrences of `size_of_val` (and possibly some other intrinsics) in a MIR pass, before we get to the generator transform. | C-enhancement,T-compiler,A-coroutines,I-heavy,C-optimization | medium | Major |
463,520,188 | flutter | Review plugins lifecycle | go/flutter-plugin-registration
Review the plugin framework for add-to-app scenarios where either
- plugins might never be invoked or needed during the application's runtime but nevertheless can run code or latch onto host application events
- multiple flutter engines are created but not all engine instances need all plugins | engine,a: existing-apps,P2,a: plugins,team-engine,triaged-engine | low | Minor |
463,595,001 | vscode | [json] format on save should remove last trailing comma in JSON with json-language-features | I am using `json-language-features` to autoformat my JSON/JSONC files, but sometimes I accidentally add a trailing comma after the last property or forget a comma while moving properties around. `json-language-features` then reports an error, but it does not autoformat the file by removing the trailing comma or adding the missing one, which IMO would be very helpful.
Previously I used prettier (the `json-stringify` parser), which fixed the trailing/missing commas and formatted the file well, but I wanted to use built-in extensions for most things.
Input:
```json
{
"nyc": {
"extension": [
".ts",
".ts"
".ts",
]
}
}
```
Expected:
```json
{
"nyc": {
"extension": [
".ts",
".ts",
".ts"
]
}
}
```
Reality:
```json
{
"nyc": {
"extension": [
".ts",
".ts" <-- Expected comma
".ts", <-- Trailing comma
]
}
}
``` | feature-request,json | low | Critical |
463,648,413 | terminal | Epic: Add configuration options for font rendering things (fallback, line height, ligatures, ...) |
```[tasklist]
### Tasks
- [x] #759
- [ ] #2664
- [x] #1298
- [ ] #5093
- [ ] #3498
- [ ] #956
- [x] #1751
- [x] #5828
- [x] #6049
- [ ] #10231
```
### consider (backlog)
- [x] #6678 should we support fractional point sizes?
#### original content
This is a summary from #714 #455
- To properly handle box drawing characters we may need ability to explicitly set line height **in pixels** (to avoid rounding error). We may also need baseline position settings to avoid clipping.
- For the arrow issue, we may need the ability to support specifying a fallback sequence of fonts rather than a single master font, much like how CSS works.
- We may also need some method to allow WT to change how it measures characters in a fallback font.
- Option for antialiasing modes, at least for the case that the console's background _isn't_ transparent. | Issue-Feature,Area-Rendering,Product-Terminal | medium | Critical |
463,658,442 | pytorch | "Floating point exception" after trying the method from the issue #22382 | In order to use the libtorch C++ API, I tried an example from https://pytorch.org/tutorials/advanced/cpp_export.html#a-minimal-c-application.
Firstly, I got the same error reported in issue #22382.
Then I tried the method from it and 'make' succeeded!
However, I got a 'Floating point exception' after executing './example model.pt'.
I debugged and found that the crash was caused by the line 'at::Tensor output = module.forward(inputs).toTensor()',
and I don't know why.
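For reference, the `model.pt` in that tutorial is produced roughly like this (a sketch following the tutorial's flow; the model and the example input shape are placeholders and must match whatever the C++ side passes into `module.forward`):

```python
import torch
import torchvision

# Sketch of how the tutorial exports model.pt via tracing.
# The model and the example input shape are placeholders here.
model = torchvision.models.resnet18()
model.eval()

example = torch.rand(1, 3, 224, 224)  # must match the input built in C++
traced = torch.jit.trace(model, example)
traced.save("model.pt")
```

If the tensor pushed into `inputs` on the C++ side has a different shape or dtype than the tracing example, the traced module can misbehave at `forward`, so that is worth ruling out first.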
cc @suo | oncall: jit,triaged | low | Critical |
463,702,174 | terminal | Feature request: create new profile from .lnk file | # Summary of the new feature/enhancement
Development environments, such as Visual Studio, SDKs, etc. install shortcuts that open command prompts with a pre-configured environment (e.g. "x64 Native Tools Command Prompt for VS 2019"). It's tedious and error prone to copy settings from these shortcuts. Terminal should do it automatically, either on a shortcut-by-shortcut basis, or even better letting you dynamically import a whole directory of shortcuts
# Proposed technical implementation details
Until Terminal has a UI for creating and editing profiles, this feature will simply involve changes to the schema of profiles.json
I propose a new key for profile dictionaries, let's call it "`importDefaultsFromLnk`", which can specify a path to a shell link (.lnk) file. A profile can contain a single `importDefaultsFromLnk` field and nothing else, and all settings will come from the shell link and its `NT_CONSOLE_PROPS` and `NT_FE_CONSOLE_PROPS` data blocks (except those that don't make sense for Terminal, of course, like `NT_CONSOLE_PROPS::dwWindowOrigin`); the `name` field would default to the file's display name (`IShellItem::GetDisplayName`), so that it would automatically hide the extension and/or load a localized description that might be present. If other profile fields are explicitly specified, they'll override settings from the shell link. A `null` value for a field could be used to mean "override with the Terminal default"
Importing a set of shortcuts from a directory (e.g. all shortcuts in Programs folder "Visual Studio 2019\Visual Studio Tools\VC", or even better all shortcuts under "Visual Studio 2019\Visual Studio Tools", recursively) could be done naively, with a profile field that we might call "`importProfilesFromDirectory`", but unlike importing a single shortcut this feature has non-trivial implications, including but not limited to:
- should profiles imported from a directory be grouped in a sub-menu of the "new tab" menu?
- what about recursively imported profiles? should there be sub-sub-menus?
- how do we specify a dynamically imported profile in `globals\defaultProfile`? | Issue-Feature,Help Wanted,Area-Settings,Product-Terminal | low | Critical |
463,717,644 | material-ui | [Drawer] Container of Drawer jumps | <!--- Provide a general summary of the issue in the Title above -->
<!-- Checked checkbox should look like this: [x] -->
- [x] This is not a v0.x issue. <!-- (v0.x is no longer maintained) -->
- [x] I have searched the [issues](https://github.com/mui-org/material-ui/issues) of this repository and believe that this is not a duplicate.
## Expected Behavior π€
If I initialize the drawer in a container with `SlideProps={{ direction: 'left' }}` (or `SlideProps={{ direction: 'up' }}`), the container should stay immovable
## Current Behavior π―
Container jumps
## Steps to Reproduce πΉ
Link: https://codesandbox.io/s/serene-bouman-wvvf6
1. initialize `Drawer` inside a container
2. set `anchor="right"`
3. set `SlideProps={{ direction: 'left' }}` or `SlideProps={{ direction: 'up' }}`
4. observe how the container jumps
## Context π¦
I want to create a drawer that is located in a container and appears from the right side of the window.
For example, if I set `SlideProps` to other values (`right` or `down`), my container stays still. Here are some examples of this issue:
1. `anchor="left"` `SlideProps={{ direction: 'left' }}` **Incorrect behavior**

2. `anchor="left"` `SlideProps={{ direction: 'right' }}` **Correct behavior**

3. `anchor="right"` `SlideProps={{ direction: 'left' }}` **(_my case_)** **Incorrect behavior**

4. `anchor="right"` `SlideProps={{ direction: 'right' }}` **Correct behavior**

**BUT!**
I explored the sources of `Drawer` and noticed that this issue comes from the [`Slide` component](https://github.com/mui-org/material-ui/blob/master/packages/material-ui/src/Slide/Slide.js), particularly from line 43 to line 57.
If, in this fragment, I change the first translate in `translateX(${window.innerWidth}px) translateX(-${rect.left - offsetX}px)` to `translateX(${window.innerWidth - 32}px)`, it works fine (the offset has to be at least 32, I don't know why; 31 or less does not work).
## Your Environment π
| Tech | Version |
|--------------|---------|
| Material-UI | v4.1.3 |
| React | v16.8.6 |
| Browser | Chrome v75 |
| bug π,component: transitions | low | Critical |
463,760,978 | go | x/mobile: iOS project won't compile with framework built by gomobile | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.12.6 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/myname/Library/Caches/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/myname/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/opt/go/libexec"
GOTMPDIR=""
GOTOOLDIR="/usr/local/opt/go/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/cy/b0lrq2m55js29jlf_hhm4xtc0000gn/T/go-build880588199=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>
### What did you do?
I created a go package and used gomobile to bind it for iOS:
```
gomobile bind -v -target=ios mypackage
```
And then referenced the framework in my iOS project
Then complied the iOS project
### What did you expect to see?
The project should compile
### What did you see instead?
error message
ld: in /PATH_TO_THE_FRAMEWORK(go.o), building for iOS, but linking in object file (/PATH_TO_THE_FRAMEWORK(go.o)) built for , for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
| help wanted,NeedsInvestigation,mobile | low | Critical |
463,821,037 | TypeScript | Dead return statements in a generator should offer a did-you-mean-yield codefix |
**TypeScript Version:** 3.0.0-dev.201xxxxx
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:** yield return generator
Currently it is possible to have the following code:
**Code**
```ts
async function *myGeneratorFunction(): AsyncIterableIterator<number> {
return 1;
return 2;
return 3;
}
```
**Expected behavior:**
A generator function that yields numbers 1 - 3
**Actual behavior:**
No syntax/runtime error but due to a human mistake the generator function is useless.
I copy/pasted an existing generator function, replaced the body with the new implementation, and mistakenly used `return` instead of `yield`.
It would be nice if TypeScript could warn when it's encountering a `return` with an actual value inside a generator function.
It makes no sense to `return` a value.
Could be combined with a codefix in the IDE: `Did you mean yield?`
**Playground Link:** <!-- A link to a TypeScript Playground "Share" link which demonstrates this behavior -->
**Related Issues:**
| Suggestion,Experience Enhancement | low | Critical |
463,858,214 | TypeScript | Spread with enum index key, invalid members don't trigger errors | **TypeScript Version:** 3.5.1
**Search Terms:**
object spread enum index missing keys
**Code**
[Playground link has code](https://www.typescriptlang.org/play/#code/KYOwrgtgBAcg9gFwJYDMkGMCGy4gLLAAmSkUA3gFBTVQCCACvQDRU30ASA8jAKIUC+FCqFLxkaLDhAAVAJ4AHYOVbUAYpwAyGzgHUeAERY0oGgJIBpAwKFIQCYACcUmdErGoM2JLnoPgKR1BXACVgdDgHQmVqCCISCAAuWEQPSW98OMgjKAQFYCT3CS9cOUVs0EwAIwAbIiTKuDhazBBrCnCQAGcEKGAADyRu2wBzX39OgpSiqTGAvxAQsIjCAG0AXSgAXigVlWjjGljiSEnxTykCY4gAOgZmPeNcxVPU4pk86-UtXQMmKAeaBUanUoM5qp0lHtBGsANxCAD08Kgpl6fUU6B6CAAFoMcnBQZgkNU-hClJVgNU4AB3dq4bqowbIECjPwoTp4TDyeQjLYMobM2ada5+QhgVwAHj2lAONBWAGtgLIoLZkmc0rhLvE1klpTKDvLFcqQKrXlJSsBtVAGk1gC0AdR+HDjIIAHwACj2btiEDgf3krIAlFsXfsZR16SB+ggkk9gHAUFBvfjtrq9TRrhmk9k09QVv7-Ncjlqdfac4ioBnC8AfXnWVWrms-uWqUhsTkcZ0oOEILE7EQoHAwAg-tjAlAqYPqlFyYnBp0eQrZJ0SfjW1BOljJ1FnETS2nawXY5ayIm41SkvmUNcgbUovwm0iEIhMNVqkrhkgAG48qkOXDDKBPxfMAlFjKAX06fE0EjPc9UwTpCGcBCUCSAAiHFUKgctMEIYhmVRBAHEwUEkApQhO0IOBgE6EAAHIekcP8HF6VssUcWCHRoUtBDTPwEDABxjUjPoECdGh7ylfhwM7FpZAoAM4SAA)
**Expected behavior:**
Missing keys & incorrect value types when I build a nested object using the spread operator with index keys, ought to throw errors for invalid types
**Actual behavior:**
It seems like the child object can have whatever shape I want, the types aren't being checked at all
**Playground Link:** [This is same as above playground link](https://www.typescriptlang.org/play/#code/KYOwrgtgBAcg9gFwJYDMkGMCGy4gLLAAmSkUA3gFBTVQCCACvQDRU30ASA8jAKIUC+FCqFLxkaLDhAAVAJ4AHYOVbUAYpwAyGzgHUeAERY0oGgJIBpAwKFIQCYACcUmdErGoM2JLnoPgKR1BXACVgdDgHQmVqCCISCAAuWEQPSW98OMgjKAQFYCT3CS9cOUVs0EwAIwAbIiTKuDhazBBrCnCQAGcEKGAADyRu2wBzX39OgpSiqTGAvxAQsIjCAG0AXSgAXigVlWjjGljiSEnxTykCY4gAOgZmPeNcxVPU4pk86-UtXQMmKAeaBUanUoM5qp0lHtBGsANxCAD08Kgpl6fUU6B6CAAFoMcnBQZgkNU-hClJVgNU4AB3dq4bqowbIECjPwoTp4TDyeQjLYMobM2ada5+QhgVwAHj2lAONBWAGtgLIoLZkmc0rhLvE1klpTKDvLFcqQKrXlJSsBtVAGk1gC0AdR+HDjIIAHwACj2btiEDgf3krIAlFsXfsZR16SB+ggkk9gHAUFBvfjtrq9TRrhmk9k09QVv7-Ncjlqdfac4ioBnC8AfXnWVWrms-uWqUhsTkcZ0oOEILE7EQoHAwAg-tjAlAqYPqlFyYnBp0eQrZJ0SfjW1BOljJ1FnETS2nawXY5ayIm41SkvmUNcgbUovwm0iEIhMNVqkrhkgAG48qkOXDDKBPxfMAlFjKAX06fE0EjPc9UwTpCGcBCUCSAAiHFUKgctMEIYhmVRBAHEwUEkApQhO0IOBgE6EAAHIekcP8HF6VssUcWCHRoUtBDTPwEDABxjUjPoECdGh7ylfhwM7FpZAoAM4SAA)
**Related Issues:** I'm not really sure what the root limitation that's causing this is so it's hard for me to find similar issues. | Needs Investigation | low | Critical |
463,868,923 | godot | AnimatedTexture sprite sheet support | **Godot version:**
<!-- Specify commit hash if non-official. -->
3.1
**Issue description:**
<!-- What happened, and what was expected. -->
AnimatedTexture is not capable of using AtlasTextures for its frames. When loading an AtlasTexture the animation will stop, and the entire first frame displays.
**Steps to reproduce:**
Create an AnimatedTexture with four frames. Make each frame an AtlasTexture that is one quarter of the Godot Icon.
**Comment:**
We should document which texture types animated texture supports.
I would also like the ability for AnimatedTexture to support sprite sheets; this would make it much easier to configure them in general, and make it more convenient to use them as shader flipbooks, for effects such as described in this GDC talk: https://www.gdcvault.com/play/1024630/Visual-Effects-Bootcamp-The-Rise (Technically you can do it without sprite sheets, but it's quite a hassle because the workflow is dreadfully slow.) | enhancement,confirmed,topic:2d | medium | Major |
463,871,488 | rust | Rustdoc should warn about unused reference links | rustdoc can easily generate external links by using a full URL as the target. However, this only works with inline links. It doesn't work when using footnote links, sometimes called "reference links". If you try to use a full URL as the target of a footnote link, rustdoc will throw up a `intra_doc_link_resolution_failure` warning. Here's an example.
`src/libs.rs`:
```rust
//!
//! This is an inline link to [`CStr`](https://doc.rust-lang.org/stable/std/ffi/struct.CStr.html).
//! It works!
//!
//! This is an intra-crate footnote link to [`MyType`][MyType]. It also works.
//!
//! This is a footnote link, aka "reference link", to [`OsStr`](OsStr). It fails :( .
//!
//! [MyType]: t/struct.MyType.html
//! [OsStr]: https://doc.rust-lang.org/stable/std/ffi/struct.OsStr.html
pub mod t {
pub struct MyType {}
}
```
```
> cargo +nightly doc --no-deps
Documenting external_reference_link v0.1.0 (/tmp/external_reference_link)
warning: `[OsStr]` cannot be resolved, ignoring it...
--> src/lib.rs:7:65
|
7 | //! This is a footnote link, aka "reference link", to [`OsStr`](OsStr). It fails :( .
| ^^^^^ cannot be resolved, ignoring
|
= note: #[warn(intra_doc_link_resolution_failure)] on by default
= help: to escape `[` and `]` characters, just add '\' before them like `\[` or `\]`
Finished dev [unoptimized + debuginfo] target(s) in 1.06s
> rustup run nightly rustc --version
rustc 1.36.0-nightly (7d5aa4332 2019-05-16)
```
The workaround is to only use inline links for external targets. But that uglies up the code, and it requires duplication if the same comment block needs to instantiate the link multiple times. | T-rustdoc,C-enhancement,E-medium,A-intra-doc-links | low | Critical |
463,872,419 | terminal | Terminal should force pseudoconsole host into UTF-8 codepage by default | It's 2019, after all. Maybe we should introduce a flag that starts up the pseudoconsole host in codepage 65001 so that we make good on our promise of "emoji just work and everything else works like it should too," and use WT as a _real_ opportunity to push the boundaries here.
```
π
πͺ πͺ
π
```
<!-- fite me -->
---
_maintainer note, Aug 2023_
> It's {{current_year}}, after all
Also, we want to take into account _arbitrary_ codepages, ala #15678 | Product-Conpty,Area-TerminalConnection,Product-Terminal,Issue-Task,Priority-3 | medium | Critical |
463,883,648 | go | net/http/httptrace: attaching a ClientTrace twice to the same context causes stack overflow | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.12.5 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What did you do?
```
package main
import (
"log"
"net/http"
"net/http/httptrace"
)
func main() {
tracer := &httptrace.ClientTrace{
ConnectStart: func(network, addr string) {
log.Println("traced")
},
}
req, err := http.NewRequest("GET", "http://localhost:9999/hello", nil)
if err != nil {
log.Printf("error creating request %v", err)
}
// Adding the same trace twice causes a stack overflow.
req = req.WithContext(httptrace.WithClientTrace(req.Context(), tracer))
req = req.WithContext(httptrace.WithClientTrace(req.Context(), tracer))
client := &http.Client{}
res, err := client.Do(req)
if err != nil {
log.Printf("request error: %v", err)
}
if res != nil && res.Body != nil {
res.Body.Close()
}
}
```
### What did you expect to see?
"traced" gets printed twice, and then a request error because nothing is listening on localhost:9999.
### What did you see instead?
The stack overflows.
What I'm doing is silly, but it happened in production for a service due to a lot of indirection on retries and attempting to use a single tracer object for all traces.
The bug seems to be here https://github.com/golang/go/blob/b412fde53a6b53475e25aaa9e49f3c6df3c48716/src/net/http/httptrace/trace.go#L202-L204
Since `*t` and `*old` are the same, the `of.Call()` ends up calling itself. | help wanted,NeedsInvestigation | low | Critical |
463,883,822 | pytorch | Error in equation | ## π Documentation
Small error in the comment documenting cosine similarity. I think:
` \text{similarity} = \dfrac{x_1 \cdot x_2}{\max(\Vert x_1 \Vert _2 \cdot \Vert x_2 \Vert _2, \epsilon)}.`
, should be:
` \text{similarity} = \dfrac{x_1 \cdot x_2}{\max(\Vert x_1 \Vert _2 \cdot \Vert x_2 \Vert _1, \epsilon)}.`
That is, the last \Vert_2 should actually be \Vert_1
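For what it's worth, a quick numerical check (a sketch that only uses the public API and makes no claim about which form is intended) can show which normalization `F.cosine_similarity` actually matches:

```python
import torch
import torch.nn.functional as F

x1, x2 = torch.randn(4, 8), torch.randn(4, 8)
eps = 1e-8

api = F.cosine_similarity(x1, x2, dim=1, eps=eps)
dot = (x1 * x2).sum(dim=1)

# candidate denominators: both L2 norms vs. an L2 * L1 mix
both_l2 = dot / (x1.norm(p=2, dim=1) * x2.norm(p=2, dim=1)).clamp(min=eps)
mixed = dot / (x1.norm(p=2, dim=1) * x2.norm(p=1, dim=1)).clamp(min=eps)

print(torch.allclose(api, both_l2), torch.allclose(api, mixed))
```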
| module: docs,triaged | low | Critical |
463,885,753 | pytorch | `binary_linux_libtorch_2.7m_cu100_devtoolset3_build` times out after running for 5 hours | Example build: https://circleci.com/gh/pytorch/pytorch/2133467.
CircleCI puts a hard limit of 5 hours on each build, and as a result we need to break up the `binary_linux_libtorch_2.7m_cu100_devtoolset3_build` job in order for it to finish. @pjh5 suggested that we can have one build job for each libtorch variant: https://github.com/pytorch/builder/blob/c4bc4e2f020616944399e802a7de0eea86051b8e/manywheel/build_common.sh#L120, which should allow us to work around the timeout issue.
cc. @kostmo @pjh5 | module: build,triaged,better-engineering | low | Minor |
463,931,704 | react-native | Width measures of Text components using a custom font are wrong | React Native version:
```
System:
OS: macOS 10.14.5
CPU: (12) x64 Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz
Memory: 1002.17 MB / 32.00 GB
Shell: 5.3 - /bin/zsh
Binaries:
Node: 10.15.3 - ~/.nvm/versions/node/v10.15.3/bin/node
Yarn: 1.16.0 - /usr/local/bin/yarn
npm: 6.4.1 - ~/.nvm/versions/node/v10.15.3/bin/npm
Watchman: 4.9.0 - /usr/local/bin/watchman
SDKs:
iOS SDK:
Platforms: iOS 12.2, macOS 10.14, tvOS 12.2, watchOS 5.2
Android SDK:
API Levels: 23, 25, 26, 27, 28
Build Tools: 27.0.3, 28.0.2, 28.0.3, 29.0.0
System Images: android-28 | Google APIs Intel x86 Atom, android-28 | Google Play Intel x86 Atom, android-29 | Google Play Intel x86 Atom
Android NDK: 17.2.4988734
IDEs:
Android Studio: 3.4 AI-183.6156.11.34.5522156
Xcode: 10.2.1/10E1001 - /usr/bin/xcodebuild
npmPackages:
react: 16.8.6 => 16.8.6
react-native: 0.60.0 => 0.60.0
```
β οΈ This is not a regression from RN 0.60, can also be reproduced in RN 0.57.
___
On Android, some manufacturers (like Samsung, OnePlus, LG, HTC, and others) let you change the default font on your device without having a rooted device, and, of course, rooted devices can also do that.
The `<Text>` component provided by React Native doesn't seem to measure the layout correctly. Observed cases were when the default font is **not** Roboto and when a thick font weight is applied.
The result is that on these specific devices, text components with a thicker font weight will be chopped off (it seems to be wrapped in some cases?). It could be a missing dot, or even an entire word.
```javascript
<Text style={{ fontWeight: 'bold' }}>
This looks great!
</Text>
```
| Roboto (Default font) | OnePlus Slate |
| - | - |
| <img src="https://user-images.githubusercontent.com/7189823/60620134-62d98580-9da8-11e9-9d20-9a139fe77ece.png" height="500" /> | <img src="https://user-images.githubusercontent.com/7189823/60620084-42113000-9da8-11e9-844d-3bbe32a71f75.png" height="500" /> |
[Repository to reproduce this.](https://github.com/charpeni/react-native-issue-font-weight-custom-fonts)
### Workaround
To influence the layout measures of the text component, we can set a border of at least `2` on it. By doing this, the layout will be properly measured, but the text won't be vertically centered.
From there, to vertically center the text, we can subtract double the border we previously set as a negative `marginBottom`. (As seen in the screenshots above). | Platform: Android,Component: Text,Contributor,Resolution: Backlog,Bug | medium | Critical |
463,965,207 | vscode | Hyper modifier is unknown | - VSCode Version: 1.35.1
- OS Version: Ubuntu 18.04.0 LTS
Steps to Reproduce:
1. Rebind any key to Hyper (Mod3)
2. Try to create shortcuts Hyper + Key
I bind **Caps Lock** to be working as **Hyper** modifier (**Mod3**). And wanted to use that in VS Code shortcuts. But whenever I press **Caps Lock** in "Change Keybinding" dialog it is not recognized as modifier.
<!-- Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled? Yes
| feature-request,keybindings | low | Minor |
463,965,667 | go | cmd/compile: ineffective branch caused by defer | ### What version of Go are you using (`go version`)?
<pre>
$ go version
go version devel +d410642 Mon Jul 1 21:30:23 2019 +0000 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes.
### What did you do?
I was testing the behavior of `defer`, so I ended up compiling the following functions (https://godbolt.org/z/uALdNA):
```go
package test
func f() {
}
func usef() {
defer f()
}
```
### What did you expect to see?
I expected the generated assembly to not contain any unnecessary branches.
### What did you see instead?
Instead, there is a repeated block of instructions that is preceded by a conditional jump (https://godbolt.org/z/uALdNA):
```asm
testl AX, AX
jne applyf_pc83
xchgl AX, AX
call runtime.deferreturn(SB)
movq 64(SP), BP
addq $72, SP
ret
applyf_pc83:
xchgl AX, AX
call runtime.deferreturn(SB)
movq 64(SP), BP
addq $72, SP
ret
```
| NeedsInvestigation,compiler/runtime | low | Minor |
463,968,632 | godot | Editor doesn't reload changed resources. | ___
***Bugsquad edit:** This issue has been confirmed several times already. No need to confirm it further.*
___
Basically a copy of #4769, but it is still an issue.
**Godot version:** v 3.1.1.stable.official
<!-- Specify commit hash if non-official. -->
**OS/device including version:** Linux Mint 19.1
<!-- Specify GPU model and drivers if graphics-related. -->
**Issue description:**
The editor doesn't reload changed resources edited by ResourceSaver.
**Steps to reproduce:**
1. Create a resource and save it with ResourceSaver in script (version 1).
2. Open the resource with editor it seems everything is fine.
3. Create another different resource and save it to the same path (version 2).
4. Open the overwritten resource; you can see it is still version 1.
5. Check the resource file with another text editor; it is correct.
6. Reopen the project and open the resource file; it will show version 2.
| bug,topic:editor,confirmed,documentation | medium | Critical |
463,974,881 | pytorch | No assertion when using scatter_ on a non-contiguous tensor | ## π Bug
Don't know if I should call it a bug, but I think it is better if you could add an assertion if people use scatter_ on a non-contiguous tensor, since the scatter_ function will behave unexpectedly in this case. In other words, the output of scatter_ for a non-contiguous tensor is different from its output for an "equivalent" contiguous tensor.
## To Reproduce
Steps to reproduce the behavior:
Run the code sample and observe the values of z1 and z2. They are different, but users tend to think they are the same. It would be better to add an assertion if a user calls scatter_ on z2.
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
## Expected behavior
```
import torch
x = torch.rand(2, 5)
y1 = torch.zeros(3, 5)
y2 = torch.zeros(1, 5).expand(3,-1)
z1 = y1.scatter_(0, torch.tensor([[0, 1, 2, 0, 0], [2, 0, 0, 1, 2]]), x)
z2 = y2.scatter_(0, torch.tensor([[0, 1, 2, 0, 0], [2, 0, 0, 1, 2]]), x)
print(z1)
print(z2)
print(z1-z2)
```
output:
```
tensor([[0.3873, 0.7741, 0.0770, 0.6895, 0.3309],
[0.0000, 0.8596, 0.0000, 0.3726, 0.0000],
[0.3587, 0.0000, 0.7394, 0.0000, 0.6455]])
tensor([[0.3587, 0.7741, 0.0770, 0.3726, 0.6455],
[0.3587, 0.7741, 0.0770, 0.3726, 0.6455],
[0.3587, 0.7741, 0.0770, 0.3726, 0.6455]])
tensor([[ 0.0286, 0.0000, 0.0000, 0.3169, -0.3145],
[-0.3587, 0.0854, -0.0770, 0.0000, -0.6455],
[ 0.0000, -0.7741, 0.6624, -0.3726, 0.0000]])
```
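Until such a check exists in PyTorch itself, a user-side guard along these lines (a minimal sketch; `safe_scatter_` is a made-up helper name) turns the silent mismatch into an explicit error:

```python
import torch

def safe_scatter_(target, dim, index, src):
    # Hypothetical guard: scatter_ writes elements independently, which is
    # unsafe when the target shares storage across rows (e.g. via expand()).
    if not target.is_contiguous():
        raise ValueError("scatter_ target must be contiguous; call .contiguous() first")
    return target.scatter_(dim, index, src)
```

With `y2 = torch.zeros(1, 5).expand(3, -1)` from the repro above, this raises instead of silently writing through the shared row.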
## Environment
PyTorch version: 1.1.0
Is debug build: No
CUDA used to build PyTorch: 10.0.130
OS: Ubuntu 16.04.6 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609
CMake version: Could not collect
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: Tesla P100-PCIE-16GB
GPU 1: Tesla P100-PCIE-16GB
GPU 2: Tesla P100-PCIE-16GB
GPU 3: Tesla P100-PCIE-16GB
Nvidia driver version: 418.67
cuDNN version: Could not collect
Versions of relevant libraries:
[pip] numpy==1.16.2
[pip] numpydoc==0.8.0
[pip] torch==1.1.0
[pip] torchtext==0.1.1
[pip] torchvision==0.3.0
[conda] blas 1.0 mkl
[conda] mkl 2019.3 199
[conda] mkl-service 1.1.2 py37he904b0f_5
[conda] mkl_fft 1.0.10 py37ha843d7b_0
[conda] mkl_random 1.0.2 py37hd81dba3_0
[conda] pytorch 1.1.0 py3.7_cuda10.0.130_cudnn7.5.1_0 pytorch
[conda] torchtext 0.1.1 pypi_0 pypi
[conda] torchvision 0.3.0 py37_cu10.0.130_1 pytorch
- PyTorch Version (e.g., 1.0):
- OS (e.g., Linux):
- How you installed PyTorch (`conda`, `pip`, source):
- Build command you used (if compiling from source):
- Python version:
- CUDA/cuDNN version:
- GPU models and configuration:
- Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
| module: error checking,triaged,module: partial aliasing,module: scatter & gather ops | low | Critical |
464,000,097 | youtube-dl | Site support request: downloads.khinsider.com |
## Checklist
- [x] I'm reporting a new site support request
- [x] I've verified that I'm running youtube-dl version **2019.07.02**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that none of provided URLs violate any copyrights
- [x] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
- Playlist: https://downloads.khinsider.com/game-soundtracks/album/sonic-robo-blast-2-ost
## Description
| site-support-request | low | Critical |
464,020,867 | go | cmd/go: print an incomplete go mod graph even if an error occurs | `go mod graph` is an invaluable tool for debugging module problems. It allows us to see problems deep into our transitive dependencies. However, it completely falls over if there is a single issue somewhere in the graph, and if that issue is not at the root module it's quite hard to debug transitive dependency problems.
Consider, for example:
```
$ go mod graph
go: finding github.com/kr/pty v1.1.7
go: github.com/kr/[email protected]: unknown revision v1.1.7
go: error loading module requirements
$
```
It would be ideal if the `go mod graph` command traversed and printed as much of the graph as possible, and indicated which nodes were problematic. For example:
```
$ go mod graph
A B
B C
C github.com/kr/[email protected]
go: github.com/kr/[email protected]: unknown revision v1.1.7
go: error loading module requirements
```
(last two lines are STDERR, first three are STDOUT) | NeedsInvestigation,GoCommand,modules | low | Critical |
464,063,282 | go | net/http: Request.ParseMultipartForm doesn't read all the body | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
tested in go1.12.6 and go1.13beta1
</pre>
### Does this issue reproduce with the latest release?
yes
### What operating system and processor architecture are you using (`go env`)?
Ubuntu 18.04.2 LTS
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/tevic/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/tevic/Work/WorkSpace/Code/GoProjs"
GOPROXY=""
GORACE=""
GOROOT="/home/tevic/Work/RunTime/Go"
GOTMPDIR=""
GOTOOLDIR="/home/tevic/Work/RunTime/Go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build281284214=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
I send a multipart form request with a trailer, and the server side parses the form using ParseMultipartForm; after that I read the trailer from the request. But sometimes it doesn't show up.
This is my example. The two test servers are for comparison: srvReadBody works like a charm while srvParseForm doesn't.
```
package main
import (
"bytes"
"errors"
"fmt"
"hash"
"io"
"io/ioutil"
"log"
"mime/multipart"
"net/http"
"net/http/httptest"
)
type eofReaderFunc func()
func (f eofReaderFunc) Read(p []byte) (n int, err error) {
f()
return 0, io.EOF
}
func main() {
var testTrailer = "TestTrailer"
srvReadBody := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
_, err := io.Copy(ioutil.Discard, r.Body)
if err != nil {
log.Println(err)
return
}
if r.Trailer.Get(testTrailer) == "" {
fmt.Println("srvReadBody: Trailer is empty.")
}
}))
srvParseForm := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
err := r.ParseMultipartForm(1024 * 1024 * 16)
if err != nil {
log.Println(err)
return
}
if r.Trailer.Get(testTrailer) == "" {
fmt.Println("srvParseForm: Trailer is empty.")
}
}))
var sendRequestWithTrailer = func(addr string) {
buf := make([]byte, 1024)
body := &bytes.Buffer{}
writer := multipart.NewWriter(body)
part, err := writer.CreateFormFile("file", "testFile")
if err != nil {
log.Println(err)
return
}
_, err = io.Copy(part, bytes.NewReader(buf))
if err != nil {
log.Fatal(err)
}
err = writer.Close()
if err != nil {
log.Println(err)
return
}
var req *http.Request
req, err = http.NewRequest("POST", addr, io.MultiReader(body, eofReaderFunc(func() {
req.Trailer.Set(testTrailer, testTrailer)
})))
if err != nil {
log.Println(err)
return
}
req.Header.Set("Content-Type", writer.FormDataContentType())
req.Trailer = http.Header{testTrailer: nil}
resp, err := http.DefaultClient.Do(req)
if err != nil {
log.Println(err)
return
}
defer resp.Body.Close()
}
for i := 0; i < 100; i++ {
sendRequestWithTrailer(srvReadBody.URL)
sendRequestWithTrailer(srvParseForm.URL)
}
}
```
### What did you expect to see?
`r.Trailer.Get(testTrailer) != ""`
### What did you see instead?
Sometimes `r.Trailer.Get(testTrailer) == ""`
The code output like:
```
srvParseForm: Trailer is empty.
srvParseForm: Trailer is empty.
srvParseForm: Trailer is empty.
srvParseForm: Trailer is empty.
srvParseForm: Trailer is empty.
srvParseForm: Trailer is empty.
srvParseForm: Trailer is empty.
srvParseForm: Trailer is empty.
srvParseForm: Trailer is empty.
srvParseForm: Trailer is empty.
srvParseForm: Trailer is empty.
srvParseForm: Trailer is empty.
srvParseForm: Trailer is empty.
srvParseForm: Trailer is empty.
srvParseForm: Trailer is empty.
```
I've traced the code and the problem is due to this line. When isFinalBoundary matches the condition, the body may not have been completely read.
https://github.com/golang/go/blob/39b533ed6634079a5c4f6a3488f4b01cdab2d833/src/mime/multipart/multipart.go#L336 | NeedsInvestigation | low | Critical |
464,074,792 | kubernetes | Dummy OpenAPI types generated from metav1.Fields | /kind bug
/sig api-machinery
/wg server-apply
This should probably be an issue in the OpenAPI generator [kubernetes/kube-openapi](https://github.com/kubernetes/kube-openapi). The swagger docs don't properly represent the OpenAPI schema for `metav1.Fields`:
```json
"io.k8s.apimachinery.pkg.apis.meta.v1.Fields": {
"description": "Fields stores a set of fields in a data structure like a Trie. To understand how this is used, see: https://github.com/kubernetes-sigs/structured-merge-diff",
"type": "object"
},
```
the correct form should probably make use of `additionalProperties` to mark the value type:
```json
"io.k8s.apimachinery.pkg.apis.meta.v1.Fields": {
"description": "Fields stores a set of fields in a data structure like a Trie. To understand how this is used, see: https://github.com/kubernetes-sigs/structured-merge-diff",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Fields"
}
},
```
/cc @jennybuckley @apelisse
/cc @sttts @roycaihw | kind/bug,sig/api-machinery,lifecycle/frozen | low | Critical |
464,140,662 | pytorch | Know which function is used by conv and force to use a function | I've been pointed to this line
https://github.com/pytorch/pytorch/blob/12528990f8c56deb7ce1c699e6da63d82c115968/aten/src/ATen/native/Convolution.cpp#L526
where we can see which choices we have when we call a convolution. However, I am wondering if there is a way to actually know which function has been called. Concretely, I'd like to know if I am actually using NNPACK. (For context, I am trying to get fast conv layers on a Raspberry Pi and installed PyTorch 1.1.0; the default functions are very, very slow.)
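One indirect way to see what actually ran (a hedged sketch using the autograd profiler; the kernel names you will see, e.g. `thnn_conv2d` or `mkldnn_convolution`, depend on the build, and this is not an official "which backend" API) is to profile a single forward pass and look at the operator names:

```python
import torch
import torch.nn as nn

# Sketch: profile one forward pass and inspect which convolution op shows up.
conv = nn.Conv2d(3, 8, kernel_size=3)
x = torch.randn(1, 3, 32, 32)

with torch.autograd.profiler.profile() as prof:
    conv(x)

print(prof.key_averages().table(sort_by="cpu_time_total"))
```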
Thanks
## β Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
We have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:
- [Discussion Forum](https://discuss.pytorch.org/)
| module: nn,module: convolution,triaged | low | Major |
464,179,737 | pytorch | libtorch new op | Hi, in my project I use libtorch, and there is an operation that libtorch does not support, so I want to
implement the operation myself. I searched the documentation and have not found any help. I just want to implement the forward pass and the backward pass. Is there any documentation that would help me add a new op for libtorch? | module: docs,module: cpp,triaged,module: custom-operators | low | Minor |
464,179,737 | rust | slice::contains with borrowed data | `HashSet::contains` has the following type to allow e.g. searching in a `HashSet<String>` with an `&str`:
```rust
pub fn contains<Q: ?Sized>(&self, value: &Q) -> bool where
T: Borrow<Q>,
Q: Hash + Eq,
```
However, `slice::contains` does not use `Borrow`, so to search in an `&[String]` one has to actually allocate a `String`:
```rust
pub fn contains(&self, x: &T) -> bool where
T: PartialEq<T>,
```
Is there a fundamental reason for this, or is this just an omission? | T-libs-api,C-feature-request | low | Major |
464,196,387 | go | x/text/unicode/runenames: allow search of rune by name | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.12.6 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes?
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/miki/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/miki/go"
GOPROXY="https://proxy.golang.org"
GORACE=""
GOROOT="/opt/go"
GOTMPDIR=""
GOTOOLDIR="/opt/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build030318607=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
`x/text/unicode/runenames` provides a way to look up a rune's name; I'd like to do the opposite and look up a rune by its name.
### What did you expect to see?
A `Rune(name string) (rune, error)` function
### What did you see instead?
Nothing :) | NeedsInvestigation,FeatureRequest | low | Critical |
464,249,258 | godot | Webcam (PS3 Eye) causes mic input to spew WASAPI: unsupported channel count in microphone |
**Godot version:** 3.1.1.stable.official, 3.2.dev.calinou.ce8e54133
<!-- Specify commit hash if non-official. -->
**OS/device including version:** Windows 10 Pro 1809
<!-- Specify GPU model and drivers if graphics-related. -->
**Issue description:** With a PS3 Eye camera connected using libusb drivers, running any Godot program which uses mic input spews this error message:
```
ERROR: thread_func: WASAPI: unsupported channel count in microphone!
At: drivers/wasapi/audio_driver_wasapi.cpp:715
```
<!-- What happened, and what was expected. -->
**Steps to reproduce:**
1. Plug in a PS3 Eye camera
2. Run the MicRecord demo
3. Click Record
4. Click Stop
**Minimal reproduction project:** MicRecord demo
<!-- Recommended as it greatly speeds up debugging. Drag and drop a zip archive to upload it. -->
| bug,topic:core,confirmed,topic:audio | low | Critical |
464,295,726 | flutter | ProxyValidator does not parse NO_PROXY correctly when looking for hosts | ## Use case
`flutter doctor` was unable to detect the problem when I wrote `localhost;127.0.0.1` in `NO_PROXY`, note the semicolon instead of comma.
## Proposal
As there is no standard way of delimiting values in environment variables (for example, on Windows, `;` is usually used), clear instructions should be given about setting `NO_PROXY`.
| tool,t: flutter doctor,a: quality,P2,team-tool,triaged-tool | low | Minor |
464,301,923 | TypeScript | Object.freeze after object creation doesn't error | **TypeScript Version:** 3.5.1
**Search Terms:** Object.freeze, freeze, freezing, properties, object, objects, property
**Code**
```ts
const a = Object.freeze({a: 0});
a.a = 1; // Error: Cannot assign read-only property
const b = {b: 0};
Object.freeze(b);
b.b = 1; // No error?
const c = {c: {d: 0}};
Object.freeze(c.c);
c.c.d = 1; // No error?
```
**Expected behavior:**
`Cannot assign read-only property` errors for the two lines with "No error?" above.
**Actual behavior:**
No errors
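For what it's worth, the current behavior follows from the declaration of `Object.freeze`, which returns a `Readonly<T>` but does not change the type of its argument, so only the returned reference is checked. A minimal sketch of the usual workaround, keeping the reference that `freeze` returns (variable names here are new, for illustration):

```ts
// Object.freeze is declared (roughly) as freeze<T>(o: T): Readonly<T>,
// so only the returned reference carries the read-only property types.
const frozenB = Object.freeze({ b: 0 });
frozenB.b = 1; // error: Cannot assign to 'b' because it is a read-only property.

// For nested objects, keep the frozen reference at each level:
const frozenInner = Object.freeze({ d: 0 });
const c2 = { c: frozenInner };
c2.c.d = 1; // error: Cannot assign to 'd' because it is a read-only property.
```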
**Playground Link:** https://www.typescriptlang.org/play/?removeComments=true#code/MYewdgzgLgBAhjAvDA8gIwFYFNhQHQBmATllgF5YAUA3nAFwwAMAvgJQDcAUHHgsgIzsYAemEwAokSIgiDAMJwwYELDgQIASwDmYGCTgATALTgANgE8YAB2lWsRKOc6dQkWGiQxqaBiy7psXEIScio0Dk40PA8BIVEYADkQGHtpIgB+Z1doGGBPamAGagNfZmZ-TBx8YlIKSmA8YAiGhoNPQRExJJSpGXSgA
| Docs,PursuitFellowship | low | Critical |
464,324,312 | kubernetes | Adaptive retry-after interval on rate limiting to prevent request overload |
**What would you like to be added**:
In our cluster (1,000 nodes / 10,000 pods), we found that for a long time after kube-apiserver starts, it logs a large number of requests rejected with status code 429:
```
I0704 17:28:00.800884 227416 maxinflight.go:111] GET /api/v1/services?resourceVersion=0: (7.054µs) 429 [[kubelet/7bff6911345 (linux/amd64) kubernetes/801b699] 10.10.10.10:41957]
```
As a result, the API service stays effectively unavailable for a long time, and the problem is more likely to occur in large clusters.
The root cause is that every client re-lists & watches when it restarts, which pushes concurrency past the flow-control (max-in-flight) limit, so the apiserver returns **429** and sets a `Retry-After` header telling the client to retry after 1s (FYI: https://github.com/kubernetes/kubernetes/blob/7bbc130f5e2f99e5d29dbb58e163c405cf6cef32/staging/src/k8s.io/apiserver/pkg/server/filters/maxinflight.go#L37). A fixed 1s obviously cannot fit all circumstances, and with every client retrying on the same schedule the apiserver stays overloaded for a long time.
**Why is this needed**:
This is a serious risk for users; we need an adaptive flow-control/back-off algorithm to deal with this problem, just like the **TODO** in the comment already suggests: _maybe make this dynamic? or user-adjustable?_
I think we need to raise its priority and solve it.
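As one possible direction, here is a minimal sketch of an adaptive, jittered Retry-After; all names and constants below are illustrative assumptions, not the actual maxinflight implementation:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryAfter grows the suggested Retry-After with the number of consecutive
// rejections and adds jitter so that restarted clients do not retry in lockstep.
// The base/max values are illustrative only.
func retryAfter(consecutiveRejections int) time.Duration {
	base := 1 * time.Second
	max := 30 * time.Second

	backoff := base << uint(consecutiveRejections) // 1s, 2s, 4s, ...
	if backoff > max {
		backoff = max
	}
	// Full jitter: pick uniformly in [base, backoff].
	return base + time.Duration(rand.Int63n(int64(backoff-base)+1))
}

func main() {
	for i := 0; i < 6; i++ {
		fmt.Printf("rejection %d -> Retry-After %v\n", i, retryAfter(i).Round(time.Millisecond))
	}
}
```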
/area apiserver
/kind bug
/priority important-soon
/assign
| kind/bug,priority/important-soon,area/apiserver,sig/api-machinery,kind/feature,lifecycle/frozen | medium | Critical |
464,357,391 | create-react-app | Debug Jest in react when module alias are defined | I have a react application which is create by CRA and using typescript. I define several module mapper in tsconfig.json
`tsconfig.path.json`
```
{
"compilerOptions": {
"baseUrl": "./src",
"paths": {
"@constants/*": ["constants/*"],
"@components/*": ["components/*"],
"@grid/*": ["components/Grid/*"],
"@grid-share/*": ["components/Grid/Share/*"],
"@utils/*": ["util/*"],
"@services/*": ["Services/*"]
}
},
"extends": "../tsconfig.json"
}
```
Then I define the same aliases in `package.json` for Jest:
```
"jest": {
"snapshotSerializers": [
"enzyme-to-json/serializer"
],
"moduleNameMapper": {
"@constants/(.*)": "<rootDir>/src/constants/$1",
"@utils/(.*)": "<rootDir>/src/util/$1",
"@grid-share/(.*)": "<rootDir>/src/components/Grid/Share/$1",
"@grid/(.*)": "<rootDir>/src/components/Grid/$1",
"@services/(.*)": "<rootDir>/src/Services/$1",
"@components/(.*)": "<rootDir>/src/components/$1"
}
},
```
When I use `yarn test`, everything is OK. However, I want to debug a test in VS Code; my VS Code launch configuration file is the following:
```
{
"version": "0.2.0",
"configurations": [{
"name": "Debug All Tests",
"type": "node",
"request": "launch",
"runtimeExecutable": "${workspaceRoot}/node_modules/.bin/react-scripts",
"args": [
"test",
"--runInBand",
"--no-cache",
"--watchAll=false"
],
"cwd": "${workspaceRoot}",
"protocol": "inspector",
"console": "integratedTerminal",
"internalConsoleOptions": "neverOpen"
}, {
"name": "Debug Current Test",
"type": "node",
"request": "launch",
"runtimeExecutable": "${workspaceRoot}/node_modules/.bin/react-scripts",
"args": [
"test",
"--runInBand",
"--no-cache",
"${fileBasenameNoExtension}",
"--watchAll=true"
],
"cwd": "${workspaceRoot}",
"protocol": "inspector",
"console": "integratedTerminal",
"internalConsoleOptions": "neverOpen"
}]
}
```
The problem is that when I try to debug a test case in VS Code, I get the following error:
> Out of the box, Create React App only supports overriding these Jest options:
>
> β’ collectCoverageFrom
> β’ coverageReporters
> β’ coverageThreshold
> β’ extraGlobals
> β’ globalSetup
> β’ globalTeardown
> β’ resetMocks
> β’ resetModules
> β’ snapshotSerializers
> β’ watchPathIgnorePatterns.
>
> These options in your package.json Jest configuration are not currently supported by Create React App:
>
> β’ moduleNameMapper
>
> If you wish to override other Jest options, you need to eject from the default setup. You can do so by running npm run eject but remember that this is a one-way operation. You may also file an issue with Create React App to discuss supporting more options out of the box.
Is there any way to solve this problem without `ejecting`? | issue: bug | low | Critical |
464,359,100 | pytorch | make[2]: *** No rule to make target 'libtorch/lib/libc10.so' | ## 🐛 make[2]: *** No rule to make target 'libtorch/lib/libc10.so'
## Temporary Solution:
* If you want to find a temporary solution to this problem, please check the "EDIT" at the end.
## To Reproduce
Steps to reproduce the behavior:
* I have successfully compiled and run the C++ libtorch example, as described here: https://pytorch.org/tutorials/advanced/cpp_export.html. Everything works fine there.
* Now I want to embed the CMakeLists.txt from that example into a new CMakeLists.txt (a new project).
* However, I have searched a lot and still can't configure the project with CMake successfully, because for some unknown reason my target binary links wrong (or duplicated) paths. I provide these wrong paths below.
* I have also checked the `CMakeFiles/<target_name>.dir/build.make` file, which is created by CMake, in order to find these duplicate paths (the target name is `app`):
```
app: ../libtorch/lib/libtorch.so
app: libtorch/lib/libc10.so
app: libtorch/lib/libc10_cuda.so
....
app: libtorch/lib/libc10.so
app: libtorch/lib/libc10_cuda.so
app: ../libtorch/lib/libc10_cuda.so
app: ../libtorch/lib/libc10.so
```
* Below is the successfully working CMakeLists.txt from the PyTorch example (assume `ok` is the binary of the example app from the PyTorch website):
```
project(test_cmake)
set(CMAKE_PREFIX_PATH "libtorch/share/cmake/Torch")
find_package(Torch REQUIRED)
add_executable(ok ok.cpp)
target_link_libraries(ok "${TORCH_LIBRARIES}")
set_property(TARGET ok PROPERTY CXX_STANDARD 11)
```
* And the following is my final CMakeLists.txt, where I have embedded the CMakeLists.txt from the previous example in order to build my app (not working, due to the error in the description):
```
cmake_minimum_required(VERSION 2.8.12)
project(examples)
add_subdirectory(../dlib-19.13/dlib/ dlib_build)
add_executable(app f1.cpp f2.cpp ... fn.cpp)
target_link_libraries(app dlib::dlib)
#find packages
find_package(OpenCV QUIET)
find_package(HDF5 REQUIRED COMPONENTS C CXX)
find_package(SDL2 REQUIRED)
set(CMAKE_PREFIX_PATH "libtorch/share/cmake/Torch") # Copied from the example!
find_package(Torch REQUIRED) # Copied from the example!
#includes dirs
include_directories(${OpenCV_INCLUDE_DIRS})
include_directories(${HDF5_INCLUDE_DIRS})
include_directories(${SDL2_INCLUDE_DIRS})
include_directories(${Torch_INCLUDE_DIRS}) # I added this trying to fix the bug; it wasn't in the official example
#target link libraries
target_link_libraries(app dlib::dlib ${OpenCV_LIBS})
target_link_libraries(app dlib::dlib ${HDF5_CXX_LIBRARIES} ${HDF5_LIBRARIES} )
target_link_libraries(app ${SDL2_LIBRARIES})
target_link_libraries(app "${TORCH_LIBRARIES}") # Copied from the example!
set_property(TARGET app PROPERTY CXX_STANDARD 11) # Copied from the example!
```
* In both the example and my project, I used the same command to run CMake:
```
sudo cmake -DCMAKE_PREFIX_PATH=../libtorch .. && make
```
* I also mention that I copied the whole `libtorch` folder to another path, so that it sits in the same folder as each project's source files. Even though this copy is redundant, I don't think it is the reason for the broken build.
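For what it's worth, the mixed `../libtorch/lib/...` and `libtorch/lib/...` entries in build.make suggest that Torch may be getting resolved through two different relative prefixes (the `set(CMAKE_PREFIX_PATH ...)` inside the file plus `-DCMAKE_PREFIX_PATH=../libtorch` on the command line). A hedged sketch that keeps a single absolute prefix; the folder layout and file names here are assumptions:

```cmake
cmake_minimum_required(VERSION 3.5)
project(examples)

# Resolve libtorch once, as an absolute path, instead of mixing a relative
# set(CMAKE_PREFIX_PATH ...) in the file with -DCMAKE_PREFIX_PATH=../libtorch
# on the command line.
get_filename_component(LIBTORCH_DIR "${CMAKE_CURRENT_SOURCE_DIR}/libtorch" ABSOLUTE)
list(APPEND CMAKE_PREFIX_PATH "${LIBTORCH_DIR}")

find_package(Torch REQUIRED)

add_executable(app f1.cpp)
target_link_libraries(app "${TORCH_LIBRARIES}")
set_property(TARGET app PROPERTY CXX_STANDARD 11)
```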
## Expected behavior
## Environment
* PyTorch version: 1.1.0
* Is debug build: No
* CUDA used to build PyTorch: 9.0.176
* OS: Ubuntu 16.04.6 LTS
* GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609
* CMake version: version 3.5.1
* Python version: 3.6
* Is CUDA available: No
* CUDA runtime version: 7.5.17
* GPU models and configuration: Could not collect
* Nvidia driver version: Could not collect
* cuDNN version: /usr/local/cuda-10.1/targets/x86_64-linux/lib/libcudnn.so.7
* [pip3] numpy==1.15.4
* [pip3] numpydoc==0.8.0
* [pip3] torch==1.1.0
* [pip3] torchsummary==1.5.1
* [pip3] torchvision==0.3.0
* [conda] blas 1.0 mkl
* [conda] mkl 2019.1 144
* [conda] mkl-service 1.1.2 py36he904b0f_5
* [conda] mkl_fft 1.0.10 py36ha843d7b_0
* [conda] mkl_random 1.0.2 py36hd81dba3_0
* [conda] torch 1.1.0 <pip>
* [conda] torchsummary 1.5.1 <pip>
* [conda] torchvision 0.3.0 <pip>
## EDIT
* I have found a temporary workaround for this error. However, it should be fixed properly.
* I edited the `CMakeFiles/<target_name>.dir/build.make` file created by CMake and commented out the targets that caused this error. In my project ("app" target) I commented out the following:
```
#app: libtorch/lib/libc10.so
```
* This has to be repeated every time you run the cmake command. | module: build,module: cpp,triaged | low | Critical |
464,385,209 | rust | Trait resolution fails with unclear error when using function types as input parameters | This works ([Playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=e0fd0844e20c87b6e28c8e246aad154c)):
```rust
trait Call<F> {
type Ret;
fn call(self, f: F) -> Self::Ret;
}
impl Call<f32> for f32 {
type Ret = f32;
fn call(self, f: f32) -> Self::Ret {
f
}
}
impl Call<i32> for f32 {
type Ret = i32;
fn call(self, f: i32) -> Self::Ret {
f
}
}
fn main() {
let _ = (0_f32).call(0_i32);
}
```
but this fails ([Playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=eb1c03e73ae9b722fb8af8b8f60482cb)):
```rust
trait Call<F> {
type Ret;
fn call(self, f: F) -> Self::Ret;
}
impl Call<fn(f32) -> f32> for f32 {
type Ret = f32;
fn call(self, f: fn(f32) -> f32) -> Self::Ret {
f(self)
}
}
impl Call<fn(f32) -> i32> for f32 {
type Ret = i32;
fn call(self, f: fn(f32) -> i32) -> Self::Ret {
f(self)
}
}
fn main() {
fn u0(x: f32) -> f32 {
x
}
let _ = (0_f32).call(u0);
}
```
with
```
error[E0277]: the trait bound `f32: Call<fn(f32) -> f32 {main::u0}>` is not satisfied
--> src/main.rs:23:21
|
23 | let _ = (0_f32).call(u0);
| ^^^^ the trait `Call<fn(f32) -> f32 {main::u0}>` is not implemented for `f32`
|
= help: the following implementations were found:
<f32 as Call<fn(f32) -> f32>>
<f32 as Call<fn(f32) -> i32>>
error: aborting due to previous error
```
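For reference, a hedged sketch of the usual workaround (assuming the `Call` trait and impls from the failing example above): coerce the function item to a plain `fn` pointer so the existing impl applies.

```rust
// `u0` has the unique function *item* type `fn(f32) -> f32 {u0}`, which only
// coerces to the function *pointer* type `fn(f32) -> f32` when the target type
// is known, so trait resolution never sees a matching impl on its own.
fn main() {
    fn u0(x: f32) -> f32 {
        x
    }
    let f: fn(f32) -> f32 = u0; // coerce the item type to a pointer type
    let _ = (0_f32).call(f); // now `<f32 as Call<fn(f32) -> f32>>` is found
    // Or inline, with an explicit cast:
    let _ = (0_f32).call(u0 as fn(f32) -> f32);
}
```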
I'm not sure if this is by design or not (AFAICT the trait bound is satisfied - EDIT: it is by design, see below), but the error message could at least be much clearer. | C-enhancement,A-diagnostics,A-trait-system,T-compiler,A-suggestion-diagnostics,D-newcomer-roadblock | low | Critical |