id | repo | title | body | labels | priority | severity
---|---|---|---|---|---|---|
472,926,844 | angular | Ability to manually trigger all resolvers on route change | I have seen quite a few tickets related to this on github, and quite a few posts on this topic involving various workarounds elsewhere on the internet, e.g. on Stack Overflow, but there still doesn't seem to be a proper, simple solution for this included within the framework.
This is my use case:
1. I have 2 routes with components - A & B - which share a common parent root (C).
2. I have a resolver configured on C. A & B both share the data from this resolver
3. In component A, I am performing a server update of the shared data, before navigating to route B.
4. When performing this navigation, I want to force the resolver on C to fire again, so that B can see the latest version of the data that has just been modified by A.
Just to be clear:
1. I do ***not*** want the resolver on C to fire _every time_ I navigate between child routes of C, and I definitely don't want all my other resolvers firing every time I navigate between child routes. So `onSameUrlNavigation` is too coarse a solution for this use case.
2. I want to be able to force all the resolvers to fire _programmatically_, _at the point of navigation_, _in specific situations as required_ - i.e. when I have just performed a server update and the route state has not yet been refreshed.
So I am looking for a parameter, or configuration option, that can be passed to `Router.navigate()` or `navigateByUrl()` that will force all the resolvers (and probably the guards) in the target route (hierarchy) to be re-executed.
E.g. something like the `reload: true` option provided in the AngularJS UI router, see here: https://ui-router.github.io/ng1/docs/latest/interfaces/transition.transitionoptions.html#reload - although I believe that also causes all the components to be destroyed and re-created, which is _probably_ not necessary in this case. | feature,freq1: low,area: router,feature: under consideration,cross-cutting: signals | medium | Major |
472,958,130 | go | x/tools/go/ssa: panic: no ssa.Value for function argument | <pre>
$ go version
go version go1.12.7 linux/amd64
</pre>
### What did you do?
Input program:
```
package pkg
func fn() {
func(arg int) []int {
_ = arg
return nil
}(0)[0]++
}
```
Run `ssadump` on the package.
The panic does not occur if the result of the function call or of the indexing operation is stored in a temporary variable before doing the post-increment.
### What did you expect to see?
No panic.
### What did you see instead?
```
panic: no ssa.Value for var arg int
goroutine 1 [running]:
golang.org/x/tools/go/ssa.(*Function).lookup(0xc0000b0780, 0x7d5b20, 0xc0000ad950, 0xa09601, 0x508c4125f9dfb600, 0x0)
/home/dominikh/prj/src/golang.org/x/tools/go/ssa/func.go:428 +0x472
golang.org/x/tools/go/ssa.(*Function).lookup(0xc0000b08c0, 0x7d5b20, 0xc0000ad950, 0xa09600, 0x1, 0x0)
/home/dominikh/prj/src/golang.org/x/tools/go/ssa/func.go:430 +0x11b
golang.org/x/tools/go/ssa.(*builder).addr(0xc000175a80, 0xc0000b08c0, 0x7cec20, 0xc00000d200, 0xc00010cb00, 0xc000174290, 0xc00010cb60)
/home/dominikh/prj/src/golang.org/x/tools/go/ssa/builder.go:346 +0x246
golang.org/x/tools/go/ssa.(*builder).expr(0xc000175a80, 0xc0000b08c0, 0x7cec20, 0xc00000d200, 0x722500, 0x9ed680)
/home/dominikh/prj/src/golang.org/x/tools/go/ssa/builder.go:528 +0x174
golang.org/x/tools/go/ssa.(*builder).assign(0xc000175a80, 0xc0000b08c0, 0x7cfbe0, 0xa08bc8, 0x7cec20, 0xc00000d200, 0x403600, 0xc0001743a8)
/home/dominikh/prj/src/golang.org/x/tools/go/ssa/builder.go:502 +0x105
golang.org/x/tools/go/ssa.(*builder).assignStmt(0xc000175a80, 0xc0000b08c0, 0xc00005a830, 0x1, 0x1, 0xc00005a840, 0x1, 0x1, 0xc00005a600)
/home/dominikh/prj/src/golang.org/x/tools/go/ssa/builder.go:1050 +0x1d5
golang.org/x/tools/go/ssa.(*builder).stmt(0xc000175a80, 0xc0000b08c0, 0x7ce620, 0xc000022940)
/home/dominikh/prj/src/golang.org/x/tools/go/ssa/builder.go:2006 +0x209c
golang.org/x/tools/go/ssa.(*builder).stmtList(0xc000175a80, 0xc0000b08c0, 0xc00000d280, 0x2, 0x2)
/home/dominikh/prj/src/golang.org/x/tools/go/ssa/builder.go:790 +0x72
golang.org/x/tools/go/ssa.(*builder).stmt(0xc000175a80, 0xc0000b08c0, 0x7ce7a0, 0xc000085800)
/home/dominikh/prj/src/golang.org/x/tools/go/ssa/builder.go:2102 +0x22b7
golang.org/x/tools/go/ssa.(*builder).buildFunction(0xc000175a80, 0xc0000b08c0)
/home/dominikh/prj/src/golang.org/x/tools/go/ssa/builder.go:2195 +0x2d2
golang.org/x/tools/go/ssa.(*builder).expr0(0xc000175a80, 0xc0000b0780, 0x7ceb20, 0xc00005a870, 0x7, 0x7cb560, 0xc000085bc0, 0x0, 0x0, 0x5555555555555555, ...)
/home/dominikh/prj/src/golang.org/x/tools/go/ssa/builder.go:554 +0x3b4
golang.org/x/tools/go/ssa.(*builder).expr(0xc000175a80, 0xc0000b0780, 0x7ceb20, 0xc00005a870, 0x40bda9, 0xc0000ba700)
/home/dominikh/prj/src/golang.org/x/tools/go/ssa/builder.go:530 +0x27e
golang.org/x/tools/go/ssa.(*builder).setCallFunc(0xc000175a80, 0xc0000b0780, 0xc000022980, 0xc0000ba740)
/home/dominikh/prj/src/golang.org/x/tools/go/ssa/builder.go:881 +0xa5
golang.org/x/tools/go/ssa.(*builder).setCall(0xc000175a80, 0xc0000b0780, 0xc000022980, 0xc0000ba740)
/home/dominikh/prj/src/golang.org/x/tools/go/ssa/builder.go:962 +0x53
golang.org/x/tools/go/ssa.(*builder).expr0(0xc000175a80, 0xc0000b0780, 0x7ce820, 0xc000022980, 0x7, 0x7cb5a0, 0xc00005a970, 0x0, 0x0, 0x7cb5a0, ...)
/home/dominikh/prj/src/golang.org/x/tools/go/ssa/builder.go:596 +0x29cd
golang.org/x/tools/go/ssa.(*builder).expr(0xc000175a80, 0xc0000b0780, 0x7ce820, 0xc000022980, 0xc00005a970, 0x7f1c7d0e9678)
/home/dominikh/prj/src/golang.org/x/tools/go/ssa/builder.go:530 +0x27e
golang.org/x/tools/go/ssa.(*builder).addr(0xc000175a80, 0xc0000b0780, 0x7ced20, 0xc000085830, 0xc000175500, 0xc000175680, 0xa08bc8)
/home/dominikh/prj/src/golang.org/x/tools/go/ssa/builder.go:396 +0x63c
golang.org/x/tools/go/ssa.(*builder).stmt(0xc000175a80, 0xc0000b0780, 0x7cece0, 0xc00000d2e0)
/home/dominikh/prj/src/golang.org/x/tools/go/ssa/builder.go:2000 +0x231d
golang.org/x/tools/go/ssa.(*builder).stmtList(0xc000175a80, 0xc0000b0780, 0xc00005a8a0, 0x1, 0x1)
/home/dominikh/prj/src/golang.org/x/tools/go/ssa/builder.go:790 +0x72
golang.org/x/tools/go/ssa.(*builder).stmt(0xc000175a80, 0xc0000b0780, 0x7ce7a0, 0xc000085860)
/home/dominikh/prj/src/golang.org/x/tools/go/ssa/builder.go:2102 +0x22b7
golang.org/x/tools/go/ssa.(*builder).buildFunction(0xc000175a80, 0xc0000b0780)
/home/dominikh/prj/src/golang.org/x/tools/go/ssa/builder.go:2195 +0x2d2
golang.org/x/tools/go/ssa.(*builder).buildFuncDecl(0xc000175a80, 0xc000066900, 0xc000085890)
/home/dominikh/prj/src/golang.org/x/tools/go/ssa/builder.go:2225 +0xed
golang.org/x/tools/go/ssa.(*Package).build(0xc000066900)
/home/dominikh/prj/src/golang.org/x/tools/go/ssa/builder.go:2341 +0x88a
sync.(*Once).Do(0xc00006692c, 0xc000175ce8)
/usr/lib/go/src/sync/once.go:44 +0xb3
golang.org/x/tools/go/ssa.(*Package).Build(0xc000066900)
/home/dominikh/prj/src/golang.org/x/tools/go/ssa/builder.go:2260 +0x54
main.doMain(0x0, 0x0)
/home/dominikh/prj/src/golang.org/x/tools/cmd/ssadump/main.go:146 +0x5ff
main.main()
/home/dominikh/prj/src/golang.org/x/tools/cmd/ssadump/main.go:64 +0x26
```
/cc @ianthehat | NeedsInvestigation,Tools | low | Minor |
472,973,977 | go | cmd/go: go.mod formatting drops unattached comments within blocks | ### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.13beta1 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes, happens with 1.12.6 too. Not tried with tip.
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/mohit/.cache/go-build"
GOENV="/home/mohit/.config/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GONOPROXY=""
GONOSUMDB=""
GOOS="linux"
GOPATH="/home/mohit/go"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/local/go-1.13beta1"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go-1.13beta1/pkg/tool/linux_amd64"
GCCGO="gccgo"
AR="ar"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD="/tmp/exp/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build485779267=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
Executed `go mod edit -fmt` on the following. (The following is a minimal reproducible example of the actual go.mod file.)
```
module exp
go 1.13
require (
foo v0.0.0-00010101000000-000000000000
// bar v0.0.0-00010101000000-000000000000
)
```
### What did you expect to see?
Perhaps no change.
### What did you see instead?
The comment disappeared after formatting.
```
module exp
go 1.13
require foo v0.0.0-00010101000000-000000000000
```
| help wanted,NeedsInvestigation,modules | low | Critical |
472,982,316 | pytorch | Tensor from mmaped storage loads the entire file into memory | ## 🐛 Bug
Tensor from mmaped storage loads the entire file into memory for operations that don't require it.
## To Reproduce
```python
import os, torch
f = 'bigbinaryblobwithevensize'
s = torch.ByteStorage.from_file(f, True, os.path.getsize(f))
t = torch.ByteTensor.new(s)
# so far so good
t = t.view(-1,2) # loads entire file into memory for no reason
t.is_pinned() # loads entire file into memory for even less reason
```
## Expected behavior
Only the necessary parts of mmapped files should be loaded into memory.
## Environment
- PyTorch Version (e.g., 1.0):
- starting with july 21st nightly: 1.2.0.dev20190721-py3.7_cuda10.0.130_cudnn7.5.1_0
- still working as expected with 1.2.0.dev20190720-py3.7_cuda10.0.130_cudnn7.5.1_0
- OS (e.g., Linux): Linux
- How you installed PyTorch (`conda`, `pip`, source): conda install pytorch-nightly
- Python version: Python 3.7.3 (default, Mar 27 2019, 22:11:17) [GCC 7.3.0] :: Anaconda, Inc. on linux
- CUDA/cuDNN version: 430.34 (10.1)
- GPU models and configuration: 1050Ti
| module: memory usage,triaged | low | Critical |
472,994,288 | godot | Documentation is unclear on how to move a RigidBody2D in Kinematic mode | **Godot version**: 3.1
**Issue description:**
It is unclear how a RigidBody2D in Kinematic mode should or can be moved.
This is the related documentation in [RigidBody2D#Mode](https://docs.godotengine.org/en/3.1/classes/class_rigidbody2d.html#enum-rigidbody2d-mode):
> MODE_KINEMATIC = 3 — Kinematic mode. The body behaves like a KinematicBody2D, and must be moved by code.
There has to be some specification of how moving by code should be done in kinematic mode.
Many users understandably assume that the `move_*` methods would be available, but naturally they aren't defined in a rigid body class.
The docs also state the following:
> "Note: You should not change a RigidBody2D’s **position** ... every frame or even very often. If you need to directly affect the body’s state, use _integrate_forces, which allows you to directly access the physics state.".
This might discourage users from changing the **position** on every frame in kinematic mode (which I guess is OK to do).
I think this would clear a lot of confusion since some googling reveals many posts asking about this. | enhancement,documentation,topic:physics | low | Major |
473,023,266 | TypeScript | moveToNewFile: move variable under cursor | ## Search Terms
Move to New File
Refactor Refactoring
## Suggestion
Allow selecting a variable name, not only a function/class name, in the moveToNewFile refactoring.
## Use Cases
moveToNewFile is useful for refactoring React components.
React components can be written as both function statements and arrow functions (functions assigned to variables).
But the current moveToNewFile implementation requires all lines of a variable statement to be selected, while a function or class can be moved by launching Quick Fix at its name.
## Examples
```ts
function Component(props: Props) {
...
}
```
Currently, when placing the cursor anywhere in a function or class name, i.e. `Component` above, VSCode shows the "Move to New File" menu. But if it is a variable, the menu does not show up.
```ts
// Cannot refactor until selecting all lines of below statement.
// I want to refactor this by moving cursor to anywhere in "Component"
const Component: React.FC<Props> = (props: Props) => {
...
}
```
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
## Implementation
https://github.com/microsoft/TypeScript/compare/master...ypresto:move-var-func-to-new-file?expand=1 | Suggestion,Awaiting More Feedback,Domain: Refactorings | low | Minor |
473,035,672 | TypeScript | Auto fix compilerOptions Lib | ## Search Terms
compilerOptions Lib, autofix Lib, auto Lib generation, Auto compilerOptions
## Suggestion
Suggest adding the appropriate library to compilerOptions.lib if you are using a feature that requires it.
## Use Cases
* Help the developer code by preventing them from having to look up which compilerOptions.lib setting they need to use a particular feature of the standard library, as it is difficult to remember which version of ES introduced which additions.
* In the future this could be used to auto generate the compilerOptions.lib the developer needs from the JS features they are using, which would remove some configuration. 🎉
## Examples
```js
Object.entries(object).map(([key, value]) => {
...
});
```
📎💬 “Looks like you're using Object.entries, would you like to add 'es2017.object' to your compilerOptions.lib?”
____________________________________________________________________
```js
if (array.includes(testValue)) {
...
}
```
📎💬 “Looks like you're using Array.includes, would you like to add 'es2016.array.include' to your compilerOptions.lib?”
## Checklist
My suggestion meets these guidelines:
* [X] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [X] This wouldn't change the runtime behavior of existing JavaScript code
* [X] This could be implemented without emitting different JS based on the types of the expressions
* [X] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [X] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals). | Suggestion,Experience Enhancement | low | Minor |
473,045,948 | TypeScript | Change property modifier in mapped type based on condition | As I know there is no way to change property modifier of mapped types based on conditional types. Here is my use case:
```typescript
export class MakeItRequired<T extends ModelValue<any>> {
target: "MakeItRequired" = "MakeItRequired"
constructor(public option: T) {
}
}
export type ModelValue<T> =
T extends StringConstructor ? string :
T extends NumberConstructor ? number :
T extends BooleanConstructor ? boolean :
unknown
export type ModelFromSchema<T> = {
[P in keyof T]?:
T[P] extends MakeItRequired<infer U> ? ModelValue<U> :
ModelValue<T[P]>
}
export const schema = {
id: new MakeItRequired(Number),
firstName: String,
lastName: String
}
export const type: ModelFromSchema<typeof schema> = {}
```
My goal is to have `id` non-optional, while keeping the others optional.
Additional screenshoot from vscode:
<img width="695" alt="Screenshot 2019-07-25 22 59 49" src="https://user-images.githubusercontent.com/1753397/61904745-1f4de380-af30-11e9-9259-01db8e79b9a0.png">
Can we have this feature? | Suggestion,Awaiting More Feedback,Domain: Mapped Types | low | Major |
473,048,728 | rust | rustdoc source code page should provide more things | For examples:
* [ ] Interactions with types (adding a popup when we hover one to give information like the doc page and provide a link to its definition)
* [ ] Interactions with all items (adding a popup when we hover one to give information like the doc page and provide a link to its definition)
* [ ] Collapsing blocks
* [ ] ... | T-rustdoc,C-enhancement | low | Major |
473,050,982 | go | context: doc of WithValue should mention context is not data store | Hello everybody,
https://github.com/golang/go/blob/919594830f17f25c9e971934d825615463ad8a10/src/context/context.go#L520-L525
The implementation of `valueCtx.Value` in the context package is a recursive function, which means its performance degrades at least linearly with the number of values stored in the context. Moreover, it is easily subject to stack overflow if the user stores way too many values.
Documentation does not give any advice on how `context.WithValue` should be used, especially w.r.t. performance. IMHO, it would be better to update the doc in order to:
1. Discourage the user from using `context.Context` as a data store;
2. Suggest that the user store something like a map or struct instead, which lets the user decide how to store data without impacting `valueCtx.Value` performance.
CC @mvdan
| Documentation,NeedsFix | low | Major |
473,079,776 | rust | slice methods could be added to the array docs | (I'm surprised I couldn't find an issue for this!)
The documentation page for the [primitive array type](https://doc.rust-lang.org/std/primitive.array.html) ideally should include the methods defined on slices. (see e.g. [this report of confusion](https://users.rust-lang.org/t/array-lengths-cant-depend-on-generic-parameters-with-const-generics-bug-or-expected-behavior/30579/36) on the forum)
This could be tricky; it looks like this capability is basically hardcoded into the compiler, so `rustdoc` would need to special-case it. (it can't rely on `Deref` like it can for `Vec`)
---
As an aside, the page contains some misleading language:
> Arrays coerce to slices (`[T]`), so a slice method may be called on an array.
This suggests that coercions enable method access, but there's plenty of coercions in Rust that do not, including other `Unsize` coercions like `&T -> &dyn Trait`. | T-rustdoc,C-enhancement | low | Critical |
473,094,601 | kubernetes | Decoding should not clear apiVersion/kind | **What would you like to be added**:
When using a typed client, decoding to a versioned struct (not an internal API type), the apiVersion/kind information returned from the server should not be dropped.
**Why is this needed**:
The `GroupVersionKind()` method included in the ObjectKind interface is largely useless when dealing with arbitrary runtime.Object instances, since typed instances drop this information here:
https://github.com/kubernetes/kubernetes/blob/69a34f6a6f67de47cb9b72b6ac98e089d301beb3/staging/src/k8s.io/apimachinery/pkg/runtime/helper.go#L245-L259
This is the decoder used when a client requests a decoder that does not do conversion:
https://github.com/kubernetes/kubernetes/blob/69a34f6a6f67de47cb9b72b6ac98e089d301beb3/staging/src/k8s.io/apimachinery/pkg/runtime/serializer/codec_factory.go#L175-L179
I could see clearing group/version/kind information when converting to an internal version, but I don't see the benefit of stripping it on decode if we're only dealing with a versioned struct.
/sig api-machinery
/cc @smarterclayton
Note that https://github.com/kubernetes/kubernetes/issues/3030 still needs to be resolved before apiVersion/kind could be depended on for individual objects for all API responses, but this would at least solve the issue with an update of an object clearing the apiVersion/kind in an update response (xref https://github.com/kubernetes-sigs/controller-runtime/issues/526) | sig/api-machinery,kind/feature,help wanted,lifecycle/frozen,triage/accepted | medium | Critical |
473,098,528 | godot | Can't export Windows with custom icon using headless | **Godot version:**
Godot_v3.1.1-stable_linux_headless.64
**OS/device including version:**
Ubuntu 16.04 inside a Docker container, running on an Ubuntu 18.04 host
**Issue description:**
Godot headless fails to load the editor settings config file even if it is present, with an error saying that only an editor can read those settings:
```
ERROR: instance: Class 'EditorSettings' can only be instantiated by editor.
At: core/class_db.cpp:523.
ERROR: poll: /root/.config/godot/editor_settings-3.tres:3 - Parse Error: Can't create sub resource of type: EditorSettings
At: scene/resources/resource_format_text.cpp:561.
ERROR: load: Condition ' err != OK ' is true. returned: RES()
At: core/io/resource_loader.cpp:208.
ERROR: _load: Failed loading resource: /root/.config/godot/editor_settings-3.tres
At: core/io/resource_loader.cpp:285.
WARNING: create: Could not open config file.
At: editor/editor_settings.cpp:871.
ERROR: get_edited_scene_root: Index current_edited_scene=-1 out of size (edited_scene.size()=0)
At: editor/editor_data.cpp:678.
ERROR: texture_set_flags: Condition ' !t ' is true.
At: ./drivers/dummy/rasterizer_dummy.h:199.
ERROR: texture_set_flags: Condition ' !t ' is true.
At: ./drivers/dummy/rasterizer_dummy.h:199.
ERROR: get_edited_scene_root: Index current_edited_scene=-1 out of size (edited_scene.size()=0)
At: editor/editor_data.cpp:678.
```
However, there are export settings in the editor settings config file, namely the ones I'm trying to use that specify an rcedit and wine location so that Windows exports can have a custom icon on the executable. This prevents me from using the custom icon feature at all in exports since I'm not using the editor, and my builds are automated using headless.
I suspect this may be related to #29149 which is another issue where headless breaks an existing editor settings config file.
**Steps to reproduce:**
Export a project for Windows with a custom icon set and an editor settings config file existing in the user's .config/godot directory.
| bug,topic:editor | low | Critical |
473,152,250 | godot | Overlapping nodes on y-sorted TileMap "jump" back and forth | **Godot version:**
3.1
**OS/device including version:**
Windows, Radeon R9 200 series
**Issue description:**
I have two nodes that are overlapping (exact same x/y position) on a y-sorted TileMap. The nodes will swap places randomly: Node A will be in front of Node B, then randomly Node B will show up in front of Node A.
I'm guessing the algorithm that sorts the nodes is not deterministic. I would recommend that the sorting fall back to the node's index in the parent to decide ordering when nodes overlap, OR just make it deterministic so it doesn't jump back and forth.
**Steps to reproduce:**
1. Add 2 sprites on the same x/y position
2. Have other sprites that are moving around to force the engine to have to re-sort children.
**Minimal reproduction project:**
<!-- Recommended as it greatly speeds up debugging. Drag and drop a zip archive to upload it. -->
[YSortBug.zip](https://github.com/godotengine/godot/files/3434005/YSortBug.zip)
See video. There are 2 sprites on the top left tile which will jump back and forth. The other nodes that are animating to the right/down are there just to force sorting to happen.

| bug,topic:core,topic:rendering,confirmed | low | Critical |
473,152,838 | flutter | FlutterFragment not returning the expected type when inflating it in Android | ## Summary
I need to inflate a `FlutterFragment` with a custom dart entry point and set an initial route.
However, this happens: it seems like `io.flutter.embedding.android.FlutterFragment` is not a valid argument for the fragment layout inflation. This is misleading, as the [documentation](https://github.com/flutter/flutter/wiki/Experimental:-Add-Flutter-Fragment) inflates it the same way I did.

An alternative is to use `Flutter.createFragment(...)`; however, it does not allow me to set a custom dart entry point other than `main`.

## Logs
`flutter doctor -v`
```
[✓] Flutter (Channel stable, v1.7.8+hotfix.3, on Mac OS X 10.14.5 18F132, locale en-PH)
• Flutter version 1.7.8+hotfix.3 at /Users/joshuadeguzman/Documents/Tools/flutter
• Framework revision b712a172f9 (2 weeks ago), 2019-07-09 13:14:38 -0700
• Engine revision 54ad777fd2
• Dart version 2.4.0
[✓] Android toolchain - develop for Android devices (Android SDK version 29.0.0)
• Android SDK at /Users/joshuadeguzman/Library/Android/sdk
• Android NDK location not configured (optional; useful for native profiling support)
• Platform android-29, build-tools 29.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1343-b01)
• All Android licenses accepted.
[!] Xcode - develop for iOS and macOS
✗ Xcode installation is incomplete; a full installation is necessary for iOS development.
Download at: https://developer.apple.com/xcode/download/
Or install Xcode via the App Store.
Once installed, run:
sudo xcode-select --switch /Applications/Xcode.app/Contents/Developer
✗ CocoaPods installed but not initialized.
CocoaPods is used to retrieve the iOS and macOS platform side's plugin code that responds to your plugin usage on the Dart side.
Without CocoaPods, plugins will not work on iOS or macOS.
For more info, see https://flutter.dev/platform-plugins
To initialize CocoaPods, run:
pod setup
once to finalize CocoaPods' installation.
[✗] iOS tools - develop for iOS devices
✗ libimobiledevice and ideviceinstaller are not installed. To install with Brew, run:
brew update
brew install --HEAD usbmuxd
brew link usbmuxd
brew install --HEAD libimobiledevice
brew install ideviceinstaller
✗ ios-deploy not installed. To install:
brew install ios-deploy
[✓] Android Studio (version 3.4)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin version 36.1.1
• Dart plugin version 183.6270
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1343-b01)
[✓] VS Code (version 1.36.1)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.2.0
[✓] Connected device (1 available)
• Pixel XL • HT6C90202587 • android-arm64 • Android 8.0.0 (API 26)
! Doctor found issues in 2 categories.
```
| platform-android,d: examples,a: existing-apps,d: wiki,P3,team-android,triaged-android | low | Major |
473,225,055 | TypeScript | tsc and tsserver have different ideas of excessively deep types | <!-- 🚨 STOP 🚨 𝗦𝗧𝗢𝗣 🚨 𝑺𝑻𝑶𝑷 🚨
Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Even if you think you've found a *bug*, please read the FAQ first, especially the Common "Bugs" That Aren't Bugs section!
Please help us by doing the following steps before logging an issue:
* Search: https://github.com/Microsoft/TypeScript/search?type=Issues
* Read the FAQ: https://github.com/Microsoft/TypeScript/wiki/FAQ
Please fill in the *entire* template below.
-->
<!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. -->
**TypeScript Version:**
`typescript-3.6.0-insiders.20190725`
https://typescript.visualstudio.com/cf7ac146-d525-443c-b23c-0d58337efebc/_apis/build/builds/37759/artifacts?artifactName=tgz&fileId=2A7E3F3E93DD6F833D63A63F3A7A21707B3F57B08FCA5C93A8F1848556D8141F02&fileName=/typescript-3.6.0-insiders.20190725.tgz
Taken from,
https://github.com/microsoft/TypeScript/pull/32028#issuecomment-515234327
**Search Terms:**
max instantiation count, max instantiation depth, tsc, tsserver
**Code**
I've reduced the 40-subproject monorepo to just small parts of 2 subprojects.
Here is a snippet of the problem,
```ts
/**
*
* ```json
* //tsconfig-base.json
* "disableSourceOfProjectReferenceRedirect": true,
* ```
*
* ```json
* //package.json
* "typescript": "https://typescript.visualstudio.com/cf7ac146-d525-443c-b23c-0d58337efebc/_apis/build/builds/37759/artifacts?artifactName=tgz&fileId=2A7E3F3E93DD6F833D63A63F3A7A21707B3F57B08FCA5C93A8F1848556D8141F02&fileName=/typescript-3.6.0-insiders.20190725.tgz"
* ```
*
* -----
*
* VS Code: OK!
* Type inference OK!
*
* -----
*
* `tsc` gives me,
*
* ```js
* src/blah.ts:11:21 - error TS2589: Type instantiation is excessively deep and possibly infinite.
*
* 11 export const json = tm.deepMerge(
* ~~~~~~~~~~~~~
* 12 base.json,
* ~~~~~~~~~~~~~~
* ...
* 31 s.convertBlahType.toStr.BLAH
* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* 32 );
* ~
* ```
*/
export const json = tm.deepMerge(
/*snip*/
);
const x = json("", "");
if (x.foo != undefined) {
if (x.foo.bar != undefined) {
//Correctly infers `string`
//So, 14M type instantiations is okay with VS code
x.foo.bar.baz
}
}
```
**Expected behavior:**
If `tsc` gets the error,
```
TS2589: Type instantiation is excessively deep and possibly infinite.
```
Then `tsserver` should also get the error
**Actual behavior:**
`tsc` gets the error, but `tsserver` does not (or at least VS code infers the type correctly without errors)
**Playground Link:**
-None-
@sheetalkamat
The project involved contains code related to a company project.
So, I really do not want to upload it publicly, if possible.
I could mangle the variable names so that they're meaningless but if I can just email the parts of the project that are involved, that would be nice.
Repro steps:
1. `npm install`
1. `npm run build`
1. See `TS2589: Type instantiation is excessively deep and possibly infinite.`
1. Open VS code
1. Navigate to file `tsc` says contains errors
1. Notice no errors
1. Play with return type of function
1. Notice inference works correctly
**Related Issues:**
The merged PR that introduced the error,
https://github.com/microsoft/TypeScript/pull/32079#issuecomment-515334873
The build I am testing,
https://github.com/microsoft/TypeScript/pull/32028
Also relevant, https://github.com/microsoft/TypeScript/issues/29511
It seems like every few versions, there ends up being a difference between `tsc` and `tsserver`, regarding the max instantiation depth
| Needs Investigation | low | Critical |
473,231,003 | go | x/tools/internal/imports: redundant import name is not removed | ### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.12.6 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/nd/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/nd/go"
GOPROXY=""
GORACE=""
GOROOT="/home/nd/go/go1.12.6"
GOTMPDIR=""
GOTOOLDIR="/home/nd/go/go1.12.6/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD="/home/nd/p/golang-tools/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build074254564=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
I've ran goimports built on revision 2e34cfcb95cb3d24b197d58fe6d25046b8f25c86 on the following file:
```
package main
import opentracing "github.com/opentracing/opentracing-go"
func main() {
println(opentracing.Binary)
}
```
### What did you expect to see?
The redundant import alias is removed.
### What did you see instead?
Nothing is changed.
This looks like it was introduced by 9bea2ecb9504b47b0689751119be8b92a371a771: the [imp.name != "" check](https://github.com/golang/tools/blob/master/internal/imports/fix.go#L425) prevents the redundant import name from being cleaned up.
473,282,908 | pytorch | Hanging on when one gpu node return zero as loss in the context of distributed data parallel training | ## 🐛 Bug
Hello.
I am training a multi-task network on a cluster using 8 GPUs with `Distributed Data Parallel`.
On every iteration, I process one image per GPU.
Suppose I have `task_a` and `task_b` with `loss_a` and `loss_b` respectively.
In some cases, `loss_a` can be zero on one GPU, which means that no loss was computed there.
In that case I just return a zero tensor.
When this situation happens, the code just hangs.
- PyTorch Version: 1.1.0
- OS (e.g., Linux): Ubuntu
- How you installed PyTorch (`conda`, `pip`, source): docker conda
- Python version: 3.6
- CUDA/cuDNN version: CUDA 9.0
- GPU models and configuration: 1080ti
- Any other relevant information:
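To make the failure mode concrete, here is a pure-Python analogy (no torch involved; `run_ranks` and the barrier are illustrative stand-ins for DDP's gradient all-reduce, which every rank must enter):

```python
import threading

def run_ranks(losses, timeout=0.2):
    """Simulate one DDP step; each entry in `losses` is one rank's loss."""
    barrier = threading.Barrier(len(losses), timeout=timeout)
    results = [None] * len(losses)

    def rank(i, loss):
        if loss == 0.0:
            # This rank returns a detached zero loss: no backward pass,
            # so it never enters the gradient all-reduce.
            results[i] = "skipped"
            return
        try:
            barrier.wait()  # stands in for DDP's gradient all-reduce
            results[i] = "reduced"
        except threading.BrokenBarrierError:
            results[i] = "hang"

    threads = [threading.Thread(target=rank, args=(i, loss))
               for i, loss in enumerate(losses)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

print(run_ranks([1.0, 1.0]))  # ['reduced', 'reduced']
print(run_ranks([1.0, 0.0]))  # ['hang', 'skipped']
```

With losses `[1.0, 0.0]`, rank 1 skips the collective and rank 0's wait times out; this mirrors the real hang, except the real all-reduce has no timeout, so it blocks forever.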
| oncall: distributed,triaged | low | Critical |
473,300,204 | rust | Iterator::skip is not zero-cost in some cases (badly optimized by LLVM) | See https://rust.godbolt.org/z/DkMgKv for full, commented source and the generated assembly.
---
rustc doesn't optimize the iterator `buffer.windows(size).skip(1)` well.
You can very easily outperform the Rust compiler by writing an imperative loop with a boolean flag whether the current element is the first.
Consider the function:
```rust
fn inner_buffer_ngrams_bad<'a>(&mut self, buf: &'a [u8]) -> &'a [u8] {
let order = self.order.get() as usize; // Note: self.order is a NonZeroU8
let mut last = buf;
for win in buf.windows(order).skip(1) {
self.add_ngram(last);
last = win;
}
last // return trailing bytes to prepend to next chunk
}
```
The intent of this function is to iterate over `order`-sized windows of `buf` and return any unprocessed data. We specifically don't want to process the last window in the `buf`, but since there is no useful method that would allow us to split off the last element in the iterator, we instead skip the first element and create a mutable variable to lag behind by one iteration.
This implementation is correct since `self.add_ngram` ignores trailing data in a slice, so passing the whole buffer on the first iteration is fine.
Sadly, this implementation is extremely poorly optimized. If you graph the generated assembly you can see two weirdly-interwoven loops, **with two bounds checks** inside those loops.
---
Reimplementing this function with a boolean flag instead of `.skip(1)` results in this code:
```rust
fn inner_buffer_ngrams<'a>(&mut self, buf: &'a [u8]) -> &'a [u8] {
let order = self.order.get() as usize; // Note: self.order is a NonZeroU8
let mut last = buf;
let mut is_first = true;
for win in buf.windows(order) {
if !is_first {
self.add_ngram(last);
last = win;
}
is_first = false;
}
last // return trailing bytes to prepend to next chunk
}
```
I intentionally did as little refactoring as possible to make it obvious the code is equivalent to the previous example.
The assembly this code generates is almost three times shorter, only contains a single loop, and contains **no bounds checks**. The assembly looks like something you'd write by hand.
---
Note: All of the above applies to `opt-level = 2` and `opt-level = 3`. This does not apply to `opt-level = 1` since it does little to no inlining. | A-LLVM,I-slow,C-enhancement,T-compiler,A-iterators | low | Major |
473,322,507 | flutter | The default margin right spacing in AppBar actions is too wide | The default right-margin spacing in AppBar actions is too wide.
Is there a way to modify the margin width?
Thanks. | c: new feature,framework,f: material design,a: quality,c: proposal,P3,team-design,triaged-design | low | Minor |
473,335,292 | scrcpy | Recorded or Displayed video puts some older frames after more recent frames – [feature request] option to select encoder | I downloaded scrcpy 1.9 on 7/24/2019 with the goal of recording video from my Droid Turbo onto my Windows 7 x64 PC. When I connect with USB and run scrcpy and have it display to my screen, and the video from the phone has a lot of motion, it's common for the display from scrcpy to periodically appear to show maybe 1 frame that probably would have been the correct frame to display about 1/10 of a second ago. So whatever motion was happening appears to jump backwards to where it would have been maybe 1/10th of a second ago for just one frame and then jump back to what the phone is currently displaying; it's a very jarring appearance. Depending on the size I am displaying with scrcpy, these frame jumps may happen as frequently as once a second. If I reduce the display size very small like scrcpy --max-size 400, the frame jumps are less frequent, but they still happen occasionally, maybe a few times every 10 seconds.
I have had my phone display its CPU and memory usage while running this, and it seems to display around 15.00/14.00/14.00 as the CPU usage and the bar only seems to go around 15% across the screen so I interpret that as 15% CPU usage. The phone memory remains with over 1 GB free while this happens. My PC CPU usage is only around 5% while it happens, and only 4 out of 8 GB of memory are in use; these are when I'm running scrcpy --max-size 1920.
I also tried having scrcpy do the file recording. I ran one with mp4 and one with mkv, with the --no-display option so it wouldn't need to do double work, but the mp4 and mkv files it created still have the same kinds of frame jumps I would see in the live display. I play the files in VLC, and I've checked that I'm not just getting some kind of playback issue with the mp4 and mkv files by advancing individual frames in VLC and I can still see the motion get jumped backwards for a frame.
I do not usually run it with --render-expired-frames, and when I've tried it, the frame jumps seem just as frequent, possibly more frequent. When I use ctrl+i, I often see skipped frames, but I am fairly certain I have observed these frame jumps in the display even during periods where no skipped frames are logged. The display on the phone itself always looks perfect during the whole thing, and the game doesn't feel any less responsive than it would while scrcpy was not running.
Do you have any suggestions to get it where it doesn't end up displaying or recording an older frame after it has already displayed/recorded a newer frame? I can probably share the mp4 and mkv file with google drive or something later if that would help. | feature request | low | Major |
473,336,852 | pytorch | Unreachable code in tanh | ## 🐛 Bug
In https://github.com/pytorch/pytorch/blob/9a281451ed7f00d052212695ea112ebe872776f6/caffe2/quantization/server/tanh.cc#L17-L48
all code after line 22 is unreachable. This doesn't seem to be intentional (maybe someone added it for testing and forgot to remove it?).
| caffe2,triaged | low | Critical |
473,374,446 | pytorch | [Feature request] Let DistributedSampler take a Sampler as input | ## 🚀 Feature
<!-- A clear and concise description of the feature proposal -->
## Motivation
Currently, `DistributedSampler` assumes that it takes a `Dataset` as argument. But in reality, the only information it exploits from it is its `len`.
We sometimes want to have a custom Sampler to be used in distributed mode. So it might be desirable to also let `DistributedSampler` take a `Sampler` as argument.
## Potential implementation
The only difference is that in
https://github.com/pytorch/pytorch/blob/46224ef89edcd0e350d1572bb4d12d370245cc6e/torch/utils/data/distributed.py#L57-L61
We would additionally have something like
```python
if isinstance(self.dataset, Sampler):
orig_indices = list(iter(self.dataset))
indices = [orig_indices[i] for i in indices]
return iter(indices)
```
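To make the index-partitioning concrete, here is a torch-free sketch of the same logic (the class name `DistributedOverSampler` is made up for illustration; the real `DistributedSampler` additionally handles shuffling and per-epoch seeding, and the wrapped sampler must support `len()`):

```python
import math

class DistributedOverSampler:
    """Wrap any sized iterable of indices and partition it across replicas."""

    def __init__(self, sampler, num_replicas, rank):
        self.sampler = sampler
        self.num_replicas = num_replicas
        self.rank = rank
        self.num_samples = math.ceil(len(sampler) / num_replicas)
        self.total_size = self.num_samples * num_replicas

    def __iter__(self):
        indices = list(iter(self.sampler))
        # pad so the list divides evenly across replicas
        indices += indices[: self.total_size - len(indices)]
        # each rank takes a strided slice of the wrapped sampler's order
        return iter(indices[self.rank : self.total_size : self.num_replicas])

    def __len__(self):
        return self.num_samples

base = [4, 2, 7, 5, 1]  # the order produced by some custom Sampler
shards = [list(DistributedOverSampler(base, 2, r)) for r in range(2)]
print(shards)  # [[4, 7, 1], [2, 5, 4]]
```

Note how the custom order `[4, 2, 7, 5, 1]` is preserved and merely split (with padding) between the two ranks, which is exactly the behaviour the proposal asks for.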
## Pitch
More modularity and code reuse
```python
sampler = MyNiceSampler(dataset)
if distributed:
sampler = DistributedSampler(sampler)
```
Additionally, it makes the code clearer (in my view). Instead of
```python
if distributed:
sampler = DistributedSampler(dataset)
else:
sampler = RandomSampler(dataset)
```
we can always have
```python
sampler = RandomSampler(dataset)
if distributed:
sampler = DistributedSampler(sampler, shuffle=False)
```
which, at first sight, might seem very similar, but implies different things.
## Alternatives
We can integrate the functionality of `DistributedSampler` inside our custom sampler, but this seems redundant.
cc @ezyang @gchanan @zou3519 @bdhirsh @jbschlosser @SsnL @VitalyFedyunin @ejguan @NivekT @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang | oncall: distributed,feature,module: dataloader,triaged,has workaround | high | Critical |
473,389,603 | rust | Tracking issue for `fs::Metadata` extensions on Windows based on handle information | This is a tracking issue for APIs added in https://github.com/rust-lang/rust/pull/62980, namely the following Windows-specific APIs:
```rust
impl MetadataExt for Metadata {
fn volume_serial_number(&self) -> Option<u32>;
fn number_of_links(&self) -> Option<u32>;
fn file_index(&self) -> Option<u64>;
}
```
The motivation of these accessors are that the standard library learns about them in its call to `GetFileInformationByHandle` but doesn't expose it from the standard library, which means consumers that want to learn this either need to reimplement the pieces in the standard library or execute a second syscall. Concrete motivation for these fields arose during https://github.com/CraneStation/wasi-common/pull/42.
These fields all return `Option`, which is a bit odd, but there are a few ways of learning about a `Metadata` on Windows. When learned through a `DirEntry` these fields aren't available, but when learned through a `fs::metadata` call or `File::metadata` they are. It's unfortunately not as uniform as Unix, but we don't have a ton of options either. | O-windows,T-libs-api,B-unstable,C-tracking-issue,A-io,Libs-Tracked | low | Major |
473,431,185 | rust | Tracking issue for -Z binary-dep-depinfo | This is a tracking issue for `-Z binary-dep-depinfo` added in #61727.
The cargo side is implemented in https://github.com/rust-lang/cargo/pull/7137.
Blockers:
- [ ] Canonicalized paths on Windows. The dep-info file includes a mix of dos-style and extended-length (`\\?\`) paths, and I think we want to use only one style (whatever is compatible with make and other tools). See the PR for details.
- [ ] Codegen backends are not tracked.
cc @Mark-Simulacrum @alexcrichton
| T-compiler,B-unstable,C-tracking-issue,requires-nightly,S-tracking-design-concerns,S-tracking-needs-summary,A-CLI | medium | Major |
473,449,705 | vscode | [folding] When you cut a collapsed code block it should stays collapsed when you paste/drop it. | I have searched the issues and requests and I don't see this one anywhere. And it seems like too good of an idea to pass up.
While you can cut a collapsed code block and paste it elsewhere, when you paste it, it "uncollapses."
It would be really cool and useful if the code remained collapsed after you pasted it.
Thanks.
| feature-request,editor-folding | low | Minor |
473,453,413 | godot | Errors hidden in error tab | **Godot version:**
Godot 3.1.1
**Issue description:**

**Steps to reproduce:**
Produce an error that is longer than the error window.
**Possible fix:**
* The tooltip allows to see the whole error (Already implemented)
* Side scroll / shift+scroll (Not implemented)
* Text wrapping (Not implemented) | bug,topic:editor,confirmed,usability | low | Critical |
473,456,265 | pytorch | einsum equation with conditional mask works in numpy but not in PyTorch | ## 🐛 Issue Description
I have a snippet of code that works when I use ``np.einsum``, but its PyTorch equivalent throws an error for the conditional mask ``A==1.0``. Is this a bug, or is this behaviour expected?
## To Reproduce
Steps to reproduce the behavior:
Input:
```
A_np = np.array([[0,1,1],[1,0,1],[1,1,0]])
B_np = np.array([[1,1,1],[1,1,1],[1,1,1]])
```
NumPy Code for matrix multiplication using ``einsum``:
```
C_np = np.einsum('ij,jk->ik', B_np, A_np==1.0)
```
Output:
```
>>> C_np
array([[2., 2., 2.],
[2., 2., 2.],
[2., 2., 2.]], dtype=float32)
```
Equivalent PyTorch Code:
```
C = torch.einsum('ij,jk->ik', B, A==1.0)
```
Error Message:
```
return torch._C._VariableFunctions.einsum(equation, operands)
RuntimeError: expected scalar type Float but found Byte
```
## Environment
```
PyTorch version: 1.0.1.post2
Is debug build: No
CUDA used to build PyTorch: 10.0.130
OS: CentOS Linux release 7.6.1810 (Core)
GCC version: (GCC) 4.8.5 20150623 (Red Hat 4.8.5-36)
CMake version: Could not collect
Python version: 3.6
Is CUDA available: No
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Versions of relevant libraries:
[pip] numpy==1.16.2
[pip] torch==1.0.1.post2
[pip] torchvision==0.2.2
[conda] _tflow_select 2.3.0 mkl
[conda] blas 1.0 mkl
[conda] mkl 2019.1 144
[conda] mkl_fft 1.0.10 py36ha843d7b_0
[conda] mkl_random 1.0.2 py36hd81dba3_0
[conda] pytorch 1.0.1 py3.6_cuda10.0.130_cudnn7.4.2_2 pytorch
[conda] tensorflow 1.12.0 mkl_py36h69b6ba0_0 anaconda
[conda] tensorflow-base 1.12.0 mkl_py36h3c3e929_0
[conda] torch 1.0.1.post2 pypi_0 pypi
[conda] torchvision 0.2.2 py_3 pytorch
```
cc @vincentqb @vishwakftw @jianyuh @nikitaved @pearu @mruberry @heitorschueroff @nairbv | triaged,module: type promotion,module: linear algebra,function request | low | Critical |
473,477,568 | rust | SIGSEGV during compilation of extern x86-interrupt fn with u128 param | ```rust
#![feature(abi_x86_interrupt)]
pub extern "x86-interrupt" fn main (_a: u128) {}
```
I suspect LLVM being the culprit. | I-crash,A-LLVM,O-x86_64,T-compiler,C-bug,requires-nightly,O-x86_32,F-abi_x86_interrupt,A-hardware-interrupts | low | Minor |
473,489,418 | go | x/net/http2: call conn.Handshake before accessing ConnectionState | ### What version of Go are you using (`go version`)?
<pre>
$ go version
go1.12.7 windows/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
set GOARCH=amd64
set GOBIN=
set GOCACHE=C:\Users\schlopeki\AppData\Local\go-build
set GOEXE=.exe
set GOFLAGS=
set GOHOSTARCH=amd64
set GOHOSTOS=windows
set GOOS=windows
set GOPATH=C:\Users\schlopeki\go
set GOPROXY=
set GORACE=
set GOROOT=C:\Go
set GOTMPDIR=
set GOTOOLDIR=C:\Go\pkg\tool\windows_amd64
set GCCGO=gccgo
set CC=gcc
set CXX=g++
set CGO_ENABLED=1
set GOMOD=C:\Users\schlopeki\Projects\doh\go.mod
set CGO_CFLAGS=-g -O2
set CGO_CPPFLAGS=
set CGO_CXXFLAGS=-g -O2
set CGO_FFLAGS=-g -O2
set CGO_LDFLAGS=-g -O2
set PKG_CONFIG=pkg-config
set GOGCCFLAGS=-m64 -mthreads -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=C:\Users\SCHLOP~1\AppData\Local\Temp\go-build767447546=/tmp/go-build -gno-record-gcc-switches
</pre></details>
### What did you do?
I attempted to make a simple HTTP/2-only client/server, but the server rejects all connections with "INADEQUATE_SECURITY, TLS version too low". When inspecting the TLS state, the connection claims to have a TLS version of 0, but in Wireshark I see that a TLS 1.2 negotiation has already started.
If I manually perform the TLS handshake the http2.Server.ServeConn no longer hangs up the connection.
Here is the minimal server:
```go
package main
import (
"crypto/tls"
"log"
"net/http"
"golang.org/x/net/http2"
)
func main() {
cert, err := tls.LoadX509KeyPair("server.crt", "server.key")
if err != nil {
panic("Cannot load cert")
}
tlsCfg := &tls.Config{
Certificates: []tls.Certificate{cert},
NextProtos: []string{http2.NextProtoTLS},
}
srv := http2.Server{}
srvOpt := &http2.ServeConnOpts{
Handler: http.HandlerFunc(handler),
BaseConfig: &http.Server{TLSConfig: tlsCfg},
}
listener, err := tls.Listen("tcp", "127.0.0.1:8942", tlsCfg)
if err != nil {
panic("Cant listen")
}
for {
conn, err := listener.Accept()
if err != nil {
panic("connection error")
}
// Uncommenting the next 2 lines make the server work
// tlsConn := conn.(*tls.Conn)
// tlsConn.Handshake()
go func() { srv.ServeConn(conn, srvOpt) }()
}
}
func handler(wrt http.ResponseWriter, req *http.Request) {
if req.ProtoMajor != 2 {
wrt.WriteHeader(500)
flusher, _ := wrt.(http.Flusher)
flusher.Flush()
log.Printf("Not HTTP2")
return
}
wrt.WriteHeader(200)
flusher, _ := wrt.(http.Flusher)
flusher.Flush()
}
```
Modern versions of Chrome/Firefox should be able to test this server. A testing server.crt and server.key need to be created, and I used a hosts file to point test.local at the loopback address.
### What did you expect to see?
The connection would have already completed the TLS handshake once listen.Accept returned it for use, or http2.Server.ServerConn would be able to see that the TLS version is at least negotiated.
### What did you see instead?
When passing the connection directly from Accept to the http2.Server.ServeConn, the TLS state of the connection is not set. This results in ServeConn seeing a TLS version of 0 and therefore it sends GOAWAY and kills the connection.
| NeedsFix | low | Critical |
473,513,265 | flutter | SceneBuilder::addPlatformView offset parameter is ignored. | Currently, the offset parameter is only used to specify the paint bounds of the platform view layer in flow. However, since the platform view layer is a leaf, this is effectively a no-op. The expectation is that this offset would be propagated to the embedder in the effective embedded view params. However, the offset in EmbeddedViewParams is currently only determined from the effective transformation matrix.
Potential fixes:
* Remove the offset from the API. This is a breaking API change but this API is the lowest layer in the stack and currently has no users (or at least the current calls don't do anything).
* Propagate the offset to the embedder by adding it to the offset in the embedded view params. This will require patching existing plugins to account for this extra offset.
* Do nothing but document that this offset is ignored.
I like the first option. | platform-ios,engine,a: platform-views,P2,team-ios,triaged-ios | low | Major |
473,516,309 | node | proposal: new release blog | ## TL;DR
To increase visibility around new features & fixes, I propose a creating a user-focused, human-readable _blog_ which aims to answer not just _what_ was released, but also “who might find this important?” and “how do I use this?”
## Problem
Node.js’ official “blog” is an HTML conversion of `CHANGELOG.md`:

As you can see, the text is terse and details are usually scant.
For example, what is `response.writableFinished` for? Who would want to use this feature? And for those who do, _how_ do they use this feature?
The entries above are tagged `notable-change`—implying notability—which is why they appear at the beginning of the post. This is a useful way to separate the changes that _a user might care about_ from the changes that a user likely _won’t_ care about. However, the mere presence of the label is insufficient; it _does not_ and _cannot_ contain context nor rationale. In practice, the list above is a mishmash of changes, and it’s difficult for a casual observer (or _any_ observer, really) to sift out what’s personally relevant.
As a result, changes often get missed by Node.js users. Proliferation of new feature adoption, for instance, relies mainly on word-of-mouth and community blog posts.
Node.js could better serve its userbase by highlighting new features, breaking changes, and fixes, and doing so in an “official” capacity.
## A Proposed Solution
At present time, while we may be able to _automate_ providing “more” information to users (via expanded commit message requirements, metadata, etc.), we should not go this route. Crucially, a script will lack information about the _relative importance_ of two different changes. This impacts how an editor chooses to organize a post—for example, by presenting change _A_ before change _B_; or grouping related changes across unrelated subsystems.
Given sufficient writing skills, a human editor will produce more _well-organized_, _relevant_, _cohesive_ and _relatable_ articles than a script.
Instead, producing a post must be a joint effort between collaborators and the editor(s).
Much/most/less-than-none of the necessary information _already exists_ around the PR and its referenced issue(s). But in the spirit of efficiency, we need to consolidate it.
### Of collaborators, I’d like to ask this:
Because the editor of the post cannot possibly know every detail of every change...
When tagging a PR `notable-change`—whether or not you are the author—please update the PR description (this could also happen when creating the PR) with this information (if it does not already exist):
- **Which users might care about this.** Do they deploy Node.js? Do they build tools? Do they compile native modules? Are they SREs? Do they perform post-mortem debugging? Anything here is better than nothing, even if incomplete.
If you’re unable to answer this question, it might be that the _change is not notable._
- **Motivation.** …but why?
- **Example code** (if appropriate). If example usage can be _easily derived_ from a test, please _link to it_ with line numbers—don’t make the editor search for it.
- **Dependency Upgrade?** If this is an upgrade of an upstream package, provide a link to the release in that package’s release notes or CHANGELOG. If there’s a specific fix or feature Node.js is looking to pull in, mention it.
- **Breaking Changes!** Provide workarounds or migration path—anything that might help those affected.
- **Who are you?** Add your GitHub username indicating that you’ve added this information, so we know who to talk to if we have questions about what you’ve written.
_In lieu of_ any of the above, a link to the relevant information is acceptable, but if it’s not a wall of text, **please** copy/paste so editors don't have to hunt around so much.
We can also provide a template for this information. Maybe put it in a GitHub PR template? Might be easier to knock all this stuff out at creation time, depending on how much the PR changes before merging.
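A possible starting point for such a template (the wording here is just a sketch, not a settled format):

```markdown
<!-- Fill this out if (or when) this PR is tagged `notable-change` -->
#### Notable-change details
- **Who might care about this:** <!-- deployers? tool authors? native-module builders? SREs? -->
- **Motivation:**
- **Example code:** <!-- or a link to a test, with line numbers -->
- **Dependency upgrade?** <!-- link to the upstream release notes / CHANGELOG -->
- **Breaking change?** <!-- workarounds or migration path -->
- **Filled in by:** @your-github-username
```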
**Don’t worry!** You _don’t_ have to be Shakespeare. Nobody is expecting perfect spelling, grammar, or even complete sentences. You aren’t writing for a huge audience—you’re writing for the editors of the blog post. Focus on getting your point across, and editors can ask questions if needed.
### Of editor(s):
- Make sure you understand what’s going in to the next release. If there are notable changes missing information, ask for it (nicely).
- You will edit, distill, and _likely just rewrite_ what collaborators have written. It’s your responsibility to put the pieces together in a way that makes sense.
- Be prepared for a crash course about Node.js’ many subsystems that you may be unfamiliar with.
- Ask questions if you don’t understand. If you don’t understand, it’s _highly likely_ somebody else doesn’t understand.
- Collaborate with the [release team](https://github.com/nodejs/release) if the need arises. I’m not sure why it would, necessarily, but they might have more information than the obvious about what’s going into the next release.
- Decide with other editors (if there are any) on the formatting and structure of the posts.
- You may choose whether or not to omit a change from the post if you are unconvinced it’s “notable.” Use your judgement, but explain your reasoning to the collaborator(s).
- Field any followup questions/comments on the post; if you don’t know the answers, you can choose to point them to someone who can help the commenter, or offer to “find out” and get back to the commenter at a later time.
- Work with [somebody] to ensure the blog post makes the rounds on social media.
- Settle on a cadence. Within a few days after a minor release? A week? What of patch releases and security fixes? How much time do we have before the final change targets a release before that release is cut?
- _You_ aren’t Shakespeare either, but you are confident in your written English.
- Get somebody to review the posts before publishing—unless nobody wants to help you, then just publish out of spite.
### Where do we post these articles?
We could:
1. Use the existing [blog](https://nodejs.org/en/blog/), or
2. Create a new blog somewhere on nodejs.org, or
3. Not use nodejs.org at all
I’d say go with option 1 until somebody complains that we’re polluting their feed of Node.js release notes with our dreck.
At minimum, we should syndicate these posts to [dev.to](https://dev.to). That’s where the “engagement” is going to happen, unless we set up a “proper” blogging platform elsewhere (and that takes more time/energy, _and_ this is already a big ask).
### Who writes them?
I’ll start, but I could _absolutely_ use help, because it’s probably going to be a lot of work. And probably more work than I think it’s going to be.
I will need an “official” Node.js account on dev.to (if one does not already exist), or be added to a team, or however that works.
### Example Post Content
I’m not going to write out an entire post right now, but here’s how I, as a user, would want to learn about [#28568](https://github.com/nodejs/node/pull/28568) (note that I'm skipping some use cases, and apologies to @guybedford if I get this wrong):
* * *
## New Feature: Package Exports (EXPERIMENTAL)
### Summary
An `exports` property can be added to `package.json` in which a package author can provide *aliases* to internal modules.
### Who It’s For
Primarily library authors
### Description
Some packages have a larger-than-average API surface. For these, like `rxjs` or `lodash`, a consumer may want to “reach in” to a package and pull out only what they need; or this may be the _recommended_ way (e.g., `rxjs/operators`). For example:
```js
// everything exported via some-package's "main" module gets loaded
const {map} = require('some-package').operators;
// but all we wanted was map(), so we can do this instead
const map = require('some-package/lib/operators/map.js');
```
You may have noticed that the second form requires the consumer to understand the internal directory structure of `some-package`. Package Exports aims to provide an abstraction to hide these _implementation details_ from the consumer. If `some-package` had the following in its `package.json`:
```json
{
"exports": {
"./map": "./lib/operators/map.js"
}
}
```
Then a consumer could instead write:
```js
const map = require('some-package/map');
```
Before this change, libraries like `rxjs` would provide this behavior by creating “stub” module aliases; for example, when you `require('rxjs/operators')`, you are actually loading `node_modules/rxjs/operators/index.js`, which manually re-exports _every_ module within `node_modules/rxjs/internal/operators/`.
`rxjs` could choose to eliminate its `operators` folder entirely using Package Exports.
#### Use with ESM
When used with ES modules, an `exports` entry can point to a directory:
```json
{
"exports": {
"./operators/": "./lib/operators/"
}
}
```
The above works a bit like [import maps](https://github.com/WICG/import-maps#packages-via-trailing-slashes), where a consumer can now write:
```js
// node_modules/some-package/lib/operators/map.js
import map from 'some-package/operators/map.js';
```
For more information, check out [the full text of the proposal](https://github.com/jkrems/proposal-pkg-exports/).
[Trivial edit by @Trott to fix a broken link. Hey, now that GitHub keeps edit history, maybe we should remove the requirement for these comments?] | discuss,meta | medium | Critical |
473,521,224 | flutter | Better support for migrating existing localization schemes from Android/iOS to Flutter | Given a user who already has an iOS style Localizable.strings or a collection of Android strings.xml, create a conversion tool or an import tool for using them in Flutter. | tool,a: internationalization,a: existing-apps,P3,team-tool,triaged-tool | low | Major |
473,523,727 | go | gccgo: bad "incompatible types" error when comparing untyped expressions | gccgo incorrectly reports `error: incompatible types in binary expression` for this test program:
```
package main
func main() {
var v uint = 32
println(1 << v == '\002' << v)
}
```
The Go spec says:
> If the left operand of a non-constant shift expression is an untyped constant, it is first implicitly converted to the type it would assume if the shift expression were replaced by its left operand alone.
So `1` and `'\002'` should take on the types that they would in the comparison `1 == '\002'`, which is `rune`/`int32`.
For comparison, the program prints `true` when compiled with cmd/compile, and it type checks with go/types.
/cc @ianlancetaylor | NeedsInvestigation | low | Critical |
473,524,986 | go | x/net/websocket: override the Host header | ### What version of Go are you using (`go version`)?
Relevant as of today's `master` of `x/net/websocket` (see below).
### Does this issue reproduce with the latest release?
Yes.
### What did you do?
Tried changing the `Host` header appearing in the initial WebSocket query by creating a config through `websocket.NewConfig` and changing its host through `Header.Set("Host", "example.com")`.
### What did you expect to see?
That the `Host` header would be set to `example.com`.
### What did you see instead?
That the `Host` header was `config.Location.Host`.
In the current `master`, this is set on [line 411 of `hybi.go`](https://go.googlesource.com/net/+/refs/heads/master/websocket/hybi.go#411), in `hybiClientHandshake`.
Is this the intended API? It means that it becomes impossible to change the `Host` header without also changing the `config.Location.Host`, which might be undesirable (in my use case because it has the undesired side effect of changing the TLS SNI field, which is also based on `config.Location.Host`, and I can't have the SNI host and the WebSocket hosts be synchronized like that).
Given that there's a field called `Header`, I would have expected that to impact what headers end up being set; I would have been less surprised if `hybiClientHandshake` had used `Header.Get("Host")` if that header is set, and then fall back to `config.Location.Host` if it's not (similar to what you would see with e.g. curl).
| NeedsInvestigation | low | Minor |
473,545,240 | flutter | Flutter/* generated files are created for plugins | I haven't tracked down the exact trigger, but I frequently end up with `macos/Flutter/GeneratedPluginRegistrant.swift` and `macos/Flutter/ephemeral/Flutter-Generated.xcconfig` in plugins after building a project that uses them. I notice this all the time with FDE plugins because `testbed` includes them by path, not fetching them from pub, but presumably it's happening all the time (harmlessly for pub-fetched plugins, but pointlessly).
I suspect given the amount of code-sharing that this is happening for iOS plugins too, but that nobody has noticed. | tool,platform-mac,a: desktop,P3,team-macos,triaged-macos | low | Minor |
473,559,281 | kubernetes | Swagger spec: Incorrect response model for namespaces/{namespace}/pods/{name}/binding | Swagger definition for "/api/v1/namespaces/{namespace}/pods/{name}/binding" is incorrect.
The current definition is:
``` json
"/api/v1/namespaces/{namespace}/pods/{name}/binding": {
"parameters": [
... ],
"post": {
...
"responses": {
"200": {
"description": "OK",
"schema": {
"$ref": "#/definitions/v1.Binding"
}
},
```
But the "/api/v1/namespaces/{namespace}/pods/{name}/binding" API actually returns a Status object, not a v1.Binding.
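A sketch of what the corrected response section might look like (assuming the 200 response should reference the `v1.Status` model instead):

```json
"responses": {
  "200": {
    "description": "OK",
    "schema": {
      "$ref": "#/definitions/v1.Status"
    }
  }
}
```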
This was determined while attempting to write a Python application that acts as a custom scheduler: kubernetes-client/python#825 | kind/bug,sig/api-machinery,lifecycle/frozen | low | Major |
473,562,033 | flutter | Support multiple entrypoints to Dart via flutter driver | If apps implement multiple entrypoints to their app via `@pragma("vm:entry-point")`, the `flutter drive` command still assumes the entrypoint is `main` and fails.
We need to be able to specify the name of the entrypoint to be executed among the parameters. | tool,t: flutter driver,c: proposal,P3,team-tool,triaged-tool | low | Minor |
473,562,966 | rust | Error message referring to non-existent variable: "cannot return value referencing local variable `__next`" | Compiling the following code on stable result in an error message which is referring to a non-existent variable `__next`
```rust
use std::collections::HashMap;
use std::hash::Hash;
fn group_by<I, F, T>(xs: &mut I, f: F) -> HashMap<T, Vec<&I::Item>>
where
I: Iterator,
F: Fn(&I::Item) -> T,
T: Eq + Hash,
{
let mut result = HashMap::new();
for ref x in xs {
let key = f(x);
result.entry(key).or_insert(Vec::new()).push(x);
}
result
}
```
The error message:
```
$ rustup run stable rustc --crate-type lib lib.rs
error[E0515]: cannot return value referencing local variable `__next`
--> lib.rs:15:5
|
11 | for ref x in xs {
| ----- `__next` is borrowed here
...
15 | result
| ^^^^^^ returns a value referencing data owned by the current function
error: aborting due to previous error
For more information about this error, try `rustc --explain E0515`.
```
`rustc --version --verbose`:
```
rustc 1.36.0 (a53f9df32 2019-07-03)
binary: rustc
commit-hash: a53f9df32fbb0b5f4382caaad8f1a46f36ea887c
commit-date: 2019-07-03
host: x86_64-unknown-linux-gnu
release: 1.36.0
LLVM version: 8.0
```
| A-diagnostics,T-compiler,C-bug | low | Critical |
473,576,955 | flutter | Add OnReject to DragTarget | ## Use case
Simply need a callback to occur when a Draggable is dropped on the DragTarget and gets rejected. Currently there is only 'onAccept'. I want to trigger some animation events and functions on rejection.
## Proposal
Just add an extra property `onReject` to the DragTarget widget ('drag_target.dart' in packages/flutter/lib/widgets/) | c: new feature,framework,a: animation,P3,team-framework,triaged-framework | medium | Major |
473,642,209 | scrcpy | "pull from" mouse gesture | This really is a 'feature request', but using the scroll wheel from the mouse should initiate a swipe for the direction rolled. This would enable things like expanding the notification area, etc.
| feature request | low | Minor |
473,645,807 | flutter | CupertinoActionSheet doesn't perfectly match with native | Fidelity wise speaking, the [CupertinoActionSheet](https://api.flutter.dev/flutter/cupertino/CupertinoActionSheet-class.html) isn't pixel perfect compared with the native one, and there are also some key features that are missing. Below are some of the findings I've gathered that should be enough to make it perfectly match.
### Environment:
- **Device:** iPhone X
- **OS version:** 12.3.1
- **Font scale:** Set to minimum
_**Flutter screenshots are on the left, iOS native on the right**_
___
## Font scale factor shouldn't be allowed below 1.0 and bottom safe area is wrong
 
The bottom safe area has more pixels in Flutter than it should. Also, the font scaling factor in iOS native doesn't go below `1.0`, whereas Flutter allows it; it should be `max(1.0, fontScaleFactor)`.
## When system font scale is set to maximum, the sizes don't match properly
 
Looks like Flutter is scaling a bit more than it should. Fixing this can potentially fix some other wrong scaling across cupertino widgets.
## Opacity when selected
 
Despite the fact of having different backgrounds, it looks like Flutter is using a different opacity when selecting the item in the sheet. Plus, native allows you to keep it pressed while navigating up and down without raising the finger, providing a light haptic feedback when doing so. Flutter doesn't allow this at all.
____
### Flutter doctor
```
[✓] Flutter (Channel unknown, v1.8.0, on Mac OS X 10.14.5 18F132, locale pt-PT)
• Flutter version 1.8.0 at /Users/miguelruivo/DevTools/flutter
• Framework revision 2fefa8c731 (4 weeks ago), 2019-07-01 11:33:22 -0700
• Engine revision 45b66b722e
• Dart version 2.4.0
``` | framework,a: fidelity,f: cupertino,has reproducible steps,P2,found in release: 3.3,found in release: 3.7,team-design,triaged-design | low | Major |
473,650,308 | rust | Formatter not capable of generating underscores in long number literals | When using the `quote` crate or otherwise generating Rust code (like we do in `svd2rust`), it would be immensely helpful if there was a way to `format!` long number literals with underscore separators, so that `clippy` wouldn't throw an https://rust-lang.github.io/rust-clippy/master/index.html#unreadable_literal lint back at us when linting the generated code; currently there isn't one. | T-libs-api,C-feature-request,A-proc-macros | low | Minor |
473,655,677 | opencv | Feature request to use Intel Myriad chips for acceleration Depth Map from Stereo Images | Feature request to use Intel Myriad chips for acceleration Depth Map from Stereo Images by using Myriad SHAVES cores - to accelerate OpenCV functions: https://docs.opencv.org/4.1.1/dd/d53/tutorial_py_depthmap.html
Currently, OpenCV can use Myriad X chips for Deep Learning tasks (via OpenVINO DL IE backend), but not for other tasks like Stereo Depth processing.
A known key benefit of the Myriad X chip is **New Vision Accelerators, including Stereo Depth** (https://www.movidius.com/MyriadX), but there are no freely available examples or API for Stereo Depth processing using OpenVINO.
It would be nice to be able to use a uniform OpenCV API for the basic functions of the Myriad X chip in conjunction with a very weak CPU (Intel Atom), because the Atom CPU doesn't have enough performance to compute a Stereo Depth Map on its own at high FPS.
| feature | low | Major |
473,655,966 | terminal | FR: Customizable HTML/formatted copy | # Description of the new feature/enhancement
Text copied in formatted form, like HTML, should be customizable via settings, so that if you do a lot of copy-paste you don't have to re-format each piece to match your preferences.
All of them should be optional, with some default values. Once settings are categorized, they should go into a single category.
# Proposed technical implementation details (optional)
My propositions:
| Property | Type | Default | Description |
| --- | -- | ---- | -------- |
| `htmlWidth`| String | `"terminal"` | - `terminal` - the width of the actual terminal. <br/> - `text` - up to the last printable character. Like terminal, but with stripped blank space on the right. <br/> - `infinite` - Stretches to available space (current behaviour). <br/> - Any valid css value, e.g. `500px` |
| `htmlBackground` | String | `background`'s value | Could be anything that `background` can be, or any valid css value that would be passed to the `background` css property, like a url. Theoretically we could support stripping the actual background (like an image) and embedding it as base64. I'd like that feature, but I'm afraid it's too heavy for a while. |
| `htmlOffsetFirstCharacter` | Boolean | `true` | In a situation like this: <br/> _012`3456789`_<br/> _`ABCDEFGHIJ`_<br/> _`KLMNOPQ`RST_ <br/> where `this means selected`, true keeps the column offset of the first selected character, pasting:<br/> _&nbsp;&nbsp;&nbsp;3456789_<br/> _ABCDEFGHIJ_<br/> _KLMNOPQ_ </br>and false pastes it flush left:<br/> _3456789_<br/> _ABCDEFGHIJ_<br/> _KLMNOPQ_
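For example (the property names here are the hypothetical ones proposed above, not existing settings), a profile using these could look like:

```json
{
    "htmlWidth": "text",
    "htmlBackground": "#0C0C0C",
    "htmlOffsetFirstCharacter": true
}
```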
Other than the above, the properties `padding`, `fontFace`, `fontSize` and `colorTable` should be duplicated for the html/formatted text so they can be set separately. If not set, they would match the terminal's ones. | Issue-Feature,Help Wanted,Area-Settings,Product-Terminal | low | Minor |
473,668,034 | go | cmd/trace: problems and proposed improvements | Writing this issue out of the GopherCon 2019 contribution workshop, as a sort of umbrella task to encompass various limitations in the current `go tool trace`, and to speculate about possible solutions.
NOTE: most of these remarks are as of Go 1.11, which was the latest version that I reconciled into an experimental prototype fork. All feedback on what I've missed in the 1.12 or upcoming 1.13 trace tool code is more than welcome. As I write this issue, I'm skimming through the code on master to try to verify my statements, but the last time that I did a truly deep look was 1.11.
### Problems
1. Event Parsing: the `internal/trace` parser used by the current tool is a rather slow 2-phase parser; also being internal, it cannot be re-used by other tools interested in processing trace files
2. Event Processing: currently the trace tool has two separate state machines for processing events: one for generating [catapult](https://github.com/catapult-project/catapult/tree/master/tracing)-compatible JSON, the other for generating a statistics table
3. Lack of export: other than looking at the data in the catapult viewer, or scanning the html statistics table, there's no (easy / good) way to export data for analysis in something like the PyData stack (e.g. I've found a CSV file into a pandas dataframe to be quite effective for exploration)
4. Viewer Chunking: finally, and perhaps most superficially, most of the reason for splitting traces (chunking them into different time sections), as far as I can tell, is 100MB limits on strings in V8 wrt JSON parsing. This is largely because the catapult JSON data is typically between 10x and 100x larger than the native binary data.
### Proposed Improvements
#### Parsing (library?)
A `golang.org/x/trace` library for dealing with trace files would be great, things it could cover:
- access to the raw events themselves, out of something like a streaming event reader
- separated event sorter for restoring overall wallclock order (since events are necessarily recorded in parallel batches)
- separated higher level events, maybe with links: part of why the current parser is a 2-pass affair is that it creates convenience singly-linked chains of events on top of the raw events read from the file
Many of these improvements could be made in place inside `internal/trace`, but are perhaps better explored first / concretely proposed through a standalone personal repository, eventually perhaps for a `golang.org/x` package.
#### Event Processing (library?)
However, much of what you need when dealing with trace data is a state machine that tracks what various Go runtime objects are doing. There are at least two separate ones within the current trace tool: one for creating catapult JSON, the other for tabulating statistics.
This state machine, just like the events themselves, is highly dependent on the Go runtime, and necessarily needs updates for new versions. I expect there can be a vigorous discussion around what layer is best to export in a library: should it be the parser above (low or high level), or instead something more processed, like spans of goroutine-relevant time...
#### Data Export Mode (tool change?)
Library or not, making the trace tool able to be used in an offline way to simply transform the binary data into a useful form for consumption (by Unix tools, by Pandas, etc) has proven quite useful in my experimental fork.
One useful pivot on the raw events, is to group them into spans of goroutine-relevant time, e.g. (conversationally):
- G42 spent X time waiting after create (while runnable)
- G42 then spent X time executing on CPU
- G42 then spent X time blocking on a channel receive
- G42 then spent X time waiting to run after being unblocked by G99
- G42 then spent X time executing on CPU
- ... and so on
Basically, you can think of goroutines as going through this state machine (a direct analog of the classic [Bach 86](http://www.brendangregg.com/tsamethod.html) Unix process state machine):

A really fruitful dataset then, is a table of time spent by each goroutine in each state:
- not pre-summed as is seen in the current table of statistics
- with associated goroutine ID for relationships like creation and unblock
- also with a corresponding stack ID, that can be resolved in an ancillary table of stack frames
#### Better Trace Viewing
Now that most javascript runtimes support streaming, perhaps we could do better than static 100MB chunks.
As even more of a green field idea, maybe we could use something like http://gioui.org to build a dedicated trace viewer that wouldn't be held back by the limitations of DOM parsing or rendering. | NeedsInvestigation,compiler/runtime | low | Major |
473,676,431 | pytorch | Build error due to unintended include path /usr/include | ## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
## To Reproduce
Steps to reproduce the behavior:
1. git clone the pytorch repo and all the submodules
2. Build pytorch with:
```
export CMAKE_PREFIX_PATH=$LINUXBREWHOME:$LINUXBREWHOME/Cellar/cuda/10.1
export CUDA_HOME=$LINUXBREWHOME/Cellar/cuda/10.1
export BLAS=OpenBLAS
export MAX_JOBS=16
export USE_CUDA=1
export USE_CUDNN=1
export USE_NUMPY=1
export TORCH_CUDA_ARCH_LIST="6.1;7.5"
export LIBRARY_PATH=$LINUXBREWHOME/lib:$LINUXBREWHOME/Cellar/cuda/10.1/lib
export CC=$LINUXBREWHOME/bin/gcc-5
export CXX=$LINUXBREWHOME/bin/g++-5
python3 setup.py clean
python3 setup.py bdist_wheel
```
where I have all my compiler toolchain in $LINUXBREWHOME
3. I got this error:
```
/home/aznb/.linuxbrew/bin/g++-5 -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DONNX_NAMESPACE=onnx_torch -DUSE_GCC_ATOMICS=1 -D_FILE_OFFSET_BITS=64 -Dc10_EXPORTS -I../aten/src -I. -I../ -I../third_party/protobuf/src -I../cmake/../third_party/benchmark/include -Icaffe2/contrib/aten -I../third_party/onnx -Ithird_party/onnx -I../third_party/foxi -Ithird_party/foxi -I../c10/.. -isystem third_party/gloo -isystem ../cmake/../third_party/gloo -isystem ../cmake/../third_party/googletest/googlemock/include -isystem ../cmake/../third_party/googletest/googletest/include -isystem ../third_party/gemmlowp -isystem ../third_party/neon2sse -isystem ../third_party -isystem /usr/include -isystem ../cmake/../third_party/eigen -isystem /home/aznb/.linuxbrew/Cellar/python/3.7.4/include/python3.7m -isystem /home/aznb/.linuxbrew/Cellar/python/3.7.4/lib/python3.7/site-packages/numpy/core/include -isystem ../cmake/../third_party/pybind11/include -isystem /home/aznb/.linuxbrew/Cellar/open-mpi/4.0.1_1/include -isystem ../third_party/ideep/mkl-dnn/include -isystem ../third_party/ideep/include -isystem ../third_party/ideep/mkl-dnn/external/mklml_lnx_2019.0.3.20190220/include -Wno-deprecated -fvisibility-inlines-hidden -fopenmp -DUSE_FBGEMM -O2 -fPIC -Wno-narrowing -Wall -Wextra -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -O3 -fPIC -DCAFFE2_USE_GLOO -DCUDA_HAS_FP16=1 -DHAVE_GCC_GET_CPUID -DUSE_AVX -DUSE_AVX2 -DTH_HAVE_THREAD -DC10_BUILD_MAIN_LIB -fvisibility=hidden -std=gnu++11 -MD -MT c10/CMakeFiles/c10.dir/core/CPUAllocator.cpp.o -MF c10/CMakeFiles/c10.dir/core/CPUAllocator.cpp.o.d 
-o c10/CMakeFiles/c10.dir/core/CPUAllocator.cpp.o -c ../c10/core/CPUAllocator.cpp
In file included from ../c10/util/typeid.h:24:0,
from ../c10/core/CPUAllocator.cpp:2:
../c10/util/Half.h: In function 'uint32_t c10::detail::fp16_ieee_to_fp32_bits(uint16_t)':
../c10/util/Half.h:107:56: error: 'UINT32_C' was not declared in this scope
const uint32_t sign = w & UINT32_C(0x80000000);
^
../c10/util/Half.h:139:72: error: 'INT32_C' was not declared in this scope
((int32_t)(nonsign + 0x04000000) >> 8) & INT32_C(0x7F800000);
^
../c10/util/Half.h: In function 'float c10::detail::fp16_ieee_to_fp32_value(uint16_t)':
../c10/util/Half.h:198:56: error: 'UINT32_C' was not declared in this scope
const uint32_t sign = w & UINT32_C(0x80000000);
```
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
## Expected behavior
pytorch build successfully
## Environment
- PyTorch Version (e.g., 1.0): 1.1.0
- OS (e.g., Linux): centos 6.9
- How you installed PyTorch (`conda`, `pip`, source): source
- Build command you used (if compiling from source): see above
- Python version: 3.7.4
- CUDA/cuDNN version: 10.1/7.6.2
- GPU models and configuration: GTX 1080
- Any other relevant information:
## Additional context
gcc has its definition for INT32_C in stdint-gcc.h. If I add #include <stdint-gcc.h> to c10/util/Half.h then it compiles fine | module: build,triaged | medium | Critical |
473,677,817 | rust | "Variable is assigned to, but never used" not thrown | ```rust
fn main() {
let mut error = 0;
for i in 0..3 {
error += i;
}
}
```
This ([playground](https://play.integer32.com/?version=stable&mode=debug&edition=2018&gist=58536d969b9dce47854128a31f47454b)) correctly throws the warning 'variable `error` is assigned to, but never used'.
But _this_ ([playground](https://play.integer32.com/?version=stable&mode=debug&edition=2018&gist=89507f1543cedcc9b452ad430eab5ca8)) **doesn't**
```rust
fn main() {
let mut error = 0;
for i in 0..3 {
error = error + i;
}
}
```
[in 1.36, beta, nightly; 2015,2018]
---
I hope this isn't a duplicate.
Found by @ColinPitrat in #63046 :wink: | A-lints,T-compiler,C-bug | low | Critical |
473,706,310 | go | runtime: use PAGE_NOACCESS in sysReserve | Windows version of runtime.sysReserve calls VirtualAlloc with MEM_RESERVE and PAGE_READWRITE, but it should use PAGE_NOACCESS when using MEM_RESERVE.
According to https://devblogs.microsoft.com/oldnewthing/20171227-00/?p=97656
> Why do you have to pass a valid value even if the system doesn’t use it?
>
> This is an artifact of how the front-end parameter validation is done. The VirtualAlloc function does parameter validation by checking each parameter individually to confirm that the value is among the valid values.
and in particular
> Bonus chatter: You would think that the flProtect would not be used when reserving address space with MEM_RESERVE, but you’d be wrong. If reserving regular address space, then the protection should be PAGE_NOACCESS.
Someone would have to try and see what effect this change has on memory used by Go executables before making this change. Maybe by using the vmmap or rammap tools.
I am creating this issue before I forget.
/cc @aclements and @zx2c4 because you might be interested.
Alex | NeedsInvestigation | low | Minor |
473,707,448 | electron | Unable to set default filter to 'All Files (`*.*`)' in Windows save file dialog | <!-- As an open source project with a dedicated but small maintainer team, it can sometimes take a long time for issues to be addressed so please be patient and we will get back to you as soon as we can.
-->
### Preflight Checklist
<!-- Please ensure you've completed the following steps by replacing [ ] with [x]-->
* [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/master/CONTRIBUTING.md) for this project.
* [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/master/CODE_OF_CONDUCT.md) that this project adheres to.
* [x] I have searched the issue tracker for a feature request that matches the one I want to file, without success.
### Problem Description
It is not possible to set the default filter on a save file dialog in Windows to `All Files (*.*)`. This is a nuisance because often when you want to save a file with no/unknown extension, such as `.babelrc`, it appends the first filter's extension. Example: https://github.com/microsoft/vscode/issues/28425

Now, I know this is intentionally not allowed, as can be seen in [`shell/browser/ui/file_dialog_win#L206`](https://github.com/electron/electron/blob/34c4c8d5088fa183f56baea28809de6f2a427e02/shell/browser/ui/file_dialog_win.cc#L206):
```cpp
// By default, *.* will be added to the file name if file type is "*.*". In
// Electron, we disable it to make a better experience.
// ...
for (size_t i = 0; i < filterspec.size(); ++i) {
if (std::wstring(filterspec[i].pszSpec) != L"*.*") {
// SetFileTypeIndex is regarded as one-based index.
dialog->SetFileTypeIndex(i + 1);
dialog->SetDefaultExtension(filterspec[i].pszSpec);
break;
}
}
```
It basically picks the first filter as the default, **except for the All Files one** because of a supposed issue where it appends `*.*` to the file name. However, there has to be a way to do this, because other programs do (unfortunately I couldn't think of any open source one so we could look at the sources).
### Proposed Solution
No real ready solution in mind. Just gotta figure out how to allow it and avoid the issue. Maybe if instead of setting the default extension to `"*.*"` we set it `""` when the _All Files_ filter is used, we could get both things to work.
### Alternatives Considered
Verify whether the _appending `*.*` to filename_ issue actually happens. If it doesn't, we can just cut the code out.
### Additional Information
Fiddle: https://gist.github.com/e9cf0de5417180669245e79b10c2f898
| enhancement :sparkles: | low | Minor |
473,710,023 | rust | Tracking issue for RFC 2515, "Permit impl Trait in type aliases" | This is a tracking issue for the RFC "Permit impl Trait in type aliases" (rust-lang/rfcs#2515) which is implemented under the following `#![feature(..)]` gates:
- `type_alias_impl_trait`
- `impl_trait_in_assoc_type`: https://github.com/rust-lang/rust/pull/110237
### About tracking issues
Tracking issues are used to record the overall progress of implementation.
They are also uses as hubs connecting to other relevant issues, e.g., bugs or open design questions.
A tracking issue is however *not* meant for large scale discussion, questions, or bug reports about a feature.
Instead, open a dedicated issue for the specific matter and add the relevant feature gate label.
### Steps
- [x] Implement the RFC
- [ ] Adjust documentation ([see instructions on rustc-guide][doc-guide])
- [ ] Stabilization PR ([see instructions on rustc-guide][stabilization-guide])
[stabilization-guide]: https://rust-lang.github.io/rustc-guide/stabilization_guide.html#stabilization-pr
[doc-guide]: https://rust-lang.github.io/rustc-guide/stabilization_guide.html#documentation-prs
### Unresolved questions
- [x] Exactly what should count as "defining uses" for opaque types?
- [x] Should the set of "defining uses" for an opaque type in an impl be just items of the impl, or include nested items within the impl functions etc? ([see here for example](https://github.com/rust-lang/rust/pull/52650#discussion_r204893239))
- [ ] should return-position-impl-trait also start allowing nested functions and closures to affect the hidden type? -- independent question to be answered separately by the lang team. Would simplify compiler.
- [x] can we come up with consistent rules when cross-usage type inference can happen?
- ```rust
fn foo(x: bool) -> impl Debug {
if x { return vec!["hi"] }
Default::default()
}
```
compiles on stable, even though there is no obvious type for `Default::default()` to produce a value of. We combine all return sites though and compute a shared type across them, so we'll figure out a `Vec<&'static str>`
- `impl Foo` can be used for associated types that expect a type that implements `Bar`, even if `Foo` and `Bar` are entirely unrelated. The hidden type must satisfy both. See https://github.com/rust-lang/rust/pull/99860/files for examples.
- [x] impl traits in consts through const fns are allowed but shouldn't be: https://github.com/rust-lang/rust/issues/87277
| B-RFC-approved,T-lang,B-unstable,A-impl-trait,C-tracking-issue,needs-fcp,F-type_alias_impl_trait,requires-nightly,T-types,F-impl_trait_in_assoc_type | high | Critical |
473,711,570 | rust | Tracking issue for `impl Trait` in `const` and `static` items and `let` bindings | As provided for by rust-lang/rfcs#2071, this is a tracking issue for `impl Trait` in:
- `const` items
- `static` items
- `let` bindings
**Steps:**
- [x] Implement the RFC
- In `let`: https://github.com/rust-lang/rust/pull/134185
- [ ] Adjust documentation ([see instructions on rustc-guide][doc-guide])
- [ ] Stabilization PR ([see instructions on rustc-guide][stabilization-guide])
[stabilization-guide]: https://rust-lang.github.io/rustc-guide/stabilization_guide.html#stabilization-pr
[doc-guide]: https://rust-lang.github.io/rustc-guide/stabilization_guide.html#documentation-prs
**Unresolved questions:**
- [ ] Should the part about `let` bindings be recast more generally to be about *coercions*? This would allow `foo as impl Debug` and `foo: impl Debug` as well.
**Blocking issues:**
Incomplete. Search the `F-impl_trait_in_bindings` issue.
- [x] https://github.com/rust-lang/rust/issues/83021
- [ ] https://github.com/rust-lang/rust/issues/61773 | B-RFC-approved,T-lang,B-unstable,A-impl-trait,C-tracking-issue,F-impl_trait_in_bindings,requires-nightly,S-tracking-impl-incomplete | medium | Critical |
473,711,683 | rust | Meta tracking issue for `impl Trait` | This issue tracks the progress of `impl Trait` in general.
This issue is not for discussion about specific extensions to `impl Trait` and only exists to provide links to other places that track the progress of specific issues. If you wish to discuss some subject related to `impl Trait`, please find an existing appropriate issue below or create an new issue and comment here with a link to the newly created issue.
--------------------------
The `impl Trait` related issues currently on deck are as follows:
* [ ] [Label `A-impl-trait`](https://github.com/rust-lang/rust/labels/A-impl-trait)
* [ ] Permit `type Foo = impl Bar;`. https://github.com/rust-lang/rust/issues/63063
* [ ] Permit `type Foo = impl Bar;` in `trait` definitions. https://github.com/rust-lang/rust/issues/29661
* [ ] In `const` and `static` items and `let` bindings. https://github.com/rust-lang/rust/issues/63065
* [x] Member constraints in region inference: https://github.com/rust-lang/rust/issues/61997
* [ ] Existential lifetimes. https://github.com/rust-lang/rust/issues/60670
* [ ] Support lifetime elision in argument position. https://github.com/rust-lang/rust/issues/49287
* [ ] Should we allow `impl Trait` after `->` in `fn` types or parentheses sugar? https://github.com/rust-lang/rust/issues/45994
* [ ] Do we have to impose a DAG across all functions to allow for auto-safe leakage, or can we use some kind of deferral.
- Discussion: https://github.com/rust-lang/rust/pull/35091#discussion_r73738398
- Present semantics: DAG.
* [ ] Should we permit specifying types if some parameters are implicit and some are explicit? e.g., `fn foo<T>(x: impl Iterator<Item = T>>)`?
- Current behavior: An error to specify types
- Other alternatives: [treat `impl Trait` as arguments in the list, permitting migration]
* [ ] [Some concerns about nested impl Trait usage](https://github.com/rust-lang/rust/issues/34511#issuecomment-350715858)
--------------------------
Open RFCs:
None. | metabug,T-lang,T-compiler,A-impl-trait,C-tracking-issue,S-tracking-impl-incomplete | high | Critical |
473,735,870 | create-react-app | from node-10-16 lts, impossible to run npm start after npx create-react-app my_app [closed] | ### Describe the bug
from node-10-16 lts, impossible to run npm start after npx create-react-app my_app
errors are:
```
events.js:174
throw er; // Unhandled 'error' event
^
Error: spawn /usr/bin/palemoon ENOENT
at Process.ChildProcess._handle.onexit (internal/child_process.js:240:19)
at onErrorNT (internal/child_process.js:415:16)
at process._tickCallback (internal/process/next_tick.js:63:19)
Emitted 'error' event at:
at Process.ChildProcess._handle.onexit (internal/child_process.js:246:12)
at onErrorNT (internal/child_process.js:415:16)
at process._tickCallback (internal/process/next_tick.js:63:19)
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] start: `react-scripts start`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] start script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /home/jerome/.npm/_logs/2019-07-28T06_52_10_841Z-debug.log
```
After installing package-lock.json, as suggested above, the server doesn't start and there is no error message...
yarn.lock installed, but node_modules cannot be installed, with many errors again:
```
npm ERR! code ETARGET
npm ERR! notarget No matching version found for undefined@node_modules
npm ERR! notarget In most cases you or one of your dependencies are requesting
npm ERR! notarget a package version that doesn't exist.
npm ERR! A complete log of this run can be found in:
npm ERR! /home/jerome/.npm/_logs/2019-07-28T06_54_34_517Z-debug.log
```
npm version is 6.10.2
I don't want to remove my official repo packages (that is a very bad practice), and I hope it is possible to just use React with an LTS Node installation. If not, it would be impossible for me to learn and use React, due to the disregard for the LTS version concept and the unmaintainable applications that result from this crucial point.
### Environment
`npx create-react-app --info`
Environment Info:
System:
OS: Linux 5.2 Arch Linux
CPU: (8) x64 Intel(R) Core(TM) i7-9700K CPU @ 3.60GHz
Binaries:
Node: 10.16.0 - /usr/bin/node
Yarn: Not Found
npm: 6.10.2 - /usr/bin/npm
Browsers:
Chrome: Not Found
Firefox: 68.0.1
npmPackages:
react: ^16.8.6 => 16.8.6
react-dom: ^16.8.6 => 16.8.6
react-scripts: 3.0.1 => 3.0.1
npmGlobalPackages:
create-react-app: 3.0.1
### Steps to reproduce
1. npm install -g create-react-app
2. npx create-react-app my_app
3. cd my_app
4. npm start
### Expected behavior
server should run with information about server access url
### Actual behavior

| tag: enhancement | low | Critical |
473,716,688 | rust | Tracking issue for RFC 2574, "SIMD vectors in FFI" | This is a tracking issue for the RFC "SIMD vectors in FFI" (rust-lang/rfcs#2574).
The feature gate is `#![feature(simd_ffi)]`.
**Steps:**
- [ ] Implement the RFC (cc @rkruppe @gnzlbg); WIP in https://github.com/rust-lang/rust/pull/59238 and #86546.
- [ ] Adjust documentation ([see instructions on rustc-guide][doc-guide])
- [ ] Stabilization PR ([see instructions on rustc-guide][stabilization-guide])
[stabilization-guide]: https://rust-lang.github.io/rustc-guide/stabilization_guide.html#stabilization-pr
[doc-guide]: https://rust-lang.github.io/rustc-guide/stabilization_guide.html#documentation-prs
**Unresolved questions:**
- [ ] Should it be possible to use, e.g., `__m128` on C FFI when the `avx` feature is enabled? Does that change the calling convention and make doing so unsafe ? We could extend this RFC to also require that to use certain types certain features must be disabled. | A-FFI,B-RFC-approved,T-lang,T-compiler,A-SIMD,C-tracking-issue,F-simd_ffi,requires-nightly,S-tracking-unimplemented | low | Minor |
473,721,797 | rust | New trait for types which can report memory usage | Hi all,
In the course of implementing the sizeof operator in rustpython, we stumbled upon a feature we possibly lack: the ability to request the true memory usage of a struct. Meaning, the memory usage of the struct itself, as reported by `std::mem::size_of` and friends, but also the size of the dynamically allocated heap structures, such as `Vec`.
Our current approach is to dive into the inner workings of the bigint type, but this is probably undesirable, since the resulting code is unsafe.
The proposed change can be viewed here:
https://github.com/RustPython/RustPython/pull/1172/files
My request would be to have a trait in the `alloc` crate or `std::mem` module, which can be implemented by other packages that support reporting the dynamic memory used by an object.
Example of the trait:
```rust
trait DynamicSizeOf {
    /// Returns the memory used by this type, including all its nested fields
    /// which are dynamically allocated.
    fn sizeof(&self) -> usize;
}

impl DynamicSizeOf for BigInt {
    fn sizeof(&self) -> usize {
        // arbitrarily deep query on sub structures' memory usage.
    }
}
```
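To make the idea concrete, here is a runnable sketch implementing the proposed trait for `Vec<T>` (the trait is from this proposal, not std; the `Vec` impl is only an illustration and ignores allocator bookkeeping overhead):

```rust
use std::mem::size_of;

trait DynamicSizeOf {
    /// Total memory used: the value itself plus its owned heap allocations.
    fn sizeof(&self) -> usize;
}

impl<T> DynamicSizeOf for Vec<T> {
    fn sizeof(&self) -> usize {
        // The Vec header (ptr/len/cap) plus the heap buffer it owns.
        size_of::<Self>() + self.capacity() * size_of::<T>()
    }
}

fn main() {
    let v: Vec<u64> = Vec::with_capacity(4);
    println!(
        "header: {} bytes, total: {} bytes",
        size_of::<Vec<u64>>(),
        v.sizeof()
    );
}
```

This is all safe code, unlike diving into the internals of a foreign type; the trait simply delegates the accounting to whoever owns the allocation.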
Please also see the related issue here: https://github.com/rust-num/num-bigint/issues/98
Is this a good idea? Are there other options available to achieve the same behavior?
@askaholic and @cuviper what are your opinions on this topic? Do you see other ideas? | T-libs-api,C-feature-request | low | Minor |
473,723,628 | go | cmd/vet: warn about slice being compared to nil | ### What version of Go are you using (`go version`)?
go version go1.12.7 windows/amd64
### Does this issue reproduce with the latest release?
yes
### What did you do?
I reported bugs #33103, #33104 and #33105. In the changes to fix these bugs, I noticed a common pattern: a slice is checked for being nil instead of checking its length.
I had thought that `len(slice) == 0` would be the canonical way of testing a slice for emptiness, and I was surprised to find nothing about this topic in [Effective Go](https://golang.org/doc/effective_go.html).
I was also surprised that this bug has occurred in go/printer, which is written by the core go team. Is there a specific reason for using the nil check instead of the len check?
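For illustration, a minimal sketch of the pattern in question (hypothetical helper names, not code from go/printer): the nil comparison misses slices that are non-nil but empty, while the `len` check handles every case.

```go
package main

import "fmt"

// isEmptyNilCheck mimics the suspicious pattern: a nil comparison where a
// length check was intended. It misses slices that are non-nil but empty.
func isEmptyNilCheck(s []int) bool { return s == nil }

// isEmptyLenCheck is the canonical emptiness test.
func isEmptyLenCheck(s []int) bool { return len(s) == 0 }

func main() {
	empty := []int{} // non-nil, length 0
	fmt.Println(isEmptyNilCheck(empty)) // false: the nil check misses this case
	fmt.Println(isEmptyLenCheck(empty)) // true
	fmt.Println(isEmptyLenCheck(nil))   // true: len of a nil slice is 0
}
```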
To prevent this kind of bug in the future, there should be some tool that warns about this situation. My first idea was to integrate this check into vet. Since vet _reports suspicious constructs_, this seems to fit perfectly. | NeedsDecision,Analysis | low | Critical |
473,735,870 | rust | Rustdoc should index "Methods from Deref" in search results | Rustdoc should index "Methods from Deref" in search results, and mark them somehow. So people be able to find something like `Vec::last` during search. | T-rustdoc,C-enhancement,A-rustdoc-search | low | Minor |
473,750,474 | terminal | Screen saver wakeup incomplete | When focus is in the Terminal and the screen saver kicks in, the screen turns black. Then when I move the mouse, the screen shows and immediately goes black again. This continues until I move the mouse pointer outside of the Terminal window or click the mouse. This does not happen when focus is outside the Terminal window when the screen saver kicks in; in that case wakeup works as expected.
Edit: I incorrectly used the term "screen saver". This is related to the power saver "screen off" state. Fully reproducible. The screen must be allowed to stay in the off state for a few seconds to repro.
Windows 18941.rs_prerelease.190713-1700.
Dell XPS 13 9343 | Help Wanted,Area-Rendering,Issue-Bug,Product-Terminal | low | Minor |
473,760,888 | pytorch | upcoming PEP 554: how much effort we need to support sub-interpreter | ## 🚀 Feature
Support Python sub-interpreters while maintaining all state of the torch library.
## Motivation
As #10950 demonstrates, the current ``torch`` library cannot live in multiple sub-interpreters simultaneously within the same process. But we do need to run Python code on multiple "threads" at the same time, for the very reasons why ``torch`` introduced ``torch.multiprocessing`` and ``DistributedDataParallel`` (the single-node scenario). As [PEP 554](https://www.python.org/dev/peps/pep-0554/) was proposed back in 2017 and may be available by 2019 or 2020, I think it is necessary to make use of it because:
- It is easier to sharing data between interpreters than between processes
- It will reduce GPU memory overhead (every subprocess consumes at least 400~500MB of GPU memory)
- It can help avoid relatively complex process management problems
And between multi-interpreter and multi-process, there is almost no difference in user coding experience and front-end design; the changes will be made behind the scenes.
## Pitch
I think work needs to be done on the following aspects:
- Change any global state that should be bound to an interpreter into a per-interpreter state set (the ``detach`` method mentioned in #10950, for example).
  ``Tensor`` lifecycle management may not be a good example, because it is also a choice that ``Tensor`` can be shared across interpreters.
- Prevent re-initialization and double finalization for state that is indeed global (CUDA initialization, for example).
- Create interface and infrastructure for controlling communication and sharing ``Tensor`` between interpreters.
- Deprecate ``torch.multiprocessing`` module
| feature,triaged | low | Major |
473,762,264 | rust | Tracking issue for `const fn` `type_name` | This is a tracking issue for making and stabilizing `type_name` as `const fn`. It is not clear whether this is sound. Needs some T-lang discussion probably, too.
Steps needed:
* [x] Implementation (essentially add `const` and `#[rustc_const_unstable(feature = "const_type_name")]` to the function and add tests showing that it works at compile-time.
* [x] fix https://github.com/rust-lang/rust/issues/94187
* [ ] check if https://github.com/rust-lang/rust/issues/97156 replicates for `type_name`, too
* [ ] Stabilization
* Note that stabilization isn't happening any time soon. This function can change its output between rustc compilations and allows breaking referential transparency. Doing so at compile-time has not been discussed fully and has various points that are problematic.
| T-lang,T-libs-api,B-unstable,C-tracking-issue,A-const-eval,requires-nightly,Libs-Tracked,Libs-Small,S-tracking-needs-summary | medium | Critical |
473,789,700 | godot | Removed signals are still counted by the warning system | **Godot version:**
becbb7b
**Issue description:**
If you remove a user-made signal, the warning system will act like it still exists and complain about missing receiver methods for the previous connections.
**Steps to reproduce:**
1. Make a signal (`signal signal_name`).
2. Connect the signal to anything.
3. Remove the signal creation line.
4. Remove the receiver method. You will see that a warning will appear about it missing.
CC @Paulb23 | bug,topic:editor,confirmed | low | Minor |
473,806,663 | angular | LocationUpgradeModule throws Unhandled Navigation Error on initialisation when useHash is set to true |
# 🐞 bug report
### Affected Package
The issue is caused by package @angular/common/upgrade
### Is this a regression?
No, the feature on which this was built was introduced in Angular 8
### Description
When `useHash` is set to `true` using the newly introduced `LocationUpgradeModule`, Angular routing fails to initialise. This appears to be caused by the private `$$parse` method in `location_shim.ts`, which is being called with URLs containing an unexpected `#` prefix (the current code only checks for a `/` prefix).
## 🔬 Minimal Reproduction
[https://angular-issue-repro2-3bkm6g.stackblitz.io](https://angular-issue-repro2-3bkm6g.stackblitz.io)
## 🔥 Exception or Error
The initial console error is:
<pre><code>
Unhandled Navigation Error:
</code></pre>
Using the debugger to drill down into the code to reveal the actual error:
<pre><code>
Invalid url "#/", missing path prefix "https://angular-issue-repro2-3bkm6g.stackblitz.io/".
</code></pre>
## 🌍 Your Environment
**Angular Version:**
<pre><code>
Angular CLI: 8.1.2
Node: 10.15.0
OS: darwin x64
Angular: 8.1.3
... common, compiler, compiler-cli, core, language-service
... platform-browser, platform-browser-dynamic, router, upgrade
Package Version
-----------------------------------------------------------
@angular-devkit/architect 0.801.2
@angular-devkit/build-angular 0.801.2
@angular-devkit/build-optimizer 0.801.2
@angular-devkit/build-webpack 0.801.2
@angular-devkit/core 8.1.2
@angular-devkit/schematics 8.1.2
@angular/cli 8.1.2
@ngtools/webpack 8.1.2
@schematics/angular 8.1.2
@schematics/update 0.801.2
rxjs 6.4.0
typescript 3.4.5
webpack 4.35.2
</code></pre>
**Anything else relevant?**
| type: bug/fix,area: common,freq1: low,area: upgrade,state: confirmed,P3 | low | Critical |
473,846,607 | go | net/http: document r.Context().Done() behaviour on cancelled POST request |
### What version of Go are you using (`go version`)?
<pre>
$ go version
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
</pre></details>
### What did you do?
### What did you expect to see?
### What did you see instead?
---
Recently I found [this question on stack overflow](https://stackoverflow.com/questions/57246852); the OP asked why the `r.Context().Done()` channel does not receive any data on a cancelled `POST` request, while it does receive data on a cancelled `GET` request.
I tried to find the relevant explanation in the Go docs, but I found nothing. Is this not documented? | Documentation,NeedsInvestigation | low | Critical |
473,902,136 | youtube-dl | camwhoreshd.com |
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2019.07.27. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that site you are requesting is not dedicated to copyright infringement, see https://yt-dl.org/copyright-infringement. youtube-dl does not support such sites. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
- Search the bugtracker for similar site support requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Finally, put x into all relevant boxes (like this [x])
-->
- [x] I'm reporting a new site support request
- [x] I've verified that I'm running youtube-dl version **2019.07.27**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that none of provided URLs violate any copyrights
- [x] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
http://www.camwhoreshd.com/videos/431328/always-late-she-sayd-just-two-minutes-before-dinner-with-friends-7fe334c374e0e886/
## Description
| site-support-request | low | Critical |
473,933,981 | vue | Add JSDoc to types | ### What problem does this feature solve?
This will improve the developer experience when using Vue in editors such as VSCode.
Currently, the developer must look up the default value in the [documentation](https://vuejs.org/v2/guide/components-props.html#Prop-Validation), and even there it is not clear without trying it out.
### What does the proposed API look like?
One example:
https://github.com/vuejs/vue/blob/b7c2d9366cf731a1551286b8ac712e6e0905070e/types/options.d.ts#L155
This line could get a JSDoc like:
```
Indicates whether the property must be set or is optional. Default is `false`.
```
### Example Result

<!-- generated by vue-issues. DO NOT REMOVE --> | improvement | low | Major |
473,948,024 | flutter | Roku support |
## Use case
Running an application in Android TV, Apple TV and Roku from the same codebase.
## Proposal
Flutter should be able to run in Roku also using the same codebase.
| c: new feature,engine,dependency: dart,P3,team-engine,triaged-engine | high | Critical |
473,951,008 | flutter | Null pointer in Overlay when visiting ancestors | I was poking around `BuildContext` APIs and tried to invoke `visitAncestorElements()` all the way up the tree. I invoked `visitAncestorElements()` from my widget's `build()` method. I ran into a null pointer error for `_offstage` in `Overlay` at:
https://github.com/flutter/flutter/blob/a3cbe2535373d7fd3c3665f65b4af0c97e565561/packages/flutter/lib/src/widgets/overlay.dart#L566
The error was:
```
flutter: ══╡ EXCEPTION CAUGHT BY WIDGETS LIBRARY ╞═══════════════════════════════════════════════════════════
flutter: The following NoSuchMethodError was thrown building MyDescendant6(dirty):
flutter: The getter 'iterator' was called on null.
flutter: Receiver: null
flutter: Tried calling: iterator
flutter:
flutter: User-created ancestor of the error-causing widget was:
flutter: Padding
flutter: file:///Users/[redacted]/buildcontext_ancestors.dart:177:18
flutter:
flutter: When the exception was thrown, this was the stack:
flutter: #0 Object.noSuchMethod (dart:core-patch/object_patch.dart:50:5)
flutter: #1 _TheatreElement.visitChildren (package:flutter/src/widgets/overlay.dart:566:27)
...
```
It's possible that this error was the result of me mis-using the API, but in that case the API docs should probably be updated with the limitation, and maybe an appropriate error should be thrown when this happens.
Flutter Doctor:
```
• Flutter version 1.8.2-pre.216 at [redacted]
• Framework revision 24f483cf25 (5 hours ago), 2019-07-28 21:44:21 -0700
• Engine revision 38ac5f30a7
• Dart version 2.5.0 (build 2.5.0-dev.1.0 0ca1582afd)
``` | team,framework,P2,team-framework,triaged-framework | low | Critical |
474,058,062 | neovim | :lua io.write() default input/output |
- `nvim --version`: NVIM v0.4.0-1447-gfe2ada737
- `vim -u DEFAULTS` (version: ) behaves differently? yes
- Operating system/version: macos 10.14
- Terminal name/version: mac terminal
### Steps to reproduce using `nvim -u NORC`
1. `nvim -u NONE`
2. `:lua io.write('aaaa') `
Show as follow:

### Actual behaviour
The command line gets messed up.
### Expected behaviour
| io,lua | low | Major |
474,060,511 | godot | StreamPeerBuffer appears empty after writing | **Godot version:**
3.1.1.stable.official
**OS/device including version:**
Ubuntu 18.04.2 LTS
**Issue description:**
After writing to `b: StreamPeerBuffer`, `b.get_available_bytes()` returns zero and no data can be extracted. After assigning `b.data_array = b.data_array`, `b.get_available_bytes()` returns the correct value and data may be extracted as expected.
**Steps to reproduce:**
```coffeescript
var b := StreamPeerBuffer.new()
b.put_8(123)
print(b.get_available_bytes()) # 0
print(b.get_8()) # 0
b.data_array = b.data_array
print(b.get_available_bytes()) # 1
print(b.get_8()) # 123
```
**Minimal reproduction project:**
[30924.zip](https://github.com/godotengine/godot/files/3442654/30924.zip)
| enhancement,documentation | low | Minor |
474,071,342 | rust | Add an option to avoid merging of calls to panic!() | Given code like:
```rust
pub fn f(v: Option<u32>, k: Option<u32>) -> u32{
v.unwrap() + k.unwrap()
}
```
Rust generates
```asm
example::f:
push rax
test edi, edi
je .LBB0_3
test edx, edx
je .LBB0_3
mov eax, ecx
add eax, esi
pop rcx
ret
.LBB0_3:
lea rdi, [rip + .L__unnamed_1]
call qword ptr [rip + core::panicking::panic@GOTPCREL]
ud2
```
The two calls to `unwrap` are merged, and it's not possible to tell which one failed at runtime.
I requested a way to tell LLVM not to do this in https://bugs.llvm.org/show_bug.cgi?id=42783 and Reid pointed me at https://cs.chromium.org/chromium/src/base/immediate_crash.h?l=10&rcl=92d78c90964200453b80fe098b367e0838681d09 which is Chrome's attempt to achieve this.
It turns out that this approach is pretty easily adapted to Rust. Here's a quick rework of the code above showing it off:
```rust
#![feature(asm)]
pub enum FOption<T> {
Some(T),
None
}
impl<T> FOption<T> {
pub fn unwrap(self) -> T {
match self {
FOption::Some(val) => val,
FOption::None => {
unsafe { asm!("" : : : : "volatile") }
panic!("called `Option::unwrap()` on a `None` value")
}
}
}
}
pub fn f(v: FOption<u32>, k: FOption<u32>) -> u32{
v.unwrap() + k.unwrap()
}
```
which generates:
```asm
example::f:
push rax
test edi, edi
jne .LBB7_3
test edx, edx
jne .LBB7_4
mov eax, ecx
add eax, esi
pop rcx
ret
.LBB7_3:
call std::panicking::begin_panic
ud2
.LBB7_4:
call std::panicking::begin_panic
ud2
```
| T-lang,C-feature-request | low | Critical |
474,114,481 | create-react-app | Non-dead JavaScript is removed during build |
### Describe the bug
`react-scripts build` seems to be removing inline JavaScript code from index.html that should not be removed. It happens to code paths that are conditional and the condition contains a build-time '%VARIABLE%'.
Workaround: move the build-time variable outside of the condition.
### Did you try recovering your dependencies?
No, it's a newly created project.
### Which terms did you search for in User Guide?
NODE_ENV, index.html, variables
### Environment
```
System:
OS: macOS 10.14.5
CPU: (4) x64 Intel(R) Core(TM) i5-5287U CPU @ 2.90GHz
Binaries:
Node: 8.16.0 - /usr/local/bin/node
Yarn: 1.17.3 - /usr/local/bin/yarn
npm: 6.10.1-salomvary1 - /usr/local/bin/npm
Browsers:
Chrome: 75.0.3770.142
Firefox: 68.0.1
Safari: 12.1.1
npmPackages:
react: ^16.8.6 => 16.8.6
react-dom: ^16.8.6 => 16.8.6
react-scripts: 3.0.1 => 3.0.1
npmGlobalPackages:
create-react-app: 3.0.1
```
### Steps to reproduce
1. Create a new app with `create-react-app`.
2. Edit `public/index.html` and add the snippet below after `<body>`.
3. Run `npm run build`.
4. Open `build/index.html`
```
<script>
console.log('%NODE_ENV%');
var nodeEnv = '%NODE_ENV%';
if ('%NODE_ENV%' === 'production') {
console.log('This is production!');
}
if (nodeEnv === 'production') {
console.log('This is REALLY production!');
}
</script>
```
### Expected behavior
`build/index.html` should contain the minified version of the entire snippet with `%NODE_ENV%` replaced with `production`.
### Actual behavior
This is what `build/index.html` contains instead (note that the first if block is entirely missing):
```
<script type="text/javascript">
console.log("production");var nodeEnv="production";"production"===nodeEnv&&console.log("This is REALLY production!")
</script>
```
### Reproducible demo
https://github.com/salomvary/create-react-app-bug
| issue: bug | low | Critical |
474,139,918 | flutter | Feature Request for a Synchronous way to Rasterize Images / Draw Pixels to Canvas | # Use case
This feature request is about the [`Canvas` class in `engine/painting.dart`](https://github.com/flutter/engine/blob/master/lib/ui/painting.dart#L3105).
As extensively discussed in #31598, using regular canvas operations to draw pixel data is *extremely slow*.
This leaves decoding an image from our raw pixel data as the only option, e.g. using [`decodeImageFromPixels`](https://api.flutter.dev/flutter/dart-ui/decodeImageFromPixels.html). This has another problem, [pointed out by @andrewackerman in the mentioned issue](https://github.com/flutter/flutter/issues/31598#issuecomment-511510444), which is that creating an `Image` in Dart is an `async` operation.
This means that we will have to postpone drawing our raw pixel data to the next frame if we wish to use real-time operations.
## The `async` `Image` problem
`Image`s in Flutter currently have the inherent problem that rasterizing / decoding them is **always** an asynchronous operation. This is why using them for drawing pixel data does not work synchronously, but it also creates a whole nother class of problems:
Any other canvas operation in Flutter that involves `Image`s is also **asynchronous**. So not only can you not draw pixel data synchronously, but you cannot rasterize _any_ `Picture` synchronously, which is necessary for achieving various effects (e.g. [this](https://youtu.be/qlVNSMVv7Qk)). Also [Canvas.drawAtlas](https://api.flutter.dev/flutter/dart-ui/Canvas/drawAtlas.html) suffers from this problem.
### `drawPixels` proposal
Here is an example proposal ([written by @andrewackerman in #31598](https://github.com/flutter/flutter/issues/31598#issuecomment-516050204)) for directly drawing pixel data.
Note that this might not cover all use cases because often we _do_ need to have a decoded image (`dart:ui.Image`) available to us, e.g. when using [Canvas.drawAtlas](https://api.flutter.dev/flutter/dart-ui/Canvas/drawAtlas.html). It should, however, give an idea as to what we are looking for.
> [...] it should have a similar signature to `decodImageFromPixels` [sic], so something like:
```dart
class Canvas {
...
void drawPixels(
Offset c,
Uint8List bytes,
int width,
int height,
PixelFormat pixelFormat, {
int rowBytes,
int targetWidth,
int targetHeight,
}) {
assert(c != null && bytes != null && width != null && height != null && pixelFormat != null);
assert(bytes.length == width * height * 4);
if (rowBytes == null) rowBytes = width * 4;
if (targetWidth == null && targetHeight == null) {
targetWidth = width;
targetHeight = height;
} else if (targetWidth == null) {
targetWidth = (targetHeight * (width / height)).toInt();
} else if (targetHeight == null) {
targetHeight = (targetWidth * (height / width)).toInt();
}
    _drawPixels(c, bytes, width, height, pixelFormat, rowBytes, targetWidth, targetHeight);
}
}
```
<details><summary>Click to expand details</summary>
> `c` is the position that the image will be drawn at.
>
> `bytes` is the pixel data that will be drawn to the canvas, represented as byte groups of 4 bytes per pixel. The color channels will be extrapolated based on the value of `pixelFormat`.
>
> `width` and `height` are the dimensions of the image represented by the pixel data. If `width * height * 4` is not equal to `pixelData.length`, an error will be thrown.
>
> `rowBytes` is the number of bytes consumed by each row of pixels in the data buffer. If unspecified, it defaults to width multiplied by the number of bytes per pixel in the provided `format`.
>
> The `targetWidth` and `targetHeight` arguments specify the size of the output image, in image pixels. If they are not equal to the intrinsic dimensions of the image, then the image will be drawn scaled. If exactly one of these two arguments is specified, then the aspect ratio will be maintained while forcing the image to match the other given dimension. If neither is specified, then the image maintains its real size. (Debatable as to whether these parameters should be supported, as they don't really fall into the whole "just draw these pixels" mentality of this method.)
@andrewackerman assumed that the byte data will always have 32 bits per pixel, which makes sense because the only [`PixelFormat` values](https://api.flutter.dev/flutter/dart-ui/PixelFormat-class.html) that currently exist are `bgra8888` and `rgba8888`, which use 32 bits per pixel.
The [`decodeImageFromPixels` function](https://api.flutter.dev/flutter/dart-ui/decodeImageFromPixels.html) is open to any `PixelFormat` that might be supported in the future:
> `rowBytes` is the number of bytes consumed by each row of pixels in the data buffer. If unspecified, it defaults to width multiplied by the number of bytes per pixel in the provided `format`.
I think that this approach would make sense to also be used for the `Canvas` method (if it will use `PixelFormat`). Consequently, the assertion for `width * height * 4` should probably be `width * height * bytesPerPixel` and similarly, `bytes` should be pixel data, represented as byte groups of `bytesPerPixel` bytes per pixel.
</details> | c: new feature,engine,c: performance,c: proposal,P3,a: gamedev,team-engine,triaged-engine | high | Critical |
474,154,178 | go | cmd/compile: redundant moves and stack variables when function using bits.Add64 is inlined |
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.13beta1 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/jared/.cache/go-build"
GOENV="/home/jared/.config/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GONOPROXY=""
GONOSUMDB=""
GOOS="linux"
GOPATH="/home/jared/go"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/local/go"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
AR="ar"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build580535950=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
https://play.golang.org/p/PWeQpDU4ilg
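The playground snippet is not reproduced here; as an assumption based on the assembly in this report, the inlined function has roughly this shape (`zig` is a hypothetical stand-in): an add with carry via `bits.Add64` whose sum and carry are both used afterwards.

```go
package main

import (
	"fmt"
	"math/bits"
)

// zig is a hypothetical stand-in for the playground function: it doubles x
// with bits.Add64 (the ADDQ) and mixes the carry into the result (the
// SBBQ/XORQ pair in the expected assembly).
func zig(x uint64) uint64 {
	y, c := bits.Add64(x, x, 0) // c is the carry out of x + x
	return y ^ (0 - c)          // carry-dependent use of the sum
}

func main() {
	fmt.Println(zig(1))       // 2: doubling without carry
	fmt.Println(zig(1 << 63)) // sum wraps to 0, carry is set
}
```

Compiling a caller of such a function with inlining appears to reproduce the duplicated ADDQ and the stack spill of `y` shown in the listings below.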
### What did you expect to see?
No allocation of Zig()'s y variable on the stack, and no code duplication. Something along the lines of...
```
0x0089 00137 (bug.go:8) MOVQ "".x+168(SP), AX
0x0091 00145 (bug.go:8) ADDQ AX, AX
0x0094 00148 (bug.go:8) SBBQ CX, CX
0x009c 00156 (bug.go:9) XORQ AX, CX
```
### What did you see instead?
Execution of Zig()'s bits.Add64()/ADDQ twice, with unnecessary use of the temporary variable y.
```
0x005a 00090 (bug.go:8) MOVQ "".x+168(SP), DX
0x0062 00098 (bug.go:8) ADDQ DX, DX
0x0065 00101 (bug.go:8) MOVQ DX, "".y+80(SP)
...
0x0089 00137 (bug.go:8) MOVQ "".x+168(SP), AX
0x0091 00145 (bug.go:8) ADDQ AX, AX
0x0094 00148 (bug.go:8) SBBQ AX, AX
0x0097 00151 (<unknown line number>) NOP
0x0097 00151 (<unknown line number>) NOP
0x0097 00151 (bug.go:9) MOVQ "".y+80(SP), CX
0x009c 00156 (bug.go:9) XORQ AX, CX
``` | Performance,NeedsInvestigation | low | Critical |
474,159,011 | pytorch | Enable PyTorch Bfloat16 for CPU and add MKL-DNN bfloat16 optimization for Cooper Lake | Enable PyTorch Bfloat16 for CPU and add MKL-DNN bfloat16 optimization for Cooper Lake
## Motivation
Bfloat16 is a 16-bit floating point representation with the same exponent bit-width as the 32-bit floating point representation (FP32). It improves deep learning training performance by reducing both computation and memory bandwidth while keeping accuracy at the same level as FP32. It has been adopted by various deep learning hardware, and PyTorch added initial Bfloat16 support recently.
Intel’s upcoming Cooper Lake 14nm Intel Xeon® processor family will add Bfloat16 support, which provides a 2x speedup for SIMD FMA instructions and 2x performance benefits on memory access. MKL-DNN v1.0 introduced bfloat16 support, with more expected in future releases. Compared to the FP32 baseline, we project a 1.6x+ end-to-end performance benefit for a wide range of vision models, and expect benefits for speech recognition, recommendation engines, and machine translation as well.
## Pitch
Support the PyTorch Bfloat16 feature on the CPU path and optimize it for Intel Cooper Lake.
## Additional context
On the CPU path, we plan to extend Bfloat16 tensor operation support to full coverage, matching FP16. The input and output tensors are all in public format. We need to override the basic data type operations (like “+”, “-”, “*”, “/”) in the Bfloat16 class and modify the Bfloat16 tensor operations for special processing where needed (like accumulating in FP32 precision in BatchNorm). On CPUs prior to Cooper Lake, the basic Bfloat16 data operations are emulated: inputs are converted to FP32 and results are rounded back to Bfloat16 using round-to-nearest mode.
On the MKL-DNN path, the input tensor is expected to be converted to MKL-DNN blocking format, so it is represented internally as an MKL-DNN tensor. The dispatch to MKL-DNN Bfloat16 operations will happen within the MKL-DNN operations, not according to the first-level tensor type id.
On CPUs prior to Cooper Lake, users can run Bfloat16 models with this enabling effort, but may experience lower performance on Bfloat16 than the FP32 baseline for both the CPU and MKL-DNN paths. For best performance, users should run Bfloat16 models on the MKL-DNN path on Cooper Lake.
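For reference, the emulation scheme described above (convert to FP32, round the result back to Bfloat16) can be sketched in pure Python. This is only an illustration of round-to-nearest(-even) truncation of the low 16 mantissa bits, not PyTorch's actual implementation:

```python
import struct

def fp32_to_bf16_bits(x: float) -> int:
    """Round an FP32 value to the nearest Bfloat16 bit pattern
    (round-to-nearest-even on the discarded low 16 bits)."""
    bits32 = struct.unpack("<I", struct.pack("<f", x))[0]
    lsb = (bits32 >> 16) & 1      # keep-bit, used to break ties to even
    bits32 += 0x7FFF + lsb        # rounding bias
    return (bits32 >> 16) & 0xFFFF

def bf16_bits_to_fp32(b: int) -> float:
    """Widen a Bfloat16 bit pattern back to FP32 (exact, since both
    formats share the same exponent range)."""
    return struct.unpack("<f", struct.pack("<I", (b & 0xFFFF) << 16))[0]
```

For example, FP32 pi (3.14159265...) rounds to the nearest Bfloat16 value 3.140625, and an exact tie such as 1 + 2^-8 rounds down to the even neighbor 1.0.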
| module: performance,module: cpu,triaged | low | Major |
474,171,769 | godot | Loading heavy resources on demand |
**Godot version:**
3.1.1 stable
**OS/device including version:**
All
**Issue description:**
I seek the ability to not have all "heavy" resources (like models, textures etc.) loaded immediately upon loading the scene, but on demand, if necessary. Please hear my reasoning before ridiculing me :)
I've been having much fun with Godot since 2.0, experimenting and trying different things, and I finally decided to write a 3D dungeon crawler from a first-person perspective, with **randomly generated levels**. I already have a working PoC and it works very nicely. My problem lies with the randomness in relation to the objects to fill the level with. Let's say I want to generate a "cave" level. Since caves are all about spiders and such, I'd like to fill the level only with "spider" monsters. Now, I have several types of spiders, each one in a different .tscn, with a specific script, sounds, models, etc. So how do I actually find all these scenes and choose from them to place a monster?
_Attempt 1_: Master scene containing all the monsters and a script to get_monster_type. Nice idea, but guess what, all the resources will be loaded anyway with the scene even if I only want a single spider type. Can you see where I'm going with this?
_Attempt 2_: A custom node tree containing _string_ paths to .tscn files and some metadata, for example the biomes the monster appears in, etc. Works, but you need to keep your library of monsters and the "database" in sync, so it's easy to forget something (or specify a wrong path). Besides that, you will have redundancy (the database and the monster will definitely share some metadata, so you need to watch out for differences). And you lose the self-containment of a monster .tscn, which I just love (a monster is a complete component that will simply work if placed under a parent that supports it, like a level).
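For what it's worth, the "database" idea in Attempt 2 can be sketched language-agnostically (Python here; the paths, biome tags, and the loader callback standing in for Godot's load() are all made up for illustration). The key point is that only metadata lives in memory until a scene is actually requested:

```python
import random

class MonsterDB:
    """Sketch of a lazy registry: lightweight metadata up front,
    heavy resources loaded (and cached) only on demand."""
    def __init__(self):
        self._entries = []  # metadata only; no heavy resources yet
        self._cache = {}    # path -> loaded scene

    def register(self, path, biomes):
        self._entries.append({"path": path, "biomes": set(biomes)})

    def pick(self, biome, rng=random):
        matches = [e["path"] for e in self._entries if biome in e["biomes"]]
        return rng.choice(matches) if matches else None

    def load(self, path, loader):
        # 'loader' stands in for Godot's load(); invoked at most once per path
        if path not in self._cache:
            self._cache[path] = loader(path)
        return self._cache[path]
```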
_Attempt 3_: An editor plugin to scan all monster scenes before running/deploying the game, producing a metadata library with all the necessary info about monsters in a .json file. Pretty nice, but something may still break, and the editor must load everything to scan the .tscn files. And it's still a workaround.
We have this beautiful (I really mean it) editor, which makes it easy to have a game object **completely self-contained** in a .tscn, and I really don't like being forced to build workarounds around it. I was also pretty surprised that loading a PackedScene actually loads everything inside it, even before instantiating it; a lightweight PackedScene would be enough for me (my "database" node tree would reference the monsters, so it could become an Autoload without loading all monster resources).
Please comment on my plight, perhaps some of you had similar experience or a better idea? | discussion,topic:core | low | Minor |
474,182,238 | flutter | Map bounds set with incorrect zoom level | ## Steps to Reproduce
Using `google_maps_flutter`
```
google_maps_flutter:
dependency: "direct main"
description:
name: google_maps_flutter
url: "https://pub.dartlang.org"
source: hosted
version: "0.5.19+2"
```
in `onMapCreated` immediately invoke
```dart
mapController.moveCamera(
CameraUpdate.newLatLngBounds(
LatLngBounds(
southwest: const LatLng(-38.483935, 113.248673),
northeast: const LatLng(-8.982446, 153.823821),
),
10.0,
),
);
```
I would expect the map to be centered on the bounds at the maximum zoom level that still fits them. Instead, the map is zoomed out all the way.
```dart
await Future<void>.delayed(Duration(seconds: 0));
await mapController.moveCamera(
CameraUpdate.newLatLngBounds(
LatLngBounds(
southwest: const LatLng(-38.483935, 113.248673),
northeast: const LatLng(-8.982446, 153.823821),
),
10.0,
),
);
```
seems to help a bit but does not completely solve the problem.
**Edit: seems like doing the bounding in a postFrameCallback fixes the issue**
```dart
SchedulerBinding.instance.addPostFrameCallback((_) {
mapController.moveCamera(
CameraUpdate.newLatLngBounds(
LatLngBounds(
southwest: const LatLng(-38.483935, 113.248673),
northeast: const LatLng(-8.982446, 153.823821),
),
10.0,
),
)
});
```
<details>
<summary>flutter doctor -v</summary>
```console
flutter doctor -v
[✓] Flutter (Channel stable, v1.7.8+hotfix.3, on Mac OS X 10.14.5 18F132, locale en-US)
• Flutter version 1.7.8+hotfix.3 at /Users/felix/flutter
• Framework revision b712a172f9 (3 weeks ago), 2019-07-09 13:14:38 -0700
• Engine revision 54ad777fd2
• Dart version 2.4.0
[✓] Android toolchain - develop for Android devices (Android SDK version 28.0.3)
• Android SDK at /Users/felix/Library/Android/sdk
• Android NDK location not configured (optional; useful for native profiling support)
• Platform android-28, build-tools 28.0.3
• ANDROID_HOME = /Users/felix/Library/Android/sdk
• Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1343-b01)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 10.1)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Xcode 10.1, Build version 10B61
• CocoaPods version 1.7.2
[✓] iOS tools - develop for iOS devices
• ios-deploy 1.9.4
[✓] Android Studio (version 3.4)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin version 34.0.2
• Dart plugin version 183.5901
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1343-b01)
[!] IntelliJ IDEA Community Edition (version 2019.1.1)
• IntelliJ at /Applications/IntelliJ IDEA CE.app
✗ Flutter plugin not installed; this adds Flutter specific functionality.
✗ Dart plugin not installed; this adds Dart specific functionality.
• For information about installing plugins, see
https://flutter.dev/intellij-setup/#installing-the-plugins
[✓] VS Code (version 1.36.1)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.2.0
[✓] Connected device (1 available)
• iPhone XR • 0C839885-CCD4-4E6E-88CA-BCB2D68227F5 • ios • iOS 12.1 (simulator)
! Doctor found issues in 1 category.
```
</details>
| p: maps,package,customer: vroom,team-ecosystem,has reproducible steps,P2,found in release: 2.0,found in release: 2.3,triaged-ecosystem | low | Major |
474,188,400 | pytorch | Build reconfiguration should consistently honor env variables | Per discussion here: https://github.com/pytorch/pytorch/pull/23323#issuecomment-515168182
We currently have
(plainly == no additional build options passed in to setup.py)
For a clean tree:
- configuration only: `--cmake-only`
- configuration + build: invoke setup.py plainly
For a rebuild:
- reconfiguration without rebuild: `--cmake-only --cmake` (unreliable), or edit CMakeCache.txt and run cmake directly
- rebuild without reconfiguration: invoke setup.py plainly, all build options persist
- reconfiguration + rebuild: `--cmake` (unreliable), or edit CMakeCache.txt and invoke setup.py plainly
We need to make the two "unreliable" spots "reliable". | module: build,triaged | low | Minor |
474,210,490 | terminal | Don't resize the terminal buffer until dragging is released | Add a _setting_ to enable this behavior.
The user might not want to reflow the buffer in real-time as they resize the window; they might only want the resize to occur when they _finish_ resizing.
I'd love to see an E2E spec of this scenario before code starts getting written, because I feel it might be a little wonky in certain scenarios. Questions that would be good to be answered:
* How will we communicate the start of a move/size loop from WindowsTerminal down to the TerminalApp?
* How will this play with panes that aren't terminals?
* how do we handle maximize/restore?
* Should we pause rendering while this is happening? How should this _look_ to the user?
From discussion in #1465 | Issue-Feature,Help Wanted,Area-UserInterface,Product-Terminal | low | Minor |
474,228,509 | terminal | Lock and Unlock in conhost should decouple Ctrl+C dispatch and use smarter handling | The Lock and Unlock procedures in Conhost.exe are fraught with error.
For one, we rely on intricate details of how many recursive entries there are on the lock to determine when it is fully unlocked and dispatch Ctrl+C events. We're not supposed to depend on the lock count at all.
Additionally, there are multiple layers at which the locks can be processed, some of which DO dispatch the Ctrl+C events on the last unlock and some of which do NOT.
Finally, the lock/unlock procedure is bad in that it is easy to not unlock something since it's not in any sort of smart RAII-type object. This is sort of mitigated by wil::scope_exit in many places, but that function is also discouraged when you can use something better.
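A minimal sketch of the RAII shape being asked for (std::recursive_mutex stands in for conhost's console lock; the real lock and its last-unlock Ctrl+C bookkeeping are more involved):

```cpp
#include <mutex>

// The destructor guarantees the matching unlock on every exit path,
// including exceptions, so a missed Unlock() becomes impossible by
// construction rather than being patched over with scope_exit.
class ConsoleLockGuard {
public:
    explicit ConsoleLockGuard(std::recursive_mutex& m) : _m(m) { _m.lock(); }
    ~ConsoleLockGuard() { _m.unlock(); }
    ConsoleLockGuard(const ConsoleLockGuard&) = delete;
    ConsoleLockGuard& operator=(const ConsoleLockGuard&) = delete;
private:
    std::recursive_mutex& _m;
};
```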
This issue encompasses rooting around and finding a better overall way to do all of this.
It does NOT encompass more granular locks than already exist. | Product-Conhost,Area-Server,Issue-Task | low | Critical |
474,251,109 | godot | AnimationNodeStateMachinePlayback Issues/Mistakes | **Godot version:**
3.1.1
**OS/device including version:**
Windows
**Issue description:**
The state machine playback lacks methods and features, and its single method undermines state machine concepts.
Problems:
- There are no methods for consuming transitions, which forces you to fall back on the raw parent class (graph) methods and use endless conditions just to check the state of the state machine; that makes no sense to me.
- The travel() method is purely a graph method: it reaches its target state regardless of any transition, which breaks the state machine concept.
- The worst problem: the current state keeps evaluating transition conditions that should be unreachable from it.
Set a condition to true and watch it fire in any current state, forever, which again forces the use of endless extra conditions.
Reproduce:
States:
- stand (start node)
- cast (one-shot animation)
Transition between:
- stand to cast
advance condition: cast_action == true
- cast to stand
no transition
Set for true:
AnimationTree::set("path_to/cast_action", true)
Result:
The animation re-executes forever. This is wrong, because cast is a final state that plays a one-shot animation.
**My Conclusion**
It is usable only for cutscenes, with switch mode AtEnd and auto-advance enabled; otherwise you end up programming a state machine on top of the state machine.
It is a state machine without state machine features.
The documentation should carry a note warning new Godot users about the missing functionality.
**Solutions Suggested**
Correct the travel() method and the advance condition handling in transitions.
_Sorry for any mistakes on my part._
**Steps to reproduce:**
Create AnimationTree with AnimationNodeStateMachine | bug,confirmed,topic:animation | low | Major |
474,256,277 | pytorch | MultiheadAttention output changes if input order is not exactly same | ## 🐛 Bug
I have trained a transformer model and the results seem to be wrong
## To Reproduce
Steps to reproduce the behavior:
1. Train a model using the multi head attention (encoder only)
2. construct vectors x = [x1,x2] and xx = [x3,x1], where x1, x2, and x3 are three inputs
3. calculate y = transformer(x) and yy = transformer(xx)
## Expected behavior
y[0] = yy[1]
I believe the issue is due to the reshaping of q, k, v in torch\nn\modules\activation.py (lines 804-808) and, as a consequence, the softmax at line 860 in the same file.
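For context, the order-equivariance property at stake (permuting the inputs should permute the outputs) can be illustrated with a toy, pure-Python single-head self-attention. Identity Q/K/V projections and no positional encoding are assumed; this is only an illustration of the property, not PyTorch's implementation:

```python
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(tokens):
    """Toy single-head self-attention over a list of token vectors.
    With no positional encoding, outputs depend only on token values,
    so permuting the inputs permutes the outputs."""
    out = []
    for q in tokens:
        scores = [sum(a * b for a, b in zip(q, k)) for k in tokens]
        weights = softmax(scores)
        out.append([sum(w * v[d] for w, v in zip(weights, tokens))
                    for d in range(len(q))])
    return out
```

With positional information added (as a trained transformer typically has), this equality is expected to break by design, which may be worth ruling out when reproducing.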
## Environment
- PyTorch Version (e.g., 1.0): 1.1.0
- OS (e.g., Linux): windows
- How you installed PyTorch (`conda`, `pip`, source): pip
- Build command you used (if compiling from source): na
- Python version: 3.7.3
- CUDA/cuDNN version: na
- GPU models and configuration: na
- Any other relevant information:
## Additional context | needs reproduction,module: nn,triaged | low | Critical |
474,302,630 | flutter | Add API to query main Isolate's stack trace | ## Use case
To identify performance bottlenecks of real user devices in the wild, we need a release-mode Flutter app to be able to use another Isolate to sample the main Isolate's stack trace. This is an ask from "customer: countless". They currently do similar things on Android using `Looper.getMainLooper().getThread().getStackTrace();`.
## Proposal
Add a `getStackTrace` API to Dart `Isolate` class: https://github.com/dart-lang/sdk/issues/37664
We may also need to work on the Flutter side a little bit to make it convenient to create a dedicated Isolate just to sample the main Isolate's stack trace.
| c: new feature,framework,c: performance,dependency: dart,customer: countless,P3,team-framework,triaged-framework | low | Major |
474,315,076 | pytorch | Error from PyTorch when finalizing Python embedded in C++ | ## 🐛 Bug
I have a C++ application with an embedded Python session, and I've observed that importing PyTorch causes strange errors when the program ends. From what I can tell, the problem emerges because I finalize Python during the program exit, after PyTorch's internal objects have already been destroyed. PyTorch deallocates the objects' memory, but it does not update the corresponding reference counters. Thus, when Python finalizes and garbage collects, it attempts to deallocate memory that has already been freed and crashes.
If my interpretation is correct, may I suggest a few possibilities:
- Register a cleanup function with `atexit` to make sure internal Python objects are appropriately cleaned up, even if Python has not yet finalized.
- If C++ is an option, wrap the internal objects in classes with appropriate destructors.
- If this edge case is too weird and you decide not to support it, put an error message if the internal objects are destroyed before Python has been finalized.
## To Reproduce
Minimal working example:
<details> <summary> main.cpp </summary>
```c++
#include <Python.h>
/** Singleton class to manage embedded Python session. */
class session {
public:
/** Get singleton instance.
*
* Initializes an embedded Python session the first time it is
* called.
*/
static session& get() {
static session instance;
return instance;
}
~session() {
if (Py_IsInitialized()) { Py_Finalize(); }
}
private:
session() {
if (!Py_IsInitialized()) { Py_Initialize(); }
}
session(const session&) = delete;
session& operator=(const session&) = delete;
};
int main(int argc, char** argv) {
session::get(); // Start Python
#ifdef WITH_TORCH
PyImport_ImportModule("torch");
#endif
return EXIT_SUCCESS;
}
```
</details>
<details> <summary> Makefile </summary>
```makefile
# System-dependent variables
CXX ?= g++
PYTHON ?= python3
PYTHON_INC_DIR = $(shell $(PYTHON) -c "import sys; from distutils.sysconfig import get_python_inc; sys.stdout.write(get_python_inc())")
PYTHON_LIB = $(shell $(PYTHON) -c "import sys; from os.path import join; from distutils.sysconfig import get_config_var; sys.stdout.write(join(get_config_var('LIBDIR'),get_config_var('INSTSONAME')))")
# Compiler flags
CXX_FLAGS = -I$(PYTHON_INC_DIR)
LD_FLAGS = -Wl,$(PYTHON_LIB)
# ================================================
# Make rules
# ================================================
all: with_torch without_torch
with_torch: main.cpp
$(CXX) -o $@ $^ $(CXX_FLAGS) $(LD_FLAGS) -DWITH_TORCH
without_torch: main.cpp
$(CXX) -o $@ $^ $(CXX_FLAGS) $(LD_FLAGS)
%.o: %.cpp
$(CXX) -c -o $@ $< $(CXX_FLAGS)
.PHONY: clean
clean:
rm -f main with_torch without_torch *.o *.core
```
</details>
Running the makefile will generate two executables: `without_torch` and `with_torch`. Both maintain a singleton class that initializes and finalizes an embedded Python session. The only difference is that `with_torch` imports PyTorch. `without_torch` runs fine and exits cleanly. `with_torch` crashes with the following message:
<details> <summary> Error message </summary>
```
*** Error in `/g/g17/moon13/temp/pytorch_bug/with_torch': corrupted double-linked list: 0x0000000000a52a50 ***
======= Backtrace: =========
/lib64/libc.so.6(+0x7f5d4)[0x2aaaabb4f5d4]
/lib64/libc.so.6(+0x8151d)[0x2aaaabb5151d]
/usr/tce/packages/python/python-3.7.2/lib/libpython3.7m.so.1.0(+0xb5be7)[0x2aaaaad84be7]
/usr/tce/packages/python/python-3.7.2/lib/libpython3.7m.so.1.0(+0xc9e97)[0x2aaaaad98e97]
/usr/tce/packages/python/python-3.7.2/lib/libpython3.7m.so.1.0(+0xf27f6)[0x2aaaaadc17f6]
/usr/tce/packages/python/python-3.7.2/lib/libpython3.7m.so.1.0(+0x1d7cd4)[0x2aaaaaea6cd4]
/usr/tce/packages/python/python-3.7.2/lib/libpython3.7m.so.1.0(_PyGC_CollectNoFail+0x3d)[0x2aaaaaea7fcd]
/usr/tce/packages/python/python-3.7.2/lib/libpython3.7m.so.1.0(PyImport_Cleanup+0x196)[0x2aaaaae68a66]
/usr/tce/packages/python/python-3.7.2/lib/libpython3.7m.so.1.0(Py_FinalizeEx+0xe0)[0x2aaaaae77b20]
/g/g17/moon13/temp/pytorch_bug/with_torch[0x400a9d]
/lib64/libc.so.6(+0x39b69)[0x2aaaabb09b69]
/lib64/libc.so.6(+0x39bb7)[0x2aaaabb09bb7]
/lib64/libc.so.6(__libc_start_main+0xfc)[0x2aaaabaf23dc]
/g/g17/moon13/temp/pytorch_bug/with_torch[0x400929]
======= Memory map: ========
```
</details>
Running `with_torch` under valgrind (with the [appropriate suppression file](https://svn.python.org/projects/python/trunk/Misc/valgrind-python.supp)) yields many errors similar to:
<details> <summary> Error message </summary>
```
==3436== Invalid read of size 8
==3436== at 0x500F530: visit_decref (gcmodule.c:271)
==3436== by 0x4F00C8D: dict_traverse (dictobject.c:2987)
==3436== by 0x500E7D6: subtract_refs (gcmodule.c:296)
==3436== by 0x500E7D6: collect (gcmodule.c:853)
==3436== by 0x500FF24: collect_with_callback (gcmodule.c:1028)
==3436== by 0x500FF24: PyGC_Collect (gcmodule.c:1573)
==3436== by 0x4FDFB1A: Py_FinalizeEx (pylifecycle.c:1185)
==3436== by 0x400A9C: session::~session() (in /g/g17/moon13/temp/pytorch_bug/with_torch)
==3436== by 0x5C71B68: __run_exit_handlers (exit.c:77)
==3436== by 0x5C71BB6: exit (exit.c:99)
==3436== by 0x5C5A3DB: (below main) (libc-start.c:300)
==3436== Address 0x9331efd8 is 8 bytes inside a block of size 16,128 free'd
==3436== at 0x4C2B42D: operator delete(void*) (vg_replace_malloc.c:576)
==3436== by 0x5C71B68: __run_exit_handlers (exit.c:77)
==3436== by 0x5C71BB6: exit (exit.c:99)
==3436== by 0x5C5A3DB: (below main) (libc-start.c:300)
==3436== Block was alloc'd at
==3436== at 0x4C2A4A3: operator new(unsigned long) (vg_replace_malloc.c:334)
==3436== by 0xA8CAA33: torch::tensors::initialize_python_bindings() (in /usr/WS1/moon13/python/venv/pascal/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
==3436== by 0xA4FD82E: THPModule_initExtension(_object*, _object*) (in /usr/WS1/moon13/python/venv/pascal/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
==3436== by 0x4ED02B8: _PyMethodDef_RawFastCallKeywords (call.c:644)
==3436== by 0x4ED0304: _PyCFunction_FastCallKeywords (call.c:730)
==3436== by 0x4EAA613: call_function (ceval.c:4568)
==3436== by 0x4EAA613: _PyEval_EvalFrameDefault (ceval.c:3093)
==3436== by 0x4FB671D: _PyEval_EvalCodeWithName (ceval.c:3930)
==3436== by 0x4FB683D: PyEval_EvalCodeEx (ceval.c:3959)
==3436== by 0x4FB686A: PyEval_EvalCode (ceval.c:524)
==3436== by 0x4FB3F34: builtin_exec_impl (bltinmodule.c:1079)
==3436== by 0x4FB3F34: builtin_exec (bltinmodule.c.h:283)
==3436== by 0x4ED099B: _PyMethodDef_RawFastCallDict (call.c:530)
==3436== by 0x4ED0A34: _PyCFunction_FastCallDict (call.c:582)
```
</details>
## Expected behavior
`with_torch` should run and exit cleanly, like `without_torch`.
## Environment
```
~/temp/pytorch_bug $ python collect_env.py
Collecting environment information...
PyTorch version: 1.1.0
Is debug build: No
CUDA used to build PyTorch: 9.0.176
OS: Red Hat Enterprise Linux Server release 7.6 (Maipo)
GCC version: (GCC) 7.3.0
CMake version: version 3.12.1
Python version: 3.7
Is CUDA available: No
CUDA runtime version: 9.2.88
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
Versions of relevant libraries:
[pip3] numpy==1.16.3
[pip3] torch==1.1.0
[pip3] torchvision==0.2.2.post3
[conda] Could not collect
```
## Additional context
This bug is also discussed in https://github.com/LLNL/lbann/issues/1111. | triaged,module: pybind | low | Critical |
474,372,837 | godot | Stretch Mode 2D, Mouse Position scaled incorrectly | **Godot version:**
Release 3.1.1 (Latest)
**OS/device including version:**
Windows 10
**Issue description:**
My project is configured at the resolution 1600x900.
When the project is scaled up (i.e. fullscreen), the viewport size increases to 1920x1080. However, if the mouse is moved to the bottom right, the reported value is about 1600,900 (actually around half a pixel less).
Note, in case this is not a bug: the documentation of Viewport states that the mouse position is relative to the viewport...

**Steps to reproduce:**
Make a brand new project, change the stretch mode to 2D.
Write a script that prints the mouse position.
Scale the window and check what the mouse position is claimed to be.
**Minimal reproduction project:**
[Test.zip](https://github.com/godotengine/godot/files/3445248/Test.zip)
| bug,topic:core,confirmed,topic:input | low | Critical |
474,375,394 | go | x/mobile/bind/testdata/testpkg: Readasset() fails in ios |
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.12.7 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/wucanrui/Library/Caches/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/wucanrui/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/Cellar/go/1.12.7/libexec"
GOTMPDIR=""
GOTOOLDIR="/usr/local/Cellar/go/1.12.7/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/0_/j6m36l9s0psg9xm5ggt2_b680000gp/T/go-build235355928=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>
### What did you do?
gomobile bind -target=ios golang.org/x/mobile/bind/testdata/testpkg
Then call testpkgReadasset using the generated testpkg framework.
### What did you expect to see?
Everything is ok.
### What did you see instead?
Cannot find assets/hello.txt
| NeedsInvestigation,mobile | low | Critical |
474,377,045 | pytorch | model use dilated conv backward in v1.1.0 is ~3x slower than in v0.4.1 on 1080Ti | ## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
We found that the backward pass of a model with dilated convolutions is much slower in v1.1.0 than in v0.4.1 on a 1080 Ti, but comparable on a V100.
## To Reproduce
```
import torch
import torch.nn as nn
import time
class Bottleneck(nn.Module):
expansion = 4
def __init__(self, inplanes, planes, stride=1, dilation=1, downsample=None):
super(Bottleneck, self).__init__()
assert dilation == 1 or (dilation > 1 and stride == 1)
self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
self.bn1 = nn.BatchNorm2d(planes)
self.conv2 = nn.Conv2d(
planes, planes, kernel_size=3,
stride=stride, padding=dilation, dilation=dilation, bias=False)
self.bn2 = nn.BatchNorm2d(planes)
self.conv3 = nn.Conv2d(planes, planes * self.expansion, kernel_size=1, bias=False)
self.bn3 = nn.BatchNorm2d(planes * self.expansion)
self.relu = nn.ReLU(inplace=False)
self.relu_inplace = nn.ReLU(inplace=True)
self.downsample = downsample
def forward(self, x):
residual = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu_inplace(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.relu_inplace(out)
out = self.conv3(out)
out = self.bn3(out)
if self.downsample is not None:
residual = self.downsample(x)
out += residual
out = self.relu_inplace(out)
return out
class ResNet(nn.Module):
def __init__(self, block, layers, use_stem=False, use_dilation=True):
super(ResNet, self).__init__()
self.use_stem = use_stem
out_planes = [64, 64, 128, 256, 512]
if use_stem:
self.inplanes = 128
self.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(64)
self.relu1 = nn.ReLU(inplace=False)
self.conv2 = nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(64)
self.relu2 = nn.ReLU(inplace=False)
self.conv3 = nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1, bias=False)
self.bn3 = nn.BatchNorm2d(128)
self.relu3 = nn.ReLU(inplace=False)
else:
self.inplanes = out_planes[0]
self.conv1 = nn.Conv2d(3, out_planes[0], kernel_size=7, stride=2, padding=3, bias=False)
self.bn1 = nn.BatchNorm2d(out_planes[0])
self.relu1 = nn.ReLU(inplace=False)
self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1, ceil_mode=True) # change ceil_mode
self.layer1 = self._make_layer(block, out_planes[1], layers[0])
self.layer2 = self._make_layer(block, out_planes[2], layers[1], stride=2)
if use_dilation:
self.layer3 = self._make_layer(block, out_planes[3], layers[2], stride=1, dilation=2)
self.layer4 = self._make_layer(block, out_planes[4], layers[3], stride=1, dilation=4)
else:
self.layer3 = self._make_layer(block, out_planes[3], layers[2], stride=2)
self.layer4 = self._make_layer(block, out_planes[4], layers[3], stride=2)
def _make_layer(self, block, planes, blocks, stride=1, dilation=1):
downsample = None
if stride != 1 or self.inplanes != planes * block.expansion:
downsample = nn.Sequential(
nn.Conv2d(self.inplanes, planes * block.expansion, kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(planes * block.expansion),
)
layers = [block(self.inplanes, planes, stride, dilation, downsample)]
self.inplanes = planes * block.expansion
for i in range(1, blocks):
layers.append(block(self.inplanes, planes, stride=1, dilation=dilation))
return nn.Sequential(*layers)
def forward(self, x):
if self.use_stem:
x = self.relu1(self.bn1(self.conv1(x)))
x = self.relu2(self.bn2(self.conv2(x)))
x = self.relu3(self.bn3(self.conv3(x)))
else:
x = self.relu1(self.bn1(self.conv1(x)))
c1 = self.maxpool(x)
c2 = self.layer1(c1)
c3 = self.layer2(c2)
c4 = self.layer3(c3)
c5 = self.layer4(c4)
return c5
if __name__ == '__main__':
net = ResNet(Bottleneck, [3, 4, 6, 3]).cuda()
x = torch.ones(1, 3, 1024, 2048).cuda()
t0 = time.time()
out = net(x)
loss = out.sum()
torch.cuda.synchronize()
print("forward: ", time.time() - t0)
t0 = time.time()
loss.backward()
torch.cuda.synchronize()
print("backward: ", time.time() - t0)
```
v1.1.0 (cudnn_version=7501):
```
forward: 0.35411787033081055
backward: 1.4410080909729004
```
v0.4.1 (cudnn_version=7102):
```
forward: 0.37447357177734375
backward: 0.5850026607513428
```
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
## Expected behavior
the backward time should be similar.
<!-- A clear and concise description of what you expected to happen. -->
## Environment
cuda 9.0
python 3.6
## Additional context
<!-- Add any other context about the problem here. -->
| module: performance,module: cudnn,module: cuda,module: convolution,triaged | low | Critical |
474,439,626 | go | cmd/gofmt: canonicalize octal literals to the '0o' form | Go 1.13 adds the `0o` form for octal numbers: instead of writing `0644` we can write `0o644`. Gofmt is also changing to canonicalize `0O` to `0o`.
We should go one step further and have gofmt canonicalize the old style of octal literals to the `0o` style: `0644` should be changed to `0o644`.
Octal numbers are a surprising feature for programmers coming from a language that does not have them, or where they are rarely used. It's unintuitive that adding a leading zero to an integer changes its meaning. Some modern languages, such as Rust and Swift, have opted to omit the 0-prefix syntax altogether and *only* support the 0o form. Removing the 0-prefix syntax from Go would be too disruptive now, but we should strongly encourage use of the 0o form via this gofmt rule.
As a separate justification, with the new octal syntax there will be two different octal styles the programmer can choose. It's better for gofmt to remove this choice and enforce a single standard.
This will cause some churn in code bases, mostly around os.OpenFile and similar calls, but I believe the change is well worth it. | NeedsDecision | medium | Major |
474,460,618 | TypeScript | Translate JSX elements based on objects | When using a custom `jsxFactory`, it is currently only possible to use something that is a `string` or callable as the JSX element tag. I do not see a reason why there should be such limitation. In my use case I would like to use objects as "blueprints" for my jsx factory.
**TypeScript Version:** 3.5.1
**Search Terms:**
jsx jsxFactory object TS2604
**Code**
```tsx
/*
* @jsx h
*/
function h (...args: any[]): string {
return 'something';
}
function FuncComponent() {
return '';
}
const StringComponent = 'test';
const ObjectComponent = { key: 'test '};
// works
const dom1 = <FuncComponent text="hello world" />;
// works
const dom3 = <StringComponent text="hello world" />;
// doesnt work
const dom4 = <ObjectComponent text="hello world" />;
```
**Expected behavior:**
JSX element `<ObjectComponent text="hello world" />` should be translated to `h(ObjectComponent, { text: "hello world" })`.
**Actual behavior:**
Typescript throws an error: `JSX element type 'ObjectComponent' does not have any construct or call signatures.`
**Playground Link:**
https://www.typescriptlang.org/play/index.html?jsx=2#code/PQKgsAUABCUAICsDOAPKALSNiUgMwFcA7AYwBcBLAeyIygAoA6ZgQwCcBzJALihaICeAbQC6ASl5IybCkQ5QA3lihsApmQJtaAciRUAtuvSyO2gNyQAvrgiFSlGlABixEgGEDABxqqiZemKKymoaWlDa5laQJDRSUADK0iYe+t5EvmRQALzhZKpSkRAxRHEA8gBGCKrkKWkZ2YpQANaqArzaeXHalhYQkMDAUADuVGxNSNGxmQAmBgCMDQA8LqS1Pn5QeShkWQBE6KoANodUw6OH07tQwAB8vf2DI2MTRVNQs-oAzEuJMnJr6Q2Wx2+yOJzObAuV1u9wgA3eVHyGyeTUmJRmBgALEsKlUal51plgXsDsdTk8odc7pAgA
**Related Issues:**
N/A
| Suggestion,Domain: JSX/TSX,Awaiting More Feedback | low | Critical |
474,468,784 | flutter | PlatformView MotionEventDispatcher removes source from motion events. | As seen in the source code here:
https://github.com/flutter/flutter/blob/master/packages/flutter/lib/src/rendering/platform_view.dart#L682
The source (e.g. touch screen, pointer, ...) for motion events is set to 0. When implementing platform views, this prevents certain third-party tools (such as Unity) from recognising touch input events (as they don't seem to originate from device hardware).
This requires making manual changes to the touch events coming from Unity to the PlatformView (see https://github.com/snowballdigital/flutter-unity-view-widget/pull/16/files)
Is passing through the original hardware source of these events planned, since Flutter should have it available, or was hardcoding the source to 0 a deliberate design decision?
| framework,a: platform-views,c: proposal,P3,team-framework,triaged-framework | low | Minor |
474,498,786 | rust | Immutable reference can't be destructured from a mutable reference. | Just a minor papercut & inconsistency. [Playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=1ad8eba40e76d644b07dba5ca4d48f5c):
```rust
// Motivating example.
let foo = &mut [0, 3, 2, 1];
foo.sort();
for &x in foo {
println!("{}", x);
}
// Which of course doesn't work, because this doesn't work
let &_g = &mut 0;
// But this does!
fn f(_: &i32) {}
f(&mut 0);
``` | T-lang,C-feature-request | low | Critical |
474,503,660 | go | x/crypto/ssh/knownhosts: can't verify host key if host certificate is sent |
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.12.7 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/julian/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/julian/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/lib/go"
GOTMPDIR=""
GOTOOLDIR="/usr/lib/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build549038278=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
Write the following single line to a file called `known_hosts`:
<pre>
faui06.informatik.uni-erlangen.de ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC8Mt0PH9D8QxhTRy3LezMhwKSp2l5D5FsljhbDU4NhRH7PP668zdPBfOMoJXNDOHSzKcs4X2K4P5eNVyWhFU7jJwdLDCtetnZWRA984jwmBrWUGOZXpuxs72wjHVfnYp5npq3LzbYUPQ6FzdVmHsWHy/SW1OW28xNP1Z4JhHysqcnZpuVT7wOvgpQ81ltpbnqEEkMez39mwin044CfdpjDQUKSYjySsxexX9wrZqMD4CfwB0D/Y5T/sZToHO8UURnxIw08SMOxn7VwGFFv1F6AhDu9T7Pd/9aWMP0djY/WWIJwB6iAhmalPcdEA88uHBar5Zwbo6yKusmRb0JjiKkb
</pre>
Run the following Go code (in the same directory so that it finds the file or change the path accordingly):
```go
package main
import (
"golang.org/x/crypto/ssh"
"golang.org/x/crypto/ssh/knownhosts"
"log"
)
func main() {
hostKeyCallback, err := knownhosts.New("known_hosts")
if err != nil {
log.Fatalf("unable to read ssh known hosts file: %v", err)
}
config := &ssh.ClientConfig{
User: "nobody",
HostKeyCallback: hostKeyCallback,
// HostKeyAlgorithms: []string{ssh.KeyAlgoRSA},
}
client, err := ssh.Dial("tcp", "faui06.informatik.uni-erlangen.de:22", config)
if err != nil {
log.Fatalf("unable to connect: %v", err)
}
defer client.Close()
}
```
### What did you expect to see?
The host key can be verified successfully.
Because the minimal example omits any authentication, there will be an error even in the successful case, but a different one:
```
unable to connect: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none], no supported methods remain
```
Note that the connection works fine if the `HostKeyAlgorithms` line in the code is uncommented, which disables requesting host certificates. I found this behavior quite surprising and it took me some time to figure it out. It is also inconsistent with OpenSSH, which, if it receives a host certificate, seems to extract the host public key from it and also check that against the known hosts file.
### What did you see instead?
The host key can't be verified and the program exits with this error message:
```
unable to connect: ssh: handshake failed: ssh: no authorities for hostname: faui06.informatik.uni-erlangen.de:22
``` | NeedsInvestigation | low | Critical |
474,549,376 | vscode | Add a shortcut or a button to restart extension host. |
Right now, if the extension host crashes, we have to restart the editor; it would be awesome if we could add a button or shortcut to restart just the extension host. | feature-request,extension-host | medium | Critical |
474,666,671 | godot | Can't create folder in a redirected folder on Windows | I'm using Godot on a Windows computer that uses redirected folders. I have saved the godot folder on C and used the _sc_ file to save everything in that folder.
However, Godot can't seem to access anything outside of my redirected user folder, and it can't create a folder within this folder structure.

| bug,platform:windows,topic:editor | low | Minor |
474,712,396 | go | cmd/go: allow local replacements without a corresponding explicit requirement? | In [CL 153157](https://golang.org/cl/153157), I added code so that a module path that has a `replace` directive resolves to a pseudo-version of the form `vX.0.0-00010101000000-000000000000` rather than checking for upstream dependencies.
That unblocks builds, but means that the module graph may indicate a commit that doesn't actually exist. There are use-cases for such a configuration, such as testing against unpublished commits of upstream modules (see https://github.com/golang/go/issues/26241#issuecomment-409907166 and #32776), but since the commit does not actually exist, the requirement graph of the resulting module _cannot_ be used without a corresponding `replace` directive in every downstream consumer.
Since this pseudo-version has an unambiguous and very unique commit hash (all zeroes!), I propose that we special-case it (if unreplaced) and treat it as equivalent to an empty commit.
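A minimal sketch of the configuration in question, with hypothetical module paths, where the all-zeroes pseudo-version stands in for the replaced requirement:

```
module example.com/app

require example.com/upstream v0.0.0-00010101000000-000000000000

replace example.com/upstream => ../upstream
```

Under this proposal, that all-zeroes pseudo-version would be treated as an empty commit whenever the corresponding `replace` directive is absent.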
CC @jayconrod @thepudds @rogpeppe | NeedsDecision,modules | low | Minor |
474,718,229 | svelte | Debugging: Decouple debug logging from breakpoints | **Is your feature request related to a problem? Please describe.**
Currently, using `{@debug}` statements to watch variables is an exercise in frustration when using Svelte to create apps (as opposed to developing Svelte itself), because `debugger` statements are unconditionally injected along with the console output.
While it is neat that the language includes breakpoints in generated code and super useful for tracking down framework bugs, it really should be decoupled from the act of simply logging variables.
**Describe the solution you'd like**
Add a directive, say, `{@watch}`, that only logs the value of vars when they change.
The `{@debug}` directive can stay as it is to prevent unintended consequences of changing the behavior.
**Describe alternatives you've considered**
An alternative would be to expose the `dev` flag or have entire blocks of conditionally compiled code available to users; however, these alternatives really go against the grain of less boilerplate code in applications.
**How important is this feature to you?**
I personally stay away from the debug directive because of the unconditional breakpoints and end up having to `console.log()` all the time, so this would be a big help in a production environment.
**Additional context**
The idea was prompted by a user asking for help regarding framework supported debugging for components. Being able to get started quickly and ramp up to being productive is a real big sell for Svelte. A major aspect of being able to do that is to iterate quickly and debug your code.
| feature request,temp-stale | low | Critical |
474,798,695 | terminal | Divide by 0 exception in DimensionsTests::TestGetLargestConsoleWindowSize() & DimensionsTests::TestSetConsoleScreenBufferInfoEx() | _migrated from 21432343_ | Product-Conhost,Area-Server,Issue-Bug | low | Minor |
474,798,708 | terminal | Several Dbcs Tests fail when "UseDx = 1" | _migrated from 21433270_ | Product-Conhost,Area-Rendering,Issue-Bug | low | Minor |
474,830,854 | terminal | Make BeginResize a RAII thing | Uh, maybe this method should just return an RAII object that stops the resize when it is destroyed like what we envision good locks to do.
_Originally posted by @miniksa in https://github.com/microsoft/terminal/pull/2149_ | Product-Conhost,Issue-Task,Area-CodeHealth | low | Minor |
474,836,777 | flutter | Suppress Xcode warnings originating from Flutter plugins as pods in Flutter apps | ## Use case
Flutter plugins often contain code that causes Xcode warnings. These warnings show up in Flutter apps, but are not fixable by Flutter app developers. The sheer number of these warnings can hide actually fixable warnings.
## Proposal
- Suppress warnings from Flutter plugins and the Flutter engine in apps when installed from CocoaPods
- Do not suppress warnings in plugin example apps so plugin developers can see and fix their own warnings
## Note
Modules already have this behavior as of https://github.com/flutter/flutter/commit/bd47a31e32d7c65ef35524f3bf18655e513e3a5b#diff-b528f954e369c153bb31a631a8033be3R37 so host apps do not see Flutter pod warnings.
See https://github.com/flutter/flutter/issues/30110 for example of an issue where plugin warnings completely drown any actual errors. | platform-ios,tool,t: xcode,c: proposal,P3,a: plugins,team-ios,triaged-ios | medium | Critical |