id | repo | title | body | labels | priority | severity |
---|---|---|---|---|---|---|
348,500,315 | TypeScript | Support open-ended unions | ## Suggestion
Ability to discriminate between union members when not all union members are known.
## Use Cases
* redux actions: See https://github.com/Microsoft/TypeScript/issues/2214#issuecomment-352103680
* This would enable us to discriminate on Node kinds without converting Node to a huge union (which slows down compilation too much). (Also, Node is effectively open-ended because we add new kinds fairly often.)
## Examples
```ts
interface Shape {
unique kind: string;
}
interface Square extends Shape {
kind: "square";
size: number;
}
interface Circle extends Shape {
kind: "circle";
radius: number;
}
// other shapes may exist
function area(s: Shape) {
switch (s.kind) {
case "square":
return s.size * s.size;
case "circle":
return Math.PI * s.radius ** 2;
default:
return 0; // Or hand off to some other function that handles other Shape kinds
}
}
```
## Checklist
My suggestion meets these guidelines:
* [X] This wouldn't be a breaking change in existing TypeScript / JavaScript code
* [X] This wouldn't change the runtime behavior of existing JavaScript code
* [X] This could be implemented without emitting different JS based on the types of the expressions
* [X] This isn't a runtime feature (e.g. new expression-level syntax)
## Workarounds
Cast the general type to a union of known type.
```ts
function area(sIn: Shape) {
const s = sIn as Square | Circle;
...
``` | Suggestion,Needs Proposal | medium | Major |
348,501,098 | TypeScript | In JS, JS containers can't be augmented inside an IIFE | ```js
var Ns = {};
(function () {
Ns.x = 1
})()
Ns.x
```
**Expected behavior:**
No errors. Ns.x should be visible inside and outside the IIFE.
**Actual behavior:**
No errors, but completion doesn't suggest 'x' as a property of Ns, and Ns.x ends up with type any. | Bug | low | Critical |
348,524,777 | TypeScript | JS: Can't assign to superclass property | **TypeScript Version:** 3.1.0-dev.20180807
**Code**
**a.js**
```ts
class A {
constructor() {
this.initializer = 2;
}
}
class B extends A {
constructor() {
super();
this.initializer = this.initializer + 1;
}
}
```
**Expected behavior:**
No error.
**Actual behavior:**
```
src/a.js:10:9 - error TS7022: 'initializer' implicitly has type 'any' because it does not have a type annotation and is referenced directly or indirectly in its own initializer.
10 this.initializer = this.initializer + 1;
``` | Bug,Domain: JSDoc,checkJs,Domain: JavaScript | low | Critical |
348,557,538 | pytorch | [Caffe2] Which document should we follow to install caffe2? | There are at least TWO "official" documents for caffe2:
(1) https://github.com/pytorch/pytorch#from-source
(2) https://caffe2.ai/docs/getting-started.html?platform=ubuntu&configuration=compile
Which one should we follow?
As we noticed that there are lots of questions in building caffe2, it would be nice if there is an "official" way to do it other than multiple documents that are different from each other. Thanks. | caffe2 | low | Minor |
348,562,029 | pytorch | [Caffe2] which kind of database can I read from caffe2 | I know caffe2 supports lmdb, leveldb, and hdf5. But after I moved to the pytorch prebuilt by anaconda, I found all these features were turned off. Now I can't use the Save, Load, CreateDB operators with the database formats I am familiar with.
I don't want to build pytorch myself; that is too brutal just to get these features. I just want to know which database formats are supported by default. Thx | caffe2 | low | Minor |
348,590,828 | rust | docs for extend_from_slice say extend isnt specialized for slice yet | Noticed an error in the docs of Vec::extend_from_slice:
> Note that this function is same as extend except that it is specialized to work with slices instead. If and when Rust gets specialization this function will likely be deprecated (but still available).
extend is already specialized for slices | C-enhancement,P-low,A-collections,T-libs-api,A-docs | low | Critical |
348,841,096 | go | go/types: 25% of time used in map insertions | Reminder issue to investigate.
From an e-mail by @alandonovan:
FYI: a profile of a go/types app showed nodeSet.add's use of map insertions accounts for 25% of all typechecking time. Something to look at on a rainy day (if you find yourself in SF this summer).
| Performance | low | Minor |
348,843,770 | TypeScript | Refactors which in TS would generate a type annotation should generate jsdoc in JS | <!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. -->
**TypeScript Version:** 3.1.0-dev.201xxxxx
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:**
**Code**
```ts
// In TS
function x(): 42 {
return /*extract to inner function*/42;
}
// In JS
// @ts-check
/**
* @returns {42}
*/
function x() {
return /*extract to inner function*/42;
}
```
**Expected behavior:**
Under `ts-check`, a refactoring should apply appropriate jsdocs to allow js to typecheck after a refactoring if it typechecked before it.
**Actual behavior:**

| Suggestion,Committed | low | Critical |
348,871,584 | angular | (feature): Ability to pass a default value with @Optional() | ## I'm submitting a...
<pre><code>
[x] Feature request
</code></pre>
## Current behavior
There is no way to change the value `null` passed to a constructor by DI if a dependency have not been resolved.
## Expected behavior
We should have the ability to pass arbitrary default values instead of `null` when needed. This should be similar to the TS behavior: when declaring an optional method parameter, we have the ability to declare a default value we'd like to get instead of `undefined` when a caller has not provided the parameter:
`method(optionalParameter: ParameterType = ParameterType.DefaultValue): void { }`
## What is the motivation / use case for changing the behavior?
**Issue 1:**
The most obvious use case is that we can't technically have a constructor signature like this:
`constructor(@Optional() param1?: number) { }`
because with the TS strict typing enabled we must declare all possible types `param1` could get:
`constructor(@Optional() param1?: number | null) { }`
**Issue 2:**
Expanding this example further: due to the lack of this feature, we can't avoid using `null`s in our TS/JS, as recommended by many reputable references, or in case your team just prefers to do so.
In addition to declaring `null` as a possible type, we must either use `param1 == null` (notice double equality instead of the triple one) whenever we want to know whether `param1` has an actual value, or coerce `null`s into `undefined`s immediately, which comes with the inability to declare a field as a constructor parameter:
```
private param1: number | undefined;
constructor(@Optional() param1?: number | null) {
this.param1 = param1 !== null ? param1 : undefined;
}
```
instead of just awesome:
```
constructor(@Optional(undefined) private param1?: number) {
}
```
Both of those alternatives are a huge mess and a pain.
The proposal is to be able to have:
```
@Optional(undefined)
@Optional(3)
@Optional("qwery")
@Optional({ prop1: 5 })
@Optional(new SomeClass)
// etc.
```
Moreover, for consistency, since TS provides missing optional parameters as `undefined`, I'd also make a proposal for `@Optional` to provide `undefined` by default instead of `null`. But since it's a breaking change, community feedback is required for this part of the proposal.
And even more, the behavior of providing `null` instead of `undefined` is inconsistent with another aspect of Angular itself: `@Input` properties. When there is no `@Input` property binding specified in a template, the property would be `undefined` instead of `null`.
## Environment
<pre><code>
Angular version: 6.1.1 | feature,area: core,core: di,feature: under consideration | medium | Critical |
348,875,446 | go | brand: presentation theme: speaker info not centered over horizontal lines | I'm trying to use the Go presentation theme in a new presentation.
In the first slide (https://golang.org/s/presentation-theme#slide=id.g33148270ac_0_143), there is a text box for presenter information that overlays two horizontal lines.
That text box is not centered over the lines, which looks especially awkward if the text lines are close to the length of the horizontal lines:

CC @cassandraoid @spf13 | NeedsFix | low | Minor |
348,888,172 | go | brand: presentation theme: “One column text” template slides are misnamed | In the Go presentation theme, the template layout named “One column text 1 1” (https://golang.org/s/presentation-theme#slide=id.g33148270ac_0_387) contains two (asymmetric) columns of text, not one. | NeedsFix | low | Minor |
348,921,557 | angular | Why reset styles after animating in Angular? | ## I'm submitting a...
<!-- Check one of the following options with "x" -->
<pre><code>
[ ] Regression (a behavior that used to work and stopped working in a new release)
[ ] Bug report <!-- Please search GitHub for a similar issue or PR before submitting -->
[ ] Performance issue
[ ] Feature request
[x] Documentation issue or request
[ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question
[ ] Other... Please describe:
</code></pre>
## Current behavior
<!-- Describe how the issue manifests. -->
The input field appears and stretches during the animation time, and then the styles that were applied during the animation are removed (the width and border of the input element become standard, not the ones I specified in the `style({})` object).
## Expected behavior
<!-- Describe what the desired behavior would be. -->
The input box will appear and stretch to 608 pixels wide and will remain in this position until I click on the closing cross.
## Minimal reproduction of the problem with instructions
https://stackblitz.com/edit/angular-3ug5j7?file=app%2Fexample.component.ts
## What is the motivation / use case for changing the behavior?
<!-- Describe the motivation or the concrete use case. -->
I animate the search bar using Angular animation and ngIf. After the input field appears, when its input animation ends, the applied styles are reset (the width becomes standard and the border disappears). Why is this happening and how to fix it?
## Environment
<pre><code>
Angular version: 6.0.0
<!-- Check whether this is still an issue in the most recent Angular version -->
Browser:
- [x] Chrome (desktop) version 68.0.3440.84
- [ ] Chrome (Android) version XX
- [ ] Chrome (iOS) version XX
- [ ] Firefox version XX
- [ ] Safari (desktop) version XX
- [ ] Safari (iOS) version XX
- [ ] IE version XX
- [ ] Edge version XX
For Tooling issues:
- Node version: v8.11.3 <!-- run `node --version` -->
- Platform: Windows 10 64bit<!-- Mac, Linux, Windows -->
Others:
<!-- Anything else relevant? Operating system version, IDE, package manager, HTTP server, ... -->
</code></pre>
| type: bug/fix,area: animations,freq1: low,P4 | low | Critical |
348,923,586 | rust | dead_code lint not running on code generated by crate-external macros | We've hit a bit of a snag in https://github.com/rust-lang-nursery/lazy-static.rs/issues/110 where it looks like the `dead_code` lint for code generated by the `lazy_static` macro in external crates isn't being triggered and is causing one of our compilefail tests to start building instead of failing.
I'm just trying to track down what we changed here in the compiler to get some context about how we should approach the issue in `lazy_static`. We can probably live with warnings not being surfaced if there's some bigger picture around the change. I may also be totally off in the weeds on this.
**EDIT:** The change was somewhere between 12ed235ad and 6a1c0637c | A-lints | low | Minor |
349,045,998 | godot | Profiler scale graph when stopped | **Godot version:**
3.1 9bd5315
**OS/device including version:**
Windows 10 Intel HD 4600
**Issue description:**
When the graph panel in the Profiler is small and the Profiler is stopped, any attempt to change the size of this panel results in the graph being scaled instead of being extended to show more detail
**Steps to reproduce:**
1. Run Game
2. Run Profiler when graph panel is small (150px)
3. Stop profiling
4. Resize graph panel

| enhancement,topic:editor,usability | low | Minor |
349,077,553 | flutter | Why does appbar of Scaffold and bottom of Appbar require a PreferredSizeWidget | There are some parts of the framework where I understand the reasons for needing to know widget sizes (e.g. #12319). I also understand that this is difficult to do if not impossible (see #16061). Nevertheless the API requires explicit sizing information in some cases. But there are lots of use cases for widgets like the bottom part of an `AppBar` or even the icon and label of a `Tab` widget, where the height is actually variable (In my case it is the `maxExtent` property of `SliverPersistentHeader`).
For example, it might depend on the theme settings or on how many lines a `Text` widget has, etc. Or a widget might even change or animate its size, which makes it even worse. So it's not possible to provide fixed sizing information.
Why is it not possible to do it as with a `Column` or `Row`, where the widgets are just given their minimum sizes and the parent fits itself around them? | c: new feature,framework,P2,team-framework,triaged-framework | low | Major |
349,123,533 | pytorch | Sending CUDA tensor to process, and then back, does not work | **Editorial note:** See https://github.com/pytorch/pytorch/issues/10375#issuecomment-412574947 for the real problem
## Issue description
I can not get the tensor from queue.
## Code example
```
import torch
def _process(queue):
input_ = queue.get()
print('get')
queue.put(input_)
print('put')
if __name__ == '__main__':
torch.multiprocessing.set_start_method('spawn')
input_ = torch.ones(1).cuda()
queue = torch.multiprocessing.Queue()
queue.put(input_)
process = torch.multiprocessing.Process(target=_process, args=(queue,))
process.start()
process.join()
result = queue.get()
print('end')
print(result)
```
I executed this code, and only `get` and `put` were printed, never `end`. I used ctrl-c to interrupt it, and found that it blocks at `result = queue.get()`. Any idea?
## System Info
Collecting environment information...
PyTorch version: 0.4.0a0+200fb22
Is debug build: No
CUDA used to build PyTorch: Could not collect
OS: Arch Linux
GCC version: (GCC) 8.1.1 20180531
CMake version: version 3.11.4
Python version: 2.7
Is CUDA available: No
CUDA runtime version: 9.2.148
GPU models and configuration: GPU 0: GeForce GTX 1080
Nvidia driver version: 396.45
cuDNN version: Could not collect
Versions of relevant libraries:
[pip] numpy (1.14.3)
[pip] torch (0.4.1)
[pip] torchvision (0.2.1)
[conda] Could not collect
cc @ngimel | module: bootcamp,module: multiprocessing,module: cuda,triaged | low | Critical |
349,144,296 | pytorch | [Docs] Update broadcasting documentation for scalars / n-dimensional empty tensors | `Each tensor has at least one dimension.` is out of date.
`Then, for each dimension size, the resulting dimension size is the max of the sizes of x and y along that dimension.` -- this is wrong now that we have n-dimensional empty tensors (1 goes to 0).
I don't think the backwards compatibility section is relevant anymore.
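The updated rule can be sketched in plain Python (no torch dependency; `broadcast_shape` is a hypothetical helper written here for illustration, not a PyTorch API). It shows both points above: zero-dimensional (scalar) tensors broadcast, and a size-1 dimension expands to the other size even when that size is 0.

```python
def broadcast_shape(x, y):
    """Compute the broadcast result shape under the updated rule:
    a dimension of size 1 expands to the other dimension's size --
    including 0 -- so (1,) broadcast with (0,) gives (0,), not
    max(1, 0) == 1. Shapes are right-aligned; missing leading
    dimensions are treated as size 1 (so scalars broadcast too)."""
    xp = (1,) * (len(y) - len(x)) + tuple(x)  # pad x on the left
    yp = (1,) * (len(x) - len(y)) + tuple(y)  # pad y on the left
    out = []
    for a, b in zip(xp, yp):
        if a == 1:
            out.append(b)
        elif b == 1 or a == b:
            out.append(a)
        else:
            raise ValueError(f"shapes {x} and {y} are not broadcastable")
    return tuple(out)

print(broadcast_shape((1,), (0,)))         # (0,)   -- "1 goes to 0"
print(broadcast_shape((), (3, 2)))         # (3, 2) -- scalars broadcast
print(broadcast_shape((5, 1, 4), (3, 1)))  # (5, 3, 4)
```

This matches why the "max of the sizes" wording is wrong: for the pair (1, 0) the result is 0, not max(1, 0).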
cc @jlin27 @mruberry | todo,module: docs,triaged | low | Minor |
349,191,983 | go | runtime: transparent hugepages on linux/mipsle causes segfaults | Split out from #26179. This was observed on the builder through much of the Go 1.11 cycle, until transparent hugepages were disabled. We should investigate and find out whether this is a kernel bug, a Go bug, or what.
cc @aclements @vstefanovic @milanknezevic
| NeedsInvestigation,compiler/runtime | low | Critical |
349,261,489 | go | cmd/go: allow replacement modules to alias other active modules | ### What version of Go are you using (`go version`)?
Go tip:
`go version devel +f2131f6e0c Wed Aug 8 21:37:36 2018 +0000 darwin/amd64`
### Does this issue reproduce with the latest release?
Yes (it does not reproduce with `go version go1.11beta2 darwin/amd64`)
### What operating system and processor architecture are you using (`go env`)?
```
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/ikorolev/Library/Caches/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/var/folders/_b/d1934m9s587_8t_6ngv3hnc00000gp/T/tmp.cqU8g8OM/gopath"
GOPROXY=""
GORACE=""
GOROOT="/Users/ikorolev/.gvm/gos/go1.11beta3"
GOTMPDIR=""
GOTOOLDIR="/Users/ikorolev/.gvm/gos/go1.11beta3/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD="/var/folders/_b/d1934m9s587_8t_6ngv3hnc00000gp/T/tmp.cqU8g8OM/vgo-a-user/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/_b/d1934m9s587_8t_6ngv3hnc00000gp/T/go-build138999780=/tmp/go-build -gno-record-gcc-switches -fno-common"
```
### What did you do?
Sorry, no standalone reproduction, since the issue is connected with repository forking
Assume we have a repository A: https://github.com/mwf/vgo-a with the only feature:
```go
package a
var A = "A"
```
Than we have a `fork1` https://github.com/mwf/vgo-a-fork1, adding a feature B :
```go
package a
var B = "B is a new feature in a-fork1"
```
Unfortunately `fork1` will never be merged into the upstream, just because the `a` author doesn't like this feature.
It's important to note, that both `a` and `a-fork1` **don't have** `go.mod`, they are too conservative for that 😄
Then we got a happy user, using both projects in his repo.
go.mod:
```
module github.com/mwf/vgo-a-user
require (
github.com/mwf/vgo-a v0.1.0
github.com/mwf/vgo-a-fork1 v0.2.0
)
```
main.go
```go
package main
import (
"fmt"
"github.com/mwf/vgo-a"
a_fork "github.com/mwf/vgo-a-fork1"
)
func main() {
fmt.Printf("A: %q\n", a.A)
fmt.Printf("B: %q\n", a_fork.B)
}
```
All just works fine:
```
$ go run .
A: "A"
B: "B is a new feature in a-fork1"
```
Then `fork2` appears (https://github.com/mwf/vgo-a-fork2), forked from `fork1`, which fixes some bugs **both** in the upstream and in `fork1`.
We use the fork2 with `replace` in our main repo: https://github.com/mwf/vgo-a-user/blob/master/go.mod
```
module github.com/mwf/vgo-a-user
require (
github.com/mwf/vgo-a v0.1.0
github.com/mwf/vgo-a-fork1 v0.2.0
)
replace github.com/mwf/vgo-a => github.com/mwf/vgo-a-fork2 v0.2.1
replace github.com/mwf/vgo-a-fork1 => github.com/mwf/vgo-a-fork2 v0.2.1
```
### What did you expect to see?
Building this with `go1.11beta2` works just fine:
```
cd `mktemp -d`
git clone [email protected]:mwf/vgo-a-user.git .
go version && go run .
```
Output:
```
go version go1.11beta2 darwin/amd64
go: finding github.com/mwf/vgo-a-fork2 v0.2.1
go: downloading github.com/mwf/vgo-a-fork2 v0.2.1
go: finding github.com/mwf/vgo-a v0.1.0
go: finding github.com/mwf/vgo-a-fork1 v0.2.0
A: "A, fixed in a-fork2"
B: "B, fixed in a-fork2"
```
### What did you see instead?
Building with the tip (and beta3) returns an error:
```
cd `mktemp -d`
git clone [email protected]:mwf/vgo-a-user.git .
go version && go run .
```
Output:
```
go version devel +f2131f6e0c Wed Aug 8 21:37:36 2018 +0000 darwin/amd64
go: finding github.com/mwf/vgo-a-fork2 v0.2.1
go: downloading github.com/mwf/vgo-a-fork2 v0.2.1
go: github.com/mwf/[email protected] used for two different module paths (github.com/mwf/vgo-a and github.com/mwf/vgo-a-fork1)
```
### More comments
I understand that this case is very specific and arguable - this should not ever happen ideally, but we have the real case here:
https://github.com/utrack/clay/blob/master/integration/binding_with_body_and_response/go.mod
There is a little workaround: define a `go.mod` in fork2 and replace the upstream -> fork2_with_go.mod, but it's too dirty :)
```
replace github.com/mwf/vgo-a => github.com/mwf/vgo-a-fork2 v0.3.0 // version with go.mod
replace github.com/mwf/vgo-a-fork1 => github.com/mwf/vgo-a-fork2 v0.2.1 // no go.mod
```
It works with tip and beta3:
```
$ go version && go run .
go version devel +f2131f6e0c Wed Aug 8 21:37:36 2018 +0000 darwin/amd64
A: "A, fixed in a-fork2"
B: "B, fixed in a-fork2"
```
If you decide that the case is too specific and crazy, and you'd like to close as "**Won't fix**" - then I assume we should change the error string, because it's **confusing** now:
> go: github.com/mwf/**[email protected]** used for two different module paths (github.com/mwf/vgo-a and github.com/mwf/vgo-a-fork1)
It should look like this:
> go: github.com/mwf/**[email protected]** used for two different module paths (github.com/mwf/vgo-a and github.com/mwf/vgo-a-fork1)
because it's `github.com/mwf/vgo-a-fork2` who's to blame for the error. | NeedsInvestigation,modules | high | Critical |
349,262,078 | rust | _mm_cmpXstrc() not coalesced with a subsequent _mm_cmpXstrm() or _mm_cmpXstri() | If an \_mm_cmpestrc() intrinsic is used to check whether a match was found, and then an \_mm_cmpestrm() intrinsic is used to obtain the mask __m128i, the assembly that is generated by rustc contains a (V)PCMPESTRI instruction and a (V)PCMPESTRM instruction. Ideally, the assembly would have only one (V)PCMPESTRM instruction.
The situation is the same for the following combinations:
* \_mm_cmpestrc() followed by an \_mm_cmpestri() generates two (V)PCMPESTRI instructions
* \_mm_cmpistrc() followed by an \_mm_cmpistrm() generates a (V)PCMPISTRI instruction and a (V)PCMPISTRM instruction
* \_mm_cmpistrc() followed by an \_mm_cmpistri() generates two (V)PCMPISTRI instructions
Over at shepmaster/jetscii#24, I prepared a Rust test case and a functionally equivalent C++ test case: https://github.com/shepmaster/jetscii/pull/24#issuecomment-411861393
I found that GCC 8.2.0 is able to coalesce the (V)PCMPxSTRx instructions. However, LLVM 6.0.1 is not. This appears to be a known issue in LLVM: https://bugs.llvm.org/show_bug.cgi?id=37246
This issue particularly impacts efficient use of the "mask" variants (explicit and implicit length); the "index" variants can simply check whether the index is 16, which indicates no match.
I am posting this issue here so that people can more easily find this information, but I am not sure what stdsimd can do. Perhaps, though, one possibility is to provide "mask" variant combination intrinsics, say cmpestrm() and cmpistrm(), which return the CFlag *and* mask XMM register. | A-LLVM,T-compiler | low | Critical |
349,295,827 | pytorch | [feature request] batch_first option in torch.utils.data | It would be nice to have a `batch_first` option for `torch.utils.data` classes such as DataLoader. At the moment, tensors are handled in batch-first form, in contrast to the default `batch_first` option in recurrent layers, which is set to `False`.
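A pure-Python sketch of what a `batch_first` toggle could mean for collation (`collate` is a hypothetical helper, not the actual DataLoader API; with real tensors the `batch_first=False` branch would amount to calling `batch.transpose(0, 1)` in a custom `collate_fn`):

```python
def collate(batch, batch_first=True):
    """Stack per-sample sequences (lists of timesteps) into a batch.
    batch_first=True  -> layout [batch, seq, ...]
    batch_first=False -> layout [seq, batch, ...] (the RNN default)."""
    stacked = [list(sample) for sample in batch]   # [batch, seq]
    if batch_first:
        return stacked
    # Transpose the two leading dimensions: [batch, seq] -> [seq, batch].
    return [list(step) for step in zip(*stacked)]

batch = [["a1", "a2", "a3"], ["b1", "b2", "b3"]]
print(collate(batch, batch_first=True))   # [['a1','a2','a3'], ['b1','b2','b3']]
print(collate(batch, batch_first=False))  # [['a1','b1'], ['a2','b2'], ['a3','b3']]
```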
cc @SsnL @VitalyFedyunin @ejguan | module: dataloader,triaged,enhancement | low | Minor |
349,325,172 | TypeScript | Analysis support for metaprogramming decorators | ## Search Terms
decorator, metaprogramming, rename, renaming
## Suggestion
As decorators near Stage 3, use cases are coming up that would currently pose problems for TypeScript due to their dynamic nature. Any additional properties added won't be visible, renamed properties won't analyze, and custom visibility schemes like protected or friends won't be understood.
This is another area where the dynamic nature of JavaScript might stretch the TypeScript type system, but with mapped and conditional types we might be able to describe a subset of the use-cases in the type system.
I don't have a concrete solution to suggest, but a couple of requirements that a solution should meet:
- Describe the change in type of a decorated property
- Describe the change in name of a property
- Describe newly introduced properties (ie, private storage for an `@observed` property)
- Describe changes to a class (ie, added static properties)
Mapped and conditional types already support some type transformation, but not transforming the keys, or describing a transformation across the static and instance types.
Maybe there's a way to expand on the list-comprehension-like syntax for mapped types to be even more like a list comprehension and allow
- Transformations of property names
- Filtering
Here's a rough idea of a mapping that would transform a key name to a symbol, and filter by properties decorated with a specific decorator:
```ts
type Namespaced<T> = {
[Symbol(P): T[P] for P in keyof T where namespaced decorates P];
}
```
`Symbol(P)` and `where namespaced decorates P` are obviously very made up and probably not great choices.
That's not exactly right for use from individual decorators though. Since decorators take and return an extended PropertyDescriptor, maybe we can just type the decorator itself:
```ts
const namespaced = <P extends PropertyDescriptor>(descriptor: P): NamespacedPropertyDescriptor<P> => { ... }
type NamespacedPropertyDescriptor<P extends PropertyDescriptor> = {
key: Symbol(P['key']);
} extends P;
```
(I threw in the `extends` type operator because `key` must override `key` in `P`, so an intersection won't work. Some kind of object-spread-like syntax could also work)
That describes that the key of the property is transformed, but it doesn't associate it with the declaration of that symbol.
One possibility for that is for TypeScript to understand the type of the `finisher` method on the property descriptor and use that to infer changes to the class itself:
```ts
// P is the PropertyDescriptor passed to the decorator
// C is the constructor type for static properties, there would need to be a instance type
// as well
type NamespacedPropertyDescriptor<P extends PropertyDescriptor, C> = {
key: typeof C[P['key']];
finisher(klass: C): NamespacedClass<C, P['key']>;
};
type NamespacedClass<C, K> = {
readonly K: symbol;
} extends C;
```
I realize I just put a lot of hand-wavy, complicated, probably not workable ideas up there, but maybe there's some better direction for the solution (short of adding an imperative type language) those could inspire.
Another approach would be for TypeScript to intrinsically understand the operation of a few well-known decorators, and not allow their definition within the type system itself.
## Use Cases
One example of name-rewriting decorators is to prevent name collisions with string properties by renaming a property to a unique Symbol defined on the class:
```ts
abstract class A {
@namespaced f() {}
}
abstract class B {
@namespaced f() {}
}
class C implements A, B {
[A.f]() {
console.log('C[A.f]');
}
[B.f]() {
console.log('C[B.f]');
}
}
```
Class A above would be equivalent to this manual namespacing:
```ts
class A {
static f = Symbol();
[A.f]() {}
}
```
## Examples
```ts
/**
* Rewrites a string-keyed class property to use a Symbol defined on the constructor.
* @
*/
const namespaced = (descriptor: PropertyDescriptor) => {
if (typeof descriptor.key === 'symbol') {
return descriptor;
}
const symbol = Symbol(descriptor.key);
return {
...descriptor,
key: symbol,
finisher(klass) { Object.defineProperty(klass, descriptor.key, {value: symbol}); }
};
};
```
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript / JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. new expression-level syntax)
| Suggestion,Needs Proposal | low | Major |
349,334,443 | go | cmd/go: go list -e does not assign missing import errors to correct package | See TODO in testdata/script/mod_lists_bad_import.txt. | NeedsFix,GoCommand | low | Critical |
349,401,641 | pytorch | [feature request] Provide a way to redirect shared memory prefix "/torch_" | ## Issue description
Shared memory in pytorch always uses `/torch_xxx`, but there may not be enough space.
So I get an error like:
```
RuntimeError: unable to write to file </torch_76_3625483894> at /pytorch/aten/src/TH/THAllocator.c:383
```
Related code:
https://github.com/pytorch/pytorch/blob/4a6fbf03c62bcfbfdc60d955b48f5c44bfe42173/torch/csrc/generic/StorageSharing.cpp#L41
```
std::string handle = "/torch_";
```
Could you provide a way to configure or redirect the location of shared memory using an environment variable? If the environment variable is not set, "/torch_" would be used as the default.
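A minimal sketch of the requested behavior (the variable name `TORCH_SHM_PREFIX` is made up here for illustration; PyTorch does not currently read it):

```python
import os

def shm_handle(counter, default_prefix="/torch_"):
    """Build a shared-memory handle name, letting an environment
    variable override the hard-coded '/torch_' prefix.
    TORCH_SHM_PREFIX is a hypothetical name, not an existing setting."""
    prefix = os.environ.get("TORCH_SHM_PREFIX", default_prefix)
    return f"{prefix}{counter}"

print(shm_handle("76_3625483894"))  # /torch_76_3625483894
os.environ["TORCH_SHM_PREFIX"] = "/dev/shm/alt_torch_"
print(shm_handle("76_3625483894"))  # /dev/shm/alt_torch_76_3625483894
```

The C++ side would do the analogous `getenv` lookup where `std::string handle = "/torch_";` is built today.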
## System Info
- PyTorch or Caffe2: PyTorch
- How you installed PyTorch (conda, pip, source): pip
- Build command you used (if compiling from source):
- OS: CentOS 7
- PyTorch version: 0.4.1
- Python version: 2.7 & 3.6
- CUDA/cuDNN version:
- GPU models and configuration:
- GCC version (if compiling from source):
- CMake version:
- Versions of any other relevant libraries:
| triaged,enhancement | low | Critical |
349,406,117 | angular | Make it easier to use Angular elements outside of an Angular app | <!--
PLEASE HELP US PROCESS GITHUB ISSUES FASTER BY PROVIDING THE FOLLOWING INFORMATION.
ISSUES MISSING IMPORTANT INFORMATION MAY BE CLOSED WITHOUT INVESTIGATION.
-->
## I'm submitting a...
<!-- Check one of the following options with "x" -->
<pre><code>
[ ] Regression (a behavior that used to work and stopped working in a new release)
[ ] Bug report <!-- Please search GitHub for a similar issue or PR before submitting -->
[ ] Performance issue
[x] Feature request
[ ] Documentation issue or request
[ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question
[ ] Other... Please describe:
</code></pre>
## Current behavior
<!-- Describe how the issue manifests. -->
In the documentation & samples, demo projects create custom elements from existing Angular components. Packaging these elements includes main.js, which bundles the entire application with the element. How would one export specific components from inside an existing application?
## Expected behavior
<!-- Describe what the desired behavior would be. -->
Export specific components with any required infrastructure, not the whole app.
## Minimal reproduction of the problem with instructions
## What is the motivation / use case for changing the behavior?
<!-- Describe the motivation or the concrete use case. -->
Code reusability: being able to use existing specific components in other apps (for instance, in a minimal index.html loaded by an Electron app, so it has the same functionality as the Angular app, with no code duplication)
## Environment
<pre><code>
Angular version: X.Y.Z
<!-- Check whether this is still an issue in the most recent Angular version -->
6
Browser:
- [ ] Chrome (desktop) version XX
- [ ] Chrome (Android) version XX
- [ ] Chrome (iOS) version XX
- [ ] Firefox version XX
- [ ] Safari (desktop) version XX
- [ ] Safari (iOS) version XX
- [ ] IE version XX
- [ ] Edge version XX
For Tooling issues:
- Node version: XX <!-- run `node --version` -->
- Platform: <!-- Mac, Linux, Windows -->
Others:
<!-- Anything else relevant? Operating system version, IDE, package manager, HTTP server, ... -->
</code></pre>
| feature,freq2: medium,area: elements,type: confusing,state: needs more investigation,P4,feature: insufficient votes,feature: votes required,canonical | low | Critical |
349,447,889 | rust | Indirect inference for closure arguments fails depending on the contents of the closure | Minimal example I could create:
```rust
trait Callback<Args> {
fn call(&mut self, args: Args);
}
impl<F, Args> Callback<Args> for F where F: FnMut(Args) {
fn call(&mut self, args: Args) {
self(args)
}
}
#[derive(Debug)]
struct Wrapper<T>(T);
#[derive(Debug)]
enum Foo {
Bar,
Baz,
Baq
}
fn use_cb<C: Callback<Wrapper<Foo>>>(mut c: C) {
c.call(Wrapper(Foo::Baz))
}
fn main() {
// this one works
use_cb(|variant| {
println!("the variant is: {:?}", variant);
});
// this one fails to infer
use_cb(|variant| {
println!("the variant is: {:?}", variant.0);
});
// with these annotations it works
use_cb(|variant: Wrapper<_>| {
println!("the variant is: {:?}", variant.0);
});
}
```
Note that the sole difference between the first and second closures is that the first prints `variant` while the second tries to access its inner field.
It reproduces on both stable and nightly. | C-enhancement,T-compiler,A-inference | low | Critical |
349,459,199 | kubernetes | StatefulSet - can't rollback from a broken state | /kind bug
**What happened**:
I updated a StatefulSet with a non-existent Docker image. As expected, a pod of the StatefulSet is destroyed and can't be recreated (ErrImagePull). However, when I change the StatefulSet back to an existing image, the StatefulSet doesn't try to remove the broken pod and replace it with a good one. It keeps trying to pull the non-existent image.
You have to delete the broken pod manually to unblock the situation.
[Related Stackoverflow question](https://stackoverflow.com/questions/48894414/kubernetes-statefulset-pod-startup-error-recovery)
**What you expected to happen**:
When rolling back the bad config, I expected the StatefulSet to remove the broken pod and replace it with a good one.
**How to reproduce it (as minimally and precisely as possible)**:
1. Deploy the following StatefulSet:
```
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: web
spec:
selector:
matchLabels:
app: nginx # has to match .spec.template.metadata.labels
serviceName: "nginx"
replicas: 3 # by default is 1
template:
metadata:
labels:
app: nginx # has to match .spec.selector.matchLabels
spec:
terminationGracePeriodSeconds: 10
containers:
- name: nginx
image: k8s.gcr.io/nginx-slim:0.8
ports:
- containerPort: 80
name: web
volumeMounts:
- name: www
mountPath: /usr/share/nginx/html
volumeClaimTemplates:
- metadata:
name: www
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "standard"
resources:
requests:
storage: 10Gi
```
2. Once the 3 pods are running, update the StatefulSet spec and change the image to `k8s.gcr.io/nginx-slim:foobar`
3. Observe the new pod failing to pull the image.
4. Roll back the change.
5. Observe the broken pod not being deleted.
**Anything else we need to know?**:
* I observed this behaviour both on 1.8 and 1.10.
* This seems related to the discussion in #18568
**Environment**:
- Kubernetes version (use `kubectl version`):
```
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.7", GitCommit:"dd5e1a2978fd0b97d9b78e1564398aeea7e7fe92", GitTreeState:"clean", BuildDate:"2018-04-19T00:05:56Z", GoVersion:"go1.9
.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10+", GitVersion:"v1.10.5-gke.3", GitCommit:"6265b9797fc8680c8395abeab12c1e3bad14069a", GitTreeState:"clean", BuildDate:"2018-07-19T23:02:51Z", GoVersi
on:"go1.9.3b4", Compiler:"gc", Platform:"linux/amd64"}
```
- Cloud provider or hardware configuration: Google Kubernetes Engine
- OS (e.g. from /etc/os-release): COS
cc @joe-boyce | kind/bug,sig/scheduling,sig/apps,sig/architecture,lifecycle/frozen | high | Critical |
349,543,298 | go | x/tools/cmd/goimports: lost line between package and import statements in output | Please answer these questions before submitting your issue. Thanks!
### What version of Go are you using (`go version`)?
go version go1.10.3 linux/amd64
### Does this issue reproduce with the latest release?
Yes with go1.10.3 and the following commits:
```
golang.org/x/tools/imports: 8cb83b71b42ccf5fe279fa8a24a6a8f65507dc9c
golang.org/x/tools/cmd/goimports: 059bec968c61383b574810040ba9410712de36c5
```
### What operating system and processor architecture are you using (`go env`)?
```
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/legers/.cache/go-build"
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/legers/go"
GORACE=""
GOROOT="/usr/local/go"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GCCGO="/usr/bin/gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build017712154=/tmp/go-build -gno-record-gcc-switches"
```
### What did you do?
Very similar to issue #26290, but with no initial import statement.
```
$ cat x.go
package p // comment
// T does something useful.
func T() {
var _ = fmt.Printf
}
$ goimports x.go
package p // comment
import "fmt" // T does something useful.
func T() {
var _ = fmt.Printf
}
```
A blank line should have been inserted after the new import statement.
| help wanted,NeedsFix,Tools | low | Critical |
349,554,077 | flutter | Horizontal Stepper with addition and removal of Steps | Feature Request:
* Need a way to add and remove steps from Stepper
* Need a better implementation for Horizontal stepper type
The issue has been on the TODO list in `stepper.dart` for a long time. It hasn't seen any update in a while and seems to have been left behind. It would be a great leap forward if the hardcoded `stepper.dart` were made more developer-friendly.
[Take a look at the Stepper implementation in stepper.dart](https://github.com/flutter/flutter/blob/master/packages/flutter/lib/src/material/stepper.dart) | c: new feature,framework,f: material design,P3,team-design,triaged-design | low | Major |
349,590,789 | neovim | ":silent !echo 1" does not display anything (unlike Vim) | Given t.vim:
```vim
silent !echo 1 1>&2
```
`nvim -Nu t.vim -cq` will not output anything, unlike Vim.
The `:silent` is required with Vim to not display the `:!` command, and to not cause a hit-ENTER prompt.
This is used by Vader to display test results.
It makes no difference to use `--headless`.
I've been using `writefile(lines, '/dev/stderr', 'a')` as a workaround for now (via $VADER_OUTPUT_FILE), but that does not work on Windows (tried "CON" for example).
Without the `:silent` Vim displays:
```
:!echo 1 1>&2
1
Press ENTER or type command to continue
```
And Neovim (requires `--headless`; `nvim -u NONE -c ':!echo 1 >&2' -cq --headless`):
```
:!echo 1 >&2
1
```
Any suggestions for writing to the Windows console (without going through a real file that would then need to be `type`d)?
NVIM v0.3.2-77-g2b9fc9a13 | compatibility,terminal | low | Major |
349,681,335 | godot | rect_size of Control node with tool script is (0, 0) in _ready | **Godot version:**
3.0.6
**Issue description:**
The value of `rect_size` of a Control node is `(0, 0)` in `_ready` when it is called in a tool script. | enhancement,documentation | low | Major |
349,688,192 | rust | libc and test features are special-cased for feature validity checks | When implementing https://github.com/rust-lang/rust/pull/52644, where we emit errors when using unknown features, I had to special case two features: `libc` and `test`, because they weren't being picked up (and there were time constraints).
The special-casing is here:
https://github.com/rust-lang/rust/blob/0aa8d0320266b5579428312095fe49af05ada972/src/librustc/middle/stability.rs#L840-L847
- `libc` is declared unlike any other feature:
https://github.com/rust-lang/libc/blob/6bdbf5dc937459bd10e6bc4dc52b0adbd8cf4358/src/lib.rs#L92-L94
so it's not entirely surprising that the detection overlooks it, but I didn't find the root cause.
- I didn't manage to figure out what makes `test` special.
Ideally these should both not be special-cased. The relevant code for feature collection is in `src/librustc/middle/lib_features.rs`, so that's a good place to start looking. | C-cleanup | low | Critical |
349,700,392 | TypeScript | Math with Number Literal Type | <!-- 🚨 STOP 🚨 𝗦𝗧𝗢𝗣 🚨 𝑺𝑻𝑶𝑷 🚨
Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Please read the FAQ first, especially the "Common Feature Requests" section.
-->
## Search Terms
<!-- List of keywords you searched for before creating this issue. Write them down here so that others can find this suggestion more easily -->
number literal type, math
If there's already another such proposal, I apologize; I couldn't find it, and I've been asking on Gitter every now and then whether such a proposal has already been brought up.
## Suggestion
<!-- A summary of what you'd like to see added or changed -->
We can have number literals as types,
```ts
const x : 32 = 34; //Error
const y : 5 = 5; //OK
```
If possible, math should be enabled with these number literals,
```ts
const x : 32 + 5 = 38; //Error
const y : 42 + 10 = 52; //OK
const z : 10 - 22 = -12; //OK
```
And comparisons,
```ts
const x : 32 >= 3 ? true : false = true; //OK
const y : 32 >= 3 ? true : false = false; //Error
//Along with >, <, <=, ==, != maybe?
```
And ways to convert between string literals and number literals,
```ts
//Not too sure about syntax
const w : (string)32 = "hello"; //Error
const x : (string)10 = "10"; //OK
const y : (number)"45" = 54; //Error
const z : (number)"67" = 67; //OK
```
## Use Cases
One such use case (probably the most convincing one?) is using it to implement tuple operations.
Below, you'll find the types `Add<>, Subtract<>, NumberToString<>, StringToNumber<>`.
They have been implemented with... Copy-pasting code until the desired length.
Then, using those four types, the tuple operations are implemented.
While this works, having to copy-paste isn't ideal and shows there's something lacking in the language.
I've found that I've had to increase the number of copy-pastes every few days/weeks as I realize I'm working with larger and larger tuples over time.
The below implementation also ignores negative numbers for simplicity but supporting negative numbers would be good.
```ts
/*
function gen (max) {
const base = [];
const result = [];
for (let i=0; i<max; ++i) {
if (i == max-1) {
base.push(`${i}: number;`);
} else {
base.push(`${i}: ${i+1};`);
}
if (i>=2) {
result.push(`${i}: Add<Add<T, ${i-1}>, 1>;`);
}
}
const a = base.join("\n ");
const b = result.join("\n ");
return `${a}\n }[T];\n ${b}`
}
gen(100)
*/
export type Add<T extends number, U extends number> = {
[index: number]: number;
0: T;
1: {
[index: number]: number;
0: 1;
1: 2;
2: 3;
3: 4;
4: 5;
5: 6;
6: 7;
7: 8;
8: 9;
9: 10;
10: 11;
11: 12;
12: 13;
13: 14;
14: 15;
15: 16;
16: 17;
17: 18;
18: 19;
19: 20;
20: 21;
21: 22;
22: 23;
23: 24;
24: number;
}[T];
2: Add<Add<T, 1>, 1>;
3: Add<Add<T, 2>, 1>;
4: Add<Add<T, 3>, 1>;
5: Add<Add<T, 4>, 1>;
6: Add<Add<T, 5>, 1>;
7: Add<Add<T, 6>, 1>;
8: Add<Add<T, 7>, 1>;
9: Add<Add<T, 8>, 1>;
10: Add<Add<T, 9>, 1>;
11: Add<Add<T, 10>, 1>;
12: Add<Add<T, 11>, 1>;
13: Add<Add<T, 12>, 1>;
14: Add<Add<T, 13>, 1>;
15: Add<Add<T, 14>, 1>;
16: Add<Add<T, 15>, 1>;
17: Add<Add<T, 16>, 1>;
18: Add<Add<T, 17>, 1>;
19: Add<Add<T, 18>, 1>;
20: Add<Add<T, 19>, 1>;
21: Add<Add<T, 20>, 1>;
22: Add<Add<T, 21>, 1>;
23: Add<Add<T, 22>, 1>;
24: Add<Add<T, 23>, 1>;
}[U];
/*
function gen (max) {
const base = [];
const result = [];
for (let i=1; i<=max; ++i) {
base.push(`${i}: ${i-1};`);
if (i>=2) {
result.push(`${i}: Subtract<Subtract<T, ${i-1}>, 1>;`);
}
}
const a = base.join("\n ");
const b = result.join("\n ");
return `${a}\n }[T];\n ${b}`
}
gen(100)
*/
export type Subtract<T extends number, U extends number> = {
[index: number]: number;
0: T;
1: {
[index: number]: number;
0: number;
1: 0;
2: 1;
3: 2;
4: 3;
5: 4;
6: 5;
7: 6;
8: 7;
9: 8;
10: 9;
11: 10;
12: 11;
13: 12;
14: 13;
15: 14;
16: 15;
17: 16;
18: 17;
19: 18;
20: 19;
21: 20;
22: 21;
23: 22;
24: 23;
25: 24;
}[T];
2: Subtract<Subtract<T, 1>, 1>;
3: Subtract<Subtract<T, 2>, 1>;
4: Subtract<Subtract<T, 3>, 1>;
5: Subtract<Subtract<T, 4>, 1>;
6: Subtract<Subtract<T, 5>, 1>;
7: Subtract<Subtract<T, 6>, 1>;
8: Subtract<Subtract<T, 7>, 1>;
9: Subtract<Subtract<T, 8>, 1>;
10: Subtract<Subtract<T, 9>, 1>;
11: Subtract<Subtract<T, 10>, 1>;
12: Subtract<Subtract<T, 11>, 1>;
13: Subtract<Subtract<T, 12>, 1>;
14: Subtract<Subtract<T, 13>, 1>;
15: Subtract<Subtract<T, 14>, 1>;
16: Subtract<Subtract<T, 15>, 1>;
17: Subtract<Subtract<T, 16>, 1>;
18: Subtract<Subtract<T, 17>, 1>;
19: Subtract<Subtract<T, 18>, 1>;
20: Subtract<Subtract<T, 19>, 1>;
21: Subtract<Subtract<T, 20>, 1>;
22: Subtract<Subtract<T, 21>, 1>;
23: Subtract<Subtract<T, 22>, 1>;
24: Subtract<Subtract<T, 23>, 1>;
25: Subtract<Subtract<T, 24>, 1>;
}[U];
/*
function gen (max) {
const base = [];
for (let i=0; i<max; ++i) {
base.push(`${i}: "${i}";`);
}
return base.join("\n ");
}
gen(101)
*/
export type NumberToString<N extends number> = ({
0: "0";
1: "1";
2: "2";
3: "3";
4: "4";
5: "5";
6: "6";
7: "7";
8: "8";
9: "9";
10: "10";
11: "11";
12: "12";
13: "13";
14: "14";
15: "15";
16: "16";
17: "17";
18: "18";
19: "19";
20: "20";
21: "21";
22: "22";
23: "23";
24: "24";
25: "25";
26: "26";
27: "27";
28: "28";
29: "29";
30: "30";
} & { [index : number] : never })[N];
/*
function gen (max) {
const base = [];
for (let i=0; i<max; ++i) {
base.push(`"${i}": ${i};`);
}
return base.join("\n ");
}
gen(101)
*/
export type StringToNumber<S extends string> = ({
"0": 0;
"1": 1;
"2": 2;
"3": 3;
"4": 4;
"5": 5;
"6": 6;
"7": 7;
"8": 8;
"9": 9;
"10": 10;
"11": 11;
"12": 12;
"13": 13;
"14": 14;
"15": 15;
"16": 16;
"17": 17;
"18": 18;
"19": 19;
"20": 20;
"21": 21;
"22": 22;
"23": 23;
"24": 24;
"25": 25;
"26": 26;
"27": 27;
"28": 28;
"29": 29;
"30": 30;
} & { [index: string]: never })[S];
type LastIndex<ArrT extends any[]> = (
Subtract<ArrT["length"], 1>
);
type IndicesOf<ArrT> = (
Extract<
Exclude<keyof ArrT, keyof any[]>,
string
>
);
type ElementsOf<ArrT> = (
{
[index in IndicesOf<ArrT>] : ArrT[index]
}[IndicesOf<ArrT>]
);
type GtEq<X extends number, Y extends number> = (
number extends X ?
boolean :
number extends Y ?
boolean :
number extends Subtract<X, Y> ?
//Subtracted too much
false :
true
);
type KeepGtEq<X extends number, Y extends number> = (
{
[n in X]: (
true extends GtEq<n, Y>?
n : never
)
}[X]
)
type SliceImpl<ArrT extends any[], OffsetT extends number> = (
{
[index in Subtract<
KeepGtEq<
StringToNumber<IndicesOf<ArrT>>,
OffsetT
>,
OffsetT
>]: (
ArrT[Extract<
Add<index, OffsetT>,
keyof ArrT
>]
)
}
);
type Slice<ArrT extends any[], OffsetT extends number> = (
SliceImpl<ArrT, OffsetT> &
ElementsOf<SliceImpl<ArrT, OffsetT>>[] &
{ length : Subtract<ArrT["length"], OffsetT> }
);
declare const sliced0: Slice<["x", "y", "z"], 0>;
const sliced0Assignment: ["x", "y", "z"] = sliced0; //OK
declare const sliced1: Slice<["x", "y", "z"], 1>;
const sliced1Assignment: ["y", "z"] = sliced1; //OK
declare const sliced2: Slice<["x", "y", "z"], 2>;
const sliced2Assignment: ["z"] = sliced2; //OK
declare const sliced3: Slice<["x", "y", "z"], 3>;
const sliced3Assignment: [] = sliced3; //OK
//Pop Front
type PopFrontImpl<ArrT extends any[]> = (
{
[index in Exclude<
IndicesOf<ArrT>,
NumberToString<LastIndex<ArrT>>
>]: (
ArrT[Extract<
Add<StringToNumber<index>, 1>,
keyof ArrT
>]
)
}
);
type PopFront<ArrT extends any[]> = (
PopFrontImpl<ArrT> &
ElementsOf<PopFrontImpl<ArrT>>[] &
{ length: Subtract<ArrT["length"], 1> }
);
//Kind of like Slice<["x", "y", "z"], 1>
declare const popped: PopFront<["x", "y", "z"]>;
const poppedAssignment: ["y", "z"] = popped; //OK
//Concat
type ConcatImpl<ArrT extends any[], ArrU extends any[]> = (
{
[index in IndicesOf<ArrT>] : ArrT[index]
} &
{
[index in NumberToString<Add<
StringToNumber<IndicesOf<ArrU>>,
ArrT["length"]
>>]: (
ArrU[Subtract<index, ArrT["length"]>]
)
}
);
type Concat<ArrT extends any[], ArrU extends any[]> = (
ConcatImpl<ArrT, ArrU> &
ElementsOf<ConcatImpl<ArrT, ArrU>>[] &
{ length : Add<ArrT["length"], ArrU["length"]> }
);
declare const concat0: Concat<[], ["x", "y"]>;
const concat0Assignment: ["x", "y"] = concat0;
declare const concat1: Concat<[], ["x"]>;
const concat1Assignment: ["x"] = concat1;
declare const concat2: Concat<[], []>;
const concat2Assignment: [] = concat2;
declare const concat3: Concat<["a"], ["x"]>;
const concat3Assignment: ["a", "x"] = concat3;
declare const concat4: Concat<["a"], []>;
const concat4Assignment: ["a"] = concat4;
declare const concat5: Concat<["a", "b"], []>;
const concat5Assignment: ["a", "b"] = concat5;
declare const concat6: Concat<["a", "b"], ["x", "y"]>;
const concat6Assignment: ["a", "b", "x", "y"] = concat6;
type PushBackImpl<ArrT extends any[], ElementT> = (
{
[index in IndicesOf<ArrT>] : ArrT[index]
} &
{
[index in NumberToString<ArrT["length"]>] : ElementT
}
);
type PushBack<ArrT extends any[], ElementT> = (
PushBackImpl<ArrT, ElementT> &
ElementsOf<PushBackImpl<ArrT, ElementT>>[] &
{ length : Add<ArrT["length"], 1> }
);
declare const pushBack0: PushBack<[], true>;
const pushBack0Assignment: [true] = pushBack0;
declare const pushBack1: PushBack<[true], "a">;
const pushBack1Assignment: [true, "a"] = pushBack1;
declare const pushBack2: PushBack<[true, "a"], "c">;
const pushBack2Assignment: [true, "a", "c"] = pushBack2;
type IndexOf<ArrT extends any[], ElementT> = (
{
[index in IndicesOf<ArrT>]: (
ElementT extends ArrT[index] ?
(ArrT[index] extends ElementT ? index : never) :
never
);
}[IndicesOf<ArrT>]
);
//Can use StringToNumber<> to get a number
declare const indexOf0: IndexOf<["a", "b", "c"], "a">; //"0"
declare const indexOf1: IndexOf<["a", "b", "c"], "b">; //"1"
declare const indexOf2: IndexOf<["a", "b", "c"], "c">; //"2"
declare const indexOf3: IndexOf<["a", "b", "c"], "d">; //Never
declare const indexOf4: IndexOf<["a", "b", "a"], "a">; //"0"|"2"
declare const indexOf5: IndexOf<["a", "b", "c"], "a" | "b">; //"0"|"1"
//Splice
//Pop Back
//Push Front
//And other tuple operations?
//Implementing Map<> is even worse, you basically have to copy-paste some boilerplate code
//for each kind of Map<> operation you want to implement because we
//can't have generic types as type parameters
```
<!--
What do you want to use this for?
What shortcomings exist with current approaches?
-->
## Examples
<!-- Show how this would be used and what the behavior would be -->
Addition and subtraction should only allow integers
```ts
type example0 = 1 + 1; //2
type example1 = 1 + number; //number
type example2 = number + 1; //number
type example3 = number + number; //number
type example4 = 1.0 + 3.0; //4
type example5 = 1 - 1; //0
type example6 = 1 - number; //number
type example7 = number - 1; //number
type example8 = number - number; //number
type example9 = 1.0 - 3.0; //-2
```
If we did allow `5.1 - 3.2` as a type, we would get `1.8999999999999995` as a type.
```ts
type invalidSub = 5.1 - 3.2; //Error, 5.1 not allowed; must be integer; 3.2 not allowed; must be integer
type invalidAdd = 5.1 + 3.2; //Error, 5.1 not allowed; must be integer; 3.2 not allowed; must be integer
```
Maybe throw a compiler error on overflow with concrete numeric types substituted in,
```ts
//Number.MAX_SAFE_INTEGER + 1
type overflow = 9007199254740992 + 1; //Should throw compiler error; overflow
//Number.MIN_SAFE_INTEGER - 1000000
type overflow2 = -9007199254740991 - 1000000; //Should throw compiler error; overflow
type OverflowIfGreaterThanZero<N extends number> = (
9007199254740991 + N
);
type okay0 = OverflowIfGreaterThanZero<0>; //Will be Number.MAX_SAFE_INTEGER, so no error
type okay1 = OverflowIfGreaterThanZero<number>; //No error because type is number
type err = OverflowIfGreaterThanZero<1>; //Overflow; error
```
Comparisons should work kind of like `extends`
```ts
type gt = 3 > 2 ? "Yes" : "No"; //"Yes"
type gteq = 3 >= 2 ? "Yes" : "No"; //"Yes"
type lt = 3 < 2 ? "Yes" : "No"; //"No"
type lteq = 3 <= 2 ? "Yes" : "No"; //"No"
type eq = 3 == 3 ? "Yes" : "No"; //"Yes"
type neq = 3 != 3 ? "Yes" : "No"; //"No"
```
If either operand is `number`, the result should distribute
```ts
type gt0 = number > 2 ? "Yes" : "No"; //"Yes"|"No"
type gt1 = 2 > number ? "Yes" : "No"; //"Yes"|"No"
type gt2 = number > number ? "Yes" : "No"; //"Yes"|"No"
```
I don't think floating-point comparison should be allowed.
It's possible to have too many decimal places to represent accurately.
```ts
type precisionError = 3.141592653589793 < 3.141592653589793238 ?
    "Yes" : "No"; //Ends up being "No" even though it should be "Yes", because of precision loss
```
Converting between string and number literals is mostly for working with tuple indices,
```ts
type example0 = (string)1; //"1"
type example1 = (string)1|2; //"1"|"2"
type example2 = (number)"1"|"2"; //1|2
```
Converting from integer string literals to number literals should be allowed, as long as the value is within `MIN_SAFE_INTEGER` and `MAX_SAFE_INTEGER`,
but floating point should not be allowed, as the string may be a floating-point number that cannot be accurately represented.
For the same reason, converting floating-point number literals to string literals shouldn't be allowed.
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript / JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. new expression-level syntax)
Since this suggestion is purely about the type system, it shouldn't change any run-time behaviour, or cause any JS code to be emitted.
I'm pretty sure I've overlooked a million things in this proposal... | Suggestion,Awaiting More Feedback | high | Critical |
349,709,391 | TypeScript | Autocomplete fails on inferred type |
**TypeScript Version:** 3.1.0-dev.20180810
**Search Terms:** Autocomplete inferred
**Code**
```typescript
type ValueTypes = 1 | 2 | 3;
export type RecordValues<Props extends string | number | symbol> = Record<
Props,
ValueTypes
>;
export function identityFn<RV extends RecordValues<keyof RV>>(value: RV) {
return value;
}
const recordValue: RecordValues<"asdf"> = {
  asdf: 2 //Autocomplete works fine for the value here
};
identityFn({
asdf: 1 //Autocomplete shows every type available
});
```
**Expected behavior:**
For the autocomplete for the value on the second comment to be the same as the first.
Please note I'm not talking about the keys but the values.
**Actual behavior:**
Autocomplete for the value suggests every type available.
**Playground Link:** [link](https://www.typescriptlang.org/play/#src=type%20ValueTypes%20%3D%201%20%7C%202%20%7C%203%3B%0D%0A%0D%0Aexport%20type%20RecordValues%3CProps%20extends%20string%20%7C%20number%20%7C%20symbol%3E%20%3D%20Record%3C%0D%0A%20%20Props%2C%0D%0A%20%20ValueTypes%0D%0A%3E%3B%0D%0A%0D%0Aexport%20function%20identityFn%3CRV%20extends%20RecordValues%3Ckeyof%20RV%3E%3E(value%3A%20RV)%20%7B%0D%0A%20%20return%20value%3B%0D%0A%7D%0D%0A%0D%0Aconst%20recordValue%3A%20RecordValues%3C%22asdf%22%3E%20%3D%20%7B%0D%0A%20%20asdf%3A%202%20%2F%2FAutotcompletion%20works%20fine%20here%0D%0A%7D%3B%0D%0A%0D%0AidentityFn(%7B%0D%0A%20%20asdf%3A%201%20%2F%2F%20Try%20autocomplete%20here%0D%0A%7D)%3B%0D%0A)
**Related Issues:** #14841
| Bug,Domain: Completion Lists | low | Major |
349,728,004 | rust | specialize clone_from_slice for Copy types? | I just found that `clone_from_slice` emits worse code for `Copy` types than necessary. I wonder if it's possible to specialize it to emit the same code as `copy_from_slice`. | C-enhancement,T-libs-api | low | Minor |
349,730,524 | godot | range with 0 step won't raise error when used in for loop | <!-- Please search existing issues for potential duplicates before filing yours:
https://github.com/godotengine/godot/issues?q=is%3Aissue
-->
**Godot/OS version:**
<!-- Specify commit hash if non-official. -->
a71a5fc Manjaro x86_64
**Issue description:**
<!-- What happened, and what was expected. -->
Expected: the debugger to stop execution on the line with the following code:
```gdscript
for i in range(0,2,0):
```
because the following line stops the execution
```gdscript
print(range(0,2,0))
```
with this error
```
Error calling built-in function 'range': step argument is zero!
```
Happened: fails silently
**Steps to reproduce:**
New node with this script
```gdscript
extends Node
func _ready():
for i in range(0,2,0):
pass
```
```
The execution continues; the inner block is never executed. | bug,topic:gdscript,confirmed | low | Critical |
349,734,206 | rust | diagnostics: suggest using smaller type if constant is small enough and smaller type impls conversion | this code
````rust
fn main() {
let x = 123.1234_f32 - f32::from(100);
}
````
fails to compile as follows:
````rust
error[E0277]: the trait bound `f32: std::convert::From<i32>` is not satisfied
--> src/main.rs:2:28
|
2 | let x = 123.1234_f32 - f32::from(100);
| ^^^^^^^^^ the trait `std::convert::From<i32>` is not implemented for `f32`
|
= help: the following implementations were found:
<f32 as std::convert::From<u8>>
<f32 as std::convert::From<i16>>
<f32 as std::convert::From<u16>>
<f32 as std::convert::From<i8>>
    = note: required by `std::convert::From::from`
error: aborting due to previous error
For more information about this error, try `rustc --explain E0277`.
````
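For comparison, a version that compiles, since `f32` does implement `From<u8>` — a minimal sketch where the `_u8` suffix on the literal is the only change from the failing example:

```rust
fn main() {
    // `100_u8` selects u8 instead of the default i32, so the existing
    // `impl From<u8> for f32` applies and the conversion is lossless.
    let x = 123.1234_f32 - f32::from(100_u8);
    assert!((x - 23.1234_f32).abs() < 1e-4);
}
```

The same works for `i8`, `i16`, and `u16`, the other impls listed in the error output above.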
We know that ````100 < <u8>::max_value()```` and that there is ````<f32 as std::convert::From<u8>>````, thus maybe we could suggest using ````f32::from(100_u8)```` instead? | C-enhancement,T-compiler,A-suggestion-diagnostics | low | Critical |
349,737,254 | youtube-dl | Site support request: ViuTV (viu.tv) | ## Please follow the guide below
- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)
- Use the *Preview* tab to see what your issue will actually look like
---
### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.08.04*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2018.08.04**
### Before submitting an *issue* make sure you have:
- [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
- [x] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser
### What is the purpose of your *issue*?
- [ ] Bug report (encountered problems with youtube-dl)
- [x] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
---
### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your *issue*
---
### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows:
Add the `-v` flag to **your command line** you run youtube-dl with (`youtube-dl -v <your command line>`), copy the **whole** output and insert it here. It should look similar to one below (replace it with **your** log inserted between triple ```):
```
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['-v', 'http://viu.tv/encore/amanchu-advance/amanchu-advancee1mau-goh-ha-tin-maan-seung-yue-go-baak-dik-si']
[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2018.08.04
[debug] Python version 3.7.0 (CPython) - Linux-4.17.13-arch1-1-ARCH-x86_64-with-arch-Arch-Linux
[debug] exe versions: ffmpeg n4.0.2, ffprobe n4.0.2, rtmpdump 2.4
[debug] Proxy map: {}
[generic] amanchu-advancee1mau-goh-ha-tin-maan-seung-yue-go-baak-dik-si: Requesting header
WARNING: Falling back on generic information extractor.
[generic] amanchu-advancee1mau-goh-ha-tin-maan-seung-yue-go-baak-dik-si: Downloading webpage
[generic] amanchu-advancee1mau-goh-ha-tin-maan-seung-yue-go-baak-dik-si: Extracting information
ERROR: Unsupported URL: http://viu.tv/encore/amanchu-advance/amanchu-advancee1mau-goh-ha-tin-maan-seung-yue-go-baak-dik-si
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/youtube_dl/YoutubeDL.py", line 792, in extract_info
ie_result = ie.extract(url)
File "/usr/lib/python3.7/site-packages/youtube_dl/extractor/common.py", line 502, in extract
ie_result = self._real_extract(url)
File "/usr/lib/python3.7/site-packages/youtube_dl/extractor/generic.py", line 3274, in _real_extract
raise UnsupportedError(url)
youtube_dl.utils.UnsupportedError: Unsupported URL: http://viu.tv/encore/amanchu-advance/amanchu-advancee1mau-goh-ha-tin-maan-seung-yue-go-baak-dik-si
```
---
### If the purpose of this *issue* is a *site support request* please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**):
- Single video: http://viu.tv/encore/amanchu-advance/amanchu-advancee1mau-goh-ha-tin-maan-seung-yue-go-baak-dik-si
- Single video: http://viu.tv/encore/re-zero-starting-life-in-another-world/re-zero--starting-life-in-another-worlde14ming-wai-juet-mong-dik-jat-beng
- Playlist: (Not applicable)
Note that **youtube-dl does not support sites dedicated to [copyright infringement](https://github.com/rg3/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free)**. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
---
### Description of your *issue*, suggested solution and other information
Explanation of your *issue* in arbitrary form goes here. Please make sure the [description is worded well enough to be understood](https://github.com/rg3/youtube-dl#is-the-description-of-the-issue-itself-sufficient). Provide as much context and examples as possible.
If work on your *issue* requires account credentials please provide them or explain how one can obtain them.
1. ViuTV is a legitimate TV channel in Hong Kong. Not to be confused with the over-the-top (OTT) video service, Viu.
2. For some programmes, one can visit the website and watch recent episodes for free without an account. (i.e. No need to log in)
3. **The programmes can only be watched in Hong Kong. The website will check the IP address.**
4. The above example URLs should be valid until 2018/08/25. For those programmes, an episode is available for 14 days from the original air date.
5. The URLs seem to be in the following format: `http://viu.tv/encore/<programme name>/<programme name>e<episode number><episode title in Cantonese Romanization a.k.a. Cantonese Pinyin>`
6. Using various tools, such as uBlock Origin, I can see that the browser accesses some `.m3u8` URLs when loading the page. Some `.m3u8` URLs have a token parameter. (e.g. http://219.76.112.178/session/p8-5-6bd5ef84f75-103b5d391edc78b/hls/vodcp20/201808090592104/201808090592104.m3u8?token=a286766801b1a06d909fc654407b37a6_1533995750_1533981650)
7. By using `ffmpeg` on the `.m3u8` URLs, the episode can be downloaded. You may find an excerpt of `ffmpeg` output below.
*Note that there are two audio streams in the example - one is Japanese dub and another is Cantonese dub. The command in the example does not download both audio streams - only one audio stream will be selected.*
8. For the same session, subtitles are available at `http://219.76.14.67/session/p8-5-27eb10f8627-183c7b5e177af05/hls/vodcp20/201808090592104/201808090592104-TRD.srt`.
```
$ ffmpeg -i 'http://219.76.14.67/session/p8-5-27eb10f8627-183c7b5e177af05/hls/vodcp20/201808090592104/201808090592104.m3u8?token=499cc7923bef342a897c3b352abf68ef_1534009994_1533995894' -c copy output.mkv
ffmpeg version n4.0.2 Copyright (c) 2000-2018 the FFmpeg developers
built with gcc 8.1.1 (GCC) 20180531
configuration: --prefix=/usr --disable-debug --disable-static --disable-stripping --enable-avresample --enable-fontconfig --enable-gmp --enable-gnutls --enable-gpl --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libdrm --enable-libfreetype --enable-libfribidi --enable-libgsm --enable-libiec61883 --enable-libjack --enable-libmodplug --enable-libmp3lame --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libv4l2 --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxcb --enable-libxml2 --enable-libxvid --enable-nvenc --enable-omx --enable-shared --enable-version3
libavutil 56. 14.100 / 56. 14.100
libavcodec 58. 18.100 / 58. 18.100
libavformat 58. 12.100 / 58. 12.100
libavdevice 58. 3.100 / 58. 3.100
libavfilter 7. 16.100 / 7. 16.100
libavresample 4. 0. 0 / 4. 0. 0
libswscale 5. 1.100 / 5. 1.100
libswresample 3. 1.100 / 3. 1.100
libpostproc 55. 1.100 / 55. 1.100
[hls,applehttp @ 0x556f43e6ba40] Opening 'http://219.76.14.67/session/p8-5-27eb10f8627-183c7b5e177af05/hls/vodcp20/201808090592104/201808090592104_Layer5_vod.m3u8' for reading
[http @ 0x556f43e83180] Opening 'http://219.76.14.67/session/p8-5-27eb10f8627-183c7b5e177af05/hls/vodcp20/201808090592104/201808090592104_Layer1_vod.m3u8' for reading
[http @ 0x556f43e83180] Opening 'http://219.76.14.67/session/p8-5-27eb10f8627-183c7b5e177af05/hls/vodcp20/201808090592104/201808090592104_Layer2_vod.m3u8' for reading
[http @ 0x556f43e83180] Opening 'http://219.76.14.67/session/p8-5-27eb10f8627-183c7b5e177af05/hls/vodcp20/201808090592104/201808090592104_Layer3_vod.m3u8' for reading
[http @ 0x556f43e83180] Opening 'http://219.76.14.67/session/p8-5-27eb10f8627-183c7b5e177af05/hls/vodcp20/201808090592104/201808090592104_Layer4_vod.m3u8' for reading
[hls,applehttp @ 0x556f43e6ba40] Opening 'https://wstreamks.now.com/hls/vodcp20/vodcp20.key' for reading
[hls,applehttp @ 0x556f43e6ba40] Opening 'crypto+http://219.76.14.67/session/p8-5-27eb10f8627-183c7b5e177af05/hls/vodcp20/201808090592104/201808090592104_Layer5/4452_Period1/segment0.aac' for reading
[hls,applehttp @ 0x556f43e6ba40] Opening 'https://wstreamks.now.com/hls/vodcp20/vodcp20.key' for reading
[hls,applehttp @ 0x556f43e6ba40] Opening 'crypto+http://219.76.14.67/session/p8-5-27eb10f8627-183c7b5e177af05/hls/vodcp20/201808090592104/201808090592104_Layer1/4452_Period1/segment0.ts' for reading
[hls,applehttp @ 0x556f43e6ba40] Opening 'https://wstreamks.now.com/hls/vodcp20/vodcp20.key' for reading
[hls,applehttp @ 0x556f43e6ba40] Opening 'crypto+http://219.76.14.67/session/p8-5-27eb10f8627-183c7b5e177af05/hls/vodcp20/201808090592104/201808090592104_Layer2/4452_Period1/segment0.ts' for reading
[hls,applehttp @ 0x556f43e6ba40] Opening 'https://wstreamks.now.com/hls/vodcp20/vodcp20.key' for reading
[hls,applehttp @ 0x556f43e6ba40] Opening 'crypto+http://219.76.14.67/session/p8-5-27eb10f8627-183c7b5e177af05/hls/vodcp20/201808090592104/201808090592104_Layer3/4452_Period1/segment0.ts' for reading
[hls,applehttp @ 0x556f43e6ba40] Opening 'https://wstreamks.now.com/hls/vodcp20/vodcp20.key' for reading
[hls,applehttp @ 0x556f43e6ba40] Opening 'crypto+http://219.76.14.67/session/p8-5-27eb10f8627-183c7b5e177af05/hls/vodcp20/201808090592104/201808090592104_Layer4/4452_Period1/segment0.ts' for reading
Input #0, hls,applehttp, from 'http://219.76.14.67/session/p8-5-27eb10f8627-183c7b5e177af05/hls/vodcp20/201808090592104/201808090592104.m3u8?token=499cc7923bef342a897c3b352abf68ef_1534009994_1533995894':
Duration: 00:23:02.00, start: 0.127989, bitrate: 0 kb/s
Program 0
Metadata:
variant_bitrate : 339789
Stream #0:0(A2): Audio: aac (HE-AAC), 48000 Hz, stereo, fltp, 92 kb/s
Metadata:
id3v2_priv.com.apple.streaming.transportStreamTimestamp: \x00\x00\x00\x00\x00\x00,\xff
comment : Audio 2
Stream #0:1: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p(tv), 426x240, 25 fps, 25 tbr, 90k tbn, 50 tbc
Metadata:
variant_bitrate : 339789
Stream #0:2(A1): Audio: aac (HE-AAC) ([15][0][0][0] / 0x000F), 48000 Hz, stereo, fltp (default)
Metadata:
variant_bitrate : 339789
comment : Audio 1
Program 1
Metadata:
variant_bitrate : 853700
Stream #0:0(A2): Audio: aac (HE-AAC), 48000 Hz, stereo, fltp, 92 kb/s
Metadata:
id3v2_priv.com.apple.streaming.transportStreamTimestamp: \x00\x00\x00\x00\x00\x00,\xff
comment : Audio 2
Stream #0:3: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p(tv), 854x480, 25 fps, 25 tbr, 90k tbn, 50 tbc
Metadata:
variant_bitrate : 853700
Stream #0:4(A1): Audio: aac (HE-AAC) ([15][0][0][0] / 0x000F), 48000 Hz, stereo, fltp (default)
Metadata:
variant_bitrate : 853700
comment : Audio 1
Program 2
Metadata:
variant_bitrate : 1675777
Stream #0:0(A2): Audio: aac (HE-AAC), 48000 Hz, stereo, fltp, 92 kb/s
Metadata:
id3v2_priv.com.apple.streaming.transportStreamTimestamp: \x00\x00\x00\x00\x00\x00,\xff
comment : Audio 2
Stream #0:5: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p(tv), 1280x720, 25 fps, 25 tbr, 90k tbn, 50 tbc
Metadata:
variant_bitrate : 1675777
Stream #0:6(A1): Audio: aac (HE-AAC) ([15][0][0][0] / 0x000F), 48000 Hz, stereo, fltp (default)
Metadata:
variant_bitrate : 1675777
comment : Audio 1
Program 3
Metadata:
variant_bitrate : 3116703
Stream #0:0(A2): Audio: aac (HE-AAC), 48000 Hz, stereo, fltp, 92 kb/s
Metadata:
id3v2_priv.com.apple.streaming.transportStreamTimestamp: \x00\x00\x00\x00\x00\x00,\xff
comment : Audio 2
Stream #0:7: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p(tv), 1920x1080, 25 fps, 25 tbr, 90k tbn, 50 tbc
Metadata:
variant_bitrate : 3116703
Stream #0:8(A1): Audio: aac (HE-AAC) ([15][0][0][0] / 0x000F), 48000 Hz, stereo, fltp (default)
Metadata:
variant_bitrate : 3116703
comment : Audio 1
Output #0, matroska, to 'AA_01.mkv':
Metadata:
encoder : Lavf58.12.100
Stream #0:0: Video: h264 (Main) (H264 / 0x34363248), yuv420p(tv), 1920x1080, q=2-31, 25 fps, 25 tbr, 1k tbn, 90k tbc
Metadata:
variant_bitrate : 3116703
Stream #0:1(A2): Audio: aac (HE-AAC) ([255][0][0][0] / 0x00FF), 48000 Hz, stereo, fltp, 92 kb/s
Metadata:
id3v2_priv.com.apple.streaming.transportStreamTimestamp: \x00\x00\x00\x00\x00\x00,\xff
comment : Audio 2
Stream mapping:
Stream #0:7 -> #0:0 (copy)
Stream #0:0 -> #0:1 (copy)
Press [q] to stop, [?] for help
[hls,applehttp @ 0x556f43e6ba40] Opening 'crypto+http://219.76.14.67/session/p8-5-27eb10f8627-183c7b5e177af05/hls/vodcp20/201808090592104/201808090592104_Layer5/4452_Period1/segment1.aac' for reading
[hls,applehttp @ 0x556f43e6ba40] No longer receiving playlist 1
[hls,applehttp @ 0x556f43e6ba40] No longer receiving playlist 2
[hls,applehttp @ 0x556f43e6ba40] No longer receiving playlist 3
[hls,applehttp @ 0x556f43e6ba40] Opening 'crypto+http://219.76.14.67/session/p8-5-27eb10f8627-183c7b5e177af05/hls/vodcp20/201808090592104/201808090592104_Layer4/4452_Period1/segment1.ts' for reading
[hls,applehttp @ 0x556f43e6ba40] Opening 'crypto+http://219.76.14.67/session/p8-5-27eb10f8627-183c7b5e177af05/hls/vodcp20/201808090592104/201808090592104_Layer5/4452_Period1/segment2.aac' for reading
[hls,applehttp @ 0x556f43e6ba40] Opening 'crypto+http://219.76.14.67/session/p8-5-27eb10f8627-183c7b5e177af05/hls/vodcp20/201808090592104/201808090592104_Layer4/4452_Period1/segment2.ts' for reading
[hls,applehttp @ 0x556f43e6ba40] Opening 'crypto+http://219.76.14.67/session/p8-5-27eb10f8627-183c7b5e177af05/hls/vodcp20/201808090592104/201808090592104_Layer5/4452_Period1/segment3.aac' for reading
[hls,applehttp @ 0x556f43e6ba40] Opening 'crypto+http://219.76.14.67/session/p8-5-27eb10f8627-183c7b5e177af05/hls/vodcp20/201808090592104/201808090592104_Layer4/4452_Period1/segment3.ts' for reading
```
| geo-restricted | low | Critical |
349,762,476 | pytorch | [caffe2] enforce fail at context_gpu.cu:285, cannot get GPU memory usage statistics | ## Issue description
enforce fail at context_gpu.cu:285, cannot get GPU memory usage statistics
## Code example
```python
from caffe2.python import utils

mem_stats = utils.GetGPUMemoryUsageStats()
```

Output:

```
WARNING: Logging before InitGoogleLogging() is written to STDERR
W0811 21:28:20.350441 28545 init.h:99] Caffe2 GlobalInit should be run before any other API calls.
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/caffe2/python/utils.py", line 245, in GetGPUMemoryUsageStats
    device_option=core.DeviceOption(caffe2_pb2.CUDA, 0),
  File "/usr/local/lib/python2.7/dist-packages/caffe2/python/workspace.py", line 165, in RunOperatorOnce
    return C.run_operator_once(StringifyProto(operator))
RuntimeError: [enforce fail at context_gpu.cu:285] FLAGS_caffe2_gpu_memory_tracking. Pass --caffe2_gpu_memory_tracking to enable memory stats Error from operator:
output: "____mem____" name: "" type: "GetGPUMemoryUsage" device_option { device_type: 1 cuda_gpu_id: 0 }
```
## System Info
- PyTorch or Caffe2: Caffe2
- How you installed PyTorch (conda, pip, source): source
- OS: Ubuntu 16.04
- CUDA/cuDNN version: 8.0 / 5.1
- GPU models and configuration: GTX 1080
- GCC version (if compiling from source): 5.1
| caffe2 | low | Critical |
349,764,311 | rust | Feature Request: Add regex support to test filters | ## Proposal
I'd like to filter my tests based on a more complex pattern than just substring.
The current behavior of rust tests is that when a pattern is specified, then only tests that contain that pattern as a substring are run.
`cargo test -- foo` will run only tests that contain the substring `foo`.
I propose to amend this to allow for a regex syntax.
`cargo test -- /regex/` which will run tests that match the regex `regex`.
This is "technically" a breaking change because it's possible that people are already using filters with slashes in them, but since test names can't contain slashes these invocations would always be running 0 tests.
# Misc
* Let me know if this should be an RFC. The feature seems pretty small, but I'd understand if it should be specced out more fully.
* I'd be happy to implement this if people are interested.
| T-libs-api,C-feature-request,A-libtest | medium | Major |
349,775,073 | TypeScript | Incorrect "Cannot find name" when importing default value |
**TypeScript Version:** 3.0.1
**Search Terms:** Cannot find name import default
**Code**
```ts
// fetch.d.ts
declare function fetch(url: string): Promise<any>;
export = fetch;
declare namespace fetch {
export { fetch as default };
}
// index.ts
import f from "./fetch";
f("url"); // error TS2304: Cannot find name 'f'
```
**Expected behavior:** Name should be found
**Actual behavior:** Cannot find name. Interestingly `import { default as f } from "./fetch";` works.
**Playground Link:** (It doesn't support modules :/)
**Related Issues:** Couldn't find one
| Bug | low | Critical |
349,776,467 | rust | Add a feature to print actual static libs linked in, --print=native-static-libs seems not to work | For Debian we need to keep track of static libs linked into a final binary, for copyright purposes.
It seems that `src/librustc_codegen_llvm/back/link.rs` performs the actual linking and search of paths, and it refers to a `--print=native-static-libs` option but this doesn't seem to work.
For example when I [link to the system libbacktrace](https://github.com/infinity0/backtrace-rs/commit/9cbfda1e96ea624991237c3ba48f398df741cd90) and then build this with `RUSTFLAGS="--print=native-static-libs" cargo build --verbose --verbose --verbose` I can see the build.rs output which is
~~~~
cargo:rustc-link-lib=static=backtrace
cargo:rustc-link-search=native=/usr/lib/gcc/x86_64-linux-gnu/7/
~~~~
But then I don't see any follow-up output that indicates that rustc actually found the file `/usr/lib/gcc/x86_64-linux-gnu/7/libbacktrace.a` and this was what was actually linked into the final `backtrace-sys.rlib`. | A-linkage,A-driver,T-compiler,C-feature-request | low | Minor |
349,800,295 | angular | pwa can't cache resource with space in url |
## I'm submitting a...
<!-- Check one of the following options with "x" -->
<pre><code>
[ ] Regression (a behavior that used to work and stopped working in a new release)
[x] Bug report <!-- Please search GitHub for a similar issue or PR before submitting -->
[ ] Performance issue
[ ] Feature request
[ ] Documentation issue or request
[ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question
[ ] Other... Please describe:
</code></pre>
## Current behavior
When there is a resource in the assets directory whose filename includes a space (" "), the @angular/pwa service worker can't cache it correctly.
## Expected behavior
The resource should be cached and served correctly (no 504) when offline.
## Minimal reproduction of the problem with instructions
1. `ng add @angular/pwa`
2. Put an image named "a b.jpg" in `assets`
3. Reference it in the HTML
4. `ng build --prod`
5. Open the app in a browser
6. Enable offline mode in Chrome DevTools
7. Refresh the page

The request for the image will get a 504.
## What is the motivation / use case for changing the behavior?
<!-- Describe the motivation or the concrete use case. -->
## Environment
<pre><code>
Angular version: X.Y.Z
<!-- Check whether this is still an issue in the most recent Angular version -->
Browser:
- [x] Chrome (desktop) version 68
- [ ] Chrome (Android) version XX
- [ ] Chrome (iOS) version XX
- [ ] Firefox version XX
- [ ] Safari (desktop) version XX
- [ ] Safari (iOS) version XX
- [ ] IE version XX
- [ ] Edge version XX
For Tooling issues:
- Node version: XX <!-- run `node --version` -->
- Platform: <!-- Mac, Linux, Windows -->
Others:
<!-- Anything else relevant? Operating system version, IDE, package manager, HTTP server, ... -->
</code></pre>
| type: bug/fix,effort1: hours,freq1: low,area: service-worker,state: confirmed,P3 | low | Critical |
349,813,158 | TypeScript | Assignment of argument to argument loses deduced typing | This could be a duplicate, but it is difficult to construct an issue search to tell.
The problem shows with `typescript@latest` version `3.0.1`.
Given the nature of the problem, I suspect it would not have been fixed in `typescript@next`. Apologies for not testing.
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
type inference error argument
**Code**
```ts
// A *self-contained* demonstration of the problem follows...
// Just paste this code into your favorite TypeScript IDE
function foobar_good(arg: unknown): void | number {
if (typeof arg != 'number') {
return
}
return arg;
}
function foobar_bad(arg: unknown): void | number {
if (typeof arg != 'number') {
return
}
arg = arg;
return arg;
}
```
> [ts] Type 'unknown' is not assignable to type 'number | void'.
Type 'unknown' is not assignable to type 'void'.
**Expected behavior:**
Should compile.
**Actual behavior:**
Does not compile.
| Bug | low | Critical |
349,832,945 | go | encoding/json: clarify what happens when unmarshaling into a non-empty interface{} | ### What version of Go are you using (`go version`)?
```
go version go1.10.3 linux/amd64
```
### What operating system and processor architecture are you using (`go env`)?
```sh
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/deuill/.cache/go-build"
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/deuill/.go"
GORACE=""
GOROOT="/usr/lib/go"
GOTMPDIR=""
GOTOOLDIR="/usr/lib/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build634949289=/tmp/go-build -gno-record-gcc-switches"
```
### What did you do?
Link to Play: https://play.golang.org/p/37E1QHWofMy
Passing a pointer to a `struct` (anonymous or not) as type `interface{}` to `json.Unmarshal` will have the original type replaced with a `map[string]interface{}` (correctly decoded, however). It appears the difference between correct and incorrect result is changing:
```go
var nopointer interface{} = Object{}
```
to:
```go
var nopointer = Object{}
```
In the example above. A `interface{}` containing a pointer to a `struct` works as expected, e.g.:
```go
var pointer interface{} = &Object{}
```
is the same as:
```go
var pointer = &Object{}
```
### What did you expect to see?
The following results, in order of desirability:
- The correct `struct` type (`Object`, for the example above) populated with data from the JSON object given.
- An error denoting the inability to unmarshal into the underlying type given.
- The behaviour above documented in the documentation for `json.Unmarshal`.
### What did you see instead?
None of the above? The use case that led to this issue for me was giving constructors types, where these types would be used as "templates" of sorts for populating with data in subsequent requests. For example:
```go
type Decoder interface {
Decode(*http.Request) (interface{}, error)
}
type RequestHandler func(interface{}) (interface{}, error)
type Request struct {
Value interface{}
}
func (r Request) Decode(req *http.Request) (interface{}, error) {
// Read body from request
// ...
if err := json.Unmarshal(body, &r.Value); err != nil {
return nil, err
}
return r, nil
}
type User struct {
Name string `json:"name"`
}
func NewHandler(decoder Decoder, handler RequestHandler) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
req, err := decoder.Decode(r)
if err != nil {
return
}
resp, err := handler(req)
if err != nil {
return
}
// Encode response into ResponseWriter etc.
// ...
}
}
```
It is implied, then, that `NewHandler` is called with the bare type:
```
NewHandler(Request{Value: User{}}, userHandler)
```
Which will give "fresh" values to the `RequestHandler` each time, as they emerge from `Request.Decode`. If given a pointer, i.e.:
```
NewHandler(Request{Value: &User{}}, userHandler)
```
Previous requests will populate the pointer to `User`, leaving garbage data for future requests.
Apologies for the perhaps obtuse example. | Documentation,NeedsFix | low | Critical |
349,837,288 | rust | Add a `skip` method to the `Read` trait | The [`Read`](https://doc.rust-lang.org/std/io/trait.Read.html) trait doesn't currently offer an optimal way to discard a number of bytes from the data source. Using the `bytes()` iterator requires you to evaluate a `Result` for each byte you discard and using `read_exact()` requires you to provide a buffer to unnecessarily fill.
This solution (borrowed from [a StackOverflow answer](https://stackoverflow.com/a/42247224/109549)) seems to work nicely:
```rust
use std::fs::File;
use std::io::{self, Read};

let mut file = File::open("foo.txt").unwrap();
// Discard 27 bytes
io::copy(&mut file.by_ref().take(27), &mut io::sink()).unwrap();
// Read the rest
let mut interesting_contents = Vec::new();
file.read_to_end(&mut interesting_contents).unwrap();
```
I can put a PR together if you'd like. | T-libs-api,C-feature-request | medium | Major |
349,849,658 | TypeScript | No completions in global namespace augmentation from original namespace | **TypeScript Version:** 3.1.0-dev.20180810
**Code**
**a.d.ts**
```ts
declare namespace N {
class A {}
}
```
**b.d.ts**
```ts
export {};
declare global {
namespace N {
class B extends N./**/ {}
}
}
```
**Expected behavior:**
Completions at `/**/` are `A` and `B`.
**Actual behavior:**
Only get completion for `B`. | Bug,Help Wanted,Domain: Completion Lists | low | Minor |
349,907,696 | go | cmd/compile: recognize map-copy with range idiom | As @mvdan notes on the map clearing idiom issue #20138 there are more cases of the map copying pattern than there are cases of the map clearing pattern in the std library:
```
gogrep '~ $m2 = $_; $*_; for $k, $v := range $(m1 is(map)) { $m2[$k] = $v }' std cmd
```
map copy pattern:
```go
m2 := make(map[K]V)
for k, v := range m1 {
m2[k] = v
}
```
This pattern and variations of it could be detected and made more efficient by avoiding (some) iteration overhead and allocating the new map to a sufficient size.
### Implementation Notes
Copying the backing array directly with memcpy (as analog to memclr in the map clearing pattern optimization) however would need to keep the internal hash seed of the two maps equal. Given that there are no guarantees what the iteration order over maps is (not even that it can not be the same) from a language spec perspective that change seems backwards compatible. There is the possibility of leaking state (the seed) to other parts of a go implemented system and by learning something about collisions in the copied map now being able to infer collisions in the original map.
Another implementation caveat is that any values in overflow buckets will need to be copied too. In the map clear case they are currently ignored for clearing as the pointers to them will be removed and the overflow buckets are garbage collected. For a map copy however all overflow buckets will need to be processed too. The new overflow buckets could be bulk allocated. Depending on the complexity and performance gain the backing array and overflow buckets could be allocated in one allocation together.
A further aspect is that if the map is growing that growth would either need to finish first before copying the new bucket array (could be a lot of work) or the state of map growth with old and new bucket array would have to be copied too.
Note that the map make + map copy issue is similar to the slice make + slice copy/append issue #26252, and some of the infrastructure to detect the patterns could be reused for both optimizations. There is also the possibility of a more general compiler pass that detects the making and overwriting of more complex structures (those that cannot already be easily handled in the SSA phase) to avoid unneeded memclr in the future. Slices and maps seem like common enough and frequently requested cases to specialize first, in the absence of a more general and likely more complex solution.
### Map Copying A Special Case Of Map Merging
A more general optimization to map patterns could be to detect any merge (where copy is a special case of m2 being empty):
```go
for k, v := range m1 {
m2[k] = v
}
```
While this is easier to detect as a pattern (no compile time reasoning that m2 is empty needed) it is unclear if there are large performance gains that could be made in the non-copy case.
@josharian @khr @mvdan | Performance,NeedsInvestigation,compiler/runtime | low | Major |
349,912,413 | angular | @angular/animations. incoherent and inconsistent behavior when animating divs | ## I'm submitting a...
<!-- Check one of the following options with "x" -->
<pre><code>
[ ] Regression (a behavior that used to work and stopped working in a new release)
[x] Bug report <!-- Please search GitHub for a similar issue or PR before submitting -->
[ ] Performance issue
[ ] Feature request
[ ] Documentation issue or request
[ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question
[ ] Other... Please describe:
</code></pre>
## Current behavior
I am playing around with the animation package and have multiple points to raise here.
Please look at the StackBlitz link provided to understand the example below.
For the exact same animation code (but with 2 different animation triggers):
- red item: inside an `ngFor`
- yellow item: without `ngFor`
When you hit the button the first time:
- red item: moves with animation (20px)
- yellow item: moves with animation (30px)
When you hit the button a second time:
- red item: moves with animation, but from the beginning of the row (not from its current position) => **Problem 1 (lost position)**
- yellow item: moves with NO animation from its current position to the new one => **Problem 2 (lost animation)**
Note:
When you uncomment only the line in the method `animSecondDone`, Problem 2 is solved. But it should not be needed in my opinion, since `transition('* => moveYellowX', animate('1s linear'))` includes `moveYellowX => moveYellowX`.
Doing the same for the red item (uncommenting only the line in the method `animDone`) breaks the red item's animation completely => **Problem 3**.
**Problem 4 => animations behave differently with/without `ngFor`**
I think all of the problems [1 to 4] mentioned should be fixed. Can you confirm that these are all problems?
(I didn't open one issue per problem, but we can if you confirm it's needed.)
## Expected behavior
On each button click, the yellow and red items should behave the same, i.e. move horizontally with animation.
## Minimal reproduction of the problem with instructions
check `Current behavior` section for instructions.
https://stackblitz.com/edit/angular-animations-poke-bugs?file=src/app/anim-test-a.component.ts
## What is the motivation / use case for changing the behavior?
Fixing the package.
## Environment
Angular V. => all in the stackblitz package.json
Browser:
- [x] Chrome (desktop) Version 68.0.3440.106 (Official Build) (64-bit)
| type: bug/fix,area: animations,freq2: medium,P3 | low | Critical |
349,939,564 | pytorch | [feature request] Rank-Revealing QR - Adding dgeqp3 support to torch.qr | ## Issue description
PyTorch currently does not support QR decomposition with pivots, which is useful in the context of rank identification.
This can be done with the current setup using `SVD`, but is much less efficient than a QR decomposition.
Given the availability of `orgqr` and `geqrf` as direct calls, I assume that `torch.qr` is implemented using those. Support for [`dgeqp3`](http://www.netlib.org/lapack/explore-html/dd/d9a/group__double_g_ecomputational_ga1b0500f49e03d2771b797c6e88adabbb.html#ga1b0500f49e03d2771b797c6e88adabbb) (QR with pivots) would be a nice addition.
Scipy's implementation of [`sp.linalg.qr`](https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.linalg.qr.html) supports pivoting through the `pivoting=True` argument and a call to [`dgeqp3`](http://www.netlib.org/lapack/explore-html/dd/d9a/group__double_g_ecomputational_ga1b0500f49e03d2771b797c6e88adabbb.html#ga1b0500f49e03d2771b797c6e88adabbb).
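For reference, a small sketch of the SciPy route (the rank-estimation tolerance of `1e-10` is an arbitrary choice for illustration):

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.RandomState(0)
a = rng.rand(5, 3)

# Column-pivoted QR: a[:, p] == q @ r (up to floating point).
q, r, p = qr(a, pivoting=True)
assert np.allclose(a[:, p], q @ r)

# With pivoting, |diag(R)| is non-increasing, so a rank estimate falls out
# of counting diagonal entries above a tolerance.
rank = int(np.sum(np.abs(np.diag(r)) > 1e-10))
print(rank)
```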
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano @vishwakftw @SsnL | todo,feature,triaged,module: linear algebra,actionable | low | Major |
349,940,756 | go | cmd/go: provide straightforward way to see non-test dependencies | It's important to be able to see the full set of dependencies used by production code exclusive of tests, particularly with different major versions as distinct modules, as inadvertently using an import with a different major version can introduce bugs (also, some external dependencies can be toxic to have in production code).
The `go mod why` command provides a `-vendor` flag to exclude tests; perhaps `go list` could provide a similar flag to exclude testing dependencies.
In addition to the above, it is also useful to be able to be able to see modules used by local production code and tests, but not transitively by tests in external dependencies, as external dependencies are harder to change, and inadvertently forked modules are still an issue for tests too. I'm not sure of a good way to specify that, though. | NeedsInvestigation,modules | high | Critical |
349,941,424 | go | x/net/http2: Connection reset incorrectly in high load performance test | ### What version of Go are you using (`go version`)?
golang:1.10.0 linux
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
container: rhel
kubenet:
### What did you do?
```
func (sc *serverConn) wroteFrame(res frameWriteResult) {
...
sc.closeStream(st, errHandlerComplete) <=== _code1_
}
...
wr.replyToWriter(res.err) <=== _code2_
...
}
```
**In some cases, the server goroutine will be switched out of its execution context between _code1_ and _code2_.
If the handler goroutine switches back to its execution context in these cases, that will make writeDataFromHandler() (writeHeaders() also?) fail and set responseWriterState to dirty.**
```
func (sc *serverConn) writeDataFromHandler(stream *stream, data []byte, endStream bool) error {
...
case <-stream.cw:
// If both ch and stream.cw were ready (as might
// happen on the final Write after an http.Handler
// ends), prefer the write result. Otherwise this
// might just be us successfully closing the stream.
// The writeFrameAsync and serve goroutines guarantee
// that the ch send will happen before the stream.cw
// close.
select {
case err = <-ch:
frameWriteDone = true
default:
return errStreamClosed
}
}
...
}
func (rws *responseWriterState) writeChunk(p []byte) (n int, err error) {
...
if err := rws.conn.writeDataFromHandler(rws.stream, p, endStream); err != nil {
rws.dirty = true
return 0, err
}
...
}
```
**This issue can be reproduced easily by adding an sc.logf() call between _code1_ and _code2_, even in a lower-load performance test.
PS: I think that in wroteFrame() we shouldn't call "wr.replyToWriter(res.err)" when closing the stream with an error. It causes writeDataFromHandler() to exit with no error, even though this stream/responseWriterState is already in an incorrect state.**
### What did you expect to see?
no connection reset in load test
### What did you see instead?
Connection reset and traffic failed before a new connection was set up.
| NeedsInvestigation | low | Critical |
349,972,860 | go | x/mobile: lifecycle.StageDead never reach on Android | Please answer these questions before submitting your issue. Thanks!
### What version of Go are you using (`go version`)?
go version go1.8.1 darwin/amd64
### Does this issue reproduce with the latest release?
yes
### What operating system and processor architecture are you using (`go env`)?
GOARCH="amd64"
GOBIN="/Users/ntop/tools/go"
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/ntop/workspace/go"
GORACE=""
GOROOT="/usr/local/go"
GOTOOLDIR="/usr/local/go/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/t6/8417qy4n2zn3lb370499nrl00000gn/T/go-build741486693=/tmp/go-build -gno-record-gcc-switches -fno-common"
CXX="clang++"
CGO_ENABLED="1"
PKG_CONFIG="pkg-config"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
### What did you do?
Add the following code to the [basic example](https://github.com/golang/mobile/blob/master/example/basic/main.go):
```
for e := range a.Events() {
switch e := a.Filter(e).(type) {
case lifecycle.Event:
switch e.Crosses(lifecycle.StageAlive) {
case lifecycle.CrossOn:
log.Println("==========application create")
case lifecycle.CrossOff:
log.Println("==========application destroy")
		}
	}
}
```
### What did you expect to see?
Both "==========application create" and "==========application destroy" should be printed.
### What did you see instead?
I see only "==========application create" printed.
According to the documentation in lifecycle.go for StageAlive: "On Android, these correspond to onCreate and onDestroy." So `e.Crosses(lifecycle.StageAlive) == lifecycle.CrossOff` should be true when Activity.onDestroy() is called. But that just did not happen. I have read the code of 'android.go': it doesn't implement the onDestroy() method and does not send any lifecycle.StageDead message.
| mobile | low | Critical |
349,991,956 | godot | Undocumented thread (un)safety | As I commented in #18714, we have some basic concepts of thread safety documented in the docs.
I've been working on a procedural generator and trying to speed up the generation. I decided to use multithreading, but I have several doubts/problems.
context: I'm working using C++ GDN.
1. I'm using std::thread and OpenMP pragmas; does that have implications, compared to using godot::Thread, when interacting with the engine API?
2. I have the following code to generate some meshes:
```
Spatial *meshes_node = new Spatial;
//#pragma omp parallel for
for (int i = 0; i < meshDataArray.size(); i++) {
const Array &arr = meshDataArray[i];
MeshInstance *meshInst = new MeshInstance;
meshInst->set_material_override(material);
Ref<ArrayMesh> mesh_ref(new ArrayMesh);
mesh_ref->add_surface_from_arrays(Mesh::PRIMITIVE_TRIANGLES, arr);
meshInst->set_mesh(mesh_ref);
meshes_node->add_child(meshInst);
meshInst->add_to_group(TERRAIN_MESH_GROUP_NAME);
}
m_world->root_node->add_child(meshes_node);
```
If I uncomment the #pragma, it only generates the meshes processed by one thread. What's the reason for that? Do I have to generate them only from the main thread? Can that behaviour be circumvented using Nodes or the VisualServer?
3. Other code example, in this case is for collision generation:
```
Spatial *collisions_node = new Spatial;
//#pragma omp parallel for
for (int i = 0; i < meshDataArray.size(); i++) {
const Array &arr = meshDataArray[i];
Ref<ConcavePolygonShape> shape(new ConcavePolygonShape);
shape->set_faces(arr[Mesh::ARRAY_VERTEX]);
StaticBody *body = new StaticBody;
int owner_id = body->create_shape_owner(body);
body->shape_owner_add_shape(owner_id, shape);
collisions_node->add_child(body);
body->add_to_group(BODIES_GROUP_NAME);
}
m_world->root_node->add_child(collisions_node);
```
After removing the comment on the pragma, the generation of concave collisions is way faster (without multithreading it takes some seconds to complete) but the generation is unreliable: it sometimes finishes and sometimes crashes with the following error:
> ERROR: get: Condition ' !id_map.has(p_rid.get_data()) ' is true. returned: __null
> At: core/rid.h:155.
> ERROR: shape_set_data: Condition ' !shape ' is true.
> At: modules/bullet/bullet_physics_server.cpp:146.
> ERROR: get: Condition ' !id_map.has(p_rid.get_data()) ' is true. returned: __null
> At: core/rid.h:155.
> ERROR: body_add_shape: Condition ' !shape ' is true.
> At: modules/bullet/bullet_physics_server.cpp:503.
> handle_crash: Program crashed with signal 11
What's the cause of this error? Do I need to use the PhysicsServer in order to have safe concurrent access?
This class is being loaded as a Singleton and the docs say [it's safe to access the server in that case](https://godot.readthedocs.io/en/latest/tutorials/threads/thread_safe_apis.html#global-scope). What about nodes as in my examples? | discussion,topic:core | low | Critical |
349,998,957 | youtube-dl | YouTube content_html error | ## Please follow the guide below
- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)
- Use the *Preview* tab to see what your issue will actually look like
---
### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.08.04*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2018.08.04**
### Before submitting an *issue* make sure you have:
- [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
- [x] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser
### What is the purpose of your *issue*?
- [x] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
---
### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your *issue*
---
### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows:
Add the `-v` flag to **your command line** you run youtube-dl with (`youtube-dl -v <your command line>`), copy the **whole** output and insert it here. It should look similar to one below (replace it with **your** log inserted between triple ```):
```
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['--verbose', '--no-overwrites', '--no-progress', '--download-archive=/home/rubin/bin/youtube-dl.archive.txt', '--ignore-errors', '--rate-limit=512k', '--retries=1', '--buffer-size=16K', '--continue', '--prefer-free-formats', '--merge-output-format=mkv', '--add-metadata', '--output=/srv/media/music/Monstercat Release/Dubstep/%(title)s.%(ext)s', '--extract-audio', '--audio-format=mp3', '--no-post-overwrites', '--audio-quality=0', 'https://www.youtube.com/playlist?list=PLF5C76212C58C464A']
[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2017.05.18.1
[debug] Python version 3.5.3 - Linux-4.9.0-4-amd64-x86_64-with-debian-9.5
[debug] exe versions: ffmpeg 2.6.4, ffprobe 2.6.4
[debug] Proxy map: {}
[youtube:playlist] PLF5C76212C58C464A: Downloading webpage
[download] Downloading playlist: Dubstep
[youtube:playlist] PLF5C76212C58C464A: Downloading page #1
ERROR: 'content_html'
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/youtube_dl/YoutubeDL.py", line 771, in extract_info
return self.process_ie_result(ie_result, download, extra_info)
File "/usr/lib/python3/dist-packages/youtube_dl/YoutubeDL.py", line 923, in process_ie_result
ie_entries, playliststart, playlistend))
File "/usr/lib/python3/dist-packages/youtube_dl/extractor/youtube.py", line 272, in _entries
content_html = more['content_html']
KeyError: 'content_html'
```
### Description of your *issue*, suggested solution and other information
Error while downloading playlist `https://www.youtube.com/playlist?list=PLF5C76212C58C464A` | cant-reproduce | low | Critical |
350,049,113 | TypeScript | Contradicting types in tooltip and type checker when inference fails and when using @ts-ignore |
**TypeScript Version:** 3.1.0-dev.20180802
**Search Terms:** inference fail assignability
**Code**
When ignoring an inference error using `//@ts-ignore`, Intellisense and the type checker seem to contradict each other.
```ts
// ===============
// utility types:
// taken from https://github.com/Microsoft/TypeScript/issues/14829#issuecomment-322267089
type NoInfer<T> = T & {[K in keyof T]: T[K]};
/**
* Tests if two types are equal
*/
export type Equals<T, S> =
[T] extends [S] ? (
[S] extends [T] ? true : false
) : false
;
// =================
// actual repro
const testFn1 = <T>(arg1: T, arg2: NoInfer<T>) => arg2 as T;
// @ts-ignore: this is an intentional error
const test1 = testFn1({ a: 1 }, { b: 2 });
// ^ because we forbid inferring T from arg2, test1 is inferred as {}
// otherwise we would get a union type
type Test1 = Equals<typeof test1, { a: 1 }>; // false
const success1: Test1 = true; // no error, although typeof success1 is false
const success2: Test1 = false; // error: Type "false" is not assignable to "true", although typeof success2 is false
```
**Expected behavior:**
`false` is assignable to a variable which (according to the tooltip) has type `false`.
`true` is not assignable to a variable which (according to the tooltip) has type `false`.
**Actual behavior:**
`false` **is not** assignable to a variable which (according to the tooltip) has type `false`.
`true` **is** assignable to a variable which (according to the tooltip) has type `false`.


**Related Issues:**
Maybe https://github.com/Microsoft/TypeScript/issues/17863 at a first glance | Bug | low | Critical |
350,057,417 | go | x/mobile: preserve eglContext when activity paused on Android | I have read some code in `x/mobile/app`, specially, `android.c` and `android.go`. Gomobile will create Surface and eglContext very time when application resumed:
```
// android.go
case w := <-windowRedrawNeeded:
if C.surface == nil {
if errStr := C.createEGLSurface(w); errStr != nil {
return fmt.Errorf("%s (%s)", C.GoString(errStr), eglGetError())
}
}
theApp.sendLifecycle(lifecycle.StageFocused)
widthPx := int(C.ANativeWindow_getWidth(w))
heightPx := int(C.ANativeWindow_getHeight(w))
theApp.eventsIn <- size.Event{
WidthPx: widthPx,
HeightPx: heightPx,
WidthPt: geom.Pt(float32(widthPx) / pixelsPerPt),
HeightPt: geom.Pt(float32(heightPx) / pixelsPerPt),
PixelsPerPt: pixelsPerPt,
Orientation: orientation,
}
theApp.eventsIn <- paint.Event{External: true}
case <-windowDestroyed:
if C.surface != nil {
if errStr := C.destroyEGLSurface(); errStr != nil {
return fmt.Errorf("%s (%s)", C.GoString(errStr), eglGetError())
}
}
C.surface = nil
theApp.sendLifecycle(lifecycle.StageAlive)
// android.c
char* createEGLSurface(ANativeWindow* window) {
char* err;
EGLint numConfigs, format;
EGLConfig config;
EGLContext context;
if (display == 0) {
if ((err = initEGLDisplay()) != NULL) {
return err;
}
}
if (!eglChooseConfig(display, RGB_888, &config, 1, &numConfigs)) {
return "EGL choose RGB_888 config failed";
}
if (numConfigs <= 0) {
return "EGL no config found";
}
eglGetConfigAttrib(display, config, EGL_NATIVE_VISUAL_ID, &format);
if (ANativeWindow_setBuffersGeometry(window, 0, 0, format) != 0) {
return "EGL set buffers geometry failed";
}
surface = eglCreateWindowSurface(display, config, window, NULL);
if (surface == EGL_NO_SURFACE) {
return "EGL create surface failed";
}
const EGLint contextAttribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
context = eglCreateContext(display, config, EGL_NO_CONTEXT, contextAttribs);
if (eglMakeCurrent(display, surface, surface, context) == EGL_FALSE) {
return "eglMakeCurrent failed";
}
return NULL;
}
char* destroyEGLSurface() {
if (!eglDestroySurface(display, surface)) {
return "EGL destroy surface failed";
}
return NULL;
}
```
This causes the application to lose the eglContext when the activity is paused (e.g. on pressing Home). A simple way to avoid this is to preserve the eglContext (only destroying the Surface) and reuse it when needed. I have also done some research:
1. [Rethink how JNI and Map interact in lifecycle](https://github.com/mapbox/mapbox-gl-native/issues/578)
2. [100% NDK version of setPreserveEGLContextOnPause?](https://groups.google.com/forum/#!topic/android-ndk/jwVMF6zINus)
3. [ndk_helper/GLContext.cpp](https://android.googlesource.com/platform/development/+/19f647e/ndk/sources/android/ndk_helper/GLContext.cpp)
PS: The sample app [basic](https://github.com/golang/mobile/tree/master/example/basic) doesn't have this problem, because it recreates the shader/index-buffer/vertex-buffer each time the window is recreated, but that's impossible in a game with a large amount of textures and buffers. | mobile | low | Critical |
350,059,738 | pytorch | Type name float registered twice. This should not happen. Do you have duplicated CAFFE_KNOWN_TYPE? | This is a tracking issue for runtime errors people may see when they build and install PyTorch and Caffe2 separately on the latest PyTorch master. This is caused by https://github.com/pytorch/pytorch/pull/10163 and it is **expected**. The solution is to stop building and installing PyTorch and Caffe2 separately; instead use `FULL_CAFFE2=1`, which the ONNX scripts use after #10427.
I may update this space with more information as it happens.
If we add other static initializer registry type things, the `CAFFE_KNOWN_TYPE` initializer may not run first. We need to add appropriate error checking at all static initializer sites.
CC @yinghai @Yangqing | caffe2 | low | Critical |
350,068,288 | pytorch | Key already registered. Offending key: caffe2_print_stacktraces. | This error occurs if you install a `FULL_CAFFE2` build of PyTorch (e.g., `FULL_CAFFE2=1 python setup.py develop`), as well as another build of Caffe2 (e.g., `python setup_caffe2.py develop`)
`pip uninstall caffe2` is NOT sufficient to fix the problem. You will need to delete `caffe2/python/caffe2_pybind11_state.cpython-36m-x86_64-linux-gnu.so` (or similar) at least; a more reliable way to fix the problem is to clean your local build and try again. | caffe2 | low | Critical |
350,073,633 | go | proposal: cmd/go: add signal for `go get -u` to track branch | Hi everyone,
I already posted an issue regarding go.mod files.
https://github.com/golang/go/issues/26640
This one is a little bit different but somewhat linked to that old one.
I'm currently developing multiple repositories using two branches, **master** and **dev**. Those branches are defined in each repository. Let's assume I have the following ones:
- **myrepo1 (dev and master) using myrepo2**
- **myrepo2 (dev and master)**
I was wondering if there is a way to specify a branch for a specific repository inside the go.mod file.
For instance:
```
module github.com/myrepo1
require (
github.com/myrepo2 v@sometag or @somebranch
)
```
The reason why I'm asking is that when I run the `go mod -sync` command in my **repo2**, it will always check for the last commit in master (or maybe I'm wrong?). But I would like it to look into my **dev branch** for the last commit.
So I was thinking that maybe I could **tag** my dev branch and then specify the version I want to grab, but that means I would need to tag every time I want to test my dev branch.
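For what it's worth, under vgo/modules a branch can at least be *resolved* once: `go get github.com/myrepo2@dev` should freeze the current tip of `dev` into a pseudo-version (the value below is illustrative), though the resulting go.mod still pins a commit rather than tracking the branch, which is exactly what this proposal asks for:

```
module github.com/myrepo1

require (
    // produced by `go get github.com/myrepo2@dev`: the branch tip is frozen
    // into a pseudo-version; it does not keep following dev afterwards
    github.com/myrepo2 v0.0.0-20180813120000-abcdef123456
)
```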
Has anyone already faced that problem?
Hope my explanations are clear :s
Thanks in advance ! :)
@rsc
| Proposal,GoCommand,modules | medium | Major |
350,084,531 | pytorch | caffe2 CI does not test header install | This meant we did not catch #10463 | caffe2 | low | Minor |
350,090,149 | TypeScript | Improve diagnostic when string from property is not assignable to string literal. | **TypeScript Version:** 3.1.0-dev.20180810
**Code**
```ts
interface IceCreamOptions {
readonly cherry: "yes" | "no"
}
declare function iceCream(options: IceCreamOptions): void;
const options = { cherry: "yes" };
iceCream(options);
```
**Expected behavior:**
Error message recommending adding a type annotation to `options`. This should be detectable because the source of the failing property is a `PropertyAssignment` in an object literal being assigned to a variable. (If there is no variable assignment, we could still recommend using `"yes" as "yes"`, although that isn't as pretty.)
Could also come with a codefix. Note that in JS the fix is slightly different: put `/** @type {IceCreamOptions} */` before the `const` or use `/** @type {"yes"} */ ("yes")`.
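For reference, a runnable variant of the example showing the two fixes such a codefix could suggest (the function body here is a stand-in for the original `declare`, and `as const` requires a later TypeScript version than the one in this report):

```typescript
interface IceCreamOptions {
  readonly cherry: "yes" | "no";
}

// Stand-in implementation so the example runs; the issue uses `declare`.
function iceCream(opts: IceCreamOptions): string {
  return opts.cherry;
}

// Fix 1: annotate the variable so `cherry` keeps its literal type.
const annotated: IceCreamOptions = { cherry: "yes" };

// Fix 2: assert the property itself ("yes" as "yes"; `as const` in TS 3.4+).
const asserted = { cherry: "yes" as "yes" };

console.log(iceCream(annotated), iceCream(asserted));
```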
**Actual behavior:**
Error messages says `Type 'string' is not assignable to type '"yes" | "no"'.` with no information on how to fix it. | Suggestion,Help Wanted,Effort: Moderate,Domain: Error Messages | low | Critical |
350,121,610 | pytorch | [Caffe2] Multithreading in Caffe2 | I am trying to implement a multithreading approach in Caffe2, but I got these errors:
```
E0809 13:54:40.390210 101295 net_async_scheduling.cc:213] Detected concurrent runs
E0809 13:54:40.390240 101295 net.h:57] Failed to execute async run
Exception in thread thread2:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 754, in run
self.__target(*self.__args, **self.__kwargs)
File "/home/alfred/catkin_ws/src/live_pytorch/src/segmenter.py", line 119, in worker
densepose(data)
File "/home/alfred/catkin_ws/src/live_pytorch/src/segmenter.py", line 77, in densepose
cls_boxes, cls_segms, cls_keyps, cls_bodys = infer_engine.im_detect_all(model, cv_image, None, timers=timers)
File "/home/alfred/exp/DensePose/detectron/core/test.py", line 58, in im_detect_all
model, im, cfg.TEST.SCALE, cfg.TEST.MAX_SIZE, boxes=box_proposals
File "/home/alfred/exp/DensePose/detectron/core/test.py", line 158, in im_detect_bbox
workspace.RunNet(model.net.Proto().name)
File "/usr/local/lib/python2.7/dist-packages/caffe2/python/workspace.py", line 219, in RunNet
StringifyNetName(name), num_iter, allow_fail,
File "/usr/local/lib/python2.7/dist-packages/caffe2/python/workspace.py", line 180, in CallWithExceptionIntercept
return func(*args, **kwargs)
RuntimeError: [enforce fail at pybind_state.cc:1111] success. Error running net generalized_rcnn
```
May I know why I get these errors? | caffe2 | low | Critical |
350,124,085 | rust | If finalize_session_directory is called after errors it complains about missing session dir | It first deletes the session dir https://github.com/rust-lang/rust/blob/db1acaac7f8c84d8fb46211f5e96f678c28628a8/src/librustc_incremental/persist/fs.rs#L325-L335
And then tries to rename the session dir https://github.com/rust-lang/rust/blob/db1acaac7f8c84d8fb46211f5e96f678c28628a8/src/librustc_incremental/persist/fs.rs#L366-L383
It should return after deleting the session dir. | T-compiler,A-incr-comp,C-bug | low | Critical |
350,132,008 | go | cmd/go: diagnose platform-specific identifiers when building for other platforms |
### What version of Go are you using (`go version`)?
go version go1.10.3 darwin/amd64
### Does this issue reproduce with the latest release?
Yes
### What did you do?
main.go:
```
package main
func main() {
foo()
}
```
foo_windows.go:
```
package main
import "fmt"
func foo() {
fmt.Println("Windows")
}
```
Execute this command to get the error below:
```
$ GOOS=darwin go build
```
### What did you expect to see?
```
./main.go:4:2: undefined for darwin_amd64: foo (defined for windows)
```
### What did you see instead?
```
./main.go:4:2: undefined: foo
```
### Context
When a colleague who had not used golang previously tried to compile a linux-specific project on OSX, they got the 'undefined' error above and assumed the project was simply broken, not that it was platform-specific. If the golang compiler gave this user a better error message, they most likely would have figured out their issue more quickly.
It seems to me that when the compiler runs into an undefined function/constant/whatever, it would be really useful if it could look at the other platform-specific files in that directory and see if they are defined there. If so, it could emit a better error message noting that the function isn't defined for the particular platform that you're compiling for, but that it is defined for other platforms (and mention which ones).
Thinking about my imagined solution, when encountering this type of error the compiler could probably 'quickly' scan the AST of other platforms and see if the thing was defined there? In particular, since this would only be done on error, that wouldn't affect compile performance in general?
Thanks for your consideration. | NeedsInvestigation,FeatureRequest | low | Critical |
350,163,120 | pytorch | Reduce code duplication in interpolate and make it more generic | Currently, `torch.nn.functional.interpolate` has dedicated implementation for 1d, 2d and 3d data, as well as for nearest, bilinear and bicubic interpolation. The set of different kernels that are dispatched can be seen [here](https://github.com/pytorch/pytorch/blob/master/torch/nn/functional.py#L2050-L2079).
Those implementation are all very similar one to the other, and there is a lot of code duplication in there.
Compare for example [nearest](https://github.com/pytorch/pytorch/blob/master/aten/src/THNN/generic/SpatialUpSamplingNearest.c#L31-L95) with [bilinear](https://github.com/pytorch/pytorch/blob/master/aten/src/THNN/generic/SpatialUpSamplingBilinear.c#L33-L106).
I believe it is possible to refactor the underlying implementation so that we can have a single C++ / CUDA codepath, with minimal code duplication.
This could be achieved in two independent steps (that could be done at the same time):
- factor out the different kernel computations, so that we have a generic `filter` that is used to compute the interpolation, plus the size of the filter as a struct. For an example, [see how Pillow implements it](https://github.com/python-pillow/Pillow/blob/master/src/libImaging/Resample.c#L9-L83), and then the interpolation coefficients can be generically computed [as in here](https://github.com/python-pillow/Pillow/blob/master/src/libImaging/Resample.c#L236).
- make the interpolate kernel separable (it is already separable in most cases). This means that we can have the user specify the dimensions he wants to interpolate as a list, and in the C++/CUDA code we have a loop over the dimensions. Something like `F.interpolate(image, dim=[-2, -1])` for spatial interpolation, or `F.interpolate(volume, dim=[-3, -2, -1])` for volumetric data. We can have reasonable defaults if `dim` is not specified (that depends on the shape of the input), to keep backwards compatibility.
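Both points in miniature (an illustrative Python sketch, not the proposed ATen code): `resample_1d` is driven by a generic `(filter, support)` pair in the Pillow style, and `resample_2d` obtains separability by running the same 1-D pass once per dimension, so switching between nearest/linear/cubic only means switching the filter pair:

```python
def triangle(x):
    """Linear ('bilinear') filter, support = 1."""
    x = abs(x)
    return 1.0 - x if x < 1.0 else 0.0

def resample_1d(src, out_size, filt=triangle, support=1.0):
    """Generic 1-D resample: the filter is a plain function plus its support."""
    in_size = len(src)
    scale = max(in_size / out_size, 1.0)  # widen the filter when downscaling
    out = []
    for i in range(out_size):
        center = (i + 0.5) * in_size / out_size
        lo = max(int(center - support * scale), 0)
        hi = min(int(center + support * scale) + 1, in_size)
        weights = [filt((j + 0.5 - center) / scale) for j in range(lo, hi)]
        norm = sum(weights) or 1.0
        out.append(sum(w * src[j] for w, j in zip(weights, range(lo, hi))) / norm)
    return out

def resample_2d(img, out_h, out_w, filt=triangle, support=1.0):
    """Separable 2-D resample: one 1-D pass along each dimension."""
    rows = [resample_1d(r, out_w, filt, support) for r in img]               # last dim
    cols = [resample_1d(list(c), out_h, filt, support) for c in zip(*rows)]  # second-to-last dim
    return [list(r) for r in zip(*cols)]
```

A bicubic or Lanczos mode would just pass a different `(filt, support)` pair, and an N-D version would repeat the 1-D pass once per entry in the requested `dim` list.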
The first point will allow us to fuse the `nearest` / `bilinear` / `bicubic` interpolation modes in a single file, while the second point will fuse `temporal` / `spatial` and `volumetric` into the same function. | feature,triaged,module: interpolation | medium | Major |
350,170,180 | puppeteer | feature: expose page.on('networkidle0') and page.on('networkidle2') events | This has been requested quite a few times on the bugtracker already.
As a workaround, network idleness [can be emulated](https://github.com/GoogleChrome/puppeteer/issues/1353#issuecomment-356561654) on the puppeteer side. | feature,chromium | high | Critical |
350,271,403 | rust | "invert" borrow computation | I think we can make the process of checking borrows more efficient. Right now, we use a dataflow computation to compute the "borrows in scope" at each point. Then we walk over the MIR. At each point, whenever we access a path `P1`, we execute over the borrows in scope and look for any which conflict with `P1`. If we find any, we report an error.
I think what we should do instead is this: walk over all the borrows. For each borrow, do a DFS from the point of the borrow. Stop the DFS when we either (a) exit the borrow region or (b) detect that the path which was borrowed was overwritten. (We already do a very similar computation.) During that DFS, we look at the paths that are accessed by each statement we traverse, and check if they conflict with the borrow. (This will require us to efficiently index the paths that are accessed by a given location.)
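In toy form (illustrative Python, nothing like the real rustc data structures), the per-borrow walk looks like this:

```python
def check_borrow(cfg, borrow_point, path, region, accesses, kills):
    """Inverted borrow check for a single borrow of `path`.

    cfg: {node: [successor nodes]}
    region: set of nodes where the borrow's region is live
    accesses / kills: {node: set of paths used / overwritten at that node}
    Returns the nodes with a conflicting access, walking forward from the
    borrow and stopping when (a) we leave the region or (b) the path is killed.
    """
    errors, seen = [], {borrow_point}
    stack = list(cfg.get(borrow_point, ()))
    while stack:
        node = stack.pop()
        if node in seen or node not in region:  # (a) left the borrow region
            continue
        seen.add(node)
        if path in accesses.get(node, ()):
            errors.append(node)                 # conflicting access while borrowed
        if path in kills.get(node, ()):         # (b) borrowed path overwritten
            continue
        stack.extend(cfg.get(node, ()))
    return sorted(errors)
```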
This is a non-trivial amount of refactoring but I think it will be a win. We're already doing the DFS as part of the `precompute_borrows_out_of_scope` routine -- the only difference is that now we'll be checking the paths accessed by each statement as we go. But we can make that efficient by using a simple hashing scheme like the one described in #53159.
In exchange, we get to avoid doing an extra dataflow computation. This means one less dataflow to do and also one less thing to update.
| C-enhancement,P-medium,I-compiletime,T-compiler,A-NLL,WG-compiler-performance,NLL-performant | low | Critical |
350,313,871 | rust | Nondeterministic compiletest failure on long compiler output | If I write a UI test that produces a lot of compiler errors, the test fails with a message like this:
```none
running 1 test
F
failures:
---- [ui] ui/dont-commit.rs stdout ----
error: failed to decode compiler output as json: `EOF while parsing a string at line 1 column 302`
output: {"message":"snipped for brevity"}
line: {"message":"snipped for brevity"}
```
I can reliably trigger the bug with a UI test like this one:
```rust
fn main() {
let e = 1;
e = 2;
e = 2;
e = 2; // repeated a thousand times or so
}
```
However, the exact column at which the JSON parse fails is apparently nondeterministic. I ran the test multiple times under varying system load and got 237, 302, and 678. Seems like a race condition to me, although I don't know enough about compiletest to guess where the race might be.
# version info
The bug is present on commit 3e05f80a3790bb96.
```none
$ build/x86_64-unknown-linux-gnu/stage1/bin/rustc --version --verbose
rustc 1.30.0-dev
binary: rustc
commit-hash: unknown
commit-date: unknown
host: x86_64-unknown-linux-gnu
release: 1.30.0-dev
LLVM version: 7.0
``` | A-testsuite,E-hard,T-bootstrap,C-bug,A-compiletest | low | Critical |
350,335,284 | rust | Segfault in rustc -v on Linux when libunwind is present during compilation | I found a segfault in `rustc` that happens only when `libunwind.so` is present in the system during the build. Even though it [should use `libgcc_s.so`](https://github.com/rust-lang/rust/blob/master/src/libunwind/build.rs#L21), the `libunwind` implementation is used instead.
The problem disappears when I uninstall `libunwind` and then rebuild Rust; it then correctly uses `libgcc_s.so`. While this could be a possible workaround, other packages I use need this library installed.
Expected:
```
$ rustc -v
error: no input filename given
$
```
Actual:
```
$ rustc -v
error: no input filename given
Segmentation fault
$ dmesg | tail -1
[130216.618605] rustc[2142]: segfault at 0 ip 00007fc3e16174ee sp 00007ffdb724fc20 error 4 in libstd-1a86a61763c2b34b.so[7fc3e1542000+176000]
$
```
## Steps to reproduce
* Install `libunwind-dev` or equivalent (library + development files)
* Build Rust
* Run `rustc -v`
## Meta
`rustc --version --verbose`:
```
rustc 1.28.0-dev
binary: rustc
commit-hash: unknown
commit-date: unknown
host: x86_64-unknown-linux-gnu
release: 1.28.0-dev
LLVM version: 6.0
```
I collected the backtraces below with gdb and [rr](//rr-project.org)
Backtrace (last call to `_Unwind_Resume` before crash):
```
#0 0x00007f8d4dc19bf0 in __libunwind_Unwind_Resume () from /usr/lib64/libunwind.so.8
#1 0x00007f8d53ee87ad in <rustc_driver::RustcDefaultCalls as rustc_driver::CompilerCalls<'a>>::no_input () from /usr/lib64/librustc_driver-f1b059b9d0550531.so
#2 0x00007f8d53ee3ca2 in rustc_driver::run_compiler_with_pool () from /usr/lib64/librustc_driver-f1b059b9d0550531.so
#3 0x00007f8d53e4902c in <scoped_tls::ScopedKey<T>>::set () from /usr/lib64/librustc_driver-f1b059b9d0550531.so
#4 0x00007f8d53e1ddbe in syntax::with_globals () from /usr/lib64/librustc_driver-f1b059b9d0550531.so
#5 0x00007f8d53dfe783 in <std::panic::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once () from /usr/lib64/librustc_driver-f1b059b9d0550531.so
#6 0x00007f8d53ab21da in __rust_maybe_catch_panic () from /usr/lib64/libstd-1a86a61763c2b34b.so
#7 0x00007f8d53ee1871 in rustc_driver::run () from /usr/lib64/librustc_driver-f1b059b9d0550531.so
#8 0x00007f8d53ef162b in rustc_driver::main () from /usr/lib64/librustc_driver-f1b059b9d0550531.so
#9 0x000055e87feadac3 in std::rt::lang_start::{{closure}} ()
#10 0x00007f8d53a77413 in std::panicking::try::do_call () from /usr/lib64/libstd-1a86a61763c2b34b.so
#11 0x00007f8d53ab21da in __rust_maybe_catch_panic () from /usr/lib64/libstd-1a86a61763c2b34b.so
#12 0x00007f8d53a6c166 in std::rt::lang_start_internal () from /usr/lib64/libstd-1a86a61763c2b34b.so
#13 0x000055e87feadb24 in main ()
```
Backtrace (after resume):
```
#0 0x00007f8d53eda2b0 in core::ptr::drop_in_place () from /usr/lib64/librustc_driver-f1b059b9d0550531.so
#1 0x00007f8d53ee4e0a in rustc_driver::run_compiler_with_pool () from /usr/lib64/librustc_driver-f1b059b9d0550531.so
#2 0x00007ffd2d23f170 in ?? ()
#3 0x0000000000000000 in ?? ()
```
Backtrace (at the time of crash):
```
#0 je_rtree_get (dependent=true, key=0, rtree=<optimized out>)
at /var/tmp/portage/dev-lang/rust-1.28.0-r2/work/rustc-1.28.0-src/src/liballoc_jemalloc/../jemalloc/include/jemalloc/internal/rtree.h:325
#1 je_chunk_lookup (dependent=true, ptr=0x0)
at /var/tmp/portage/dev-lang/rust-1.28.0-r2/work/rustc-1.28.0-src/src/liballoc_jemalloc/../jemalloc/include/jemalloc/internal/chunk.h:89
#2 huge_node_get (ptr=0x0)
at /var/tmp/portage/dev-lang/rust-1.28.0-r2/work/rustc-1.28.0-src/src/liballoc_jemalloc/../jemalloc/src/huge.c:11
#3 je_huge_dalloc (tsdn=0x7ffff7fdf530, ptr=0x0)
at /var/tmp/portage/dev-lang/rust-1.28.0-r2/work/rustc-1.28.0-src/src/liballoc_jemalloc/../jemalloc/src/huge.c:424
#4 0x00007ffff7ac7e0a in rustc_driver::run_compiler_with_pool () from /usr/lib64/librustc_driver-f1b059b9d0550531.so
#5 0x00007fffffffb6a0 in ?? ()
#6 0x0000000000000000 in ?? ()
``` | I-crash,O-linux,C-bug | low | Critical |
350,342,724 | vscode | Support auto update for Windows and Linux ZIP | Hi,
Updating VSCode Insiders (portable) on macOS is a great experience; it simply restarts the app and applies the update.
However, VSCode Insiders (portable) on Windows prompts me every single day to download a zip file. Not only does this method regularly take a lot of time, it also takes a ton of time to sync tens of thousands of files in Dropbox, where the portable versions will typically be placed.
Could we have the Windows version update the same way as the macOS version? :) | feature-request,install-update,linux,windows | medium | Critical |
350,360,972 | rust | std::fs::remove_dir_all doesn't handle paths that are too long | On Linux `unlink()` might fail with `ENAMETOOLONG` but `std::fs::remove_dir_all` doesn't handle that and ignores the error, leaving the directory as it was.
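For context, `unlinkat(2)` takes an open directory fd plus a name relative to it, so each path handed to the kernel stays short; Python exposes the same call through the `dir_fd` parameter, which makes for a quick illustration of the idea (file names here are made up):

```python
import os

# Set up a tiny directory to delete (stand-in for the deep tree that would
# actually overflow PATH_MAX).
os.makedirs("demo_dir", exist_ok=True)
with open("demo_dir/entry", "w") as f:
    f.write("x")

# Open the parent directory and unlink relative to its fd: this is
# unlinkat(dirfd, "entry", 0) under the hood.
dirfd = os.open("demo_dir", os.O_RDONLY | os.O_DIRECTORY)
try:
    os.unlink("entry", dir_fd=dirfd)
finally:
    os.close(dirfd)
os.rmdir("demo_dir")
print("removed demo_dir")
```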
When the path is too long a parent directory should be `open()`ed instead and `unlinkat()` should be called with the relative path. | O-linux,C-bug,T-libs,A-io | low | Critical |
350,367,391 | rust | Performance regression in tight loop since rust 1.25 | I've finally gotten around to doing some proper benchmarking of rust versions for my crate:
http://chimper.org/rawloader-rustc-benchmarks/
As can be seen in the graph on that page there's a general performance improvement over time but there are some very negative outliers. Most (maybe all) of them seem to be very simple loops that decode packed formats. Since rust 1.25 those are seeing 30-40% degradations in performance. I've extracted a minimal test case that shows the issue:
```rust
fn decode_12le(buf: &[u8], width: usize, height: usize) -> Vec<u16> {
let mut out: Vec<u16> = vec![0; width*height];
for (row, line) in out.chunks_mut(width).enumerate() {
let inb = &buf[(row*width*12/8)..];
for (o, i) in line.chunks_mut(2).zip(inb.chunks(3)) {
let g1: u16 = i[0] as u16;
let g2: u16 = i[1] as u16;
let g3: u16 = i[2] as u16;
o[0] = ((g2 & 0x0f) << 8) | g1;
o[1] = (g3 << 4) | (g2 >> 4);
}
}
out
}
fn main() {
let width = 5000;
let height = 4000;
let buffer: Vec<u8> = vec![0; width*height*12/8];
for _ in 0..100 {
decode_12le(&buffer, width, height);
}
}
```
Here's a test run on my machine:
```sh
$ rustc +1.24.0 -C opt-level=3 bench_decode.rs
$ time ./bench_decode
real 0m4.817s
user 0m3.581s
sys 0m1.236s
$ rustc +1.25.0 -C opt-level=3 bench_decode.rs
$ time ./bench_decode
real 0m6.263s
user 0m5.067s
sys 0m1.196s
``` | I-slow,regression-from-stable-to-stable,T-libs | medium | Major |
350,379,394 | godot | Compress HTML5 export for hosts that don't support on-the-fly compression like itch.io | Hey there.
It would be nice if the HTML5 export had an option to automatically gzip both the `.wasm` and the `.pck` output files. Currently you can do that afterwards, but you'll need to hack into the exported `.js` file to gunzip the `.wasm`, so it is not optimal (gunzipping the `.pck` is probably possible from the HTML shell).
Usually you'd rely on your server to gzip those files but for instance [https://itch.io/](https://itch.io/) (which is really popular for distributing web games) does not do it because other engines (e.g. Unity) already compress their data during the export. | enhancement,platform:web,topic:porting,performance | medium | Critical |
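As a stopgap until the exporter does this itself, the post-export compression step can be a few lines of scripting (the file names below are assumptions about a typical export, and the exported `.js` loader or the web server still has to be taught to serve the decompressed data, which is exactly the awkward part this proposal would remove):

```python
# Post-export sketch: gzip the exported .pck and .wasm next to the originals.
import gzip
import pathlib
import shutil
import tempfile

def gzip_file(path, level=9):
    """Write path + '.gz' next to the original and return the new path."""
    src = pathlib.Path(path)
    dst = src.with_name(src.name + ".gz")
    with open(src, "rb") as fin, gzip.open(dst, "wb", compresslevel=level) as fout:
        shutil.copyfileobj(fin, fout)
    return dst

# Stand-in export directory with dummy artifacts:
export_dir = pathlib.Path(tempfile.mkdtemp())
for name in ("game.pck", "game.wasm"):
    (export_dir / name).write_bytes(b"\x00" * 4096)
    gz = gzip_file(export_dir / name)
    print(gz.name, gz.stat().st_size < 4096)
```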
350,415,328 | opencv | cuda::alphaComp is inaccurate with CV_8UC4 Mat type |
##### System information (version)
- OpenCV => 3.4.1
- Operating System / Platform => Ubuntu Linux 16.04.9
- Compiler => g++ (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 20160609
##### Detailed description
Using cuda::alphaComp on 8-bit char-type images (CV_8UC4) produces inaccurate results. At first glance, and when used once, this is barely visible, as the RGB values of the pixels are only offset by 1 or 2. However, when repeatedly compositing the same image, it gets darker until reaching a balance at 50% opacity.
This does not happen with images of type CV_32FC4.
##### Steps to reproduce
```.cpp
#include <stdio.h>
#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/core/opengl.hpp>
#include <opencv2/core/cuda.hpp>
#include <opencv2/cudaimgproc.hpp>
#include <opencv2/cudaarithm.hpp>
using namespace std;
using namespace cv;
// When using these defines, the resulting green square starts dimming.
// On first iteration of the loop, its RGB color is 0, 253, 0 instead of 0, 255, 0
// After running this loop for a while, the color dims to a perfect 0, 128, 0
#define TYPE_4CH CV_8UC4
#define TYPE_1CH CV_8UC1
#define MAX_VAL 255
// When using these defines, the results are as expected: the green square remains green
// And the pink square slowly fades in.
//#define TYPE_4CH CV_32FC4
//#define TYPE_1CH CV_32FC1
//#define MAX_VAL 1.0
int main(int argc, char *argv[])
{
namedWindow("test", CV_WINDOW_OPENGL);
// Drawing a green rectangle with 100% opacity
Mat frame(1080, 1920, TYPE_4CH, Scalar::all(0));
rectangle(frame, Rect(10, 10, 400, 400), Scalar(0, MAX_VAL, 0, MAX_VAL), CV_FILLED);
Mat blank(1080, 1920, TYPE_4CH, Scalar::all(0));
rectangle(blank, Rect(150, 150, 400, 400), Scalar(MAX_VAL, 0, MAX_VAL, MAX_VAL / 2), CV_FILLED);
cuda::GpuMat gpuFrame(frame);
while(true){
cuda::GpuMat gpuBlank(blank);
// Alpha-compositing them together numerous times should have the following results
// - Visible part of green square remains 100% green (RGB = 0, 255, 0)
// - Magenta square starts at opacity 50% and gradually fades in until becoming 100% visible
// over part of the green square
cuda::GpuMat gpuFrameResult;
cuda::alphaComp(gpuBlank, gpuFrame, gpuFrameResult, cuda::ALPHA_OVER);
// Splitting the image into 4 mats, to remove any resulting opacity from the image
vector<cuda::GpuMat> frameChannels;
cuda::GpuMat finalFrame(1080, 1920, TYPE_4CH, Scalar::all(0));
cuda::split(gpuFrameResult, frameChannels);
// A GpuMat with 100% opacity just to discard the eventual resulting
// alpha chanel in the result of alpha-compositing
cuda::GpuMat fullAlpha(1080, 1920, TYPE_1CH, Scalar(MAX_VAL));
if (gpuFrame.channels() > 3) {
frameChannels.pop_back();
}
frameChannels.push_back(fullAlpha);
cuda::merge(frameChannels, finalFrame);
finalFrame.copyTo(gpuFrame);
imshow("test", finalFrame);
waitKey(1);
}
}
``` | priority: low,category: gpu/cuda (contrib) | low | Critical |
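For intuition about the reported drift, here is a toy integer model of the kind of truncation that can cause it. This is an assumption, not the actual NPP/CUDA kernel, but it reproduces the "one code value lost per composite" behavior:

```python
# Toy model (assumption, not the real kernel): an 8-bit ALPHA_OVER fast
# path that divides by 256 (a bit shift) instead of 255 and truncates.
def over_truncating(src, src_a, dst):
    return (src * src_a + dst * (255 - src_a)) >> 8

# Compositing a fully transparent pixel over pure green should be a no-op,
# but the truncating path loses one code value on every composite:
v = 255
history = []
for _ in range(3):
    v = over_truncating(0, 0, v)
    history.append(v)
print(history)  # [254, 253, 252]

# Dividing by 255 with rounding keeps the identity composite exact, which
# is effectively what the CV_32FC4 path achieves:
def over_rounded(src, src_a, dst):
    return (src * src_a + dst * (255 - src_a) + 127) // 255

print(over_rounded(0, 0, 255))  # 255
```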
350,426,481 | vscode | [api] Allow extensions to use the syntax highlighter | It would be great if the extension API would offer a way to access the syntax highlighter of VS Code.
I am currently building an extension that offers an inline help system; I use a WebviewPanel to display the help in the editor. For consistency, it would be great if we could format code samples the same way the editor does, but I currently see no way to do that. | feature-request,api,tokenization | high | Critical |
350,433,369 | node | running webpack under node version higher than v10.1 produces heavy cpu usage and a high number of Idle Wake Ups |
* **Working version**: v10.1.0
* **Issue appears on versions**: v10.2.1 v10.3.0 v10.4.1 v10.5.0 v10.6.0 v10.7.0 v10.8.0
* **NVM VERSION**: 0.33.11
* **Platform**: Darwin 17.6.0 Darwin Kernel Version 17.6.0; root:xnu-4570.61.1~1/RELEASE_X86_64 x86_64
* **Subsystem**: unknown
After upgrading to the latest version, I'm suffering constant heavy CPU load when running webpack (even without any file system changes in the code base). The machine becomes almost unusable during active development.
package.json webpack versions:
```
{
"devDependencies": {
...
"nodemon": "^1.14.9",
...
"webpack": "^3.10.0",
"webpack-assets-manifest": "^1.0.0",
"webpack-bundle-analyzer": "^2.9.1",
"webpack-dev-middleware": "^2.0.4",
"webpack-dev-server": "^2.10.0",
"webpack-hot-middleware": "^2.21.0",
"webpack-manifest-plugin": "^1.3.2",
"webpack-spritesmith": "^0.3.3",
}
}
```
The number of "Idle Wake Ups" in the macOS "Activity Monitor" is high when the issue appears.
**v10.1.0 after 5 minutes without any file system changes**
<img width="847" alt="node-v10 1 0-5min" src="https://user-images.githubusercontent.com/2835032/44094668-19a4e316-9fd7-11e8-806e-29fd2ab754a0.png">
**v10.2.1 after 5 minutes without any file system changes**
<img width="847" alt="node-v10 2 1-5min" src="https://user-images.githubusercontent.com/2835032/44094679-20b805f2-9fd7-11e8-94e4-6cce828dc5ce.png">
**v10.3.0 after 5 minutes without any file system changes**
<img width="847" alt="node-v10 3 0-5min" src="https://user-images.githubusercontent.com/2835032/44094779-65ea3f28-9fd7-11e8-9811-cd9e2ae18170.png">
Any help/guidance on how to debug this issue is highly appreciated. | performance | low | Critical |
350,474,441 | go | x/mobile: Can't get the Back key event on Android |
### What version of Go are you using (`go version`)?
go version go1.8.1 darwin/amd64
### Does this issue reproduce with the latest release?
YES
### What did you do?
Add the following code to the basic example:
```
for e := range a.Events() {
switch e := a.Filter(e).(type) {
case key.Event:
log.Println("key event:", e)
}
}
```
Build and run the basic example on an Android device.
### What did you expect to see?
Print the correct key code value (Back == 4 on Android).
### What did you see instead?
It prints 'KeyCodeUnknown'. It seems that in 'android.go' there is a filter method `convAndroidKeyCode` which maps the Back key to unknown. I think gomobile should properly process keycodes such as Back. On Android, Menu/Back are commonly used keycodes. Maybe we could add an 'Origin' field to the `key.Event` struct, so that I can get the original (unfiltered) value.
PS: Unity maps the 'Back' key on Android to ESC. | mobile | low | Minor |
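A sketch of the proposed 'Origin' field, in Python for illustration (the names and the example mapping are assumptions, not the real x/mobile code):

```python
# Sketch: keep the raw platform keycode on the event, so keys that the
# translation table leaves unmapped (like Android's KEYCODE_BACK == 4)
# are still observable by applications.
from dataclasses import dataclass

CODE_UNKNOWN = 0
# Tiny stand-in for the real translation table:
ANDROID_TO_CODE = {111: 41}  # e.g. AKEYCODE_ESCAPE -> USB HID Escape

@dataclass
class KeyEvent:
    code: int    # translated code; CODE_UNKNOWN when there is no mapping
    origin: int  # raw platform keycode, passed through unfiltered

def conv_android_key_code(raw):
    return KeyEvent(code=ANDROID_TO_CODE.get(raw, CODE_UNKNOWN), origin=raw)

back = conv_android_key_code(4)  # AKEYCODE_BACK
print(back.code, back.origin)  # 0 4: unmapped, but still identifiable as Back
```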
350,518,667 | flutter | Support converting obj-c<->swift host apps | As it stands <code>flutter create</code> requires knowing in advance if you will use Swift in your project. That seems reasonable. Unfortunately, you also need to know in advance if any of the packages you will include also use Swift. If that is the case, you should have originally created your project with:
flutter create -i swift my_app
I got a lot of "Swift Language Version" errors, which often did not state that the error was in a plugin (I am using 12 plugins). For example, this error does not state that the problem is with a plugin:
Xcode's output:
↳
The “Swift Language Version” (SWIFT_VERSION) build setting must be set to a supported value for targets which use Swift. This setting can be set in the build settings editor.
I wasted many hours and 10 GB of downloads trying to figure out what was going on.
You can follow the saga here:
https://github.com/dlutton/flutter_tts/issues/4
Some ideas for how to improve the situation:
1. Remove <code>flutter create -i swift</code> and always add the necessary files for swift in the standard command
OR
2. Provide people with a clue that they should have created their project with the <code>-i swift</code> flag when the "Swift Language Version" error appears.
OR
3. Provide a <code>flutter modify -a swift</code> command to allow people to add support for swift in an existing project
I tried creating two starter projects, one with Swift and one without. There are thousands of differences between the two, so I think it is going to be quite a challenge to upgrade my existing project to support Swift.
OR
4. During <code>flutter create</code>, ask if this project will be used on the Mac; if so, add Swift support.
| platform-ios,tool,a: first hour,t: xcode,P3,team-ios,triaged-ios | low | Critical |
350,561,172 | godot | Wheel Texture is Flipped When Running the Game |
**Godot version:** v3.0.6.stable.official.8314054
**OS/device including version:** Windows 10 Version: 1803 OS build: 17134.191; Fedora 28
**Issue description:** When building the game and positioning the car wheels, everything looks right. However, when running the game, the texture appears to be flipped. I was expecting that when the game was running, the wheels would look the same.

**Steps to reproduce:**
Compare the texture of the wheel in the game engine editor to when it's running.
**Minimal reproduction project:**
[Wheel.zip](https://github.com/godotengine/godot/files/2288178/Wheel.zip) | bug,topic:editor,confirmed | low | Critical |
350,583,473 | go | cmd/go: clean on module uses directory name | ### What version of Go are you using (`go version`)?
go version go1.11rc1 darwin/amd64
### What operating system and processor architecture are you using (`go env`)?
```bash
GOARCH="amd64"
GOBIN=""
GOCACHE="$HOME/Library/Caches/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="$HOME/go"
GOPROXY=""
GORACE=""
GOROOT="$HOME/sdk/go1.11rc1"
GOTMPDIR=""
GOTOOLDIR="/Users/moquality/sdk/go1.11rc1/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD="/var/folders/6b/nk1h2bp93yn4_c35zvn2845c0000gq/T/tmp.kodrU4ryxD/node/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/6b/nk1h2bp93yn4_c35zvn2845c0000gq/T/go-build046724770=/tmp/go-build -gno-record-gcc-switches -fno-common"
```
### What did you do?
```bash
mkdir example
cd example
go mod init modname
echo 'package main; func main() {println("This is an example.")}' > example.go
go build -v # This produces a file called 'modname'.
go clean -x # This attempts to delete a file called 'example'.
```
### What did you expect to see?
`go clean` should use the module name, rather than the directory name, when deciding what to delete.
### What did you see instead?
`go clean` attempted to use the directory name to delete the previously created executable. | NeedsFix,modules | low | Critical |
350,584,808 | flutter | Adjacent Material widgets in Column with equal elevation cast shadows on each other | Interestingly, opposite to #12206, I would expect two Material widgets next to each other to cast shadows below, but not on each other. Instead I get this ([code here](https://gist.github.com/lucaslcode/f550e77eb549cfb779fda9751d4b59c8)):

I would also appreciate a workaround. I looked into MergeableMaterial, but it didn't seem appropriate since it needs to be unconstrained and I'm using a CustomSingleChildLayout as a PopupRoute (like PopupMenu does). | framework,f: material design,a: fidelity,has reproducible steps,P2,found in release: 3.7,found in release: 3.9,team-design,triaged-design | low | Major |
350,589,201 | pytorch | [Caffe2] Segmentation faults in multithreading Caffe2 | I wrote these scripts in reference to the Caffe2 tutorials to check if Caffe2 is thread-safe.
caffe2_trial.py:
```
#!/usr/bin/python
from caffe2.python import workspace
import threading
import caffe2_multithreading
NUM_THREADS = 2
def worker():
caffe2_multithreading.main()
def run_workers():
threads = [threading.Thread(name="Thread{}".format(i), target=worker) for i in range(NUM_THREADS)]
for thread in threads:
thread.start()
workspace.GlobalInit(['caffe2', '--caffe2_log_level=0'])
run_workers()
```
caffe2_multithreading.py:
```
from caffe2.python import workspace, model_helper
import numpy as np
import threading
def main():
data = np.random.rand(16, 100).astype(np.float32)
label = (np.random.rand(16) * 10).astype(np.int32)
workspace.FeedBlob("data", data)
workspace.FeedBlob("label", label)
m = model_helper.ModelHelper(name="my first net")
weight = m.param_init_net.XavierFill([], "fc_w", shape=[10, 100])
bias = m.param_init_net.ConstantFill([], "fc_b", shape=[10, ])
fc_l = m.net.FC(["data", "fc_w", "fc_b"], "fcl")
pred = m.net.Sigmoid(fc_l, "pred")
softmax, loss = m.net.SoftmaxWithLoss([pred, "label"], ["softmax", "loss"])
workspace.RunNetOnce(m.param_init_net)
workspace.CreateNet(m.net)
for _ in range(100):
data = np.random.rand(16, 100).astype(np.float32)
label = (np.random.rand(16) * 10).astype(np.int32)
workspace.FeedBlob("data", data)
workspace.FeedBlob("label", label)
workspace.RunNet(m.name, 10)
print("{} in {} fetched blob softmax:\n{}".format(workspace.CurrentWorkspace(), threading.current_thread().name, workspace.FetchBlob("softmax")))
print("{} in {} fetched blob loss:\n{}".format(workspace.CurrentWorkspace(), threading.current_thread().name, workspace.FetchBlob("loss")))
if __name__ == '__main__':
main()
```
When I run these, I get segfaults as below:
```
E0814 16:35:11.237641 13596 init_intrinsics_check.cc:43] CPU feature avx is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
E0814 16:35:11.237673 13596 init_intrinsics_check.cc:43] CPU feature avx2 is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
E0814 16:35:11.237675 13596 init_intrinsics_check.cc:43] CPU feature fma is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
*** Aborted at 1534278911 (unix time) try "date -d @1534278911" if you are using GNU date ***
PC: @ 0x7f98556c85b8 caffe2::math::Gemm<>()
*** SIGSEGV (@0x0) received by PID 13596 (TID 0x7f9815001700) from PID 0; stack trace: ***
@ 0x7f98ab66e390 (unknown)
@ 0x7f98556c85b8 caffe2::math::Gemm<>()
@ 0x7f9855c08c58 caffe2::FullyConnectedOp<>::DoRunWithType<>()
@ 0x7f98556d89d2 caffe2::Operator<>::Run()
@ 0x7f98557848bb caffe2::SimpleNet::Run()
@ 0x7f985578f49a caffe2::Workspace::RunNet()
@ 0x7f98564e9477 _ZZN8pybind1112cpp_function10initializeIZN6caffe26python16addGlobalMethodsERNS_6moduleEEUlRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEibE19_bJSD_ibEJNS_4nameENS_5scopeENS_7siblingEEEEvOT_PFT0_DpT1_EDpRKT2_ENUlRNS_6detail13function_callEE1_4_FUNESV_
@ 0x7f985652ba0e pybind11::cpp_function::dispatcher()
@ 0x4c30ce PyEval_EvalFrameEx
@ 0x4b9ab6 PyEval_EvalCodeEx
@ 0x4c1e6f PyEval_EvalFrameEx
@ 0x4b9ab6 PyEval_EvalCodeEx
@ 0x4c16e7 PyEval_EvalFrameEx
@ 0x4b9ab6 PyEval_EvalCodeEx
@ 0x4d55f3 (unknown)
@ 0x4a577e PyObject_Call
@ 0x4bed3d PyEval_EvalFrameEx
@ 0x4c136f PyEval_EvalFrameEx
@ 0x4c136f PyEval_EvalFrameEx
@ 0x4b9ab6 PyEval_EvalCodeEx
@ 0x4d54b9 (unknown)
@ 0x4eebee (unknown)
@ 0x4a577e PyObject_Call
@ 0x4c5e10 PyEval_CallObjectWithKeywords
@ 0x589172 (unknown)
@ 0x7f98ab6646ba start_thread
default in Thread0 fetched blob softmax:
[[0.10406707 0.0958132 0.11186256 0.10592665 0.0817344 0.09865279
0.09048746 0.08944649 0.10203679 0.11997259]
[0.10739651 0.09508494 0.10893054 0.10854136 0.0935948 0.09612396
0.10101433 0.08424021 0.09613933 0.10893402]
[0.10163862 0.09200761 0.10511011 0.109869 0.09156405 0.10474087
0.1022335 0.0703143 0.09938712 0.12313475]
[0.08985671 0.09775824 0.11547512 0.09998353 0.09359182 0.10500769
0.08895223 0.08376996 0.10898601 0.11661875]
[0.08954862 0.08843845 0.11500646 0.09689793 0.09185582 0.10589205
0.10810956 0.08333685 0.10339628 0.11751796]
[0.09852833 0.09209983 0.11036412 0.10263348 0.09042244 0.10394713
0.10160396 0.07928675 0.10388856 0.11722537]
[0.10337994 0.09972429 0.11327668 0.08736703 0.09438661 0.10448781
0.0878675 0.08105434 0.10431094 0.12414487]
[0.09534382 0.09335071 0.11011291 0.10551803 0.08672015 0.10744442
0.10062116 0.07553869 0.09846447 0.12688567]
[0.09399191 0.10102197 0.11472756 0.09188537 0.08315386 0.11782556
0.0986233 0.08276524 0.10503418 0.11097106]
[0.09374088 0.09551394 0.11585982 0.11314663 0.09346773 0.09375941
0.09588219 0.08359513 0.1058373 0.10919697]
[0.10109688 0.0993735 0.11368633 0.08764622 0.09819658 0.11559883
0.09014595 0.0805226 0.10728084 0.10645223]
[0.10267685 0.09026854 0.11018284 0.10741374 0.09042422 0.11028332
0.09176522 0.08200574 0.09634313 0.11863648]
[0.10321208 0.09730779 0.10979314 0.10604937 0.09673787 0.10101081
0.09481944 0.08262547 0.09426087 0.11418316]
[0.10170192 0.09810354 0.11086501 0.09821761 0.09967485 0.10490072
0.09211882 0.08475582 0.09108799 0.11857382]
[0.09822202 0.10342799 0.10083076 0.10264489 0.09806627 0.10075969
0.10523681 0.08029877 0.09441201 0.11610077]
[0.09419835 0.10204436 0.10571007 0.11321441 0.09489898 0.1064717
0.08542782 0.09340751 0.09008439 0.11454245]]
default in Thread0 fetched blob loss:
2.26629066467
@ 0x7f98ab39a41d clone
@ 0x0 (unknown)
Segmentation fault (core dumped)
```
Does this mean that Caffe2 is not thread-safe? I have been working hard for days just to get multithreading working in Caffe2. Thanks! | caffe2 | low | Major |
350,599,975 | nvm | nvm install script fails to detect commented out nvm source string and bash_completion in ~/.bashrc |
- Operating system and version:
macOS 10.13.6 (17G65)
- `nvm debug` output:
<details>
```sh
n/a
```
</details>
- `nvm ls` output:
<details>
```sh
n/a
```
</details>
- How did you install `nvm`? (e.g. install script in readme, Homebrew):
curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.11/install.sh | bash
- What steps did you perform?
1. `rm -rf $NVM_DIR`
1. `mkdir $NVM_DIR`
1. commented out the following 2 lines in .bashrc while debugging nvm
```
export NVM_DIR="$HOME/.nvm"
# [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
# [ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # This loads nvm bash_completion
```
1. (launch a new shell)
1. `curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.11/install.sh | bash`
- What happened?
`nvm` does not add the proper commands to .bashrc because it doesn't realize they are commented out. Consequently, `nvm` also does not run since the script that loads it is never executed.
```
t-mbp:~ taylor$ curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.11/install.sh | bash
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 12819 100 12819 0 0 87888 0 --:--:-- --:--:-- --:--:-- 88406
=> Downloading nvm from git to '/Users/taylor/.nvm'
=> Cloning into '/Users/taylor/.nvm'...
remote: Counting objects: 267, done.
remote: Compressing objects: 100% (242/242), done.
remote: Total 267 (delta 31), reused 86 (delta 15), pack-reused 0
Receiving objects: 100% (267/267), 119.47 KiB | 1.81 MiB/s, done.
Resolving deltas: 100% (31/31), done.
=> Compressing and cleaning up git repository
=> nvm source string already in /Users/taylor/.bashrc
=> bash_completion source string already in /Users/taylor/.bashrc
env: node: No such file or directory
=> Close and reopen your terminal to start using nvm or run the following to use it now:
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # This loads nvm bash_completion
t-mbp:~ taylor$ nvm
-bash: nvm: command not found
```
(Uncommenting these commands manually makes everything work as expected.)
- What did you expect to happen?
I expected `nvm` to detect that the commands are not actually in .bashrc (since they're commented out) and to add them.
Additionally, it would be nice to see a warning if the commands are there but commented out (i.e., further up in the .bashrc file from previous installs or troubleshooting).
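A sketch of comment-aware detection, in Python for illustration (the installer itself is shell, and the exact pattern it should use is a design decision): only count the source string when it appears before any `#` on its line.

```python
# Sketch: the installer's "already in profile" check should ignore
# occurrences of the source string that sit behind a '#' comment.
import re

PROFILE = '''\
export NVM_DIR="$HOME/.nvm"
# [ -s "$NVM_DIR/nvm.sh" ] && \\. "$NVM_DIR/nvm.sh" # This loads nvm
'''

# Naive detection (roughly what the installer does now): any occurrence
# counts, including the commented-out line above.
naive = re.search(r'/nvm\.sh', PROFILE) is not None

# Comment-aware detection: the match must appear before any '#' marker.
strict = any(re.match(r'^[^#]*/nvm\.sh', line) for line in PROFILE.splitlines())

print(naive, strict)  # True False -> the installer should (re)append the lines
```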
- Is there anything in any of your profile files (`.bashrc`, `.bash_profile`, `.zshrc`, etc) that modifies the `PATH`?
Yes. I'm a developer, so there's plenty of developer stuff, but nothing else related to node/nvm besides the 3 lines added by nvm itself.
| pull request wanted,installing nvm | low | Critical |
350,615,117 | vscode | Warn when reloading VSCode if Issue Reporter is open | Issue Type: <b>Feature Request</b>
When trying to isolate a bug, it is common to reload the application to see if a change (e.g., disabling an extension) has fixed things. But that currently discards any content that has been written in the Issue Reporter dialog!
A warning would be nice. Even better would be to save the content to a temporary file & re-open the issue reporter after reloading. But at least a warning, so I can copy & paste to my own temporary file.
VS Code version: Code 1.26.0 (4e9361845dc28659923a300945f84731393e210d, 2018-08-13T16:29:31.933Z)
OS version: Windows_NT x64 10.0.17134
| feature-request,issue-reporter | low | Critical |
350,627,761 | react | Boolean DOM properties coerce empty string to false, contrary to HTML standard |
This is in kind of the same space as https://github.com/facebook/react/pull/13372 and is an offshoot of my attempt to better [model React DOM props in Flow](https://github.com/facebook/flow/pull/6727).
**tl;dr:** Should React warn when the value `""` is passed into a known boolean DOM prop?
---
**Do you want to request a *feature* or report a *bug*?**
Depends on interpretation 😅 This is possibly a bug, definitely an inconsistency worth mitigating IMHO.
**What is the current behavior?**
React normalises values supplied to [known DOM boolean props](https://github.com/facebook/react/blob/69e2a0d732e1ca74f6dc5df9d0ddd0bf24373965/packages/react-dom/src/shared/DOMProperty.js#L278-L331) (e.g. `readOnly`) such that passing the empty string `""` (being falsy in JavaScript) results in the corresponding attribute being omitted from the HTML output. However, in [HTML](https://html.spec.whatwg.org/multipage/common-microsyntaxes.html#boolean-attribute), the empty string is a truthy value in this context; it's one of the values that the standard specifically allows in boolean attributes.
The above is a potential source of confusion in itself, but React 16's handling of unknown attributes gives rise to the following hypothetical scenario: a new DOM boolean attribute `foobar` is introduced, some people write JSX code that uses it as `foobar=""` (passed through to HTML, truthy), and later React adds `foobar` to its internal whitelist in a minor/patch version and starts processing it as a boolean (JS falsy, omitted from HTML); this would _technically_ be a breaking change for those people.
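To make the clash concrete, here is a small stdlib-Python sketch of the two semantics (`bool("")` here behaves like JavaScript's `Boolean("")`; the parser just demonstrates that HTML keys boolean attributes on presence, not value):

```python
# "" is falsy in the scripting language, but an HTML boolean attribute is
# true whenever it is *present* -- readonly="" is a readonly input.
from html.parser import HTMLParser

class AttrGrabber(HTMLParser):
    def handle_starttag(self, tag, attrs):
        self.attrs = dict(attrs)

p = AttrGrabber()
p.feed('<input readonly="">')

present = "readonly" in p.attrs           # HTML view: True (attribute present)
truthy_value = bool(p.attrs["readonly"])  # JS/Python view: False ("" is falsy)
print(present, truthy_value)  # True False -- the mismatch described above
```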
**If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem.**
https://codesandbox.io/s/y0pmz9149x
**What is the expected behavior?**
There is definitely a clash of expectations here at the interface of JS and HTML.
1. Coming from JS, `""` is falsy and treating it as such in a "boolean" prop is fine; from this perspective, the current behaviour is justifiable.
2. Coming from HTML, it might not be obvious that React is doing this "extra" processing and deviating from what's clearly stated in the HTML spec; from this perspective, the current behaviour is surprising.
There probably isn't justification for changing React's actual handling of `""` (not least for fear of breaking code that relies on this long-standing behaviour, see version information below), but perhaps a warning about the ambiguity is warranted, a la #13372?
Note that a warning won't fully mitigate the worst-case scenario I mentioned above (since we can't warn about a prop that we don't _know_ is a DOM boolean), but at least it would give some signal _after_ the React version update that the code might not be doing the expected thing anymore.
**Which versions of React, and which browser / OS are affected by this issue? Did this work in previous versions of React?**
Versions of React as far back as 0.14 (and probably way older) process whitelisted boolean DOM props the same way. | Type: Discussion | low | Critical |
350,677,578 | youtube-dl | This Old House: Index Error: list index out of range |
### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.08.04*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [ X] I've **verified** and **I assure** that I'm running youtube-dl **2018.08.04**
### Before submitting an *issue* make sure you have:
- [ X] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [X ] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
- [ X] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser
### What is the purpose of your *issue*?
- [ X] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
---
### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows:
```
> youtube-dl -v -u XXXXXX -p XXXXXX https://www.thisoldhouse.com/watch/dorchester-house-house-history-and-kitchen-plans
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: [u'-v', u'-u', u'PRIVATE', u'-p', u'PRIVATE', u'https://www.thisoldhouse.com/watch/dorchester-house-house-history-and-kitchen-plans']
[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2018.08.04
[debug] Python version 2.7.15 (CPython) - Linux-4.17.12-1-default-x86_64-with-glibc2.2.5
[debug] exe versions: ffmpeg 4.0.1, ffprobe 4.0.1
[debug] Proxy map: {}
[ThisOldHouse] dorchester-house-house-history-and-kitchen-plans: Downloading webpage
Traceback (most recent call last):
File "/usr/lib64/python2.7/runpy.py", line 174, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/usr/bin/youtube-dl/__main__.py", line 19, in <module>
File "/usr/bin/youtube-dl/youtube_dl/__init__.py", line 472, in main
File "/usr/bin/youtube-dl/youtube_dl/__init__.py", line 462, in _real_main
File "/usr/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 2001, in download
File "/usr/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 792, in extract_info
File "/usr/bin/youtube-dl/youtube_dl/extractor/common.py", line 502, in extract
File "/usr/bin/youtube-dl/youtube_dl/extractor/thisoldhouse.py", line 43, in _real_extract
IndexError: list index out of range
```
---
### Description of your *issue*, suggested solution and other information
This is just one example from the This Old House Insider TV archives. You can use the 30-day free trial to verify, or, if a developer contacts me privately, I can provide temporary use of my credentials.
Or give me some hints on how to fix this myself. | account-needed | low | Critical |
350,694,554 | vue-element-admin | feature: package some components into vue-cli-plugin | Right now, components such as `tabs-view`, `permission-control`, `i18n`, and so on are coupled together.
Packaging them as `vue-cli-plugin`s would let everyone modify them, and install them from the command line or `vue ui` without having to manually copy the code.
| enhancement :star:,feature,in plan | low | Minor |
350,735,886 | godot | Multiple node toggle editable children | **Godot version:** 3.1 custom
**Issue description:**
Would be really nice to be able to toggle editable children for multiple nodes at once.
| enhancement,topic:editor,usability | low | Minor |
350,794,319 | vue | <transition> does not work in UIWebView and WKWebView | ### Version
2.5.17
### Reproduction link
[https://jsfiddle.net/50wL7mdz/608590/](https://jsfiddle.net/50wL7mdz/608590/)
### Steps to reproduce
This code runs correctly in browsers and on Android, but not in UIWebView or WKWebView.
### What is expected?
The toast should slide out dynamically from the bottom.
### What is actually happening?
After 0.3s, it appears directly, without the transition.
<!-- generated by vue-issues. DO NOT REMOVE --> | need repro | low | Major |
350,795,361 | rust | Unused trait functions are preserved after linking | ## What is it about
When investigating why a binary was large, I found that it contains functions which are never called (even in release builds).
## Problem
A library defines a trait.
The trait has two functions, there's a trait implementation.
One function is called, and another function is not.
But the resulting binary contains both functions.
```
pub trait FooTrait {
    fn foo_trait_fn_proper(&self);
    fn foo_trait_fn_never_called(&self);
}

pub struct BarStruct {}

impl FooTrait for BarStruct {
    fn foo_trait_fn_proper(&self) {}

    // this function is never called, but it exists in the resulting binary
    fn foo_trait_fn_never_called(&self) {}
}

pub fn not_a_foo_trait_fn() {
}

#[inline(never)]
fn zzz(foo: &FooTrait) {
    foo.foo_trait_fn_proper();
}

#[inline(never)]
pub fn some_function() {
    let bs = BarStruct {};
    zzz(&bs);
}
```
## How to repro
1. `git clone https://github.com/stepancheg/rust-unused-not-removed`
2. Build it with `cargo build`
```
nm target/debug/mybin | grep trait_fn
0000000100001510 T __ZN52_$LT$mylib..BarStruct$u20$as$u20$mylib..FooTrait$GT$19foo_trait_fn_proper17he329ac1c72a1b169E
0000000100001520 T __ZN52_$LT$mylib..BarStruct$u20$as$u20$mylib..FooTrait$GT$25foo_trait_fn_never_called17h94443f6f107765deE
```
`foo_trait_fn_proper` function is called. `foo_trait_fn_never_called` is never called, but both functions are in the resulting binary. (However, non-trait function `not_a_foo_trait_fn` is properly removed).
(When compiling with `--release`, all functions are erased; I guess it's due to LTO or something like that, and I guess it does not always work reliably, but I cannot reproduce that in the toy example.)
350,852,425 | go | cmd/compile/internal/gc: esc.go duplicated sink paths for re-assignments | Given this code:
```go
/* 1*/ package example
/* 2*/
/* 3*/ var sink interface{}
/* 4*/
/* 5*/ func fn() {
/* 6*/ 	var x *int
/* 7*/ 	x = new(int)
/* 8*/ 	sink = x // Sink 1 at the line 8; first new(int) flows here
/* 9*/ 	x = new(int)
/*10*/ 	sink = x // Sink 2 at the line 10; second new(int) flows here
/*11*/ }
```
(The code is annotated with line numbers for convenience; a minimal example is outlined after the issue description.)
Execute the command `$ go tool compile -m=2 example.go`.
The output on tip is (important bits are in **bold**):
<pre>
example.go:5:6: can inline fn as: func() { var x *int; x = <N>; x = new(int); sink = x; x = new(int); sink = x }
example.go:8:7: x escapes to heap
example.go:8:7: from sink (assigned to top level variable) at example.go:8:7
example.go:7:9: new(int) escapes to heap
example.go:7:9: from x (assigned) at example.go:7:4
<b>example.go:7:9: from x (interface-converted) at example.go:8:7
example.go:7:9: from sink (assigned to top level variable) at example.go:8:7</b>
example.go:9:9: new(int) escapes to heap
example.go:9:9: from x (assigned) at example.go:9:4
<b>example.go:9:9: from x (interface-converted) at example.go:8:7
example.go:9:9: from sink (assigned to top level variable) at example.go:8:7</b>
example.go:10:7: x escapes to heap
example.go:10:7: from sink (assigned to top level variable) at example.go:10:7
</pre>
The expected output would report the second sink at its own location (line 10) rather than repeating the first one (line 8).
This is a consequence of how the graph is constructed and printed.
Simplified example:
```go
package example

var sink *int

func fn() {
	var x *int
	x = new(int)
	sink = x
	x = new(int)
	sink = x
}
```
And the output:
<pre>
example.go:5:6: can inline fn as: func() { var x *int; x = <N>; x = new(int); sink = x; x = new(int); sink = x }
example.go:7:9: new(int) escapes to heap
<b>example.go:7:9: from x (assigned) at example.go:7:4
example.go:7:9: from sink (assigned to top level variable) at example.go:8:7</b>
example.go:9:9: new(int) escapes to heap
<b>example.go:9:9: from x (assigned) at example.go:9:4
example.go:9:9: from sink (assigned to top level variable) at example.go:8:7</b>
</pre>
Expected output:
<pre>
example.go:5:6: can inline fn as: func() { var x *int; x = <N>; x = new(int); sink = x; x = new(int); sink = x }
example.go:7:9: new(int) escapes to heap
<b>example.go:7:9: from x (assigned) at example.go:7:4
example.go:7:9: from sink (assigned to top level variable) at example.go:8:7</b>
example.go:9:9: new(int) escapes to heap
<b>example.go:9:9: from x (assigned) at example.go:9:4
example.go:9:9: from sink (assigned to top level variable) at example.go:10:7</b>
</pre>
For that example, we have something like this:
* `sink` pseudo node has 2 source edges, both comes from `x` node.
* `x` node has 2 source edges, they come from the two different `new(int)`
```
sink.Flowsrc == { {dst=sink, src=x, where=line8}, {dst=sink, src=x, where=line10} }
x.Flowsrc == { {dst=x, src=new(int), where=line7 }, {dst=x, src=new(int), where=line9} }
```
During traversal in `escwalkBody`, both `new(int)` paths are printed using the first `sink` destination as a parent, so we get the same path twice. For the second destination nothing is printed, due to the `osrcesc` variable check that is used to avoid duplicated messages.
If `osrcesc` is removed, both paths are printed twice (so, 4 messages instead of 2, but 2 of them are correct). Currently, `osrcesc` leads to 2 messages, 1 of which is incorrect.
It's not enough to just check whether the destination node is located before the actual starting flow point, because of recursive functions:
```go
func f8(x int, y *int) *int {
	if x <= 0 {
		return y
	}
	x--
	return f8(*y, &x)
}
```
Here the `return y` is a destination endpoint, and it comes before the tracked `&x`.
I have no good ideas on how to fix this one.
Hopefully, the insights above can help someone work out a solution. | NeedsInvestigation | low | Minor |
350,874,326 | TypeScript | Codefix for implicit-any 'this' in function | **TypeScript Version:** 3.1.0-dev.20180813
**Code**
```ts
function f() {
    this.x;
}
```
**Expected behavior:**
Codefix to add `this: any` as a first parameter to `f`.
In situations where `this` is valid outside of the function (e.g. inside a class method), codefix to convert to `const f = () => { ... }` and hoist to the top of its scope if necessary.
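For illustration, here is a sketch of what the first suggested fix would produce (not actual tooling output; the body is changed to return `this.x` so the behavior is observable):

```ts
// Sketch of the code after the suggested codefix: an explicit `this: any`
// parameter silences the implicit-any error on `this`.
function f(this: any) {
    return this.x; // the original body was just `this.x;`
}

// `this` must then be supplied explicitly at the call site:
const value = f.call({ x: 1 });
console.log(value); // 1
```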
**Actual behavior:**
No codefix. | Suggestion,Domain: Quick Fixes,Experience Enhancement | low | Minor |
350,906,081 | flutter | ListTile takes taps when onTap is null but onLongPress is defined | If you have a `ListTile` and give `onLongPress` a function, but leave `onTap` blank, or even explicitly set it to null, the `ListTile` will still take taps.
This might seem fine at first, but when the `ListTile` is in a `ListView` wrapped with a `GestureDetector` for the purpose of dismissing the keyboard when tapped, it lets the useless `onTap` go through. This results in poor UX, and I cannot think of a reason why `ListTile` should show the tap animation if there is no action there.
The workaround is to give each `ListTile` an `onTap` callback that removes focus from the keyboard, but this creates UI jank by showing the `onTap` splash when dismissing the keyboard.
Cheers
Luke | framework,f: material design,a: quality,has reproducible steps,P2,found in release: 3.3,found in release: 3.6,team-design,triaged-design | low | Major |
350,928,915 | vscode | add a context menu to entities listed in breadcrumbs | For folders listed in breadcrumbs, I wish I could right-click and create a new file from there.

| feature-request,breadcrumbs | medium | Critical |
350,929,162 | TypeScript | Refactor to extract existing function to outer scope | **TypeScript Version:** 3.1.0-dev.20180813
**Code**
```ts
function a(x: number) {
    function b(y: number) {
        return x + y;
    }
    return b(1);
}
```
**Expected behavior:**
Refactor on `b` to convert to:
```ts
function a(x: number) {
    return b(1, x);
}

function b(y: number, x: number) {
    return x + y;
}
```
**Actual behavior:**
No such refactor.
We already have the ability to refactor *code* to an outer scope. So a workaround is to highlight the entire body of `b` and refactor to a function in global scope, then delete `b` and rename the refactored function to `b`, then manually update call sites. | Suggestion,In Discussion,Domain: Refactorings | low | Minor |
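The manual workaround described above can be sketched as follows; the helper name `extracted` is hypothetical, standing in for whatever name the extraction actually produces:

```ts
function a(x: number) {
    function b(y: number) {
        return extracted(x, y);
    }
    return b(1);
}

// Hypothetical name for the function produced by the existing
// "extract to outer scope" refactor on b's body.
function extracted(x: number, y: number) {
    return x + y;
}
```

Deleting `b`, renaming `extracted` to `b`, and updating call sites by hand then yields the desired result.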
350,942,783 | go | cmd/compile: provide parameter escape information in DWARF | Now that we have function calls in debuggers (#21678), the next step is to make them safer. When the compiler generates a function call, it knows which parameters may leak to the heap, and makes sure that those parameters are heap-allocated. When a debugger forms a function call, it needs that same information so that it can do safety checks.
Just as a motivating example, consider:
```
var theThing *Thing

func SaveThing(t *Thing) {
	theThing = t
}

func MakeAThing() Thing {
	var t Thing
	...
	return t
}

func DoTheThing() {
	t := MakeAThing()
	...
}
```
If we break in `DoTheThing` and try to pass `t` to `SaveThing`, the debugger should be able to warn the user that they may be setting themselves up for a crash by passing a stack-allocated variable to a function that could leak it to the heap. (This isn't always dangerous, since many parameters that "escape" don't actually get saved anywhere. Some human judgment is required.)
The most obvious implementation is to have the compiler add a bit to each function parameter's DWARF indicating whether it escapes or not. I think this should be as simple as copying `(*gc.Node).Noescape()` to a custom DWARF attribute.
cc @dr2chase, @aarzilli, @derekparker | Debugging,compiler/runtime | low | Critical |
350,954,177 | go | cmd/go: go modules ignores go.mod in semver repos not using semantic import versioning | When trying to import a package that has a semver tag `>= v2.0.0` (in this case https://github.com/gobuffalo/pop, whose latest tag is `v4.6.4`), Go modules skips over versions that have `go.mod` files.
In this example, Go modules will always return `v4.5.9`, which is the highest version that does not have a `go.mod`. Because all versions above this have `go.mod` files, Go modules refuses to pick them, which leads to strange results.
Setting a version explicitly works, but letting Go modules find the version always fails.
It would appear that this is the line throwing away the good releases https://github.com/golang/go/blob/master/src/cmd/go/internal/modfetch/coderepo.go#L137
A repo that shows the problem can be found here: https://github.com/gobuffalo/pop-vgo
### What version of Go are you using (`go version`)?
`go version go1.11rc1 darwin/amd64`
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
```
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/markbates/Library/Caches/go-build"
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/markbates/Dropbox/go"
GORACE=""
GOROOT="/usr/local/go"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/6m/vw2ck7mj32z5f63wpgfw5qk80000gn/T/go-build858485480=/tmp/go-build -gno-record-gcc-switches -fno-common"
```
### What did you do?
```
export GO111MODULE=on
go1.11rc1 mod init
cat go.mod
go1.11rc1 build -v .
go1.11rc1 mod tidy -v
cat go.mod | grep pop
```
### What did you expect to see?
```
module pop-vgo
require (
github.com/fsnotify/fsnotify v1.4.7 // indirect
github.com/gobuffalo/pop v0.0.0-20180810203029-9f8bf0c11920
github.com/golang/protobuf v1.1.0 // indirect
github.com/hpcloud/tail v1.0.0 // indirect
github.com/inconshreveable/mousetrap v1.0.0 // indirect
github.com/kr/pretty v0.1.0 // indirect
github.com/onsi/ginkgo v1.6.0 // indirect
github.com/onsi/gomega v1.4.1 // indirect
github.com/shurcooL/go v0.0.0-20180423040247-9e1955d9fb6e // indirect
github.com/shurcooL/go-goon v0.0.0-20170922171312-37c2f522c041 // indirect
golang.org/x/sys v0.0.0-20180815093151-14742f9018cd // indirect
golang.org/x/text v0.3.0 // indirect
google.golang.org/appengine v1.1.0 // indirect
gopkg.in/fsnotify.v1 v1.4.7 // indirect
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 // indirect
)
```
### What did you see instead?
```
module github.com/gobuffalo/pop-vgo
require (
github.com/cockroachdb/cockroach-go v0.0.0-20180212155653-59c0560478b7 // indirect
github.com/fatih/color v1.7.0 // indirect
github.com/fsnotify/fsnotify v1.4.7 // indirect
github.com/go-sql-driver/mysql v1.4.0 // indirect
github.com/gobuffalo/makr v1.1.2 // indirect
github.com/gobuffalo/packr v1.13.2 // indirect
github.com/gobuffalo/pop v4.5.9+incompatible
github.com/gobuffalo/uuid v2.0.0+incompatible // indirect
github.com/gobuffalo/validate v2.0.0+incompatible // indirect
github.com/golang/protobuf v1.1.0 // indirect
github.com/hpcloud/tail v1.0.0 // indirect
github.com/inconshreveable/mousetrap v1.0.0 // indirect
github.com/jmoiron/sqlx v0.0.0-20180614180643-0dae4fefe7c0 // indirect
github.com/lib/pq v0.0.0-20180523175426-90697d60dd84 // indirect
github.com/markbates/going v1.0.1 // indirect
github.com/mattn/anko v0.0.6 // indirect
github.com/mattn/go-colorable v0.0.9 // indirect
github.com/mattn/go-isatty v0.0.3 // indirect
github.com/mattn/go-sqlite3 v1.9.0 // indirect
github.com/onsi/ginkgo v1.6.0 // indirect
github.com/onsi/gomega v1.4.1 // indirect
github.com/serenize/snaker v0.0.0-20171204205717-a683aaf2d516 // indirect
golang.org/x/sys v0.0.0-20180815093151-14742f9018cd // indirect
golang.org/x/text v0.3.0 // indirect
google.golang.org/appengine v1.1.0 // indirect
gopkg.in/fsnotify.v1 v1.4.7 // indirect
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 // indirect
gopkg.in/yaml.v2 v2.2.1 // indirect
)
``` | Documentation,NeedsInvestigation,modules | medium | Critical |
350,987,256 | opencv | Build failure using CUDA compute 7.0 | ##### System information (version)
- OpenCV => 3.4.2 and 4.0 (master)
- Operating System / Platform =>Windows 10 64 Bit
- Compiler =>Visual Studio 2017 + CUDA 9.2
##### Detailed description
I see that in the latest cuda cmake file, Volta/7.0 have been added, but every time I try to run a build I get:
`Error LNK2019 unresolved external symbol`
`__cudaRegisterLinkedBinary_54_tmpxft_00003c60_00000000_14_gpu_mat_compute_70_cpp1_ii_71482d89 referenced in function "void __cdecl __sti____cudaRegisterAll(void)" (?__sti____cudaRegisterAll@@YAXXZ) opencv_core C:\OCV40\modules\core\cuda_compile_generated_gpu_mat.cu.obj`
So it seems to be unhappy about compute 7.0. Is it expected to be supported at the moment, or is this failure the expected behavior? Thanks
##### Steps to reproduce
| priority: low,category: build/install,category: gpu/cuda (contrib) | low | Critical |
350,998,013 | rust | #[macro use] should suggest #[macro_use] and similar | I swapped an underscore for a space while typing out the attribute. It'd be nice if the error message said that instead of the mess it shows here:
```
error: expected identifier, found reserved keyword `macro`
--> src/bin/factoid.rs:3:3
|
3 | #[macro use]
| ^^^^^ expected identifier, found reserved keyword
error: cannot find macro `setup_panic!` in this scope
--> src/bin/factoid.rs:13:5
|
13 | setup_panic!(Metadata {
| ^^^^^^^^^^^
error[E0658]: The attribute `r#macro` is currently unknown to the compiler and may have meaning added to it in the future (see issue #29642)
--> src/bin/factoid.rs:3:1
|
3 | #[macro use]
| ^^^^^^^^^^^^
error: expected one of `(` or `=`, found `use`
--> src/bin/factoid.rs:3:9
|
1 | extern crate clap;
| - expected one of `(` or `=` here
2 | extern crate exitcode;
3 | #[macro use]
| ^^^ unexpected token
```
This should also happen for all the other attributes with underscores in them:
* `crate_name`
* `crate_type`
* `no_builtins`
* `no_main`
* `no_start`
* `no_std`
* `recursion_limit`
* `windows_subsystem`
* `no_implicit_prelude`
* `link_args`
* `linked_from`
* `link_name`
* `macro_use`
* `macro_reexport`
* `macro_export`
* `no_link`
* `export_name`
* `no_mangle`
* `link_section`
* `cfg_attr`
* `must_use` | C-enhancement,A-diagnostics,T-compiler,A-suggestion-diagnostics | low | Critical |