id | repo | title | body | labels | priority | severity
---|---|---|---|---|---|---|
369,287,185 | angular | data- prefix gets stripped from bindings such as <div data-id="{{1}}"> | <!--
PLEASE HELP US PROCESS GITHUB ISSUES FASTER BY PROVIDING THE FOLLOWING INFORMATION.
ISSUES MISSING IMPORTANT INFORMATION MAY BE CLOSED WITHOUT INVESTIGATION.
-->
## I'm submitting a...
<!-- Check one of the following options with "x" -->
<pre><code>
[ ] Regression (a behavior that used to work and stopped working in a new release)
[x] Bug report <!-- Please search GitHub for a similar issue or PR before submitting -->
[ ] Performance issue
[ ] Feature request
[ ] Documentation issue or request
[ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question
[x] Other... Please describe: embarrassment
</code></pre>
## Current behavior
```html
<button [attr.data-id]="'btn-' + type">action 1</button>
<button data-id="btn-{{type}}">action 2</button>
```

## Expected behavior
<!-- Describe what the desired behavior would be. -->
Both bindings should exhibit the same behavior.
## Minimal reproduction of the problem with instructions
https://stackblitz.com/edit/angular-w2nsvr?file=src/app/app.component.html
## Environment
<pre><code>
Angular version: 6 | type: bug/fix,breaking changes,freq1: low,area: core,state: confirmed,core: basic template syntax,core: binding & interpolation,P3 | low | Critical |
369,291,746 | rust | Can't cast `self as &Trait` in trait default method | ```rust
trait Blah {
    fn test(&self) {
        self as &Blah;
    }
}
```
```
error[E0277]: the size for values of type `Self` cannot be known at compilation time
--> src/lib.rs:3:9
|
3 | self as &Blah;
| ^^^^ doesn't have a size known at compile-time
|
= help: the trait `std::marker::Sized` is not implemented for `Self`
= note: to learn more, visit <https://doc.rust-lang.org/book/second-edition/ch19-04-advanced-types.html#dynamically-sized-types-and-sized>
= help: consider adding a `where Self: std::marker::Sized` bound
= note: required for the cast to the object type `dyn Blah`
```
[(playground)](https://play.rust-lang.org/?gist=68b51b8defb6bddf8d1c0b6ec1eafa2d&version=stable&mode=debug&edition=2015)
Well, `self` is usually a thin pointer, though it may be a fat pointer if, e.g., it's a slice. However, `Self` is concrete. `Self` may not be sized, but `&Self` certainly is. `mem::size_of` is a const fn, so [this code](https://play.rust-lang.org/?gist=ba7372f0d844c49ea3f4b338180f67d3&version=stable&mode=debug&edition=2015) proves that the size is known at compile time! Well, it may be a matter of precisely when during compilation. Anyway, that code should totally compile.
## Workaround 1
```rust
trait Blah {
    fn as_blah(&self) -> &Blah;
    …
}
```
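For illustration only (not part of the original report), here is a minimal way Workaround 1 can be completed, assuming every implementer just returns `self` (2015-edition style, to match the snippets above):
```rust
trait Blah {
    // Default methods can use the trait object returned by `as_blah`
    // wherever `self as &Blah` was wanted.
    fn test(&self) {
        let _this: &Blah = self.as_blah();
    }

    fn as_blah(&self) -> &Blah;
}

struct Foo;

impl Blah for Foo {
    // The concrete type is `Sized` here, so the unsized coercion is allowed.
    fn as_blah(&self) -> &Blah {
        self
    }
}
```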
## Workaround 2
```rust
trait Blah {
    fn test(&self, this: &Blah) {
        …
    }
}
```
## Workaround 3
Manually construct the reference, somehow. | C-enhancement,A-trait-system,T-compiler | low | Critical |
369,299,487 | TypeScript | Feature Request: "extends oneof" generic constraint; allows for narrowing type parameters | ## Search Terms
* generic bounds
* narrow generics
* extends oneof
<!-- List of keywords you searched for before creating this issue. Write them down here so that others can find this suggestion more easily -->
## Suggestion
Add a new kind of generic type bound, similar to `T extends C` but of the form `T extends oneof(A, B, C)`.
(Please bikeshed the semantics, not the syntax. I know this version is not great to write, but it *is* backwards compatible.)
Similar to `T extends C`, when the type parameter is determined (either explicitly or through inference), the compiler would check that the constraint holds. `T extends oneof(A, B, C)` means that *at least one of* `T extends A`, `T extends B`, `T extends C` holds. So, for example, in a function
```ts
function smallest<T extends oneof(string, number)>(x: T[]): T {
  if (x.length == 0) {
    throw new Error('empty');
  }
  return x.slice(0).sort()[0];
}
```
Just like today, these would be legal:
```ts
smallest<number>([1, 2, 3]); // legal
smallest<string>(["a", "b", "c"]); // legal
smallest([1, 2, 3]); // legal
smallest(["a", "b", "c"]); // legal
```
But (unlike using `extends`) the following would be **illegal**:
```ts
smallest<string | number>(["a", "b", "c"]); // illegal
// string|number does not extend string
// string|number does not extend number
// Therefore, string|number is not "in" string|number, so the call fails (at compile time).
// Similarly, these are illegal:
smallest<string | number>([1, 2, 3]); // illegal
smallest([1, "a", 3]); // illegal
```
## Use Cases / Examples
What this would open up is the ability to narrow *generic parameters* by putting type guards on values inside functions:
```ts
function smallestString(xs: string[]): string {
  ... // e.g. a natural-sort smallest string function
}
function smallestNumber(x: number[]): number {
  ... // e.g. a sort that compares numbers correctly instead of lexicographically
}
function smallest<T extends oneof(string, number)>(x: T[]): T {
  if (x.length == 0) {
    throw new Error('empty');
  }
  const first = x[0]; // first has type "T"
  if (typeof first == "string") {
    // it is either the case that T extends string or that T extends number.
    // typeof (anything extending number) is not "string", so we know at this point that
    // T extends string only.
    return smallestString(x); // legal
  }
  // at this point, we know that if T extended string, it would have exited the first if.
  // therefore, we can safely call
  return smallestNumber(x);
}
```
This can't be safely done using `extends`, since looking at one item (even if there's *only* one item) can't tell you anything about `T`; only about that object's dynamic type.
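To make the shortcoming concrete, a small illustration of my own (roughly today's behavior with a plain union bound; the error lines are left commented out):
```ts
declare function smallestString(xs: string[]): string;

function smallestToday<T extends string | number>(x: T[]): T {
  const first = x[0];
  if (typeof first === "string") {
    // `first` narrows to `T & string`, but `T` itself does not, so `x` is still `T[]`
    // and may contain numbers. Neither of these lines would type-check:
    // return smallestString(x);
    // const onlyStrings: string[] = x;
  }
  return x[0];
}
```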
## Unresolved: Syntax
The actual syntax isn't really important to me; I just would like to be able to get narrowing of generic types in a principled way.
(EDIT:)
Note: despite the initial appearance, `oneof(...)` is not a type operator. The abstract syntax parse would be more like `T extends_oneof(A, B, C)`; the `oneof` and the `extends` are not separate.
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript / JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. new expression-level syntax)
(any solution will reserve new syntax, so it's not a breaking change, and it only affects flow / type narrowing so no runtime component is needed)
| Suggestion,In Discussion | high | Critical |
369,309,598 | rust | Add filtering to `rustc_on_unimplemented` to avoid misleading suggestion | After #54946, the code `let x = [0..10]; for _ in x {}` will cause the following output:
```
error[E0277]: `[std::ops::Range<{integer}>; 1]` is not an iterator
--> $DIR/array-of-ranges.rs:11:14
|
LL | for _ in array_of_range {}
| ^^^^^^^^^^^^^^ if you meant to iterate between two values, remove the square brackets
|
= help: the trait `std::iter::Iterator` is not implemented for `[std::ops::Range<{integer}>; 1]`
= note: `[start..end]` is an array of one `Range`; you might have meant to have a `Range` without the brackets: `start..end`
= note: required by `std::iter::IntoIterator::into_iter`
```
Add a way to identify this case to `rustc_on_unimplemented`, in order to avoid giving this misleading/incorrect diagnostic. | C-enhancement,A-diagnostics,T-compiler,F-on_unimplemented | low | Critical |
369,320,022 | terminal | WINDOW_BUFFER_SIZE_EVENT generated during window scrolling | Windows Version 10.0.17763.1
> [SetConsoleWindowInfo](https://docs.microsoft.com/en-us/windows/console/setconsolewindowinfo) can be used to scroll the contents of the console screen buffer by shifting the position of the window rectangle without changing its size.
Starting from Windows 10 1709 (FCU) such scrolling generates a WINDOW_BUFFER_SIZE_EVENT **even though the console buffer size remains unchanged**.
This breaks our application behaviour and does not make sense for the following reasons:
- The [documentation](https://docs.microsoft.com/en-us/windows/console/window-buffer-size-record-str) explicitly says that WINDOW_BUFFER_SIZE_RECORD "describes *a change in the size* of the console screen buffer", but there's no change in this case.
- The event is generated only if the contents of the console screen buffer are scrolled via the SetConsoleWindowInfo API, but it's useless - the application already knows that the console is being scrolled because the scrolling is initiated by the application itself.
- The event is **not** generated when the user moves the scrollbar manually, so the application does not know that the console is being scrolled in that case.
- It does not happen in Legacy mode and never happened before for 20+ years.
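A minimal sketch of my own of the sequence that triggers the spurious event (illustrative only; it assumes standard console handles and window-input reporting enabled):
```c
#include <windows.h>
#include <stdio.h>

int main(void) {
    HANDLE out = GetStdHandle(STD_OUTPUT_HANDLE);
    HANDLE in  = GetStdHandle(STD_INPUT_HANDLE);

    DWORD mode;
    GetConsoleMode(in, &mode);
    SetConsoleMode(in, mode | ENABLE_WINDOW_INPUT); /* report window events */

    CONSOLE_SCREEN_BUFFER_INFO info;
    GetConsoleScreenBufferInfo(out, &info);
    SMALL_RECT win = info.srWindow;
    win.Top += 1; win.Bottom += 1;                  /* shift the window, keep its size */
    SetConsoleWindowInfo(out, TRUE, &win);          /* scroll without resizing the buffer */

    INPUT_RECORD rec; DWORD n;
    while (ReadConsoleInput(in, &rec, 1, &n)) {
        if (rec.EventType == WINDOW_BUFFER_SIZE_EVENT) {
            printf("WINDOW_BUFFER_SIZE_EVENT: %d x %d\n",
                   rec.Event.WindowBufferSizeEvent.dwSize.X,
                   rec.Event.WindowBufferSizeEvent.dwSize.Y);
            break;
        }
    }
    return 0;
}
```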
A minimal project to reproduce the issue attached.
[BufferSizeEventBug.zip](https://github.com/Microsoft/console/files/2470845/BufferSizeEventBug.zip)
| Work-Item,Issue-Feature,Product-Conhost,Area-Server | medium | Critical |
369,329,359 | vscode | Can't Install local user update due to `\\bin` folder being used by another process | 
> There was an error while deleting a directory: `%LOCALAPPDATA%\Programs\Microsoft VS Code\bin`: the process cannot access the file because it is being used by another process…
- VSCode Version: 1.27.2 (user setup)
- OS Version: Windows 10
## Steps to Reproduce:
1. when an update is available, click **Install Update**
**Does this issue occur when all extensions are disabled?:** cannot try this because installation is disabled/destroyed due to incomplete setup
**Workaround:** will go back to system installs instead of [local user installs](https://code.visualstudio.com/updates/v1_26#_user-setup-for-windows). | bug,install-update,windows | high | Critical |
369,343,061 | TypeScript | Suggestion diagnostics show in untyped JS code when regular diagnostics don't | **TypeScript Version:** 3.2.0-dev.20181011
**Code**
```ts
"".toUperKase();
const b = require("./b");
```
**Expected behavior:**
If we're not showing regular diagnostics, we should probably not show any suggestions either.
**Actual behavior:**
`"".toUperKase()` is fine. `require` is a real problem that we need to let the user know about right away!
 | Suggestion,In Discussion,Domain: Error Messages,Domain: TSServer | low | Minor |
369,385,245 | opencv | Poor precision on RGB to L*a*b* color conversion | The precision and numerical stability of RGB to L\*a\*b\* color space conversions (and back) is poor when the pixel luminance values are low. For example:
    import cv2
    import numpy as np

    rgbimg = np.array([[[0.001, 0.001, 0.001]]], np.float32)
    print rgbimg  # Outputs [[[0.001 0.001 0.001]]]
    labimg = cv2.cvtColor(rgbimg, cv2.COLOR_RGB2LAB)
    print labimg  # Outputs [[[0. 0. 0.]]]
    rgbimg = cv2.cvtColor(labimg, cv2.COLOR_LAB2RGB)
    print rgbimg  # Outputs [[[0. 0. 0.]]]

    rgbimg = np.array([[[1, 1, 1]]], np.uint8)
    print rgbimg  # Outputs [[[1 1 1]]]
    labimg = cv2.cvtColor(rgbimg, cv2.COLOR_RGB2LAB)
    print labimg  # Outputs [[[ 1 128 128]]]
    rgbimg = cv2.cvtColor(labimg, cv2.COLOR_LAB2RGB)
    print rgbimg  # Outputs [[[2 2 2]]]
Other image processing libraries maintain much better precision in these conversions.
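For a rough reference point (my own illustration; it assumes the float input is treated as linear RGB relative to the reference white, which may not match cvtColor's exact pipeline), the CIE formula gives a small but clearly non-zero L* for a grey level of 0.001:
```python
# CIE L* for a relative luminance Y in [0, 1]; eps and kappa are the standard CIE constants.
def lstar(Y, eps=216 / 24389.0, kappa=24389 / 27.0):
    if Y > eps:
        return 116 * Y ** (1 / 3.0) - 16
    return kappa * Y

print(lstar(0.001))  # ~0.903, whereas the float32 round trip above collapses to 0.0
```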
| category: imgproc,priority: low | low | Minor |
369,400,171 | tensorflow | Feature Request: GPUOptions for Go binding | Current implementation of Go binding can not specify options.
GPUOptions struct is in internal package. And `go generate` doesn't work for protobuf directory. So we can't specify GPUOptions for `NewSession`.
| stat:contribution welcome,type:feature,good first issue | high | Critical |
369,432,343 | node | Warn on potentially insecure inspector options (--inspect=0.0.0.0) | Extracted from #21774.
The inspector is bound to 127.0.0.1 by default, but the suggestion to launch it with `--inspect=0.0.0.0` is widely copy-pasted without a proper understanding of what it does. I've observed that personally in chats; also see [google](https://www.google.ca/search?q="--inspect%3D0.0.0.0").
Binding the inspector to 0.0.0.0 (in fact, to anything but the loopback interface IP) allows RCE, which could be catastrophic in cases where the IP is public. Users should be informed of that.
A warning printed to the console (with corresponding documentation change) should at least somewhat mitigate this.
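A minimal sketch of the kind of check being proposed (illustrative only, not actual Node.js source; the function name is made up):
```js
function warnIfPublicInspectorHost(host) {
  // Anything other than the loopback interface is reachable from other machines.
  const isLoopback = host === '127.0.0.1' || host === '::1' || host === 'localhost';
  if (!isLoopback) {
    process.emitWarning(
      `Binding the inspector to ${host} exposes a remote code execution surface ` +
      'to the network; prefer the default 127.0.0.1.',
      'SecurityWarning'
    );
  }
}

warnIfPublicInspectorHost('0.0.0.0'); // would print the warning
```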
Note: the doc change and the c++ change can come separately. | help wanted,doc,security,inspector | low | Minor |
369,473,845 | puppeteer | When I try to print `msg.text` in 'console' event with type 'error', I got `JSHandle@error` | <!--
STEP 1: Are you in the right place?
- For general technical questions or "how to" guidance, please search StackOverflow for questions tagged "puppeteer" or create a new post.
https://stackoverflow.com/questions/tagged/puppeteer
- For issues or feature requests related to the DevTools Protocol (https://chromedevtools.github.io/devtools-protocol/), file an issue there:
https://github.com/ChromeDevTools/devtools-protocol/issues/new.
- Problem in Headless Chrome? File an issue against Chromium's issue tracker:
https://bugs.chromium.org/p/chromium/issues/entry?components=Internals%3EHeadless&blocking=705916
For issues, feature requests, or setup troubles with Puppeteer, file an issue right here!
-->
### Steps to reproduce
**Tell us about your environment:**
* Puppeteer version: 1.9.0
* Platform / OS version: macos
* URLs (if applicable):
* Node.js version: 8.11.3
**What steps will reproduce the problem?**
_Please include code that reproduces the issue._
1. Simply add this code to your page
```javascript
try {
  // try to print a value that doesn't exist,
  // which will throw an error
  console.log(a)
} catch (e) {
  // catch the error and print it with `console.error`
  console.error(e)
}
```
2. puppeteer script:
```javascript
...
page.on('console', msg => {
  console.log(msg.text())
})
...
```
**What is the expected result?**
The exact error object, like `ReferenceError: a is not defined...`
**What happens instead?**
It prints `JSHandle@error`
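A possible workaround sketch (mine, for illustration; it relies on the Puppeteer 1.x `msg.args()` and `JSHandle.executionContext()` APIs) that unwraps Error handles before printing:
```javascript
page.on('console', async msg => {
  for (const arg of msg.args()) {
    // Error objects serialize poorly via jsonValue(), so pull the message out
    // inside the page's execution context instead.
    const text = await arg.executionContext().evaluate(
      obj => (obj instanceof Error ? obj.stack || obj.message : String(obj)),
      arg
    );
    console.log(text);
  }
});
```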
| feature,chromium,confirmed | medium | Critical |
369,494,510 | TypeScript | Improve `Array.from(tuple)` and `[...tuple]` | <!-- π¨ STOP π¨ π¦π§π’π£ π¨ πΊπ»πΆπ· π¨
Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Please read the FAQ first, especially the "Common Feature Requests" section.
-->
## Suggestion
<!-- A summary of what you'd like to see added or changed -->
`Array.from(tuple)` and `[...tuple]` should preserve individual types that made up tuple.
## Proposal
### `Array.from` (Simple)
Add this overload to `Array.from`:
```typescript
interface ArrayConstructor {
  from<T extends any[]> (array: T): T
}
```
[demo 1](https://www.typescriptlang.org/play/#src=interface%20ArrayConstructor%20%7B%0D%0A%20%20%20%20from%3CT%20extends%20any%5B%5D%3E%20(array%3A%20T)%3A%20T%0D%0A%7D%0D%0A%0D%0Aconst%20a%3A%20%5B0%2C%201%2C%202%5D%20%3D%20%5B0%2C%201%2C%202%5D%0D%0Aconst%20b%20%3D%20Array.from(a)%20%2F%2F%20Type%3A%20%5B0%2C%201%2C%202%5D)
**Caveats:**
* The above definition preserves everything, including unrelated properties that do not belong to `Array.prototype`, whilst the actual `Array.from` discards them (for instance, if the input has `foo: 'bar'`, the output array will also have `foo: 'bar'`).
### `Array.from` (Complete)
Fix above caveats.
```typescript
interface ArrayConstructor {
  from<T extends any[]> (array: T): CloneArray<T>
}

type CloneArray<T extends any[]> = {
  [i in number & keyof T]: T[i]
} & {
  length: T['length']
} & any[]
```
[demo 2](https://www.typescriptlang.org/play/#src=interface%20ArrayConstructor%20%7B%0D%0A%20%20%20%20from%3CT%20extends%20any%5B%5D%3E%20(array%3A%20T)%3A%20CloneArray%3CT%3E%0D%0A%7D%0D%0A%0D%0Atype%20CloneArray%3CT%20extends%20any%5B%5D%3E%20%3D%20%7B%0D%0A%20%20%20%20%5Bi%20in%20number%20%26%20keyof%20T%5D%3A%20T%5Bi%5D%0D%0A%7D%20%26%20%7B%0D%0A%20%20%20%20length%3A%20T%5B'length'%5D%0D%0A%7D%20%26%20any%5B%5D%0D%0A%0D%0Aconst%20a%3A%20%5B0%2C%201%2C%202%5D%20%3D%20%5B0%2C%201%2C%202%5D%0D%0Aconst%20b%20%3D%20Array.from(a)%20%2F%2F%20Type%3A%20CloneArray%3C%5B0%2C%201%2C%202%5D%3E%0D%0Aconst%20l%20%3D%20a.length%20%2F%2F%20Type%3A%203%0D%0Aconst%20%5Bx%2C%20y%2C%20z%5D%20%3D%20a%20%2F%2F%20Type%3A%20%5B0%2C%201%2C%202%5D%0D%0Aa.map(x%20%3D%3E%20x)%20%2F%2F%20(0%20%7C%201%20%7C%202)%5B%5D)
### Spread operator
```typescript
declare const tuple: [0, 1, 2] & { foo: 'bar' }
// $ExpectType [string, string, 0, 1, 2, string, string]
const clone = ['a', 'b', ...tuple, 'c', 'd']
```
Note that `typeof clone` does not contain `{ foo: 'bar' }`.
## Use Cases
<!--
What do you want to use this for?
What shortcomings exist with current approaches?
-->
* Clone a tuple without losing type information.
## Examples
<!-- Show how this would be used and what the behavior would be -->
### Clone a tuple
```typescript
const a: [0, 1, 2] = [0, 1, 2]
// $ExpectType [0, 1, 2]
const b = Array.from(a)
```
### Clone a generic tuple
```typescript
function clone<T extends any[]> (a: T): T {
  return Array.from(a)
}
```
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript / JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. new expression-level syntax)
| Suggestion,In Discussion,Domain: lib.d.ts | low | Critical |
369,521,758 | react | head > meta > content escaping issue | <!--
Note: if the issue is about documentation or the website, please file it at:
https://github.com/reactjs/reactjs.org/issues/new
-->
**Do you want to request a *feature* or report a *bug*?**
I'm guessing it's a bug.
**What is the current behavior?**
The following source code,
```jsx
<meta property="og:image" content="https://onepixel.imgix.net/60366a63-1ac8-9626-1df8-9d8d5e5e2601_1000.jpg?auto=format&q=80&mark=watermark%2Fcenter-v5.png&markalign=center%2Cmiddle&h=500&w=500&s=60ec785603e5f71fe944f76b4dacef08" />
```
, is being escaped once server side rendered:
```jsx
<meta property="og:image" content="https://onepixel.imgix.net/60366a63-1ac8-9626-1df8-9d8d5e5e2601_1000.jpg?auto=format&amp;q=80&amp;mark=watermark%2Fcenter-v5.png&amp;markalign=center%2Cmiddle&amp;h=500&amp;w=500&amp;s=60ec785603e5f71fe944f76b4dacef08"/>
```
You can reproduce the behavior like this:
```jsx
const React = require("react");
const ReactDOMServer = require("react-dom/server");
const http = require("http");

const doc = React.createElement("html", {
  children: [
    React.createElement("head", {
      children: React.createElement("meta", {
        property: "og:image",
        content:
          "https://onepixel.imgix.net/60366a63-1ac8-9626-1df8-9d8d5e5e2601_1000.jpg?auto=format&q=80&mark=watermark%2Fcenter-v5.png&markalign=center%2Cmiddle&h=500&w=500&s=60ec785603e5f71fe944f76b4dacef08"
      })
    }),
    React.createElement("body", { children: "og:image" })
  ]
});

// create a server object:
http
  .createServer(function(req, res) {
    res.write("<!DOCTYPE html>" + ReactDOMServer.renderToStaticMarkup(doc)); // write a response to the client
    res.end(); // end the response
  })
  .listen(8080); // the server object listens on port 8080
```
editor: https://codesandbox.io/s/my299jk7qp
output : https://my299jk7qp.sse.codesandbox.io/
**What is the expected behavior?**
I would expect the content not to be escaped. It's related to https://github.com/zeit/next.js/issues/2006#issuecomment-355917446.
I'm using the `og:image` meta element so my pages can have nice previews within Facebook :).

**Which versions of React, and which browser / OS are affected by this issue? Did this work in previous versions of React?**
16.5.2 | Component: Server Rendering,Type: Needs Investigation | high | Critical |
369,567,305 | go | cmd/go: allow replacement version to be omitted if the target module has a required version | Currently you must add a version when entering a non-filesystem (remote) replacement for a module. A foolish attempt to put `replace github.com/fsnotify/fsnotify => gitmirror.corp.xyz.com/fsnotify/fsnotify` provokes a rebuke from `go mod verify`:
`go.mod:42: replacement module without version must be directory path (rooted or starting with ./ or ../)`
- so you have to fix it to be: `replace github.com/fsnotify/fsnotify => gitmirror.corp.xyz.com/fsnotify/fsnotify v1.4.7`.
However, there _is_ already a version specification for that package in `go.mod` the `require` statement, for example:
`require github.com/fsnotify/fsnotify v1.4.7`
This seems to be the reasonable default value for the missing version in the `replace` directive for the same package: "if no replacement version is given, use the same version as in the `require` directive for that specific package".
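To illustrate (the mirror URL is hypothetical, and the shorthand on the last line is the proposed behavior, not something the current toolchain accepts):
```
require github.com/fsnotify/fsnotify v1.4.7

// what must be written today:
replace github.com/fsnotify/fsnotify => gitmirror.corp.xyz.com/fsnotify/fsnotify v1.4.7

// proposed: the version is taken from the require directive above
replace github.com/fsnotify/fsnotify => gitmirror.corp.xyz.com/fsnotify/fsnotify
```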
What would be especially nice is that when upgrading a package such as github.com/fsnotify/fsnotify to a future v1.4.8 version, one would not need to first run `go get -u github.com/fsnotify/fsnotify` and then have to look up the new version and manually update the old version to the new one in the `replace` section (or worse, forget to do it and end up with an unintended replacement pinned to the old version).
@thepudds said on Slack that he wanted to suggest this as well. @bcmills @rsc - does it seem reasonable to you?
| NeedsInvestigation,modules | medium | Critical |
369,588,783 | go | net/http: add CONNECT bidi example | https://go-review.googlesource.com/c/go/+/123156 is removing some documentation from http.Transport that says not to use CONNECT requests with Transport because it's adding CONNECT support.
We should also add examples. CL 123156 has a test which is close at least, but could be cleaned up and be made less unit-testy.
| Documentation,help wanted,NeedsFix | low | Major |
369,601,132 | TypeScript | Computed Properties aren't bound correctly during Object/Class evaluation | <!-- π¨ STOP π¨ π¦π§π’π£ π¨ πΊπ»πΆπ· π¨
Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Even if you think you've found a *bug*, please read the FAQ first, especially the Common "Bugs" That Aren't Bugs section!
Please help us by doing the following steps before logging an issue:
* Search: https://github.com/Microsoft/TypeScript/search?type=Issues
* Read the FAQ: https://github.com/Microsoft/TypeScript/wiki/FAQ
Please fill in the *entire* template below.
-->
<!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. -->
**TypeScript Version:** 3.2.0-dev.20181011
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:** computed property expression
**Code**
```ts
const classes = [];
for (let i = 0; i <= 10; ++i) {
  classes.push(
    class A {
      [i] = "my property";
    }
  );
}
for (const clazz of classes) {
  console.log(Object.getOwnPropertyNames(new clazz()));
}
```
**Expected behavior:** The log statements indicate that each class in `classes` has a different property name (`i` should be evaluated at the time of the class evaluation and all instances of that class should have a property name corresponding to that evaluation of `i`).
**Actual behavior:** Compiled code logs:
> [ '10' ]
> [ '10' ]
> [ '10' ]
> [ '10' ]
> [ '10' ]
> [ '10' ]
> [ '10' ]
> [ '10' ]
> [ '10' ]
> [ '10' ]
> [ '10' ]
[**Playground Link**](https://www.typescriptlang.org/play/index.html#src=const%20classes%20%3D%20%5B%5D%3B%0D%0Afor%20(let%20i%20%3D%200%3B%20i%20%3C%3D%2010%3B%20%2B%2Bi)%20%7B%0D%0A%20%20classes.push(%0D%0A%20%20%20%20class%20A%20%7B%0D%0A%20%20%20%20%20%20%5Bi%5D%20%3D%20%22my%20property%22%3B%0D%0A%20%20%20%20%7D%0D%0A%20%20)%3B%0D%0A%7D%0D%0Afor%20(const%20clazz%20of%20classes)%20%7B%0D%0A%20%20console.log(Object.getOwnPropertyNames(new%20clazz()))%3B%0D%0A%7D)
| Bug,Help Wanted,Effort: Moderate,Domain: Transforms | low | Critical |
369,624,255 | pytorch | Request for stripped down / inference only pytorch wheels | ## π Feature
Creating a precompiled pytorch wheel file that is trimmed down, inference only version.
## Motivation
Right now pytorch wheels are on average ~400MB zipped -> 1.+ GB unzipped, which is not a big deal for training & prototyping as generally the wheels are only installed once - but that's not the case for productionizing using service providers like sagemaker / algorithmia / etc.
## Pitch
If we can create a trimmed down, potentially inference-only capable wheel file, we can directly improve the load-time performance of these algorithms in serverless algorithm delivery environments, which could directly improve pytorch's ability to compete in the HPC serverless marketplace.
## Alternatives
We could also provide a clear way for users to create their own wheels, by simplifying and documenting the build process somewhat to enable optional features during the compilation process.
## Additional context
Full disclosure, I'm an employee at Algorithmia and this change would make my life much easier :smile:
cc @malfet @seemethere @walterddr | module: build,triaged | low | Major |
369,669,027 | opencv | opencv_createsamples app can fail if negative images are different sizes | ##### System information (version)
- OpenCV => 3.4.3
- Operating System / Platform => Windows 64 Bit
- Compiler => Visual Studio 2017
##### Detailed description
In the opencv_createsamples app, the cvCreateTestSamples() for loop calculates maxscale only once if it is initially passed with a negative value. This works fine if all of the negative training images are the same size, but can cause an assertion to be raised if a smaller image is encountered. (width or height becomes too large and results in x or y being negative)
#####
    for( i = 0; i < count; i++ )
    {
        icvGetNextFromBackgroundData( cvbgdata, cvbgreader );
        if( maxscale < 0.0 ) /// maxscale is only < 0 once and is never recalculated
        {
            maxscale = MIN( 0.7F * cvbgreader->src.cols / winwidth,
                            0.7F * cvbgreader->src.rows / winheight );
        }
        if( maxscale < 1.0F ) continue;
        scale = theRNG().uniform( 1.0F, (float)maxscale );
        width = (int) (scale * winwidth);
        height = (int) (scale * winheight);
        x = (int) ( theRNG().uniform( 0.1, 0.8 ) * (cvbgreader->src.cols - width));
        y = (int) ( theRNG().uniform( 0.1, 0.8 ) * (cvbgreader->src.rows - height));
        //////
        // During loop execution, cvbgreader->src.cols and cvbgreader->src.rows may
        // change, but maxscale does not.
        // width or height could thus be greater than src.cols or src.rows resulting in
        // negative x or y values which causes an assertion when icvPlaceDistortedSample()
        // is called
        //////
        if( invert == CV_RANDOM_INVERT )
        {
            inverse = theRNG().uniform( 0, 2 );
        }
        icvPlaceDistortedSample( cvbgreader->src(Rect(x, y, width, height)), inverse, maxintensitydev,
                                 maxxangle, maxyangle, maxzangle,
                                 1, 0.0, 0.0, &data );
| feature,category: apps | low | Minor |
369,675,744 | rust | Expected identifier error hides other expected tokens | ```
error: expected identifier, found `,`
--> src/lib.rs:5:18
|
5 | #[cfg_attr(all(),,)]
| ^ expected identifier
```
I would expect the error message to say ``^ expected identifier or `)` ``. It doesn't because this diagnostic is created by [`expected_ident_found`](https://github.com/rust-lang/rust/blob/master/src/libsyntax/parse/parser.rs#L797) which ignores the other expected tokens. I wonder if the suggestion can be added to [`expect_one_of`](https://github.com/rust-lang/rust/blob/master/src/libsyntax/parse/parser.rs#L681) with the rest of the logic.
The same seems to happen for "expected type" e.g. `impl;` saying only "expected type" instead of also expecting a `<` for generics. I'm not sure if I should open another issue for that. | C-enhancement,A-diagnostics,T-compiler,D-papercut | low | Critical |
369,754,297 | flutter | Using copyWith in a degenerative scenario | NOTE: this feels like it might be a case of "too bad, too late" in terms of getting any changes into flutter, but I want to raise it anyway to see what feedback I can get.
It appears to be impossible to use `copyWith` in a degenerative scenario. Suppose, for instance, you have an `InputDecoration` and you want to ensure the `errorText` is `null`:
```dart
final effectiveDecoration = decoration.copyWith(errorText: null);
```
This won't have the desired effect because the `copyWith` method does this:
```dart
errorText: errorText ?? this.errorText,
```
So it ignores the request because it can't distinguish between wanting to set `errorText` to `null` and not wanting to set it at all. And in this particular case, one can't even work around it by setting `errorText` to `""` because that still causes an (empty) error message to appear.
The only workaround that I can find is to literally copy the entire object and set the `errorText` to `null`:
```dart
final effectiveDecoration = InputDecoration(
  errorText: null,
  border: decoration.border,
  contentPadding: decoration.contentPadding,
  counterStyle: decoration.counterStyle,
  counterText: decoration.counterText,
  disabledBorder: decoration.disabledBorder,
  enabled: decoration.enabled,
  enabledBorder: decoration.enabledBorder,
  errorBorder: decoration.errorBorder,
  errorMaxLines: decoration.errorMaxLines,
  errorStyle: decoration.errorStyle,
  fillColor: decoration.fillColor,
  filled: decoration.filled,
  focusedBorder: decoration.focusedBorder,
  focusedErrorBorder: decoration.focusedErrorBorder,
  helperStyle: decoration.helperStyle,
  helperText: decoration.helperText,
  hintStyle: decoration.hintStyle,
  hintText: decoration.hintText,
  icon: decoration.icon,
  isDense: decoration.isDense,
  labelStyle: decoration.labelStyle,
  labelText: decoration.labelText,
  prefix: decoration.prefix,
  prefixIcon: decoration.prefixIcon,
  prefixStyle: decoration.prefixStyle,
  prefixText: decoration.prefixText,
  suffix: decoration.suffix,
  suffixIcon: decoration.suffixIcon,
  suffixStyle: decoration.suffixStyle,
  suffixText: decoration.suffixText,
);
```
This is obviously super clumsy and not future-proof (fields added to `InputDecoration` will not be copied and won't break the build).
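For comparison, a sketch of the sentinel-default pattern sometimes used for this (my own illustration; `MyDecoration` and `_unset` are made up and this is not Flutter's API):
```dart
const Object _unset = const Object();

class MyDecoration {
  final String errorText;
  const MyDecoration({this.errorText});

  MyDecoration copyWith({Object errorText: _unset}) {
    return MyDecoration(
      // `identical` distinguishes "argument omitted" from an explicit null.
      errorText: identical(errorText, _unset) ? this.errorText : errorText as String,
    );
  }
}
```
With that pattern, `copyWith(errorText: null)` really clears the field, while omitting the argument keeps the old value.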
Suggestions? | framework,c: proposal,a: null-safety,P2,team-framework,triaged-framework | low | Critical |
369,776,297 | rust | Missed optimization: layout optimized enums produce slow derived code | Changing
```rust
const FOO_A: u32 = 0xFFFF_FFFF;
const FOO_B: u32 = 0xFFFF_FFFE;
const BAR_X: u32 = 0;
const BAR_Y: u32 = 1;
const BAR_Z: u32 = 2;
struct Foo { u: u32 }
```
https://play.rust-lang.org/?gist=9d1ff0355fbfabbc0c47f15e78e94687&version=nightly&mode=debug&edition=2015
to
```rust
pub enum Bar {
    X, Y, Z
}

enum Foo {
    A,
    B,
    Other(Bar),
}
```
https://play.rust-lang.org/?gist=faf6db37cdc627b1c5f8d582ad5c6779&version=nightly&mode=release&edition=2015
While this will result in pretty much the same layout as before, any derived code on `Foo` will now generate less optimal code. Apparently llvm can't manage to clean that up.
The llvm IR for the first playground link is
```llvm
define zeroext i1 @_ZN10playground3foo17h7604dbf314c89374E(i32, i32) unnamed_addr #0 {
start:
%2 = icmp eq i32 %0, %1
ret i1 %2
}
```
while the one for the second link is
```llvm
define zeroext i1 @_ZN10playground3foo17ha662001f5519a11dE(i32, i32) unnamed_addr #0 {
start:
%2 = add nsw i32 %0, -3
%3 = icmp ult i32 %2, 2
%narrow.i = select i1 %3, i32 %2, i32 2
%4 = add nsw i32 %1, -3
%5 = icmp ult i32 %4, 2
%narrow8.i = select i1 %5, i32 %4, i32 2
%6 = icmp eq i32 %narrow.i, %narrow8.i
br i1 %6, label %bb6.i, label %"_ZN56_$LT$playground..Foo$u20$as$u20$core..cmp..PartialEq$GT$2eq17h647dd5d9c0e8f1fcE.exit"
bb6.i: ; preds = %start
%7 = icmp eq i32 %0, %1
%not.or.cond.i = or i1 %3, %5
%spec.select.i = or i1 %7, %not.or.cond.i
br label %"_ZN56_$LT$playground..Foo$u20$as$u20$core..cmp..PartialEq$GT$2eq17h647dd5d9c0e8f1fcE.exit"
"_ZN56_$LT$playground..Foo$u20$as$u20$core..cmp..PartialEq$GT$2eq17h647dd5d9c0e8f1fcE.exit": ; preds = %start, %bb6.i
%8 = phi i1 [ %spec.select.i, %bb6.i ], [ false, %start ]
ret i1 %8
}
```
We can't improve the derives, because the derives on `Foo` can't see the definition of `Bar`.
| A-LLVM,I-slow,A-codegen,T-compiler | low | Critical |
369,806,614 | rust | Functions still get personality function attached to them when landing pads are disabled | Compiling with `-Cpanic=abort` or `-Zno-landing-pads` should make associated personality functions entirely unnecessary, yet they still somehow end up getting attached to functions generated with the "current" CG.
Consider for example this function:
```rust
pub fn fails2(a: &mut u32, b: &mut u32) -> i32 {
::std::mem::swap(a, b);
2 + 2
}
```
which when compiled (with or without optimisations) with `-Cpanic=abort`, will contain no personality functions in `1.27.1` but will contain them starting with `1.28`.
<details>
<summary>1.27.1</summary>
```llvm
define i32 @_ZN7example6fails217h8dd58cf9651f12a5E(i32* noalias nocapture dereferenceable(4) %a, i32* noalias nocapture dereferenceable(4) %b) unnamed_addr #0 !dbg !4 {
start:
%0 = load i32, i32* %a, align 1, !dbg !7, !alias.scope !30, !noalias !33
%1 = load i32, i32* %b, align 1, !dbg !35, !alias.scope !33, !noalias !30
store i32 %1, i32* %a, align 1, !dbg !35, !alias.scope !30, !noalias !33
store i32 %0, i32* %b, align 1, !dbg !36, !alias.scope !33, !noalias !30
ret i32 4, !dbg !37
}
```
</details>
<details>
<summary>1.28</summary>
```llvm
define i32 @_ZN7example6fails217h203cd5a0beec258bE(i32* noalias nocapture dereferenceable(4) %a, i32* noalias nocapture dereferenceable(4) %b) unnamed_addr #0 personality i32 (i32, i32, i64, %"unwind::libunwind::_Unwind_Exception"*, %"unwind::libunwind::_Unwind_Context"*)* @rust_eh_personality !dbg !4 {
start:
%tmp.0.copyload.i.i.i = load i32, i32* %a, align 4, !dbg !7, !alias.scope !21, !noalias !24
%0 = load i32, i32* %b, align 4, !dbg !26, !alias.scope !24, !noalias !21
store i32 %0, i32* %a, align 4, !dbg !26, !alias.scope !21, !noalias !24
store i32 %tmp.0.copyload.i.i.i, i32* %b, align 4, !dbg !28, !alias.scope !24, !noalias !21
ret i32 4, !dbg !31
}
```
</details>
This is technically a codegen regression, albeit very innocuous one.
| A-codegen,T-compiler | low | Minor |
369,817,335 | go | encoding/json: confusing errors when unmarshaling custom types | In go1.11
There is a minor issue with custom type unmarshaling using "encoding/json". As far as I understand the documentation (and also looking through the internal code), the requirements are:
- when processing map keys, the custom type needs to support UnmarshalText
- when processing values, the custom type needs to support UnmarshalJSON
When both of these interfaces are supported, everything is fine. Otherwise, the produced error messages are a bit cryptic and in some cases really confusing. I believe adding a test for `Implements(jsonUnmarshallerType)` inside `encoding/json/decode.go: func (d *decodeState) object(v reflect.Value)` will make things more consistent.
Here is the code (try commenting out `UnmarshalText` and/or/xor `UnmarshalJSON` methods):
``` go
package main

import (
    "encoding/json"
    "fmt"
)

type Enum int

const (
    Enum1 = Enum(iota + 1)
    Enum2
)

func (enum Enum) String() string {
    switch enum {
    case Enum1: return "Enum1"
    case Enum2: return "Enum2"
    default: return "<INVALID ENUM>"
    }
}

func (enum *Enum) unmarshal(b []byte) error {
    var s string
    err := json.Unmarshal(b, &s)
    if err != nil { return err }
    switch s {
    case "ONE": *enum = Enum1
    case "TWO": *enum = Enum2
    default: return fmt.Errorf("Invalid Enum value '%s'", s)
    }
    return nil
}

func (enum *Enum) UnmarshalText(b []byte) error {
    return enum.unmarshal(b)
}

func (enum *Enum) UnmarshalJSON(b []byte) error {
    return enum.unmarshal(b)
}

func main() {
    data := []byte(`{"ONE":"ONE", "TWO":"TWO"}`)

    var ss map[string]string
    err := json.Unmarshal(data, &ss)
    if err != nil { fmt.Println("ss failure:", err) } else { fmt.Println("ss success:", ss) }

    var se map[string]Enum
    err = json.Unmarshal(data, &se)
    if err != nil { fmt.Println("se failure:", err) } else { fmt.Println("se success:", se) }

    var es map[Enum]string
    err = json.Unmarshal(data, &es)
    if err != nil { fmt.Println("es failure:", err) } else { fmt.Println("es success:", es) }

    var ee map[Enum]Enum
    err = json.Unmarshal(data, &ee)
    if err != nil { fmt.Println("ee failure:", err) } else { fmt.Println("ee success:", ee) }

    // Output when both UnmarshalText and UnmarshalJSON are defined:
    // ss success: map[ONE:ONE TWO:TWO]
    // se success: map[ONE:Enum1 TWO:Enum2]
    // es success: map[Enum1:ONE Enum2:TWO]
    // ee success: map[Enum1:Enum1 Enum2:Enum2]

    // Output when UnmarshalJSON is commented out:
    // ss success: map[ONE:ONE TWO:TWO]
    // se failure: invalid character 'T' looking for beginning of value
    // es failure: invalid character 'O' looking for beginning of value
    // ee failure: invalid character 'T' looking for beginning of value

    // Output when UnmarshalText is commented out:
    // ss success: map[ONE:ONE TWO:TWO]
    // se success: map[ONE:Enum1 TWO:Enum2]
    // es failure: json: cannot unmarshal number ONE into Go value of type main.Enum
    // ee failure: json: cannot unmarshal number ONE into Go value of type main.Enum

    // In more complex cases, having UnmarshalText undefined also produced this
    // error message: JSON decoder out of sync - data changing underfoot?
}
```
| NeedsDecision | medium | Critical |
369,834,512 | vscode | Open next build error file+line based on regex parsing of build output. | Edit: The proposed ability to jump to first/next build error should not be part of the "Problems" explorer system, since this requires code analysis. **The command should be based solely on regex parsing of the build output, and should work in absence of any language support aka extensions.** I shouldn't have to run 300mb and jvm in the background just to jump to build errors.
When I run a build task to compile my code, I would like a keyboard shortcut that will open the location of the first/next compile error by opening the file and jumping to the line.
This command should ignore non-build-related errors and warnings (from linters / intellisense). I am not interested in general VSCode problem noise; I just want to be able to quickly and easily address compile errors.
It should start at the first compile error in the terminal, and cycle from there. Basically, just go see how all other editors in the world jump to build errors with a shortcut.....do that.
It's a pretty basic and necessary feature.
1. Hit build.
2. Jump to error....
The problems view is not adequate.
**1. It requires the language support plugin to be installed.** This is totally unnecessary, as regex parsing of the build output is simpler, lighter, and tried, tested, and true.
**2. It is more than a single hotkey press.** In Sublime, when I build, I press a single shortcut to jump to an error, then I fix the error, then I press a single shortcut to jump to the next error... rinse and repeat.
In VSCode I...
-press a shortcut to focus problems
-I press down to highlight error
-I press ENTER to focus editor and the fix error
-I press shortcut to focus problems view again
It's so much more work!
**3. It does not cycle problems in order.** This is important, as when compiling, often the first build error is also causing the subsequent errors.
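For reference, regex parsing of build output is already how task problem matchers work; a rough sketch of one (illustrative values; the regex assumes GCC-style `file:line:col: severity: message` output) that a "jump to first/next build error" command could be layered on top of:
```json
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "build",
      "type": "shell",
      "command": "make",
      "problemMatcher": {
        "owner": "cpp-build",
        "fileLocation": ["relative", "${workspaceFolder}"],
        "pattern": {
          "regexp": "^(.*):(\\d+):(\\d+):\\s+(warning|error):\\s+(.*)$",
          "file": 1,
          "line": 2,
          "column": 3,
          "severity": 4,
          "message": 5
        }
      }
    }
  ]
}
```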
| feature-request,tasks | medium | Critical |
369,855,222 | go | cmd/go: retry failed fetches | ```
$ go version
go version go1.11.1 darwin/amd64
```
```go
package main

import (
    _ "github.com/google/go-cloud/wire/cmd/wire"
)

func main() {
    //
}
```
```
go build -o a.out
go: finding github.com/google/go-cloud/wire/cmd/wire latest
go: finding github.com/google/go-cloud/wire/cmd latest
go: finding github.com/google/go-cloud/wire latest
go: finding google.golang.org/api v0.0.0-20180606215403-8e9de5a6de6d
go: google.golang.org/[email protected]: git fetch -f https://code.googlesource.com/google-api-go-client refs/heads/*:refs/heads/* refs/tags/*:refs/tags/* in /Users/chai/go/pkg/mod/cache/vcs/9e62a95b0409d58bc0130bae299bdffbc7b7e74f3abe1ecf897474cc474b8bc0: exit status 128:
error: RPC failed; curl 18 transfer closed with outstanding read data remaining
fatal: The remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed
go: error loading module requirements
``` | NeedsInvestigation,FeatureRequest,early-in-cycle,modules | medium | Critical |
369,864,702 | godot | Need 'brush_transfer' func for 3.0/3.1 | **Godot version:**
3.1 alpha
**Issue description:**
The Image class is missing a function compared with Godot 2.1.
| enhancement,topic:core | low | Minor |
369,924,671 | vscode | Centered layout should be per workbench, not per editor area | With centered editor layout enabled (with or without zen mode), toggling the side bar on/off will resize the sashes. Even though there is enough real-estate for the side bar to show without doing so. To illustrate this behavior, look at the left sash in both these screenshots.
1.

2.

This illustrates the "jumpiness" when opening the side bar in this scenario. It would be preferable that the sashes were not moved unless needed. | feature-request,layout,workbench-zen | high | Critical |
369,926,347 | create-react-app | Check Node version early | We need to add better Node version checks. The current one doesnβt cover all requirements (like Node >= 8.9.0). | tag: enhancement,difficulty: starter,contributions: claimed,good first issue | medium | Major |
369,926,657 | flutter | ListView should have a addKeepAlive field | ListView adds AutomaticKeepAlives by default. When things scroll off screen, the list items are deleted. To work around this, I've been calling `KeepAliveNotification(KeepAliveHandle()).dispatch(context)` for every item in the `ListView`. I believe it would be cleaner to have an option for adding a `KeepAlive` instead of an `AutomaticKeepAlive`. | c: new feature,framework,f: scrolling,P3,team-framework,triaged-framework | low | Major |
369,941,598 | create-react-app | Compile JSX to direct createElement() calls | Due to how webpack works today with CommonJS, we pay the cost of *three* object property accesses (`ReactWebpackBinding.default.createElement`) for every JSX call. It doesn't minify well and has a minor effect on runtime performance. It's also a bit clowny.
We should fix this to compile JSX to something like
```js
var createElement = require('react').createElement
createElement(...)
```
Could be a custom Babel transform. Could be a transform that inserts `_createReactElement` into scope and specifies it as the JSX pragma. | contributions: up for grabs!,tag: enhancement,difficulty: medium | low | Major |
369,941,882 | three.js | ArrayCamera: Compute frustum based on sub-cameras. | ArrayCamera only extends and only supports Perspective Cameras. | Enhancement | low | Major |
369,952,211 | rust | Refiling "Deprecate "implicit ()" by making it a compilation error." | Refiled from https://github.com/rust-lang/rfcs/issues/2098#issuecomment-320580803 as a diagnostics issue:
>
> clippy can't do anything right now if a compiler error occurs before our lints run and most our lints are run after type checking.
>
> I agree though that example A should backtrack the source of the value and see if there's the possibility of removing a semicolon in the presence of `()`.
>
> Note that there are many other situations, e.g. the reverse of the above comparison:
>
> ```rust
> fn main() {
> let v = {
> println!("hacky debug: v is being initialized.");
> 42;
> };
> assert!(42 == v);
> }
> ```
>
> produces
>
> ```
> Compiling playground v0.0.1 (file:///playground)
> error[E0277]: the trait bound `{integer}: std::cmp::PartialEq<()>` is not satisfied
> --> src/main.rs:6:16
> |
> 6 | assert!(42 == v);
> | ^^ can't compare `{integer}` with `()`
> |
> = help: the trait `std::cmp::PartialEq<()>` is not implemented for `{integer}`
> ```
>
> I do not think that this requires an RFC. Simply implementing improved diagnostics and opening a PR is totally fine (I've not seen a denied diagnostic improvement PR so far).
cc @oli-obk @estebank | A-frontend,C-enhancement,A-diagnostics,T-compiler,WG-diagnostics | low | Critical |
369,955,869 | godot | `KinematicBody.move_and_collide(rel_vec)` appears to return `GridMap` (not a PhysicsBody) | <!-- Please search existing issues for potential duplicates before filing yours:
https://github.com/godotengine/godot/issues?q=is%3Aissue
-->
**Godot version:**
`v3.0.6-stable`
**Issue description:**
`KinematicBody.move_and_collide(rel_vec)` appears to return `GridMap` as the `KinematicCollision.collider`. This is an issue if you want to handle **all** collisions by looping through and adding collision exceptions after each is handled. You cannot add a collision exception with `GridMap` since it is not a `PhysicsBody`, and any loop that keeps calling `move_and_collide` will just keep returning `GridMap`.
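A rough sketch of that loop (my own illustration; `handle_collision` is a placeholder), showing where a `GridMap` collider breaks it:
```gdscript
extends KinematicBody

func move_handling_all_collisions(rel_vec):
    var collision = move_and_collide(rel_vec)
    while collision:
        handle_collision(collision)
        if collision.collider is PhysicsBody:
            add_collision_exception_with(collision.collider)
        else:
            break  # e.g. a GridMap: not a PhysicsBody, so no exception can be added
        collision = move_and_collide(rel_vec)

func handle_collision(collision):
    pass
```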
**Steps to reproduce:**
Cause a collision with a gridmap element that contains a collision shape.
**Minimal reproduction project:**
I'll drop one up here if needed.
I'll take a look at the code soon to see if this is an easy fix. For now I'm just documenting this issue. | discussion,topic:core | low | Minor |
369,961,475 | pytorch | how to store a bounding box in Tensor? | How do I store the bounding box of an image in a Caffe2 TensorProto?
| caffe2 | low | Minor |
369,965,679 | pytorch | Install Jetson TX2 Max Regcount Error | ## π Bug
<!-- A clear and concise description of what the bug is. -->
## To Reproduce
Steps to reproduce the behavior:
1. Attempt to install from source on a fresh Jetpack 3.3 on nVidia Jetson TX2
2. Instead of ```python setup.py install```, install with ```python3 setup.py install``` (Tried with both, same error)
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
Errors are:
...about 100 NVLink errors, listing the last few below along with final error log.
nvlink error : entry function '_Z28ncclAllReduceLLKernel_sum_i88ncclColl' with max regcount of 80 calls function '_Z25ncclReduceScatter_max_u64P14CollectiveArgs' with regcount of 96
nvlink error : entry function '_Z29ncclAllReduceLLKernel_sum_i328ncclColl' with max regcount of 80 calls function '_Z25ncclReduceScatter_max_u64P14CollectiveArgs' with regcount of 96
nvlink error : entry function '_Z29ncclAllReduceLLKernel_sum_f168ncclColl' with max regcount of 80 calls function '_Z25ncclReduceScatter_max_u64P14CollectiveArgs' with regcount of 96
nvlink error : entry function '_Z29ncclAllReduceLLKernel_sum_u328ncclColl' with max regcount of 80 calls function '_Z25ncclReduceScatter_max_u64P14CollectiveArgs' with regcount of 96
nvlink error : entry function '_Z29ncclAllReduceLLKernel_sum_f328ncclColl' with max regcount of 80 calls function '_Z25ncclReduceScatter_max_u64P14CollectiveArgs' with regcount of 96
nvlink error : entry function '_Z29ncclAllReduceLLKernel_sum_u648ncclColl' with max regcount of 80 calls function '_Z25ncclReduceScatter_max_u64P14CollectiveArgs' with regcount of 96
nvlink error : entry function '_Z28ncclAllReduceLLKernel_sum_u88ncclColl' with max regcount of 80 calls function '_Z25ncclReduceScatter_max_u64P14CollectiveArgs' with regcount of 96
Makefile:83: recipe for target '/home/nvidia/jetson-reinforcement/build/pytorch/third_party/build/nccl/obj/collectives/device/devlink.o' failed
make[5]: *** [/home/nvidia/jetson-reinforcement/build/pytorch/third_party/build/nccl/obj/collectives/device/devlink.o] Error 255
Makefile:45: recipe for target 'devicelib' failed
make[4]: *** [devicelib] Error 2
Makefile:24: recipe for target 'src.build' failed
make[3]: *** [src.build] Error 2
CMakeFiles/nccl.dir/build.make:60: recipe for target 'lib/libnccl.so' failed
make[2]: *** [lib/libnccl.so] Error 2
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/nccl.dir/all' failed
make[1]: *** [CMakeFiles/nccl.dir/all] Error 2
Makefile:127: recipe for target 'all' failed
make: *** [all] Error 2
Failed to run 'bash ../tools/build_pytorch_libs.sh --use-cuda --use-nnpack nccl caffe2 libshm gloo c10d THD'
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
Install should work so that I can open a Python 3 console and can succesfully do: ```import torch```
## Environment
Script does not run.
- PyTorch Version (e.g., 1.0): Latest master
- OS (e.g., Linux): nVidia Jetson TX2 Ubuntu, aarch64 architecture
- How you installed PyTorch (`conda`, `pip`, source): source
- Build command you used (if compiling from source): ```python3 setup.py install```
- Python version: 3.5.3
- CUDA/cuDNN version: 9.0, 7.0
- GPU models and configuration:
- Any other relevant information:
There is no Conda build for aarch64, so have to use standard python libraries.
cc @malfet @seemethere @walterddr | needs reproduction,module: build,triaged,module: jetson | low | Critical |
369,966,750 | TypeScript | Enforce definite initialization of `static` members | Is there some clever reason why TSC can't check this (maybe... related to semantics of JS inheritance?)?
**TypeScript Version:** 3.2.0-dev.20181011
**Search Terms:** strictPropertyInitialization, static, initialized, uninitialized, assigned, property
**Code**
```ts
class A {
  static s: number
  a() {
    A.s * 3 // Should be a compile error
  }
}
A.s * 7 // Should be a compile error
```
**Expected behavior:** Both lines should produce compile errors.
**Actual behavior:** Both lines result in runtime exceptions.
[**Playground Link**](https://www.typescriptlang.org/play/index.html#src=class%20A%20%7B%0D%0A%20%20static%20s%3A%20number%0D%0A%20%20a()%20%7B%0D%0A%20%20%20%20A.s%20*%203%20%2F%2F%20Should%20be%20a%20compile%20error%0D%0A%20%20%7D%0D%0A%7D%0D%0AA.s%20*%207%20%2F%2F%20Should%20be%20an%20compile%20error)
**Related Issues:**
- https://github.com/Microsoft/TypeScript/issues/21976
| Suggestion,Awaiting More Feedback | medium | Critical |
369,967,892 | gin | How to use multiple domains? | How to use multiple domains with one port ? | question | low | Minor |
369,975,065 | go | x/crypto/acme/autocert: verify the beginning time of an issued cert is not necessary | autocert.go line 1095 compares the current time and the time the cert is issued to check if the cert is valid.
But the time of the server is not always 100% accurate, and if the server time is behind the acme server time (i.e. the real time), autocert judges the cert as not valid, and will request for another cert on the next request which will hit the rate limit in a very short time.
### What did you expect to see?
Verifying the beginning time of the cert is not necessary; only the expiry date needs to be verified. Since the valid duration of an acme cert is typically 3 months long, I don't think any server will have such a huge time difference.
### What did you see instead?
The cert is judged as not valid. More cert requests are sent subsequently (they are judged as invalid too), which hits the rate limit. No cert can be successfully issued until the server time is corrected.
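A small sketch of the proposed check (illustrative; `certUsable` is not an actual autocert function): tolerate clock skew on `NotBefore` and only enforce `NotAfter`:
```go
package certcheck

import (
    "crypto/x509"
    "time"
)

// certUsable ignores small clock drift on NotBefore and only insists that the
// certificate has not expired.
func certUsable(leaf *x509.Certificate, now time.Time) bool {
    const skew = 24 * time.Hour
    notYetValid := leaf.NotBefore.After(now.Add(skew))
    expired := !now.Before(leaf.NotAfter)
    return !notYetValid && !expired
}
```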
| NeedsInvestigation | medium | Major |
369,979,536 | rust | improve diagnostic for trait impl involving (infinite?) type recursion and constants | I'm seeing an internal compiler error in something I *think* is related to infinite recursion in the type system while implementing a generic trait.
I tried compiling this code via cargo as a library crate (all of this in `lib.rs`):
```
use std::ops::Mul;
use std::f64::consts::PI;

pub trait Unit: Default + Copy + Clone + PartialOrd + PartialEq {}

#[derive(Default, Copy, Clone, PartialOrd, PartialEq)]
pub struct Quantity<U: Unit>(
    pub f64,
    pub U,
);

#[derive(Default, Copy, Clone, PartialOrd, PartialEq)]
pub struct Point<T=f64> {
    pub x: T,
    pub y: T,
}

impl<'b, T> Mul<&'b Point<T>> for f64 where f64: Mul<&'b T> {
    type Output = Point<<f64 as Mul<&'b T>>::Output>;
    fn mul(self, rhs: &'b Point<T>) -> Self::Output { Point{ x: self*&rhs.x, y: self*&rhs.y } }
}

mod detail { // private with public struct, for use in a public type alias only
    use super::Unit;

    #[derive(Default, Copy, Clone, PartialOrd, PartialEq)]
    pub struct AngleUnit;
    impl Unit for AngleUnit {}
} // mod detail

pub type Angle = Quantity<detail::AngleUnit>;
pub const RADIANS: Angle = Quantity(1.0, detail::AngleUnit{});
pub const DEGREES: Angle = Quantity(RADIANS.0*PI/180.0, detail::AngleUnit{});
```
I expected to see some reasonable compiler error; I'm pretty confident the code is bad. In particular, if you comment out the last two lines (the definition of the constants), you get a friendlier error:
```
error[E0275]: overflow evaluating the requirement `_: std::marker::Sized`
```
(followed by some less-friendly recursive messages that do eventually truncate).
With those two `const` lines as above, I get:
```
error: internal compiler error: librustc/traits/structural_impls.rs:178: impossible case reached
thread 'main' panicked at 'Box<Any>', librustc_errors/lib.rs:578:9
note: Run with `RUST_BACKTRACE=1` for a backtrace.
error: aborting due to previous error
note: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/blob/master/CONTRIBUTING.md#bug-reports
note: rustc 1.29.1 (b801ae664 2018-09-20) running on x86_64-unknown-linux-gnu
note: compiler flags: -C debuginfo=2 -C incremental --crate-type lib
note: some of the compiler flags provided by cargo are hidden
```
## Meta
(see above for version)
Since I'm not completely confident in my ability to run just `rustc` with the same options, here's the output of `RUST_BACKTRACE=1 cargo test`, I get:
```
error: internal compiler error: librustc/traits/structural_impls.rs:178: impossible case reached
thread 'main' panicked at 'Box<Any>', librustc_errors/lib.rs:578:9
stack backtrace:
   0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
             at libstd/sys/unix/backtrace/tracing/gcc_s.rs:49
   1: std::sys_common::backtrace::print
             at libstd/sys_common/backtrace.rs:71
             at libstd/sys_common/backtrace.rs:59
   2: std::panicking::default_hook::{{closure}}
             at libstd/panicking.rs:211
   3: std::panicking::default_hook
             at libstd/panicking.rs:227
   4: rustc::util::common::panic_hook
   5: std::panicking::rust_panic_with_hook
             at libstd/panicking.rs:479
   6: std::panicking::begin_panic
   7: rustc_errors::Handler::bug
   8: rustc::session::opt_span_bug_fmt::{{closure}}
   9: rustc::ty::context::tls::with_opt::{{closure}}
  10: rustc::ty::context::tls::with_context_opt
  11: rustc::ty::context::tls::with_opt
  12: rustc::session::opt_span_bug_fmt
  13: rustc::session::bug_fmt
  14: rustc::traits::structural_impls::<impl rustc::ty::context::Lift<'tcx> for rustc::traits::SelectionError<'a>>::lift_to_tcx
  15: rustc::ty::context::TyCtxt::lift_to_global
  16: rustc::traits::select::SelectionContext::candidate_from_obligation
  17: rustc::traits::select::SelectionContext::evaluate_stack
  18: rustc::ty::context::tls::with_context
  19: rustc::dep_graph::graph::DepGraph::with_anon_task
  20: rustc::traits::select::SelectionContext::evaluate_predicate_recursively
  21: rustc::infer::InferCtxt::probe
  22: <&'a mut I as core::iter::iterator::Iterator>::next
  23: <alloc::vec::Vec<T> as alloc::vec::SpecExtend<T, I>>::from_iter
  24: rustc::traits::select::SelectionContext::candidate_from_obligation_no_cache
  25: rustc::traits::select::SelectionContext::candidate_from_obligation
      ... (frames 26-90 repeat the same trait-selection cycle) ...
  91: rustc::ty::context::tls::with_context
  92: rustc::dep_graph::graph::DepGraph::with_anon_task
  93: rustc::traits::select::SelectionContext::candidate_from_obligation
  94: rustc::traits::select::SelectionContext::evaluate_stack
  95: rustc::ty::context::tls::with_context
  96: rustc::dep_graph::graph::DepGraph::with_anon_task
  97: rustc::traits::select::SelectionContext::evaluate_predicate_recursively
  98: rustc::infer::InferCtxt::probe
  99: <&'a mut I as core::iter::iterator::Iterator>::next
query stack during panic:
#0 [evaluate_obligation] evaluating trait selection obligation `f64: std::ops::Mul<_>`
#1 [typeck_tables_of] processing `DEGREES`
end of query stack
``` | C-enhancement,A-diagnostics,A-trait-system,T-compiler | low | Critical |
369,986,009 | pytorch | fail to visualize caffe2 model | Referring to the official Caffe2 website, I tried to visualize a caffe2 model with this code:
```python
from caffe2.python import net_drawer
from IPython import display
graph = net_drawer.GetPydotGraph(train_model.net.Proto().op, "train_model", rankdir="LR")
display.Image(graph.create_png(), width=800)
```
I've also tried:
```python
graph = net_drawer.GetPydotGraph(train_model.net.Proto().op, "train_model", rankdir="LR")
graph.write_png('graph.png')
```
I've installed graphviz and pydot with:
```sh
sudo apt-get install graphviz
pip install pydot
```
But then I get the following error:
```
Traceback (most recent call last):
  File "/mnt/xiongcx/R2Plus1D-master/tools/train_net.py", line 535, in <module>
    main()
  File "/mnt/xiongcx/R2Plus1D-master/tools/train_net.py", line 530, in main
    Train(args)
  File "/mnt/xiongcx/R2Plus1D-master/tools/train_net.py", line 418, in Train
    explog
  File "/mnt/xiongcx/R2Plus1D-master/tools/train_net.py", line 144, in RunEpoch
    graph = net_drawer.GetPydotGraph(train_model.net.Proto().op, "train", rankdir="TB")
  File "/home/xiongcx/anaconda2/envs/r21d/lib/python2.7/site-packages/pydot.py", line 1673, in new_method
    encoding=encoding)
  File "/home/xiongcx/anaconda2/envs/r21d/lib/python2.7/site-packages/pydot.py", line 1756, in write
    s = self.create(prog, format, encoding=encoding)
  File "/home/xiongcx/anaconda2/envs/r21d/lib/python2.7/site-packages/pydot.py", line 1884, in create
    assert p.returncode == 0, p.returncode
AssertionError: -11
```
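One way to narrow this down (a sketch reusing the `train_model` from the snippets above; not a fix) is to dump the DOT source to a file and run graphviz by hand. The `-11` return code means the `dot` process was killed by SIGSEGV, so if the manual run also crashes, the problem is in the graphviz installation rather than in caffe2 or pydot:
```python
from caffe2.python import net_drawer

graph = net_drawer.GetPydotGraph(train_model.net.Proto().op, "train_model", rankdir="LR")
# Write the raw DOT text instead of letting pydot invoke the `dot` binary.
with open("graph.dot", "w") as f:
    f.write(graph.to_string())
# Then run `dot -Tpng graph.dot -o graph.png` in a shell to see whether
# graphviz itself segfaults on this graph.
```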
Do you have any ideas how to solve it? Or do you have any other ideas for visualizing a caffe2 model? Thanks in advance.
| caffe2 | low | Critical |
370,106,458 | pytorch | Caffe2 Installation inside Pytorch | I am getting this error message (please see the attachment) after running the command `python setup.py install`:
I followed this official PyTorch guide: https://caffe2.ai/docs/getting-started.html?platform=ubuntu&configuration=compile.
But the weird thing is that I can print caffe2 successfully

I would like to ask if anybody knows how to fix this?
| caffe2 | low | Critical |
370,132,706 | rust | Unreasonably large stack frames | Consider the following function:
```rust
pub struct Tree {
children: Vec<Tree>,
}
pub fn traverse(t: &Tree) {
println!();
for c in &t.children {
traverse(c);
}
}
```
When compiled in release mode, it uses 72 bytes of stack per tree level on x86-64, see [Godbolt](https://godbolt.org/z/AyjJM6).
For comparison, the equivalent C++ function
```c++
struct Tree {
vector<Tree> children;
};
void traverse(const Tree &t) {
cout << endl;
for (const auto &c : t.children) {
traverse(c);
}
}
```
uses only 24 bytes per tree level (when compiled with either gcc or clang), see [Godbolt](https://godbolt.org/z/17JCSX).
------
Version info:
I noticed it on a recent nightly
```
rustc 1.31.0-nightly (b2d6ea98b 2018-10-07)
binary: rustc
commit-hash: b2d6ea98b0db53889c5427e5a23cddb3bcb63040
commit-date: 2018-10-07
host: x86_64-pc-windows-msvc
release: 1.31.0-nightly
LLVM version: 8.0
```
But it's the same on stable 1.29.0 as well. | C-enhancement,T-compiler,I-heavy | low | Minor |
370,140,805 | rust | Support uftrace (and other fentry consumers) | [uftrace](https://github.com/namhyung/uftrace) should already work with `-Zprofile`, but it seems it does not.
Related issues
- [LLVM instrument functions](https://github.com/rust-lang/rust/issues/34701)
- [-Zprofile tracking issue](https://github.com/rust-lang/rust/issues/42524) | A-LLVM,T-compiler,C-feature-request | low | Minor |
370,168,248 | pytorch | pytorch/torch/utils/cpp_extension.py ignores compiler setting, | Every way of setting the compiler is ignored by the BuildExtension script:
e.g. `CC=gcc-6 CXX=g++-6 python setup.py`
will use `/usr/bin/gcc`, not `/usr/bin/gcc-6`.
example code:
```python
import os
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension
# it will just ignore everything...
#os.environ["CC"] = "/usr/bin/gcc-6"
#os.environ["CXX"] = "/usr/bin/g++-6"
setup(
name='andreas',
ext_modules=[
CUDAExtension('example', ['example.cu',
'example.cpp'],
)
],
cmdclass={
'build_ext': BuildExtension
},
)
```
Pytorch version 4.1
cc @yf225 @glaringlee | module: cpp-extensions,triaged | low | Minor |
370,212,078 | vscode | Sort lines sorts quoted values inconsistently |
Issue Type: <b>Bug</b>
Sort lines sorts quoted values inconsistently with the same values without double-quotes. I searched for [similar bugs](https://github.com/Microsoft/vscode/issues?q=is%3Aissue+is%3Aopen+%22sort+lines%22+label%3Abug) but didn't find much. Although, there's some complaint about [sorting with mixed case](https://github.com/Microsoft/vscode/issues/18315#issuecomment-352255186), but I specifically avoid that in my examples.

Sort the following lines (asc)
```
a
abc-def
ab
abc
```
You should get
```
a
ab
abc
abc-def
```
Now sort the same lines, but wrapped in double-quotes (like items in a JSON literal often are)
```
"a"
"abc-def"
"ab"
"abc"
```
I expect to get
```
"a"
"ab"
"abc"
"abc-def"
```
But I actually get this, which _must_ be incorrect, right?
```
"a"
"ab"
"abc-def"
"abc"
```
VS Code version: Code 1.28.0 (431ef9da3cf88a7e164f9d33bf62695e07c6c2a9, 2018-10-05T14:58:53.203Z)
OS version: Windows_NT x64 10.0.17134
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i7-6600U CPU @ 2.60GHz (4 x 2808)|
|GPU Status|2d_canvas: enabled<br>checker_imaging: disabled_off<br>flash_3d: enabled<br>flash_stage3d: enabled<br>flash_stage3d_baseline: enabled<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>native_gpu_memory_buffers: disabled_software<br>rasterization: enabled<br>video_decode: enabled<br>video_encode: enabled<br>webgl: enabled<br>webgl2: enabled|
|Memory (System)|15.89GB (2.21GB free)|
|Process Argv|.|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (7)</summary>
Extension|Author (truncated)|Version
---|---|---
tslint|eg2|1.0.40
vscode-gitk|how|1.3.2
csharp|ms-|1.16.2
Go|ms-|0.6.91
PowerShell|ms-|1.9.0
vsliveshare|ms-|0.3.790
vscode-docker|Pet|0.3.1
(1 theme extensions excluded)
</details>
<!-- generated by issue reporter --> | under-discussion,wont-fix,editor-sorting | low | Critical |
370,230,390 | TypeScript | TS2367: This condition will always return 'false' since the types 'Constructor<T>' and 'typeof Child' have no overlap. | <!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. -->
**TypeScript Version:** 3.2.0-dev.20181011
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:** TS2367
**Code**
```ts
abstract class Base<T> {
get Item(): T { return null }
}
class Child extends Base<Item> {
constructor(public data: any) { super() }
}
interface Item { }
declare type Constructor<T> = new (data: any) => Base<T>
function func<T>(constructor: Constructor<T>): void {
// TS2367: This condition will always return 'false' since the types 'Constructor<T>' and 'typeof Child' have no overlap.
if (constructor === Child) {
// do something
}
new constructor(null)
}
func(Child) // No errors
```
**Expected behavior:**
Compiles without errors.
**Actual behavior:**
```
test.ts:14:9 - error TS2367: This condition will always return 'false' since the types 'Constructor<T>' and 'typeof Child' have no overlap.
14 if (constructor === Child) {
~~~~~~~~~~~~~~~~~~~~~
```
[**Playground Link** ](https://www.typescriptlang.org/play/#src=abstract%20class%20Base<T>%20%7B%0D%0A%20%20%20%20get%20Item()%3A%20T%20%7B%20return%20null%20%7D%0D%0A%7D%0D%0A%0D%0Aclass%20Child%20extends%20Base<Item>%20%7B%0D%0A%20%20%20%20constructor(public%20data%3A%20any)%20%7B%20super()%20%7D%0D%0A%7D%0D%0A%0D%0Ainterface%20Item%20%7B%20%7D%0D%0Adeclare%20type%20Constructor<T>%20%3D%20new%20(data%3A%20any)%20%3D>%20Base<T>%0D%0A%0D%0Afunction%20func<T>(constructor%3A%20Constructor<T>)%3A%20void%20%7B%0D%0A%20%20%20%20if%20(constructor%20%3D%3D%3D%20Child)%20%7B%20%2F%2F%20TS2367%3A%20This%20condition%20will%20always%20return%20'false'%20since%20the%20types%20'new%20(data%3A%20any)%20%3D>%20Base<T>'%20and%20'typeof%20Child'%20have%20no%20overlap.%0D%0A%20%20%20%20%20%20%20%20%2F%2F%20do%20something%0D%0A%20%20%20%20%7D%0D%0A%20%20%20%20new%20constructor(null)%0D%0A%7D%0D%0A%0D%0Afunc(Child)%20%2F%2F%20No%20errors%0D%0A)
**Related Issues:** #25642 | Suggestion,Help Wanted,Good First Issue,Domain: Error Messages,Experience Enhancement,PursuitFellowship | medium | Critical |
370,239,607 | pytorch | Move BigTensorSerialization tests out of default caffe2_cpu_tests | Every time I run Caffe2 CPU tests, I have to wait minutes for the very slow BigTensorSerialization tests to first run. This is dumb. If the test is slow, it should get put somewhere else. | caffe2 | low | Major |
370,241,669 | pytorch | Differentiation through Module parameters updates | ## π Feature
<!-- A clear and concise description of the feature proposal -->
So far, it is possible to get a second order gradient by extracting the first-order gradient as a differentiable tensor:
```python
objective = loss(module, target)
gradients = torch.autograd.grad(objective, module.parameters(), create_graph=True)
```
However, as far as I understand, it is not possible to differentiate through module updates (using optimizers). For example, let us say that Z is an independent parameter I want to optimize, and I have a module M with parameter P (P is not Z). I update P as follows:
```python
cost1 = loss(f(P, Z), target1)
cost1.backward()
M_optimizer.step()  # P(Z) = P + g1(Z)

cost2 = loss(f(P, Z), target2)
cost2.backward()
M_optimizer.step()  # P(Z) = P + g1(Z) + g2(Z)
```
And now, I want to update Z in order to maximize a meta-objective meta_loss( P(Z), meta-target ).
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->
This kind of meta-update is common in meta-learning approaches, such as MAML (https://arxiv.org/abs/1703.03400). As mentioned in this post (https://discuss.pytorch.org/t/pytorch-implementation-of-maml-that-works-with-module-style-networks/26278), MAML implementations are limited to a functional-based structure and gradient descent is done by hand. For more complex models (CNN + LSTM) a user would have to re-implement everything by hand.
## Pitch
<!-- A clear and concise description of what you want to happen. -->
An optimizer that would change a module's parameters with a differentiable operation, e.g. that would first extract the gradient as a differentiable tensor, and sum the parameter with this differentiable gradient.
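For illustration, here is a minimal sketch of what such a differentiable update could look like when done by hand with `torch.autograd.grad`; the helper name, the learning rate, and the functional forward pass mentioned in the comments are assumptions for the example, not part of any existing API:
```python
import torch

def differentiable_sgd_step(loss, params, lr=0.1):
    # create_graph=True keeps the gradients themselves differentiable.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    # The updated parameters are new tensors that still depend on the original
    # parameters and on anything else the loss depended on (e.g. Z).
    return [p - lr * g for p, g in zip(params, grads)]

# Usage sketch: because the updated parameters are plain tensors, they have to
# be fed through a functional version of the module's forward pass, e.g.
#   new_params = differentiable_sgd_step(cost1, list(module.parameters()))
#   meta_obj = meta_loss(functional_forward(new_params, meta_input), meta_target)
#   meta_obj.backward()   # gradients flow back into Z through the update
```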
## Alternatives
<!-- A clear and concise description of any alternative solutions or features you've considered, if any. -->
At least (if the solution above is not possible), the possibility to sum a module's parameter with a differentiable tensor (and then the gradient descent can be easily implemented by hand).
## Additional context
<!-- Add any other context or screenshots about the feature request here. -->
| feature,module: autograd,module: nn,module: optimizer,triaged | medium | Major |
370,255,645 | react-native | [Android][Animation] Interpolated translateY value causes android view to jump. | <!-- Requirements: please go through this checklist before opening a new issue -->
- [+] Review the documentation: https://facebook.github.io/react-native
- [+] Search for existing issues: https://github.com/facebook/react-native/issues
- [+] Use the latest React Native release: https://github.com/facebook/react-native/releases
## Environment
React Native Environment Info:
System:
OS: macOS 10.14
CPU: x64 Intel(R) Core(TM) i7-4770HQ CPU @ 2.20GHz
Memory: 862.68 MB / 16.00 GB
Shell: 3.2.57 - /bin/sh
Binaries:
Node: 10.9.0 - ~/.nvm/versions/node/v10.9.0/bin/node
Yarn: 1.10.1 - ~/.nvm/versions/node/v10.9.0/bin/yarn
npm: 6.2.0 - ~/.nvm/versions/node/v10.9.0/bin/npm
Watchman: 4.9.0 - /usr/local/bin/watchman
SDKs:
iOS SDK:
Platforms: iOS 11.2, macOS 10.13, tvOS 11.2, watchOS 4.2
IDEs:
Android Studio: 3.1 AI-173.4819257
Xcode: 9.2/9C40b - /usr/bin/xcodebuild
npmPackages:
react: 16.6.0-alpha.8af6728 => 16.6.0-alpha.8af6728
react-native: ^0.57.3 => 0.57.3
npmGlobalPackages:
react-native-cli: 2.0.1
## Description
I'm trying to implement a collapsible header using the React Native Animated API. Using the event method I get AnimatedHeaderValue and interpolate it into a translateY value.
This value I apply to the container of my listView (so the listView moves vertically).
The iOS animation works perfectly, but the Android animation jumps and lags when I scroll. I tried to increase the scroll value and the Android animation became smoother.
This is the scrollView container that passes onScroll to the scrollView (listView):
```jsx
<ScCompanyNewsFeedList optimizeHeight
getRef={scrollView => {
console.log("SCROLL VIEW", scrollView)
this._scrollView = scrollView;
}}
scrollEventThrottle = { 2 }
onScroll={Animated.event(
[{ nativeEvent: { contentOffset: { y: this.AnimatedHeaderValue }}}],
)}
companyId={this.props.companyId}/>
}
```
This is the base container that contains the tabs and my scrollView. It moves when scrolling:
```jsx
<Animated.View style={[{flex: 1}, {transform: [{translateY: headerHeight}]}]}>
...
</Animated.View>
```
Interpolation
```js
const animationRange = this.AnimatedHeaderValue.interpolate({
inputRange: [0, scrollRange],
outputRange: [0, 1],
extrapolate: "clamp"
});
const headerHeight2 = animationRange.interpolate({
inputRange: [0, 1],
outputRange: [0, -200]
});
```
| Platform: Android,Priority: Mid,Bug | high | Critical |
370,288,473 | material-ui | [Dialog] Form + scroll in DialogContent |
- [x] This is not a v0.x issue. <!-- (v0.x is no longer maintained) -->
- [x] I have searched the [issues](https://github.com/mui-org/material-ui/issues) of this repository and believe that this is not a duplicate.
I know this has been discussed in #12126, but there is still an issue with the scrolling, the title and the actions aren't fixed and they scroll
Here is the codesandbox: https://codesandbox.io/s/qlo16r5v59
Here is the code
```jsx
<Dialog open aria-labelledby="form-dialog-title">
<form
onSubmit={e => {
alert("form submit!");
e.preventDefault();
}}
>
<DialogTitle id="form-dialog-title">Log in</DialogTitle>
<DialogContent>
<DialogContentText>
Please enter your account number and password.
</DialogContentText>
<TextField
autoFocus
margin="dense"
label="Account Number"
type="text"
fullWidth
/>
<TextField
margin="dense"
label="Password"
type="password"
fullWidth
/>
<div style={{ height: 1000 }} />
</DialogContent>
<DialogActions>
<Button onClick={() => alert("cancel")} color="primary">
Cancel
</Button>
<Button
type="submit"
onClick={() => alert("login")}
color="primary"
variant="contained"
>
Log in
</Button>
</DialogActions>
</form>
</Dialog>
```
| component: dialog,priority: important | medium | Critical |
370,289,498 | kubernetes | All Kubernetes components should offer an option for structured logs | **Is this a BUG REPORT or FEATURE REQUEST?**:
/kind feature
**What happened**:
From the documentation, it appears that only the audit log has a JSON format available (`--audit-log-format string Default: "json"`). All Kubernetes components (including the kubelets) should provide an option to write logs in JSON format for easier processing and indexing.
**How to reproduce it (as minimally and precisely as possible)**:
Other than for the audit logs, I do not see any options to turn on JSON/structured logging in the CLI reference documentation, which I believe is autogenerated from the source code. https://kubernetes.io/docs/reference/command-line-tools-reference/
**Environment**:
- Kubernetes version (use `kubectl version`): 1.8.7-21 with custom patches, but this appears to affect the most recent 1.12 release
- Cloud provider or hardware configuration: On premise | kind/feature,sig/instrumentation,lifecycle/frozen | medium | Critical |
370,328,717 | pytorch | Move collate_fn functionality / responsibility into Dataset object | ## π Feature
The `Dataset` class's `__getitem__` should be expected to handle advanced indexing, as opposed to just single-integer indexing, and the batching functionality provided by DataLoader should then use this instead of collate_fn, i.e.
`batch = collate_fn([dataset[i] for i in indices])`
would become
`batch = dataset[indices]`
## Motivation
- The type of collation necessary for a dataset is more a property of the Dataset than the DataLoader.
- In particular, for `TensorDataset` objects, this will massively speed up batch creation. (Currently DataLoader is prohibitively slow, see #4959.)
- It also allows Datasets to be used more flexibly outside of a DataLoader.
## Pitch
I think it's quite natural to embed this collate functionality into the Dataset rather than the DataLoader. There are no downsides that I can see to changing this functionality. By default, you could simply pass it back through the current default_collate argument. For example, the class for Dataset could become
```python
class Dataset(object):
def __getitem__(self, index):
if type(index) is int:
raise NotImplementedError
else:
requested_elements = [self[i] for i in index]
return default_collate(requested_elements)
```
This isn't completely backwards-compatible with the current API, however it's a simple fix. Existing implementations which are working can just change their implementation of the `__getitem__` method to
```python
if type(index) is int:
(... existing implementation...)
else:
    return super().__getitem__(index)
```
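For comparison, here is a minimal sketch of a dataset that already satisfies this contract for tensor data; the class name and the shapes are made up for the example:
```python
import torch
from torch.utils.data import Dataset

class FastTensorDataset(Dataset):
    """Hypothetical dataset whose __getitem__ handles advanced indexing natively."""
    def __init__(self, *tensors):
        self.tensors = tensors

    def __getitem__(self, index):
        # Works for a single int as well as for a list/tensor of indices, so
        # `dataset[indices]` already returns a collated batch of tensors.
        return tuple(t[index] for t in self.tensors)

    def __len__(self):
        return self.tensors[0].size(0)

data = FastTensorDataset(torch.randn(100, 3), torch.randint(0, 2, (100,)))
batch = data[[0, 5, 7]]  # advanced indexing yields a ready-made batch
```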
I've made a quick-and-dirty change to this effect [in this fork](https://github.com/mboratko/pytorch/commit/e16804206c867201afbc283885e3a743ec486115). (I'm having trouble with CUDA versions blocking me from setting up an actual build, and there's been no discussion yet, so I haven't set up a formal pull request.)
## Alternatives
An alternative workaround to #4959 is simply not to use a `DataLoader` for `TensorDatasets` (or any other datasets that support faster native advanced indexing) and instead write the batch functionality directly. This results in code duplication and also isn't possible for libraries that expect `DataLoader` objects (eg. fastai). The solution so far has been that I wrap `DataLoader` and `_DataLoaderIter` objects into new objects which make exactly the changes I am suggesting, however this obviously is less than desirable.
Obviously, there are also alternatives to my proposed implementation.
cc @SsnL @VitalyFedyunin @ejguan | module: dataloader,triaged | low | Critical |
370,334,065 | flutter | Widget customizing | There are a lot of hardcoded widgets with private parameters. Instead of devs having to rewrite them with custom copy-pasted code whenever end users request something different, it would be cool to provide a function that gives an inherited copy of the desired widget, looking something like
widget.copyof(baseclassname).with({changed_parameters_map}) | c: new feature,framework,P3,team-framework,triaged-framework | low | Minor |
370,337,271 | pytorch | [feature request] ignore_index and size_average in nn.AdaptiveLogSoftMaxWithLoss | **Feature request for nn.AdaptiveLogSoftMaxWithLoss**
Linked to https://github.com/pytorch/pytorch/pull/5287:
It would be nice if we could add these parameters to nn.AdaptiveLogSoftMaxWithLoss:
1) the "ignore_index" parameter, like in [F.cross_entropy](https://pytorch.org/docs/stable/nn.html?highlight=cross%20entropy#torch.nn.functional.cross_entropy), for batches with padding indexes
2) the `reduction` parameter
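In the meantime, one rough workaround sketch is to drop the padded positions by hand before calling the module, so that no ignore_index support is needed; the sizes and the `pad_idx` value below are assumptions for illustration:
```python
import torch
import torch.nn as nn

pad_idx = 0  # assumed padding index
adaptive_softmax = nn.AdaptiveLogSoftmaxWithLoss(64, 10000, cutoffs=[100, 1000])

hidden = torch.randn(32, 64)             # (batch, in_features)
target = torch.randint(0, 10000, (32,))  # some positions may equal pad_idx

mask = target != pad_idx                 # keep only non-padded positions
output, loss = adaptive_softmax(hidden[mask], target[mask])
```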
Thanks!
cc @albanD @mruberry @jbschlosser @SsnL @glample | module: nn,triaged | low | Minor |
370,357,842 | flutter | Tree shaking and platform specific widgets | There is a recurrent argument against Flutter in favor of React native:
React Native allows reusable components that have the OS look & feel without having to test which platform we are on.
With flutter we have to manually switch over `defaultTargetPlatform`.
But there are two problems with this approach:
1. It requires a lot of boilerplate:
```
Widget build(BuildContext context) {
  return defaultTargetPlatform == TargetPlatform.android
? RaisedButton(...)
: CupertinoButton(...);
}
```
2. This switch breaks tree shaking.
`defaultTargetPlatform` is _not_ a constant, therefore when doing a release build tree shaking cannot decide which platform's code to include... and ultimately includes both
____
This causes discussions such as this medium talk: https://medium.com/flutter-io/do-flutter-apps-dream-of-platform-aware-widgets-7d7ed7b4624d
or this package: https://pub.dartlang.org/packages/flutter_platform_widgets
These solve the boilerplate problem but kill tree-shaking.
Another alternative is to have different `main` for each platform.
This allows for a fully working tree-shaking, but we lose the "write once run everywhere" as we basically have to write things twice.
____
Flutter should provide an easy way to deal with this situation without having tons of boilerplate or shipping both Material and Cupertino widgets at once.
| c: new feature,tool,engine,P3,team-engine,triaged-engine,:hourglass_flowing_sand: | medium | Critical |
370,359,603 | flutter | Setup default project structure to create editable host apps and hidden "modules" for linking projects. | Setup default project structure to create editable host apps and hidden "modules" for linking projects.
This project structure should be equivalent to running the following 2 commands in the current tooling:
```
flutter create -t module my_app
cd my_app
flutter make-host-app-editable
```
Ensure that when making these changes that anyone who currently have a "module" project gets automatically migrated to this editable structure. We don't want public support for "make-host-app-editable" at this point in time, so we want to force everyone out of the current "module" project structure.
Rename the hidden directories for clarity:
.android_module
.ios_module
Ensure that hidden directories are marked hidden on windows. | tool,a: existing-apps,P3,team-tool,triaged-tool | low | Minor |
370,368,511 | rust | Compiling hyper 0.12 on armv7-linux-androideabi with target-features=+neon fails with LLVM ERROR: ran out of registers during register allocation | ```
root@7f26157a3837:~/hyper# cargo rustc --release -v --target "armv7-linux-androideabi" -- -C target-feature=+neon
Running `rustc --crate-name hyper src/lib.rs --crate-type lib --emit=dep-info,link -C opt-level=3 -C codegen-units=1 -C target-feature=+neon --cfg 'feature="__internal_flaky_tests"' --cfg 'feature="default"' --cfg 'feature="futures-cpupool"' --cfg 'feature="net2"' --cfg 'feature="runtime"' --cfg 'feature="tokio"' --cfg 'feature="tokio-executor"' --cfg 'feature="tokio-reactor"' --cfg 'feature="tokio-tcp"' --cfg 'feature="tokio-timer"' -C metadata=5c7c44dab8eed49e -C extra-filename=-5c7c44dab8eed49e --out-dir /root/hyper/target/armv7-linux-androideabi/release/deps --target armv7-linux-androideabi -L dependency=/root/hyper/target/armv7-linux-androideabi/release/deps -L dependency=/root/hyper/target/release/deps --extern bytes=/root/hyper/target/armv7-linux-androideabi/release/deps/libbytes-ddc5925e1332c4e2.rlib --extern futures=/root/hyper/target/armv7-linux-androideabi/release/deps/libfutures-169e26de4e2e0883.rlib --extern futures_cpupool=/root/hyper/target/armv7-linux-androideabi/release/deps/libfutures_cpupool-a7ff7f77e82e2fd1.rlib --extern h2=/root/hyper/target/armv7-linux-androideabi/release/deps/libh2-a6ab8093d4aaef1e.rlib --extern http=/root/hyper/target/armv7-linux-androideabi/release/deps/libhttp-a998474d6086d0cc.rlib --extern httparse=/root/hyper/target/armv7-linux-androideabi/release/deps/libhttparse-fe59ea708b984d2c.rlib --extern iovec=/root/hyper/target/armv7-linux-androideabi/release/deps/libiovec-0fbee7688c5d5997.rlib --extern itoa=/root/hyper/target/armv7-linux-androideabi/release/deps/libitoa-6968692ba2ead4d2.rlib --extern log=/root/hyper/target/armv7-linux-androideabi/release/deps/liblog-982ca16cb4a51ba3.rlib --extern net2=/root/hyper/target/armv7-linux-androideabi/release/deps/libnet2-eca709a2543aac40.rlib --extern time=/root/hyper/target/armv7-linux-androideabi/release/deps/libtime-d2ab1c7a701d0c5c.rlib --extern tokio=/root/hyper/target/armv7-linux-androideabi/release/deps/libtokio-f318b1cbbd06c200.rlib --extern tokio_executor=/root/hyper/target/armv7-linux-androideabi/release/deps/libtokio_executor-a34c5d48733eb612.rlib --extern tokio_io=/root/hyper/target/armv7-linux-androideabi/release/deps/libtokio_io-615c1ba46c31b299.rlib --extern tokio_reactor=/root/hyper/target/armv7-linux-androideabi/release/deps/libtokio_reactor-5f33b571b5aa2433.rlib --extern tokio_tcp=/root/hyper/target/armv7-linux-androideabi/release/deps/libtokio_tcp-dcd0b854ed32b20c.rlib --extern tokio_timer=/root/hyper/target/armv7-linux-androideabi/release/deps/libtokio_timer-b8925bce2a2b5c77.rlib --extern want=/root/hyper/target/armv7-linux-androideabi/release/deps/libwant-c71f954f09dd620c.rlib`
LLVM ERROR: ran out of registers during register allocation
error: Could not compile `hyper`.
```
| A-LLVM,O-Arm,T-compiler,C-bug,A-target-feature | low | Critical |
370,378,036 | pytorch | How can I build caffe2_gtest_main under pytorch/caffe2/test/ folder? | ## β Questions and Help
I am trying to build caffe2_gtest_main in caffe2/test/. So I uncommented `add_subdirectory(test)` in caffe2/CMakeLists.txt, but it seems there is no CMakeLists.txt in the caffe2/test/ folder. I tried to write one, but I am not sure about its dependencies and `target_link_libraries`. Can anyone help me enable the gtest build? Thanks.
### Please note that this issue tracker is not a help form and this issue will be closed.
We have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:
- [Discussion Forum](https://discuss.pytorch.org/)
| caffe2,triaged | low | Minor |
370,379,946 | go | x/build/cmd/gopherbot: consider any comment accompanied by a reopening to be βmore infoβ | Suggestion by @bcmills, split out from https://github.com/golang/go/issues/24834#issuecomment-381267439, lest it get lost:
> For the WaitingForInfo case, perhaps we should consider any comment accompanied by a reopening to be βmore infoβ: we should only re-close the issue if it was reopened without comment.
cc @dmitshur @FiloSottile | Builders,NeedsInvestigation | low | Minor |
370,380,244 | go | x/build/cmd/gopherbot: reply when a command is not understood | Broken out from some other recent issue (https://github.com/golang/go/issues/27961#issuecomment-427112448), a suggestion from @bradfitz: if @gopherbot doesn't understand a request (e.g. because a human substituted a synonym for one of the magic commands), it should reply and say so.
cc @FiloSottile @dmitshur
| Builders | low | Critical |
370,393,688 | go | x/build/maintner: reports inconsistent world state (e.g., issue state vs issue events) during short windows of time | ### Problem
A program that fetches a `maintner` corpus and tries to use its data to make decisions may make a mistake, because the world view is inconsistent during short windows of time. Even though the windows are short, it's guaranteed to happen for any daemon that loops over doing corpus updates and making decisions immediately after.
The most visible high-level example of this is #21312.
### Cause
This happens because there are effectively two GitHub data sources that are not synchronized:
1. changes to GitHub state (e.g., issue N now has labels X, Y, Z)
2. GitHub-generated events (e.g., issue N has had an "unlabeled" event)
To give a concrete example of an inconsistent state that `maintner` can report, consider when an issue has just been unlabeled. The first mutation received and processed by a `corpus.Update` call will be that the issue no longer has that label.
The mutation reporting that there has been an unlabeled _event_ on the same issue may come in a few seconds later. Until it does, it will appear that the issue does not have said label and it has never been unlabeled (e.g., `!gi.HasLabel("Documentation") && !gi.HasEvent("unlabeled")` will be true). Which is not the reality (if one considers the reality to be one where the unlabeled event and its effect to happen simultaneously).
#### Details
These are two distinct mutations received and processed by `corpus.Update` method:
```
received mutation at time t0:
github_issue: <
owner: "golang"
repo: "go"
number: 28103
updated: <
seconds: 1539629204
>
remove_label: 223401461
>
... (short window during which the issue doesn't have a label,
but the accompanying "unlabeled" event hasn't been received yet;
aka an inconsistent world state)
received mutation at time t1:
github_issue: <
owner: "golang"
repo: "go"
number: 28103
event: <
id: 1904921842
event_type: "unlabeled"
actor_id: 1924134
created: <
seconds: 1539629204
>
label: <
name: "Builders"
>
>
event: <
id: 1904921913
event_type: "labeled"
actor_id: 8566911
created: <
seconds: 1539629206
>
label: <
name: "Builders"
>
>
event_status: <
server_date: <
seconds: 1539629209
>
>
>
```
There is more relevant information in https://github.com/golang/go/issues/21312#issuecomment-430051456.
/cc @bradfitz | Builders | low | Major |
370,399,347 | rust | float rounding is slow | The scalar fallback for the sinewave benchmark in [fearless_simd](https://github.com/raphlinus/fearless_simd/blob/1cb5202e4c96233a90a170ba183aed1b64aa2fb1/README.md) is very slow as of the current commit, and the reason is the f32::round() operation. When that's changed to (x + 0.5).floor() it goes from 1622ns to 347ns, and 205ns with target_cpu=haswell. With default x86_64 cpu, floorf() is a function call, but it's an efficient one. The asm of roundf() that I looked at was very unoptimized (it moved the float value into int registers and did bit fiddling there). In addition, round() doesn't get auto-vectorized, but floor() does.
I think there's a rich and sordid history behind this. The C standard library has 3 different functions for rounding: [`round`](http://www.cplusplus.com/reference/cmath/round/), [`rint`](http://www.cplusplus.com/reference/cmath/rint/), and [`nearbyint`](http://www.cplusplus.com/reference/cmath/nearbyint/). Of these, the first rounds values with a 0.5 fraction away from zero, and the other two use the stateful rounding direction mode. This last is arguably a wart on C and it's a good thing the idea doesn't exist in Rust. In any case, the _default_ value is FE_TONEAREST, which rounds these values to the nearest even integer (see [Gnu libc documentation](https://www.gnu.org/software/libc/manual/html_node/Rounding.html) and [Wikipedia](https://en.wikipedia.org/wiki/Rounding#Round_half_to_even); the latter does a reasonably good job of motivating why you'd want to do this, the tl;dr is that it avoids some biases).
The [implementation](https://doc.rust-lang.org/src/std/f32.rs.html#51) of [f32::floor](https://doc.rust-lang.org/std/primitive.f32.html#method.floor) is usually intrinsics::floorf32 (but it's intrinsics::floorf64 on msvc, for reasons described there). That in turn is [llvm.floor.f32](https://github.com/rust-lang/rust/blob/b8b4150c042b06c46e29a9d12101f91fe13996e0/src/librustc_codegen_llvm/intrinsic.rs#L67). Generally the other round functions are similar, til it gets to llvm. Inside llvm, one piece of evidence that "round" is special is that it's not listed in the [list of instrinsics that get auto-vectorized](https://llvm.org/docs/Vectorizers.html#vectorization-of-function-calls).
Neither the C standard library nor llvm intrinsics have a function that rounds with "round half to even" behavior. This is arguably a misfeature. A case can be made that Rust should have this function; in cases where a recent Intel CPU is set as target_cpu or target_feature, it compiles to `roundps $8` (analogous to `$9` and `$a` for floor and ceil, respectively), and in compatibility mode the asm shouldn't be any slower than the existing code. I haven't investigated non-x86 architectures though.
For signal processing (the main use case of fearless_simd) I don't care much about the details of rounding of exactly 0.5 fraction values, and just want rounding to be fast. Thus, I think I'll use the _mm_round intrinsics in simd mode (with round half to even behavior) and (x + 0.5).floor() in fallback mode (with round half up behavior). It's not the case now (where I call f32::round) that the rounding behavior matches the SIMD case anyway. If there were a function with "round half to even" behavior, it would match the SIMD, would auto-vectorize well, and would have dramatically better performance with modern target_cpu.
| A-LLVM,I-slow,T-libs-api,A-floating-point | medium | Major |
370,403,889 | godot | Collision signals don't work on static and rigid RigidBody2D but do work between rigid and kinematic RigidBody2D | <!-- Please search existing issues for potential duplicates before filing yours:
https://github.com/godotengine/godot/issues?q=is%3Aissue
--> I tried. Sorry if dupe
**Godot version:**
<!-- Specify commit hash if non-official. -->
v3.0.6.stable.official.8314054
**OS/device including version:**
<!-- Specify GPU model and drivers if graphics-related. -->
arch linux (godot package v:aur/godot-bin 3.0.6-1)
**Issue description:**
<!-- What happened, and what was expected. -->
Collision reporting doesn't work when a RigidBody2D's mode is static, but it does when it's set to kinematic.
**Steps to reproduce:**
Create two rigid bodies and set one to static; collision reporting won't work even when properly set up.
**Minimal reproduction project:**
<!-- Recommended as it greatly speeds up debugging. Drag and drop a zip archive to upload it. -->
[godotBug.zip](https://github.com/godotengine/godot/files/2481047/godotBug.zip)
The 2 folders in the zip are snapshots of my git repo. Full history is retained in both dirs. The project is located at godotQWOPProj/project.godot in both dirs, and the script where I set up the signals is located at godotQWOPProj/groundLineStaticBody2D.gd, though it's attached to a RigidBody2D.
| bug,confirmed,topic:physics | low | Critical |
370,419,465 | kubernetes | CSI: `readOnly` field is not passed to CSI NodePublish RPC call | <!-- This form is for bug reports and feature requests ONLY!
If you're looking for help check [Stack Overflow](https://stackoverflow.com/questions/tagged/kubernetes) and the [troubleshooting guide](https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/).
If the matter is security related, please disclose it privately via https://kubernetes.io/security/.
-->
**Is this a BUG REPORT or FEATURE REQUEST?**:
> Uncomment only one, leave it on its own line:
>
/kind bug
> /kind feature
**What happened**:
As per the CSI spec, `readOnly` is passed to ControllerPublishVolumeRequest and should be passed to NodePublish as well.
Ref: https://github.com/container-storage-interface/spec/blob/master/lib/go/csi/v0/csi.pb.go#L1761
For CSI driver that do not support Controller Publish/Unpublish, the `readOnly` flag is missing from the nodePublish RPC request.
**What you expected to happen**:
`readOnly` flag is passed to both, ControllerPublish and NodePublish calls.
**How to reproduce it (as minimally and precisely as possible)**:
A CSI driver that doesn't support the ControllerPublish call should be able to validate from the NodePublish call that the `readOnly` flag is not passed to the driver.
**Anything else we need to know?**:
**Environment**:
- Kubernetes version (use `kubectl version`): 1.11
- Cloud provider or hardware configuration:
- OS (e.g. from /etc/os-release):
- Kernel (e.g. `uname -a`):
- Install tools:
- Others:
| kind/bug,priority/important-soon,sig/storage,lifecycle/frozen | medium | Critical |
370,466,784 | node | remote debugger unable to convert path correctly in different operating system | <!--
Thank you for reporting a possible bug in Node.js.
Please fill in as much of the template below as you can.
Version: output of `node -v`
Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows)
Subsystem: if known, please specify the affected core module name
If possible, please provide code that demonstrates the problem, keeping it as
simple and free of external dependencies as you can.
-->
* **Version**:v10.9.0
* **Platform**:Linux s97712 4.18.10-arch1-1-ARCH #1 SMP PREEMPT Wed Sep 26 09:48:22 UTC 2018 x86_64 GNU/Linux
* **Subsystem**:remote debugger
<!-- Please provide more details below this comment. -->
I debug node on Windows, but my node server is on Linux.
When I set a breakpoint through Chrome DevTools, like below:

It doesn't work; nothing happens when the breakpoint is reached.
I inspect the devtools through `chrome://inspect/#pages`

The devtools check the operating system with `Host.isWin()`, but this checks the local operating system, not the remote one.


| help wanted,inspector | low | Critical |
370,527,154 | godot | ImmediateGeometry Node does not color the points | **Godot version:**
3.1 master from the last few days
**OS/device including version:**
Intel HD 5000 (macbook air)
**Issue description:**
I'm using ImmediateGeometry to draw a list of points. I believe I can set the colour of the point similar to an opengl call by using IM.set_color inside the IM.begin call.
However, the points render black in the game. I think it could be one of several problems:
1. my misunderstanding the use of set_color
2. I'm using gles2 as it's an older laptop and perhaps it's not supported
3. There's a bug where ambient light does not affect the points
4. There's a bug where no lights affect the points (I added an omnilight to no effect)
I think this is related to #10024 which describes a similar problem on godot 2.
**Steps to reproduce:**
```
var point_size = 5
var im = ImmediateGeometry.new()
add_child(im)
var m = SpatialMaterial.new()
m.flags_use_point_size = true
m.params_point_size = point_size
im.set_material_override(m)
im.clear()
im.begin(Mesh.PRIMITIVE_POINTS, null)
im.set_color(Color(0,1,1))
for p in pts: #list of Vector3s
im.add_vertex(p)
im.end()
``` | bug,platform:macos,topic:rendering,topic:3d | low | Critical |
370,538,868 | rust | Documentation of std::mem::size_of confusingly only details the `#[repr(C)]` case | The [Size of Structs][1] paragraph bases its explanation of struct sizes on the idea that field are "ordered by declaration order", which is no longer the case since #37429 got merged. This might prompt a user to pointlessly manually order fields to optimize for size.
Instead, the documentation should mention the issue of field alignments, and explain that the compiler will reorder fields to minimize padding. The exact algorithm might be a bit too complex and/or subject to change, so I don't think that it needs to be described completely. It's probably enough to say something like "the compiler reorders fields for optimisation, don't expect a particular order unless using one of the `repr` specifiers".
[1]:https://doc.rust-lang.org/core/mem/fn.size_of.html#size-of-structs | C-enhancement,P-medium,A-docs,A-repr | low | Major |
370,586,589 | flutter | App is not Working in Android 21 below | ## App is not Working in Android 21 below
The app is working fine on Android 21 and above, but the same code produces the following error on Android 21 and below.
## Logs
```
Launching lib\main.dart on Android SDK built for x86 in debug mode...
Initializing gradle...
Resolving dependencies...
Gradle task 'assembleDebug'...
registerResGeneratingTask is deprecated, use registerGeneratedResFolders(FileCollection)
registerResGeneratingTask is deprecated, use registerGeneratedResFolders(FileCollection)
registerResGeneratingTask is deprecated, use registerGeneratedResFolders(FileCollection)
registerResGeneratingTask is deprecated, use registerGeneratedResFolders(FileCollection)
registerResGeneratingTask is deprecated, use registerGeneratedResFolders(FileCollection)
Note: C:\Users\iqor\AppData\Roaming\Pub\Cache\hosted\pub.dartlang.org\cloud_firestore-0.8.1+1\android\src\main\java\io\flutter\plugins\firebase\cloudfirestore\CloudFirestorePlugin.java uses unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
Note: C:\Users\iqor\AppData\Roaming\Pub\Cache\hosted\pub.dartlang.org\firebase_auth-0.6.2+1\android\src\main\java\io\flutter\plugins\firebaseauth\FirebaseAuthPlugin.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: C:\Users\iqor\AppData\Roaming\Pub\Cache\hosted\pub.dartlang.org\firebase_core-0.2.5+1\android\src\main\java\io\flutter\plugins\firebase\core\FirebaseCorePlugin.java uses unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
Note: C:\Users\iqor\AppData\Roaming\Pub\Cache\hosted\pub.dartlang.org\firebase_messaging-2.0.1\android\src\main\java\io\flutter\plugins\firebasemessaging\FlutterFirebaseInstanceIDService.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Built build\app\outputs\apk\debug\app-debug.apk.
Installing build\app\outputs\apk\app.apk...
timeout waiting for the application to start
```
Following app configuration
**app\build,gradle**
```gradle
def localProperties = new Properties()
def localPropertiesFile = rootProject.file('local.properties')
if (localPropertiesFile.exists()) {
localPropertiesFile.withReader('UTF-8') { reader ->
localProperties.load(reader)
}
}
def flutterRoot = localProperties.getProperty('flutter.sdk')
if (flutterRoot == null) {
throw new GradleException("Flutter SDK not found. Define location with flutter.sdk in the local.properties file.")
}
def flutterVersionCode = localProperties.getProperty('flutter.versionCode')
if (flutterVersionCode == null) {
flutterVersionCode = '1'
}
def flutterVersionName = localProperties.getProperty('flutter.versionName')
if (flutterVersionName == null) {
flutterVersionName = '1.0'
}
apply plugin: 'com.android.application'
apply plugin: 'kotlin-android'
apply from: "$flutterRoot/packages/flutter_tools/gradle/flutter.gradle"
android {
compileSdkVersion 27
sourceSets {
main.java.srcDirs += 'src/main/kotlin'
}
lintOptions {
disable 'InvalidPackage'
}
defaultConfig {
// TODO: Specify your own unique Application ID (https://developer.android.com/studio/build/application-id.html).
applicationId "com.com.kaledateapp"
minSdkVersion 16
targetSdkVersion 27
versionCode flutterVersionCode.toInteger()
versionName flutterVersionName
multiDexEnabled true
testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
}
buildTypes {
release {
// TODO: Add your own signing config for the release build.
// Signing with the debug keys for now, so `flutter run --release` works.
signingConfig signingConfigs.debug
}
}
}
flutter {
source '../..'
}
dependencies {
implementation "org.jetbrains.kotlin:kotlin-stdlib-jre7:$kotlin_version"
testImplementation 'junit:junit:4.12'
androidTestImplementation 'com.android.support.test:runner:1.0.2'
androidTestImplementation 'com.android.support.test.espresso:espresso-core:3.0.2'
implementation "com.android.support:appcompat-v7:27.1.1"
implementation 'com.android.support:multidex:1.0.3'
// implementation 'com.google.android.gms:play-services-places:15.0.1'
}
apply plugin: 'com.google.gms.google-services'
// Work around for one signal-gradle-plugin compatibility
com.google.gms.googleservices.GoogleServicesPlugin.config.disableVersionCheck = true
```
**android\build.gradle**
```gradle
buildscript {
ext.kotlin_version = '1.2.71'
repositories {
google()
jcenter()
}
dependencies {
classpath 'com.android.tools.build:gradle:3.2.1'
classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
classpath 'com.google.gms:google-services:4.0.1'
}
}
allprojects {
repositories {
google()
jcenter()
}
}
rootProject.buildDir = '../build'
subprojects {
project.buildDir = "${rootProject.buildDir}/${project.name}"
}
subprojects {
project.evaluationDependsOn(':app')
}
task clean(type: Delete) {
delete rootProject.buildDir
}
subprojects {
project.configurations.all {
resolutionStrategy.eachDependency { details ->
if (details.requested.group == 'com.android.support'
&& !details.requested.name.contains('multidex') ) {
details.useVersion "27.1.1"
}
}
}
}
```
**pubspec.yaml dependencies**
```
dependencies:
flutter:
sdk: flutter
# The following adds the Cupertino Icons font to your application.
# Use with the CupertinoIcons class for iOS style icons.
cupertino_icons: ^0.1.2
google_sign_in: ^3.2.1
firebase_auth: ^0.6.2+1
firebase_core: ^0.2.5+1
firebase_database: ^1.0.5
firebase_storage: ^1.0.4
cloud_firestore: ^0.8.1+1
flutter_facebook_login: ^1.1.1
intl: ^0.15.7
geocoder: ^0.1.1
image_picker: ^0.4.10
shared_preferences: ^0.4.3
codable: ^1.0.0
flutter_staggered_grid_view: ^0.2.2
fluttertoast: ^2.0.9
cached_network_image: ^0.5.0
firebase_messaging: ^2.0.1
dio: ^1.0.6
flutter_local_notifications: ^0.3.9
time_machine: ^0.9.4
``` | c: crash,platform-android,tool,t: gradle,customer: crowd,P2,a: plugins,team-android,triaged-android | low | Critical |
370,611,032 | vue-element-admin | How to install into an existing vue.js project? | How to install into an existing vue.js project? | feature | low | Minor |
370,629,298 | rust | staticlib libgcc_s dependency requirement since 1.21.0 (nightly-2017-08-24) | Hi Rust team!
Since Rust 1.21.0 and nightly-2017-08-24 a new dependency, `libgcc_s`, is needed for static libraries we're building for `gnu`, `musl` and `freebsd` targets.
This requirement slipped past us during a recent upgrade because the build process did not inform us of this new requirement. (I see this is an option `--print=native-static-libs` in the latest nightly (`nightly-2018-10-10`) and unreleased 1.22.0.)
The change was only visible to us upon linking to an Elixir app as a NIF during a two stage build process where on the second stage this particular library was absent. The error we see is:
```
Error loading shared library libgcc_s.so.1: No such file or directory (needed by /app/lib/elixir_package-0.0.1/priv/elixir_package_extension.so)
```
From what I can tell [this commit](https://github.com/rust-lang/rust/commit/c9645678e86861f670ce8b422c2e565c1a232916) and its parent PR (https://github.com/rust-lang/rust/pull/40113) seem related to this issue. According to the commit and PR the `libgcc_s` link requirement gets added for the musl target/to support dynamically-linked musl targets.
However, we see this requirement listed on the latest nightly for the gnu and freebsd targets as well.
Was this libgcc_s link requirement intended for the gnu and freebsd targets as well? Or was this an unintended change?
## Reproducible example project
To track the issue down I created an example project with a reproducible state. It allows building an example Rust staticlib for musl and link it to Elixir as a NIF. Instructions can be found in the README. I'm adding this to add context and show a scenario to show how this requirement breaks between versions.
https://github.com/tombruijn/rust-elixir-linking-issue-example-project | A-linkage,T-compiler | low | Critical |
370,652,285 | go | x/tools/go/packages: clarify error invariants | ```
$ ./gopackages -mode=allsyntax "" "nonesuch"
Go package "_/home/adonovan/got/src/golang.org/x/tools": (has errors)
has complete exported type info
-: no Go files in /home/adonovan/got/src/golang.org/x/tools
Go package "nonesuch": (has errors)
has complete exported type info
-: cannot find package "nonesuch" in any of:
/home/adonovan/goroot/src/nonesuch (from $GOROOT)
/home/adonovan/go/src/nonesuch (from $GOPATH)
```
packages.Load("nonesuch") and Load("") both return a Package, presumably just as a place to hold errors. The package Name field is empty, which is a good clue that the package doesn't exist, and the Errors slice is non-empty. However, the Types field is non-nil and Complete(), which causes the gopackages diagnostic/example tool to report "has complete exported type info" for a package that doesn't exist.
The task of this issue is to clarify which fields can be relied upon in each error scenario and to improve the output and logic of the gopackages tool to reflect this. | NeedsInvestigation,Tools | low | Critical |
370,659,559 | rust | Arms permitted when matching on uninhabited types | There seem to be two issues here:
```rust
pub enum Void {}
pub fn foo(x: Void) {
match x {
_ => {} // This arm shouldn't be permitted.
};
let _ = (); // This should be warned as unreachable, but isn't.
}
```
On the other hand, the following code *does* warn:
```rust
match () {
() => {} // okay
_ => {} // unreachable pattern
}
``` | A-lints,T-compiler,C-bug,A-exhaustiveness-checking | low | Major |
370,705,949 | pytorch | pack_padded_sequence throws IndexError when only kwargs are specified | ## π Bug
pack_padded_sequence throws
> IndexError: tuple index out of range
when only kwargs are specified, but no args.
## To Reproduce
Steps to reproduce the behavior:
1. Call `pack_padded_sequence` specifying all kwargs, but no positional args (see the sketch below).
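A minimal repro sketch (the tensor shapes and lengths below are made up for illustration, they are not from the original report):

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence

x = torch.randn(2, 3, 4)  # (batch, seq, feature)
lengths = [3, 2]          # sorted in decreasing order, as required

# Positional arguments: works.
packed = pack_padded_sequence(x, lengths, batch_first=True)

# Keyword arguments only: hits the IndexError reported below,
# raised from the torch.onnx wrapper's might_trace(args) check.
packed = pack_padded_sequence(input=x, lengths=lengths, batch_first=True)
```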
## Stack Trace
```
--> 108 packed = pack_padded_sequence(input=x[indices], lengths=lens.tolist(), batch_first=self.batch_first)
109 outputs, hidden_t = self.rnn(
110 input=packed,
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/torch/onnx/__init__.py in wrapper(*args, **kwargs)
65
66 # fast pass
---> 67 if not might_trace(args):
68 return fn(*args, **kwargs)
69
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/torch/onnx/__init__.py in might_trace(args)
139 def might_trace(args):
140 import torch
--> 141 first_arg = args[0]
142 if not isinstance(first_arg, torch.Tensor):
143 raise ValueError('First argument of {} is expected to be a tensor, '
IndexError: tuple index out of range
```
## Expected behavior
Positional args should not be required; calling with keyword arguments only should work.
## Environment
PyTorch version: 0.4.1
Is debug build: No
CUDA used to build PyTorch: None
OS: Mac OSX 10.13.6
GCC version: Could not collect
CMake version: Could not collect
Python version: 3.6
Is CUDA available: No
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Versions of relevant libraries:
[pip] Could not collect
[conda] Could not collect
cc @zou3519 | module: rnn,triaged | low | Critical |
370,715,640 | pytorch | cdf in torch.distributions.bernoulli throws NotImplementedError | ## π Bug
When _cdf_ is called with a specified value on an instance of _bernoulli_ from _torch.distributions_, the
> NotImplementedError
is thrown, while _log_prob_ works fine.
## To Reproduce
Steps to reproduce the behavior:
1. `a = torch.rand(1, 5)`
1. `m = torch.distributions.Bernoulli(a)`
1. `m.cdf(torch.ones(1, 5))`
## Stack trace
```
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
<ipython-input-27-ed52de363764> in <module>()
----> 1 m.cdf(1)
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/torch/distributions/distribution.py in cdf(self, value)
131 value (Tensor):
132 """
--> 133 raise NotImplementedError
134
135 def icdf(self, value):
NotImplementedError:
```
## Expected behavior
The cdf should not throw NotImplementedError.
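As a stopgap until `cdf` is implemented, the Bernoulli CDF can be computed directly from its definition; a minimal workaround sketch (not part of the original report):

```python
import torch

def bernoulli_cdf(dist, value):
    # CDF of Bernoulli(p): 0 for v < 0, 1 - p for 0 <= v < 1, and 1 for v >= 1.
    p = dist.probs
    v = torch.as_tensor(value, dtype=p.dtype)
    return torch.where(v < 0, torch.zeros_like(p),
                       torch.where(v < 1, 1 - p, torch.ones_like(p)))

m = torch.distributions.Bernoulli(torch.rand(1, 5))
print(bernoulli_cdf(m, torch.ones(1, 5)))  # all ones, since v >= 1
```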
## Environment
PyTorch version: 0.4.1
Is debug build: No
CUDA used to build PyTorch: None
OS: Mac OSX 10.13.6
GCC version: Could not collect
CMake version: Could not collect
Python version: 3.6
Is CUDA available: No
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Versions of relevant libraries:
[pip] Could not collect
[conda] Could not collect
cc @fritzo @neerajprad @alicanb @vishwakftw @nikitaved | todo,module: distributions,triaged | low | Critical |
370,749,686 | TypeScript | Using generic argument that extends something, stops inferring super keys when Exclude is used in addition to keyof and Pick | <!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. -->
**TypeScript Version:** 3.1.0
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:**
keyof extends pick exclude
**Code**
```ts
interface Base {
base1: number;
base2: number;
}
class Data<T extends Base> {
alpha(k:keyof Pick<T, Exclude<keyof T, "base1">>) { }
beta(k:keyof Pick<T, keyof T>) { }
}
function wrapped<T extends Base>() {
let d = new Data<T>();
d.alpha("base2"); // Argument of type '"base2"' is not assignable to parameter of type 'Exclude<keyof T, "base1">'.
d.beta("base2"); // works
}
```
**Expected behavior:**
Both alpha and beta should work.
**Actual behavior:**
Alpha does not work: `Argument of type '"base2"' is not assignable to parameter of type 'Exclude<keyof T, "base1">'.`
**Playground Link:** <!-- A link to a TypeScript Playground "Share" link which demonstrates this behavior -->
[Playground link](https://www.typescriptlang.org/play/#src=interface%20Base%20%7B%0D%0A%20%20%20%20base1%3A%20number%3B%0D%0A%20%20%20%20base2%3A%20number%3B%0D%0A%7D%0D%0A%0D%0Aclass%20Data%3CT%20extends%20Base%3E%20%7B%0D%0A%20%20%20alpha(k%3Akeyof%20Pick%3CT%2C%20Exclude%3Ckeyof%20T%2C%20%22base1%22%3E%3E)%20%7B%20%20%20%7D%0D%0A%20%20%20%20beta(k%3Akeyof%20Pick%3CT%2C%20keyof%20T%3E)%20%7B%20%20%20%7D%0D%0A%7D%0D%0A%0D%0Afunction%20wrapped%3CT%20extends%20Base%3E()%20%7B%0D%0A%20%20%20%20let%20d%20%3D%20new%20Data%3CT%3E()%3B%0D%0A%20%20%20%20d.alpha(%22base2%22)%3B%20%2F%2F%20Argument%20of%20type%20'%22base2%22'%20is%20not%20assignable%20to%20parameter%20of%20type%20'Exclude%3Ckeyof%20T%2C%20%22base1%22%3E'.%20%0D%0A%20%20%20%20d.beta(%22base2%22)%3B%20%20%2F%2F%20works%0D%0A%7D%0D%0A)
**Related Issues:** <!-- Did you find other bugs that looked similar? -->
| Suggestion,Needs Proposal,Domain: Conditional Types | low | Critical |
370,767,796 | TypeScript | JSON type | ## Search Terms
- JSON
## Suggestion
Type annotation for JSON in a string.
## Use Cases
Let's say you have a string which contains a valid JSON object, like so:
```js
const json = '{"hello": "world"}';
```
How can you type annotate the `json` variable? Currently, you can mark it as a `string`:
```ts
const json: string = '{"hello": "world"}';
```
Instead there could be some TypeScript language feature that helps with typing JSON in a string more precisely, for example:
```ts
const json: JSON {hello: string} = '{"hello": "world"}';
```
## Examples
Specify that string contains valid JSON.
```ts
let json: JSON any;
let json: JSON; // shorthand
```
Add typings to an HTTP response body.
```ts
let responseBody: JSON {ping: 'pong'} = '{"ping": "pong"}';
```
Add type safety to `JSON.parse()` method.
```ts
let responseBody: JSON {ping: 'pong'} = '{"ping": "pong"}';
let {ping} = JSON.parse(responseBody);
typeof ping // 'pong'
```
JSON cannot contain complex types.
```ts
type Stats = JSON {mtime: Date}; // Error: Date is not a valid JSON type.
```
Doubly serialized JSON.
```ts
let response: JSON {body: string} = '{"body": "{\"userId\": 123}"}';
let fetchUserResponse: JSON {body: JSON {userId: number}} = response;
```
Get type of serialized JSON string using `jsontype` keyword.
```ts
type Response = JSON {body: string, headers: object};
type ResponseJson = jsontype Response; // {body: string, headers: object}
type ResponseBody = ResponseJson['body']; // string
type ResponseBody = (jsontype Response)['body']; // string
```
Specify that variable is JSON-serializable.
```ts
let serializable: jsontype JSON = {hello: 'world'};
JSON.serialize(serializable); // OK
let nonserializable: object = {hello: 'world'};
JSON.serialize(nonserializable); // Error: 'nonserializable' might not be serializable.
```
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript / JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. new expression-level syntax)
## Syntax Alternatives
```ts
type ResponseRaw = JSON {ping: 'pong'};
type ResponseRaw = json {ping: 'pong'};
type ResponseRaw = string {ping: 'pong'};
type ResponseRaw = json_string {ping: 'pong'};
type ResponseRaw = JSON<{ping: 'pong'}>;
type ResponseRaw = JSON({ping: 'pong'});
type Response = jsontype Response; // {ping: 'pong'}
type Response = typeof Response; // {ping: 'pong'}
type Response = parsed(Response); // {ping: 'pong'}
```
| Suggestion,In Discussion | medium | Critical |
370,790,396 | godot | Incorrect syntax highlighting in Dictionaries and enums. | **Windows 10 64-bit - 3.1 alpha 0dbe01483a902c49ecedf4fd36b74353424145a5**
@Paulb23 Wanted to get your opinion on this.
I noticed it is valid to use keywords, class names, and class property names as dictionary keys, but they get syntax highlighting that makes you think otherwise.
Currently it is possible to create a very colorful dictionary. X)

| enhancement,topic:gdscript,topic:editor,usability | low | Major |
370,858,030 | TypeScript | Differentiate between implicit any and explicit any in the Compiler API. | <!-- π¨ STOP π¨ π¦π§π’π£ π¨ πΊπ»πΆπ· π¨
Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Please read the FAQ first, especially the "Common Feature Requests" section.
-->
## Search Terms
<!-- List of keywords you searched for before creating this issue. Write them down here so that others can find this suggestion more easily -->
- "Differentiate between implicit any and explicit any in the Compiler API."
## Suggestion
<!-- A summary of what you'd like to see added or changed -->
Working with the compiler API I've noticed the type checker doesn't differentiate between `any` originating from a decision by the author and a default case by the compiler.
In an internal branch I've fixed this by adding a new field to `ts.TypeFlags` and a new intrinsic type which is `any` with the flag added. I'm happy to upstream my implementation if this is accepted.
## Use Cases
<!--
What do you want to use this for?
What shortcomings exist with current approaches?
-->
```typescript
function hello(a, b) {}
function world(a: any, b: any) {
const c = b;
}
```
In the current version of TypeScript these 2 methods take identical arguments and can't be differentiated by the signatures alone. Adding explicit `any` to the compiler allows tools to distinguish these.
I've found this modification particularly valuable when utilizing the type checker for tools that analyse the output of the compiler. One example I've been using this for internally is calculating the explicit type coverage of TypeScript code, which could otherwise not be done.
## Examples
<!-- Show how this would be used and what the behavior would be -->
```typescript
if (ts.isIdentifier(node)) {
const type = this.checker.getTypeAtLocation(node);
if (type.getFlags() & ts.TypeFlags.Any &&
!(type.getFlags() & ts.TypeFlags.Explicit)) {
this._untypedIdentifiers += 1;
} else {
this._typedIdentifiers += 1;
}
}
```
With the compiler change it's now possible to exclude explicit any from type coverage calculations.
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript / JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. new expression-level syntax) | Suggestion,In Discussion | low | Critical |
370,860,161 | neovim | TUI: scrollbar | I think that a scrollbar in each window would be a big UX win.
I'm imagining the scrollbar as a single column, on the very right hand-side of each window, as is standard.
This could be very similar to the scrollbar in the insert completion popup menu. I don't care whether the "gripper" on the scrollbar is interactive or not (e.g. whether you can use the mouse to grab and move the scrollbar). The scrollbar could simply act as visual feedback, and I would be very happy. | enhancement,ui,tui,ui-extensibility,jumps-navigation | low | Major |
370,911,465 | flutter | Android App Launch does not use theme properly | **Short Summary of issue**
It seems that the app launch process does not handle themes properly: the status bar and background color are not themed the way you would expect if you were developing natively on Android.
**Additonal Details**
So from my testing there is two basic app launch scenarios a native app should use to launch the app per the material documentation:
https://material.io/design/communication/launch-screen.html#usage
Basically a splash screen and placeholder UI. I saw there are are many pull requests related to this such as:
https://github.com/flutter/flutter/issues/8147
https://github.com/flutter/flutter/pull/11505
It seems that on iOS it works how I would expect but on Android I would expect it to use the theme on launch to make it feel more native.
**Below is how it works on Native Android:**
_Android Native With Splash_
Note: Shows status bar per the theme.

_Android Native Without Splash:_
Note: Shows status bar per the theme.

**Steps to reproduce**
If I create the default flutter android application per Android Studio and below is my flutter doctor results
```
Doctor summary (to see all details, run flutter doctor -v):
[β] Flutter (Channel beta, v0.9.4, on Mac OS X 10.14 18A391, locale en-US)
[✓] Flutter (Channel beta, v0.9.4, on Mac OS X 10.14 18A391, locale en-US)
[✓] Android toolchain - develop for Android devices (Android SDK 28.0.3)
[✓] iOS toolchain - develop for iOS devices (Xcode 10.0)
[✓] Android Studio (version 3.2)
[✓] VS Code (version 1.28.1)
[✓] Connected devices (1 available)
I am testing on a Pixel 2 Emulator API 26
There are 3 situations I would like to highlight with examples
1) Default flutter app with no modifications looks like below:
Note: Status bar changes from Black to the themed blue

2) Flutter app with splash modified to match theme color
Note: Status bar changes from Black to themed blue

3) Flutter app with splash disabled
Note: Screen is black and then goes into themed app

**Goal state**
It would be great if there were a way to have the theme you choose in Flutter plug into Android as if you had included it via a style. The solution might also consider how to support richer placeholder UI, for example showing the toolbar on app launch.
**Workaround**
https://github.com/DavidCorrado/FlutterSplashTesting/commit/9aa6861e64fffe52d0138e19ce7be79ed75caaed | c: new feature,platform-android,framework,engine,f: material design,a: fidelity,has reproducible steps,P2,found in release: 3.3,found in release: 3.7,team-android,triaged-android | low | Major |
370,912,668 | godot | Rotation snapping is active only when it is turned off. | **Win10 64bit - Alpha 3.1 0dbe01483a902c49ecedf4fd36b74353424145a5**
@groud Likely some flags got flipped somewhere, but I have noticed that rotation snapping only works when both snapping options are disabled.

| bug,topic:editor,confirmed,usability | low | Major |
370,935,707 | rust | One of the E0599 note disappears when using specialization | Take the following code:
```rust
struct A {
b: bool,
}
struct Foo;
struct B<T, U=Foo>(T, U);
impl<T: Clone> Clone for B<T> {
fn clone(&self) -> B<T> {
B(self.0.clone(), Foo)
}
}
fn main() {
let i = B(A { b: true }, Foo);
let _j = i.clone();
}
```
(derived from `src/test/ui/unique-pinned-nocopy.rs`)
This fails to compile with:
```
error[E0599]: no method named `clone` found for type `B<A>` in the current scope
--> src/main.rs:17:16
|
7 | struct B<T, U=Foo>(T, U);
| ------------------------- method `clone` not found for this
...
17 | let _j = i.clone();
| ^^^^^
|
= note: the method `clone` exists but the following trait bounds were not satisfied:
`B<A> : std::clone::Clone`
= help: items from traits can only be used if the trait is implemented and in scope
= note: the following trait defines an item `clone`, perhaps you need to implement it:
candidate #1: `std::clone::Clone`
```
https://play.rust-lang.org/?version=nightly&mode=debug&edition=2015&gist=bf41d2dae1e36b2cb0ef6d8cae3529e4
Now, add the following:
```rust
#![feature(specialization)]
impl<T: Clone, U: Clone> Clone for B<T, U> {
default fn clone(&self) -> B<T, U> {
B(self.0.clone(), self.1.clone())
}
}
```
and it now fails with:
```
error[E0599]: no method named `clone` found for type `B<A>` in the current scope
--> src/main.rs:25:16
|
9 | struct B<T, U=Foo>(T, U);
| ------------------------- method `clone` not found for this
...
25 | let _j = i.clone();
| ^^^^^
|
= help: items from traits can only be used if the trait is implemented and in scope
= note: the following trait defines an item `clone`, perhaps you need to implement it:
candidate #1: `std::clone::Clone`
```
https://play.rust-lang.org/?version=nightly&mode=debug&edition=2015&gist=76f69c7ad78d25413784e9cbfd306583
The following note disappeared, but seems like it should still be emitted:
```
= note: the method `clone` exists but the following trait bounds were not satisfied:
`B<A> : std::clone::Clone`
``` | A-diagnostics,T-compiler,A-specialization,C-bug,requires-nightly,F-specialization | low | Critical |
370,937,170 | pytorch | caffe2 python custom operator could not update parameters | I wrote a Python custom operator in Caffe2 and set some parameters in it, and I have provided gradients for them. However, I found that the parameters are not in the model.params list, so they are not updated while the network runs.
def f(inputs, outputs):
    weight = inputs[1].data
    outputs[0].reshape(inputs[0].shape)
    outputs[0].data[...] = inputs[0].data * weight

def grad_f(inputs, outputs):
    # Ordering of inputs is [fwd inputs, outputs, grad_outputs]
    grad_output = inputs[3]
    grad_input = outputs
    grad_input[0].reshape(grad_output.shape)
    grad_input[1].reshape(grad_output.shape)
    grad_input[0].data[...] = grad_output.data * inputs[1].data
    grad_input[1].data[...] = grad_output.data * inputs[0].data
Here, I want to update the parameter `weight`. Is there any Python function to add the parameters I want to update into `model.params` so they can be updated?
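I'm not certain of the exact Caffe2 API here, but as an unverified sketch (assuming `model` is a `ModelHelper` and the weight blob already exists in the workspace under the hypothetical name `'custom_weight'`), one direction would be to register the blob as a parameter so the optimizer-building step can see it:

```python
from caffe2.python import core

# Unverified sketch: expose the Python op's weight blob as a model parameter.
weight = core.BlobReference('custom_weight')  # hypothetical blob name
model.params.append(weight)  # so the optimizer-building step can pick it up
```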
| caffe2 | low | Minor |
370,944,749 | vscode | [css] Autocomplete for media queries | VSCode doesn't offer autocompletion for media queries.
Why is that?

| help wanted,feature-request,css-less-scss | medium | Major |
370,951,265 | godot | Animation editor is not accessible from plugins | **Godot version:**
3.1 155652908a4284658b6c9b27640142a15fcb502c
**Issue description:**
Unless I missed it, the plugin API doesn't seem to provide a way to access the animation editor. Working with and extending the animation editor can be useful for many purposes. We have several use cases for that, and it's clear that projects may have specific animation-related workflows that could be implemented as plugins. | enhancement,topic:editor,topic:plugin | low | Minor |
370,964,415 | flutter | TextFormField editing cursor "balloon" floating on top of AppBar & TabBarView when scrolled | On Android 6.0.1 device. The form fields are in a SingleChildScrollView (no ListView used at all).
Initially when a field is clicked into:

After scrolling:

```
[✓] Flutter (Channel beta, v0.9.4, on Linux, locale en_XX.UTF-8)
    • Flutter version 0.9.4 at /home/.../flutter
    • Framework revision f37c235c32 (3 weeks ago), 2018-09-25 17:45:40 -0400
    • Engine revision 74625aed32
    • Dart version 2.1.0-dev.5.0.flutter-a2eb050044
[✓] Android toolchain - develop for Android devices (Android SDK 28.0.2)
    • Android SDK at /home/.../Android/Sdk
    • Android NDK location not configured (optional; useful for native profiling support)
    • Platform android-28, build-tools 28.0.2
    • Java binary at: /home/.../android-studio/jre/bin/java
    • Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1024-b01)
    • All Android licenses accepted.
[✗] Android Studio (version 3.1)
    • Android Studio at /home/.../android-studio
    ✗ Flutter plugin not installed; this adds Flutter specific functionality.
    ✗ Dart plugin not installed; this adds Dart specific functionality.
    • Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1024-b01)
[✓] VS Code (version 1.28.1)
    • VS Code at /usr/share/code
    • Flutter extension version 2.19.0
``` | a: text input,framework,a: fidelity,f: scrolling,has reproducible steps,P2,found in release: 3.7,found in release: 3.9,team-framework,triaged-framework | low | Major |
370,980,536 | godot | The panoramic sky field of view is not set correctly in orthogonal mode | <!-- Please search existing issues for potential duplicates before filing yours:
https://github.com/godotengine/godot/issues?q=is%3Aissue
-->
**Godot version:**
<!-- Specify commit hash if non-official. -->
0dbe01483a902c49ecedf4fd36b74353424145a5
**OS/device including version:**
<!-- Specify GPU model and drivers if graphics-related. -->
Linux
**Issue description:**
<!-- What happened, and what was expected. -->
Auto/default FOV


The same with a similar custom FOV


**Steps to reproduce:**
Attach a panorama texture to the environment node and press numpad-5.
| bug,topic:rendering,confirmed,topic:3d | low | Major |
371,004,972 | pytorch | [feature request] Spectral norm support in torch.norm or factor out Power Iteration from spectralnormalization in some other place and orthogonalization from PowerSGD hook | Currently PyTorch has a spectral matrix norm implementation based on power iteration hidden inside the SpectralNorm. Since new `torch.norm` supports matrix norms (e.g. nuclear norm), it may be good to move it there, so it's usable in other contexts too (e.g. for debugging/tracing spectral properties of neural net weights, and since PyTorch doesn't support truncated randomized SVD yet). | triaged,module: norms and normalization | low | Critical |
371,008,806 | flutter | [docs] explain difference between animation and secondaryAnimation in buildTransitions | I want to achieve the route-switching animation effect shown in the video.
Specifically, when the child page switches back to the parent page, both the child page and the parent page should move: the parent page is completely moved out while the child page is completely moved in.
I tested PageRouteBuilder for custom animations, but it only lets the child page customize its animation; the parent page does not animate.
Can you help me?

| framework,a: animation,d: api docs,f: routes,P2,team-framework,triaged-framework | low | Major |
371,017,054 | rust | dyn closures shouldn't lose range analysis information of the environment | Minimal reproducible example:
```rust
pub fn test(b: bool) -> Box<dyn Fn() -> bool> {
assert!(b);
Box::new(move || b)
}
```
currently compiles into:
```asm
example::test:
push rax
test edi, edi
je .LBB7_3
mov edi, 1
mov esi, 1
call __rust_alloc@PLT
test rax, rax
je .LBB7_4
mov byte ptr [rax], 1
lea rdx, [rip + .Lvtable.7]
pop rcx
ret
.LBB7_3:
call std::panicking::begin_panic
ud2
.LBB7_4:
mov edi, 1
mov esi, 1
call alloc::alloc::handle_alloc_error@PLT
ud2
example::test::{{closure}}:
mov al, byte ptr [rdi]
ret
```
But given that the `b` is guaranteed to be `true`, I'd expect it to be equivalent to inlining a constant.
The only way to propagate such invariants currently seems to be by using the LLVM `assume` intrinsic which is available only on nightly. E.g. for example above:
```rust
#![feature(core_intrinsics)]
use std::intrinsics::assume;
pub fn test(b: bool) -> Box<dyn Fn() -> bool> {
assert!(b);
Box::new(move || {
unsafe { assume(b); }
b
})
}
```
compiles to
```asm
example::test:
push rax
test edi, edi
je .LBB7_3
mov edi, 1
mov esi, 1
call qword ptr [rip + __rust_alloc@GOTPCREL]
test rax, rax
je .LBB7_4
mov byte ptr [rax], 1
lea rdx, [rip + .L__unnamed_4]
pop rcx
ret
.LBB7_3:
call std::panicking::begin_panic
ud2
.LBB7_4:
mov edi, 1
mov esi, 1
call qword ptr [rip + _ZN5alloc5alloc18handle_alloc_error17hfd3c7484b550d419E@GOTPCREL]
ud2
example::test::{{closure}}:
mov al, 1
ret
```
which is much closer to the expected result - function is using a constant as it should. However, this still allocates data for a variable even though it's no longer necessary - it has only one value and isn't used in the optimised function body anyway.
Note that the range analysis issue applies specifically to dynamically dispatched closures (which are still necessary sometimes and are used for generic callbacks), however the second part of the issue (unnecessary allocation) applies to statically dispatched generic closures too. For example:
```rust
pub fn test(b: bool) -> Box<impl Fn() -> bool> {
assert!(b);
Box::new(move || {
b
})
}
```
compiles to:
```asm
example::test:
push rax
test edi, edi
je .LBB6_3
mov edi, 1
mov esi, 1
call qword ptr [rip + __rust_alloc@GOTPCREL]
test rax, rax
je .LBB6_4
mov byte ptr [rax], 1
pop rcx
ret
.LBB6_3:
call std::panicking::begin_panic
ud2
.LBB6_4:
mov edi, 1
mov esi, 1
call qword ptr [rip + _ZN5alloc5alloc18handle_alloc_error17hfd3c7484b550d419E@GOTPCREL]
ud2
```
I understand these examples might seem superficial and can easily be optimised by hand, but I hope they showcase a more generic issue. It also applies to e.g. captures of an `enum` in `match` branches, where the captured enum is guaranteed to be within the already-matched variants, but this information is not properly propagated and is lost to the closure, preventing further optimisations. | A-LLVM,I-slow,C-enhancement,T-compiler | low | Critical |
371,040,910 | rust | Compiler can suggest `#[derive(move Trait)]` | When a custom derive generates a closure, and that closure causes a compiler error because it borrows its environment instead of correctly moving it, rustc suggests to put the `move` keyword into the `#[derive]` attribute:
Actual error message I just got:
```
error[E0373]: closure may outlive the current function, but it borrows `route_kind`, which is owned by the current function
--> modules/debug/src/lib.rs:19:10
|
19 | #[derive(FromRequest)]
| ^^^^^^^^^^^
| |
| `route_kind` is borrowed here
| may outlive borrowed value `route_kind`
help: to force the closure to take ownership of `route_kind` (and any other referenced variables), use the `move` keyword
|
19 | #[derive(move FromRequest)]
| ^^^^^^^^^^^^^^^^
error: aborting due to previous error
```
(this is happening in a rather convoluted production codebase, so unfortunately I don't have a test case yet)
If I'm not mistaken, this can only happen when the custom derive macro outputs incorrect code, so the impact is pretty limited. | A-attributes,A-diagnostics,A-macros,T-compiler,C-bug,E-needs-mcve,D-invalid-suggestion | low | Critical |
371,041,879 | angular | Boolean animation states not converted to strings in AnimationEvent | ## I'm submitting a...
<!-- Check one of the following options with "x" -->
<pre><code>
[ ] Regression (a behavior that used to work and stopped working in a new release)
[x] Bug report <!-- Please search GitHub for a similar issue or PR before submitting -->
[ ] Performance issue
[ ] Feature request
[ ] Documentation issue or request
[ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question
[ ] Other... Please describe:
</code></pre>
## Current behavior
<!-- Describe how the issue manifests. -->
Boolean animation states are not converted to strings in `fromState` and `toState` in `AnimationEvent` despite these properties being typed as string.
## Expected behavior
<!-- Describe what the desired behavior would be. -->
AnimationEvent typing should be changed or boolean states should be converted to strings.
## Minimal reproduction of the problem with instructions
https://angular-gitter-gx4x9v.stackblitz.io
## What is the motivation / use case for changing the behavior?
When using boolean animation states, one has to cast `fromState` and `toState` from `AnimationEvent`.
## Environment
<pre><code>
Angular version: 6.1.10
Browser: not relevant
</code></pre>
| type: bug/fix,area: animations,freq1: low,P4 | low | Critical |
371,042,894 | pytorch | Caffe2: Two entries of external_input "data_0" in mnist_predict_net.pbtxt file | After training the MNIST model in Caffe2, I found that there are two entries of external_input "data_0" in the mnist predict_net file. I think it should only have one entry, so does anyone know how these external_input entries are generated while training the model?
I have attached the mnist .pbtxt file here.
[mnist_predict_net.pbtxt.zip](https://github.com/pytorch/pytorch/files/2487290/mnist_predict_net.pbtxt.zip)
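For reference, the duplicated entries can be inspected programmatically; a small sketch, assuming the Caffe2 protobuf bindings are installed and the attached .pbtxt has been extracted locally:

```python
from caffe2.proto import caffe2_pb2
from google.protobuf import text_format

net = caffe2_pb2.NetDef()
with open('mnist_predict_net.pbtxt') as f:
    text_format.Merge(f.read(), net)

# external_input is a repeated string field; 'data_0' shows up twice here.
print(list(net.external_input))
```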
| caffe2 | low | Minor |
371,043,405 | pytorch | [feature request] Operator Overloading | I realized that PyTorch does partly prevent operator overloading by raising a `TypeError` rather than returning `NotImplemented`. Therefore Python never checks object methods implementing operations with reflected operands, e. g. `__rmul__`:
```
In[1]: import torch
In[2]: class Two:
def __mul__(self, other):
return other * 2
def __rmul__(self, other):
return self * other
In[3]: two = Two()
In[4]: two * 3
Out[4]: 6
In[5]: 3 * two
Out[5]: 6
In[6]: two * torch.tensor(3)
Out[6]: tensor(6)
In[7]: torch.tensor(3) * two
Traceback (most recent call last):
File "/path/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2961, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-8-e59deefe7a13>", line 1, in <module>
torch.tensor(3) * two
TypeError: mul() received an invalid combination of arguments - got (Two), but expected one of:
* (Tensor other)
didn't match because some of the arguments have invalid types: (!Two!)
* (float other)
didn't match because some of the arguments have invalid types: (!Two!)
```
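For illustration, here is a toy sketch of the fallback behaviour being requested; `TensorLike` is a hypothetical stand-in class, not `torch.Tensor`'s actual `__mul__`:

```python
class TensorLike:
    """Toy stand-in for a tensor type that cooperates with reflected operands."""
    def __init__(self, value):
        self.value = value

    def __mul__(self, other):
        if isinstance(other, (int, float)):
            return TensorLike(self.value * other)
        if isinstance(other, TensorLike):
            return TensorLike(self.value * other.value)
        # Returning NotImplemented instead of raising TypeError lets Python
        # fall back to the reflected method, e.g. Two.__rmul__ from above.
        return NotImplemented

class Two:
    def __rmul__(self, other):
        return other * 2

print((TensorLike(3) * Two()).value)  # 6, dispatched through Two.__rmul__
```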
Is the current `TypeError` behavior intended (e.g. for performance reasons)? I would prefer PyTorch to return `NotImplemented` (which might still result in a `TypeError` being raised) rather than raising a `TypeError` directly. | todo,feature,triaged | low | Critical |
371,065,454 | vscode | Problem matchers should restore problems on close | For a detailed discussion see https://github.com/Microsoft/vscode/issues/47386
This branch https://github.com/Microsoft/vscode/tree/dbaeumer/47386 still has the correct start of the work. | feature-request,tasks | low | Major |
371,071,612 | godot | Path should present an editor warning if bake_interval is higher than the curve's length | <!-- Please search existing issues for potential duplicates before filing yours:
https://github.com/godotengine/godot/issues?q=is%3Aissue
-->
**Godot version:** b1cd673e180ece86b74fc228c8221de2816da3e1
<!-- Specify commit hash if non-official. -->
**OS/device including version:** --
<!-- Specify GPU model and drivers if graphics-related. -->
**Issue description:**
It is possible to set a curve's `bake_interval` to higher values than the curve's length. This leads to unintuitive behaviour, e.g. PathFollow nodes just won't work and can't move their "Offset"-sliders.
No hint is given to the user that this is related due to a `bake_interval` value that is not sensible.
It would be good if either an editor warning on the Path node is presented, or if making such a setting would be impossible (both from the editor and from GDScript).
<!-- What happened, and what was expected. -->
**Steps to reproduce:**
1. Create a Path and a PathFollow node as child.
2. draw a short Path
3. set the curve's bake_interval to something very high (which is easy with the current slider scaling)
4. Try to move the PathFollow's offset slider: won't work.
| enhancement,topic:editor | low | Minor |
371,084,164 | TypeScript | Add properties in JSDoc generation on destructuring in javascript file | <!-- π¨ STOP π¨ π¦π§π’π£ π¨ πΊπ»πΆπ· π¨
Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Please read the FAQ first, especially the "Common Feature Requests" section.
-->
## Search Terms
JSDoc destructuring generate property javascript
<!-- List of keywords you searched for before creating this issue. Write them down here so that others can find this suggestion more easily -->
## Suggestion
When you use JSDoc generation on a function that uses destructuring, it could list the destructured properties.
<!-- A summary of what you'd like to see added or changed -->
## Use Cases
Avoid redundancy when writing code.
It could be useful for JavaScript and TypeScript files.
<!--
What do you want to use this for?
What shortcomings exist with current approaches?
-->
## Examples
Action:
<img width="724" alt="capture d ecran 2018-10-17 a 15 21 44" src="https://user-images.githubusercontent.com/7831572/47089671-a04db100-d221-11e8-991b-8026474f9fc1.png">
Actual:
<img width="542" alt="capture d ecran 2018-10-17 a 15 22 00" src="https://user-images.githubusercontent.com/7831572/47089698-acd20980-d221-11e8-9fab-f559210e0f3c.png">
Expectation:
<img width="297" alt="capture d ecran 2018-10-17 a 15 22 48" src="https://user-images.githubusercontent.com/7831572/47089702-af346380-d221-11e8-8cd7-94e8010ec8cb.png">
## Checklist
My suggestion meets these guidelines:
* [X] This wouldn't be a breaking change in existing TypeScript / JavaScript code
* [X] This wouldn't change the runtime behavior of existing JavaScript code
* [X] This could be implemented without emitting different JS based on the types of the expressions
* [X] This isn't a runtime feature (e.g. new expression-level syntax)
| Suggestion,In Discussion,Domain: Quick Fixes,Domain: JSDoc | low | Critical |
371,172,439 | pytorch | Build the docker image from source, but torch.cuda.is_available()==false | ## β Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
We have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:
- [Discussion Forum](https://discuss.pytorch.org/)
## System info Tesla P100

## CUDA test shows PASS
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 9.0, CUDA Runtime Version = 9.0, NumDevs = 8
Result = PASS
## But PyTorch can't find a CUDA device:

I have no idea, please help, thank you! | triaged,module: docker | low | Major |
371,218,431 | go | runtime: Interferes with Windows timer frequency | ### What version of Go are you using (`go version`)?
go version go1.11.1 windows/amd64
### Does this issue reproduce with the latest release?
Try to set a high frequency timer
### What operating system and processor architecture are you using (`go env`)?
windows amd64
### What did you do?
Set the OS to a high frequency tick using NtSetTimerResolution
### What did you expect to see?
NtQueryTimerResolution should return 1ms
### What did you see instead?
NtQueryTimerResolution returned 15ms
This used to work but was broken by the following commit:
https://github.com/golang/go/commit/11eaf428867417b9d5fab4deadd0ef03c9fd9773
This "reduces the timer resolution when the Go process is entirely idle" however this is a system wide setting so golang cannot just assume another app on the machine doesn't require a high resolution timer to work correctly, hence setting it back to 15ms isn't safe.
| OS-Windows,NeedsInvestigation,compiler/runtime | medium | Critical |
371,236,587 | react | Uncontrolled input type="checkbox" reflects updating `defaultChecked` in Edge and Safari | <!--
Note: if the issue is about documentation or the website, please file it at:
https://github.com/reactjs/reactjs.org/issues/new
-->
**Do you want to request a *feature* or report a *bug*?**
bug
**What is the current behavior?**
Updating the value of `defaultChecked` causes a change to the DOM property's `checked` value.
**If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem. Your bug will get fixed much faster if we can run your code and it doesn't have dependencies other than React. Paste the link to your JSFiddle (https://jsfiddle.net/Luktwrdm/) or CodeSandbox (https://codesandbox.io/s/new) example below:**
https://codesandbox.io/s/yjop5zwmr9
**What is the expected behavior?**
Updating the value of `defaultChecked` should not affect the DOM property's `checked` value.
(no log in codesandbox is expected)
**Which versions of React, and which browser / OS are affected by this issue? Did this work in previous versions of React?**
Reproduces with React v15.2.0 and v16.5.2, ReactDOM v15.2.0 and v16.5.2.
Does not reproduce with v15.1.
Browsers: Safari 12 on macOS Sierra, Edge 42 on Windows 10.
Does not reproduce in Chrome (70, beta), Chrome (72, canary), Firefox (62), or IE11 on Windows 10.
Smartphone browsers were not checked. | Component: DOM,Type: Needs Investigation | low | Critical |