id | repo | title | body | labels | priority | severity |
---|---|---|---|---|---|---|
406,900,118 | flutter | Bring FlutterDriver to feature parity with WidgetController | Does it make sense for FlutterDriver to be a subclass of WidgetController?
Or at least, can the FlutterDriver offer a wider range of APIs, closer to what the WidgetController does? It lacks methods like drag, fling, and longPress that would come in handy in integration testing as well. | a: tests,c: new feature,framework,t: flutter driver,P2,team-framework,triaged-framework | low | Minor |
406,917,224 | flutter | TextField label has too much padding when unfocused | <!-- Thank you for using Flutter!
If you are looking for support, please check out our documentation
or consider asking a question on Stack Overflow:
* https://flutter.io/
* https://docs.flutter.io/
* https://stackoverflow.com/questions/tagged/flutter?sort=frequent
If you have found a bug or if our documentation doesn't have an answer
to what you're looking for, then fill out the template below. Please read
our guide to filing a bug first: https://flutter.io/bug-reports/
-->
When a `TextField` is unfocused, the `labelText` property in an `InputDecoration` is not equivalent to an Android edit text. It appears that it contains too much padding around the label. I attached images to showcase the issue.
The focused case looks fine.
## Steps to Reproduce
<!--
Please tell us exactly how to reproduce the problem you are running into.
Please attach a small application (ideally just one main.dart file) that
reproduces the problem. You could use https://gist.github.com/ for this.
If the problem is with your application's rendering, then please attach
a screenshot and explain what the problem is.
-->
1. On Flutter:
Unfocused:
<img width="265" alt="flutter_unfocused" src="https://user-images.githubusercontent.com/11878569/52295322-88717880-2949-11e9-9c15-1c1c5fb30b1c.png">
Focused:
<img width="264" alt="flutter_focused" src="https://user-images.githubusercontent.com/11878569/52295323-88717880-2949-11e9-9b9d-102b83252908.png">
```dart
Widget _buildUsernameField() {
return TextField(
onChanged: _onUsernameChanged,
maxLines: 1,
decoration: InputDecoration(
labelText: "Username",
),
);
}
```
2. On Android:
Unfocused:
<img width="264" alt="android_unfocused" src="https://user-images.githubusercontent.com/11878569/52295353-9b844880-2949-11e9-8e16-9ce0bd7ab492.png">
Focused:
<img width="259" alt="android_focused" src="https://user-images.githubusercontent.com/11878569/52295352-9aebb200-2949-11e9-8b2c-ca05d6867b4e.png">
```xml
<android.support.design.widget.TextInputLayout
android:id="@+id/login_username_layout"
android:layout_width="match_parent"
android:layout_height="wrap_content">
<android.support.design.widget.TextInputEditText
android:id="@+id/login_username"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:hint="@string/login_username"
android:singleLine="true" />
</android.support.design.widget.TextInputLayout>
```
## Logs
<!--
Run your application with `flutter run --verbose` and attach all the
log output below between the lines with the backticks. If there is an
exception, please see if the error message includes enough information
to explain how to solve the issue.
-->
<!--
Run `flutter analyze` and attach any output of that command below.
If there are any analysis errors, try resolving them before filing this issue.
-->
<!-- Finally, paste the output of running `flutter doctor -v` here. -->
```
[✓] Flutter (Channel beta, v1.1.8, on Mac OS X 10.14 18A391, locale en-US)
• Flutter version 1.1.8 at /Users/dev/flutter
• Framework revision 985ccb6d14 (4 weeks ago), 2019-01-08 13:45:55 -0800
• Engine revision 7112b72cc2
• Dart version 2.1.1 (build 2.1.1-dev.0.1 ec86471ccc)
[✓] Android toolchain - develop for Android devices (Android SDK version 28.0.3)
• Android SDK at /Users/Library/Android/sdk
• Android NDK location not configured (optional; useful for native profiling support)
• Platform android-28, build-tools 28.0.3
• Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1248-b01)
• All Android licenses accepted.
[!] iOS toolchain - develop for iOS devices (Xcode 10.1)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Xcode 10.1, Build version 10B61
✗ Verify that all connected devices have been paired with this computer in Xcode.
If all devices have been paired, libimobiledevice and ideviceinstaller may require updating.
To update with Brew, run:
brew update
brew uninstall --ignore-dependencies libimobiledevice
brew uninstall --ignore-dependencies usbmuxd
brew install --HEAD usbmuxd
brew unlink usbmuxd
brew link usbmuxd
brew install --HEAD libimobiledevice
brew install ideviceinstaller
• ios-deploy 1.9.2
✗ ios-deploy out of date (1.9.4 is required). To upgrade with Brew:
brew upgrade ios-deploy
• CocoaPods version 1.5.3
[!] Android Studio (version 3.3)
• Android Studio at /Applications/Android Studio.app/Contents
✗ Flutter plugin not installed; this adds Flutter specific functionality.
✗ Dart plugin not installed; this adds Dart specific functionality.
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1248-b01)
[✓] IntelliJ IDEA Community Edition (version 2018.3.2)
• IntelliJ at /Applications/IntelliJ IDEA CE.app
• Flutter plugin version 31.3.4
• Dart plugin version 183.4886.3
[✓] VS Code (version 1.30.2)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 2.22.3
[✓] Connected device (1 available)
• Android SDK built for x86 • emulator-5554 • android-x86 • Android 9 (API 28) (emulator)
! Doctor found issues in 2 categories.
``` | a: text input,framework,f: material design,a: fidelity,has reproducible steps,P2,found in release: 3.3,found in release: 3.7,team-design,triaged-design | low | Critical |
406,952,610 | flutter | [webview_flutter] cannot follow `tg://` links on iOS | Cannot follow `tg://` links on iOS to Telegram client. For some reason, Discord links do work, however. I have not tested this on Android.
```
$ flutter doctor
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel dev, v1.2.0, on Mac OS X 10.13.6 17G4015, locale en-US)
[✓] Android toolchain - develop for Android devices (Android SDK version 28.0.3)
[✓] iOS toolchain - develop for iOS devices (Xcode 10.1)
[✓] Android Studio (version 3.1)
[✓] VS Code (version 1.30.2)
[✓] Connected device (1 available)
• No issues found!
``` | p: webview,package,has reproducible steps,P3,found in release: 3.10,found in release: 3.11,team-ios,triaged-ios | medium | Major |
406,964,676 | TypeScript | type callbacks | search: https://www.google.com/search?q=type+functions+site%3Ahttps%3A%2F%2Fgithub.com%2FMicrosoft%2FTypeScript
The more I work with advanced types, the more apparent the following problems become:
<strike>
## No intermediate types
At the current stage, advanced types are represented via expressions. Quite often there is a need to reuse exactly the same part of such an expression in more than one place. Currently there is no way to capture a part of it and reuse it; consider:
```ts
type A<X> = { x: B<X>, y: B<X> }
```
here I wish I could save `B<X>` into a local type `C` somehow to be able to write
```ts
// speculative syntax:
type A<X> = <
type C = B<X>;
{ x: C, y: C };
>;
```
</strike>
per @RyanCavanaugh intermediate types are covered by https://github.com/Microsoft/TypeScript/issues/23188
## No way to specify a type callback
Currently type parameters are the only way to define a parametric type. There is no way to define a type based on a type callback:
```ts
interface Z { x: string; y: number; }
// speculative syntax:
type A<T, F: X =>..., T> = { [P in keyof T]: F<T[P]>; } // <-- wish could do this
type B = A<Z, X => X extends string ? true : false>; // { x: true; y: false; }
type C = A<Z, X => { get: () => X }>; // { x: { get: () => string; }; y: { get: () => number; }; }
```
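For comparison, one partial workaround that type-checks today (not the proposed syntax) is to defunctionalize the type-level callback: collect the "type functions" into an interface keyed by name and pass a key instead of a lambda. The `TypeFns` name below is only illustrative.
```ts
interface Z { x: string; y: number; }
// A fixed menu of "type callbacks", keyed by name.
interface TypeFns<X> {
  isString: X extends string ? true : false;
  getter: { get: () => X };
}
// The "callback" parameter is now a key into that menu.
type A<T, F extends keyof TypeFns<any>> = { [P in keyof T]: TypeFns<T[P]>[F] };
type B = A<Z, 'isString'>; // { x: true; y: false; }
type C = A<Z, 'getter'>;   // { x: { get: () => string; }; y: { get: () => number; }; }
```
This covers the two examples above, but it obviously does not scale to arbitrary caller-supplied type lambdas, which is the point of this request.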
I am not sure what a proposal would be, but these problems are definitely worth a discussion | Suggestion,Needs Proposal | low | Minor |
406,993,828 | TypeScript | Set context of this keyword in definition file | ## Search Terms
set `this` context in typescript definition file
## Suggestion
A way to change the context of `this` keyword in a type definition file. This is admittedly a weird feature since, to my knowledge, it doesn't really align with how modules work. Perhaps there is another way to achieve this use case.
## Use Cases
Here's a video of how my application works - https://vimeo.com/315553984
I have an application that allows users to write code which will then be executed via the `Function` API - https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Function
Here's a simplified version of how the app works at its core -
```
const userCode = `
console.log(this, foo, bar);
`;
const func = Function('foo', 'bar', userCode);
func.call({myThis: 'hello'}, 'fooString', 'barString');
```
My app allows users to write their code in Monaco editor and I've successfully added type definitions for the `foo` and `bar` variables via the `addExtraLib` method - https://microsoft.github.io/monaco-editor/api/interfaces/monaco.languages.typescript.languageservicedefaults.html
So when the user starts to type `foo` intellisense informs them that it is a string. I'd like to do the same for the `this` keyword.
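For illustration, this is roughly the kind of extra lib that can be declared for the injected arguments today (the declarations below are just placeholders); there is no analogous way to describe the type of `this` for the user's code:
```ts
// Hypothetical contents passed to addExtraLib for the example above.
declare const foo: string;
declare const bar: string;
// Wanted: some way to also say "in this file, `this` is { myThis: string }".
```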
## Examples
See use cases ^
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback | low | Major |
407,005,843 | node | http2 API documentation issues | ## Compatibility API example
Here's the example code from the API docs in the [compatibility API section](https://nodejs.org/api/http2.html#http2_compatibility_api):
```js
const http2 = require('http2');
const server = http2.createServer((req, res) => {
res.setHeader('Content-Type', 'text/html');
res.setHeader('X-Foo', 'bar');
res.writeHead(200, { 'Content-Type': 'text/plain' });
res.end('ok');
});
```
The `Content-Type` header is overwritten here in `writeHead()`, but unless the reader knows that `writeHead()` *merges* its headers with the headers set in `setHeader()`, the reader may think there's something special going on here.
Suggestion: remove first call to `setHeader()`; ensure example is straightforward
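For concreteness, a minimal sketch of what the simplified example could look like (the only change being that the headers are set once, in `writeHead()`):
```js
const http2 = require('http2');

const server = http2.createServer((req, res) => {
  // Set the headers in one place so there is no setHeader()/writeHead() merging to explain.
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('ok');
});

server.listen(3000);
```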
## No documentation of `HTTP2_HEADER_*` constants
There are many references to e.g., `http2.constants.HTTP2_HEADER_STATUS`. These are not listed in the [constants section](https://nodejs.org/api/http2.html#http2_http2_constants). Is `HTTP2_HEADER_CONTENT_TYPE` different than `'Content-Type'`? Can these be used interchangeably? If I use the compatibility API, should I use these constants?
Suggestion: document the constants and when to use them. | help wanted,doc,http2 | low | Major |
407,007,247 | TypeScript | Error messages should place most relevant information first. | Here's an actual example of an error message that can go wrong in our library when mispelling a property when creating an object literal:
```
[ts]
Type '{ containers: { nginx: { image: string; memory: number; portMappings: NetworkListener[]; enviroment: number; }; }; }' is not assignable to type 'FargateTaskDefinitionArgs'.
Types of property 'containers' are incompatible.
Type '{ nginx: { image: string; memory: number; portMappings: NetworkListener[]; enviroment: number; }; }' is not assignable to type 'Record<string, Container>'.
Property 'nginx' is incompatible with index signature.
Type '{ image: string; memory: number; portMappings: NetworkListener[]; enviroment: number; }' is not assignable to type 'Container'.
Object literal may only specify known properties, but 'enviroment' does not exist in
type 'Container'. Did you mean to write 'environment'? [2322]
- fargateService.d.ts(169, 5): The expected type comes from property 'taskDefinitionArgs'
which is declared here on type 'FargateServiceArgs'
```
The funny thing here is that the final parts of the error message are superb. Namely:
```
- Object literal may only specify known properties, but 'enviroment' does not exist in
type 'Container'. Did you mean to write 'environment'? [2322]
- fargateService.d.ts(169, 5): The expected type comes from property 'taskDefinitionArgs'
which is declared here on type 'FargateServiceArgs'
```
However, the actual output is quite cluttered and hard to glean the information from. This is especially true in an IDE setting, where one sees something like this:

This view is particularly problematic due to the wrapping and wall-of-goop nature of this type of error.
My suggestion would be to invert the information being provided by errors. Give the most specific and directly impactful information first (i.e. `Object literal may only specify known properties, but 'enviroment' does not exist in type 'Container'. Did you mean to write 'environment'?`), and then follow that up with all the extra details that can be used to dive deeper into things when that isn't clear enough. | Suggestion,Needs Proposal | medium | Critical |
407,011,403 | go | x/tools/refactor/rename: move-to strictly respects build tags | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.11.5 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
Yeah.
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN="/Users/elagergren/gopath/bin"
GOCACHE="/Users/elagergren/Library/Caches/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/elagergren/gopath"
GOPROXY=""
GORACE="history_size=7"
GOROOT="/usr/local/go"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="/Users/elagergren/gopath/src/treehouse.spideroak.com/flow/openssl/lib/darwin/fips2.0/bin/fipsld_clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/2b/74fz3jhd4wz4vnbf4z7ywzww0000gp/T/go-build009059339=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>
### What did you do?
```
$ tree
.
βββ package.go
βββ package_windows.go (imports windows1)
βββ windows1
βββ windows2
βΒ Β βββ windows2.go
βββ windows1.go (imports windows2)
```
`gomvpkg -from repo/package -to repo/elsewhere/package -vcs_mv_cmd 'git mv {{ .Src }} {{ .Dst }}'`
### What did you expect to see?
Files that would otherwise be excluded from compilation via build tags have their imports changed.
### What did you see instead?
They didn't.
---
To be specific: on macOS I moved a package containing `package_windows.go`. That file imported a sub-package. While the macOS/Unix/etc. code (e.g., `package.go`) had its imports updated, the code in `package_windows.go` did not.
I assume the `XXX_windows.go` files were excluded because of build constraints. | NeedsInvestigation,Tools,Refactoring | low | Critical |
407,054,706 | kubernetes | Volume may be resized online if multiple pods share the same volume | <!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks!-->
**What happened**:
Inside [MountVolume](https://sourcegraph.com/github.com/kubernetes/kubernetes@0dfbbc290cce9da8aebd669890dbd3e117e20cc6/-/blob/pkg/volume/util/operationexecutor/operation_generator.go#L446), [resizeFilesystem](https://sourcegraph.com/github.com/kubernetes/kubernetes@0dfbbc290cce9da8aebd669890dbd3e117e20cc6/-/blob/pkg/volume/util/operationexecutor/operation_generator.go#L550) is always called per pod, without checking if other pods sharing the same volume have restarted.
So you can have a sequence like this:
1. Two pods on the same node sharing the volume
2. Resize is started
3. One pod is restarted
4. When that pod goes through MountVolume, it will trigger the filesystem resize, even though the 2nd pod is still running on the node and has not restarted. So the 2nd pod sees an online resize.
**What you expected to happen**:
Both pods should have to restart before the filesystem is resized
@kubernetes/sig-storage-bugs
/assign @gnufied | kind/bug,sig/storage,lifecycle/frozen | low | Critical |
407,115,847 | vscode | List references: transition from history to results is not smooth | Steps to Reproduce:
1. have a history of references
2. find references again
=> π the transition from history to results is not very smooth, in the video you can see how the history partially still shows after running "Find References"

| bug,tree-views,references-viewlet | low | Minor |
407,137,122 | flutter | BottomAppBar elevation doesn't seem to do anything | | framework,f: material design,has reproducible steps,P2,found in release: 3.3,found in release: 3.7,team-design,triaged-design | low | Critical |
407,160,362 | pytorch | Should be a way to unpickle an object with a torch cuda tensor on a CPU-only machine when using plain "pickle" | ## 🚀 Feature
The ability to pickle.load a Python object containing a torch cuda tensor on a CPU only machine.
## Motivation
Currently, trying to do this gives `RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location='cpu' to map your storages to the CPU.` Even though you are loading with `pickle.load`, not `torch.load`.
When the "loading" code is pytorch agnostic (exists in a repo that does not use pytorch), you can't just change a `pickle.load(f)` into a `torch.load(f, map_location='cpu')`. This may be the case when the saved data takes a particular structure and loading/unloading is handled by some code that does not depend on pytorch.
## Pitch
A context manager could take care of this:
```
with torch.loading_context(map_location='cpu'):
obj = pickle.load(f) # In my case this call is buried deeper in torch-agnostic code
```
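For reference, a workaround sketch that achieves this today by routing tensor storages through `torch.load` during a plain `pickle.load`. It relies on the private `torch.storage._load_from_bytes` hook, so it is an assumption that this stays stable across versions:
```python
import io
import pickle

import torch


class CpuUnpickler(pickle.Unpickler):
    """Unpickler that remaps CUDA storages to the CPU while loading."""

    def find_class(self, module, name):
        # Tensors pickled by torch are reconstructed through this private helper;
        # intercept it so the embedded torch.load call can pass map_location='cpu'.
        if module == 'torch.storage' and name == '_load_from_bytes':
            return lambda b: torch.load(io.BytesIO(b), map_location='cpu')
        return super().find_class(module, name)


# obj = CpuUnpickler(open('data.pkl', 'rb')).load()
```
A proper `torch.loading_context` (or similar) would make this kind of private-hook workaround unnecessary.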
| todo,feature,module: serialization,triaged | medium | Critical |
407,186,243 | electron | Support the forward option of BrowserWindow.setIgnoreMouseEvents in Linux | A click-through functionality is very useful for floating-widget type apps with non-rectagular shapes. A solution using the forward option of setIgnoreMouseEvents was proposed in this issue comment: https://github.com/electron/electron/issues/1335#issuecomment-433478053
However, Linux is currently not supported. I'd like to be able to have a cross-platform solution.
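For reference, this is the usage that already works on macOS and Windows, and that this issue asks to also support on Linux:
```js
const { app, BrowserWindow } = require('electron');

app.on('ready', () => {
  const win = new BrowserWindow({ transparent: true, frame: false });
  // Clicks fall through the window, but mouse-move events are still forwarded to the
  // page, so the app can re-enable interaction when the cursor is over its visible shape.
  win.setIgnoreMouseEvents(true, { forward: true });
  win.loadFile('index.html'); // placeholder content
});
```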
Thanks! | enhancement :sparkles: | low | Major |
407,202,654 | flutter | reverse playback of the video. | It would be very helpful to have effects for slower video playback, or for reversing it, as the need may be.
Take a look at this: https://stackoverflow.com/questions/54495037/flutter-more-video-player-controller-options
| c: new feature,p: video_player,package,team-ecosystem,P3,triaged-ecosystem | low | Major |
407,259,926 | vue | Ability to cancel rendering | ### What problem does this feature solve?
Under high load, some pages are rendered too slowly and the connection is aborted by the remote side. It would be nice to be able to cancel page rendering in this case.
### What does the proposed API look like?
const rendererId = renderer.renderToString(...);
rendererId.cancel();
<!-- generated by vue-issues. DO NOT REMOVE --> | feature request,feat:ssr | low | Major |
407,286,856 | rust | [rustdoc search] Add search into struct fields/enum variants | Add a fourth tab. | T-rustdoc,C-feature-request,A-rustdoc-search | low | Minor |
407,289,229 | rust | Trait bound on associated type causes confusing compilation error | This code fails to compile:
```rust
pub trait Append<T> {
type Appended;
}
pub trait Proc {
type Item;
type Tail;
fn process_item<P>(&mut self, partial: P)
where
P: Append<Self::Item>,
P::Appended: Append<Self::Tail>;
}
struct Read;
impl Proc for Read {
type Item = i32;
type Tail = i64;
fn process_item<P>(&mut self, partial: P)
where
P: Append<Self::Item>,
P::Appended: Append<Self::Tail>,
{
}
}
```
```
error[E0277]: the trait bound `P: Append<i32>` is not satisfied
--> src/lib.rs:19:5
|
19 | / fn process_item<P>(&mut self, partial: P)
20 | | where
21 | | P: Append<Self::Item>,
22 | | P::Appended: Append<Self::Tail>,
23 | | {
24 | |
25 | | }
| |_____^ the trait `Append<i32>` is not implemented for `P`
|
= help: consider adding a `where P: Append<i32>` bound
```
The error is confusing because `Self::Item` is `i32`. Furthermore, changing the bound on `P` to the suggested bound (setting aside the fact that this is a trait impl so imposing additional bounds isn't necessarily appropriate) yields an even more confusing error:
```
error[E0277]: the trait bound `P: Append<i32>` is not satisfied
--> src/lib.rs:19:5
|
19 | / fn process_item<P>(&mut self, partial: P)
20 | | where
21 | | P: Append<i32>,
22 | | P::Appended: Append<Self::Tail>,
23 | | {
24 | |
25 | | }
| |_____^ the trait `Append<i32>` is not implemented for `P`
|
= help: consider adding a `where P: Append<i32>` bound
error[E0276]: impl has stricter requirements than trait
--> src/lib.rs:19:5
|
8 | / fn process_item<P>(&mut self, partial: P)
9 | | where
10 | | P: Append<Self::Item>,
11 | | P::Appended: Append<Self::Tail>;
| |________________________________________- definition of `process_item` from trait
...
19 | / fn process_item<P>(&mut self, partial: P)
20 | | where
21 | | P: Append<i32>,
22 | | P::Appended: Append<Self::Tail>,
23 | | {
24 | |
25 | | }
| |_____^ impl has extra requirement `P: Append<i32>`
error: aborting due to 2 previous errors
```
Adding an additional type parameter to the fn and using that as the bound on `P::Appended` appears to work fine, and I think it is semantically equivalent (although applying the bound directly to the associated type is cleaner IMO):
```rust
pub trait Append<T> {
type Appended;
}
pub trait Proc {
type Item;
type Tail;
fn process_item<P, T>(&mut self, partial: P)
where
P: Append<Self::Item, Appended = T>,
T: Append<Self::Tail>;
}
struct Read;
impl Proc for Read {
type Item = i32;
type Tail = i64;
fn process_item<P, T>(&mut self, partial: P)
where
P: Append<i32, Appended = T>,
T: Append<Self::Tail>,
{
}
}
```
Should the original code compile? If not, the error message (and suggested fix) is monumentally unhelpful. Otherwise, if it should compile, this looks like a compiler bug.
| C-enhancement,A-diagnostics,A-associated-items,T-compiler | low | Critical |
407,290,840 | pytorch | Wrong description of positive class weight in BCEWithLogitsLoss | ## 📚 Documentation
The explanation of `pos_weight` for [BCEWithLogitsLoss](https://pytorch.org/docs/stable/nn.html#torch.nn.BCEWithLogitsLoss) mentions:
`where :math:`p_n` is the positive weight of class :math:`n`.`
We don't have `n` classes (this is a binary loss). The correct description would be something like this:
`:math:`p_n` is the weight of the positive class for sample :math:`n` in the batch`
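A tiny example of the positive-class weighting being described (single output, so `pos_weight` has one entry):
```python
import torch

loss = torch.nn.BCEWithLogitsLoss(pos_weight=torch.tensor([3.0]))
x = torch.tensor([0.0])  # logit, sigmoid(0) = 0.5
y = torch.tensor([1.0])  # positive target
print(loss(x, y))        # 3 * -log(0.5) ≈ 2.079: the positive term is scaled by 3
```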
(I also don't understand the utility of having a different positive-class-weight per sample in a batch, because you usually apply the same class weight for the whole training data. But that's a different issue.) | module: docs,triaged | low | Minor |
407,299,843 | pytorch | cstddef not found when compiling C++ Extension - macOS | ## 🐛 Bug
I wanted to write a small C++ extension and compile it via setuptools beforehand as mentioned in the Tutorial. However there seems to be library issues on my mac. Can someone help?
The latest xcode command line tools are installed.
## To Reproduce
Steps to reproduce the behavior:
1. write any C++ extension for pytorch
2. `$ CC=clang CXX=clang++ NO_CUDA=1 python setup.py install`
3. OR: `$ NO_CUDA=1 python setup.py install`
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
## Error
Running:
`$ CC=clang CXX=clang++ NO_CUDA=1 python setup.py install`
Results in:
`No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda'
running install
running bdist_egg
running egg_info
writing splitFrames.egg-info/PKG-INFO
writing dependency_links to splitFrames.egg-info/dependency_links.txt
writing top-level names to splitFrames.egg-info/top_level.txt
reading manifest file 'splitFrames.egg-info/SOURCES.txt'
writing manifest file 'splitFrames.egg-info/SOURCES.txt'
installing library code to build/bdist.macosx-10.7-x86_64/egg
running install_lib
running build_ext
building 'splitFrames' extension
clang -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I/Users/lucasmueller/anaconda/envs/embl/include -arch x86_64 -I/Users/lucasmueller/anaconda/envs/embl/include -arch x86_64 -I/Users/lucasmueller/anaconda/envs/embl/lib/python3.7/site-packages/torch/lib/include -I/Users/lucasmueller/anaconda/envs/embl/lib/python3.7/site-packages/torch/lib/include/torch/csrc/api/include -I/Users/lucasmueller/anaconda/envs/embl/lib/python3.7/site-packages/torch/lib/include/TH -I/Users/lucasmueller/anaconda/envs/embl/lib/python3.7/site-packages/torch/lib/include/THC -I/Users/lucasmueller/anaconda/envs/embl/include/python3.7m -c example-app.cpp -o build/temp.macosx-10.7-x86_64-3.7/example-app.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=splitFrames -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
warning: include path for stdlibc++ headers not found; pass '-std=libc++' on the command line to use the libc++ standard library instead
[-Wstdlibcxx-not-found]
In file included from example-app.cpp:1:
In file included from /Users/lucasmueller/anaconda/envs/embl/lib/python3.7/site-packages/torch/lib/include/torch/csrc/api/include/torch/torch.h:3:
In file included from /Users/lucasmueller/anaconda/envs/embl/lib/python3.7/site-packages/torch/lib/include/torch/csrc/api/include/torch/all.h:3:
/Users/lucasmueller/anaconda/envs/embl/lib/python3.7/site-packages/torch/lib/include/torch/csrc/api/include/torch/cuda.h:5:10: fatal error: 'cstddef' file
not found
#include <cstddef>
^~~~~~~~~
1 warning and 1 error generated.
error: command 'clang' failed with exit status 1
`
However running:
`$ NO_CUDA=1 python setup.py install`
Also doesn't work, with the additional warning that the compiler is not compatible.
`No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda'
running install
running bdist_egg
running egg_info
writing splitFrames.egg-info/PKG-INFO
writing dependency_links to splitFrames.egg-info/dependency_links.txt
writing top-level names to splitFrames.egg-info/top_level.txt
reading manifest file 'splitFrames.egg-info/SOURCES.txt'
writing manifest file 'splitFrames.egg-info/SOURCES.txt'
installing library code to build/bdist.macosx-10.7-x86_64/egg
running install_lib
running build_ext
/Users/lucasmueller/anaconda/envs/embl/lib/python3.7/site-packages/torch/utils/cpp_extension.py:166: UserWarning:
!! WARNING !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Your compiler (g++) is not compatible with the compiler Pytorch was
built with for this platform, which is clang++ on darwin. Please
use clang++ to to compile your extension. Alternatively, you may
compile PyTorch from source using g++, and then you can also use
g++ to compile your extension.
See https://github.com/pytorch/pytorch/blob/master/CONTRIBUTING.md for help
with compiling PyTorch from source.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! WARNING !!
platform=sys.platform))
building 'splitFrames' extension
gcc -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I/Users/lucasmueller/anaconda/envs/embl/include -arch x86_64 -I/Users/lucasmueller/anaconda/envs/embl/include -arch x86_64 -I/Users/lucasmueller/anaconda/envs/embl/lib/python3.7/site-packages/torch/lib/include -I/Users/lucasmueller/anaconda/envs/embl/lib/python3.7/site-packages/torch/lib/include/torch/csrc/api/include -I/Users/lucasmueller/anaconda/envs/embl/lib/python3.7/site-packages/torch/lib/include/TH -I/Users/lucasmueller/anaconda/envs/embl/lib/python3.7/site-packages/torch/lib/include/THC -I/Users/lucasmueller/anaconda/envs/embl/include/python3.7m -c example-app.cpp -o build/temp.macosx-10.7-x86_64-3.7/example-app.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=splitFrames -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
warning: include path for stdlibc++ headers not found; pass '-std=libc++' on the command line to use the libc++ standard library instead
[-Wstdlibcxx-not-found]
In file included from example-app.cpp:1:
In file included from /Users/lucasmueller/anaconda/envs/embl/lib/python3.7/site-packages/torch/lib/include/torch/csrc/api/include/torch/torch.h:3:
In file included from /Users/lucasmueller/anaconda/envs/embl/lib/python3.7/site-packages/torch/lib/include/torch/csrc/api/include/torch/all.h:3:
/Users/lucasmueller/anaconda/envs/embl/lib/python3.7/site-packages/torch/lib/include/torch/csrc/api/include/torch/cuda.h:5:10: fatal error: 'cstddef' file
not found
#include <cstddef>
^~~~~~~~~
1 warning and 1 error generated.
error: command 'gcc' failed with exit status 1`
## Environment
PyTorch version: 1.0.1
Is debug build: No
CUDA used to build PyTorch: None
OS: Mac OSX 10.14.2
GCC version: Could not collect
CMake version: version 3.13.0
Python version: 3.7
Is CUDA available: No
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Versions of relevant libraries:
[pip] Could not collect
[conda] blas 1.0 mkl
[conda] gpytorch 0.1.1 pypi_0 pypi
[conda] mkl 2018.0.3 1
[conda] mkl_fft 1.0.6 py37hb8a8100_0
[conda] mkl_random 1.0.1 py37h5d10147_1
[conda] pytorch 1.0.1 py3.7_0 soumith
[conda] torchvision 0.2.1 py_2 soumith
| module: cpp-extensions,triaged | low | Critical |
407,308,048 | flutter | onClosing event of the BottomSheet widget never runs | ```dart
final _scaffoldKey = GlobalKey<ScaffoldState>();
void _showBottomSheet() async {
final bottomSheet = BottomSheet(
builder: (BuildContext context) {
return Container(
color: Colors.white,
height: 100.0,
child: Row(
children: <Widget>[
Text('Tittle')
],
)
);
},
onClosing: () {
print('on closing');
}
);
_scaffoldKey.currentState.showBottomSheet(bottomSheet.builder);
}
@override
Widget build(BuildContext context) {
return Scaffold(
key: _scaffoldKey,
appBar: AppBar(
title: Text("TITLE SCAFFOLD"),
),
body: Container(
...
),
);
}
``` | framework,f: material design,has reproducible steps,P2,workaround available,found in release: 3.7,found in release: 3.10,team-design,triaged-design | low | Major |
407,372,954 | kubernetes | Cloud Provider E2E Tests | **What would you like to be added**:
In https://github.com/kubernetes/kubernetes/pull/72902 we introduce the first set of "cloud provider" tests. There are still a lot of tests to be added here such as:
- [ ] load balancers (some tests already exist in `test/e2e/network/service.go`)
- [ ] routes
- [ ] node registration (validate labels, addresses, tagging?, etc)
- [ ] PV labelling
Note that some tests will overlap with other SIGs, might be worthwhile to validate first that a similar test doesn't already exist. Each test should be marked with `[Feature:CloudProvider]` and `[Disruptive]` where appropriate.
**Why is this needed**:
The primary justification is that we want to be able to test a set of behaviours for a Kubernetes cluster on any cloud provider especially as we transition providers from in-tree to out-of-tree.
| area/test,area/cloudprovider,kind/feature,sig/testing,lifecycle/frozen,sig/cloud-provider | medium | Critical |
407,379,754 | vue | Ability to access context from serverPrefetch | ### What problem does this feature solve?
After Vue 2.6 was released and the serverPrefetch hook was introduced, I lost the ability to update `httpCode` during SSR when there is a data-fetching failure.
### What does the proposed API look like?
Provide access to `context` from the `serverPrefetch` hook.
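A sketch of what that could look like (the `context` argument and `fetchItems` below are illustrative; the argument is the requested addition, not an existing API):
```js
export default {
  async serverPrefetch(context) {
    try {
      await this.fetchItems();
    } catch (err) {
      // Without access to the SSR render context there is currently no way
      // to surface a data-fetching failure as an HTTP status code.
      context.httpCode = 500;
      throw err;
    }
  },
};
```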
<!-- generated by vue-issues. DO NOT REMOVE --> | intend to implement,feature request,feat:ssr | medium | Critical |
407,411,373 | rust | rustdoc doesn't show implementations of traits when receiver is behind a #[fundamental] type | For this block of code
```rust
use core::ops::Not;
pub struct F;
impl Not for F {
type Output = F;
fn not(self) -> Self::Output { F }
}
impl Not for &mut F {
type Output = F;
fn not(self) -> Self::Output { F }
}
impl Not for Box<F> {
type Output = F;
fn not(self) -> Self::Output { F }
}
```
`rustdoc` shows the first two implementations in the documentation for `F`, but the third implementation doesn't appear anywhere.
With the stabilization of `Pin` (another `#[fundamental]` wrapper type) this may be much more of an issue, implementing a trait for `Pin<&mut ...>` is going to be common for traits that interact with pinning (generic/concrete implementations defined in the same crate as the trait appear in the docs fine, it's only when you define an implementation of a foreign trait for `Pin<&mut LocalType>` that they seem to be missing). | T-rustdoc,A-trait-system,C-bug,A-rustdoc-ui | low | Minor |
407,454,589 | node | `readable` event not emitted after `net.Socket` reconnects | * **Version**: v10.15.1
* **Platform**: Windows 10 Pro 64-bit
* **Subsystem**: net
If `net.Socket` loses connection (`close` is emitted) and the same socket instance is used to reconnect to the same server, no more `readable` events are emitted (`data` events are still emitted).
Doesn't work in v10.14/v10.15.1. Works in v8.15.0.
Repro: https://gist.github.com/morkai/fa175bd0104443e6142f3d0e22805653
1. Run `server.js`
2. Run `client.js`
3. Client prints `client#readable`
4. Kill the server
5. Run the server again
6. Client reconnects
7. **Client doesn't print any `client#readable` lines**
8. Kill the client
9. Uncomment the `data` event handler and comment the `readable` handler in `client.js`
10. Run the client
11. Client prints `client#data`
12. Kill the server
13. Run the server again
14. Client reconnects
15. Client resumes printing `client#data` lines | confirmed-bug,net,stream | medium | Major |
407,486,499 | TypeScript | `.call` selects the wrong overload for `String.prototype.replace` | **TypeScript Version:** v3.3.1 and v3.4.0-dev.20190206
**Search Terms:** call replace overload
**Code**
```ts
String.prototype.replace.call(
'one string',
/a/g,
'two string', // this line errors
);
```
It's worth noting that this is in a JS file, not a TS file, but I'm using `allowJs` and `checkJs`.
**Expected behavior:**
No error.
**Actual behavior:**
errors with `error TS2345: Argument of type '"two string"' is not assignable to parameter of type '(substring: string, ...args: any[]) => string'`
**Playground Link:** http://www.typescriptlang.org/play/#src=String.prototype.replace.call(%0D%0A%20%20%20%20'one%20string'%2C%0D%0A%20%20%20%20%2Fa%2Fg%2C%0D%0A%20%20%20%20'two%20string'%2C%0D%0A)%3B but the error doesn't seem to show up there.
**Related Issues:** no | Bug,Help Wanted,Domain: lib.d.ts | low | Critical |
407,529,694 | opencv | _InputArray::getMat_ handling empty std::vector<T> | This line in `_InputArray::getMat_`
https://github.com/opencv/opencv/blob/master/modules/core/src/matrix_wrap.cpp#L54
if( k == STD_VECTOR )
{
CV_Assert( i < 0 );
int t = CV_MAT_TYPE(flags);
const std::vector<uchar>& v = *(const std::vector<uchar>*)obj;
return !v.empty() ? Mat(size(), t, (void*)&v[0]) : Mat();
}
Should the last line be changed like this?
return !v.empty() ? Mat(size(), t, (void*)&v[0]) : Mat(0, 0, t);
Otherwise the return Mat ships no type info. | RFC | low | Minor |
407,536,460 | vscode | Git - Support git worktrees in workspace | ```
$ mkdir repro
$ cd repro
$ mkdir example1
$ cd example1
$ git init; echo "hello" > world.txt; git add world.txt; git commit -m "init";
$ git worktree add ../example1branchA
$ git worktree add ../example1branchB
$ cd ..
$ mkdir example2
$ cd example2
$ git init; echo "hello" > world.txt; git add world.txt; git commit -m "init";
$ cd ..
```
Scenario:
1. Open `repro` directory in VS Code
2. Go to Git tab.
**Expected:** "Source Control Providers" lists Git repos.
* `example1`
* `example1branchA`
* `example1branchB`
* `example2`
**Actual:** "Source Control Providers" lists Git repos.
* `example1`
* `example2`
I sometimes will have lots of branches checked out in parallel on huge repos that I can't afford to clone multiple times. I would also like to be able to use VS Code's source control functionality with these checkouts all the same.
Thanks! and thanks for Code! | help wanted,feature-request,git | high | Critical |
407,564,064 | create-react-app | Feature request: Launch browser into a unique origin | ## Problem
`npm start` launches `http://localhost:3000`. If I've ever worked on a different app, its `localStorage` is now polluting my new app as they share an origin.
## Solution
If `HOST === 'localhost'` then call `opn` with `target` set to `http://{folder-name}.localhost:{port}`
This way every app has a unique origin and thus its own `localStorage`. I know that I can do this manually, but having CRA do it for me would be nice.
## Thoughts?
This is related to #2578, but writing a script to get this to work seems like a lot of duplicated effort for each project
I also can't just [set HOST](https://facebook.github.io/create-react-app/docs/advanced-configuration) as it won't bind to `{folder-name}.localhost` | issue: proposal | low | Minor |
407,575,197 | go | proposal: cmd/go: enable mutual TLS authentication with client certificates in the go tool | It would be useful to be able to pass to the `go` tool a client TLS certificate + key + CAcert (in environment variables or otherwise) - especially, in environments that take Zero Trust Network (ZTN) seriously and where `go` tool needs to talk to the other side (the `replace` targets) that lives in a public cloud using mutual TLS for authN/Z. The "other side" can be either a GOPROXY server or a redirector (like `golang.org`) serving the meta tags. Since one of the ZTN principles is "every network connection must be authenticated and authorized", the question is how to implement it with the go tool and the requests it initiates.
Technically, it possibly comes down to how to pass desired TLS options (key/certificate/cacert filenames or such) to `tls.Config` that `go` would use when initiating connections.
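Roughly the `tls.Config` the `go` tool would need to assemble from such settings. This is only a sketch; the environment variable names below are made up for illustration and do not exist today:
```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"io/ioutil"
	"os"
)

// clientTLSConfig builds a mutual-TLS config from files named in (hypothetical)
// environment variables, which is roughly what the go tool would do internally.
func clientTLSConfig() (*tls.Config, error) {
	cert, err := tls.LoadX509KeyPair(os.Getenv("GOPROXY_CLIENT_CERT"), os.Getenv("GOPROXY_CLIENT_KEY"))
	if err != nil {
		return nil, err
	}
	caPEM, err := ioutil.ReadFile(os.Getenv("GOPROXY_CA_CERT"))
	if err != nil {
		return nil, err
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)
	return &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool}, nil
}

func main() {} // placeholder so the sketch builds standalone
```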
| Proposal,NeedsInvestigation,Proposal-Hold | low | Major |
407,577,097 | deno | Add support for IndexedDB | It'll be useful to have IndexedDB in Deno.
Related to #1657. | feat,web | high | Critical |
407,597,506 | react | defaultValue does not work with input when type is set to submit | <!--
Note: if the issue is about documentation or the website, please file it at:
https://github.com/reactjs/reactjs.org/issues/new
-->
**Do you want to request a *feature* or report a *bug*?**
Bug
**What is the current behavior?**
When you use an uncontrolled `<input type="submit" />` and set the `defaultValue` attribute, it is ignored in versions 16.5.0 or higher (there is no `value` attribute in the HTML result). It was working correctly in older versions. Looks like only `type="submit"` is affected; for other input types `defaultValue` behaves correctly.
**If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem. Your bug will get fixed much faster if we can run your code and it doesn't have dependencies other than React. Paste the link to your JSFiddle (https://jsfiddle.net/Luktwrdm/) or CodeSandbox (https://codesandbox.io/s/new) example below:**
Correct behaviour with react 16.4.2: https://codepen.io/anon/pen/zePmrZ
Incorrect behaviour with react 16.8.1: https://codepen.io/anon/pen/PVOyqV
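Minimal repro, equivalent to the CodePens above:
```jsx
import React from 'react';
import ReactDOM from 'react-dom';

// Renders <input type="submit" value="foo"> on 16.4.x, but on 16.5.0+ the
// value attribute is missing from the resulting HTML.
ReactDOM.render(
  <input type="submit" defaultValue="foo" />,
  document.getElementById('root')
);
```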
**What is the expected behavior?**
When `defaultValue="foo"` is set on `<input type="submit"/>` it should result in `<input type="submit" value="foo" />` in the HTML result.
**Which versions of React, and which browser / OS are affected by this issue? Did this work in previous versions of React?**
All versions starting from 16.5.0
| Type: Discussion | medium | Critical |
407,618,317 | TypeScript | Mapped types shouldn't transform unknown type | @ahejlsberg Probably #29740 made this regression.
<!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. -->
**TypeScript Version:** 3.4.0-dev.20190207
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:**
**Code**
```ts
type DeepReadonly<T> =
T extends void ? T :
{ readonly [P in keyof T]: DeepReadonly<T[P]>; };
type m = { a: unknown }; // Also unknown[].
type i = DeepReadonly<m>;
```
**Expected behavior:**
`i` is `{ readonly a: unknown; }`.
**Actual behavior:**
`i` is `{ readonly a: {}; }`.
**Playground Link:** <!-- A link to a TypeScript Playground "Share" link which demonstrates this behavior -->
**Related Issues:** <!-- Did you find other bugs that looked similar? -->
| Suggestion,Awaiting More Feedback | low | Critical |
407,663,830 | rust | Type derivation failure on generic parameters: type annotations required | The following example fails with this error:
```
error[E0284]: type annotations required: cannot resolve `<E as H>::R == NR`
--> examples/minreprod/src/main.rs:24:36
|
24 | impl<E: H<R = NR>, ER> H for A<E>
| ^
|
= note: required because of the requirements on the impl of `H` for `main::make_a::A<E>`
```
Example:
```rust
pub struct NR;
pub trait H {
type R: From<NR>;
}
struct X {}
impl H for X {
type R = NR;
}
fn main() {
fn make_a() -> impl H<R = NR> {
{
struct A<E: H<R = NR>> {
e: E,
}
impl<E: H<R = NR>, ER> H for A<E>
where
E: H<R = ER>,
ER: From<NR>,
NR: From<ER>,
{
type R = NR;
}
A { e: X{} }
}
}
}
```
If you remove the `ER` parameter here, and all bounds on it, compilation succeeds. Likewise, if you remove the bound on `E`, compilation succeeds.
Obviously some of the parameters/bounds are redundant, but I'm generating code like this from a macro which doesn't know how to interpret extra bounds like `E: H<R = NR>` and must include the other bounds to handle other cases, while the compiler should (IMO) be able to figure this out.
For the curious, this is related to [`kas::macros::make_widget`](https://docs.rs/kas/0.0.2/kas/macros/index.html#the-make_widget-macro). | A-type-system,A-associated-items,T-compiler,A-inference,C-bug,T-types | low | Critical |
407,703,314 | vue | Add beforeDeactivated hook | ### What problem does this feature solve?
I use `:key` and the keep-alive tag to display chats. In order to maintain the scroll position when switching between chat rooms, I need to save the scrollTop in the deactivated hook and put it back on the element in the activated hook. But the deactivated hook is only called once the element has already been removed from the DOM, so I need a new hook - beforeDeactivated.
Seriously, this hook is needed just like the other before* hooks.
### What does the proposed API look like?
Here, I think, everything is clear. Simple hook.
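A sketch of how it would be used for the chat case above (`beforeDeactivated` is the proposed hook and `$refs.messages` is just an illustrative ref):
```js
export default {
  data: () => ({ savedScrollTop: 0 }),
  beforeDeactivated() {
    // The component is still in the DOM here, so scrollTop is still readable.
    this.savedScrollTop = this.$refs.messages.scrollTop;
  },
  activated() {
    this.$refs.messages.scrollTop = this.savedScrollTop;
  },
};
```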
<!-- generated by vue-issues. DO NOT REMOVE --> | feature request | low | Major |
407,713,982 | pytorch | Allow building PyTorch for a *specific* architecture | ## 🚀 Feature
Allow the end user to specify which CPU architecture to target in the build process.
## Motivation
We are building PyTorch from source for our clusters. We have clusters with a variety of CPU architectures, supporting instruction sets from SSE3 to AVX-512 and everything in between. Currently, the PyTorch build *insists* on targeting the architecture on which it is being built. We tried to pass specific compilation flags (`-march=corei7-avx`, for example), but it just won't work; they always get overwritten by something.
## Pitch
Get the build procedure in line with standard practices and let whoever builds the source choose the optimization parameters.
| module: build,triaged | medium | Major |
407,772,405 | vscode | Specify terminal group for integrated terminal in launch configuration | It is great to see that task configurations now allow the specification of terminal group (https://github.com/Microsoft/vscode/issues/47265) - works well from my initial testing. Can you please also allow launch configurations to specify terminal group when using the integrated terminal for debug and start/stop?
See my comment here: https://github.com/Microsoft/vscode/issues/47265#issuecomment-449993673 | debug,under-discussion | medium | Critical |
407,806,046 | rust | Problem with using struct expressions with path metavariables | ## STR
this occurs on stable & nightly.
```Rust
macro_rules! m {
($s: path) => {{
Some($s {})
}}
}
struct S {}
fn main() {
let _ = m!(S);
}
```
## Expected Result
Code compiles, e.g. it compiles if the macro doesn't contain a `Some`:
```Rust
macro_rules! m {
($s: path) => {{
$s {}
}}
}
```
## Actual Result
Obscure parser error
```
error: expected one of `)`, `,`, `.`, `?`, or an operator, found `{`
--> src/main.rs:3:17
|
3 | Some($s {})
| ^ expected one of `)`, `,`, `.`, `?`, or an operator here
...
9 | let _ = m!(S);
| ----- in this macro invocation
error: aborting due to previous error
```
cc @dtolnay @estebank @petrochenkov
@nikomatsakis & @eddyb : this frustrates me lifting `TypeFoldable` over `Result` | A-parser,A-macros,T-compiler,C-bug | low | Critical |
407,838,971 | tensorflow | Build Tensorflow version that detects CPU instruction set at runtime and lights-up/down | **System information**
- TensorFlow version (you are using): 1.12.0 CPU
**Describe the feature and the current behavior/state.**
Currently Tensorflow cross-compiles for different instruction sets and will warn if the CPU supports instructions that the TF build does not use, and fail to load if the CPU does not support an instruction set that the TF build uses. This makes it impossible to build an application that runs on a variety of hardware and ensure that it achieves optimal results for that hardware (or even runs at all).
I understand that TF supports cross-compilation and developers can build their own library that works best for their hardware, but this doesn't solve the case where an application developer wants to ship an application that uses TF and runs on a variety of hardware.
I understand that having multiple codepaths with runtime light-up could increase the size of TF; that could be dealt with by making this a separate flavor/configuration of TF, e.g. a "portable" build, which could be published as a binary zip/tarball alongside the current builds.
**Will this change the current api? How?**
No
**Who will benefit with this feature?**
Applications and libraries redistributing tensor flow binaries to run on a variety of hardware.
| stat:awaiting tensorflower,type:feature,type:build/install | low | Major |
407,865,675 | flutter | [camera] Allow controlling the preview and output mirroring | I am currently using the camera plugin on Flutter. Is there a way to remove the mirror effect from the front camera (selfie camera) when you take a picture? If there is none, can it be added? | c: new feature,customer: crowd,p: camera,package,team-ecosystem,P3,triaged-ecosystem | high | Critical |
407,867,122 | godot | Master peer ID not preserved on duplication | **Godot version:**
v3.1-beta3
**OS/device including version:**
Arch Linux rolling & NVidia GPU
**Issue description:**
Duplicated nodes do not seem to preserve parent's network master ID.
**Steps to reproduce:**
- Create some "child" node and append it to some "parent" node.
- Set network master ID for parent node.
- Child properly inherits parent's network master ID.
- Duplicate child and observe clone's network master ID being equal to 1.
**Minimal reproduction project:**
```gdscript
extends Node
func _ready() -> void:
var parent = Node.new()
var child = Node.new()
parent.add_child(child)
parent.set_network_master(1337)
prints(
parent.get_network_master(),
child.get_network_master()
) # 1337 1337
print(child.duplicate().get_network_master()) # 1
``` | enhancement,documentation,topic:network,topic:multiplayer | low | Minor |
407,944,630 | pytorch | Allow building C++ custom ops that import other custom ops | ## 🚀 Feature
It seems from the docs that you can only build custom ops individually.
https://pytorch.org/tutorials/advanced/cpp_extension.html
> Integrating a C++/CUDA Operation with PyTorch
>
> Integration of our CUDA-enabled op with PyTorch is again very straightforward. If you want to write a setup.py script, it could look like this:

```python
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension
setup(
name='lltm',
ext_modules=[
CUDAExtension('lltm_cuda', [
'lltm_cuda.cpp',
'lltm_cuda_kernel.cu',
])
],
cmdclass={
'build_ext': BuildExtension
})
```
Can we support building the custom ops that rely on other already built custom ops?
## Motivation
I have a custom op `op_a`. Now I am working on a second custom op `op_b`. `op_b` calls `op_a`.
We should support importing. As PyTorch is used in larger and larger projects, this is going to be very common.
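For example, something along these lines for building `op_b` against an already-built `op_a` (the paths and names are hypothetical; `include_dirs`, `library_dirs` and `libraries` are just the standard setuptools options that `CppExtension` passes through):
```python
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CppExtension

setup(
    name='op_b',
    ext_modules=[
        CppExtension(
            'op_b',
            ['op_b.cpp'],
            include_dirs=['/path/to/op_a/include'],  # headers declaring op_a's functions
            library_dirs=['/path/to/op_a/lib'],      # where op_a's built library lives
            libraries=['op_a'],                      # link op_b against op_a
        ),
    ],
    cmdclass={'build_ext': BuildExtension},
)
```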
cc @yf225 @glaringlee @zou3519 | module: cpp-extensions,triaged | low | Minor |
407,952,467 | rust | Tracking issue for making `dbg!(x)` work in `const fn` | We should make `dbg!(expr)` work in `const fn` by using some `lang_item` that is on the body of a function that contains the contents of `dbg!(...)`. We also need to make it work with inlining.
This sorta needs `impl const Debug` to work, but in the meantime we can just dump the miri state something something, @oli-obk can fill in... | T-lang,T-libs-api,C-tracking-issue,S-blocked,A-const-eval,needs-rfc,A-fmt,Libs-Tracked,S-tracking-design-concerns | medium | Critical |
407,953,472 | rust | target_feature doesn't trickle down to closures and internal fns | Leaves poorly optimized assembly in its wake.
```rust
#[cfg(target_arch = "x86")]
use std::arch::x86::*;
#[cfg(target_arch = "x86_64")]
use std::arch::x86_64::*;
// Creates non inlined calls to intrinsics
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
#[target_feature(enable = "avx2")]
pub unsafe fn foo(input: &[__m256]) -> f32 {
let accum = |val: __m256| {
let roll = _mm256_setr_epi32(1, 2, 3, 4, 5, 6, 7, 0);
let mut sum = val;
let mut tmp = _mm256_permutevar8x32_ps(val, roll);
for i in 0..7 {
sum = _mm256_add_ps(tmp, sum);
tmp = _mm256_permutevar8x32_ps(tmp, roll);
}
sum
};
// Once we call a complex internal closure or fn multiple
// times, we find that the compiler hasn't told them that
// they can inline or use avx2 intrinsics. Not the sharpest.
let sum1 = accum(input[0]);
let sum2 = accum(input[1]);
_mm256_cvtss_f32(sum1) + _mm256_cvtss_f32(sum2)
}
// Works as expected
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
#[target_feature(enable = "avx2")]
pub unsafe fn bar(input: &[__m256]) -> f32 {
// When we pull this tool out of the shed every thing works
#[target_feature(enable = "avx2")]
unsafe fn accum(val: __m256) -> __m256 {
let roll = _mm256_setr_epi32(1, 2, 3, 4, 5, 6, 7, 0);
let mut sum = val;
let mut tmp = _mm256_permutevar8x32_ps(val, roll);
for i in 0..7 {
sum = _mm256_add_ps(tmp, sum);
tmp = _mm256_permutevar8x32_ps(tmp, roll);
}
sum
}
let sum1 = accum(input[0]);
let sum2 = accum(input[1]);
_mm256_cvtss_f32(sum1) + _mm256_cvtss_f32(sum2)
}
```
https://rust.godbolt.org/z/cIr7qS
I found this bug by triggering this one with closures. I wasn't able to trigger it from godbolt. I'm using the latest stable, so if I copied the code in it would work (as in not work).
https://github.com/rust-lang/rust/issues/50154
Making a separate issue since this one is a performance bug. | A-codegen,T-lang,A-SIMD,F-target_feature_11,A-target-feature | low | Critical |
407,967,424 | vscode | Support comments in UI editors (settings and keyboard shortcuts) | I prefer using the UI editor for both settings and keyboard shortcuts. However, I often find myself automatically going into the JSON files because they're packed with comments β for both modified and unmodified settings/keyboard rules (example shown below).
### Feature Request
- Support ability to add comments to the UI editors, possibly with an icon next to the setting/keyboard rule (if it has a comment), allowing the comment to be viewed by hovering over the icon.
- Add additional tags that would allow you to further filter the results (e.g. `@commented`, `@modified @commented`, `@unmodified @commented`).
### Note
This would also solve a bug where comments in the JSON file are removed (without any indication) when making changes in the UI editor. See #75599.
### Example
Here's an example `settings.json` file that has comments for both modified and unmodified settings:
```jsonc
{
// Disabling validation prevents errors on syntax constructs that aren't supported by the language
// service (e.g. proposed features such as the pipeline operator). It also prevents duplicate
// linting errors (from both ts and eslint). However, it also prevents the editor.showUnused
// (fading of unused variables) option from working.
// https://github.com/Microsoft/TypeScript/issues/29293)
// https://github.com/Microsoft/TypeScript/issues/13408
"javascript.validate.enable": false,
"typescript.validate.enable": false,
// Good for disabling annoying tooltips, but nice for eslint error details.
// Waiting for ability to customize tooltips or place the tooltips at top/bottom.
// https://github.com/Microsoft/vscode/issues/65996
// "editor.hover.enabled": false,
}
``` | feature-request,settings-editor | low | Critical |
408,006,380 | flutter | String interpolation that captures object references for debugging tools | String interpolation in Dart is a convenient way to write a debug string but it has the limitation that the output can only be a string with no way for debuggers to extract out the actual objects referenced to generate richer debugger view and crosslink to other debugging tools such as the widget inspector.
For example, on the web you could write
```
console.log('Element ', element, ' was clicked');
```
and have the element show as an interactive element that you could click on to view in the elements page of the chrome devtools.
If you instead wrote
```dart
print('Element $element was clicked');
```
you would get a non interactive string representation of the Element. However, string interpolation is far easier to read than a list of optional positional parameters separated by commas so it is far preferable to use string interpolation if the needed structure can be maintained.
Proposal: write a kernel transformer and a lint that operate on a blessed list of constructors and top level methods in package:flutter, extracting the object references out of the expressions and encoding the output as the existing flexible `DiagnosticsNode` structure already used by Flutter debugging tools. The toString() of the DiagnosticsNode will match the toString of the string literal but the DiagnosticsNode will also contain the list of all object references in the string interpolation expression. The constructors will still function without the transformer but will not capture any of the object references if run in that mode in a release build. Users cannot attach debuggers to release builds anyway.
A lint warning will trigger if the constructor is called with a value that is not a string literal, as in that case there is no way to robustly extract out the object ids, and we will not attempt to be clever and handle any cases where the value passed to the constructor is more than a string literal.
Example:
```dart
class ErrorMessage extends DiagnosticsNode {
// Calls to this constructor are replaced in debug mode by a kernel transformer which captures
// the contents of the string interpolation template. In release mode this object is identical to
// Diagnostics.message(message).
ErrorMessage(String message);
}
ErrorMessage('Element $element is color $color'); // Good. Captures references to the element and color object for use by debugging tools.
final String str = 'Element $element is color $color';
ErrorMessage(str); // lint error
ErrorMessage('Element $element' + 'is color $color'); // lint error because want to keep things simple.
ErrorMessage('Element $element'
'is color $color'); // good.
// Hypothetical method that logs objects to the console and to the console in IDEs.
debugLog('Element $element is color $color'); // good
final String str = 'Element $element is color $color';
debugLog(str); // lint error
Function foo = debugLog; // lint warning, as otherwise there could be calls to debugLog that we can't track.
foo(str); // protected by the lint warning on the previous line.
```
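To illustrate what the transformer could capture, here is a purely hypothetical sketch (the `captured` constructor and its parameters do not exist; they only show the intended separation of template parts and object references):
```dart
// Hypothetical transformed form of:
//   ErrorMessage('Element $element is color $color');
// The literal parts and the interpolated references are kept separate so
// debugging tools can link back to the live objects.
ErrorMessage.captured(
  parts: <String>['Element ', ' is color ', ''],
  references: <Object>[element, color],
);
```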
If Dart string interpolation was a more powerful string templating feature this kernel transformer wouldn't be needed but given the current Dart language it is the most practical way to support the sorts of minimal string templates handy for debugging.
More complex examples:
```dart
ErrorMessage('Curves must map 0.0 to near zero and 1.0 to near one but '
'${activeCurve.runtimeType} mapped $t to $transformedValue, which '
'is near $roundedTransformedValue.');
```
This example suggests the heuristic that we should treat `foo.runtimeType` as capturing an object reference to `foo` rather than to `foo.runtimeType`, since the `runtimeType` call was only added to make the regular string output prettier.
We may also want to generate lint errors on cases that are ambiguous to resolve at first so we don't over promise what templates we can robustly handle.
For example:
```dart
ErrorMessage('Button is ${button.onClick != null ? 'enabled' : 'disabled'}');
```
might be a lint error at first as we are unclear what object reference to include. Showing a reference to the onClick event handler might be surprising or it might be what the error message author expected.
Additional considerations: for some cases debugging tools will want to display a richer debugging view of the object inline. This should only be done if the richer object view can generally fit in a similar sized area as the toString such as in the case of an icon or a color. For cases where the visual debug view is much larger, additional techniques need to be considered to evaluate how to display the visual view of the object adjacent to the message.
See https://github.com/flutter/flutter/issues/27327
for a related flutter issue that motivated this bug.
Open questions:
Should special patterns like
```dart
ErrorMessage(
'Some curve:'
' $curve'
);
```
be treated as "markdown" syntax for fully opting into property style view of the object?
Essentially that expression could be interpreted as meaning `DiagnosticsProperty('Some curve', curve)`,
which opens up opting into all rich debugger views of an object rather than being limited to the subset of debugger views that play nicely with inline display.
For example:
This would be the way a curve property could be displayed if we are confident we should opt into a full property view, which (ignoring my poor markdown skills) would look something like this:
`Some curve`:  | c: new feature,tool,engine,dependency: dart,P2,team-engine,triaged-engine | low | Critical |
408,086,783 | go | cmd/go: clarify relative import paths and modules in documentation | In the documentation for cmd/go, at the end of the section about relative import paths
https://golang.org/cmd/go/#hdr-Remote_import_paths, the text says:
> To avoid ambiguity, Go programs cannot use relative import paths within a work space.
The documentation should also add that relative import paths don't work with modules.
| Documentation,NeedsFix,modules | low | Minor |
408,093,912 | TypeScript | TS doesn't see when we add symbol properties to functions. | <!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. -->
**TypeScript Version:** 3.4.0-dev.20190207
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:**
function, property, symbol
**Code**
```ts
const mySymbol = Symbol()
interface Foo {
[mySymbol]: true
(): any
}
const foo: Foo = () => {} // error:
// Property '[mySymbol]' is missing in type '() => void'
// but required in type 'Foo'.
foo[mySymbol] = true
```
**Expected behavior:**
Just like with regular (non-symbol) props, show no error if the symbol property has actually been added.
**Actual behavior:**
There is an error.
**Playground Link:** [link](http://www.typescriptlang.org/play/index.html#src=const%20mySymbol%20%3D%20Symbol()%0D%0A%0D%0Ainterface%20Foo%20%7B%0D%0A%20%20%20%20%5BmySymbol%5D%3A%20true%0D%0A%20%20%20%20()%3A%20any%0D%0A%7D%0D%0A%0D%0Aconst%20foo%3A%20Foo%20%3D%20()%20%3D%3E%20%7B%7D%20%2F%2F%20error%3A%20%0D%0A%2F%2F%20Property%20'%5BmySymbol%5D'%20is%20missing%20in%20type%20'()%20%3D%3E%20void'%20%0D%0A%2F%2F%20but%20required%20in%20type%20'Foo'.%0D%0Afoo%5BmySymbol%5D%20%3D%20true)
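A possible workaround until this is supported (a sketch, not an official recommendation): construct the function and its symbol property in a single expression, for example via `Object.assign`:
```ts
const mySymbol = Symbol()

interface Foo {
    [mySymbol]: true
    (): any
}

// Building the object in one expression lets the compiler see the symbol
// property up front, so the assignment type-checks.
const foo: Foo = Object.assign(() => {}, { [mySymbol]: true as true })
```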
| Bug | low | Critical |
408,111,753 | vscode | Search with non-standard encodings not supported |

Issue Type: <b>Bug</b>
Set workspace encoding to cp437.
Do a workspace search for anything.
The search box is surrounded in red, a popup appears underneath it saying "Unknown encoding: cp437".
I had this problem once before and found I had to unset the option search.useRipgrep to get it working. That worked. But now this preference shows a warning that says "deprecated" and tells me to use pcre instead (which doesn't work).
That's a regression.
VS Code version: Code 1.31.0 (7c66f58312b48ed8ca4e387ebd9ffe9605332caa, 2019-02-06T08:51:24.856Z)
OS version: Linux x64 4.15.0-1032-oem
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i7-8550U CPU @ 1.80GHz (8 x 2874)|
|GPU Status|2d_canvas: enabled<br>checker_imaging: disabled_off<br>flash_3d: enabled<br>flash_stage3d: enabled<br>flash_stage3d_baseline: enabled<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>native_gpu_memory_buffers: disabled_software<br>rasterization: disabled_software<br>surface_synchronization: enabled_on<br>video_decode: unavailable_off<br>webgl: enabled<br>webgl2: enabled|
|Load (avg)|2, 3, 3|
|Memory (System)|15.39GB (1.47GB free)|
|Process Argv||
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (20)</summary>
Extension|Author (truncated)|Version
---|---|---
project-manager|ale|10.3.2
quitcontrol-vscode|art|3.0.0
better-toml|bun|0.3.2
whitespace-plus|dav|0.0.5
mustache|daw|1.1.1
gitlens|eam|9.5.0
EditorConfig|Edi|0.12.8
githd|hui|2.1.0
rpm-spec|Lau|0.2.3
vscode-duplicate|mrm|1.2.1
indent-rainbow|ode|7.2.4
vscode-subword-navigation|ow|1.2.0
vscode-docker|Pet|0.5.2
rust|rus|0.5.3
whitespace|san|0.0.5
crates|ser|0.3.6
code-settings-sync|Sha|3.2.4
local-history|xyz|1.7.0
plsql-language|xyz|1.7.0
markdown-all-in-one|yzh|2.0.1
</details>
<!-- generated by issue reporter --> | feature-request,upstream,search | high | Critical |
408,112,425 | godot | Should ARVRController method name match InputEvent nomenclature? | InputEvent uses "[device](http://docs.godotengine.org/en/3.0/classes/class_inputevent.html#member-variables)" property to let programmer know which input device emitted the event, but ARVRController uses [get_joystick_id()](http://docs.godotengine.org/en/3.0/classes/class_arvrcontroller.html#class-arvrcontroller-get-joystick-id) method to identify it.
It is a bit confusing to tell that they refer to the same ID, given the different naming.
Or aren't they the same ID? | enhancement,discussion,topic:core,documentation | low | Minor |
408,131,323 | TypeScript | Type checking and IntelliSense with JSDoc typed js dependencies | ## Search Terms
JSDoc types dependenies
## Suggestion
Possible alternative to #7546: Maybe type checking from JSDoc annotated dependencies can be achieved more easily in a different way. I have created a [gist](https://gist.github.com/ahocevar/9a7253cb4712e8bf38d75d8ac898e36c) with a demo project that successfully uses a dependency ([[email protected]](https://www.npmjs.com/package/ol/v/6.0.0-beta.1) with [fully JSDoc typed sources](https://github.com/openlayers/openlayers/pull/9178)). This works well to get IntelliSense in VS Code for projects authored in pure JavaScript, but requires a not very obvious [`jsconfig.json` configuration](https://gist.github.com/ahocevar/9a7253cb4712e8bf38d75d8ac898e36c#file-jsconfig-json). Similar configurations (with `tsconfig.json`) partially work with TypeScript projects, but are very [fragile](https://github.com/openlayers/openlayers/pull/9178#issuecomment-461577897) and break quite [easily](https://github.com/Toterbiber/OpenlayersAngularTest), especially in [more complex projects](https://github.com/openlayers/openlayers/pull/9178#issuecomment-461588832). And they only work when the `"noEmit": true` option is used.
However, it looks like a much lower hanging fruit than making `.d.ts` file generation work from JSDoc annotated JavaScript code (#7546). As the above example shows, it works already, but it should work out of the box and also in more complex configurations with TypeScript.
## Use Cases
Full type checking and `tsc` support in both JavaScript and TypeScript projects with pure JavaScript dependencies. The dependencies are either JSDoc typed directly or published with their JSDoc typed sources.
This has to work without `.d.ts` files, because these [cannot be generated from `.js` sources](#7546), neither with `tsc` nor reliably with third party applications.
## Examples
Two possible scenarios:
1. A package that only contains untranspiled JSDoc-typed modules. Unrealistic, because class inheritance is only fully recognized by TypeScript when the ES2015 `class` and `extends` keywords are used, which packages don't usually do, because it limits interoperability.
1. A package that contains transpiled modules in addition to the original JSDoc-typed sources. Realistic and feasible, especially when source maps can point to these sources directly. An example for this would be [[email protected]](https://www.npmjs.com/package/ol/v/6.0.0-beta.1).
So let's assume the latter case is the more common one. If TypeScript would look for a `tsconfig.json` in dependencies (e.g. `node_modules/ol/tsconfig.json`), that file could have a config option to point to the annotated sources. Maybe something like
```js
{
"compilerOptions": {
"jsSources": [
"src/**/*.js"
]
}
}
```
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals). | Suggestion,Awaiting More Feedback | low | Major |
408,161,573 | pytorch | `nn.Linear` allows 1d input tensors | ## 🐛 Bug
The [docs](https://pytorch.org/docs/master/nn.html#torch.nn.Linear) for `nn.Linear` claim that the input has to be `[N, *, M]`, but it accepts `[M]` as well. This is because `nn.Linear` dispatches (via `functional.linear`) to `addmm` and `matmul`, which perform this kind of broadcasting. It's unclear if it's a bug or a feature, but certainly needs adapting either the code or the docs.
This report is a follow-up to my [answer](https://stackoverflow.com/q/54591124/4280242) on StackOverflow, which traces the root problem.
## To Reproduce
```python
import torch
import torch.nn as nn
n = 5
lin = nn.Linear(n, n)
inp = torch.randn(n)
print(lin(inp))
```
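For illustration (continuing the snippet above), the 1-d call takes the broadcasting `matmul` path, so it computes the same thing as:
```python
# Same result as lin(inp) for the 1-d input: broadcasting matmul plus bias.
print(inp @ lin.weight.t() + lin.bias)
```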
## Expected behavior
Error stating that 2d input was expected and 1d was received.
## Environment
PyTorch version: 1.0.0
Is debug build: No
CUDA used to build PyTorch: None
OS: Ubuntu 18.04.1 LTS
GCC version: (Ubuntu 7.3.0-27ubuntu1~18.04) 7.3.0
CMake version: version 3.10.2
Python version: 3.6
Is CUDA available: No
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Versions of relevant libraries:
[pip] Could not collect
[conda] pytorch-cpu 1.0.0 py3.6_cpu_1 pytorch
[conda] torch-dimcheck 0.0.1 <pip>
[conda] torchvision 0.2.1 <pip>
[conda] torchvision-cpu 0.2.1 py36_1 pytorch
cc @brianjo @mruberry | module: docs,triaged | low | Critical |
408,176,271 | flutter | Camera stream image streams frame with wrong number of pixels | CameraImage stream format should be `YUV420` and
```
(Y) plane[0].length= Y
(U) plane[1].length = Y/4
(V) plane[2].length= Y/4 .
But it is streaming with
(U) plane[1].length = Y/2
(V) plane[2].length = Y/2 and Image format remains same(YUV420 ).
``` | c: new feature,p: camera,package,team-ecosystem,P2,triaged-ecosystem | low | Minor |
408,202,688 | pytorch | Implement `numpy.random.choice` equivalent | ## 🚀 Feature
Implement `numpy.random.choice` equivalent.
## Motivation
In some cases, it is useful to get random samples from a torch Tensor efficiently. For instance, during the training of some Region-Based Detectors, it is necessary to control the proportion of positive and negative regions of interest (RoIs) over mini-batches.
Here is a workaround adopted in maskrcnn-benchmark: https://github.com/facebookresearch/maskrcnn-benchmark/blob/master/maskrcnn_benchmark/modeling/balanced_positive_negative_sampler.py#L49-L50, which can be inefficient if `positive.numel()` is big and `num_pos` is small, for instance.
## Pitch
Implement `torch.random.choice` to have an equivalent behavior to the numpy implementation.
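Until then, a sketch of one way to draw indices without replacement via `torch.multinomial` (whether it beats the `randperm`-based workaround depends on the sizes and the backend):
```python
import torch

# Pick num_pos elements of `positive` uniformly at random, without replacement.
positive = torch.nonzero(torch.rand(1000) > 0.5).squeeze(1)
num_pos = 8
weights = torch.ones(positive.numel())
idx = torch.multinomial(weights, num_pos, replacement=False)
sampled = positive[idx]
```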
| high priority,module: bootcamp,feature,triaged,module: numpy | high | Critical |
408,205,787 | TypeScript | Incorrect type for `ClassDecorator` | The current (3.2.2) type for class decorator function seems to be incomplete:
```ts
declare type ClassDecorator = <TFunction extends Function>(target: TFunction) => TFunction | void;
```
It says that target should extend a Function, but it should be something like `NewableFunction` I guess, since it's a class constructor and it can be passed to `new`. | Suggestion,Experience Enhancement | low | Major |
408,224,750 | flutter | CameraPreview becomes sluggish and drops frames when startImageStream is set on Pixel 3 | # Issue
When startImageStream is set, not only does CameraPreview appear to become sluggish, I also encounter dropped frames on my Pixel 3 running API 28. This occurs even if ResolutionPreset.low is used and the function fed to startImageStream is an empty one. The black-out occurrences appear to be random, and get worse as I switch to medium and then to high resolution. Note that the sluggishness also occurs on my XiaoMi A1 running API 27, but the frames are not getting dropped. GIF of the issue on my Pixel 3:

# Steps to reproduce
To reproduce the issue, we can add the required lines to the simple example provided in the camera plugin's readme section:
```dart
import 'dart:async';
import 'package:flutter/material.dart';
import 'package:camera/camera.dart';
List<CameraDescription> cameras;
Future<void> main() async {
cameras = await availableCameras();
runApp(CameraApp());
}
class CameraApp extends StatefulWidget {
@override
_CameraAppState createState() => _CameraAppState();
}
class _CameraAppState extends State<CameraApp> {
CameraController controller;
@override
void initState() {
super.initState();
controller = CameraController(cameras[0], ResolutionPreset.low);
controller.initialize().then((_) {
if (!mounted) {
return;
}
controller.startImageStream(streamStuffHere);
setState(() {});
});
}
void streamStuffHere(CameraImage img) { }
@override
void dispose() {
controller?.dispose();
super.dispose();
}
@override
Widget build(BuildContext context) {
if (!controller.value.isInitialized) {
return Container();
}
return AspectRatio(
aspectRatio:
controller.value.aspectRatio,
child: CameraPreview(controller));
}
}
```
# Flutter Doctor
```
[✓] Flutter (Channel stable, v1.0.0, on Linux, locale en_SG.UTF-8)
    • Flutter version 1.0.0 at /home/bl2ead/Desktop/software/flutter
    • Framework revision 5391447fae (2 months ago), 2018-11-29 19:41:26 -0800
    • Engine revision 7375a0f414
    • Dart version 2.1.0 (build 2.1.0-dev.9.4 f9ebf21297)
[✓] Android toolchain - develop for Android devices (Android SDK 28.0.3)
    • Android SDK at /home/bl2ead/Android/Sdk
    • Android NDK location not configured (optional; useful for native profiling support)
    • Platform android-28, build-tools 28.0.3
    • Java binary at: /home/bl2ead/.local/share/JetBrains/Toolbox/apps/AndroidStudio/ch-0/182.5199772/jre/bin/java
    • Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1248-b01)
    • All Android licenses accepted.
[✓] Android Studio (version 3.3)
    • Android Studio at /home/bl2ead/.local/share/JetBrains/Toolbox/apps/AndroidStudio/ch-0/182.5199772
    • Flutter plugin version 32.0.1
    • Dart plugin version 182.5215
    • Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1248-b01)
[✓] IntelliJ IDEA Ultimate Edition (version 2018.2)
    • IntelliJ at /home/bl2ead/.local/share/JetBrains/Toolbox/apps/IDEA-U/ch-0/182.4892.20
    • Flutter plugin version 31.3.3
    • Dart plugin version 182.5124
[✓] Connected device (1 available)
    • Pixel 3 • 8AHX0TE7U • android-arm64 • Android 9 (API 28)
• No issues found!
``` | c: performance,p: camera,package,team-ecosystem,P2,triaged-ecosystem | low | Major |
408,225,402 | rust | Implement "small substs optimization" for substs of length 1 | One of the core types in the compiler is `ty::Subst`, which represents a list of "substitutions" - arguments to generic parameters. For example, `Option<u32>` has a `Substs` of `[Uint(u32)]` and `T: Sized` has a `Substs` of `[Param(T)]`.
A `ty::Substs` is an interned `&'tcx List<Kind<'tcx>>`. It is represented as a thin pointer to a struct of this form:
```Rust
pub struct List<T> {
len: usize,
data: [T; 0],
opaque: OpaqueListContents,
}
```
Where a `List` of length 0 is represented by this bit of evil code:
```Rust
impl<T> List<T> {
#[inline(always)]
pub fn empty<'a>() -> &'a List<T> {
#[repr(align(64), C)]
struct EmptySlice([u8; 64]);
static EMPTY_SLICE: EmptySlice = EmptySlice([0; 64]);
assert!(mem::align_of::<T>() <= 64);
unsafe {
&*(&EMPTY_SLICE as *const _ as *const List<T>)
}
}
}
```
And `List`s of non-zero length are represented by a `List` that references an interner. This is inefficient, because I'm quite sure that many `Subst`s are of length 1 (e.g., consider all the trait bounds of traits with no type parameter).
When we look into the interner, there are 2 problems:
1. This requires a memory lookup, which increases CPU cache usage and can cause CPU cache misses.
2. When we are creating a new type, we need to intern the substs, which causes hashmap lookups and even more CPU cache misses.
## Potential for measurements
It might be nice to count the amount of `Substs` of each length while compiling popular crates. This should be rather easy if you modify the interner.
## Implementation Strategy
### Stage 1 - getting a newtype
There are a few representations for `Substs` that are possible, but in any case I think the best way to go forward would be to first hide the representation.
Currently a `Substs` is this typedef:
```Rust
pub type Substs<'tcx> = List<Kind<'tcx>>;
```
And it is normally used as `&'tcx Substs<'tcx>`.
We would need to replace it with something that has a hidden representation, i.e. initially a newtype of the form:
```Rust
pub struct SubstsRef<'tcx> {
inner: &'tcx Substs<'tcx>
}
```
I think a reasonable way of going around this would be to first have a
```rust
pub type SubstsRef<'tcx> = &'tcx Substs<'tcx>;
```
Then retire the use of the old `Substs` typedef in most places, then switch `SubstsRef` to be a newtype.
When using a newtype, it's probably a good idea to have a `Deref` impl of this form:
```Rust
impl<'tcx> Deref for SubstsRef<'tcx> {
type Target = [Kind<'tcx>];
#[inline]
fn deref(&self) -> &Self::Target { self.inner }
}
```
This will avoid needing to implement specific methods in most cases.
Also remember to implement the "derive" trait impls (`Hash`, `PartialEq`, etc.) as needed.
### Stage 2 - improving the representation
My preferred representation is as follows:
Use the first 2 bits as the tag, in the style of [`TAG_MASK`], e.g.
```rust
const TYPE_TAG: usize = 0b00;
const REGION_TAG: usize = 0b01;
const LIST_TAG: usize = 0b10;
```
Then represent things as follows:
A substs of length 0: `List::empty() | LIST_TAG`
A substs of a single type: `ty | TYPE_TAG`
A substs of a single region: `region | REGION_TAG`
A substs of length >1: `substs | LIST_TAG`
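To make the scheme concrete, here is a small self-contained toy of the 2-bit tagging (this is not rustc code; the constants just mirror the ones above):
```rust
use std::num::NonZeroUsize;

const TAG_MASK: usize = 0b11;
const TYPE_TAG: usize = 0b00;

fn main() {
    // Stand-in for an interned type; its alignment (8) guarantees that the
    // low two bits of the address are zero and therefore free for the tag.
    let ty: &'static u64 = Box::leak(Box::new(42u64));
    let tagged = NonZeroUsize::new(ty as *const u64 as usize | TYPE_TAG).unwrap();

    // Check the tag, then mask it off to recover the original pointer.
    assert_eq!(tagged.get() & TAG_MASK, TYPE_TAG);
    let ptr = (tagged.get() & !TAG_MASK) as *const u64;
    assert_eq!(unsafe { *ptr }, 42);
}
```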
You'll want to define `Substs` as follows:
```rust
#[repr(C)]
pub struct SubstsRef<'tcx> {
ptr: NonZeroUsize,
marker: PhantomData<Kind<'tcx>>
}
```
Then you can implement `Deref` in this style:
```Rust
impl<'tcx> Deref for SubstsRef<'tcx> {
    type Target = [Kind<'tcx>];
    #[inline]
    fn deref(&self) -> &Self::Target {
        let ptr = self.ptr.get();
        // Safety: relies on the tagging invariants established at construction.
        unsafe {
            match ptr & TAG_MASK {
                REGION_TAG | TYPE_TAG => {
                    // We still match layout with `Kind`.
                    let this: &[Kind; 1] = mem::transmute(self);
                    this
                }
                LIST_TAG => {
                    let inner: &List<Kind> = &*((ptr & !TAG_MASK) as *const _);
                    inner
                }
                _ => intrinsics::unreachable()
            }
        }
    }
}
```
Then you'll basically only need to change the interning functions not to go through the interner for substs of length 1. As long as you always consistently map substs of length 1 to not use an interner, you can do `PartialEq`, `Hash` etc. "by value".
Then you can "cut the ribbon" and do a `@bors try` to see how much perf you gain.
### Stage 2a - Cleanup
Remove occurrences of `InternalSubsts` in comments.
## Items for later investigation
It might be worth investigating whether it's worth extending the small substs optimization to substs of length 2 to get rid of even more interner calls - that would increase the size of `TyS` and things, so it might be a space-time tradeoff. | C-enhancement,I-compiletime,E-mentor,T-compiler,C-optimization | low | Major |
408,226,886 | go | cmd/go: define error codes and use them to describe errors. | Go command, particularly, `go list` and `go mod download`, now serves as the canonical way to retrieve information about modules and packages. Programs and libraries that need to understand how Go handles build and interprets source code are supposed to invoke the command and interpret the command's output. For example, `golang.org/x/tools/go/packages` depends on invocation of `go list`, and we expect some module proxies to utilize `go mod download` or `go list`.
Go command provides -json and other flags to output in a structured form to ease result parsing. But handling error cases programmatically is still difficult. To be useful like a library, Go command line tool output should allow programs to distinguish different failure cases (invalid arguments, failed precondition, resource unavailability (network, tool, disk, ...), permission issues, ...).
Currently,
* Command line exit code: Go command exits with a non-zero exit code for "some" error cases (often 1). But this exit code doesn't carry enough information. Depending on the exit code or error messages from a command line is not reliable anyway.
* Error/Err fields, as in `-json` or `go list -e -f` results, provide some error details, but they are "string" types. Parsing and depending on the message is not reliable or scalable.
<pre>
$ go mod download -json golang.org/x/[email protected]
{
"Path": "golang.org/x/foo",
"Version": "v1.0.1",
"Error": "unrecognized import path \"golang.org/x/foo\" (parse https://golang.org/x/foo?go-get=1: no go-import meta tags ())"
}
$ go mod download -json golang.org/x/[email protected]
go: finding golang.org/x/text v0.3.7
{
"Path": "golang.org/x/text",
"Version": "v0.3.7",
"Error": "unknown revision v0.3.7"
}
</pre>
One option is that we define a set of status codes (like https://github.com/grpc/grpc/blob/master/doc/statuscodes.md or like HTTP, JSON error codes)
and use that to describe the kind of the error.
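For illustration, a hypothetical shape such an error payload could take (the `Code` field is invented for this sketch and does not exist today):
<pre>
$ go mod download -json golang.org/x/[email protected]
{
        "Path": "golang.org/x/text",
        "Version": "v0.3.7",
        "Error": {
                "Code": "NOT_FOUND",
                "Message": "unknown revision v0.3.7"
        }
}
</pre>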
(sidenote: I wished Go2 error proposals also covered encoding/decoding of the error types and root causes but it seems that was not discussed.)
Moreover, we also need to fix the Go command to report the root cause of the error accurately.
For example, I ran the following example while I had no network access. Even though the root cause of the failure is the (temporary) network issue, the error message is not distinguishable from what Go returns when the module does not exist.
<pre>
$ go list -m --versions --json -e golang.org/x/text
{
"Path": "golang.org/x/text",
"Error": {
"Err": "module \"golang.org/x/text\" is not a known dependency"
}
}
</pre>
I am happy to file a separate issue about misleading error messages if there is no existing open issue yet.
@jayconrod @bcmills
@katiehockman @heschik @ianthehat | NeedsInvestigation,GoCommand | low | Critical |
408,240,317 | pytorch | Give clearer guidance about multithreading in PyTorch, and how to disable it | e.g. https://github.com/pytorch/pytorch/issues/16894
There's a lot of people asking about this, but no canonical source of information about it.
cc @jlin27 @mruberry | module: docs,triaged,module: multithreading | low | Major |
408,275,895 | godot | Android in app purchase : can not save item that purchased and using iap.consume("pid") | **Godot version:**
<!-- Specify commit hash if non-official. -->
Godot 3.1 beta 3
**OS/device including version:**
<!-- Specify GPU model and drivers if graphics-related. -->
Android
**Issue description:**
<!-- What happened, and what was expected. -->
I am using `logcat | grep godot` to check the output.
+++
If I use `iap.purchase("pid1")`, it returns ` godot : purchase_success : pid1`.
But when I exit the game and come back, it does not return `has_purchased : pid1` as I expect.
+++
If I set `iap.set_auto_consume(false)` at the beginning of the game, my purchased item does return `has_purchased : pid1` after I exit the game.
So if my game has both consumables and non-consumables, how can I save the purchased items?
+++
I tried to test `iap.consume("pid1")` (replacing `iap.purchase("pid1")` in my source code), but it does not return anything.
**Steps to reproduce:**
1/ Set up IAP and set purchase button:
a.
`iap.sku_details_query(["pid1"])`
`iap.connect("has_purchased",self,"iap_has_purchased")`
`iap.request_purchased()`
`#iap.set_auto_consume(false)`
....
`func iap_has_purchased(item_name):`
`print("iap_has_purchased :: " + str(item_name))`
b.
`func _on_NoAds_pressed() -> void:`
`iap.purchase("pid1")`
`#iap.consume("pid1")`
`iap.connect("purchase_success",self,"iap_purchase_success")`
`iap.connect("consume_success",self,"on_consume_success")`
2/ Remove comment `iap.set_auto_consume(false)`
3/ Comment `#iap.set_auto_consume(false)` and `#iap.purchase("pid1")`
Remove comment `iap.consume("pid1")`
=> not return anything.
**Minimal reproduction project:**
<!-- Recommended as it greatly speeds up debugging. Drag and drop a zip archive to upload it. -->
| bug,platform:android,topic:thirdparty | low | Critical |
408,278,709 | pytorch | Issue with dataloader using pin_memory = True | ## Issue description
Hello, I'm seeing an odd issue with using the pin_memory = true flag with the dataloader. I'm measuring the time taken to transfer data from the host RAM to GPU memory as follows:
```python
transfer_time_start = time.time()
input = input.cuda(args.gpu, non_blocking=False)
target = target.cuda(args.gpu, non_blocking=False)
torch.cuda.synchronize()
transfer_time.update(time.time() - transfer_time_start)
```
With pin_memory = True in the dataloader, this gives me a transfer time of 0.03 sec, which for a batch size of 256 translates into 256 × 224 × 224 × 3 × 4 bytes / 0.03 s ≈ 5.1 GB/s. That is a bit low for my CPU-GPU interconnect (x16, PCIe3), which should deliver ~12 GB/s.
I then tried calling pin_memory() manually on the tensor returned by the enumerate call, as shown below:
```python
for i, (input, target) in enumerate(train_loader):
    input = input.pin_memory()
    # measure data loading time
    data_time.update(time.time() - end)

    transfer_time_start = time.time()
    input = input.cuda(args.gpu, non_blocking=False)
    target = target.cuda(args.gpu, non_blocking=False)
    torch.cuda.synchronize()
    transfer_time.update(time.time() - transfer_time_start)
```
Now the transfer time dropped to 0.014 sec, which translates to ~11 GB/s, as expected. Does anyone have any ideas why setting pin_memory = True in the data loader may not return a tensor already in pinned memory?
Also attached below are two plots showing the transfer time (green plot) from host memory to the GPU.
This plot shows the transfer time when I call pin_memory manually

You can see that the transfer time stays consistently low.
Whereas this one shows the transfer time without calling pin_memory manually. Now the transfer time is highly variable and averages to around 0.03 sec

If I insert a sleep of 50 ms after the enumerate call, I again obtain a nice, low transfer time. This indicates that the data loader simply needs a bit more time to finish the pinned_memory transfer? But inserting the sleep shouldn't be necessary as the dataloader shouldn't return from the enumerate call if the transfer is not complete yet?
## System Info
Please copy and paste the output from our
PyTorch version: 1.0.0
Is debug build: No
CUDA used to build PyTorch: 9.0.176
OS: Ubuntu 16.04.4 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609
CMake version: Could not collect
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 9.1.85
GPU models and configuration:
GPU 0: GeForce GTX 1080 Ti
GPU 1: GeForce GTX 1080 Ti
GPU 2: GeForce GTX 1080 Ti
GPU 3: GeForce GTX 1080 Ti
Nvidia driver version: 387.26
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.1.2
Versions of relevant libraries:
[pip] numpy==1.15.4
[pip] torch==1.0.0
[pip] torchvision==0.2.1
[conda] blas 1.0 mkl
[conda] mkl 2019.1 144
[conda] mkl_fft 1.0.10 py37ha843d7b_0
[conda] mkl_random 1.0.2 py37hd81dba3_0
[conda] pytorch 1.0.0 py3.7_cuda9.0.176_cudnn7.4.1_1 pytorch
[conda] torchvision 0.2.1 py_2 pytorch
cc @SsnL | module: dataloader,triaged | low | Critical |
408,279,735 | go | cmd/go: 'go list -m -json' in an empty directory returns cryptic output | <pre>
$ go version
go version devel +56c9f8e8cf linux/amd64
$ cd $(mktemp -d)
$ GO111MODULE=on go list -m -json
{
"Path": "command-line-arguments",
"Main": true
}
</pre> | NeedsFix,modules | low | Major |
408,283,232 | flutter | Pass the --observatory-port flags in intent to FlutterActivityDelegate | Internal: b/362786296
Dream is asking for us to make the observatory port part of the args that we parse out of the intent that a FlutterActivityDelegate receives, so that they can wait to ensure that observatory is initialized before attempting to start up, and they can use the port for debugging the embedded FlutterView.
Obviously, since Intents can be made by any applications, there are some security issues that would need to be addressed. | c: new feature,tool,engine,customer: dream (g3),P2,team-engine,triaged-engine | low | Critical |
408,288,719 | vue | Provide way to destroy app in SSR | ### What problem does this feature solve?
This issue is related to: https://github.com/vuejs/vue-router/issues/2606
Providing a way to destroy the app or mark the SSR request as complete (maybe on `$ssrContext`) is a potential fix to this problem, though maybe not the best one.
To recap:
A memory leak happens when the `router-view` is programmed to appear conditionally, and the component matching the view has a `beforeRouteEnter` guard and a callback is passed to it's `next(...)` method (e.g. `next(vm => {})`).
This will cause `vue-router` to poll every 16ms until the `router-view` materializes.
In a typical SSR application an instance of the app is created per request, which means the `router-view` will never appear, causing infinitely recursing poll methods.
### What does the proposed API look like?
A potential fix to this would be to detect when the app is destroyed in `vue-router`'s `poll` method, and allow the user to destroy the app that they created in `entry-server.js`.
A simplified example:
```javascript
export default context => {
return new Promise((resolve, reject) => {
const { app, router } = createApp(context)
const { url } = context
router.push(url)
router.onReady(() => {
resolve(app)
}, reject)
}).then(destroyApp)
}
```
<!-- generated by vue-issues. DO NOT REMOVE --> | feature request | low | Minor |
408,362,557 | storybook | Anchor tags within preview panel not working | Apologies if this isn't a bug or if there's a standard way to solve it; I couldn't find one.
**Describe the bug**
An anchor tag within a page in the preview panel iframe doesn't take you to that position within the page, but reloads the top iframe with the page content.
**To Reproduce**
Steps to reproduce the behavior:
- Create a story within the preview panel, and add a link like `<a href="#about">About</a>`
- Add a target for that link further down the page: `<a id="about"></a>`
**Expected behavior**
When clicking the "About" link it would take you to the target (i.e. scroll down to the appropriate spot).
Result: it reloads the page into the top frame. | bug,ui,core | medium | Critical |
408,419,495 | go | cmd/go: misleading error for a version following the module directive in go.mod | A module can not declare its own version in go.mod even though
* this is a basic need, not everyone gets their code through a git dump
* the json structure outputted by "go mod edit -fmt -json" clearly provides for it, and
* the error message reaffirms a version should be possible
`module test/dummy v0.0.1`
```
go: errors parsing go.mod:
go.mod:1: usage: module module/path [version]
```
| NeedsFix,modules | low | Critical |
408,445,242 | rust | Expose raw Stdout/err/in | Currently there is no easy/obvious way to get an unbuffered Stdout/err/in. The types do [exist in stdio](https://github.com/rust-lang/rust/blob/master/src/libstd/io/stdio.rs#L23), however they are not public for reasons not noted.
For example these types would be useful for CLI applications that write a lot of data at once without it getting unnecessarily flushed.
One can use platform-specific extensions such as `from_raw_fd` on unix, and `from_raw_handle` on windows, as a workaround. | T-libs-api,C-feature-request | medium | Major |
408,461,263 | rust | Tracking issue for #[ffi_const] | Annotates an extern C function with C [`const`](https://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html#Common-Function-Attributes) attribute.
https://doc.rust-lang.org/beta/unstable-book/language-features/ffi-const.html | A-FFI,T-lang,B-unstable,C-tracking-issue,S-tracking-needs-summary,T-opsem | low | Minor |
408,461,314 | rust | Tracking issue for #[ffi_pure] | Annotates an extern C function with C [`pure`](https://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html#Common-Function-Attributes) attribute. | A-FFI,T-lang,B-unstable,C-tracking-issue,S-tracking-needs-summary,T-opsem | low | Major |
408,470,032 | TypeScript | Improve typings of Array.map when called on tuples | ## Search Terms
- Array.map
- map tuple
## Suggestion
Using Array.map on a tuple should return a tuple instead of an array. In one of my projects I could achieve that using
```typescript
declare interface Array<T> {
map<U>(callbackfn: (value: T, index: number, array: T[]) => U, thisArg?: any): { [K in keyof this]: U };
}
```
I haven't encountered any negative side effects.
## Use Cases
This can be useful when you want to use tuples as a fixed length array.
## Examples
```typescript
type Vec3D = [number, number, number];
let vec: Vec3D = [1, 2, 3];
let scaledVec: Vec3D = vec.map(x => 2 * x);
```
This is currently an error:
https://www.typescriptlang.org/play/#src=type%20Vec3D%20%3D%20%5Bnumber%2C%20number%2C%20number%5D%3B%0D%0Alet%20vec%3A%20Vec3D%20%3D%20%5B1%2C%202%2C%203%5D%3B%0D%0Alet%20scaledVec%3A%20Vec3D%20%3D%20vec.map(x%20%3D%3E%202%20*%20x)%3B
But with the proposed change it would not be an error:
https://www.typescriptlang.org/play/#src=declare%20interface%20Array%3CT%3E%20%7B%0D%0A%20%20%20%20map%3CU%3E(callbackfn%3A%20(value%3A%20T%2C%20index%3A%20number%2C%20array%3A%20T%5B%5D)%20%3D%3E%20U%2C%20thisArg%3F%3A%20any)%3A%20%7B%20%5BK%20in%20keyof%20this%5D%3A%20U%20%7D%3B%0D%0A%7D%0D%0A%0D%0Atype%20Vec3D%20%3D%20%5Bnumber%2C%20number%2C%20number%5D%3B%0D%0Alet%20vec%3A%20Vec3D%20%3D%20%5B1%2C%202%2C%203%5D%3B%0D%0Alet%20scaledVec%3A%20Vec3D%20%3D%20vec.map(x%20%3D%3E%202%20*%20x)%3B
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code (At least I think so...)
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Needs Investigation,Fix Available,Rescheduled | medium | Critical |
408,500,862 | create-react-app | CRA2 doesn't work with reflect-metadata package | <!--
PLEASE READ THE FIRST SECTION :-)
-->
### Is this a bug report?
Yes
### Did you try recovering your dependencies?
No
### Which terms did you search for in User Guide?
None
### Environment
```bash
Environment Info:
System:
OS: Linux 4.4 Ubuntu 14.04.5 LTS, Trusty Tahr
CPU: x64 Intel(R) Core(TM) i5-6600K CPU @ 3.50GHz
Binaries:
Node: 9.11.2 - /usr/local/bin/node
Yarn: 1.6.0 - ~/.yarn/bin/yarn
npm: 5.6.0 - ~/.npm-global/bin/npm
Browsers:
Chrome: 71.0.3578.98
Firefox: 64.0
npmPackages:
@types/react: ^16.8.2 => 16.8.2
@types/react-dom: ^16.8.0 => 16.8.0
react: ^16.8.1 => 16.8.1
react-dom: ^16.8.1 => 16.8.1
react-scripts: 2.1.3 => 2.1.3
npmGlobalPackages:
create-react-app: 1.5.2
```
### Steps to Reproduce
Just follow the steps from the `Reproducible Demo` part and open console in browser.
### Expected Behavior
Browsers console should print `someProperty type: Number`.
### Actual Behavior
It prints `couldn't get the type :(`.
### Reproducible Demo
```bash
git clone [email protected]:elderapo/cra2-reflect-metadata-bug.git
cd cra2-reflect-metadata-bug
yarn
yarn start
```
### The problem
I think the problem is that Babel completely strips out TS types before the compilation process, which makes it impossible for the `reflect-metadata` package to do its work.
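Roughly the kind of check involved (a reconstruction for illustration; the actual demo repo may differ):
```ts
import 'reflect-metadata';

// With tsc and "emitDecoratorMetadata": true this logs "someProperty type: Number".
// Babel's TypeScript transform strips the types and emits no design:type
// metadata, so the same code logs "couldn't get the type :(".
function LogType(target: any, key: string) {
  const t = Reflect.getMetadata('design:type', target, key);
  console.log(t ? `someProperty type: ${t.name}` : "couldn't get the type :(");
}

class Demo {
  @LogType
  someProperty!: number;
}
```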
| tag: underlying tools | medium | Critical |
408,508,598 | vue | Custom error message from prop validator | ### What problem does this feature solve?
Currently, if a custom validator fails, we get a console error log saying `Invalid prop: custom validator check failed for prop 'email'`, which is not helpful if you're using a third-party component. The only way to find out what failed is to jump into the source code of the component and try to understand what this custom validator does. If the custom validator could provide a custom message, that would immensely improve the developer experience: e.g. instead of `Invalid prop: custom validator check failed for prop 'email'`, it could say `Invalid prop: the prop 'email' should be a valid GMail address.`
### What does the proposed API look like?
No change in the API signature, only in the behavior of the `validator` function: if a validator function throws an error, use its message as the custom prop-validation message. Also, allow `{{name}}` interpolation in the error message. So `email` could be defined as:
``` js
...
props: {
email: {
validator(value) {
if (!value.endsWith('@gmail.com')) throw new Error("the prop '{{name}}' should be a valid GMail address.")
return true
}
}
...
```
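For comparison, a sketch of the userland approximation component authors have to use today (not the proposed API; the message text is just an example):
```js
...
props: {
  email: {
    validator(value) {
      // Log a descriptive message ourselves, since a thrown error or a custom
      // string cannot currently be surfaced through the prop warning.
      const ok = value.endsWith('@gmail.com')
      if (!ok) console.error("Invalid prop: 'email' should be a valid GMail address.")
      return ok
    }
  }
}
...
```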
<!-- generated by vue-issues. DO NOT REMOVE --> | feature request | medium | Critical |
408,520,339 | rust | rpath is incorrect when crate links against compiler libraries | When compiling something like [Miri](https://github.com/solson/miri/) that links against librustc and other compiler libraries, even with `rpath = true` in my `Cargo.toml`, the resulting binary does not work without `LD_LIBRARY_PATH`:
```
$ target/release/miri
target/release/miri: error while loading shared libraries: librustc_driver-e8ad12ee87dce48b.so: cannot open shared object file: No such file or directory
```
Looking at the binary, I noticed a RUNPATH and no RPATH and learned that RPATH got deprecated in favor of RUNPATH:
```
$ readelf -d target/release/miri | egrep 'R(UN)?PATH'
0x000000000000001d (RUNPATH) Library runpath: [$ORIGIN/../../../../rustc.2/build/x86_64-unknown-linux-gnu/stage2/lib/rustlib/x86_64-unknown-linux-gnu/lib:/home/r/src/rust/miri/lib/rustlib/x86_64-unknown-linux-gnu/lib]
```
However, after some research, two things strike me about this:
* First of all, the number of "../" is wrong -- there is one too many. Starting at the directory containing the binary (`~/src/rust/miri/target/release`), going up 4 times leads to `~/src`, while the `rustc.2` directory is at `~/src/rust/rustc.2`.
* ~~Second, according to [this](https://stackoverflow.com/questions/6324131/rpath-origin-not-having-desired-effect), the binary needs to have the `ORIGIN` flag set for `$ORIGIN` to actually work, and that flag is not set:~~ I verified this locally and that flag does not seem to be needed.
```
$ readelf -d target/release/miri | egrep -i flag
0x000000000000001e (FLAGS) BIND_NOW
0x000000006ffffffb (FLAGS_1) Flags: NOW PIE
```
I am also really surprised that it encodes the path to the sysroot as relative to the binary, but the path to other libs compiled in the same crate as an absolute path. Shouldn't it rather be the other way around? | T-compiler,C-bug,A-miri | low | Critical |
408,523,997 | godot | [2D physics] Random flipping of sprite when rotated. | **Godot version:**
3.1 beta3.
**OS/device including version:**
Linux 64-bit. Unlikely to be different under another OS.
**Issue description:**
The player sprite uses a `Raycast2D` to detect the angle of the floor underneath it (`PlayerPivot`). It also has this code:
(to disable when in the air):
```
if ($PlayerPivot.enabled): # If in the air and the pivot/floor edge detectors are enabled, disable them.
    $PlayerPivot.enabled = false
    $FloorEdgeLeft.enabled = false
    $FloorEdgeRight.enabled = false
    rotation = 0 # Avoid the player character being at odd angles when in the air and returning to the ground.
```
(to enable when on the floor, and check/change rotation angle):
```
if (!$"PlayerPivot".enabled): # If on the ground and the pivot/floor edge detectors are not enabled, enable them.
    $"PlayerPivot".enabled = true
    $"FloorEdgeLeft".enabled = true
    $"FloorEdgeRight".enabled = true
else: # Make sure the player is angled to the ground.
    ground_normal = $"PlayerPivot".get_collision_normal ()
    ground_angle = (floor_normal.angle_to (ground_normal))
    rotation = (rotation if player_speed < 0.05 else ground_angle)
```
This usually works as expected - the player sprite "follows" the angle of the floor underneath it. But sometimes, for reasons not entirely clear, sometimes the sprite will flip (usually by 90 degrees, but I've noticed other angles too). It seems to do this especially when having stopped mid-way on a slope and started to run up or down it, or running off the end of a slope and landing on a higher platform. This appears to be completely at random and is not reliably reproducible as far as I have determined.
I am looking at using the collision info for the player sprite's collision shape directly instead of a Raycast, but I'm not sure if I'd still run into the same problem. I'm also not 100% sure this isn't my error.
**Minimal reproduction project:**
https://github.com/Sslaxx/Sonic_Outbreak/commit/1ab26ed73835894d1dbacf4396320cba91f04855 (https://github.com/Sslaxx/Sonic_Outbreak/tree/godot_sonic_engine) | bug,topic:core,topic:physics | low | Critical |
408,526,151 | puppeteer | The `page.setCookie` method should also accept raw cookies | <!--
STEP 1: Are you in the right place?
- For general technical questions or "how to" guidance, please search StackOverflow for questions tagged "puppeteer" or create a new post.
https://stackoverflow.com/questions/tagged/puppeteer
- For issues or feature requests related to the DevTools Protocol (https://chromedevtools.github.io/devtools-protocol/), file an issue there:
https://github.com/ChromeDevTools/devtools-protocol/issues/new.
- Problem in Headless Chrome? File an issue against Chromium's issue tracker:
https://bugs.chromium.org/p/chromium/issues/entry?components=Internals%3EHeadless&blocking=705916
For issues, feature requests, or setup troubles with Puppeteer, file an issue right here!
-->
### Steps to reproduce
**Tell us about your environment:**
* Puppeteer version: 1.12.2
* Platform / OS version: macOS 10.14.3
* URLs (if applicable):
* Node.js version: 10.15.0
**What steps will reproduce the problem?**
```js
await page.setCookie('foo=bar');
```
**What is the expected result?**
It works.
**What happens instead?**
It doesn't.
---
Not being able to set raw cookies is very inconvenient. There are multiple situations where it would be useful:
1. Copy-pasting cookies from DevTools.
2. Accepting cookies as user input in a command-line tool.
3. Getting cookies from a server request, for example in Express where you read the cookie header, and then pass them to Puppeteer.
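For reference, the kind of hand-rolled parsing callers currently resort to (a sketch that assumes plain `name=value` pairs, ignores attributes like `Path` or `Expires`, and uses a placeholder URL):
```js
// Turn a raw cookie header string into page.setCookie() arguments.
// Run inside an async function with `page` already created.
const raw = 'foo=bar; theme=dark';
const cookies = raw.split(';').map(pair => {
  const [name, ...rest] = pair.trim().split('=');
  return { name, value: rest.join('='), url: 'https://example.com' };
});
await page.setCookie(...cookies);
```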
Yes, I could find an npm package for this, but the ones I found are not very good (`tough-cookie` included). I also think it's such a basic thing that a big percentage of Puppeteer users will need, that it should be supported by Puppeteer natively. | feature,confirmed | medium | Critical |
408,548,048 | terminal | Ambiguous width character in CJK environment | The operation in the English environment is perfect. However, the behavior in the CJK environment is unstable.
Type β, (\b), β, (\b), β, ...; because the sequence is insufficient, the character shifts one cell to the right.

I thought it was my mistake, so I tried drawing by querying the cursor position, but that did not solve it.
Do you have any corrections? | Issue-Feature,Area-Rendering,Product-Conpty | low | Major |
408,549,914 | godot | Area2D duplicate signal when changing CollisionPolygon2D Polygon property | **Godot version:**
3.1 Beta 3
**OS/device including version:**
Windows 10 Pro
**Issue description:**
I'm trying to create dynamic 2d water, with damping and dispersion physics in the water surface. The water has Area2D with CollisionPolygon2D to detect RigidBody2D collision to calculate speed and apply force in a surface vertice. But the CollisionPolygon2D has the same poygon as the body (note: the water body and surface is drawed, not using a Polygon2D), so every frame is changing. But when a body enter the water, duplicate the "splash" function, but when the Collision is not changed, the "splash" function is called once.
Note: I already tried using the _physics_process(delta) function instead of the _process(delta) function to change the polygon, but the signal fires even more times.
**Steps to reproduce:**
1. Create a Area2D;
2. Create a CollisionPolygon2D as child;
3. In a script, change the 'polygon' property of the CollisionPolygon2D every frame (see the sketch after these steps);
4. Connect the 'body_entered' signal to the script;
5. To analyse the output, put a print function on the signal;
6. Put a RigidBody2D (or other body falling to the Area2D) on the scene;
7. Run the scene.
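A minimal sketch of steps 1-5 (the node path and the placeholder rectangle are only illustrative):
```gdscript
# Attached to the Area2D. Rebuilds the collision polygon every frame and
# prints when a body enters, mirroring the steps above.
extends Area2D

func _ready():
    connect("body_entered", self, "_on_body_entered")

func _process(delta):
    # Placeholder polygon; the real project rebuilds the water surface here.
    $CollisionPolygon2D.polygon = PoolVector2Array(
        [Vector2(0, 0), Vector2(64, 0), Vector2(64, 32), Vector2(0, 32)])

func _on_body_entered(body):
    print("splash: ", body.name)
```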
**Minimal reproduction project:**
[collision duplication.zip](https://github.com/godotengine/godot/files/2849081/collision.duplication.zip)
| bug,confirmed,documentation,topic:physics | low | Major |
408,569,530 | create-react-app | Enhance CLI validation | ### Is this a bug report?
No, we've reverted #6080, but want to continue this work in the near future.
### Steps to Reproduce
Create an application using `npx create-react-app option1 option2 option3`. A project called `option1` is created, and other arguments are ignored without user feedback.
### Expected Behavior
When creating an application with `npx create-react-app [options]`, I should be warned if an option doesn't exist or is invalid.
### Actual Behavior
When creating an application with `npx create-react-app [options]`, I can pass in non-existent arguments. | issue: proposal,tag: enhancement | low | Critical |
408,574,777 | pytorch | Allow positional arguments to be passed as kwargs for autograd custom Function | ## 🐛 Bug
According to the [Extending PyTorch](https://pytorch.org/docs/stable/notes/extending.html) doc page, optional arguments are allowed in a `torch.autograd.Function`. This works well when passing the optional argument as a positional argument, but fails when passing the optional argument as a keyword argument.
## To Reproduce
For example, consider the following minimal non-working example, which implements the custom function `y = factor*sin(x)`:
```python
import torch
import numpy as np
class MySin(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input, factor=1):
        args = input.detach().numpy()
        res = factor*np.sin(args)
        ctx.save_for_backward(input)
        ctx.factor = factor
        return torch.from_numpy(res)

    @staticmethod
    def backward(ctx, grad_output):
        args = ctx.saved_tensors[0].detach().numpy()
        jacobian = ctx.factor*np.diag(np.cos(args))
        grad_input = torch.from_numpy(grad_output.numpy().T @ jacobian)
        return grad_input, None
my_sin = MySin.apply
```
Calling it with `factor` passed positional, or not at all, works fine:
```python
>>> x = torch.autograd.Variable(torch.tensor([0.812]), requires_grad=True)
>>> out = my_sin(x, 6)
>>> out.backward()
>>> print(x.grad, 6*np.cos(0.812))
tensor([4.1283]) 4.12829088343
```
However, passing it as a keyword argument results in an error:
```python
>>> x = torch.autograd.Variable(torch.tensor([0.812]), requires_grad=True)
>>> out = my_sin(x, factor=6)
>>> out.backward()
>>> x.grad
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-60-d83054d85504> in <module>()
1 x = torch.autograd.Variable(torch.tensor([0.812]), requires_grad=True)
----> 2 out = my_sin(x, factor=6)
3 out.backward()
4 x.grad
TypeError: apply() takes no keyword arguments
```
## Expected behavior
Passing the optional argument as a keyword argument should match the behaviour when passing as a positional argument:
```python
>>> x = torch.autograd.Variable(torch.tensor([0.812]), requires_grad=True)
>>> out = my_sin(x, factor=6)
>>> out.backward()
>>> print(x.grad, 6*np.cos(0.812))
tensor([4.1283]) 4.12829088343
```
This is useful for two reasons:
1. It is consistent with standard Python use, and helps to avoid unexpected behaviour
2. Sometimes it is useful to use custom functions to wrap 'black-box' NumPy functions which evaluate their value and their gradient, and don't have a torch equivalent. In a lot of cases, these depend on keyword arguments, and it would be nice to be able to specify `**kwargs` in the static `forward` method signature.
## Environment
PyTorch version: 1.0.1.post2
Is debug build: No
CUDA used to build PyTorch: None
OS: Ubuntu 14.04.5 LTS
GCC version: (Ubuntu 5.5.0-12ubuntu1~14.04) 5.5.0 20171010
CMake version: version 3.2.2
Python version: 3.6
Is CUDA available: No
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Versions of relevant libraries:
[pip3] numpy==1.13.3
[pip3] torch==1.0.1.post2
[pip3] torchvision==0.2.1
[conda] Could not collect
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7 | module: autograd,triaged,has workaround,actionable | low | Critical |
408,581,892 | pytorch | Multiple CPU processes using same GPU model for inference | Hi guys,
under Windows, how can I have multiple processes, spawned by multiprocessing, use the same GPU model for prediction? I am using Python 3.6 and pytorch 1.0.
I read this:
https://pytorch.org/docs/stable/multiprocessing.html#module-torch.multiprocessing
https://pytorch.org/docs/stable/notes/multiprocessing.html
According to this it seems possible: https://github.com/pytorch/pytorch/issues/3871
but under Windows?
According to this it seems it's not possible unter Windows:
https://pytorch.org/docs/stable/notes/windows.html#cuda-ipc-operations
I only want to predict values from the same trained model from multiple spawned CPU processes, but currently I can't make it work.
I get this error:
THCudaCheck FAIL file=c:\a\w\1\s\tmp_conda_3.6_091443\conda\conda-bld\pytorch_1544087948354\work\torch\csrc\generic\StorageSharing.cpp line=232 error=71 : operation not supported
Predictions from the MainProcess on the GPU work fine, and multiprocessing on the CPU works fine. But my model is pretty large right now (deep learning with residual networks), so I thought it should be easy to predict only on the GPU and do the rest on the CPU (the multiprocess self-play stuff which needs those predictions).
Can I make this somehow work under windows?
cc @peterjc123 | module: windows,module: multiprocessing,triaged | medium | Critical |
408,586,251 | rust | Move as much unsafe code as possible out of librustc. | I am currently aware of several uses of `unsafe` code (feel free to add more):
* [ ] interning / lifting typesystem entities (in)to `TyCtxt`
* maybe we can figure out a generalization in the `salsa` ecosystem (cc @nikomatsakis)
* [ ] `ty::List<T>`, which is effectively a `Thin<[T]>`
* could be moved to `arena` or a crate `arena` can depend on
* potential crates.io replacement: [`thin-slice`](https://crates.io/crates/thin-slice)
* [ ] `ty::subst::Kind`, which is `ty::subst::UnpackedKind` but pointer-tagged
* could be generalized to a 1-bit or 2-bit pointer-tagging abstraction
* potential crates.io replacement: [`tagged_ptr`](https://crates.io/crates/tagged_ptr)
cc @rust-lang/compiler @RalfJung | C-cleanup,T-compiler | low | Minor |
408,598,460 | godot | Headless mono build not detecting installed mono libs | **Godot version:**
master, commit b67955afcae7a63a37fba7d046c7217bd8a6c3c3
**OS/device including version:**
Kubuntu 18.04
**Issue description:**
I was following the build instructions for building Godot with mono, but when I run the output executable against my Hello World C# project, I get the following output in my terminal:
> The assembly mscorlib.dll was not found or could not be loaded.
> It should have been installed in the `/media/andrew/UbuntuStorage1/Godot-Mono/godot-master/bin/data_Godot/Mono/lib/mono/4.5/mscorlib.dll' directory.
**Steps to reproduce:**
1. Install mono using the repositories found at https://www.mono-project.com/download/stable/
2. Create a Godot project (using the Mono binary available for download) with a GDScript that calls a simple C# class. In the C# class, include a Console.write() call to display "Hello World".
3. Follow Godot build instructions at https://docs.godotengine.org/en/latest/development/compiling/compiling_with_mono.html#requirements
4. Run the output Godot executable from step 3 against the project from step 2.
**Minimal reproduction project:**
[HiredGuns.zip](https://github.com/godotengine/godot/files/2849564/HiredGuns.zip)
| usability,topic:dotnet | low | Minor |
408,619,401 | vscode | Split JSON settings editor discussion issue | So as you can see here:

when I open my `settings.json` file I don't have it split up. Before updating, whenever I opened `settings.json` I had all the settings on the left-hand side and those changed by me on the right-hand side. Now I only have the part which was previously on the right.
<!-- Use Help > Report Issue to prefill these. -->
- VSCode Version: 1.31.0
- OS Version: Windows 10
Steps to Reproduce:
1. Open settings
2. Click `{}` to open `settings.json`
<!-- Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
| settings-editor,under-discussion | medium | Major |
408,620,524 | deno | Public API for compilers | For tracking purposes, please don't work on this without discussing with Ry or myself.
Having a public API that is similar to how we perform TypeScript compilation is a good idea. It would allow JS->JS transpilation (e.g. those who need Babel custom plugins or Flow) or other languages (e.g. CoffeeScript).
Related to #1738 and some other work in rationalising the compiler APIs internally, but we should be able to support loading a compiler in a web worker and instructing the privileged side what resources should be sent to that runtime compiler. | feat,public API | high | Critical |
408,687,109 | vscode | More flexible input variables: Multiple values & labels | I love the new input variables feature and was eager to thin out the tasks lists of my projects. Unfortunately, I realized that the mechanism does not suffice for most cases. Items of the `options` array may be cryptic ids and often a choice represents a combination of variables. May I suggest that you support multiple values and, consequently, labels?
Example:
```json
{
"tasks": [{
"label": "π³ Docker build",
"type": "shell",
"command": "docker",
"args": ["build", "-t", "${input:image.tag}", "."],
"options": {
"cwd": "${workspaceFolder}/${input:image.context}"
},
"problemMatcher": []
}],
"inputs": [{
"id": "image",
"type": "pickString",
"description": "Which image to build?",
"options": [{
"label": "π΅ Billing",
"value": {
"tag": "my.azurecr.io/myshop-billing",
"context": "backend/services/billing"
}
}]
}]
}
```
Valid values might be:
```json
{
"options": [
"a string",
{ "value": "with a", "label": ":)" },
{ "label": "with multiple", "value": { "s": "" } }
]
}
```
| feature-request,tasks | low | Major |
408,701,152 | go | net/http: ServeFile panics when StripPrefix over-strips and results in empty path | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.11.4 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
YES
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/ggicci/Library/Caches/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/ggicci/workspace"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/Cellar/go/1.11.4/libexec"
GOTMPDIR=""
GOTOOLDIR="/usr/local/Cellar/go/1.11.4/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/ht/hdwl47w16m10r45w89drkkph0000gn/T/go-build233590303=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>
### What did you do?
<!--
If possible, provide a recipe for reproducing the error.
A complete runnable program is good.
A link on play.golang.org is best.
-->
I used `http.StripPrefix` together with `http.ServeFile` to strip a URL prefix and serve a folder. The HTTP server panics and the web page receives nothing when I open http://localhost:8080/download/.
Here's a code snippet to reproduce the problem:
```go
package main
import "net/http"
var myHandler = http.HandlerFunc(func(rw http.ResponseWriter, r *http.Request) {
http.ServeFile(rw, r, "./files")
})
func main() {
http.Handle("/download/", http.StripPrefix("/download/", myHandler))
http.ListenAndServe(":8080", nil)
}
```
### What did you expect to see?
I should see the contents of my `files` folder: a file list or index page.
### What did you see instead?
No HTTP response; instead, panic traces are written to stderr.
<details>
<pre>
2019/02/11 17:43:17 http: panic serving [::1]:56154: runtime error: index out of range
goroutine 19 [running]:
net/http.(*conn).serve.func1(0xc00008a960)
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:1746 +0xd0
panic(0x1269ae0, 0x14af4d0)
/usr/local/Cellar/go/1.11.4/libexec/src/runtime/panic.go:513 +0x1b9
net/http.serveFile(0x12fff00, 0xc00011a000, 0xc000122000, 0x12fdce0, 0xc000010030, 0x12b95f1, 0x5, 0x0)
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/fs.go:586 +0xac9
net/http.ServeFile(0x12fff00, 0xc00011a000, 0xc000122000, 0x12b95ef, 0x7)
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/fs.go:681 +0x13f
main.glob..func1(0x12fff00, 0xc00011a000, 0xc000122000)
/Users/ggicci/workspace/src/ggicci.me/go/reproduce-gohttostripprefix/main.go:6 +0x54
net/http.HandlerFunc.ServeHTTP(0x12cc038, 0x12fff00, 0xc00011a000, 0xc000122000)
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:1964 +0x44
net/http.StripPrefix.func1(0x12fff00, 0xc00011a000, 0xc00010c000)
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:2003 +0x18b
net/http.HandlerFunc.ServeHTTP(0xc00008ecc0, 0x12fff00, 0xc00011a000, 0xc00010c000)
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:1964 +0x44
net/http.(*ServeMux).ServeHTTP(0x14ba820, 0x12fff00, 0xc00011a000, 0xc00010c000)
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:2361 +0x127
net/http.serverHandler.ServeHTTP(0xc000091040, 0x12fff00, 0xc00011a000, 0xc00010c000)
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:2741 +0xab
net/http.(*conn).serve(0xc00008a960, 0x1300100, 0xc000096200)
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:1847 +0x646
created by net/http.(*Server).Serve
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:2851 +0x2f5
2019/02/11 17:43:17 http: panic serving [::1]:56174: runtime error: index out of range
goroutine 20 [running]:
net/http.(*conn).serve.func1(0xc00008aa00)
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:1746 +0xd0
panic(0x1269ae0, 0x14af4d0)
/usr/local/Cellar/go/1.11.4/libexec/src/runtime/panic.go:513 +0x1b9
net/http.serveFile(0x12fff00, 0xc00015e000, 0xc00016c000, 0x12fdce0, 0xc00015a010, 0x12b95f1, 0x5, 0x0)
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/fs.go:586 +0xac9
net/http.ServeFile(0x12fff00, 0xc00015e000, 0xc00016c000, 0x12b95ef, 0x7)
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/fs.go:681 +0x13f
main.glob..func1(0x12fff00, 0xc00015e000, 0xc00016c000)
/Users/ggicci/workspace/src/ggicci.me/go/reproduce-gohttostripprefix/main.go:6 +0x54
net/http.HandlerFunc.ServeHTTP(0x12cc038, 0x12fff00, 0xc00015e000, 0xc00016c000)
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:1964 +0x44
net/http.StripPrefix.func1(0x12fff00, 0xc00015e000, 0xc00010c100)
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:2003 +0x18b
net/http.HandlerFunc.ServeHTTP(0xc00008ecc0, 0x12fff00, 0xc00015e000, 0xc00010c100)
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:1964 +0x44
net/http.(*ServeMux).ServeHTTP(0x14ba820, 0x12fff00, 0xc00015e000, 0xc00010c100)
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:2361 +0x127
net/http.serverHandler.ServeHTTP(0xc000091040, 0x12fff00, 0xc00015e000, 0xc00010c100)
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:2741 +0xab
net/http.(*conn).serve(0xc00008aa00, 0x1300100, 0xc000096300)
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:1847 +0x646
created by net/http.(*Server).Serve
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:2851 +0x2f5
2019/02/11 17:43:17 http: panic serving [::1]:56175: runtime error: index out of range
goroutine 21 [running]:
net/http.(*conn).serve.func1(0xc00008aaa0)
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:1746 +0xd0
panic(0x1269ae0, 0x14af4d0)
/usr/local/Cellar/go/1.11.4/libexec/src/runtime/panic.go:513 +0x1b9
net/http.serveFile(0x12fff00, 0xc000194000, 0xc00010c300, 0x12fdce0, 0xc000088d50, 0x12b95f1, 0x5, 0x0)
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/fs.go:586 +0xac9
net/http.ServeFile(0x12fff00, 0xc000194000, 0xc00010c300, 0x12b95ef, 0x7)
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/fs.go:681 +0x13f
main.glob..func1(0x12fff00, 0xc000194000, 0xc00010c300)
/Users/ggicci/workspace/src/ggicci.me/go/reproduce-gohttostripprefix/main.go:6 +0x54
net/http.HandlerFunc.ServeHTTP(0x12cc038, 0x12fff00, 0xc000194000, 0xc00010c300)
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:1964 +0x44
net/http.StripPrefix.func1(0x12fff00, 0xc000194000, 0xc00010c200)
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:2003 +0x18b
net/http.HandlerFunc.ServeHTTP(0xc00008ecc0, 0x12fff00, 0xc000194000, 0xc00010c200)
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:1964 +0x44
net/http.(*ServeMux).ServeHTTP(0x14ba820, 0x12fff00, 0xc000194000, 0xc00010c200)
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:2361 +0x127
net/http.serverHandler.ServeHTTP(0xc000091040, 0x12fff00, 0xc000194000, 0xc00010c200)
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:2741 +0xab
net/http.(*conn).serve(0xc00008aaa0, 0x1300100, 0xc000096480)
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:1847 +0x646
created by net/http.(*Server).Serve
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:2851 +0x2f5
2019/02/11 17:43:17 http: panic serving [::1]:56176: runtime error: index out of range
goroutine 5 [running]:
net/http.(*conn).serve.func1(0xc0001ae000)
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:1746 +0xd0
panic(0x1269ae0, 0x14af4d0)
/usr/local/Cellar/go/1.11.4/libexec/src/runtime/panic.go:513 +0x1b9
net/http.serveFile(0x12fff00, 0xc00011a0e0, 0xc000122300, 0x12fdce0, 0xc000010080, 0x12b95f1, 0x5, 0x0)
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/fs.go:586 +0xac9
net/http.ServeFile(0x12fff00, 0xc00011a0e0, 0xc000122300, 0x12b95ef, 0x7)
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/fs.go:681 +0x13f
main.glob..func1(0x12fff00, 0xc00011a0e0, 0xc000122300)
/Users/ggicci/workspace/src/ggicci.me/go/reproduce-gohttostripprefix/main.go:6 +0x54
net/http.HandlerFunc.ServeHTTP(0x12cc038, 0x12fff00, 0xc00011a0e0, 0xc000122300)
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:1964 +0x44
net/http.StripPrefix.func1(0x12fff00, 0xc00011a0e0, 0xc000122100)
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:2003 +0x18b
net/http.HandlerFunc.ServeHTTP(0xc00008ecc0, 0x12fff00, 0xc00011a0e0, 0xc000122100)
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:1964 +0x44
net/http.(*ServeMux).ServeHTTP(0x14ba820, 0x12fff00, 0xc00011a0e0, 0xc000122100)
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:2361 +0x127
net/http.serverHandler.ServeHTTP(0xc000091040, 0x12fff00, 0xc00011a0e0, 0xc000122100)
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:2741 +0xab
net/http.(*conn).serve(0xc0001ae000, 0x1300100, 0xc00005e200)
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:1847 +0x646
created by net/http.(*Server).Serve
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:2851 +0x2f5
2019/02/11 17:44:47 http: panic serving [::1]:56544: runtime error: index out of range
goroutine 35 [running]:
net/http.(*conn).serve.func1(0xc0001c2000)
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:1746 +0xd0
panic(0x1269ae0, 0x14af4d0)
/usr/local/Cellar/go/1.11.4/libexec/src/runtime/panic.go:513 +0x1b9
net/http.serveFile(0x12fff00, 0xc00015e0e0, 0xc00016c200, 0x12fdce0, 0xc00015a070, 0x12b95f1, 0x5, 0x0)
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/fs.go:586 +0xac9
net/http.ServeFile(0x12fff00, 0xc00015e0e0, 0xc00016c200, 0x12b95ef, 0x7)
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/fs.go:681 +0x13f
main.glob..func1(0x12fff00, 0xc00015e0e0, 0xc00016c200)
/Users/ggicci/workspace/src/ggicci.me/go/reproduce-gohttostripprefix/main.go:6 +0x54
net/http.HandlerFunc.ServeHTTP(0x12cc038, 0x12fff00, 0xc00015e0e0, 0xc00016c200)
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:1964 +0x44
net/http.StripPrefix.func1(0x12fff00, 0xc00015e0e0, 0xc00016c100)
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:2003 +0x18b
net/http.HandlerFunc.ServeHTTP(0xc00008ecc0, 0x12fff00, 0xc00015e0e0, 0xc00016c100)
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:1964 +0x44
net/http.(*ServeMux).ServeHTTP(0x14ba820, 0x12fff00, 0xc00015e0e0, 0xc00016c100)
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:2361 +0x127
net/http.serverHandler.ServeHTTP(0xc000091040, 0x12fff00, 0xc00015e0e0, 0xc00016c100)
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:2741 +0xab
net/http.(*conn).serve(0xc0001c2000, 0x1300100, 0xc0001580c0)
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:1847 +0x646
created by net/http.(*Server).Serve
/usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:2851 +0x2f5
</pre>
</details>
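For reference, here is a sketch of a workaround rather than a fix: `http.FileServer` prepends a leading `/` to an empty request path before serving, so routing the stripped request through it avoids the empty `r.URL.Path` that triggers the panic above.

```go
package main

import "net/http"

func main() {
	// http.FileServer normalizes an empty request path to "/" before serving,
	// so the over-stripped request for /download/ no longer panics.
	fs := http.FileServer(http.Dir("./files"))
	http.Handle("/download/", http.StripPrefix("/download/", fs))
	http.ListenAndServe(":8080", nil)
}
```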
| NeedsFix | low | Critical |
408,714,702 | pytorch | Seg fault with test_rnn_retain_variables on ppc64le | ## 🐛 Bug
Observed a segfault with the test case `test_rnn_retain_variables`.
Other tests that crash are:
```
test_cuda_rnn_fused
test_rnn_initial_hidden_state
test_einsum
```
## To Reproduce
The minimal code to reproduce is:
```
import torch
import torch.nn as nn
device="cpu"
dtype=torch.double
rnn = nn.GRU(10, 20, num_layers=2).to(device,dtype)
input = torch.randn(5, 6, 10, device=device, dtype=dtype, requires_grad=True)
output = rnn(input)
```
Other observations are:
a. The above code works for `device="cuda"`
b. The above code works for `dtype=torch.float`
c. The above code works if the following is used for input:
`input = torch.randn(3, 3, 10, device=device, dtype=dtype, requires_grad=True)`
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
## Expected behavior
The tests should pass.
## Environment
```
PyTorch version: 1.0.0a0+7998997 (with some local changes)
Is debug build: No
CUDA used to build PyTorch: 10.0.130
OS: Red Hat Enterprise Linux Server 7.6 (Maipo)
GCC version: (GCC) 4.8.5 20150623 (Red Hat 4.8.5-36)
CMake version: Could not collect
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 10.0.130
GPU models and configuration:
GPU 0: Tesla V100-SXM2-16GB
GPU 1: Tesla V100-SXM2-16GB
GPU 2: Tesla V100-SXM2-16GB
GPU 3: Tesla V100-SXM2-16GB
Nvidia driver version: 410.72
cuDNN version: Could not collect
```
## Additional context
<!-- Add any other context about the problem here. -->
| module: crash,triaged,module: POWER | low | Critical |
408,746,910 | godot | Trying to "get_plugin_name" triggers an error | <!-- Please search existing issues for potential duplicates before filing yours:
https://github.com/godotengine/godot/issues?q=is%3Aissue
-->
**Godot version:** 3.1.beta3.official
<!-- Specify commit hash if non-official. -->
**OS/device including version:** Solus 3.9999
<!-- Specify GPU model and drivers if graphics-related. -->
**Issue description:**
<!-- What happened, and what was expected. -->

In the documentation there is a method `EditorPlugin.get_plugin_name() -> String`, but trying to call that method fails with:

```
Invalid call. Nonexistent function 'get_plugin_name' in base 'EditorPlugin (plugin.gd)'
```
**Steps to reproduce:**
- Create an EditorPlugin
- In the plugin script call `get_plugin_name`
**Minimal reproduction project:**
<!-- Recommended as it greatly speeds up debugging. Drag and drop a zip archive to upload it. -->
| usability,topic:plugin | low | Critical |
408,780,016 | go | html/template: SGML processing Instructions escaped | Go version `go1.12beta2`.
```go
package main
import (
"bytes"
"html/template"
"log"
"strings"
)
func main() {
log.SetFlags(0)
tmpl, err := template.New("").Parse(`<?PITarget PIContent?>`)
if err != nil {
log.Fatal(err)
}
var b bytes.Buffer
tmpl.Execute(&b, nil)
s := b.String()
if strings.Contains(s, "&lt;") {
log.Fatal(b.String())
}
}
```
The above prints `&lt;?PITarget PIContent?&gt;`.
I would expect it to be left untouched, e.g. `<?PITarget PIContent?>`.
https://en.wikipedia.org/wiki/Processing_Instruction
| NeedsDecision | low | Major |
408,780,625 | godot | Add a flag to create a project from the CLI | <!-- Please search existing issues for potential duplicates before filing yours:
https://github.com/godotengine/godot/issues?q=is%3Aissue
-->
**Godot version:**
<!-- Specify commit hash if non-official. -->
3.0.6
**OS/device including version:**
<!-- Specify GPU model and drivers if graphics-related. -->
Linux
**Issue description:**
<!-- What happened, and what was expected. -->
Could the godot CLI have a `-i/--init` flag that creates a new project at the provided path?
I often spin up new projects in `/tmp` to just quickly test things out, and I find the current process a little clumsy. In particular, regardless of where I start `godot` from, the project path always defaults to `$HOME`, so I have to navigate through a file dialog when I already had a shell open exactly where I wanted a new project. | enhancement,topic:editor | low | Major |
408,782,162 | create-react-app | 2.1.4 Update cause "Cannot find module '/Users/.../node_modules/react-scripts/node_modules/@babel/runtime/helpers/interopRequireDefault' from 'setupTests.ts'" | ### Environment
```
System:
OS: macOS 10.14.2
CPU: x64 Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz
Binaries:
Node: 11.6.0 - ~/.nvm/versions/node/v11.6.0/bin/node
Yarn: 1.12.3 - ~/.nvm/versions/node/v11.6.0/bin/yarn
npm: 6.5.0-next.0 - ~/.nvm/versions/node/v11.6.0/bin/npm
Browsers:
Chrome: 72.0.3626.96
Firefox: 64.0
Safari: 12.0.2
npmPackages:
react: ^16.8.1 => 16.8.1
react-dom: ^16.8.1 => 16.8.1
react-scripts: ^2.1.4 => 2.1.4
npmGlobalPackages:
create-react-app: Not Found
```
### Steps to Reproduce
Starting from my project with `react-scripts: 2.1.3`,
- `yarn`
- `yarn test` working
- `yarn add [email protected]`
- `yarn test` working
- `yarn-deduplicate` [what is this](https://github.com/atlassian/yarn-deduplicate)
Introducing these modifications in the `yarn.lock`
```
-"@babel/[email protected]":
+"@babel/[email protected]", "@babel/plugin-transform-destructuring@^7.0.0", "@babel/plugin-transform-destructuring@^7.2.0":
version "7.3.2"
-"@babel/plugin-transform-destructuring@^7.0.0", "@babel/plugin-transform-destructuring@^7.2.0":
- version "7.2.0"
-
-ajv@^6.1.0, ajv@^6.5.3, ajv@^6.5.5:
- version "6.7.0"
-
-ajv@^6.9.1:
+ajv@^6.1.0, ajv@^6.5.3, ajv@^6.5.5, ajv@^6.9.1:
version "6.9.1"
-autoprefixer@^9.3.1:
- version "9.4.6"
-
-autoprefixer@^9.4.2:
+autoprefixer@^9.3.1, autoprefixer@^9.4.2:
version "9.4.7"
-caniuse-lite@^1.0.0, caniuse-lite@^1.0.30000884, caniuse-lite@^1.0.30000929:
- version "1.0.30000929"
-
-caniuse-lite@^1.0.30000918, caniuse-lite@^1.0.30000932:
+caniuse-lite@^1.0.0, caniuse-lite@^1.0.30000884, caniuse-lite@^1.0.30000918, caniuse-lite@^1.0.30000929, caniuse-lite@^1.0.30000932:
version "1.0.30000936"
[email protected]:
[email protected], core-js@^2.4.0, core-js@^2.4.1, core-js@^2.5.0, core-js@^2.5.7:
version "2.6.4"
-core-js@^2.4.0, core-js@^2.4.1, core-js@^2.5.0, core-js@^2.5.7:
- version "2.6.3"
-
-postcss@^7.0.0, postcss@^7.0.1, postcss@^7.0.13, postcss@^7.0.2, postcss@^7.0.5:
- version "7.0.13"
-
-postcss@^7.0.14, postcss@^7.0.6:
+postcss@^7.0.0, postcss@^7.0.1, postcss@^7.0.14, postcss@^7.0.2, postcss@^7.0.5, postcss@^7.0.6:
version "7.0.14"
-react-error-overlay@^5.1.0:
- version "5.1.2"
-
-react-error-overlay@^5.1.3:
+react-error-overlay@^5.1.0, react-error-overlay@^5.1.3:
version "5.1.3"
[email protected], resolve@^1.9.0:
[email protected], resolve@^1.1.6, resolve@^1.1.7, resolve@^1.2.0, resolve@^1.3.2, resolve@^1.5.0, resolve@^1.6.0, resolve@^1.8.1, resolve@^1.9.0:
version "1.10.0"
-resolve@^1.1.6, resolve@^1.1.7, resolve@^1.2.0, resolve@^1.3.2, resolve@^1.5.0, resolve@^1.6.0, resolve@^1.8.1:
- version "1.9.0"
-
```
- `yarn`
- `yarn test` not working anymore
```
Test suite failed to run
Cannot find module '/Users/.../node_modules/react-scripts/node_modules/@babel/runtime/helpers/interopRequireDefault' from 'setupTests.ts'
1 | import { configure } from 'enzyme';
2 | import Adapter from 'enzyme-adapter-react-16';
> 3 | import 'jest-extended';
| ^
4 | import 'jest-localstorage-mock';
5 | import { getSnapshotDiffSerializer, toMatchDiffSnapshot } from 'snapshot-diff';
6 |
at Resolver.resolveModule (node_modules/jest-resolve/build/index.js:221:17)
at Object.<anonymous> (src/setupTests.ts:3:30)
```
If needed I'll try to make a reproducible demo. My actual project is for work (and not open-source...).
| issue: bug,difficulty: complex,issue: needs investigation | high | Critical |
408,799,268 | pytorch | torch.multiprocessing.pool.Pool broken | ## 🐛 Bug
Cannot create a `torch.multiprocessing.pool.Pool` instance.
## To Reproduce
Steps to reproduce the behavior:
```
Python 3.6.3rc1+ (default, Feb 5 2019, 15:51:57)
Type 'copyright', 'credits' or 'license' for more information
IPython 6.5.0 -- An enhanced Interactive Python. Type '?' for help.
In [1]: from torch.multiprocessing.pool import Pool
In [2]: p = Pool(1)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-5-ce2ba693018c> in <module>()
----> 1 p = Pool(1)
/usr/local/fbcode/gcc-5-glibc-2.23/lib/python3.6/multiprocessing/pool.py in __init__(self, processes, initializer, initargs, maxtasksperchild, context)
154 maxtasksperchild=None, context=None):
155 self._ctx = context or get_context()
--> 156 self._setup_queues()
157 self._taskqueue = queue.Queue()
158 self._cache = {}
/mnt/xarfuse/uid-169887/8eb75d00-ns-4026531840/torch/multiprocessing/pool.py in _setup_queues(self)
21
22 def _setup_queues(self):
---> 23 self._inqueue = SimpleQueue()
24 self._outqueue = SimpleQueue()
25 self._quick_put = self._inqueue._writer.send
TypeError: __init__() missing 1 required keyword-only argument: 'ctx'
```
## Expected behavior
No exception. | module: multiprocessing,triaged,small | low | Critical |
408,801,453 | opencv | bug in videoio crash while releasing capture | - OpenCV => 4.0.1
- Operating System / Platform => Windows 7 64 Bit
- Compiler => Visual Studio 2017
When calling capture release, it sometimes crashes while the app is quitting, and the debugger stops in cap_dshow.cpp. The problem is that it happens only from time to time and is hard to reproduce. When the release is called without the app quitting, it is OK every time. Are some resources already deleted at that point?
```
videoDevice::~videoDevice(){
if(setupStarted){ DebugPrintOut("\nSETUP: Disconnecting device %i\n", myID); }
else{
if(sgCallback){
sgCallback->Release();
delete sgCallback;
}
return;
}
HRESULT HR = NOERROR;
//Stop the callback and free it
if( (sgCallback) && (pGrabber) )
{
pGrabber->SetCallback(NULL, 1);
DebugPrintOut("SETUP: freeing Grabber Callback\n");
--> sgCallback->Release();
```
| category: videoio,platform: win32 | low | Critical |
408,827,945 | rust | Consider aggregate types containing unconstructable types to also be unconstructable | Currently `Option<!>` is 0-sized, but `Option<(T, !)>` isn't, despite the fact that the `Some` variant of the latter is unconstructable. If this were fixed then you could implement `PhantomData` in userland as:
```rust
type PhantomData<T> = Option<(T, !)>;
```
instead of it being special-cased in the compiler. | A-type-system,T-lang,needs-rfc,T-types | low | Major |
408,829,242 | pytorch | Hardshrink for Sparse Tensors | ## 🚀 Feature
<!-- A clear and concise description of the feature proposal -->
RuntimeError: hardshrink is not implemented for type torch.sparse.FloatTensor
## Motivation
We are experimenting with sparse neural networks. We want to model the connections between neurons with sparse matrices. At times, when a weight is too small, we want to prune the connection, i.e. set the corresponding element in the sparse matrix to zero via a hardshrink.
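To make the desired behaviour concrete, here is a rough sketch of a possible workaround (not a proposed API; `sparse_hardshrink` is just a made-up helper name and assumes a COO tensor that can be coalesced):

```python
import torch
import torch.nn.functional as F

def sparse_hardshrink(t, lambd=0.5):
    """Apply hardshrink to the values of a sparse COO tensor and drop the zeros."""
    t = t.coalesce()
    values = F.hardshrink(t.values(), lambd)
    keep = values != 0
    # Rebuild the sparse tensor without the pruned connections.
    return torch.sparse_coo_tensor(t.indices()[:, keep], values[keep], t.size())

weights = torch.tensor([[0.0, 0.3], [1.2, 0.0]]).to_sparse()
print(sparse_hardshrink(weights, lambd=0.5).to_dense())
```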
cc @vincentqb | module: sparse,feature,triaged | low | Critical |
408,862,806 | godot | Allow BoneAttachments to be outside of skeleton | **Godot version:**
v3.1-beta3
**OS/device including version:**
ArchLinux rolling
**Issue description:**
It would be very convenient if BoneAttachment had an extra "skeleton" parameter to allow placing it outside of the skeleton. | enhancement,topic:core | low | Minor |
408,884,105 | flutter | `AnnotatedRegion<SystemUiOverlayStyle>` pattern docs are hard to find | We should point to `setSystemUIOverlayStyle`'s `AnnotatedRegion<SystemUiOverlayStyle>` pattern from the "See also:" section of the docs for `AnnotatedRegion`, 'SystemUiOverlayStyle', `SystemChrome`, `AppBar.backgroundColor`, `AppBar`, `AppBarTheme`, and anywhere else that makes sense. | framework,f: material design,d: api docs,c: proposal,P2,team-design,triaged-design | low | Major |
408,922,621 | kubernetes | Need documentation on expected permissions for mounted volumes in a Pod | <!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks!-->
**What happened**:
We ran Percona MySQL as a Pod on GKE, Minikube and a proprietary PMK platform. The percona image is built following Docker documentation best practices. It runs the mysqld process as the "mysql" user, because running as root is not advised. It sets the ownership of the mysql data directory and declares it a "volume", which indicates to the underlying platform that it should not manage the data in it. What we are seeing is that this volume gets mounted as "root:root" on the proprietary Kubernetes and as "mysql:root" on GKE/Minikube.
**What you expected to happen**:
All Kubernetes platforms should exhibit the same behavior. I'm looking for standard documentation on how permissions of mounted volumes are supposed to behave on Kubernetes, i.e. the correct behavior to adhere to.
**How to reproduce it (as minimally and precisely as possible)**:
Run percona mysql docker image (percona/mysql-5.7) as a Pod
**Anything else we need to know?**:
The proprietary Kubernetes uses Datera storage as the backend.
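Also worth noting: the only portable knob we are aware of for volume group ownership is `securityContext.fsGroup`. A minimal sketch is below; the gid 999 for the "mysql" user is an assumption about the image, and `mysql-data` is a placeholder claim name.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: percona
spec:
  securityContext:
    fsGroup: 999              # assumed gid of the "mysql" user in the image
  containers:
  - name: mysql
    image: percona/mysql-5.7
    volumeMounts:
    - name: data
      mountPath: /var/lib/mysql
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: mysql-data   # placeholder
```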
**Environment**:
- Kubernetes version (use `kubectl version`):
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-30T21:39:16Z", GoVersion:"go1.11.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:13:31Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
- Cloud provider or hardware configuration:
- OS (e.g. from /etc/os-release):
- Kernel (e.g. `uname -a`):
- Install tools:
- Others:
| kind/bug,kind/documentation,sig/storage,lifecycle/frozen | low | Critical |
409,004,728 | flutter | Get webview cookies (including HttpOnly) | This is crucial for authenticating with a WebView. The ability to get the cookies will let you do further requests with the http plugin. | c: new feature,p: webview,package,team-ecosystem,P2,triaged-ecosystem | low | Critical |
409,016,093 | flutter | Support wildcard/globs in `flutter: assets:` YAML | See https://github.com/flutter/flutter/issues/4890#issuecomment-407327749.
I'd like to write:
```yaml
flutter:
assets:
- assets/icons/**.png
```
... instead of ...
```yaml
flutter:
assets:
- assets/icons/country_flags/
- assets/icons/currency_codes/
- assets/icons/credit_card_vendors/
``` | tool,a: assets,P3,team-tool,triaged-tool | medium | Critical |
409,016,836 | flutter | Meta issue for the difficulty of using assets with Flutter | I wanted to try and highlight how several outstanding issues related to assets really hurt productivity:
* [ ] Including all the files in a directory is only supported for the _current_ package:
```yaml
flutter:
assets:
- assets/country_flags/ # OK
- packages/country_flag_icons/assets/ # Silently does not work
```
Related: https://github.com/flutter/flutter/issues/22944
* [ ] Flutter _packages_ (i.e. not apps) cannot declare their own assets:
```yaml
# A package I'd like to host on Pub or GitHub.
name: country_flag_icons
flutter:
assets:
- assets/country_flags/
```
```yaml
name: my_app
dependencies:
country_flag_icons: ^1.0.0
```
... will result in runtime asset errors. See above issue for why this is difficult to work around.
Related: https://github.com/flutter/flutter/issues/22921
* [ ] Asset inclusion only works by individual file or individual folder
```yaml
# Let me write...
flutter:
assets:
- assets/icons/**.png
```
Related: https://github.com/flutter/flutter/issues/27801
* [x] Assets added to a directory are not picked up during hot reload/restart:
... leading to the need to kill and restart the app from scratch when working with assets.
Related: https://github.com/flutter/flutter/issues/18896
* [x] Assets are not usable in (non-driver) tests:
... meaning that testing packages that provide assets is very difficult.
Related: https://github.com/flutter/flutter/issues/12999 | tool,a: assets,customer: crowd,P2,team-tool,triaged-tool | low | Critical |