id | repo | title | body | labels | priority | severity
---|---|---|---|---|---|---|
468,558,864 | opencv | OpenCV.js binding issue for perspectiveTransform using Point2fVector as input | I am trying to call `findHomography` and `perspectiveTransform` from OpenCV.js. The issue I came across was that the binding of these two functions is only specified for `Mat` inputs and not `std::vector<cv::Point2f>`, which I want to use. So I went ahead and created new bindings for these functions with these structs as inputs. However, `findHomography` seems to work while `perspectiveTransform` results in a runtime error. I am not really sure why, since I essentially did the same thing for both. Any help or suggestions would be much appreciated.
**Error Message** [LIVE EXAMPLE][1]
[![enter image description here][2]][2]
**Original Bindings**
Functions
```.cpp
Mat findHomography_wrapper(const cv::Mat& arg1, const cv::Mat& arg2, int arg3, double arg4, cv::Mat& arg5, const int arg6, const double arg7) {
return cv::findHomography(arg1, arg2, arg3, arg4, arg5, arg6, arg7);
}
void perspectiveTransform_wrapper(const cv::Mat& arg1, cv::Mat& arg2, const cv::Mat& arg3) {
return cv::perspectiveTransform(arg1, arg2, arg3);
}
```
Bindings
```.cpp
function("findHomography", select_overload<Mat(const cv::Mat&, const cv::Mat&, int, double, cv::Mat&, const int, const double)>(&Wrappers::findHomography_wrapper));
function("perspectiveTransform", select_overload<void(const cv::Mat&, cv::Mat&, const cv::Mat&)>(&Wrappers::perspectiveTransform_wrapper));
```
**Custom Bindings**
Functions
```.cpp
Mat findHomographyEasy(const std::vector<cv::Point2f>& arg1, const std::vector<cv::Point2f>& arg2, int arg3) {
return cv::findHomography(arg1,arg2,arg3);
}
void perspectiveTransformEasy(const std::vector<cv::Point2f>& arg1, std::vector<cv::Point2f>& arg2, const cv::Mat& arg3) {
cv::perspectiveTransform(arg1, arg2, arg3); // also tried with return here
}
```
Binding
```.cpp
function("findHomographyEasy", &binding_utils::findHomographyEasy); // WORKS PERFECTLY
function("perspectiveTransformEasy", &binding_utils::perspectiveTransformEasy); // CAUSES RUNTIME ERROR
```
**HTML / Javascript Usage**
```.html
<script src="./opencv.js" type="text/javascript"></script>
<script type="text/javascript">
cv['onRuntimeInitialized']=()=>{
match();
}
function match() {
// SOME STUFF...
var H = new cv.Mat();
H = cv.findHomographyEasy(obj, scene, cv.FM_RANSAC);
var obj_corners = new cv.Point2fVector();
obj_corners[0] = new cv.Point(0,0);
obj_corners[1] = new cv.Point(img1Raw.cols,0);
obj_corners[2] = new cv.Point(img1Raw.cols, img1Raw.rows);
obj_corners[3] = new cv.Point(0, img1Raw.rows);
console.log(img1Raw.cols); // 500
console.log(img1Raw.rows); // 363
var scene_corners = new cv.Point2fVector();
cv.perspectiveTransformEasy(obj_corners, scene_corners, H); // I know issue is here because I have surrounded this line with console.log
}
</script>
```
[1]: https://strmwr-cb94e.firebaseapp.com/
[2]: https://i.stack.imgur.com/R3O1o.png | category: javascript (js) | low | Critical |
468,570,427 | kubernetes | [Performance] Etcd get node-lease latency is higher than put latency and not within SLO limits | **What happened**:
When running GCE 5K scale tests (either kubemark or regular) we discovered that the get Lease latency is significantly higher than put Lease latency. This is counter-intuitive and, what's even worse, the 99th pctl of get lease latency is often not within our SLOs. Some graphs:
ApiServer Latency:

ETCD latency:


**What you expected to happen**:
The get latency should be lower than put latency on the etcd level, and the e2e get lease latency should be within scalability SLOs.
**How to reproduce it (as minimally and precisely as possible)**:
Run ci-kubernetes-e2e-gce-scale-performance
**Anything else we need to know?**:
SIG scalability has already reached out to the etcd team and asked about this.
/sig scalability
/assign
| kind/bug,sig/scalability,sig/api-machinery,lifecycle/frozen | medium | Major |
468,707,204 | TypeScript | Allow configuration of ts.server.maxFileSize | ## Search Terms
```
configure maxFileSize
largeFileReferenced
```
## Suggestion
Allow the maxFileSize property to be configurable.
## Use Cases
Currently, when trying to import large JSON files, the TypeScript Language Service fails to provide IntelliSense due to the maxFileSize limit being exceeded.
## Examples
```
[Trace - 10:29:07 AM] <semantic> Event received: largeFileReferenced (0).
Data: {
"file": "/home/user/project/src/data.json",
"fileSize": 6534662,
"maxFileSize": 4194304
}
```
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback | low | Minor |
468,717,301 | opencv | OpenCV.js assigned Mat object memory isn't collected by garbage collection |
##### System information (version)
- OpenCV => 4.1
- Operating System / Platform => MacOS
- Compiler => using opencv.js from the source of your website
##### Detailed description
If I push Mat objects to arrays, the objects linger in the array and can't be cleaned up by V8 garbage collection. This is unfortunate and causes memory overflow when used with larger objects. I've filed an issue about this with the Google team, which you can read, with examples to reproduce, here: https://bugs.chromium.org/p/chromium/issues/detail?id=982797
##### Steps to reproduce
https://bugs.chromium.org/p/chromium/issues/detail?id=982797
| RFC,category: javascript (js) | low | Critical |
468,733,743 | terminal | TitlebarControl should be a Template | The `TitlebarControl` introduced in #1948 should be a XAML Template, so we can style it easier. Right not the control is just straight up defined in XAML, but it should do the thing most controls do, where they're a ResourceDictionary with a Template.
| Product-Terminal,Issue-Task,Area-CodeHealth | low | Minor |
468,741,233 | terminal | Stop rendering when the terminal has been minimized/control has been hidden | Ported from MSFT:21315817
We shouldn't render the XAML island when we're minimized.
I don't remember any more of the context on this one. It _was_ assigned to @DHowett-MSFT, so he might remember. | Help Wanted,Area-UserInterface,Product-Terminal,Issue-Task | low | Minor |
468,743,436 | rust | Type inference breaks down in recursive call | This is as simple as I could make it. There is an error when I try to call `do_stuff` recursively, without explicitly giving the type of `T`:
```rust
fn do_stuff<T>(t: T)
where
u8: From<T>,
{
// This is an error:
do_stuff(1u8);
// This is ok:
do_stuff::<u8>(1u8);
}
fn main() {
// This is ok:
do_stuff(1u8);
}
```
```
error[E0308]: mismatched types
--> src/main.rs:7:14
|
7 | do_stuff(1u8);
| ^^^ expected type parameter, found u8
|
= note: expected type `T`
found type `u8`
```
Interestingly, when I switch around the `From` relationship, it works:
```rust
fn do_stuff<T>(t: T)
where
T: Into<u8>,
{
// This is ok now!:
do_stuff(1u8);
// This is ok:
do_stuff::<u8>(1u8);
}
fn main() {
// This is ok:
do_stuff(1u8);
}
```
----
```
$ rustc --version
rustc 1.36.0 (a53f9df32 2019-07-03)
``` | A-type-system,T-compiler,A-inference,C-bug,T-types | low | Critical |
468,748,416 | pytorch | JIT trace parameter sharing error if Module attributes happen to be the same | ## 🐛 Bug
You'll get an error if two attributes of a Module have the same values when using `torch.jit.trace`:
```
~/miniconda3/lib/python3.6/site-packages/torch/jit/__init__.py in check_unique(param)
1459 def check_unique(param):
1460 if param in id_set:
-> 1461 raise ValueError("TracedModules don't support parameter sharing between modules")
1462 id_set.add(param)
1463
ValueError: TracedModules don't support parameter sharing between modules
```
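The `param in id_set` check quoted in the traceback uses set membership, which goes through `__hash__`/`__eq__`. A minimal pure-Python sketch of why a value-based membership test conflates equal-valued parameters with genuinely shared ones, and how an identity-based test distinguishes them (this is an illustration only, not the actual `torch.jit` code; `Param` and both `check_unique_*` names are hypothetical):

```python
class Param:
    """Stand-in for a parameter that compares equal by value."""
    def __init__(self, value):
        self.value = value

    def __eq__(self, other):
        return isinstance(other, Param) and self.value == other.value

    def __hash__(self):
        return hash(self.value)

def check_unique_by_value(params):
    # Mirrors the quoted check: `p in seen` uses __hash__/__eq__,
    # so two distinct parameters holding equal values collide.
    seen = set()
    for p in params:
        if p in seen:
            raise ValueError("TracedModules don't support parameter sharing between modules")
        seen.add(p)

def check_unique_by_identity(params):
    # Comparing by id() only flags objects that are literally the same,
    # i.e. real parameter sharing.
    seen = set()
    for p in params:
        if id(p) in seen:
            raise ValueError("TracedModules don't support parameter sharing between modules")
        seen.add(id(p))
```

With two distinct-but-equal params, the value-based check raises while the identity-based one passes; both raise when the very same object appears twice.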
## To Reproduce
```python
import torch
import torch.nn as nn

class SimpleModule(nn.Module):
def __init__(self, size=(784, 10)):
super().__init__()
self.weight = nn.Parameter(torch.Tensor(*size))
self.logits = self.weight if size[0] == 784 else self.weight + 0.01
nn.init.xavier_normal_(self.weight)
def forward(self, x):
x = self.weight @ x
return x
```
```python
# Works
mod = SimpleModule(size=(785,10))
torch.jit.trace(mod, torch.randn((10,10)))
# Doesn't work
mod = SimpleModule(size=(784,10))
torch.jit.trace(mod, torch.randn((10,10)))
```
## Expected behavior
I would expect it to be allowed to have complicated logic in a Module, including the case where two attributes happen to be the same.
## Environment
- PyTorch Version (e.g., 1.0): 1.1.0
- OS (e.g., Linux): Linux
- How you installed PyTorch (`conda`, `pip`, source): conda
- Python version: 3.6.8
## Additional context
The issue is that the values of the parameters are checked to be equal in order to indicate parameter sharing, however, that is not always the case.
See also the related issue to improve the error message: https://github.com/pytorch/pytorch/issues/22677 | oncall: jit,triaged | low | Critical |
468,751,772 | TypeScript | Wildcard ambient modules declaration override rules | ## Search Terms
wildcard ambient module override
## Suggestion
I could not find written anywhere how wildcard ambient module declaration precedence work in case of overlaps.
In the [original pull request](https://github.com/microsoft/TypeScript/pull/8939/files#diff-08a3cc4f1f9a51dbb468c2810f5229d3R575) "prefix length" was used as the best-fit criterion, but I could not locate where that function lives in the current code.
Also, prefix length won't help in the case of a post-fixed `*`, but maybe that was just naming and it actually meant "longer match".
This feature should be better documented in [its handbook page](https://www.typescriptlang.org/docs/handbook/modules.html#wildcard-module-declarations) or, if it has been dropped after the first implementation, it would be useful to know why.
The most [closely related question](https://stackoverflow.com/questions/52373658/angular-typescript-wildcard-module-declaration) I found on StackOverflow doesn't have an answer.
## Use Cases
I have a setup with Vue (Quasar actually) + TypeScript + Jest, using SFC.
I have to mount Vue components into Jest tests (which are written into `.ts` files), but importing SFC won't work (they are not TS files).
Using a shim for all Vue files (the official solution) partially solves this problem, because at least you get typings for the general Vue instance, but you won't get typings for *that* particular SFC (data, props, computed, etc).
I currently separated the TS script from the SFC to be able to get the typings by importing from the two different files. Now I'm trying to define shims for the component `.vue` to work by binding its name with a wildcard to its TS counterpart.
Unfortunately, when I import `./demo/QBtn-demo.vue`, I still get the `*.vue` shim instead of the specific component one, and I can't find it documented anywhere how to force the override.
If I remove Vue shim, it works, but I'm forced to make a personal shim for *every* component.
I know it's possible by using triple slash references, but that's not the point of this issue.
Current workaround is to import both the `.vue` SFC and the TS script and then explicitly cast the SFC to the type of the specific instance.
## Examples
shims-vue.d.ts
```ts
declare module '*.vue' {
import Vue from 'vue';
export default Vue;
}
```
component.d.ts
```ts
declare module '*/QBtn-demo.vue' { // <= works when general Vue shim isn't present
import QBtnDemo from 'test/jest/__tests__/demo/QBtn-demo'; // <= this is the TS file
export default QBtnDemo;
}
```
QBtn-demo.vue
```vue
<script lang="ts" src="./QBtn-demo.ts"></script>
<template>
<div>
<p class="textContent">{{ input }}</p>
<span>{{ counter }}</span>
<q-btn id="mybutton" @click="increment()"></q-btn>
</div>
</template>
```
QBtn-demo.ts
```ts
import Vue from 'vue';
export default Vue.extend({
name: 'QBUTTON',
data: function(): { counter: number; input: string } {
return {
counter: 0,
input: 'rocket muffin',
};
},
methods: {
increment(): void {
this.counter++;
},
},
});
```
app.spec.ts
```ts
import { createLocalVue, mount } from '@vue/test-utils';
import { Quasar } from 'quasar';
import { VueConstructor } from 'vue';
import QBtnDemo from './demo/QBtn-demo.vue'; // <= Gets types as 'Vue' instead of 'QBtnDemo'
describe('Mount Quasar', () => {
const localVue = createLocalVue();
localVue.use(Quasar);
const wrapper = mount(QBtnDemo, { localVue });
const vm = wrapper.vm;
it('has a created hook', () => {
expect(typeof vm.increment).toBe('function'); // <= TS error: could not find 'increment'
});
it('sets the correct default data', () => {
expect(typeof vm.counter).toBe('number'); // <= TS error: could not find 'counter'
const defaultData = vm.$data;
expect(defaultData.counter).toBe(0);
});
it('correctly updates data when button is pressed', () => {
const button = wrapper.find('button');
button.trigger('click');
expect(vm.counter).toBe(1); // <= TS error: could not find 'counter'
});
});
```
app.spec.ts with casting workaround
```ts
import { createLocalVue, mount } from '@vue/test-utils';
import { Quasar } from 'quasar';
import { VueConstructor } from 'vue';
import QBtnDemoComponent from './demo/QBtn-demo.vue';
import QBtnDemo from './demo/QBtn-demo';
describe('Mount Quasar', () => {
const localVue = createLocalVue();
localVue.use(Quasar);
const wrapper = mount(QBtnDemoComponent as typeof QBtnDemo, { localVue });
const vm = wrapper.vm;
it('has a created hook', () => {
expect(typeof vm.increment).toBe('function'); // <= Inferred correctly
});
it('sets the correct default data', () => {
expect(typeof vm.counter).toBe('number'); // <= Inferred correctly
const defaultData = vm.$data;
expect(defaultData.counter).toBe(0);
});
it('correctly updates data when button is pressed', () => {
const button = wrapper.find('button');
button.trigger('click');
expect(vm.counter).toBe(1); // <= Inferred correctly
});
});
```
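The override rule being asked about — that `*/QBtn-demo.vue` should win over the generic `*.vue` shim — can be sketched as a "longest literal match" resolver. This is an illustrative sketch in Python, not TypeScript's actual module-resolution algorithm, and `best_wildcard_match` is a hypothetical name; it scores each pattern by the combined length of its non-wildcard prefix and suffix, matching the "longer match" reading discussed above:

```python
def best_wildcard_match(patterns, path):
    """Pick the ambient-module pattern with the longest literal match."""
    best, best_score = None, -1
    for pattern in patterns:
        prefix, star, suffix = pattern.partition('*')
        if not star:
            # Exact (non-wildcard) declarations always outrank wildcards.
            if pattern == path and len(pattern) > best_score:
                best, best_score = pattern, len(pattern)
            continue
        if (path.startswith(prefix) and path.endswith(suffix)
                and len(path) >= len(prefix) + len(suffix)):
            # More literal characters matched = more specific pattern.
            score = len(prefix) + len(suffix)
            if score > best_score:
                best, best_score = pattern, score
    return best
```

For example, `best_wildcard_match(['*.vue', '*/QBtn-demo.vue'], './demo/QBtn-demo.vue')` returns `'*/QBtn-demo.vue'`, which is the override behavior this issue asks to have specified.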
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Needs Investigation | low | Critical |
468,785,039 | flutter | Can't swipe to dismiss scrollable Bottom Sheet |
Per @dnfield https://github.com/flutter/flutter/issues/31739#issuecomment-511459403
## Use case
See the [Crane](https://material.io/design/material-studies/crane.html#layout) Material sample and [Google I/O 2019](https://play.google.com/store/apps/details?id=com.google.samples.apps.iosched&hl=en_US)
These both feature a bottom sheet that slides up (in the case of I/O, you select "Events" and then hit the "Filter" FAB) and scrolls.
With the I/O app, if you continue to swipe down when scrolled to the top, it will start to drag and then dismiss when a threshold is reached.
If the threshold is not reached (or you swipe upwards) then the bottom sheet will not be dismissed and it'll rebound to the top of the screen.
## Proposal
I'd think this should be a part of `DraggableScrollableSheet` as mentioned in #31739, but maybe it makes sense to bake the logic into the Bottom Sheet itself?
I've tried both modal and non-modal bottom sheets with and without the `DraggableScrollableSheet`, and can't seem to make this work as is.
The current issue with the `DraggableScrollableSheet` is that you can't have the sheet be maximized all the time. (Since, as I mentioned in https://github.com/flutter/flutter/issues/31739#issuecomment-511241746, any swipe will immediately dismiss the `BottomSheet` rather than scroll the `CustomScrollView`)
| c: new feature,framework,f: material design,f: scrolling,f: gestures,customer: crowd,c: proposal,P2,team-design,triaged-design | low | Critical |
468,789,503 | go | x/oauth2/clientcredentials: context values are not passed to oauth2 requests that retrieve tokens |
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.13beta1 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
It does reproduce with the latest version of golang.org/x/oauth2. I guess this is an issue about this subrepository.
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/yann/.cache/go-build"
GOENV="/home/yann/.config/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GONOPROXY="github.com/ulule/*"
GONOSUMDB="github.com/ulule/*"
GOOS="linux"
GOPATH="/home/yann/go"
GOPRIVATE="github.com/ulule/*"
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/home/yann/sdk/go1.13beta1"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/home/yann/sdk/go1.13beta1/pkg/tool/linux_amd64"
GCCGO="gccgo"
AR="ar"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD="/home/yann/z/facebook/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build659375516=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
https://play.golang.org/p/iOdgycJZA4f
This example requires facebook credentials, but the bug does reproduce with any credential source.
### What did you expect to see?
There are two requests, one for retrieving an oauth2 token, and the other to get the actual URL.
I expected the two requests to see the same context values, which would generate an output like
```
context value
context value
```
### What did you see instead?
```
<nil>
context value
```
The context value is not passed to the first request. | help wanted,NeedsFix | low | Critical |
468,790,746 | TypeScript | Feature Request: Make ES module exports conform to an interface using triple slash directive | ## Search Terms
ESM, ES module, EcmaScript Module, Interface, Exports, triple slash directive, module
## Suggestion
I want to be able to enforce that an ESM's exports conform to an interface.
<details>
<summary>Here's an ESM that exports some values:</summary>

</details>
This ESM is going to be consumed by some system; __the ESM acts as configuration__ for that system.
<details>
<summary>The system could expose interfaces/types for all possible exports, and users apply them manually:</summary>

</details>
But this is manual work, and thus error-prone.
**I propose to be able to do:**

*The exact syntax TBD.*
Adding
```
/// <exports name="ConfigInterface" from="system-that-needs-configuration" />
```
will tell typescript about which interface the ESM exports should adhere to and where to find it.
If this is implemented in a triple slash directive this can also work for non-ts files.
## Use Cases
I want users of [storybook](https://github.com/storybookjs/storybook) to be able to configure it with ease using modern code.
CommonJS isn't tree-shake-able, which is important to us. There's a pretty detailed RFC for this feature for storybook here: https://docs.google.com/document/d/15aAALZBl0GTBEKgJN219ebzJ8LUJf2TVJ3hQdkNdLvQ/edit#
Tools like babel, eslint & webpack currently are or can be configured using CommonJS modules; as the ecosystem for ESM is improving, being able to add an interface to an ESM becomes really useful.
Tools could start using ESM for configuration more, which has clear benefits over CommonJS.
## Examples
Here's a config file that is annotated with the triple slash directive:

Here's the interface that's being referred to:

This should warn the user that `logLevel = 'any'` is not a valid value.
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback | low | Critical |
468,807,845 | go | cmd/doc: show documentation for explicitly-requested identifiers regardless of the `-u` flag | ### What version of Go are you using (`go version`)?
```
~/go/src$ go version
go version devel +87bf0b5c51 Tue Jul 16 13:17:46 2019 -0400 linux/amd64
```
### What did you do?
```
~/go/src$ go doc go/build.getToolDir
```
### What did you expect to see?
```
~/go/src$ go doc go/build.getToolDir
package build // import "go/build"
func getToolDir() string
getToolDir returns the default value of ToolDir.
```
### What did you see instead?
```
~/go/src$ go doc go/build.getToolDir
package build // import "go/build"
doc: no symbol getToolDir in package go/build
exit status 1
~/go/src$ go doc -u go/build.getToolDir
package build // import "go/build"
func getToolDir() string
getToolDir returns the default value of ToolDir.
```
----
The `doc` command by default hides all unexported identifiers, _even those explicitly requested by the user._ To coax it to display the requested result, you have to pass the `-u` flag, which has the secondary (and often unwanted) effect of displaying unexported fields and methods on the requested identifier.
Moreover, that behavior is inconsistent with the behavior for `internal` packages, for which `go doc` will happily display documentation even without the `-u` flag.
Instead, the `-u` flag should control *only* the behavior for nested declarations — variables, constants, types, functions, fields, and/or methods _associated with_ the requested identifier — not the requested identifier itself.
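The proposed rule can be sketched as a small predicate (an illustration in Python with hypothetical helper names, not cmd/doc's actual Go implementation): the `-u` flag gates only nested, unexported declarations, while an identifier the user asked for by name is always documented.

```python
def is_exported(name):
    # Go convention: exported identifiers start with an upper-case letter.
    return name[:1].isupper()

def should_document(name, explicitly_requested, show_unexported):
    # An explicitly requested identifier is always shown, regardless of -u.
    if explicitly_requested:
        return True
    # Nested declarations honor -u (show_unexported) as today.
    return show_unexported or is_exported(name)
```

Under this rule, `go doc go/build.getToolDir` would print the function's documentation without `-u`, while unexported fields and methods nested under a requested type would still stay hidden unless `-u` is passed.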
CC @robpike @mvdan @ianthehat | help wanted,NeedsFix | low | Minor |
468,809,691 | go | cmd/doc: show types for constants and variables that have initializers | Currently, if you ask `go doc` for an exported identifier, and it happens to be declared as a `var` or `const` with the type inferred from the initializer, the type of the variable does not appear in the output.
It is doubly frustrating if the initializer happens to refer to unexported identifiers, since `go doc` requires an extra flag before it will document those (#33133).
### What version of Go are you using (`go version`)?
```
~/go/src$ go version
go version devel +87bf0b5c51 Tue Jul 16 13:17:46 2019 -0400 linux/amd64
```
### What did you do?
```
~/go/src$ go doc go/build.ToolDir
```
### What did you expect to see?
The documentation for *and type of* the `go/build.ToolDir` variable.
### What did you see instead?
```
~/go/src$ go doc go/build.ToolDir
package build // import "go/build"
var ToolDir = getToolDir()
ToolDir is the directory containing build tools.
```
No indication of whether ToolDir is a `string`, a `[]byte`, or something else entirely. It isn't mentioned in the doc comment, and it shouldn't _need_ to be mentioned in the doc comment because the compiler already knows what it is. | NeedsInvestigation,FeatureRequest | low | Minor |
468,826,595 | pytorch | Unify tensor shape formatting in shape checks | I made some mistake with tensor shape for nn.Conv1d and expectedly got an error:
```
File "/miniconda/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 198, in forward
self.padding, self.dilation, self.groups)
RuntimeError: Expected 3-dimensional input for 3-dimensional weight 768 161 13 140185417077232, but got 4-dimensional input of size [40, 1, 161, 1923] instead
```
https://github.com/pytorch/pytorch/issues/19947 already reports overflown dimension of `140185417077232`
Another minor issue is the missing square brackets and commas in the first weight tensor shape. Some unification would make it clearer to the user that it's a tensor shape in question.
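The unification being requested could be as simple as routing every shape in an error message through one formatter. A sketch (hypothetical helper names, not PyTorch's actual error-building code) of what that would look like:

```python
def format_shape(dims):
    """Render a tensor shape consistently, e.g. [40, 1, 161, 1923]."""
    return '[' + ', '.join(str(int(d)) for d in dims) + ']'

def conv_shape_error(expected_ndim, weight_shape, input_shape):
    # Both shapes go through the same formatter, so the weight shape
    # reads "[768, 161, 13]" rather than the bare "768 161 13".
    return (
        f"Expected {expected_ndim}-dimensional input for "
        f"{len(weight_shape)}-dimensional weight {format_shape(weight_shape)}, "
        f"but got {len(input_shape)}-dimensional input of size "
        f"{format_shape(input_shape)} instead"
    )
```

With this, both tensor shapes in the message above would use the same bracketed, comma-separated notation.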
Torch version is `1.2.0.dev20190607` | module: error checking,module: convolution,triaged,enhancement | low | Critical |
468,827,059 | flutter | Shell unit-tests that assert subprocess death don't seem to work on Windows. | These have been disabled in the test harness for now. | engine,platform-windows,P2,team-engine,triaged-engine | low | Minor |
468,833,132 | go | proposal: issues: distinguish "blocks beta/rc" from "blocks final release" | When we (@golang/osp-team) triage the issues labeled with [`release-blocker`](https://github.com/golang/go/labels/release-blocker) prior to a release, we often end up sorting them into a finer granularity:
* “blocking the next beta” (issues that will need broad testing before the release),
* “blocking the release candidate” (known regressions with limited impact and clear testing steps),
* “blocking the final release” (documentation, certain kinds of test flakiness).
We end up repeating that classification for each pre-release build, and we spend time discussing the classifications when they could often be made by smaller numbers of people ahead of time.
Furthermore, it's probably useful for folks in the community to be able to see that classification, so that they can set expectations appropriately (and so that they can point out issues that might need more testing than we thought).
----
I propose that we do one of the following:
a. Create milestones for each pre-release (`Go1.13-beta.1`, `Go1.13-rc.1`, and so on).
* These pre-releases are conceptually before the main milestone (`Go1.13`), so a `release-blocker` on `Go1.13` would indicate the “blocking the final release” category.
b. Or, create labels for each pre-release (`next-beta`, `next-rc`), with the expectation of zero `next-beta` issues open when we cut a beta release and zero `next-rc` issues open for a given milestone when we cut the corresponding pre-release for that milestone.
* We would need to decide whether to have GopherBot ensure that `next-*` issues are also labeled `release-blocker`, or have GopherBot remove the `release-blocker` label as redundant.
| Proposal,Proposal-Hold | low | Minor |
468,837,089 | flutter | Benchmark targets are not run on Windows. | engine,platform-windows,P2,team-engine,triaged-engine | low | Minor |
|
468,842,910 | pytorch | [data loader] Graceful data loader threads exit on KeyboardInterrupt | During training with PyTorch 1.2.0.dev20190607 I pressed Ctrl+C and got the following:
```
KeyboardInterrupt
Traceback (most recent call last):
File "/miniconda/lib/python3.7/multiprocessing/queues.py", line 242, in _feed
send_bytes(obj)
Fatal Python error: could not acquire lock for <_io.BufferedWriter name='<stderr>'> at interpreter shutdown, possibly due to daemon threads
Thread 0x00007f49347f8700 (most recent call first):
Thread 0x00007f49357fa700 (most recent call first):
Thread 0x00007f4934ff9700 (most recent call first):
Thread 0x00007f494f7fe700 (most recent call first):
File "/miniconda/lib/python3.7/traceback.py", line 105 in print_exception
File "/miniconda/lib/python3.7/traceback.py", line 163 in print_exc
File "/miniconda/lib/python3.7/multiprocessing/queues.py", line 273 in _on_queue_feeder_error
File "/miniconda/lib/python3.7/multiprocessing/queues.py", line 264 in _feed
File "/miniconda/lib/python3.7/threading.py", line 865 in run
File "/miniconda/lib/python3.7/threading.py", line 917 in _bootstrap_inner
File "/miniconda/lib/python3.7/threading.py", line 885 in _bootstrap
Thread 0x00007f49c70e5700 (most recent call first):
File "/miniconda/lib/python3.7/threading.py", line 300 in wait
File "/miniconda/lib/python3.7/queue.py", line 179 in get
File "/miniconda/lib/python3.7/site-packages/tensorboard/summary/writer/event_file_writer.py", line 204 in run
File "/miniconda/lib/python3.7/threading.py", line 917 in _bootstrap_inner
File "/miniconda/lib/python3.7/threading.py", line 885 in _bootstrap
Current thread 0x00007f49df943700 (most recent call first):
train.sh: line 1: 4022 Aborted (core dumped) python3 train.py
```
It would be great if data loader threads could die less verbosely (and mysteriously) on KeyboardInterrupt. | needs reproduction,module: dataloader,triaged | low | Critical |
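One common pattern for quieter shutdown (a hedged sketch, not the fix PyTorch ultimately shipped; `worker_init` and `run_epochs` are hypothetical names) is to have worker processes ignore SIGINT, so only the main process reacts to Ctrl+C and can tear the loader down deliberately instead of every worker dumping its own traceback:

```python
import signal

def worker_init():
    # Ctrl+C sends SIGINT to the whole process group; if workers ignore
    # it, only the main process raises KeyboardInterrupt and the workers
    # can be shut down in an orderly fashion.
    signal.signal(signal.SIGINT, signal.SIG_IGN)

def run_epochs(loader_factory, train_step, epochs):
    # loader_factory builds an iterable of batches; train_step consumes one.
    for _ in range(epochs):
        loader = loader_factory()
        try:
            for batch in loader:
                train_step(batch)
        except KeyboardInterrupt:
            print("interrupted, shutting down workers...")
            return
        finally:
            # Drop the iterator so its workers can be reaped before
            # interpreter shutdown tears down daemon threads.
            del loader
```

In a DataLoader-style setting, `worker_init` would be passed as each worker's initializer, while the `try`/`except`/`finally` lives in the main training loop.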
468,844,731 | rust | Link errors when compiling for i386 with +soft-float | It appears that since https://github.com/rust-lang/rust/pull/61408, rust now requires `fminf` and `fmaxf` functions to be present when compiling for i386 with +soft-float (using a custom target JSON). However, compiler-builtins [only expose those functions for some targets], which don't include x86.
Maybe compiler-builtins should include a `target-feature = "soft-float"` in the list of conditions to enable the math module?
[only expose those functions for some targets]: https://github.com/rust-lang-nursery/compiler-builtins/blob/master/src/lib.rs#L56 | A-linkage,O-x86_64,T-compiler,C-bug,O-x86_32 | low | Critical |
468,845,877 | electron | Starting devtools with activate: false makes it so that clicking on the main window does not bring it to the front | ### Preflight Checklist
* [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/master/CONTRIBUTING.md) for this project.
* [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/master/CODE_OF_CONDUCT.md) that this project adheres to.
* [x] I have searched the issue tracker for an issue that matches the one I want to file, without success.
### Issue Details
* **Electron Version:** 5.0.6
* **Operating System:** Windows 10
### Expected Behavior
When I open devtools with `activate: false`, and the devtools appears in front of the main window, I expect that clicking on the main window will focus it and bring it to the front.
### Actual Behavior
When doing the above, the main window does not get sent to the front, but it does get focus.
### To Reproduce
main.js:
```js
const { app, BrowserWindow } = require('electron')
function createWindow() {
const mainWindow = new BrowserWindow();
mainWindow.webContents.openDevTools({ mode: "detach", activate: false });
}
app.on('ready', createWindow)
```
Run `npm start`. Make sure that devtools is at least partially covering the main window. Notice that clicking on the main window that's in the back does not bring it to the front.
Observations:
1. The main window does get focus, and you can successfully open menu items
2. Clicking on the devtools and then back to the main window will bring the main window to the front
3. You can resize the main window and still observe the issue (resizing doesn't work around it) | platform/windows,bug :beetle:,5-0-x,7-1-x,10-x-y | medium | Critical |
468,859,007 | flutter | Sliver garbage collect does not work in Sizedbox | <!-- Thank you for using Flutter!
If you are looking for support, please check out our documentation
or consider asking a question on Stack Overflow:
* https://flutter.dev/
* https://api.flutter.dev/
* https://stackoverflow.com/questions/tagged/flutter?sort=frequent
If you have found a bug or if our documentation doesn't have an answer
to what you're looking for, then fill our the template below. Please read
our guide to filing a bug first: https://flutter.dev/docs/resources/bug-reports
-->
## Steps to Reproduce
```dart
import 'package:flutter/material.dart';
import 'package:flutter/rendering.dart';
void main() => runApp(MyApp());
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'List playground',
home: TestPage(),
);
}
}
class TestPage extends StatefulWidget {
@override
State<StatefulWidget> createState() => _TestPageState();
}
class _TestPageState extends State<TestPage> {
List<String> items = ['1', '2', '3', '4', '5'];
@override
Widget build(BuildContext context) {
return Scaffold(
body: Center(
child: SizedBox(
width: 44.4,
height: 30.0,
child: Directionality(
textDirection: TextDirection.ltr,
child: CustomScrollView(
slivers: <Widget>[
SliverFixedExtentList(
itemExtent: 22.2,
delegate: SliverChildBuilderDelegate(
(BuildContext context, int index) {
return TextWidget(
items[index],
);
},
childCount : items.length,
),
),
],
),
),
),
),
);
}
}
class TextWidget extends StatefulWidget {
const TextWidget(this.data);
final String data;
@override
TextWidgetState createState() => TextWidgetState();
}
class TextWidgetState extends State<TextWidget>{
@override
void dispose() {
print('disposed ${widget.data}');
super.dispose();
}
@override
Widget build(BuildContext context) {
return Text(widget.data);
}
}
```
When scrolling the sliver in the middle, it should dispose the elements that are no longer in view. Actually, it doesn't dispose them.
It turns out that RenderSliverFixedExtentBoxAdaptor.constraints.scrollOffset is always zero no matter how far you drag the scroll view, so it does not do the garbage collection to remove the widgets that are not in view.
| framework,f: scrolling,has reproducible steps,P2,found in release: 3.3,found in release: 3.6,team-framework,triaged-framework | low | Critical |
468,905,662 | flutter | Implement alternatives to long press on desktop | For `ListTile` and `ReorderableListView`, there are handlers that support long press on mobile platforms. These don't work well on desktop, so they need to support other affordances instead.
These affordances will need to be designed and implemented to fit with the Material Design spec. | framework,f: material design,c: proposal,a: desktop,P3,team-framework,triaged-framework | low | Major |
468,911,113 | go | cmd/go: report an error for cmd (and std?) modules outside $GOROOT | Trying to work on the source code of `go doc`, but every time I try to compile a `./doc` binary it uses other source code. It doesn't look in the current directory I am in; I have to force it with `go build main.go`.
```
gert@gert ~/Desktop/go/src/cmd/doc:master> go build
gert@gert ~/Desktop/go/src/cmd/doc:master> ls
dirs.go doc doc_test.go main.go pkg.go testdata
```
I can see a doc binary getting created, but it's not built from the `main.go` in the current directory. I deliberately put a syntax error in main.go to verify:
```
gert@gert ~/Desktop/go/src/cmd/doc:master> go build main.go
# command-line-arguments
./main.go:78:2: syntax error: unexpected --, expecting }
```
details:
```
go version devel +f938b9b33b Wed Jun 26 20:26:48 2019 +0000 darwin/amd64
GOARCH="amd64"
GOBIN="/Users/gert/bin"
GOCACHE="/Users/gert/Library/Caches/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/gert/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/go"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD="/Users/gert/Desktop/go/src/cmd/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/dv/8tlwvjr91zjdyq4rk14lkkfm0000gn/T/go-build645754300=/tmp/go-build -gno-record-gcc-switches -fno-common"
```
The workaround for me was to match `GOROOT` with the repo: `git clone` everything into `/usr/local/go` and bootstrap using a `/usr/local/go1`. After that I could work on the `go doc` source code.
maybe related to #32724
| NeedsInvestigation,modules | low | Critical |
468,922,044 | TypeScript | The dom.iterable lib contains many interfaces that should also be in webworker |
Many of the interfaces defined in [dom.iterable](https://github.com/microsoft/TypeScript/blob/d4765523f086cfed152b094c3b5db2b246c13233/src/lib/dom.iterable.d.ts) should also be available to web workers. These include:
- `Headers`
- `FormData`
- `URLSearchParams`
And since the `dom.iterable` library does not play well with the `webworker` library, I have to define these interfaces in my own separate library (instead of just including built-in libraries).
The `webworker` library should either _also_ have an "iterable" variant, or these interfaces should be abstracted to a higher level.
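The per-project workaround mentioned above can be sketched as an ambient declaration file merged into the worker globals. The file name and the exact member signatures here are illustrative, not taken from the built-in libs:

```ts
// lib.webworker.iterable.d.ts (hypothetical local lib file)
// Merges iterable members into the webworker `Headers` interface;
// `FormData` and `URLSearchParams` would get analogous declarations.
interface Headers {
  [Symbol.iterator](): IterableIterator<[string, string]>;
  entries(): IterableIterator<[string, string]>;
  keys(): IterableIterator<string>;
  values(): IterableIterator<string>;
}
```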
**TypeScript Version:** 3.5.3, master
**Search Terms:**
dom iterable worker
**Code**
```ts
// index.ts
const h = new Headers();
console.log([...h.entries()]);
```
```sh
tsc --target esnext --lib esnext,webworker index.ts
```
**Expected behavior:**
This should compile fine (as it does when using `--lib esnext,dom,dom.iterable`).
**Actual behavior:**
```txt
index.ts:3:19 - error TS2339: Property 'entries' does not exist on type 'Headers'.
3 console.log([...h.entries()]);
~~~~~~~
Found 1 error.
```
**Playground Link:**
Can't show this in the playground as I can't specify `--lib` options.
**Related Issues:**
Somewhat related to: https://github.com/microsoft/TypeScript/issues/20595
| Bug,Rescheduled | low | Critical |
468,930,988 | vscode | If pasting over text containing TextEditorDecorations, they are retained | Version: 1.36.1 (user setup)
Commit: 2213894ea0415ee8c85c5eea0d0ff81ecc191529
Date: 2019-07-08T22:59:35.033Z
Electron: 4.2.5
Chrome: 69.0.3497.128
Node.js: 10.11.0
V8: 6.9.427.31-electron.0
OS: Windows_NT x64 10.0.18362
If pasting over text with TextEditorDecorations applied, they are applied to the new text.

Expected: Text replaced by pasting should have decorations cleared/truncated.
| feature-request,semantic-tokens | low | Minor |
468,948,886 | pytorch | Support serializing IValue to bytes (and deserialize from bytes) | ## 🚀 Feature
Right now, torch.save only supports saving to a file, not serializing to bytes.
Similarly, it would be great to be able to deserialize from bytes too. This should work bidirectionally between Python and C++.
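On the Python side, the requested API shape can be sketched with stdlib `pickle` standing in for the eventual torch/IValue serializer. The function names here are hypothetical, purely to illustrate the bytes round-trip:

```python
import pickle


def to_bytes(obj) -> bytes:
    # Hypothetical API: serialize an in-memory object straight to bytes,
    # with no intermediate file, so the blob can be sent over the network.
    return pickle.dumps(obj)


def from_bytes(data: bytes):
    # Hypothetical API: reconstruct the object from received bytes.
    return pickle.loads(data)


payload = {"weights": [1.0, 2.0, 3.0], "step": 7}
blob = to_bytes(payload)
restored = from_bytes(blob)
```

The same round-trip would need a C++ counterpart operating on `IValue`, so that blobs produced on either side can be read on the other.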
## Motivation
We want to serialize IValue to bytes to send over the network.
| oncall: jit,triaged | low | Major |
468,956,623 | flutter | Flutter doesn't work when installed to icloud sync folder |
It seems like I've followed all the instructions on https://flutter.dev/docs/get-started/install/macos plus a YouTube tutorial, but I run into an error. I'm not sure what it is; I'm new to Flutter and want to explore it.
## Steps to Reproduce
<!-- Please tell us exactly how to reproduce the problem you are running into. -->
1. flutter create my_app
2. cd my_app
3. flutter run
## Logs
```
Launching lib/main.dart on iPhone Xʀ in debug mode...
Compiler message:
Error: SDK summary not found:
file:///Users/carla/Documents/flutter/bin/cache/artifacts/engine/common/flutter_
patched_sdk/platform_strong.dill.
Error: Error when reading
'file:///Users/carla/Documents/flutter/bin/cache/artifacts/engine/common/flutter
_patched_sdk/platform_strong.dill': No such file or directory
org-dartlang-untranslatable-uri:dart%3Acore: Error: Not found: 'dart:async'
org-dartlang-untranslatable-uri:dart%3Acore: Error: Not found: 'dart:collection'
lib/main.dart:1:1: Error: Not found: 'dart:collection'
import 'package:flutter/material.dart';
^
org-dartlang-untranslatable-uri:dart%3Acore: Error: Not found: 'dart:convert'
org-dartlang-untranslatable-uri:dart%3Acore: Error: Not found: 'dart:developer'
org-dartlang-untranslatable-uri:dart%3Acore: Error: Not found: 'dart:ffi'
org-dartlang-untranslatable-uri:dart%3Acore: Error: Not found: 'dart:_internal'
org-dartlang-untranslatable-uri:dart%3Acore: Error: Not found: 'dart:isolate'
org-dartlang-untranslatable-uri:dart%3Acore: Error: Not found: 'dart:math'
org-dartlang-untranslatable-uri:dart%3Acore: Error: Not found: 'dart:mirrors'
org-dartlang-untranslatable-uri:dart%3Acore: Error: Not found: 'dart:profiler'
org-dartlang-untranslatable-uri:dart%3Acore: Error: Not found: 'dart:typed_data'
org-dartlang-untranslatable-uri:dart%3Acore: Error: Not found:
'dart:nativewrappers'
org-dartlang-untranslatable-uri:dart%3Acore: Error: Not found: 'dart:io'
org-dartlang-untranslatable-uri:dart%3Acore: Error: Not found: 'dart:ui'
org-dartlang-untranslatable-uri:dart%3Acore: Error: Not found:
'dart:vmservice_io'
lib/main.dart:1:1: Error: Not found: 'dart:core'
import 'package:flutter/material.dart';
^
org-dartlang-untranslatable-uri:dart%3Acore:1:8: Error: Not found:
'dart:_internal'
import 'dart:_internal';
^
org-dartlang-untranslatable-uri:dart%3Acore:2:8: Error: Not found: 'dart:async'
import 'dart:async';
^
org-dartlang-untranslatable-uri:dart%3Acore:4:1: Error: Not found: 'dart:async'
export 'dart:async' show Future, Stream;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/about.da
rt:5:8: Error: Not found: 'dart:async'
import 'dart:async';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/about.da
rt:6:8: Error: Not found: 'dart:developer'
import 'dart:developer' show Timeline, Flow;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/about.da
rt:7:8: Error: Not found: 'dart:io'
import 'dart:io' show Platform;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/animated
_icons.dart:8:8: Error: Not found: 'dart:math'
import 'dart:math' as math show pi;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/animated
_icons.dart:9:8: Error: Not found: 'dart:ui'
import 'dart:ui' as ui show Paint, Path, Canvas;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/animated
_icons.dart:10:8: Error: Not found: 'dart:ui'
import 'dart:ui' show lerpDouble;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/app.dart
:5:8: Error: Not found: 'dart:ui'
import 'dart:ui' as ui;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/app_bar.
dart:5:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/app_bar_
theme.dart:5:8: Error: Not found: 'dart:ui'
import 'dart:ui' show lerpDouble;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/arc.dart
:5:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/arc.dart
:6:8: Error: Not found: 'dart:ui'
import 'dart:ui' show lerpDouble;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/bottom_a
pp_bar_theme.dart:5:8: Error: Not found: 'dart:ui'
import 'dart:ui' show lerpDouble;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/bottom_n
avigation_bar.dart:5:8: Error: Not found: 'dart:collection'
import 'dart:collection' show Queue;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/bottom_n
avigation_bar.dart:6:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/bottom_s
heet.dart:5:8: Error: Not found: 'dart:async'
import 'dart:async';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/bottom_s
heet_theme.dart:5:8: Error: Not found: 'dart:ui'
import 'dart:ui' show lerpDouble;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/button.d
art:5:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/card_the
me.dart:5:8: Error: Not found: 'dart:ui'
import 'dart:ui' show lerpDouble;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/checkbox
.dart:5:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/chip.dar
t:5:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/chip_the
me.dart:5:8: Error: Not found: 'dart:ui'
import 'dart:ui' show lerpDouble;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/colors.d
art:5:8: Error: Not found: 'dart:ui'
import 'dart:ui' show Color;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/data_tab
le.dart:5:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/date_pic
ker.dart:5:8: Error: Not found: 'dart:async'
import 'dart:async';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/date_pic
ker.dart:6:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/dialog.d
art:5:8: Error: Not found: 'dart:async'
import 'dart:async';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/dialog_t
heme.dart:5:8: Error: Not found: 'dart:ui'
import 'dart:ui' show lerpDouble;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/drawer.d
art:5:8: Error: Not found: 'dart:math'
import 'dart:math';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/dropdown
.dart:5:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/expand_i
con.dart:4:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/feedback
.dart:5:8: Error: Not found: 'dart:async'
import 'dart:async';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/flexible
_space_bar.dart:5:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/floating
_action_button.dart:5:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/floating
_action_button_location.dart:5:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/floating
_action_button_theme.dart:5:8: Error: Not found: 'dart:ui'
import 'dart:ui' show lerpDouble;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/icon_but
ton.dart:5:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/ink_ripp
le.dart:5:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/ink_spla
sh.dart:5:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/ink_well
.dart:5:8: Error: Not found: 'dart:collection'
import 'dart:collection';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/input_bo
rder.dart:5:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/input_bo
rder.dart:6:8: Error: Not found: 'dart:ui'
import 'dart:ui' show lerpDouble;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/input_de
corator.dart:5:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/input_de
corator.dart:6:8: Error: Not found: 'dart:ui'
import 'dart:ui' show lerpDouble;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/list_til
e.dart:5:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/material
_button.dart:5:8: Error: Not found: 'dart:ui'
import 'dart:ui';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/material
_localizations.dart:5:8: Error: Not found: 'dart:async'
import 'dart:async';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/material
_state.dart:5:8: Error: Not found: 'dart:ui'
import 'dart:ui' show Color;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/mergeabl
e_material.dart:5:8: Error: Not found: 'dart:ui'
import 'dart:ui' show lerpDouble;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/paginate
d_data_table.dart:5:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/popup_me
nu.dart:5:8: Error: Not found: 'dart:async'
import 'dart:async';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/progress
_indicator.dart:5:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/range_sl
ider.dart:5:8: Error: Not found: 'dart:async'
import 'dart:async';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/range_sl
ider.dart:6:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/range_sl
ider.dart:7:8: Error: Not found: 'dart:ui'
import 'dart:ui' as ui;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/refresh_
indicator.dart:5:8: Error: Not found: 'dart:async'
import 'dart:async';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/refresh_
indicator.dart:6:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/reordera
ble_list.dart:5:8: Error: Not found: 'dart:math'
import 'dart:math';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/scaffold
.dart:5:8: Error: Not found: 'dart:async'
import 'dart:async';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/scaffold
.dart:6:8: Error: Not found: 'dart:collection'
import 'dart:collection';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/scaffold
.dart:7:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/scrollba
r.dart:5:8: Error: Not found: 'dart:async'
import 'dart:async';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/search.d
art:5:8: Error: Not found: 'dart:async'
import 'dart:async';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/shadows.
dart:5:8: Error: Not found: 'dart:ui'
import 'dart:ui' show Color, Offset;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/slider.d
art:5:8: Error: Not found: 'dart:async'
import 'dart:async';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/slider.d
art:6:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/slider.d
art:7:8: Error: Not found: 'dart:math'
import 'dart:math';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/slider_t
heme.dart:5:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/slider_t
heme.dart:6:8: Error: Not found: 'dart:ui'
import 'dart:ui' show Path, lerpDouble;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/snack_ba
r_theme.dart:5:8: Error: Not found: 'dart:ui'
import 'dart:ui' show lerpDouble;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/tab_cont
roller.dart:5:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/tabs.dar
t:5:8: Error: Not found: 'dart:async'
import 'dart:async';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/tabs.dar
t:6:8: Error: Not found: 'dart:ui'
import 'dart:ui' show lerpDouble;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/text_fie
ld.dart:5:8: Error: Not found: 'dart:collection'
import 'dart:collection';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/text_sel
ection.dart:5:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/theme_da
ta.dart:5:8: Error: Not found: 'dart:ui'
import 'dart:ui' show Color, hashList;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/time.dar
t:5:8: Error: Not found: 'dart:ui'
import 'dart:ui' show hashValues;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/time_pic
ker.dart:5:8: Error: Not found: 'dart:async'
import 'dart:async';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/time_pic
ker.dart:6:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/tooltip.
dart:5:8: Error: Not found: 'dart:async'
import 'dart:async';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/material/user_acc
ounts_drawer_header.dart:5:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/painting.dart:20:1:
Error: Not found: 'dart:ui'
export 'dart:ui' show Shadow, PlaceholderAlignment;
^
file:///Users/carla/Documents/flutter/.pub-cache/hosted/pub.dartlang.org/vector_
math-2.0.8/lib/vector_math_64.dart:22:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/.pub-cache/hosted/pub.dartlang.org/vector_
math-2.0.8/lib/vector_math_64.dart:23:8: Error: Not found: 'dart:typed_data'
import 'dart:typed_data';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/app.dart:
5:8: Error: Not found: 'dart:async'
import 'dart:async';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/app.dart:
6:8: Error: Not found: 'dart:collection'
import 'dart:collection' show HashMap;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/app.dart:
26:1: Error: Not found: 'dart:ui'
export 'dart:ui' show Locale;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/async.dar
t:9:8: Error: Not found: 'dart:async'
import 'dart:async' show Future, Stream, StreamSubscription;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/automatic
_keep_alive.dart:5:8: Error: Not found: 'dart:async'
import 'dart:async';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/banner.da
rt:5:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/basic.dar
t:5:8: Error: Not found: 'dart:ui'
import 'dart:ui' as ui show Image, ImageFilter;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/binding.d
art:5:8: Error: Not found: 'dart:async'
import 'dart:async';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/binding.d
art:6:8: Error: Not found: 'dart:developer'
import 'dart:developer' as developer;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/binding.d
art:7:8: Error: Not found: 'dart:ui'
import 'dart:ui' show AppLifecycleState, Locale, AccessibilityFeatures;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/binding.d
art:21:1: Error: Not found: 'dart:ui'
export 'dart:ui' show AppLifecycleState, Locale;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/bottom_na
vigation_bar_item.dart:5:8: Error: Not found: 'dart:ui'
import 'dart:ui' show Color;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/debug.dar
t:5:8: Error: Not found: 'dart:collection'
import 'dart:collection';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/debug.dar
t:6:8: Error: Not found: 'dart:developer'
import 'dart:developer' show Timeline; // to disambiguate reference in dartdocs
below
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/editable_
text.dart:5:8: Error: Not found: 'dart:async'
import 'dart:async';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/editable_
text.dart:6:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/editable_
text.dart:7:8: Error: Not found: 'dart:ui'
import 'dart:ui' as ui;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/fade_in_i
mage.dart:5:8: Error: Not found: 'dart:typed_data'
import 'dart:typed_data';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/focus_man
ager.dart:5:8: Error: Not found: 'dart:async'
import 'dart:async';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/focus_man
ager.dart:6:8: Error: Not found: 'dart:ui'
import 'dart:ui';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/framework
.dart:5:8: Error: Not found: 'dart:async'
import 'dart:async';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/framework
.dart:6:8: Error: Not found: 'dart:collection'
import 'dart:collection';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/framework
.dart:7:8: Error: Not found: 'dart:developer'
import 'dart:developer';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/framework
.dart:15:1: Error: Not found: 'dart:ui'
export 'dart:ui' show hashValues, hashList;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/icon_data
.dart:5:8: Error: Not found: 'dart:ui'
import 'dart:ui' show hashValues;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/icon_them
e_data.dart:5:8: Error: Not found: 'dart:ui'
import 'dart:ui' show Color, hashValues;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/icon_them
e_data.dart:6:8: Error: Not found: 'dart:ui'
import 'dart:ui' as ui show lerpDouble;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/image.dar
t:5:8: Error: Not found: 'dart:async'
import 'dart:async';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/image.dar
t:6:8: Error: Not found: 'dart:io'
import 'dart:io' show File;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/image.dar
t:7:8: Error: Not found: 'dart:typed_data'
import 'dart:typed_data';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/inherited
_model.dart:5:8: Error: Not found: 'dart:collection'
import 'dart:collection';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/list_whee
l_scroll_view.dart:4:8: Error: Not found: 'dart:async'
import 'dart:async';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/list_whee
l_scroll_view.dart:5:8: Error: Not found: 'dart:collection'
import 'dart:collection';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/list_whee
l_scroll_view.dart:6:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/localizat
ions.dart:5:8: Error: Not found: 'dart:async'
import 'dart:async';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/localizat
ions.dart:6:8: Error: Not found: 'dart:ui'
import 'dart:ui' show Locale;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/media_que
ry.dart:5:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/media_que
ry.dart:6:8: Error: Not found: 'dart:ui'
import 'dart:ui' as ui;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/media_que
ry.dart:7:8: Error: Not found: 'dart:ui'
import 'dart:ui' show Brightness;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/navigatio
n_toolbar.dart:5:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/navigator
.dart:5:8: Error: Not found: 'dart:async'
import 'dart:async';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/navigator
.dart:6:8: Error: Not found: 'dart:convert'
import 'dart:convert';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/navigator
.dart:7:8: Error: Not found: 'dart:developer'
import 'dart:developer' as developer;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/nested_sc
roll_view.dart:5:8: Error: Not found: 'dart:async'
import 'dart:async';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/nested_sc
roll_view.dart:6:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/overlay.d
art:5:8: Error: Not found: 'dart:async'
import 'dart:async';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/overlay.d
art:6:8: Error: Not found: 'dart:collection'
import 'dart:collection';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/overscrol
l_indicator.dart:5:8: Error: Not found: 'dart:async'
import 'dart:async' show Timer;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/overscrol
l_indicator.dart:6:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/page_view
.dart:5:8: Error: Not found: 'dart:async'
import 'dart:async';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/page_view
.dart:6:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/routes.da
rt:5:8: Error: Not found: 'dart:async'
import 'dart:async';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/safe_area
.dart:5:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/scroll_ac
tivity.dart:5:8: Error: Not found: 'dart:async'
import 'dart:async';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/scroll_ac
tivity.dart:6:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/scroll_co
ntroller.dart:5:8: Error: Not found: 'dart:async'
import 'dart:async';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/scroll_me
trics.dart:5:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/scroll_ph
ysics.dart:5:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/scroll_po
sition.dart:5:8: Error: Not found: 'dart:async'
import 'dart:async';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/scroll_po
sition_with_single_context.dart:5:8: Error: Not found: 'dart:async'
import 'dart:async';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/scroll_si
mulation.dart:5:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/scroll_vi
ew.dart:5:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/scrollabl
e.dart:5:8: Error: Not found: 'dart:async'
import 'dart:async';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/scrollabl
e.dart:6:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/scrollabl
e.dart:7:8: Error: Not found: 'dart:ui'
import 'dart:ui';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/scrollbar
.dart:5:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/semantics
_debugger.dart:5:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/semantics
_debugger.dart:6:8: Error: Not found: 'dart:ui'
import 'dart:ui' show SemanticsFlag;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/single_ch
ild_scroll_view.dart:5:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/sliver.da
rt:5:8: Error: Not found: 'dart:collection'
import 'dart:collection' show SplayTreeMap, HashMap;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/table.dar
t:5:8: Error: Not found: 'dart:collection'
import 'dart:collection';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/text_sele
ction.dart:5:8: Error: Not found: 'dart:async'
import 'dart:async';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/text_sele
ction.dart:6:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/transitio
ns.dart:5:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/widget_in
spector.dart:5:8: Error: Not found: 'dart:async'
import 'dart:async';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/widget_in
spector.dart:6:8: Error: Not found: 'dart:convert'
import 'dart:convert';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/widget_in
spector.dart:7:8: Error: Not found: 'dart:developer'
import 'dart:developer' as developer;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/widget_in
spector.dart:8:8: Error: Not found: 'dart:math'
import 'dart:math' as math;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/widget_in
spector.dart:9:8: Error: Not found: 'dart:typed_data'
import 'dart:typed_data';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/widget_in
spector.dart:10:8: Error: Not found: 'dart:ui'
import 'dart:ui' as ui
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/widget_in
spector.dart:22:8: Error: Not found: 'dart:ui'
import 'dart:ui' show Canvas, Offset;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/widgets/widget_sp
an.dart:5:8: Error: Not found: 'dart:ui'
import 'dart:ui' as ui show ParagraphBuilder, PlaceholderAlignment;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/foundation/basic_
types.dart:5:8: Error: Not found: 'dart:async'
import 'dart:async';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/foundation/basic_
types.dart:6:8: Error: Not found: 'dart:collection'
import 'dart:collection';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/foundation/basic_
types.dart:10:1: Error: Not found: 'dart:ui'
export 'dart:ui' show VoidCallback;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/foundation/bindin
g.dart:5:8: Error: Not found: 'dart:async'
import 'dart:async';
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/foundation/bindin
g.dart:6:8: Error: Not found: 'dart:convert'
import 'dart:convert' show json;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/foundation/bindin
g.dart:7:8: Error: Not found: 'dart:developer'
import 'dart:developer' as developer;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/foundation/bindin
g.dart:8:8: Error: Not found: 'dart:io'
import 'dart:io' show exit;
^
file:///Users/carla/Documents/flutter/packages/flutter/lib/src/foundation/bindin
g.dart:9:8: Error: Not found: 'dart:ui'
import 'dart:ui' as ui show saveCompilationTrace, Window, window;
^
Unhandled exception:
Crash when compiling package:flutter/src/foundation/bitfield.dart,
at character offset null:
dart:core: Internal problem: Unhandled null in accessor.
#0 internalProblem (package:front_end/src/fasta/problems.dart:45:3)
#1 unhandled (package:front_end/src/fasta/problems.dart:58:10)
#2 Loader.read (package:front_end/src/fasta/loader.dart:155:9)
#3 SourceLibraryBuilder.lookupImportCondition
(package:front_end/src/fasta/source/source_library_builder.dart:280:43)
#4 SourceLibraryBuilder.addImport
(package:front_end/src/fasta/source/source_library_builder.dart:303:13)
#5 OutlineBuilder.endImport
(package:front_end/src/fasta/source/outline_builder.dart:234:13)
#6 Parser.parseImport
(package:front_end/src/fasta/parser/parser.dart:667:16)
#7 Parser.parseTopLevelKeywordDeclaration
(package:front_end/src/fasta/parser/parser.dart:590:18)
#8 Parser.parseTopLevelDeclarationImpl
(package:front_end/src/fasta/parser/parser.dart:466:14)
#9 Parser.parseUnit (package:front_end/src/fasta/parser/parser.dart:348:15)
#10 SourceLoader.buildOutline
(package:front_end/src/fasta/source/source_loader.dart:271:37)
<asynchronous suspension>
#11 Loader.buildOutlines (package:front_end/src/fasta/loader.dart:198:13)
<asynchronous suspension>
#12 KernelTarget.buildOutlines.<anonymous closure>
(package:front_end/src/fasta/kernel/kernel_target.dart:251:20)
<asynchronous suspension>
#13 withCrashReporting (package:front_end/src/fasta/crash.dart:122:24)
<asynchronous suspension>
#14 KernelTarget.buildOutlines
(package:front_end/src/fasta/kernel/kernel_target.dart:249:12)
<asynchronous suspension>
#15 generateKernelInternal.<anonymous closure>
(package:front_end/src/kernel_generator_impl.dart:110:28)
<asynchronous suspension>
#16 withCrashReporting (package:front_end/src/fasta/crash.dart:122:24)
<asynchronous suspension>
#17 generateKernelInternal
(package:front_end/src/kernel_generator_impl.dart:58:10)
<asynchronous suspension>
#18 kernelForProgram.<anonymous closure>
(package:front_end/src/api_prototype/kernel_generator.dart:48:28)
<asynchronous suspension>
#19 CompilerContext.runWithOptions.<anonymous closure>
(package:front_end/src/fasta/compiler_context.dart:134:20)
<asynchronous suspension>
#20 CompilerContext.runInContext.<anonymous closure>.<anonymous closure>
(package:front_end/src/fasta/compiler_context.dart:122:46)
#21 new Future.sync (dart:async/future.dart:224:31)
#22 CompilerContext.runInContext.<anonymous closure>
(package:front_end/src/fasta/compiler_context.dart:122:19)
#23 _rootRun (dart:async/zone.dart:1124:13)
#24 _CustomZone.run (dart:async/zone.dart:1021:19)
#25 _runZoned (dart:async/zone.dart:1516:10)
#26 runZoned (dart:async/zone.dart:1463:12)
#27 CompilerContext.runInContext
(package:front_end/src/fasta/compiler_context.dart:121:12)
#28 CompilerContext.runWithOptions
(package:front_end/src/fasta/compiler_context.dart:132:10)
#29 kernelForProgram
(package:front_end/src/api_prototype/kernel_generator.dart:47:32)
<asynchronous suspension>
#30 compileToKernel (package:vm/kernel_front_end.dart:309:27)
<asynchronous suspension>
#31 FrontendCompiler.compile.<anonymous closure>
(package:vm/frontend_server.dart:359:56)
#32 new Future.<anonymous closure> (dart:async/future.dart:176:37)
#33 _rootRun (dart:async/zone.dart:1120:38)
#34 _CustomZone.run (dart:async/zone.dart:1021:19)
#35 _CustomZone.runGuarded (dart:async/zone.dart:923:7)
#36 _CustomZone.bindCallbackGuarded.<anonymous closure>
(dart:async/zone.dart:963:23)
#37 _rootRun (dart:async/zone.dart:1124:13)
#38 _CustomZone.run (dart:async/zone.dart:1021:19)
#39 _CustomZone.bindCallback.<anonymous closure>
(dart:async/zone.dart:947:23)
#40 Timer._createTimer.<anonymous closure>
(dart:async-patch/timer_patch.dart:21:15)
#41 _Timer._runTimers (dart:isolate-patch/timer_impl.dart:382:19)
#42 _Timer._handleMessage (dart:isolate-patch/timer_impl.dart:416:5)
#43 _RawReceivePortImpl._handleMessage
(dart:isolate-patch/isolate_patch.dart:172:12)
#0 internalProblem (package:front_end/src/fasta/problems.dart:45:3)
#1 unhandled (package:front_end/src/fasta/problems.dart:58:10)
#2 Loader.read (package:front_end/src/fasta/loader.dart:155:9)
#3 SourceLibraryBuilder.lookupImportCondition
(package:front_end/src/fasta/source/source_library_builder.dart:280:43)
#4 SourceLibraryBuilder.addImport
(package:front_end/src/fasta/source/source_library_builder.dart:303:13)
#5 OutlineBuilder.endImport
(package:front_end/src/fasta/source/outline_builder.dart:234:13)
#6 Parser.parseImport
(package:front_end/src/fasta/parser/parser.dart:667:16)
#7 Parser.parseTopLevelKeywordDeclaration
(package:front_end/src/fasta/parser/parser.dart:590:18)
#8 Parser.parseTopLevelDeclarationImpl
(package:front_end/src/fasta/parser/parser.dart:466:14)
#9 Parser.parseUnit (package:front_end/src/fasta/parser/parser.dart:348:15)
#10 SourceLoader.buildOutline
(package:front_end/src/fasta/source/source_loader.dart:271:37)
<asynchronous suspension>
#11 Loader.buildOutlines (package:front_end/src/fasta/loader.dart:198:13)
<asynchronous suspension>
#12 KernelTarget.buildOutlines.<anonymous closure>
(package:front_end/src/fasta/kernel/kernel_target.dart:251:20)
<asynchronous suspension>
#13 withCrashReporting (package:front_end/src/fasta/crash.dart:122:24)
<asynchronous suspension>
#14 KernelTarget.buildOutlines
(package:front_end/src/fasta/kernel/kernel_target.dart:249:12)
<asynchronous suspension>
#15 generateKernelInternal.<anonymous closure>
(package:front_end/src/kernel_generator_impl.dart:110:28)
<asynchronous suspension>
#16 withCrashReporting (package:front_end/src/fasta/crash.dart:122:24)
<asynchronous suspension>
#17 generateKernelInternal
(package:front_end/src/kernel_generator_impl.dart:58:10)
<asynchronous suspension>
#18 kernelForProgram.<anonymous closure>
(package:front_end/src/api_prototype/kernel_generator.dart:48:28)
<asynchronous suspension>
#19 CompilerContext.runWithOptions.<anonymous closure>
(package:front_end/src/fasta/compiler_context.dart:134:20)
<asynchronous suspension>
#20 CompilerContext.runInContext.<anonymous closure>.<anonymous closure>
(package:front_end/src/fasta/compiler_context.dart:122:46)
#21 new Future.sync (dart:async/future.dart:224:31)
#22 CompilerContext.runInContext.<anonymous closure>
(package:front_end/src/fasta/compiler_context.dart:122:19)
#23 _rootRun (dart:async/zone.dart:1124:13)
#24 _CustomZone.run (dart:async/zone.dart:1021:19)
#25 _runZoned (dart:async/zone.dart:1516:10)
#26 runZoned (dart:async/zone.dart:1463:12)
#27 CompilerContext.runInContext
(package:front_end/src/fasta/compiler_context.dart:121:12)
#28 CompilerContext.runWithOptions
(package:front_end/src/fasta/compiler_context.dart:132:10)
#29 kernelForProgram
(package:front_end/src/api_prototype/kernel_generator.dart:47:32)
<asynchronous suspension>
#30 compileToKernel (package:vm/kernel_front_end.dart:309:27)
<asynchronous suspension>
#31 FrontendCompiler.compile.<anonymous closure>
(package:vm/frontend_server.dart:359:56)
#32 new Future.<anonymous closure> (dart:async/future.dart:176:37)
#33 _rootRun (dart:async/zone.dart:1120:38)
#34 _CustomZone.run (dart:async/zone.dart:1021:19)
#35 _CustomZone.runGuarded (dart:async/zone.dart:923:7)
#36 _CustomZone.bindCallbackGuarded.<anonymous closure>
(dart:async/zone.dart:963:23)
#37 _rootRun (dart:async/zone.dart:1124:13)
#38 _CustomZone.run (dart:async/zone.dart:1021:19)
#39 _CustomZone.bindCallback.<anonymous closure>
(dart:async/zone.dart:947:23)
#40 Timer._createTimer.<anonymous closure>
(dart:async-patch/timer_patch.dart:21:15)
#41 _Timer._runTimers (dart:isolate-patch/timer_impl.dart:382:19)
#42 _Timer._handleMessage (dart:isolate-patch/timer_impl.dart:416:5)
#43 _RawReceivePortImpl._handleMessage
(dart:isolate-patch/isolate_patch.dart:172:12)
Compiler failed on /Users/carla/Documents/my_app/lib/main.dart
Error launching application on iPhone Xʀ.
```
## Flutter doctor
```
[✓] Android toolchain - develop for Android devices (Android SDK version 29.0.1)
• Android SDK at /Users/carla/Library/Android/sdk
• Android NDK location not configured (optional; useful for native profiling
support)
• Platform android-29, build-tools 29.0.1
• Java binary at: /Applications/Android
Studio.app/Contents/jre/jdk/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build
1.8.0_152-release-1343-b01)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 10.2.1)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Xcode 10.2.1, Build version 10E1001
• CocoaPods version 1.7.4
[✓] iOS tools - develop for iOS devices
• ios-deploy 1.9.4
[✓] Android Studio (version 3.4)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin version 37.1.1
• Dart plugin version 183.6270
• Java version OpenJDK Runtime Environment (build
1.8.0_152-release-1343-b01)
[✓] VS Code (version 1.36.1)
• VS Code at /Users/carla/Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.2.0
[✓] Connected device (1 available)
• iPhone Xʀ • F6EBF7B9-6570-4E1B-BEBC-C0B4A7D1C9F8 • ios •
com.apple.CoreSimulator.SimRuntime.iOS-12-2 (simulator)
• No issues found!
```
| tool,a: first hour,P2,team-tool,triaged-tool | low | Critical |
469,044,704 | pytorch | performance much worse on 2080ti than 1080ti | ## 🐛 Bug
I have a model that I have historically trained on 1080ti, and recently I discovered that the training speed is much worse (almost 2x slower) on 2080ti. The rest of the setup (nvidia driver + cpu + networking) is the same between the two.
I profiled my script using `nvprof python my_script.py`, and discovered that on the 2080ti, way too much time (~70%) is spent in this function:
```
void cudnn::detail::convolveNd_wgrad_engine<float, int=3, int=512, int=6, int=5, int=3, int=3, int=3, bool=1>(int, int, int, float const *, int, cudnn::detail::convolveNd_wgrad_engine<float, int=3, int=512, int=6, int=5, int=3, int=3, int=3, bool=1>*, float const , kernel_gradNd_params, int, float, int)
```
Any ideas what the problem could be?
I have attached the two profiles in case they are helpful.
[1080ti.log](https://github.com/pytorch/pytorch/files/3400744/1080ti.log)
[2080ti.log](https://github.com/pytorch/pytorch/files/3400745/2080ti.log)
- PyTorch Version (e.g., 1.0): 1.1.0
- OS (e.g., Linux): Ubuntu 16.04
- How you installed PyTorch (`conda`, `pip`, source): pip
- Build command you used (if compiling from source):
- Python version: 3.6
- CUDA/cuDNN version: 10.0 / 7.4
- GPU models and configuration: 1080ti + 2080ti, nvidia driver 410.78
- Any other relevant information:
| module: performance,module: cuda,triaged | medium | Critical |
469,046,930 | flutter | [Google Map] InfoWindow can support a style on title and snippet? | google_maps_flutter: 0.5.19+2
1) Can I set the styling of the title or snippet of the InfoWindow, e.g. font color or font size?
2) Can the title support multiple lines? Currently it supports only one line.
3) Will it support a custom info window? On native platforms we can draw a bitmap for the info window.
| c: new feature,d: api docs,customer: crowd,p: maps,package,c: proposal,team-ecosystem,P3,triaged-ecosystem | low | Major |
469,051,635 | flutter | Flutter Driver: Needs to use WidgetsBinding.instance.isRootWidgetAttached because otherwise I need to use Future.Delayed before root widget is attached | <!-- Thank you for using Flutter!
If you are looking for support, please check out our documentation
or consider asking a question on Stack Overflow:
* https://flutter.dev/
* https://api.flutter.dev/
* https://stackoverflow.com/questions/tagged/flutter?sort=frequent
If you have found a bug or if our documentation doesn't have an answer
to what you're looking for, then fill our the template below. Please read
our guide to filing a bug first: https://flutter.dev/docs/resources/bug-reports
-->
## Use case
Sometimes Flutter Driver tries to attach itself to the root widget before it has been created. It would be nice to be able to use `WidgetsBinding.instance.isRootWidgetAttached`; otherwise I need to fall back on `Future.delayed`, which is suboptimal.
## Proposal
Expose `WidgetsBinding.instance.isRootWidgetAttached` (or an equivalent wait) to Flutter Driver, so it only attaches once the root widget exists instead of callers working around the race with `Future.delayed`. | c: new feature,tool,t: flutter driver,P3,team-tool,triaged-tool | low | Critical |
469,081,326 | go | cmd/internal/browser: Open() shouldn’t rely on Commands() | `Commands()` is used as the only backend for `Open()`, but there’s OS-specific ways to open things in a browser instead. Sadly on Linux, it is complex:
1. Consult `$BROWSER` for a “:”-delimited list of executables.
2. If it’s unset or none of the executables can be found:
1. `xdg-settings get default-web-browser` will return the .desktop file name, e.g. `firefox.desktop`
2. Find that file in one of `${XDG_DATA_HOME-$HOME/.local/share}:${XDG_DATA_DIRS-/usr/local/share/:/usr/share/}` (subdirectory `applications`)
3. Parse the `Exec` line in that file and replace [`%s`, `%u` and whatever](https://specifications.freedesktop.org/desktop-entry-spec/latest/ar01s07.html) with the file name/url/… to get a command line
3. Execute the command line you built from `$BROWSER` or the .desktop file
4. If all the above failed, fall back to `xdg-open`
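The .desktop step in the list above (replacing the Exec line's field codes with the URL) can be sketched in Go. This is a minimal, illustrative sketch only: `expandExec` is a hypothetical helper name, and real Exec lines also use shell-style quoting that `strings.Fields` does not handle.

```go
package main

import (
	"fmt"
	"strings"
)

// expandExec performs the Desktop Entry spec field-code substitution:
// %u/%U/%f/%F become the URL, %% becomes a literal %, and the remaining
// codes (%i, %c, %k) are dropped because they have no sensible value here.
func expandExec(execLine, url string) []string {
	var argv []string
	for _, tok := range strings.Fields(execLine) {
		switch tok {
		case "%u", "%U", "%f", "%F":
			argv = append(argv, url)
		case "%i", "%c", "%k":
			// icon/name/location codes: nothing to substitute, skip them
		default:
			argv = append(argv, strings.ReplaceAll(tok, "%%", "%"))
		}
	}
	return argv
}

func main() {
	// Exec line as it might appear in firefox.desktop (illustrative only).
	fmt.Println(expandExec("firefox %u", "https://go.dev"))
}
```

The resulting argv would then be executed directly, rather than handed to a shell, matching how `xdg-open` itself launches the browser.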
As said, this is harder than it should be, but it’s the only way to do it correctly AFAIK. `xdg-open` does all this, but there’s no way to tell it “open the path/URL I give you with a *browser*”. If you e.g. want to open a SVG with a browser, you need to go the above route. | NeedsInvestigation | low | Critical |
469,103,728 | vue | Consistency in `$refs`: all children of $refs array using same format. | ### What problem does this feature solve?
I know this is intended as a feature, but I encountered a lot of issues when trying to access $refs due to all the different formats:
- If `ref` is inside a `v-for`, the `$refs` element will be an array. This wouldn't be necessary, since it is trivial to add a unique name to the `ref` inside a loop, [as seen in this example](https://forum.vuejs.org/t/this-refs-theid-returns-an-array/31995/4)
- If `ref` is outside of a `v-for`, it will be a DOM Node.
- Except that when the referenced item is a custom component, then it will be an object and to access the DOM node you would have to read the `$el` property...
So basically, if we wanted to select a ref's node, and we wouldn't know if it's a node or a custom component, if it's in a for-loop or not, this would be the code:
```js
const ref = (this.$refs.test && this.$refs.test[0] && this.$refs.test[0].$el) ? this.$refs.test[0].$el
  : (this.$refs.test && this.$refs.test.$el) ? this.$refs.test.$el
  : (this.$refs.test && this.$refs.test[0]) ? this.$refs.test[0]
  : this.$refs.test ? this.$refs.test
  : null;
```
So far I've never encountered this case, but I did encounter:
- The case where the ref could be either a node or a custom component: I wanted to add a class and remove it with a timeout, for animation purposes. In the end, my workaround was to create a custom component to handle the "empty state" of a component, to consistently be able to reference the node using `.$el` (this was easier than refactoring all the parts where `.$el` was being referenced).
- The case where the ref could either be a node or an array of nodes, in the case of a list that had a default value (the default value was outside the `v-for`).
My proposal is to have a consistent way of referencing a node/component inside templates.
An idea I found in [another issue of this repo](https://github.com/vuejs/vue/issues/2044) is to use a special syntax when the ref is expected to be an array:
```
<div ref:multiple="example"></div>
```
Another part of the proposed change is to be able to access the DOM Node using `$el` (__always__ and exclusively that way).
Since this is a breaking change, it would be interesting to hear workarounds for this. For example, using a different keyword altogether.
In general, I think this would add a lot of sanity and consistency to the usage of `$refs`, and I think it's a common issue for a lot of beginners.
Note: this feature request is not compatible with [this other feature request](https://github.com/vuejs/vue/issues/4035)
### What does the proposed API look like?
``` vue-html
<!-- Refs inside loops: -->
<div v-for="...">
<div ref:multiple="example"></div>
  <!-- or ref:nested="example" -->
</div>
```
``` js
// Refs array
[
0: { $el: <div></div> }, // simple node item
1: { $el: <div></div>, methods, data, etc..... }, // ref in custom component
...
]
```
| discussion | medium | Minor |
469,104,229 | terminal | Plugin: add support for [XYZ]MODEM file transfers | refer to the title | Issue-Feature,Help Wanted,Area-Extensibility,Product-Terminal | high | Critical |
469,114,058 | svelte | Allow binding validity on input, select and textarea elements | I would like to add a read-only binding for `validity` property ([ValidityState](https://developer.mozilla.org/en-US/docs/Web/API/ValidityState)) on form elements.
For Example
```html
<script>
let email;
let emailValidity = {};
</script>
<input type="email" required bind:value={email} bind:validity={emailValidity}>
Email is valid: {emailValidity.valid}
```
I already implemented the [changes on my fork](https://github.com/sveltejs/svelte/compare/master...bbuhler:input-validity-binding?expand=1). I wanted to create a PR, but the guidelines say that a PR implementing a new feature should be preceded by an issue to discuss it. So here it is.
469,152,633 | terminal | Make sure MinMaxCloseControl supports FullScreen & Tablet Mode | Some thoughts about the Min/Max/Close buttons, and supporting FullScreen or Tablet Mode
Some properties to consider for this TitlebarControl. This could be also be useful for the Settings UI, if it is presented in a window.
* CanMaximise
* CanMinimise
* CanRestore
These would hide the corresponding buttons if you wanted, for example, a settings screen that only shows a close button. Also consider how these TitleBar controls will handle FullScreen/Tablet Mode.

_Full Screen_

_Tablet Mode_
_Originally posted by @mdtauk in https://github.com/microsoft/terminal/pull/1948#issuecomment-512048300_ | Help Wanted,Issue-Bug,Area-UserInterface,Product-Terminal,Priority-3 | low | Minor |
469,157,206 | electron | Windows: disable smooth scrolling for ALL electron apps | ### Preflight Checklist
* [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/master/CONTRIBUTING.md) for this project.
* [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/master/CODE_OF_CONDUCT.md) that this project adheres to.
* [x] I have searched the issue tracker for a feature request that matches the one I want to file, without success.
### Problem Description
Smooth scrolling can be disabled on a per-application basis using the command-line switch, but users who want to disable it in one Electron app usually want to disable it in ALL Electron apps.
### Proposed Solution
Solution a)
Add a Windows registry entry that user-friendly third-party applications can write to disable smooth scrolling in all Electron apps (system-wide or for the current user).
Electron apps read the registry and if the key is set, they act the same as if the "--disable-smooth-scrolling" command line switch was enabled.
Solution b)
Use a settings file inside Local/Roaming AppData folders (e.g. electron/user.json).
(This might prove useful for other system-wide Electron settings later.)
I don't know what the registry equivalent is on macOS, but Windows definitely needs this. The file-based solution might prove to be cross-platform.
### Alternatives Considered
I know about the command line switches but it'd require a 3rd party application to:
- find all electron apps and modify existing icons so all use the command line switch
- icons change with each version of the application (because of popular app-x.x.x. folder structure), so it'd need to check for those changes too
- with the rise of electron apps it is becoming an O(n) problem that has a simple O(1) solution
### Additional Information
| enhancement :sparkles: | low | Major |
469,162,154 | godot | Unable to revert ParticlesMaterial's Orbit Velocity properties | <!-- Please search existing issues for potential duplicates before filing yours:
https://github.com/godotengine/godot/issues?q=is%3Aissue
-->
**Godot version:**
v3.1.1.stable.mono.official
v3.2.dev.mono.custom_build.d087a9e32
**OS/device including version:**
Windows 10
**Issue description:**
When using the Inspector to customise a ParticlesMaterial I am unable to revert the properties under the Orbit Velocity group:

| bug,topic:editor,confirmed | low | Minor |
469,289,077 | rust | Type inference in the presence of recursive impls may result in an error message that mentions seemingly unrelated types | Here's an example (tested with 1.36.0 and 1.38.0-nightly):
https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=820e0d5fd61e7d9275a4378977488d8a
```rust
trait Trait {}
struct Struct1<T>(T);
struct Struct2<T>(T);
//error message depends on the order of impls
impl <T> Trait for Option<Struct2<T>> where Option<T>:Trait {}
impl <T> Trait for Option<Struct1<T>> where Option<T>:Trait {}
fn foo<E: Trait>() {}
fn main() {
foo::<Option<Struct1<_>>>();
}
```
The error message:
```
error[E0275]: overflow evaluating the requirement `std::option::Option<Struct2<_>>: Trait`
--> src/main.rs:13:5
|
13 | foo::<Option<Struct1<_>>>();
| ^^^^^^^^^^^^^^^^^^^^^^^^^
|
= help: consider adding a `#![recursion_limit="128"]` attribute to your crate
= note: required because of the requirements on the impl of `Trait` for `std::option::Option<Struct2<Struct2<_>>>`
= note: required because of the requirements on the impl of `Trait` for `std::option::Option<Struct2<Struct2<Struct2<_>>>>`
....
```
I understand that Rust doesn't guarantee that trait resolution is decidable, so overflows are expected in certain cases. Of course, a message like "type annotations needed" would be much more helpful than "overflow evaluating the requirement", but I guess this is the price we have to pay for 'aggressive' type inference.
What is really confusing is the fact that the compiler reports that it fails to evaluate `Option<Struct2<_>>:Trait`, when the code clearly shows that the original obligation is `Option<Struct1<_>>:Trait` (`Struct2` is not mentioned anywhere besides the first impl).
It seems that obligations are evaluated in the following order:
1. `Option<Struct1<_>>: Trait`
2. `Option<_>: Trait`
3. `Option<Struct2<_>>: Trait`
4. `Option<Struct2<Struct2<_>>>: Trait`
5. `Option<Struct2<Struct2<Struct2<_>>>>: Trait`
...
<br/>
Impls that look like this `impl <T> Trait for Something<Struct<T>> where Something<T>:Trait {..}` are not so rare.
For example, `impl<'a, E> Read for &'a PollEvented<E> where E: Evented, &'a E: Read {..}` from https://docs.rs/tokio-reactor/0.1.5/tokio_reactor/struct.PollEvented.html.
Here's how this impl may cause confusing error message:
https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=da7d0c59e214a8536d6872d5069c6500
```rust
#[allow(unused_imports)]
use tokio;
fn foo<T:std::io::Read>() {}
fn main() {
foo::<&_>();
}
```
The error message:
```
error[E0275]: overflow evaluating the requirement `&tokio_reactor::poll_evented::PollEvented<_>: std::io::Read`
--> src/main.rs:7:5
|
7 | foo::<&_>();
| ^^^^^^^^^
|
= help: consider adding a `#![recursion_limit="128"]` attribute to your crate
= note: required because of the requirements on the impl of `std::io::Read` for `&tokio_reactor::poll_evented::PollEvented<tokio_reactor::poll_evented::PollEvented<_>>`
= note: required because of the requirements on the impl of `std::io::Read` for `&tokio_reactor::poll_evented::PollEvented<tokio_reactor::poll_evented::PollEvented<tokio_reactor::poll_evented::PollEvented<_>>>`
...
```
`PollEvented` is not mentioned anywhere in the code (but the impl is in scope).
Interestingly, if I replace `use tokio;` with an impl for a type that is not in scope:
https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=5cdebfaedd00f3142924dd256ac1116a
```rust
impl Struct{}//not found in this scope
fn foo<T:std::io::Read>() {}
fn main() {
foo::<&_>();
}
```
The error message is even more confusing:
```
error[E0275]: overflow evaluating the requirement `&tar::archive::ArchiveInner<_>: std::io::Read`
--> src/main.rs:6:5
|
6 | foo::<&_>();
| ^^^^^^^^^
|
= help: consider adding a `#![recursion_limit="128"]` attribute to your crate
= note: required because of the requirements on the impl of `std::io::Read` for `&tokio_reactor::poll_evented::PollEvented<tar::archive::ArchiveInner<_>>`
= note: required because of the requirements on the impl of `std::io::Read` for `&tokio_reactor::poll_evented::PollEvented<tokio_reactor::poll_evented::PollEvented<tar::archive::ArchiveInner<_>>>`
...
```
<br/><br/>
It seems that there are a number of existing issues that have the same underlying cause (type inference in the presence of recursive impls):
https://github.com/rust-lang/rust/issues/61800
https://github.com/rust-lang/rust/issues/60603
https://github.com/rust-lang/rust/issues/57854
https://github.com/rust-lang/rust/issues/39959
https://github.com/rust-lang/rust/issues/49017
https://github.com/rust-lang/rust/issues/37748
https://github.com/rust-lang/rust/issues/34137 | C-enhancement,A-diagnostics,A-trait-system,T-compiler | low | Critical |
469,294,984 | TypeScript | Functions with same intersection and conditional type in parameter list not assignable to each other |
**TypeScript Version:** 3.5.1
**Search Terms:**
function, intersection, conditional type, parameter list, assignable
**Code**
```ts
/**
* We should never be able to create a value of this type legitimately.
*
* `ErrorMessageT` is our error message
*/
interface CompileError<ErrorMessageT extends any[]> {
/**
* There should never be a value of this type
*/
readonly __compileError : never;
}
type ErrorA = CompileError<["I am error A"]>;
type ErrorB = CompileError<["I am error B"]>;
declare const errorA : ErrorA;
/**
* Different compile errors are assignable to each other.
*/
const errorB : ErrorB = errorA;
/**
* Pretend this is `v1.0.0` of your library.
*/
declare function foo <N extends number> (
/**
* This is how we use `CompileError<>` to prevent `3` from being
* a parameter
*/
n : (
N &
(Extract<3, N> extends never ?
unknown :
CompileError<[3, "is not allowed; received", N]>)
)
) : void;
/**
* Argument of type '3' is not assignable to parameter of type
* 'CompileError<[3, "is not allowed; received", 3]>'.
*/
foo(3);
/**
* OK!
*/
foo(5);
/**
* Argument of type '3 | 5' is not assignable to parameter of type
* 'CompileError<[3, "is not allowed; received", 3 | 5]>'.
*/
foo(5 as 3|5);
/**
* Argument of type 'number' is not assignable to parameter of type
* 'CompileError<[3, "is not allowed; received", number]>'.
*/
foo(5 as number);
///////////////////////////////////////////////////////////////////
/**
* The same as `foo<>()` but with a different error message.
*
* Pretend this is `v1.1.0` of your library.
*/
declare function bar <N extends number> (
n : (
N &
(Extract<3, N> extends never ?
unknown :
CompileError<[3, "is not allowed; received", N]>)
)
) : void;
/**
* Expected: Assignable to each other
* Actual: Not assignable to each other
*/
const fooIsAssignableToBar : typeof bar = foo;
const barIsAssignableToFoo : typeof foo = bar;
```
**Expected behavior:**
The following should have no errors,
```ts
const fooIsAssignableToBar : typeof bar = foo;
const barIsAssignableToFoo : typeof foo = bar;
```
**Actual behavior:**
It has errors
**Playground Link:**
[Playground](http://www.typescriptlang.org/play/#code/PQKhFgCgAIWh1AptAzgCwPYFcA2ATaAO0QDdEAnaAI2QEMqdkAXDaAY3MVqbuhNpxZkGAGbQmaAJYpxATwAOyRgHNJTSQFtuiHLIB0UWNENwABgFFy5DOQCyiFClrLEAFVPRp0bJQrXKGg5OLibAUJKEPOQitGzIAMIYGvKSjJb+ADzpNvaOzm7QiAAePIR4MrSEsgDaALoAfNAA3obQoBAw0EauaBTI6Nj4RKQU1Lz8gsJiEl5MCoitsGGdnLR4GIS60AD622xJKWlWNtAAXMNk5ADcUAC+UHOK0NnkAILQALzQicmpiC8ZaoAIgAktBaBpCsdKK8gQ0bpBHsgXgAhT7fA5-AHAsEQqH+aAouH1BFQPCINg4WicdgbFBMfE2d7nF6vBHtEzQAAikhEIj6kVpv0YjPIFRptEckmUhHoIpYhViaG8EgoBhgIGW+0I9NFaJZ0LRXz8TNJkA5GugAAVOKUCDMZF5TCQAIx6AAMHo8omgsh80Bwkio5Gp+lCZIpVJpIiwhDY6g20BEGFYGQAcoUSogyjJCFgNDRyI0ABStC1dbpSR0yTAAd2gteQWBQyFMP0O-2hGXqHgV8k4ZEFpgAzB4RNZITQIspFnBaNB5NSIYgorPll1CGdoKXOl0MwAyRbb8wlEPxjLDgA00DTjWKdtzI0oAH4j7GANaEDC1zenI-trEu2qK9oCBLwvwZAQcG-RA8CuaBODiSQyDwIFrzTBoAEpWmwyBMK3EgMEkOCoCgcs4FechlHzbMGR9JFoAAcmHRjPFzDBIKlGU5WYVhFxDQIom8aZ5k5RiAKOTJgOvMD2MgnBoMbOCEIpRBkNgtDoGHBpGPVJYoGTDBi2HTD2TATkAHkAGkAEJw0gQziwAVlMsjzMtSjqMCQV6PmJjh2gAAfaAnNY8COPBLjZQYXiFyXQTRl8xQxIkzspJA2SiAiqCYOUxC1JQzSAuCpydL0zUDJTZzIq0wKXLMjojE8mifJEp5GLzAsKDCuTIpQaVovlPj4pXRK2oWS1xMxSSbEBDLwvkxTYPg-L1NQ69OsLMr7Mcpyas2ihXMgMjgFOs7zouy6ruum7bru0jzXcyt+mXGrTEM7ti0wjwqCwBlazUZV5zwXl+U4QUTQCIJ8nK4xLRtUaynEKs2OgZ03Tdd1vTEP0sEoQNg1DcrlnJSlqWQGM4wTTcqGpaB00zB8iHzQsS1aX9tyPA8j2LE8mDPJgL3Qu8sxzC5RlfXdoA-L8fzOf9prS2bpNAhbwQU3KVtUtbNIw+pcK6XD8POQjiLNcjniKRR41g85XiinjxFYLg2GVDjenITlXnjLABHONNsodmKncVV2VQ9+ztV1QyQRQe3+u4mLXAwFE6fOJEfVpygvkMhEo4ZLPY-jgaeOTgAxFMtwzsRDPRLOriAA)
**Related Issues:**
https://github.com/microsoft/TypeScript/issues/21756
Also, related to my comment here,
https://github.com/microsoft/TypeScript/issues/23689#issuecomment-512114782
----- | Needs Investigation | low | Critical |
469,295,215 | create-react-app | Don't compile async/await in node_modules in development | ### Is your proposal related to a problem?
The problem I have with the preset transpiling async/await is that debugging gets really hard. With the rise of hooks, a lot of libraries abstract functionality away, which is great, but it also means that if they expose functionality as a HOF, debugging and reading it is near impossible in development. It's not that I am bad at debugging; rather, the function gets compiled down more than it needs to be, and apart from reaching through a few extra layers for the generator part, the syntax gets completely jumbled.

Code before transpiling
```ts
const handleSubmit = (callback) => async (e) => {
if (e && !nativeValidation) {
e.preventDefault();
e.persist();
}
let fieldErrors;
let fieldValues;
let firstFocusError = true;
const fields = fieldsRef.current;
const currentFieldValues = validationFields ? (validationFields.map(name => fieldsRef.current[name])) : Object.values(fields);
isSubmittingRef.current = true;
reRenderForm({});
const { errors, values } = await currentFieldValues.reduce(
async (previous, field) => {
if (!field) return previous;
const resolvedPrevious = await previous;
const ref = field.ref
const name = field.ref
if (!fields[name]) return Promise.resolve(resolvedPrevious);
const fieldError = await validateField(field, fields, nativeValidation);
if (fieldError[name]) {
if (submitFocusError && firstFocusError && ref.focus) {
ref.focus();
firstFocusError = false;
}
resolvedPrevious.errors = {
...(resolvedPrevious.errors || {}),
...fieldError,
};
return Promise.resolve(resolvedPrevious);
}
resolvedPrevious.values[name] = getFieldValue(fields, ref);
return Promise.resolve(resolvedPrevious);
},
Promise.resolve({ errors, values, }),
);
fieldErrors = errors;
fieldValues = values;
if (isEmptyObject(fieldErrors)) {
await callback(combineFieldValues(fieldValues), e);
errorsRef.current = {};
} else {
errorsRef.current = fieldErrors as any;
}
if (isUnMount.current) return;
isSubmittedRef.current = true;
submitCountRef.current += 1;
isSubmittingRef.current = false;
reRenderForm({});
};
```
Code after transpiling
```js
var handleSubmit = function handleSubmit(callback) {
return (
/*#__PURE__*/
function () {
var _ref26 = _asyncToGenerator(
/*#__PURE__*/
_regeneratorRuntime.mark(function _callee5(e) {
var fieldErrors, fieldValues, fields, currentFieldValues, _ref27, errors, values;
return _regeneratorRuntime.wrap(function _callee5$(_context5) {
while (1) {
switch (_context5.prev = _context5.next) {
case 0:
if (e) {
e.preventDefault();
e.persist();
}
fields = fieldsRef.current;
currentFieldValues = Object.values(fields);
isSubmittingRef.current = true;
reRenderForm({});
if (!validationSchema) {
_context5.next = 12;
break;
}
fieldValues = getFieldsValues(fields);
_context5.next = 9;
return validateWithSchema(validationSchema, fieldValues);
case 9:
fieldErrors = _context5.sent;
_context5.next = 19;
break;
case 12:
_context5.next = 14;
return currentFieldValues.reduce(
/*#__PURE__*/
function () {
var _ref28 = _asyncToGenerator(
/*#__PURE__*/
_regeneratorRuntime.mark(function _callee4(previous, field) {
var resolvedPrevious, ref, name, fieldError;
return _regeneratorRuntime.wrap(function _callee4$(_context4) {
while (1) {
switch (_context4.prev = _context4.next) {
case 0:
_context4.next = 2;
return previous;
case 2:
resolvedPrevious = _context4.sent;
ref = field.ref, name = field.ref.name;
if (fields[name]) {
_context4.next = 6;
break;
}
return _context4.abrupt("return", Promise.resolve(resolvedPrevious));
case 6:
_context4.next = 8;
return validateField(field, fields);
case 8:
fieldError = _context4.sent;
if (!fieldError[name]) {
_context4.next = 12;
break;
}
resolvedPrevious.errors = Object.assign({}, resolvedPrevious.errors || {}, fieldError);
return _context4.abrupt("return", Promise.resolve(resolvedPrevious));
case 12:
// @ts-ignore
resolvedPrevious.values[name] = getFieldValue(fields, ref);
return _context4.abrupt("return", Promise.resolve(resolvedPrevious));
case 14:
case "end":
return _context4.stop();
}
}
}, _callee4);
}));
return function (_x10, _x11) {
return _ref28.apply(this, arguments);
};
}(), Promise.resolve({
errors: {},
values: {}
}));
case 14:
_ref27 = _context5.sent;
errors = _ref27.errors;
values = _ref27.values;
fieldErrors = Object.assign({}, errors, filterUndefinedErrors(errorsRef.current));
fieldValues = values;
case 19:
isSubmittedRef.current = true;
submitCountRef.current += 1;
isSubmittingRef.current = false;
if (isEmptyObject(fieldErrors)) {
callback(combineFieldValues(fieldValues), e);
} else {
errorsRef.current = fieldErrors;
reRenderForm({});
}
case 23:
case "end":
return _context5.stop();
}
}
}, _callee5);
}));
return function (_x9) {
return _ref26.apply(this, arguments);
};
}()
);
};
```
### Describe the solution you'd like
Disable transpiling of `node_modules` in development. Is there a reason for doing it there?
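For context, this is my understanding of how Babel decides, not CRA's actual configuration: with `@babel/preset-env`, targeting only environments that support native async functions makes Babel keep `async`/`await` as-is instead of lowering it to the regenerator runtime. A minimal sketch of such a config:

```json
{
  "presets": [
    ["@babel/preset-env", { "targets": { "esmodules": true } }]
  ]
}
```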
### Additional context
The library in use is react-hook-form, and source-map support is enabled (the "after" code is the source-mapped version).
| issue: proposal | low | Critical |
469,336,027 | opencv | Compile error building 4.1.0 with x86_64-w64-mingw32-gcc under cygwin |
##### System information (version)
- OpenCV => 4.1.0
- Operating System / Platform => Windows 64 / Cygwin
- Compiler => x86_64-w64-mingw32-gcc (GCC) 7.4.0
##### Detailed description
Clean install of cygwin (packages below)
Fresh download of opencv
Trying to build opencv using `w64-ming32-gcc` gives the following error
```
opencv-4.1.0/3rdparty/openexr/IlmThread/IlmThreadMutexPosix.cpp: In constructor ‘IlmThread::Mutex::Mutex()’:
opencv-4.1.0/3rdparty/openexr/IlmThread/IlmThreadMutexPosix.cpp:55:53: error: cannot convert ‘CRITICAL_SECTION* {aka _RTL_CRITICAL_SECTION*}’ to ‘void**’
for argument ‘1’ to ‘int pthread_mutex_init(void**, const pthread_mutexattr_t*)’
if (int error = ::pthread_mutex_init (&_mutex, 0))
^
```
This is repeated a total of 4 times for `init`, `lock`, `unlock`, and `destroy`.
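A possible workaround, assuming EXR image support is not needed (my suggestion, untested on this exact toolchain): disable the bundled OpenEXR at configure time so the failing `IlmThread` sources are never built.

```sh
cmake -DWITH_OPENEXR=OFF -DBUILD_OPENEXR=OFF \
      -DCMAKE_INSTALL_PREFIX="$(pwd)/dist" -DCMAKE_BUILD_TYPE=RelWithDebInfo \
      -DCMAKE_C_COMPILER=/usr/bin/x86_64-w64-mingw32-gcc.exe \
      -DCMAKE_CXX_COMPILER=/usr/bin/x86_64-w64-mingw32-g++.exe ../
```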
##### Steps to reproduce
Unzip the OpenCV zip file (downloaded from GitHub). Using a Cygwin terminal, cd into the unzipped folder:
```
mkdir build
cd build
cmake -DCMAKE_INSTALL_PREFIX="$(pwd)/dist" -DCMAKE_BUILD_TYPE=RelWithDebInfo -DCMAKE_C_COMPILER=/usr/bin/x86_64-w64-mingw32-gcc.exe -DCMAKE_CXX_COMPILER=/usr/bin/x86_64-w64-mingw32-g++.exe ../
cmake --build .
```
Cmake version
```
$ cmake --version
cmake version 3.14.5
``` | priority: low,category: build/install,platform: win32,category: 3rdparty | low | Critical |
469,348,499 | flutter | WebViewPlatform calls implementation's build more than once with the same set of arguments. Also, there's no dispose to clean up resources of a web view. | We're seeing on Fuchsia that using WebViewPlatform requires us to make CLs like these, both to allow us to dispose of the webview:
https://fuchsia-review.googlesource.com/c/topaz/+/301291
and to prevent us from calling the webview created callback multiple times:
https://fuchsia-review.googlesource.com/c/topaz/+/302155
While somewhat easy for an implementation to work around, I believe, based on the API description, that an implementor of WebViewPlatform should expect `build` to be called only once with the same parameters. It should be up to the Flutter webview's own state to hang on to created webviews instead of calling `build` with the same parameters again.
I believe example code that may trigger `build` to be called multiple times with the same parameters looks something like this:
```dart
final ValueNotifier<bool> toggle = ValueNotifier<bool>(false);
Timer.periodic(Duration(milliseconds: 100), (_) { toggle.value = !toggle.value; });

Widget build(BuildContext context) {
  return ValueListenableBuilder<bool>(
    valueListenable: toggle,
    builder: (_, toggleValue, __) =>
        toggleValue ? WebView(...) : Container(child: WebView(...)),
  );
}
```
We're putting an undue burden on implementations of WebViewPlatform by requiring them to know when the same webview should be returned versus a new one.
| customer: fuchsia,p: webview,package,team-ecosystem,P2,triaged-ecosystem | low | Minor |
469,373,126 | storybook | Addon-docs: Use MDX shortcodes | MDX 1.0 introduced "shortcodes" which allow you to [use certain elements without needing to import them explicitly](https://mdxjs.com/blog/shortcodes).
Should we use this for `<Meta>`, `<Story>`, `<Preview>`, `<Source>`, and `<Props>` so you can use them in `*.stories.mdx` without having to import them every time?
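With shortcodes, a `*.stories.mdx` file could presumably drop the blocks import entirely. A sketch of the proposed syntax (the `Button` component and story names here are made up, and this is not current syntax):

```mdx
import { Button } from './Button';

<Meta title="Button" component={Button} />

<Preview>
  <Story name="primary">
    <Button primary>Click me</Button>
  </Story>
</Preview>

<Props of={Button} />
```

The MDX provider would resolve `<Meta>`, `<Story>`, `<Preview>`, and `<Props>` as shortcodes, which is also the source of the portability/discoverability trade-off.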
- [ ] Update compiler
- [ ] Update codemods
- [ ] Update docs
Pros: much more convenient
Cons: makes the MDX less portable / discoverable | feature request,addon: docs,mdx | low | Major |
469,403,238 | TypeScript | Suggestion: Improve type of `constructor` on the instance type of a class | ## Search Terms
constructor type
## Suggestion
This was previously discussed in this thread: https://github.com/DefinitelyTyped/DefinitelyTyped/pull/36660#discussion_r304173536
> I could see a possible future mechanism for `this.constructor` that returned the static side of the containing class without call or construct signatures (but retaining the apparent type of `Function`).
Currently, the type of the `constructor` property on most objects is `Function`. It has been suggested that for any `class C {}`, the type of the `constructor` property on the instance should be `typeof C`. However this suffers a significant drawback in that it severely limits subclasses as any subclass of `C` must have the same (or compatible) construct signatures as `C`.
Here is an example of this issue: https://www.typescriptlang.org/play/index.html#code/MYGwhgzhAEDKCuAHApgJwMLitA3gKGmjAC5oIAXVASwDsBzAbgOmAHsaLV5hzXUAKEmUq06ASlzNC5ABZUIAOjDQAvESaEAvnm15QkGAgBGmA9GQAPcshoATQ0jSns+QkdKdRGlu07deAqyI5BCkOEQeIvQANNDuwtT00JoSroRkjoHBimBi3tJyikaq0EEhCkbe2rq01qgAZmDAyHCZzjBphGwclP58pOQAniis9a0oGFgQTDU0dY3NrSZTkuk+PVw8-dBDI2PG7TNAA
Instead, I would suggest a mechanism to type `constructor` as all of the static members of `typeof C` but none of the call/construct signatures of `typeof C`, yet still having an apparent type of `Function`.
## Use Cases
In @ljharb's `qs`, he'd like to be able to use the `constructor` property of a `Buffer`-like object to access the `isBuffer` method on the constructor in a type-safe way (i.e. `obj.constructor.isBuffer`).
## Examples
```ts
/// <reference types="node" />
function isBuffer(obj: { constructor: { isBuffer(obj: any): obj is Buffer; } }) {
return obj.constructor.isBuffer(obj);
}
const buf = Buffer.alloc(10);
isBuffer(buf); // Buffer class would have a constructor that is `Buffer`
```
## Workaround
There exists a possible workaround for this currently, though it is somewhat complicated:
```ts
type StaticMembers<TClass extends Function> = Pick<TClass, keyof TClass> & Function;
class Buffer extends Uint8Array {
...
static isBuffer(obj: any): obj is Buffer;
...
}
interface Buffer {
readonly constructor: StaticMembers<typeof Buffer>;
}
```
Playground link: https://www.typescriptlang.org/play/index.html#code/MYGwhgzhAEDKCuAHApgJwMLitA3gKGmjAC5oIAXVASwDsBzAbgOmAHsaLV5hzXUAKEmUq06ASlzNC5ABZUIAOjDQAvESaEAvnm15QkGAgBGmA9GQAPcshoATQ0jSns+QkdKdRGlu07deAqyI5BCkOEQeIvQANNDuwtT00JoSroRkjoHBimBi3tJyikaq0EEhCkbe2rrkAJ4ocORg5FTAALLIALZGaBAAPAAqzjCW1nYwAGLwNDxU7AB8JQAKrQDWg8Oxq8i1rABm0ENYEIsAZNBTMy3sTHi01qh7YMDIcJnDkuk+HJT+fKSwJotdpdHqofp1FD7N4oDDHeZMXT3NBPF5vEzHT7pNg-Lg8f6NZqtDrdXp9SHIaHGYYInRAA
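To illustrate the workaround with a non-ambient class (my own usage sketch, not from the thread):

```ts
type StaticMembers<TClass extends Function> = Pick<TClass, keyof TClass> & Function;

class Point {
  constructor(public x: number, public y: number) {}
  static origin(): Point { return new Point(0, 0); }
}
// Merge the typed `constructor` into the instance side.
interface Point {
  readonly constructor: StaticMembers<typeof Point>;
}

const p = new Point(1, 2);
// Static members are reachable through `constructor`, but the construct
// signature is erased, so subclasses stay free to change theirs.
const o = p.constructor.origin();
console.log(o.x, o.y); // 0 0
```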
## Checklist
My suggestion meets these guidelines:
* [ ] This wouldn't be a breaking change in existing TypeScript/JavaScript code
- Whether this is a breaking change needs to be tested.
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,In Discussion | medium | Major |
469,403,593 | rust | Document that str, slices, (more?) can't safely straddle allocation boundaries | Slicing, indexing, and other safe operations on slices and strings pervasively use `<*T>::offset` and APIs built on top of it. These have the requirement that
> Both the starting and resulting pointer must be either in bounds or one byte past the end of the same allocated object.
So if one allocates two pieces of memory and after proper checking miraculously finds they are directly adjacent, one can't safely construct a slice/str/etc. that spans both of these allocations. At least, one can't do very many things with the result it without causing UB from crossing the boundary between the allocations.
I couldn't find anything documenting this. It should be noted on the unsafe constructors (`from_raw_parts` etc.) at minimum. These already link to `offset`'s documentation but only refer to its "no larger than isize::MAX" requirement, with no mention that the other requirements are also relevant.
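A minimal sketch of my own (not from the source) of the distinction: within one allocation the reconstruction is sound, but across two allocations that merely happen to be adjacent, the same call would be UB, so that line is left commented out.

```rust
/// Sound: the whole range lies within one allocation.
fn within_one_allocation() -> Vec<u8> {
    let buf = [1u8, 2, 3, 4];
    unsafe { std::slice::from_raw_parts(buf.as_ptr(), buf.len()) }.to_vec()
}

fn main() {
    assert_eq!(within_one_allocation(), vec![1, 2, 3, 4]);

    let a = Box::new([1u8; 4]);
    let b = Box::new([2u8; 4]);
    // Computing one-past-the-end of `a` and comparing raw pointers is fine...
    let adjacent = unsafe { a.as_ptr().add(4) } == b.as_ptr();
    // ...but even if `adjacent` is true, a slice spanning both allocations
    // is UB, because `<*T>::offset` may not cross allocation boundaries:
    // let s = unsafe { std::slice::from_raw_parts(a.as_ptr(), 8) };
    println!("adjacent allocations: {}", adjacent);
}
```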
cc https://github.com/oberien/str-concat/issues/8
cc @rust-lang/wg-unsafe-code-guidelines
(Similar issues apply to references-to-arrays and field accesses in aggregates, but this is due to the compiler's codegen for language primitives rather than due to standard library code, so it should go into the UCG and I believe we're more or less covering that already.) | C-enhancement,T-lang,A-docs | low | Major |
469,442,272 | neovim | tests: flaky(?): :edit term://* runs TermOpen early enough to set buffer-local 'scrollback' | This was seen on CI:
```
[ ERROR ] 1 error, listed below:
[ ERROR ] ...neovim/neovim/test/functional/terminal/edit_spec.lua @ 34: :edit term://* runs TermOpen early enough to set buffer-local 'scrollback'
test/functional/ui/screen.lua:567: Row 1 did not match.
Expected:
|*96: foobar |
| |
|^[Process exited 0] |
| |
Actual:
|*26: foobar |
| |
|^[Process exited 0] |
| |
```
I've thought this might just be flaky, but it is odd that "Process exited 0" is already shown while the whole output is not yet visible.
Code/Test: https://github.com/neovim/neovim/blob/353b3852fd03168585d868c5c7580e2f0599cb19/test/functional/terminal/edit_spec.lua#L34-L67
_Originally posted by @blueyed in https://github.com/neovim/neovim/pull/10528#issuecomment-512563901_ | test | low | Critical |
469,445,605 | TypeScript | Using && with a string produces union type with empty string |
The below code is a super simplified version of a real world scenario I just ran into. When trying to assign a value only if some other condition is true and otherwise returning `undefined`, TypeScript produces a strange union type that includes empty string `''`.
The linked Playground contains a slightly larger example that gets closer to demonstrating my real world use case.
I would argue that in cases where the compiler can clearly know that something is truthy, it should evaluate to the type of the value on the right hand side of `&&`, even with `strictNullChecks` off.
**TypeScript Version:** 3.5.1 (With `strictNullChecks` off)
**Search Terms:** `logical and empty string`, `string && empty`
**Code**
```ts
const x = true && 'a'
```
**Expected behavior:**
Type of `x` is `'a'`
**Actual behavior:**
Type of `x` is `'' | 'a'`
**Playground Link:**
https://www.typescriptlang.org/play/?strictNullChecks=false#code/C4TwDgpgBAysBOBLAdgcwKrMQe2VAvFAOQCGRUAPsQEZECwAUIwGYCuyAxsDnqhMHFbNmACgCUALlgIUGLLkpR2AEwjMUEZVADejKPqgdcAZ2BQANthKqthBKwgBuPQfj9W8PAFkSwABYAdPAkyMrYALbiUAD8FlY2UABkicRkUFIqahrKzgwAvkA
**Related Issues:**
Couldn't find any 🕵🏼♂️ | Bug | low | Critical |
469,468,710 | rust | "Foo is ambiguous" only when deriving Debug | [playpen](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=903c3c6207c05d529bd58cc739350d65)
This may be related to #62768.
Given this code:
```rust
pub use Foo::*;
#[derive(Debug)]
pub enum Foo {
Foo(i32),
}
```
The compiler errors, saying that the `Foo` in `use Foo::*` may refer both to the enum (type-namespace) `Foo` and the yet-to-be-imported variant (value-namespace) `Foo::Foo`:
```rust
error[E0659]: `Foo` is ambiguous (glob import vs macro-expanded name in the same module during import/macro resolution)
--> src/lib.rs:1:9
|
1 | pub use Foo::*;
| ^^^ ambiguous name
|
note: `Foo` could refer to the enum defined here
--> src/lib.rs:3:1
|
3 | / pub enum Foo {
4 | | Foo(i32),
5 | | }
| |_^
note: `Foo` could also refer to the variant imported here
--> src/lib.rs:1:9
|
1 | pub use Foo::*;
| ^^^^^^
= help: consider adding an explicit import of `Foo` to disambiguate
```
However, if I remove the `#[derive(Debug)]`, then the code compiles just fine as expected ([playpen](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=a80dbcca330ff235b52204d29358261c)). | A-resolve,T-compiler,C-bug | low | Critical |
469,498,244 | flutter | Expose attributes of widgets to Flutter Driver | ## Use case
When retrieving a SerializableFinder, it would be helpful if we could find out if a button is enabled/disabled by accessing the button's attributes.
There are software requirements for example where a button can only be enabled for clicking if text fields are filled out.
## Proposal
Add a function on FlutterDriver or SerializableFinder that retrieves the attributes in the form of a map (or another data structure if you think one is better for this). We see these attributes in the Flutter Inspector, and it would help us in our tests if we could make use of them.
| c: new feature,tool,t: flutter driver,P3,team-tool,triaged-tool | low | Major |
469,518,611 | scrcpy | Lens correction | Is there a way to apply lens correction for streams coming from Oculus Quest / Go? With a recorded MP4 video file I can use ffmpeg like this to fix up lens distortion:
`ffmpeg -i input.mp4 -vf 'lenscorrection=k2=0.1:k1=-0.4' -r 24 output.mp4`
Is it possible for the output of scrcpy to be corrected in realtime? | feature request | low | Major |
469,567,032 | godot | [.WebGL-053FF6D0]RENDER WARNING: there is no texture bound to the unit 30 | ___
***Bugsquad note:** This issue has been confirmed several times already. No need to confirm it further.*
___
**Godot version:** 3.1.1
**OS/device including version:** Windows 10 - Chrome
Firefox doesn't suffer from this.
**Issue description:**
Every time I launch any project in Chrome, I get 254 occurrences of the error in the title (in some cases it's `unit 31` instead of `unit 30`) until it stops with
```
WebGL: too many errors, no more errors will be reported to the console for this context.
```
It's enough to slow down my 3D project with physics right from the start, although a 2D project would still get this error.
_Workaround_: Your game should have a menu scene, so that by the time the player gets through it, WebGL has already stopped reporting the 254 errors.
**Steps to reproduce:** Export to HTML5 and run it on Chrome.
**Minimal reproduction project:** An empty project would suffice.
| bug,platform:web,topic:rendering,confirmed,topic:3d | low | Critical |
469,594,110 | flutter | flutter drive --trace-skia | When we `flutter run` an app, we can add `--trace-skia` to get a timeline with Skia functions.
But `flutter drive` cannot take `--trace-skia`, so we cannot get a timeline with Skia functions. | c: new feature,tool,c: performance,t: flutter driver,P3,team-tool | low | Minor |
469,635,541 | kubernetes | Migrate all uses of leader-election to use Lease API | Currently, Kubernetes components (scheduler, kcm, ...) are using leader election that is based on either Endpoints or ConfigMap objects.
Given that both of these are watched by different components, this is generating a lot of unnecessary load.
We should migrate all leader-election to use Lease API (that was designed exactly for this case).
The tricky part is that in order to do that safely, I think the only reasonable way of doing that would be to:
- in the first phase switch components to:
1. acquire lock on the current object (endpoints or configmap)
2. acquire lock on the new lease object
3. only the proceed with its regular functionality
[ losing either of those two should result in panicking and restarting the component ]
- in the second phase (release after) remove point 1
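For reference, a Lease object as a component might hold it (`coordination.k8s.io/v1`; the holder identity and timestamps below are illustrative):

```yaml
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  holderIdentity: kube-scheduler-node-a
  leaseDurationSeconds: 15
  acquireTime: "2019-07-18T09:59:00.000000Z"
  renewTime: "2019-07-18T10:00:00.000000Z"
  leaseTransitions: 2
```

Because no core component watches Lease objects, renewals stop fanning out to every Endpoints/ConfigMap watcher.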
@kubernetes/sig-scalability-bugs | kind/cleanup,sig/scalability,help wanted,priority/important-longterm,lifecycle/frozen | medium | Critical |
469,715,349 | flutter | DropdownMenuItem RTL isn't working | In this sample code I can't set right-to-left text on dropdown menu items:
```dart
List<DropdownMenuItem<SessionsEntity>> buildDropdownMenuItems(List sessions) {
List<DropdownMenuItem<SessionsEntity>> items = List();
for (SessionsEntity session in sessions) {
items.add(
DropdownMenuItem(
value: session,
child: Directionality(
textDirection: TextDirection.rtl,
child: Text(session.sessionName,
textAlign: TextAlign.right,
textDirection: TextDirection.rtl,
style: Theme.of(context).textTheme.caption.copyWith(
color: Colors.black,
fontFamily: 'ShabnamLight'
))),
),
);
}
return items;
}
```
All of the items are left-to-right. | framework,f: material design,a: internationalization,has reproducible steps,P2,found in release: 3.3,found in release: 3.7,team-design,triaged-design | low | Major |
469,792,876 | TypeScript | should not throw error at `.d.ts` when `func` + `namespace` has member `default` |
**TypeScript Version:** 3.4.0-dev.201xxxxx
**Search Terms:**
**Code**
.d.ts
```ts
declare function execAll<T extends RegExp = RegExp>(inputRegExp: T | RegExp, input: string, options?: IExecAllOptions<T>): IMatches<T>;
declare namespace execAll {
var default: typeof execAll;
}
import IExecAllOptions = execAll.IExecAllOptions;
import IMatches = execAll.IMatches;
declare namespace execAll {
function execall<T extends RegExp = RegExp>(inputRegExp: T | RegExp, input: string, options?: IExecAllOptions<T>): IMatches<T>;
interface IExecAllOptions<T extends RegExp = RegExp> {
resetLastIndex?: boolean;
/**
* allow change cloneRegexp function
*/
cloneRegexp?: ICloneRegexp<T>;
/**
* only use this when u know what u doing
*/
leftContext?: boolean;
rightContext?: boolean;
removeHiddenData?: boolean;
}
interface ICloneRegexp<T extends RegExp = RegExp> {
(inputRegExp: T | RegExp, ...argv: any[]): T;
}
type IExecAllRegExpExecArray<T extends RegExp = RegExp> = RegExpExecArray & string[] & {
/**
* The 0-based index of the match in the string.
*/
index: number;
/**
* es2018
*/
groups?: {
[k: string]: string;
};
};
type IMatches<T extends RegExp = RegExp> = (IExecAllRegExpExecArray<T> & {
match: string;
sub: string[];
leftContext?: string;
rightContext?: string;
})[] & {
/**
* regular expressions
*
* @readonly
*/
readonly re: T;
/**
* regular expressions that contains the string against which a regular expression is matched.
*
* @readonly
*/
readonly input: string;
/**
* last matched index
*
* @readonly
*/
readonly lastIndex: number;
};
const SYMBOL: unique symbol;
}
export = execAll;
```
**Expected behavior:**
> no error
**Actual behavior:**

```
Error:(3, 9) TS1134: Variable declaration expected.
Error:(3, 16) TS1134: Variable declaration expected.
Error:(3, 18) TS1134: Variable declaration expected.
```
| Bug | low | Critical |
469,839,565 | flutter | Different AppBar with the same bottomNavigationBar without nesting Scaffolds | If you want a bottom navigation bar (with a CircularNotchedRectangle and a nice floatingActionButton) on all your app sections (pages) but a different appBar per section, the only solution is to use nested Scaffolds.
Based on some observations and comments (https://github.com/flutter/flutter/issues/23106#issue-370255150), nesting Scaffolds is not a good strategy.
The comment (https://github.com/flutter/flutter/issues/23106#issuecomment-452689090) didn't get the attention it deserved.
Here is my situation: I have a Scaffold without an appBar, with a bottom navigation bar, and a body that is a TabBarView driven by a TabController.
Main Page(HomePage)
```
Scaffold(
floatingActionButtonLocation: FloatingActionButtonLocation.centerDocked,
floatingActionButton: FloatingActionButton(...),
bottomNavigationBar: BottomAppBar(...),
body: TabBarView(
physics: NeverScrollableScrollPhysics(),
controller: _tabController,
children: _tabList,
),
```
Every section uses its own Scaffold (with a fancy appBar if needed):
```
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: Text("Settings"),
),
body:....
```
I think on iOS the bottom navigation bar is a controller of its own, and on Android the BottomAppBar would live in the MainActivity (?). Maybe Scaffold is doing too many things.
A widget that wraps bottomNavigationBar would be useful; from there you could have independent Scaffolds.
| c: new feature,framework,f: material design,c: proposal,P3,team-design,triaged-design | medium | Critical |
469,843,401 | vscode | [css][html] Publish language server modules on npm | We'd like to more easily reuse the language servers from VS Code (HTML, CSS, JSON, ...). On npmjs.com we can see the related "languageservice" packages, which provide some of the logic, but those don't speak LSP.
Could the language server modules be published on npmjs.com for easier reuse? Or merged with the languageservice modules directly? | feature-request,css-less-scss | low | Major |
469,850,846 | kubernetes | Admission webhooks affected by dead tcp connections |
**What happened**:
I was doing some tests with mutating webhooks (especially for Istio), and I found a strange behavior.
Stopping a worker instance that has a webhook pod running on it seems to break the admission controller for this webhook for approximately 15 minutes.
After some debugging, the admission controller seems to cache the pod IP associated with this webhook service.
The output of kubectl get events is full of:
```
0s Warning FailedCreate replicaset/sleep-1-d64b54564 Error creating: Internal error occurred: failed calling admission webhook "sidecar-injector.morven.me": Post https://sidecar-injector-webhook-svc.webhook.svc:443/mutate?timeout=30s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
```
Solution to mitigate these errors:
* Remove the label sidecar-injector=enabled from the namespace, wait for the pods to be scheduled, add the label again, and delete each pod to reschedule the pods (and the mutating webhook associated with them)
The logs of the kubernetes master are full of things like:
```
I0717 17:00:35.211387 1 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"sleep", UID:"d5faa2cf-a8ad-11e9-83cc-0a4693bb62ee", APIVersion:"apps/v1", ResourceVersion:"5094155", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: Internal error occurred: failed calling admission webhook "sidecar-injector.morven.me": Post https://sidecar-injector-webhook-svc.webhook.svc:443/inject?timeout=30s: read tcp 10.50.1.212:58344->10.50.0.77:443: read: connection timed out
```
The 10.50.0.77 ip matches the ip of the pod which was killed because of the shutdown of the node.
**What you expected to happen**:
The mutating webhook admission controller should invalidate the cache associated with the webhook once the first timeout occurs.
**How to reproduce it (as minimally and precisely as possible)**:
On a running kubernetes cluster:
1. Install a mutating webhook: https://github.com/morvencao/kube-mutating-webhook-tutorial#deploy + an application using this webhook; TLDR:
```
kubectl create namespace webhook
git clone https://github.com/morvencao/kube-mutating-webhook-tutorial;
cd kube-mutating-webhook-tutorial;
./deployment/webhook-create-signed-cert.sh \
--service sidecar-injector-webhook-svc \
--secret sidecar-injector-webhook-certs \
--namespace webhook
cat deployment/mutatingwebhook.yaml | \
deployment/webhook-patch-ca-bundle.sh > \
deployment/mutatingwebhook-ca-bundle.yaml
# Add failurePolicy: Fail to the mutating webhook configuration to trigger the timeout error, and update the namespace:
head -n 14 deployment/mutatingwebhook-ca-bundle.yaml
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
name: sidecar-injector-webhook-cfg
labels:
app: sidecar-injector
webhooks:
- name: sidecar-injector.morven.me
failurePolicy: Fail
clientConfig:
service:
name: sidecar-injector-webhook-svc
namespace: webhook
path: "/mutate"
...
kns webhook;
kubectl create -f deployment/configmap.yaml
kubectl create -f deployment/deployment.yaml
kubectl create -f deployment/service.yaml
kubectl create -f deployment/mutatingwebhook-ca-bundle.yaml
kns default;
kubectl label namespace default sidecar-injector=enabled
kubectl create -f deployment/nginxconfigmap.yaml
cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: sleep
spec:
replicas: 1
template:
metadata:
annotations:
sidecar-injector-webhook.morven.me/inject: "yes"
labels:
app: sleep
spec:
containers:
- name: sleep
image: tutum/curl
command: ["/bin/sleep","infinity"]
imagePullPolicy:
EOF
```
At the end, you should have something like :
```
kubectl get pods -n default; kubectl get pods -n webhook
NAME READY STATUS RESTARTS AGE
sleep-d64b54564-zfhcx 2/2 Running 0 10s
NAME READY STATUS RESTARTS AGE
sidecar-injector-webhook-deployment-c89cb69b6-n8kff 1/1 Running 0 76s
```
2. Shut down the worker where the mutating webhook is running (kubectl get pods -o wide to see the node).
3. Wait for the node to go down (watch the rescheduling of the mutating webhook: kubectl get pods -n webhook).
4. Scale up the deployment (in this case, the "sleep" deployment) or add some pods within a namespace with the label sidecar-injector=enabled:
```
k scale deployment -n default sleep --replicas=5
```
5. Nothing happens, except "timeout" errors:
```
kubectl get events -n default
1s Warning FailedCreate ReplicaSet Error creating: Internal error occurred: failed calling admission webhook "sidecar-injector.morven.me": Post https://sidecar-injector-webhook-svc.webhook.svc:443/mutate?timeout=30s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
...
```
You have to wait 15 minutes to see the pods get scheduled.
**Anything else we need to know?**:
Related issues on istio side :
* Automatic injection fails in AWS (closed) : https://github.com/istio/istio/issues/14267
* Istio sidecar injection fails on scale down in EKS (open) : https://github.com/istio/istio/issues/13762
* Rolling upgrade of the worker nodes results in brief outage of application pods (open) : https://github.com/istio/istio/issues/13840
Doing a kubectl drain on the specific node instead of a shutdown seems to work.
**Environment**:
- Kubernetes version (use `kubectl version`):
```
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-26T00:04:52Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"12+", GitVersion:"v1.12.6-eks-d69f1b", GitCommit:"d69f1bf3669bf00b7f4a758e978e0e7a1e3a68f7", GitTreeState:"clean", BuildDate:"2019-02-28T20:26:10Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}
```
- Cloud provider or hardware configuration:
Aws, Kubernetes Version 1.12 Platform Version eks.2
- OS (e.g: `cat /etc/os-release`):
- Kernel (e.g. `uname -a`):
- Install tools:
- Network plugin and version (if this is a network-related bug):
- Others:
| kind/bug,priority/important-soon,area/client-libraries,sig/api-machinery,area/admission-control,lifecycle/frozen | medium | Critical |
469,893,330 | pytorch | BatchNorm1d fails on first run through GPU | ## 🐛 Bug
BatchNorm1D results in
```
RuntimeError: cuDNN error: CUDNN_STATUS_EXECUTION_FAILED
```
on the first run. Afterwards it works as intended. Note that this does not happen on CPU.
## To Reproduce
```
import torch
a = torch.rand((2, 32, 200), requires_grad=True).cuda()
batchnorm = torch.nn.BatchNorm1d(32).cuda()
batchnorm(a)
```
At which point the error will occur. Running
```
batchnorm(a)
```
again outputs a tensor, resulting in desired behavior.
## Expected behavior
BatchNorm1d working the first time around.
## Environment
```
PyTorch version: 1.1.0
Is debug build: No
CUDA used to build PyTorch: 9.0.176
OS: Ubuntu 18.04.2 LTS
GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
CMake version: version 3.11.4
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: GeForce RTX 2070
Nvidia driver version: 410.93
cuDNN version: /usr/local/cuda-9.0/lib64/libcudnn.so.7.0.5
Versions of relevant libraries:
[pip3] msgpack-numpy==0.4.3.1
[pip3] numpy==1.16.4
[conda] torch 1.1.0 pypi_0 pypi
[conda] torchvision 0.3.0 pypi_0 pypi
``` | module: nn,module: cuda,triaged | low | Critical |
469,918,494 | go | os/signal: TestNohup flaky | Seen on the `solaris-amd64-oraclerel` builder in https://build.golang.org/log/d5185d101eddc0f5a05103f11b74caf24421bf46:
```
--- FAIL: TestNohup (1.25s)
signal_test.go:329: ran test with -send_uncaught_sighup=1 and it succeeded: expected failure.
Output:
PASS
FAIL
FAIL os/signal 3.575s
```
See previously #8682, #5526.
CC @ianlancetaylor | Testing,NeedsInvestigation,compiler/runtime | medium | Critical |
469,947,814 | pytorch | Proposal: Optional AutogradMeta for Variable | # Motivation
Making AutogradMeta optional for a Variable that does not need gradient computation (i.e. it doesn’t require grad, and it doesn’t have a grad_fn) provides the following benefits:
1. Memory savings for Variables that don't need gradient computation.
2. Removal of the `Variable` class and the `make_variable` API, and make Variable and Tensor the same concept.
# Plan
- [x] **Part 1: Make all Variable APIs work for non-AutogradMeta Variables**
* There are a list of Variable APIs that always assume the Variable contains AutogradMeta:
```
Variable::grad_fn()
Variable::grad_fn_unsafe()
Variable::set_grad_accumulator()
Variable::try_get_grad_accumulator()
Variable::grad_accumulator()
Variable::set_gradient_edge()
Variable::output_nr()
Variable::is_leaf()
Variable::add_hook()
Variable::hooks()
Variable::clear_hooks()
Variable::is_view()
Variable::base()
Variable::set_name()
Variable::name()
Variable::backward()
Variable::set_data()
Variable::rebase_history()
TensorImpl::set_requires_grad()
TensorImpl::requires_grad()
TensorImpl::grad()
```
These functions internally call get_autograd_meta(), and if the Variable doesn't have AutogradMeta, they produce a nasty segfault.
The right behavior is to check whether AutogradMeta exists, and if AutogradMeta doesn’t exist for that Variable:
- For setter methods:
- Create AutogradMeta on the fly for that Variable, and then proceed as usual.
- For getter methods:
- Return a sensible null value.
- Caveat: some functions are expected to return a mutable/const reference, and returning a mutable/const reference to NULL might not be a good idea / might not work. One idea is to have an API that asks "whether we can do something" (e.g. has_hooks()) before we "do something" (e.g. hooks()), but we need to check whether this design can work for all cases.
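As an illustration, the setter-materializes / getter-returns-null behavior described in Part 1 can be sketched in plain Python. This is a minimal sketch with hypothetical names, not the actual C++ implementation:

```python
class AutogradMeta:
    """Stand-in for torch::autograd::AutogradMeta."""
    def __init__(self):
        self.requires_grad = False
        self.hooks = []

class Tensor:
    """Sketch of a tensor whose autograd metadata is allocated lazily."""
    def __init__(self):
        self._autograd_meta = None  # absent until something needs it

    def _materialize_meta(self):
        # Setter path: create AutogradMeta on the fly, then proceed as usual.
        if self._autograd_meta is None:
            self._autograd_meta = AutogradMeta()
        return self._autograd_meta

    # --- setters materialize the metadata ---
    def set_requires_grad(self, value):
        self._materialize_meta().requires_grad = value

    def add_hook(self, hook):
        self._materialize_meta().hooks.append(hook)

    # --- getters return a sensible null when metadata is absent ---
    def requires_grad(self):
        return self._autograd_meta is not None and self._autograd_meta.requires_grad

    def has_hooks(self):
        return self._autograd_meta is not None and bool(self._autograd_meta.hooks)
```

The "can we do something" query (`has_hooks()`) is how a caller avoids asking for a reference to metadata that was never allocated, which is the caveat discussed above; tensors that never touch a setter pay no metadata cost.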
- [x] **Part 2: Don’t create AutogradMeta in make_variable(...) when not required**
- Don’t create AutogradMeta in make_variable(...) when requires_grad=false and gradient_edge is undefined
- Make TensorImpl.is_variable() only check the at::NonVariableTypeMode guard, because now a Variable that doesn’t have AutogradMeta is still a Variable
- Maintain the invariant: a Tensor should only have AutogradMeta if it requires grad or has grad_fn
- [x] **Part 3: Deprecate TensorOptions.is_variable()**
- Deprecate TensorOptions.is_variable() (have it always return true, and throw warning when the user tries to set this field)
- For getType(TensorOptions), we only check at::NonVariableTypeMode::is_enabled() to decide whether to choose Variable path.
- [ ] **Part 4: Replace Variable wrapping functions**
- Replace make_variable(...) with appropriate API that attaches AutogradMeta when needed. Audit all call sites of make_variable(...) to understand their expected behavior regarding whether we need do shallow-copy, or if we can just attach AutogradMeta to the original Variable.
- Replace as_variable(...) with appropriate API that attaches AutogradMeta when needed.
- [ ] **Part 5: Documentation improvement**
- Improve "NOTE: After the Variable/Tensor merge" comment based on #18223 (comment) (https://github.com/pytorch/pytorch/pull/18223#discussion_r274071728)
- Improve "Note [Tensor versus Variable in C++]" comment based on https://github.com/pytorch/pytorch/pull/17072/files#r276326234
- [x] **Part 6: Remove Variable class**
- Move autograd-specific functions from Variable class to free functions in `torch::autograd::`
```cpp
Function* grad_fn_unsafe() const;
void set_grad_accumulator(std::weak_ptr<Function> grad_accumulator);
std::shared_ptr<Function> try_get_grad_accumulator() const;
std::shared_ptr<Function> grad_accumulator() const;
Edge gradient_edge() const;
void set_gradient_edge(Edge edge) noexcept;
void bump_version() noexcept;
void set_version_counter(const c10::VariableVersion& version_counter) noexcept;
const c10::VariableVersion& version_counter() const noexcept;
uint32_t current_version() const noexcept; // Replaced by _version() in Tensor
void rebase_history(Edge gradient_edge);
void add_hook(std::shared_ptr<FunctionPreHook> hook);
const std::vector<std::shared_ptr<FunctionPreHook>>& hooks() const noexcept;
void clear_hooks();
bool is_view() const noexcept;
const Variable& base() const;
void set_name(const std::string& name);
const std::string& name() const noexcept;
PyObject* pyobj() const noexcept;
void set_pyobj(PyObject* pyobj) noexcept;
```
- Remove Variable and use at::Tensor everywhere.
- [ ] **Part 7: Add compute_requires_grad() to ATen core**
- There are various places in the codebase where we need to check tensor.requires_grad() and GradMode::is_enabled() at the same time. Ideally we should use compute_requires_grad() to simplify the check.
- Clean up mentions of "Variable and Tensor are merged“.
| module: autograd,triaged | low | Minor |
469,965,789 | bitcoin | Ensure we have sufficient transaction-relay peers | Currently, we have no protections in place if our outbound peers are running in `-blocksonly` mode. In practice, we rely on our outbound peers to be our best source of announced transactions, but if all our outbound peers were `-blocksonly`, we would make no effort to disconnect any to find an alternative peer to receive transactions from.
We discover at connection time (in the VERSION message) whether our peer will relay transactions on our link. Perhaps we should tolerate some number of `-blocksonly` peers as outbounds, and once we hit that threshold we should disconnect new blocksonly outbounds?
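A minimal sketch of the proposed policy, assuming a made-up threshold of two tolerated `-blocksonly` outbounds (the actual number would need discussion):

```python
def should_disconnect(new_peer_relays_txs, blocksonly_outbound_count,
                      max_blocksonly_outbound=2):
    """Decide whether to drop a freshly connected outbound peer.

    `max_blocksonly_outbound` is an invented threshold for illustration:
    tolerate some number of non-tx-relaying outbounds (learned from the
    VERSION message), and disconnect new ones once that budget is used up.
    """
    if new_peer_relays_txs:
        return False  # peer will relay transactions on this link; keep it
    return blocksonly_outbound_count >= max_blocksonly_outbound
```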
Another open question is whether it's worth having logic that tries to determine whether a peer is silently withholding transactions from us... We have logic for blocks, where we try to find a new outbound peer if we haven't received a new block in a while, but I don't know if there's a reasonable way to do this for transactions. My guess is that this is not a very pressing concern, but perhaps we could do something that just tried making new outbound connections if we aren't receiving any transactions at all? | Brainstorming,P2P | low | Major |
469,968,810 | flutter | [tool]`--wrap-column` flag has no effect | Related: https://github.com/flutter/flutter/issues/23074.
In the following, there is no wrapping of the option description.
```
> flutter help --wrap-column=80 --verbose | grep column
--wrap-column Sets the output wrap column. If not set, uses the width of the terminal. No wrapping occurs if not writing to a terminal. Use --no-wrap to turn off wrapping when connected to a terminal.
```
I was expecting option descriptions to be wrapped nicely to the specified column. So something like the following.
```
--wrap-column Sets the output wrap column. If not set, uses
the width of the terminal. No wrapping occurs
if not writing to a terminal. Use --no-wrap to
turn off wrapping when connected to a terminal.
```
Nor does `flutter help --wrap --wrap-column=80 --verbose | less` work.
My environment is as follows.
```
> flutter --version
Flutter 1.7.8+hotfix.3 • channel stable • https://github.com/flutter/flutter.git
Framework • revision b712a172f9 (9 days ago) • 2019-07-09 13:14:38 -0700
Engine • revision 54ad777fd2
Tools • Dart 2.4.0
```
I am using bash in Terminal.app. | tool,a: quality,has reproducible steps,P2,found in release: 3.0,found in release: 3.1,team-tool,triaged-tool | low | Minor |
470,040,109 | pytorch | Add automatic tuning flags to utils.data.dataloader | ## 🚀 Feature
Adding automatic tuning flags for `batch_size` and `num_workers` to `torch.utils.data.dataloader`
## Motivation
We'd like to help users to get good performance (throughput) in their data loading jobs. This will allow users to focus on defining the logic for the model inference.
Today, users have to tune batch_size and num_workers manually when they hit performance issues. Tuning requires some expert knowledge of PyTorch and systems design. Even with this knowledge, users may spend hours or days benchmarking, profiling, identifying bottlenecks, and tweaking parameters to improve performance.
This feature is especially relevant to IO bound pipelines, commonly found in inference workloads, where images are not cached in memory and must be loaded from disk. Optimal `batch_size` and `num_workers` configs can improve an imgs/sec metric by as much as 10x.
## Pitch
Just as Tensorflow has introduced the `tf.data.experimental.AUTOTUNE` feature for `batch_size`, `prefetch_size`, and `num_parallel_calls` in datasets, PyTorch should include a similar feature for torch.utils.data.dataloader.
## Alternatives
If autotuning flags are not introduced, users can guess at the optimal configs, or hyperparameter tune these configs before running full training / inference jobs on big data.
In practice, setting `batch_size` to be the largest possible without running into OOM errors works well, as does setting `num_workers` equal to the number of cores on the machine. However, these configs are not always optimal, and may not take advantage of features like hyperthreading.
Hyperparameter tuning can also work to discover optimal parameters, but lacks information about buffer sizes and has to deal with more noise and variance in imgs/sec result metrics. Additionally, extensive engineering is required to enable online tuning; otherwise, valid inference results may be discarded.
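The manual tuning described above can be automated crudely. Below is a hedged sketch of the kind of offline search an autotune flag might perform — all names are hypothetical, and real benchmarking would also need warm-up iterations and GPU synchronization:

```python
import time

def pick_num_workers(make_loader, candidates, sample_batches=10):
    """Crude search sketch: benchmark each worker count and keep the best.

    `make_loader(n)` should return an iterable of batches (e.g. a
    DataLoader built with num_workers=n). This measures a short
    batches-per-second sample per candidate, which is what a user
    tuning by hand does today.
    """
    best_n, best_rate = None, 0.0
    for n in candidates:
        loader, seen = iter(make_loader(n)), 0
        start = time.perf_counter()
        for _ in loader:
            seen += 1
            if seen >= sample_batches:
                break
        rate = seen / max(time.perf_counter() - start, 1e-9)
        if rate > best_rate:
            best_n, best_rate = n, rate
    return best_n
```

A built-in AUTOTUNE flag could run this kind of search online instead, reusing (rather than discarding) the batches consumed while measuring.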
## Additional context
My Questions
* Are there any features like this in the PyTorch Pipeline?
* Would this be useful to other folks in the contributor or user community of PyTorch?
* Has anyone thought about autotuning before and/or have ideas/stubs of implementation?
| feature,module: dataloader,low priority,triaged | low | Critical |
470,059,181 | react | onBeforeInput fires after browser updates the DOM for special characters like "中" or 😣 on Firefox and Edge |
**Do you want to request a *feature* or report a *bug*?**
Bug
**What is the current behavior?**
onBeforeInput fires after the browser updates the DOM for special characters like "中" or 😣 on Firefox and Edge
https://codesandbox.io/s/modest-franklin-muirj
NOTE: pasting the character does not trigger the bug; you have to type it in. You can use Control-Command-Space to open the emoji keyboard on macOS.
**What is the expected behavior?**
DOM should not update before onBeforeInput fires
**Which versions of React, and which browser / OS are affected by this issue? Did this work in previous versions of React?**
React: 16.3.1
Firefox: 68.0.1
Mac: 10.14.5
| Component: DOM,Type: Needs Investigation | medium | Critical |
470,113,429 | pytorch | ConcatDataset returns different error messages for out-of-range positive and negative indices | ## 🐛 Bug
ConcatDataset returns different error messages for out-of-range positive and negative indices.
## To Reproduce
```
>>> import torch
>>> from torch.utils import data
>>> x=data.ConcatDataset((range(10),range(10)))
>>> x[-100]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/mueda/code/python/rc/doc/.venv/lib/python3.7/site-packages/torch/utils/data/dataset.py", line 78, in __getitem__
raise ValueError("absolute value of index should not exceed dataset length")
ValueError: absolute value of index should not exceed dataset length
>>> x[100]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/mueda/code/python/rc/doc/.venv/lib/python3.7/site-packages/torch/utils/data/dataset.py", line 85, in __getitem__
return self.datasets[dataset_idx][sample_idx]
IndexError: list index out of range
```
## Expected behavior
I think it would be better for x[100] and x[-100] to produce the same error message.
My way to solve this is to change [these lines](https://github.com/pytorch/pytorch/blob/master/torch/utils/data/dataset.py#L197-L200) like this:
```
if idx < 0:
    idx = len(self) + idx
if not (0 <= idx < len(self)):
    raise ValueError("absolute value of index should not exceed dataset length")
```
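Put together as a standalone sketch (not the actual torch source), the proposed `__getitem__` behaves like this, so `x[100]` and `x[-100]` raise the same `ValueError`:

```python
import bisect

class ConcatDatasetSketch:
    """Sketch of the proposed fix: normalize negative indices first,
    then raise one uniform error for anything out of range."""
    def __init__(self, datasets):
        self.datasets = list(datasets)
        self.cumulative_sizes, total = [], 0
        for d in self.datasets:
            total += len(d)
            self.cumulative_sizes.append(total)

    def __len__(self):
        return self.cumulative_sizes[-1]

    def __getitem__(self, idx):
        if idx < 0:
            idx = len(self) + idx
        if not (0 <= idx < len(self)):
            raise ValueError("absolute value of index should not exceed dataset length")
        dataset_idx = bisect.bisect_right(self.cumulative_sizes, idx)
        sample_idx = idx if dataset_idx == 0 else idx - self.cumulative_sizes[dataset_idx - 1]
        return self.datasets[dataset_idx][sample_idx]
```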
## Environment
- PyTorch Version (e.g., 1.0): 1.1.0
- OS (e.g., Linux): Ubuntu 18.04
- How you installed PyTorch (`conda`, `pip`, source): pip
- Python version: Python 3.7.3
- CUDA/cuDNN version: None
- GPU models and configuration: None
| module: docs,low priority,triaged | low | Critical |
470,124,196 | kubernetes | Allow exposing status.containerStatuses[*].imageID through Downward API | **What would you like to be added**:
It would be great if `status.containerStatuses[0].imageID` and `status.containerStatuses[0].image` could be exposed through the Downward API and passed as environment variables. That would allow the container to find out exactly which digest of the image it is running, which can then be useful for any logging from that container.
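For context, the Downward API today exposes only a fixed set of fields through `fieldRef` (e.g. `metadata.name`); the second entry below is a hypothetical illustration of what this request asks for, not something that currently works:

```yaml
env:
  - name: POD_NAME                 # supported today
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: CONTAINER_IMAGE_ID       # hypothetical: the field this issue asks to expose
    valueFrom:
      fieldRef:
        fieldPath: status.containerStatuses[0].imageID
```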
**Why is this needed**:
It is useful for the container to know exactly which version of the image it is running. Allowing the image and digest/ID to be passed to the container makes this achievable.
470,159,491 | pytorch | The speed of `torch.einsum` and `torch.matmul` when using `fp16` is slow | ## 🐛 Bug
I found that the speed of `torch.einsum` when using fp16 is much slower than using fp32.
when the shapes of inputs are (a,b,c) and (a,c,d), `matmul` became much slower as well.
## To Reproduce
```python
import os
os.environ['CUDA_VISIBLE_DEVICES']='0'
import torch
from time import time
a = torch.empty(24,32,40,48, dtype=torch.float32).to('cuda')
b = torch.empty(64,32,40,48, dtype=torch.float32).to('cuda')
c = torch.empty(40,80,24, dtype=torch.float32).to('cuda')
d = torch.empty(40,24,16, dtype=torch.float32).to('cuda')
st = time()
for _ in range(1000):
c.matmul(d)
print(time()-st)
st = time()
for _ in range(1000):
torch.einsum('ibnd,jbnd->ijbn', a, b)
print(time()-st)
a = torch.empty(24,32,40,48, dtype=torch.float16).to('cuda')
b = torch.empty(64,32,40,48, dtype=torch.float16).to('cuda')
c = torch.empty(40,80,24, dtype=torch.float16).to('cuda')
d = torch.empty(40,24,16, dtype=torch.float16).to('cuda')
st = time()
for _ in range(1000):
torch.matmul(c,d)
print(time()-st)
st = time()
for _ in range(1000):
torch.einsum('ibnd,jbnd->ijbn', a, b)
print(time()-st)
```
Steps to reproduce the behavior:
just run it
## Expected behavior
The fp16 path should be at least as fast as fp32; instead it is much, much slower:
```shell
~# python debug_fp32.py
0.027073144912719727
0.04812788963317871
~# python debug_fp16.py
0.3655080795288086
10.85773253440857
```
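One caveat when reading these numbers: CUDA kernels launch asynchronously, so a host-side timer that does not synchronize with the device can mis-attribute time across the loops. A hedged helper for re-running the measurement (pure Python; pass `torch.cuda.synchronize` as the `synchronize` argument when timing on GPU):

```python
import time

def timed(fn, iters=1000, synchronize=None):
    """Run fn() `iters` times and return elapsed wall-clock seconds.

    `synchronize` should be a device barrier such as torch.cuda.synchronize;
    calling it before starting the clock drains pending work, and calling it
    after the loop waits for the last kernel to finish. The loops in this
    issue omit such a barrier, which is an assumption worth re-checking.
    """
    if synchronize is not None:
        synchronize()  # drain pending work before starting the clock
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    if synchronize is not None:
        synchronize()  # wait for the last kernel to finish
    return time.perf_counter() - start
```

With a barrier in place the fp16/fp32 comparison is attributable to the kernels themselves rather than launch overhead.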
## Environment
V100, CUDA 10.1, PyTorch 1.1
cc @ngimel @vincentqb @vishwakftw @jianyuh @nikitaved @pearu @VitalyFedyunin | module: performance,module: cuda,triaged,module: linear algebra | low | Critical |
470,162,203 | create-react-app | Issues with lerna and Yarn workspaces |
### Describe the bug
We have a lerna monorepo containing a CRA project, a TypeScript Node.js server, and JavaScript packages for shared code. react-scripts depends on jest 24.7.1, and I installed jest as a devDependency of one of the shared packages. It installed jest 28.x.x and the CRA preflight check fails.
### Did you try recovering your dependencies?
I tried deleting the `node_modules` folder and `yarn.lock` file and running `lerna bootstrap`.
### Environment
```
System:
OS: macOS 10.14.5
CPU: (8) x64 Intel(R) Core(TM) i7-4870HQ CPU @ 2.50GHz
Binaries:
Node: 10.15.3 - /usr/local/bin/node
Yarn: 1.16.0 - /usr/local/bin/yarn
npm: 6.4.1 - /usr/local/bin/npm
Browsers:
Chrome: 75.0.3770.142
Firefox: 67.0.4
Safari: 12.1.1
npmPackages:
react: Not Found
react-dom: Not Found
react-scripts: Not Found
npmGlobalPackages:
create-react-app: Not Found
```
### Steps to reproduce
1. Set up a lerna monorepo.
2. Create a CRA project as a package.
3. Create a plain JavaScript package with jest as a devDependency.
4. Try to start the CRA app.
### Expected behavior
CRA starts without failure.
### Actual behavior
Preflight check fails with a jest version mismatch.
### Reproducible demo
https://github.com/iamchathu/lerna-monorepo-test
| issue: bug | low | Critical |
470,227,186 | vue-element-admin | Has anyone used el-image in this project? Importing el-image throws a "component not registered correctly" error | ## Question
Has anyone in this project used el-image? When I import el-image, I get an error saying the component is not registered correctly.
#### Steps to reproduce
#### Screenshot or Gif


#### Link to minimal reproduction
#### Other relevant information
- Your OS:
- Node.js version:
- vue-element-admin version:
| feature | low | Major |
470,228,221 | flutter | Refactor logs to operate on functionality instead of log level | Refactor logs to operate on functionality instead of log level, e.g.,
`Log.debugPrintSystemChannels();`
`Log.debugPrintLifecycle();`
etc. | team,framework,P3,team-framework,triaged-framework | low | Critical |
470,247,710 | pytorch | creation of a tensor from a numba.cuda array | ## 🚀 Feature
<!-- A clear and concise description of the feature proposal -->
Having `torch.from_numba(cuda_arr)` can be useful in many cases.
## Motivation
With the easy-to-use APIs in numba, I would like to do the preprocessing on the GPU and then pass the resulting GPU arrays to a PyTorch model. But currently, we have to convert these GPU arrays to numpy and then use the `torch.from_numpy()` API.
<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->
## Pitch
<!-- A clear and concise description of what you want to happen. -->
## Alternatives
<!-- A clear and concise description of any alternative solutions or features you've considered, if any. -->
## Additional context
```Python
import numba
import torch
cuda_arr = numba.cuda.device_array(shape=(5,4))
# here, we need to perform this unnecessary conversion from gpu->cpu
numpy_arr = cuda_arr.copy_to_host()
gpu_tensor = torch.from_numpy(numpy_arr)
```
<!-- Add any other context or screenshots about the feature request here. -->
| feature,low priority,triaged,module: numba | low | Major |
470,249,611 | pytorch | [c++] torch::conv2d() expected output_padding to be a single integer value or a list of 3 values | ## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
## To Reproduce
Steps to reproduce the behavior:
1. When I use C++ to deploy my model, I need to write an NMS function. I found torch::conv2d() needs a param I can't provide.
2. My source code is:
```
torch::Tensor soft_nms_3d(torch::Tensor scale_logits, int ksize, float com_strength) {
int num_scales = scale_logits.sizes()[3];
torch::Tensor max_each_scale = torch::max_pool2d(scale_logits.permute({0, 3, 1, 2}), {ksize, ksize}, {1},
{int(ksize / 2)}).permute({0, 2, 3, 1});
std::tuple<torch::Tensor, torch::Tensor> torch_max_tmp = torch::max(max_each_scale, -1, true);
torch::Tensor max_all_scale;
torch::Tensor max_all_scale_idx;
std::tie(max_all_scale, max_all_scale_idx) = torch_max_tmp;
torch::Tensor exp_maps = torch::exp(com_strength * (scale_logits - max_all_scale));
torch::Tensor input = exp_maps.permute({0, 3, 1, 2});
torch::Tensor weight = torch::full({1, num_scales, ksize, ksize, 1}, 1);
torch::Tensor bias=torch::ones({0});
torch::Tensor sum_exp = torch::conv2d(input,weight,bias,{1},{int(ksize/2)},0,1);
torch::Tensor probs = exp_maps / (sum_exp + 1e-8);
return probs;
}
```
3. And the thrown error is:
```
terminate called after throwing an instance of 'c10::Error'
what(): expected output_padding to be a single integer value or a list of 3 values to match the convolution dimensions, but got output_padding=[0, 0] (convolution_expand_param_if_needed at /home/wang/software/pytorch/aten/src/ATen/native/Convolution.cpp:286)
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x6a (0x7fc42195a43a in /home/wang/software/pytorch/torch/lib/tmp_install/lib/libc10.so)
frame #1: <unknown function> + 0x52aa4a (0x7fc433c2da4a in /home/wang/software/pytorch/torch/lib/tmp_install/lib/libcaffe2.so)
frame #2: at::native::_convolution(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, bool, c10::ArrayRef<long>, long, bool, bool, bool) + 0x3c6 (0x7fc433c33c56 in /home/wang/software/pytorch/torch/lib/tmp_install/lib/libcaffe2.so)
frame #3: at::TypeDefault::_convolution(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, bool, c10::ArrayRef<long>, long, bool, bool, bool) const + 0xfd (0x7fc433fcb95d in /home/wang/software/pytorch/torch/lib/tmp_install/lib/libcaffe2.so)
frame #4: torch::autograd::VariableType::_convolution(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, bool, c10::ArrayRef<long>, long, bool, bool, bool) const + 0x2a7 (0x7fc43b2aece7 in /home/wang/software/pytorch/torch/lib/tmp_install/lib/libtorch.so.1)
frame #5: at::native::convolution(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, bool, c10::ArrayRef<long>, long) + 0xe0 (0x7fc433c2c660 in /home/wang/software/pytorch/torch/lib/tmp_install/lib/libcaffe2.so)
frame #6: at::TypeDefault::convolution(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, bool, c10::ArrayRef<long>, long) const + 0xc6 (0x7fc433fcb7d6 in /home/wang/software/pytorch/torch/lib/tmp_install/lib/libcaffe2.so)
frame #7: torch::autograd::VariableType::convolution(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, bool, c10::ArrayRef<long>, long) const + 0x83 (0x7fc43b2b3ca3 in /home/wang/software/pytorch/torch/lib/tmp_install/lib/libtorch.so.1)
frame #8: at::native::conv2d(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, long) + 0xaf (0x7fc433c2bfbf in /home/wang/software/pytorch/torch/lib/tmp_install/lib/libcaffe2.so)
frame #9: at::TypeDefault::conv2d(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, long) const + 0xbc (0x7fc433fcbeec in /home/wang/software/pytorch/torch/lib/tmp_install/lib/libcaffe2.so)
frame #10: torch::autograd::VariableType::conv2d(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, long) const + 0x75 (0x7fc43b374e75 in /home/wang/software/pytorch/torch/lib/tmp_install/lib/libtorch.so.1)
frame #11: soft_nms_3d(at::Tensor, int, float) + 0x54f (0x407a1f in ./rfnet)
frame #12: main + 0x77c (0x4069cc in ./rfnet)
frame #13: __libc_start_main + 0xf0 (0x7fc421004830 in /lib/x86_64-linux-gnu/libc.so.6)
frame #14: _start + 0x29 (0x407069 in ./rfnet)
```
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
## Expected behavior
I need a C++ interface of the conv2d function that accepts weights and bias to do convolution. This job can also be done through other functions, but I think this function may contain errors.
<!-- A clear and concise description of what you expected to happen. -->
## Environment
Collecting environment information...
PyTorch version: 1.1.0
Is debug build: No
CUDA used to build PyTorch: 9.0.176
OS: Ubuntu 16.04.6 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609
CMake version: version 3.5.1
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 7.5.17
GPU models and configuration: GPU 0: GeForce GTX 1060
Nvidia driver version: 390.87
cuDNN version: Could not collect
Versions of relevant libraries:
[pip] numpy==1.16.4
[pip] torch==1.1.0
[pip] torchvision==0.3.0
[conda] blas 1.0 mkl
[conda] mkl 2019.4 243
[conda] mkl_fft 1.0.12 py36ha843d7b_0
[conda] mkl_random 1.0.2 py36hd81dba3_0
[conda] pytorch 1.1.0 py3.6_cuda9.0.176_cudnn7.5.1_0 pytorch
[conda] torchvision 0.3.0 py36_cu9.0.176_1 pytorch
| module: docs,module: cpp,module: nn,low priority,module: convolution,triaged | low | Critical |
470,272,196 | create-react-app | create-react-app freezes at extracting "rxjs" | 
# It's now showing two loaders. Can you please help me with it?

## It's been 2 days now, but this issue keeps coming up again and again.
Please help ... | issue: bug | low | Minor |
470,304,311 | ant-design | Provide connecting lines and a directory tree for TreeSelect nodes | - [ ] I have searched the [issues](https://github.com/ant-design/ant-design/issues) of this repository and believe that this is not a duplicate.
### What problem does this feature solve?
The tree list that TreeSelect currently expands is functionally fairly complete, but in some cases the user experience is poor. For example, when there are many nodes and the hierarchy is deep, showing only the △ expander is unfriendly, so it would be good to offer the same optional display modes as the Tree component's showLine and the DirectoryTree.
### What does the proposed API look like?
<TreeSelect showLine />
Property: showLine
Description: whether to show connecting lines
Type: boolean
Default: false
const { DirectoryTreeSelect } = TreeSelect;
<DirectoryTreeSelect />
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | help wanted,Inactive | low | Minor |
470,319,537 | TypeScript | checkJs should recognize properties assigned with lodash | <!-- 🚨 STOP 🚨 𝗦𝗧𝗢𝗣 🚨 𝑺𝑻𝑶𝑷 🚨
Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Please read the FAQ first, especially the "Common Feature Requests" section.
-->
## Search Terms
<!-- List of keywords you searched for before creating this issue. Write them down here so that others can find this suggestion more easily -->
js, lodash, properties
## Suggestion
<!-- A summary of what you'd like to see added or changed -->
Recognize properties of some object when using lodash assign method
## Use Cases
<!--
What do you want to use this for?
What shortcomings exist with current approaches?
-->
Any initialization of an object using:
``` js
_.assign(obj, { ...someProps });
// Instead of
this.oneProp = oneProp;
this.anotherProp = anotherProp;
```
## Examples
<!-- Show how this would be used and what the behavior would be -->

## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [ ] This could be implemented without emitting different JS based on the types of the expressions
* [ ] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback | low | Critical |
470,328,778 | pytorch | Inconsistent axis argument names in torch.diagonal and torch.transpose | `torch.diagonal` uses `dim1` and `dim2` while `torch.transpose` uses `dim0` and `dim1`. | module: docs,low priority,triaged | low | Minor |
470,362,670 | go | cmd/compile: malformed DWARF ranges (child not contained in parent) | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version devel +79bb1a3653 Thu Jul 18 10:16:59 2019 -0400 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
linux/amd64
</pre></details>
### What did you do?
Build this program:
<pre>
package main
import "C"
import "log"
func main() {
log.Printf("foo")
}
</pre>
in the usual way, e.g. "go build main.go". Then run
<pre>
llvm-dwarfdump -verify -verbose main
</pre>
to check the resulting dwarf.
### What did you expect to see?
Clean run
### What did you see instead?
A number of errors of the form: "error: DIE address ranges are not contained in its parent's ranges:"
Here is one instance:
```
0x00006511: DW_TAG_inlined_subroutine
DW_AT_abstract_origin (0x00000000000014b8 "runtime.add")
DW_AT_ranges (0x00001370
[0x0000000000004984, 0x0000000000004988)
[0x00000000000049f9, 0x00000000000049fe))
DW_AT_call_file ("/ssd2/go/src/runtime/chan.go")
DW_AT_call_line (121)
```
which is contained in this DIE:
```
0x000064e4: DW_TAG_inlined_subroutine
DW_AT_abstract_origin (0x0000000000001696 "runtime.chanbuf")
DW_AT_low_pc (0x000000000000497c)
DW_AT_high_pc (0x0000000000004984)
DW_AT_call_file ("/ssd2/go/src/runtime/chan.go")
DW_AT_call_line (484)
```
so definitely an inconsistency. Note that the top-level parent is:
```
0x0000640e: DW_TAG_subprogram
DW_AT_name ("runtime.chanrecv")
DW_AT_low_pc (0x00000000000048a0)
DW_AT_high_pc (0x0000000000004f52)
DW_AT_frame_base (DW_OP_call_frame_cfa)
DW_AT_decl_file ("/ssd2/go/src/runtime/chan.go")
DW_AT_external (0x01)
```
There is nothing in the DWARF spec as far as I know that mandates this sort of address range nesting consistency, but I think it would probably be nice if a given inlined subroutines ranges were completely nested inside the parent DIE.
| NeedsFix,Debugging | low | Critical |
470,382,416 | pytorch | update docs that sorting is not needed in | ## 📚 Documentation
Based on the discussion here:
https://discuss.pytorch.org/t/why-lengths-should-be-given-in-sorted-order-in-pack-padded-sequence/3540/10
`pack_padded_sequence`
does not need sorting anymore, so perhaps the fact that the docs mention sorting at all should be deleted.
> For unsorted sequences, use enforce_sorted = False. If enforce_sorted is True, the sequences should be sorted by length in a decreasing order, i.e. input[:,0] should be the longest sequence, and input[:,B-1] the shortest one. enforce_sorted = True is only necessary for ONNX export. | module: docs,module: rnn,triaged | low | Minor |
470,383,168 | youtube-dl | [ARD Radio] Audiothek Site support request | <!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2019.07.16. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that site you are requesting is not dedicated to copyright infringement, see https://yt-dl.org/copyright-infringement. youtube-dl does not support such sites. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
- Search the bugtracker for similar site support requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Finally, put x into all relevant boxes (like this [x])
-->
- [x] I'm reporting a new site support request
- [x] I've verified that I'm running youtube-dl version **2019.07.16**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that none of provided URLs violate any copyrights
- [x] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
<!--
Provide all kinds of example URLs support for which should be included. Replace following example URLs by yours.
-->
- Single audio: https://www.ardaudiothek.de/lesungen/manfred-krug-liest-die-kuh-im-propeller-von-michail-soschtschenko/64767418
## Description
<!--
Provide any additional information.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->
Can you please add support for ARD Audiothek (ARD Radio)?
| site-support-request | low | Critical |
470,412,279 | go | cmd/vet: reject flag.Parse during func init | In #31859, @cespare suggested rejecting flag.Parse during func init, which is always incorrect (other packages not yet initialized may want to define flags).
We could add a special runtime hook of some kind to allow flag to see whether main.main has started, but that would be unfortunate.
There also may be lots of code in the wild that does parse flags during init and kind of works out OK, and if it's working well enough we don't want to break it unnecessarily.
A vet check, on by default during go test, seems like the perfect compromise to me. | help wanted,NeedsFix,early-in-cycle,Analysis | low | Major |
470,435,143 | go | encoding/asn1: valid GeneralizedTime with UTC offset of +0000 not parsed | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.12.7 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes.
### What operating system and processor architecture are you using (`go env`)?
macOS 10.14.5
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/[user]/Library/Caches/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/[user]/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/Cellar/go/1.12.7/libexec"
GOTMPDIR=""
GOTOOLDIR="/usr/local/Cellar/go/1.12.7/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/84/_6l41bt970l9fmsrwc_p1gv00000gn/T/go-build217266963=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>
### What did you do?
Attempted to upgrade to TLS on a server I don't control.
It appears that the time format is valid, according to the documentation I've found ([here](https://www.obj-sys.com/asn1tutorial/node14.html), among other places).
Public cert from that server included in [this failing test](https://play.golang.org/p/L1jCqoW6J4K):
```golang
func TestUTCOffset(t *testing.T) {
certPEM := []byte(`-----BEGIN CERTIFICATE-----
MIIDNzCCAqCgAwIBAgIJAOG5Q5oZboH9MA0GCSqGSIb3DQEBBAUAMHQxCzAJBgNV
BAYTAmNhMREwDwYDVQQHEwhJbm5pc2ZpbDENMAsGA1UEChMESkpFSTEZMBcGA1UE
AxMQSkpFSSBXZWJBZG1pbiBDQTEoMCYGCSqGSIb3DQEJARYZc3VwcG9ydEBwYWNl
dGVjaG5pY2FsLmNvbTAiFw0wODExMTIwMjE3NTJaFxEzNjAzMjkwMjE3MjkrMDAw
MDBFMQswCQYDVQQGEwJjYTERMA8GA1UEBxMISW5uaXNmaWwxDTALBgNVBAoTBEpK
RUkxFDASBgNVBAMTC2d3LmpqZWkuY29tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCB
iQKBgQCpGszlFBX7X8t2ruSxDgmFHmunwbpbXlqT+Ekh/TBP/I4qwbepPY+nL9jo
0+ngO7cENWnXLc1B2N32uXakD0ygJzgN6ftbwX0nMWBOG5dcc+TQCl518q9aTAEx
R1LFXXvAM1uCknjYINnyzbs7xFxdhIVAZG6m/hcPPtiu6c1WnQIDAQABo4H7MIH4
MB0GA1UdDgQWBBRmqWzvNfSxFKPuGh1xRA6MPotYqjCBpgYDVR0jBIGeMIGbgBSH
ssHbNYjHbMAf3QJ9/EExdAbmTaF4pHYwdDELMAkGA1UEBhMCY2ExETAPBgNVBAcT
CElubmlzZmlsMQ0wCwYDVQQKEwRKSkVJMRkwFwYDVQQDExBKSkVJIFdlYkFkbWlu
IENBMSgwJgYJKoZIhvcNAQkBFhlzdXBwb3J0QHBhY2V0ZWNobmljYWwuY29tggkA
4blDmhlugfwwFgYDVR0RBA8wDYILZ3cuamplaS5jb20wCQYDVR0TBAIwADALBgNV
HQ8EBAMCBeAwDQYJKoZIhvcNAQEEBQADgYEAAsbIrUXDZ9bnPsauS/0iIZQhJWDc
gQg8UrWOHrdjh/bVG0Cgv5kx6EGE60Q5OuvyQU1wcAN8YgqgttrlLWjWIMV1/lZD
IovhMR42FRMpYtmW+YDVRY70fPqFAGox1r5/6fi7TCXKZmkNFQ0SoayW6xQmtqct
48cZbI2/iiwqeVE=
-----END CERTIFICATE-----`)
certDERBlock, _ := pem.Decode(certPEM)
_, err := x509.ParseCertificate(certDERBlock.Bytes)
if err != nil {
t.Fatal(err)
}
}
```
### What did you expect to see?
The certificate would get parsed.
### What did you see instead?
`asn1: time did not serialize back to the original value and may be invalid: given "360329021729+0000", but serialized as "360329021729Z"`
The problem looks like it may be in the same area as #15842. | NeedsInvestigation | low | Critical |
470,436,580 | flutter | Expose hot restart to flutter_driver | <!-- Thank you for using Flutter!
If you are looking for support, please check out our documentation
or consider asking a question on Stack Overflow:
* https://flutter.dev/
* https://api.flutter.dev/
* https://stackoverflow.com/questions/tagged/flutter?sort=frequent
If you have found a bug or if our documentation doesn't have an answer
to what you're looking for, then fill our the template below. Please read
our guide to filing a bug first: https://flutter.dev/docs/resources/bug-reports
-->
## Use case
Writing e2e tests could be more automated, and less time consuming, if we could wipe app state at the end of any given test. Often, we don't want the state from the previous test to carry over to a new test or workflow that we're testing.
## Proposal
Expose a method on `FlutterDriver` that performs a hot restart on the app that's running for the driver (i.e. `FlutterDriver.restartApp`), or add a flag to the driver that can configure it to restart at the end of every test (i.e. `FlutterDriver.restartAfterEachTest = true`).
### more context
My team is rewriting our company's app in Flutter from iOS. The QAEs are attempting to port the E2E tests from our current app with Flutter Driver. Currently, they're written using cucumber.
There actually is a cucumber port on Pub that offers this feature. It accomplishes this by writing an "R" to the stdin of the VM running the app while the driver is running. That library isn't great for our use case; I'm just pointing it out for context.
470,446,879 | flutter | Initial image load is slow when AssetManifest.json is larger than 10KB | When AssetManifest.json is larger than 10KB, [AssetBundle.loadString](https://github.com/flutter/flutter/blob/master/packages/flutter/lib/src/services/asset_bundle.dart#L70) will spawn a new isolate to perform utf8 decoding, and that is taking 1.5 - 3 seconds on my test device (Pixel 2) in debug mode.
Release mode is much better, takes only ~150ms.
This is affecting our internal customers because the slow image load makes the scuba tests flaky. | framework,c: performance,a: assets,customer: google,a: images,a: build,P2,team-framework,triaged-framework | low | Critical |
470,475,903 | TypeScript | `this` and `typeof` are not type keywords in completions | In services/utilities.ts:1213, `this` and `typeof` are not in
```ts
export const typeKeywords: ReadonlyArray<SyntaxKind> = [
SyntaxKind.AnyKeyword,
SyntaxKind.BigIntKeyword,
SyntaxKind.BooleanKeyword,
SyntaxKind.FalseKeyword,
SyntaxKind.KeyOfKeyword,
SyntaxKind.NeverKeyword,
SyntaxKind.NullKeyword,
SyntaxKind.NumberKeyword,
SyntaxKind.ObjectKeyword,
SyntaxKind.ReadonlyKeyword,
SyntaxKind.StringKeyword,
SyntaxKind.SymbolKeyword,
SyntaxKind.TrueKeyword,
SyntaxKind.VoidKeyword,
SyntaxKind.UndefinedKeyword,
SyntaxKind.UniqueKeyword,
SyntaxKind.UnknownKeyword,
];
```
even though they are type keywords. Unfortunately, adding them breaks some tests to do with find-all-refs with global this -- patterns like `function f(this) { this }` start identifying the `this` inside the function as the global `this`. | Bug,Help Wanted,Domain: Completion Lists | low | Major |
470,485,662 | pytorch | Versioning for libtorch nightlies | ## 🚀 Feature
<!-- A clear and concise description of the feature proposal -->
Currently, all the nightly builds have the same `CAFFE2_VERSION` even though there may be major changes between them.
For example, both `1.2.0.dev20190601` and `1.2.0.dev20190717` list the version as `10200`, but there are many breaking changes between the two:
- `torch::jit::load` returned a shared pointer in the past and no longer does
- Getting a schema requires `method.function().getSchema()` instead of `method.getSchema()`
- A `ClassType` argument is added as the first input to every model
- Lots of changes in `List`/`GenericList` and `Dict`/`GenericDict`
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->
Since torch development is moving pretty quickly, it's important to be able to support one or two nightlies between major releases. This is difficult because there doesn't appear to be a good way to differentiate between versions.
## Pitch
<!-- A clear and concise description of what you want to happen. -->
Can the nightly builds define an additional version? Maybe something like this:
```
#define CAFFE2_NIGHTLY_VERSION 20190717
```
## Additional context
<!-- Add any other context or screenshots about the feature request here. -->
There's a comment that mentions a versioning strategy. Is this still correct?
https://github.com/pytorch/pytorch/blob/693871ded3d3c643d7d57b990fc415198392b5f0/caffe2/core/macros.h.in#L8-L9
| module: binaries,module: build,triaged,better-engineering | low | Minor |
470,516,655 | pytorch | Test utility for non-contiguous tensors | Currently in tests, non-contiguous tensors are mostly created in a homebrewed way, e.g., via transpose. This may work for now but it seems relying too much on the underlying implementation which guarantees nothing about contiguity. It would be nice if there is a utility that consistently produces non-contiguous tensors for testing purposes. | module: tests,triaged,enhancement | low | Minor |
470,520,993 | TypeScript | Optionally provide path/link to file with transpilation error | I see this in my terminal:
<kbd>
<img width="975" alt="Screen Shot 2019-07-19 at 1 04 27 PM" src="https://user-images.githubusercontent.com/11139560/61562857-18523d00-aa27-11e9-8a2b-67025d2dab95.png">
</kbd>
-----
I have two feature requests:
1. for us hard-of-seeing people, it's _really_ hard to read the cyan and yellow colors in a light terminal.
Is there some way to turn on darker colors - by passing a dark/light option to pretty?
2. It would be awesome if the transpiler could include the path to the file in the terminal. My editor is cool and if I click the file path in the terminal it will open the file.
My tsconfig.json is:
```
{
"compilerOptions": {
"outDir": "dist",
"allowJs": false,
"pretty": true, // << makes it pretty, but I can't read in a light-colored terminal
"resolveJsonModule": true,
"sourceMap": false,
"skipLibCheck": true,
"rootDir": "src",
"declaration": false,
"baseUrl": ".",
"target": "es2018",
"module": "commonjs",
"noImplicitAny": true,
"removeComments": true,
"allowUnreachableCode": true,
"lib": [
"es2017",
"es2018"
]
},
"compileOnSave": false,
"include": [
"src"
]
}
```
470,566,427 | pytorch | [RFC] RPC Based Distributed Model Parallel | with @pritamdamania87 @zhaojuanmao @aazzolini @gqchen @pietern @satgera @ezyang @zdevito @suo @manojkris @gchanan @soumith @dzhulgakov @yifuwang @bddppq @joxu-cn @dwarakrajagopal @jspisak
PyTorch currently provides simple APIs for single machine data parallel, distributed data parallel, and single machine model parallel. However, when it comes to distributed model parallel, applications have to build their own scaffold to stitch together local autograd graphs into one global graph. This proposal aims to fill in that gap by providing an RPC-Based distributed model parallel API. In short, applications may run RPC to execute code remotely in the forward pass, and autograd will automatically travel across RPC boundaries in the backward pass.
# API
## Core Concepts
**RRef[T] -** (abbreviation ref) A *reference* to a value of some type `T` (e.g. Tensor) on a remote worker. This handle keeps the referenced remote tensor value alive on the owner, but there is no implication that the value will be transferred to the local worker in the future. It is valid to have a reference to a local value as well, and values of type `T` can be *implicitly converted* to `RRef[T]`. This implicit conversion will be critical later to allow the expression of different types of RPC. Think of it like the implicit conversion from `std::string` to `const std::string &`. See the System Design section for more details about `RRef`.
```python
ref.owner() # what is the worker this value lives on
v = ref.local_value() # if ref.owner() is local worker, then
# this returns the underlying value, otherwise error.
# you can create a ref to a local tensor
t = torch.rand(3, 4)
ref2 = torch.RRef(t)
# in TorchScript, T can be automatically converted to RRef[T]
ref3 : RRef[Tensor] = t
```
**Future[T] -** (abbreviation fut) A guarantee that at some future point in time the value of type `T` will be available locally. The action to create `T` locally is assumed to be scheduled and in-progress. Future is already supported in TorchScript and we are extending this to remote calls.
```python
v = fut.wait() # block the current thread until v is ready
# local cpu task creation returns a future to the computed tensors
fut = torch.fork(lambda x, y: x + y, torch.rand(3, 4), torch.rand(3, 4))
```
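As a point of comparison, the fork/wait pattern above can be sketched with Python's standard `concurrent.futures`. This is a purely local stand-in for the proposed `torch.fork`; the `fork` helper below is illustrative and not part of the proposal, and it runs on plain Python values rather than tensors.

```python
from concurrent.futures import ThreadPoolExecutor

_pool = ThreadPoolExecutor(max_workers=2)

def fork(fn, *args):
    # schedule fn(*args) on a worker thread and hand back a Future;
    # analogous in shape to the proposed torch.fork, but for plain values
    return _pool.submit(fn, *args)

fut = fork(lambda x, y: x + y, 3, 4)
v = fut.result()  # like fut.wait(): blocks the current thread until v is ready
```

The point is only that `Future[T]` has the same contract as an ordinary future: the work is already scheduled when the handle is created, and `wait()` is a blocking rendezvous, not a trigger.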
## Core Functions
```python
# synchronous
result : T = torch.rpc(on : Worker, remote_callable : Callable, *args)
# asynchronous
result : Future[T] = torch.async_rpc(on : Worker, remote_callable : Callable, *args)
# remote reference
result : RRef[T] = torch.remote(on : Worker, remote_callable : Callable, *args)
```
Each function above invokes `remote_callable` on a remote worker. Value types in the `args` list are *copied by value* to the remote worker. `RRef[T]` types in the `args` list are *copied by reference* to the remote worker (again see the analogy between `std::string` and `const std::string&`).
The *synchronous* variant copies the result value back, blocking the calling thread until the response occurs. The *asynchronous* variant returns immediately with a future. The remote knows that the call *will expect to receive the value* so it will send a message back at some point with the result without further prompting.
The *remote reference* variant returns immediately with an `RRef` of the return value. The remote knows that the caller *does not expect to receive the result value*.
Below shows how these functions are used:
```python
# make some local tensors
a : Tensor = torch.rand(3, 4)
b : Tensor = torch.rand(3, 4)
# define a remote function, visible to all machines.
# type annotations define expected input/output types.
def remote_function(a : Tensor, b : RRef[Tensor]) -> Tensor:
# 'b' in the type signature is a remote reference, so we must copy it here
# to use it locally.
# to_here() is defined later in the syntax sugar section, it synchronously
# copies the tensor to this worker.
b_l : Tensor = b.to_here()
return a + b_l
# run remote_function on a different device.
# a is copied by value since it is a Tensor
# b is copied by reference to the remote machine due to the RRef[Tensor]
# type annotation in the signature, which causes an implicit conversion to a
# reference type.
# torch.remote always creates an RRef of the result type.
# It does not wait for the remote's response.
# There is no implied copy of the tensor data yet.
c : RRef[Tensor] = torch.remote("worker1", remote_function, a, b)
# we can explicitly request the data to be copied back here:
c_l : Tensor = c.to_here()
# another example:
def remote_function2(a : Tensor, b : Tensor) -> Tensor:
return a + b
# Here we call torch.rpc which returns the value directly without
# creating a remote reference.
# we synchronously wait for remote_function2 to return.
c : Tensor = torch.rpc("worker2", remote_function2, a, b)
# When the RPC call is returning a non-reference type, we need to wait for
# a response from the remote host. To avoid synchronously waiting, use the
# async flag to get a future instead.
c_f : Future[Tensor] = torch.async_rpc("worker2", remote_function2, a, b)
# even before calling wait, the remote knows that the data should be sent back
# to the caller as soon as it is ready.
# force the local thread to wait for the remote's response
c = c_f.wait()
# if you omit type annotations in the remote function, the assumption is that
# arguments are passed without any implicit conversions
def remote_function3(a, b):
# no annotations mean that a, b will be Tensor since there is no conversion
return a + b
c: Tensor = torch.rpc("worker2", remote_function3, a, b)
```
### RRef Forks
### Implicit Conversions for RRef Arguments
We allow implicit conversion between `T` and `RRef[T]` for arguments of RPC functions. Both the actual and formal parameter can either be a `T` or an `RRef[T]`, leading to four cases that might occur:
**T → T (passing a T to an rpc that accepts a T):** the value T is copied by value, and sent over the wire as part of the message invoking the RPC
**T → RRef[T] (passing a T to an rpc that accepts RRef[T]):** The caller constructs a remote reference to the argument, and sends the *reference* over the wire to the callee. The data is not sent. The callee can then use the reference as a handle to either request the data later or to make further remote calls.
**RRef[T] → T (passing an RRef[T] to an rpc that accepts T):** The callee expects to get an actual value, so the callee needs to turn the reference into a value. The network behavior depends on where the `RRef[T]` lives.
* If the `RRef[T]` lives on the caller, then the implementation looks up the actual value of `T` locally and passes it by value over the wire, similar to the T → T case.
* If the `RRef[T]` lives on the callee, then the implementation just sends the reference and the callee does the lookup locally.
* If the `RRef[T]` lives on some third machine, then the caller sends 2 messages. One to the third machine telling it to send the data in the remote reference directly to the callee, and one to the callee telling it to start the RPC and expect this input to be coming from the third machine. This effectively forwards the value of the `RRef[T]` to the callee without the caller having to load it or the callee having to request it later.
Examples:
```python
def remote_function1() -> Tensor:
    return torch.ones(2)

def remote_function2(a : Tensor) -> Tensor:
    b = a * 2
    return b
aref : RRef[Tensor] = remote("worker1", remote_function1)
# this local worker will make two RPC calls: one to tell worker1 to send the
# tensor to worker2, and another one to tell worker2 to expect this Tensor input
# from worker1. remote_function2 will run on worker2 only after it received the
# tensor from worker1.
bref : RRef[Tensor] = remote("worker2", remote_function2, aref)
```
**RRef[T] → RRef[T] (passing an RRef[T] to an RPC that accepts RRef[T]):** The callee expects an `RRef[T]`, but we must make sure we correctly keep track of references to the value on a remote. So the actual behavior depends on where the `RRef[T]` lives.
* If `RRef[T]` lives on the caller, then we simply pass it to the remote and record that this remote now has a live reference to the value.
* If the `RRef[T]` lives on the callee, then we pass it to the remote, and it becomes a local reference on the remote.
* If `RRef[T]` lives on some third machine, then we must forward the reference. To do this the caller sends two messages. One to the third machine telling it to create a remote reference and send it to the callee, and one to the callee telling from where to expect the remote. The callee code is not invoked until the remote is transferred to ensure sane reference counting.
Examples:
```python
def remote_function1() -> Tensor:
    return torch.ones(2)

def remote_function2(a : RRef[Tensor]) -> Tensor:
    delta = 10
    return a.to_here() + delta
aref : RRef[Tensor] = remote("worker1", remote_function1)
# this local worker will make two RPC calls: one to tell worker1 to create a
# remote reference and send it to worker2, and another one to tell worker2 to
# expect this remote reference input from worker1. remote_function2 code will
# not run on worker2 until it receives the remote reference from worker1 to
# ensure proper reference counting.
bref : RRef[Tensor] = remote("worker2", remote_function2, aref)
```
When an `RRef[T]` goes dead on machine A, a message is sent to the owner of `T` telling it that the reference from machine A is dead.
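The death notification above amounts to simple reference counting on the owner. As a toy illustration (pure Python, no torch, all names hypothetical — not the proposed implementation), the owner might track live fork ids and free the value once the set drains:

```python
# Hypothetical, simplified simulation of the owner-side bookkeeping:
# each fork registers itself, and its death notification removes it.
class Owner:
    def __init__(self, value):
        self.value = value
        self.live_forks = set()

    def on_fork_created(self, fork_id):
        self.live_forks.add(fork_id)

    def on_fork_deleted(self, fork_id):
        # the "fork is dead" message from machine A lands here
        self.live_forks.discard(fork_id)
        if not self.live_forks:
            self.value = None  # no forks left: safe to free the data

owner = Owner(value=[1, 2, 3])
owner.on_fork_created("A:1")
owner.on_fork_created("B:7")
owner.on_fork_deleted("A:1")
assert owner.value is not None   # B still holds a fork
owner.on_fork_deleted("B:7")
assert owner.value is None       # all forks dead, data freed
```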
### Explicit RRef type for return values
The above implicit `RRef` argument conversion does not apply to return values. If `remote_function` returns `RRef[T]`, calling it remotely using `torch.remote` would return `RRef[RRef[T]]` instead of `RRef[T]`. This is because the return-value `RRef` of `torch.remote` is first created on the caller, which does not yet know the owner of the real data `T`. `T` could be stored on the callee of `torch.remote`, but it could also live on a different worker, as the callee may itself make another remote call within `remote_function` and return an `RRef[T]` owned by a different worker. Moreover, the caller is allowed to share the returned `RRef` with other workers immediately after `torch.remote` returns. Since by then the caller does not yet know the real owner of `T`, sharing the inner `RRef` would break the reference count algorithm.
Examples:
```python
def remote_function3() -> RRef[Tensor]:
    return torch.remote("Worker2", torch.ones, 2, 2)
cref : RRef[RRef[Tensor]] = remote("worker1", remote_function3)
```
## Initialization API
Users may choose communication backend for RPC, and users are responsible for setting up the backend properly before calling the `init_rpc` method.
```python
# backend: specifies the underlying communication implementation
# init_method: contains the information to initialize/connect a name store to
# resolve names
# name: is a unique identifier for the current worker
torch.distributed.init_rpc(backend="pg", init_method="file:///...", name="worker1")
```
The `init_rpc` method will create an `RpcAgent` under the hood and will make the current worker ready to send and receive RPC calls. If you call `init_rpc` and use the `ProcessGroup` (`pg`) backend, it acts as a global barrier, where all the node names are collectively synchronized before continuing. This is not the case if you use a peer-to-peer backend (e.g. tensor pipes), where calling `init_rpc` will register the node name in the specified store and start serving.
Applications don’t need to explicitly register functions for remote execution, but we do assume the same functions are defined on both caller and callee. This is often true, as all workers can import the same set of libraries or even share the same Python script.
## Syntax Sugar
Other operations are now implementable using syntax sugar.
### Retrieving Value From RRef
```python
# helper private RPC functions
def _identity(v : Tensor) -> Tensor:
    # copy the tensor by value to this remote
    return v

def _to_here(v : RRef[T]) -> T:
    # take a reference, send it to the device that owns it
    # and have that device return the actual tensor by value
    return v.local_value()

class RRef[T]:
    ...
    # copy a remote tensor to the local worker, sync version
    def to_here(self) -> T:
        return torch.rpc(_to_here, self, on=self.owner())
```
### Builtin Operators
```python
# proxy methods for all builtin functions exist on references for
# existing TorchScript types like Tensors. They always follow a fixed pattern:
def _mm(a : RRef[Tensor], b : RRef[Tensor]) -> RRef[Tensor]:
    return a.local_value().mm(b.local_value())

class RRef[Tensor]:
    def mm(self : RRef[Tensor], other : RRef[Tensor]) -> RRef[Tensor]:
        on = same_worker(self.owner(), other.owner())
        return torch.remote(on, _mm, self, other)

c : Tensor = a.mm(b).to_here()
```
### Callable and RRef
If `RRef[T]` holds a callable object `T`, the application may directly call the `RRef` which will be translated into `torch.remote` call to the owner of the callable.
```python
# if T is callable for RRef[T], rref(x) will be translated to calling T(x)
# on the owner of the RRef
def _call_rref(v : RRef[T], *args):
    return v.local_value()(*args)

class RRef[T]:
    def __call__(self, *args):
        return torch.remote(self.on(), _call_rref, self, *args)

net = torch.remote("Worker1", Net)
net(inputs)
```
### Optimizer and RRef
As models might have remote sub-modules (i.e., `RRef[nn.Module]`), we should provide an optimizer sugar to handle it. The optimizer sugar (`torch.optim.remote`) takes a local optimizer constructor, a distributed model parallel model, and an argument list for the local optimizer constructor. The `torch.optim.remote` recursively creates a local optimizer on every remote sub-module owner, and exposes the same step API as a local optimizer which recursively calls every local optimizer.
```python
class Net1(nn.Module):
    ...

class Net2(nn.Module):
    ...

class DMP(nn.Module):
    def __init__(self):
        self.net1 = dist.remote("worker1", Net1)
        self.net2 = dist.remote("worker2", Net2)

dmp = dist.remote("worker0", DMP)
# dist.optimizer creates an optimizer on all RRef owners
optimizer = dist.optimizer(torch.optim.SGD, dmp, lr=0.1)
with dist.autograd.context():
    loss = dmp(inputs)
    dist.autograd.backward(loss)
    optimizer.step()
```
## Model Parallel Training Examples
### Multi-Machine Model Training
```python
# 1. load data
inputs_rref = torch.remote("worker1", load_inputs, path_to_inputs)
labels_rref = torch.remote("worker2", load_labels, path_to_inputs)

# 2. define model
class Net1(nn.Module):
    ...

class Net2(nn.Module):
    ...

class DMP(nn.Module):
    def __init__(self):
        self.net1 = torch.remote("worker1", Net1)
        self.net2 = torch.remote("worker2", Net2)

    def forward(self, inputs_rref):
        # RRef[T].__call__(args) is a sugar that translates to
        # dist.remote(T, RRef.on(), args)
        outputs1_rref = self.net1(inputs_rref)
        outputs2_rref = self.net2(outputs1_rref)
        return outputs2_rref

# 3. training, run it where you want to call autograd
def train(inputs_rref, labels_rref):
    dmp = DMP()
    # torch.optim.remote creates an optimizer on every RRef destination
    optimizer = dist.optimizer(torch.optim.SGD, dmp, lr=0.1)
    outputs_rref = dmp(inputs_rref)
    loss = loss_func(outputs_rref.to_here(), labels_rref.to_here())
    autograd_ctx_id = dist.autograd.backward(loss)
    optimizer.step(autograd_ctx_id)

dist.rpc(dev2, train, args=(inputs_rref, labels_rref))
```
### Parameter Server Training
```python
class ParameterServer:
    def __init__(self):
        self.params = torch.zeros(100, 100).to(0)

    def get_params(self) -> Tensor:
        return self.params

    def add_grads(self, grad: Tensor):
        self.params += grad.to(0)

def train(ps):
    for _ in range(10):
        params = torch.rpc("ps", ParameterServer.get_params, args=(ps, ))
        # run forward and backward
        torch.rpc("ps", ParameterServer.add_grads, args=(ps, params.grad))
        torch.distributed.barrier(group=TRAINER_GROUP)

ps = torch.remote("worker1", ParameterServer)
torch.remote("worker2", train, args=(ps,))
torch.remote("worker3", train, args=(ps,))
```
# System Design
## Distributed Autograd
### Basic Idea
In the first version, `dist.autograd.backward` does not support `RRef` arguments, but `RRef` can still help build the autograd graph. The overall idea is as follows.
* When calling `torch.rpc` or `RRef.to_here()`, `send` and `recv` autograd functions will be inserted to connect local autograd graphs on multiple workers into one distributed autograd graph.
* Every distributed backward pass is assigned a globally unique id (***`autograd_context_id`***), and every participating worker will keep a dedicated context for it.
* When the backward computation reaches a `recv` function, it packs the gradient and the `autograd_context_id` in the message, and passes it to its `send` counterpart.
* Upon receiving a message for a `send` function in the backward pass, it uses the `autograd_context_id` in the message to identify which backward pass it belongs to, and uses the gradient in the message to continue autograd computation locally.
### Send and Recv Autograd Functions
Let’s start with a simple example where there is just one synchronized RPC call and there is only one tensor passed across worker boundaries. Code is on the left and the autograd graph is on the right where `AccumulateGrad` autograd functions for leaf nodes are omitted for simplicity.
```python
# the add function should be
# defined on both workers
def add() -> Tensor:
    a = torch.rand(2, 2)
    b = torch.rand(2, 2)
    c = a + b
    return c

# make RPC call from worker0
# to execute add on worker1
c1 = dist.rpc(add, on="worker1")
d = torch.ones_like(c1)
e = c1 * d
e.sum().backward()
```

The `send` and `recv` autograd functions are inserted during the forward pass, which connect two local graphs into one distributed graph. In the backward pass, the gradient will be passed to the `recv` autograd function on `worker0`, and the `recv` autograd function will then transmit the gradient tensor to `worker1`’s `send` autograd function. Then, `worker1` can kick off the local autograd engine to resume the backward pass. There are a few more details that need to be clarified in this simple example:
* On `worker1`, how do we keep the autograd graph alive after the RPC call returns?
* In short, the distributed autograd engine on `worker1` will keep a reference to the `send` function which can keep the graph alive.
* Reasoning: The graph can be kept alive by keeping a reference to either tensor `C` or the `send` autograd function, as both of them hold a reference to the `add` autograd function. We choose to keep a reference to the `send` function instead of tensor `C`, because `C` as a non-leaf node produced by `add` is not needed in the backward pass. It should be freed as soon as possible. It is not memory efficient to hold C alive just because we want to have an entrance point to the autograd graph.
* In the backward pass, how does `recv` on `worker0` find the correct `send` on `worker1` to talk to?
* This can be done by assigning a globally unique ID (*worker_id + local send/recv id*) for each `send` / `recv` function pair.
* When can `worker1` delete its local autograd graph?
* `send` should have the same lifetime as its corresponding `recv` function. This can be done by sending a message from `worker0` to `worker1` when `recv` is destructed on `worker0`. The `recv` function is kept alive by the `loss` tensor. So, conceptually, the global autograd graph will be deleted when the final loss tensor is gone.
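The globally unique pairing can be sketched in a few lines (pure Python, hypothetical names — not the actual implementation): each worker combines its own name with a local counter, and remembers its `send` functions by that id, which is also what keeps the graph alive.

```python
import itertools

# Toy model of the (worker_id + local id) scheme: a recv on worker0 can
# name exactly one send on worker1 by this pair, and the worker's send
# table holds a reference that keeps the autograd graph alive.
class Worker:
    def __init__(self, name):
        self.name = name
        self._counter = itertools.count()
        self.sends = {}

    def make_send(self, grad_fn):
        pair_id = (self.name, next(self._counter))
        self.sends[pair_id] = grad_fn   # reference keeps the graph alive
        return pair_id

w1 = Worker("worker1")
pid = w1.make_send(grad_fn="add")
assert pid == ("worker1", 0)
assert w1.sends[pid] == "add"
```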
### Hidden Autograd Path and Circular Dependency
Things can become complicated when an autograd graph contains multiple send/recv pairs. Consider the following example.
```python
# all functions should be defined on all workers
def worker0_func(c2: Tensor) -> Tensor:
    g = torch.rand(2, 2)
    h = g + c2
    return h

def worker1_func_top() -> Tensor:
    a = torch.rand(2, 2)
    b = torch.rand(2, 2)
    c = a + b
    return c

def worker1_func_bottom(c: Tensor, e1: Tensor) -> Tensor:
    f = c + e1
    return f

def worker2_func(c1: Tensor) -> Tensor:
    d = torch.rand(2, 2)
    e = c1 + d
    return e

# on Worker3
c_ref = torch.remote(worker1_func_top, on="Worker1")
h1 = torch.rpc(worker0_func, c_ref, on="Worker0")
e_ref = torch.remote(worker2_func, c_ref, on="Worker2")
f1 = torch.rpc(worker1_func_bottom, c_ref, e_ref, on="Worker1")
i = h1 + f1
i.sum().backward()
```

This example highlights two problems that we need to address:
* **Hidden Autograd Path:** The existing local autograd engine starts from the loss (or all outputs) and does a discovery/marking phase to identify all participating functions before executing the real autograd computation, so that all paths in the autograd graph are known upfront. However, we don’t have this luxury in distributed autograd because some parts of the autograd graph reside on remote workers. For example, when a grad arrives at `send5`, worker1 cannot tell whether `send3` will be in the backward pass if it only looks at local information. More specifically, `i.sum().backward()` looks the same as `f1.sum().backward()` from worker1’s perspective, but the former involves `send3` and the latter does not.
* To address this problem, we propose to record all globally upstream (upstream in the forward pass, downstream in the autograd graph) `send` / `recv` pairs in the forward pass, so that we know exactly which `send` / `recv` to wait for in the backward pass.
* **Circular Dependency:** there are circular dependencies between worker1 and worker2, i.e., it is impossible to finish the autograd computation on one worker before kicking it off on another. One option is to start the autograd computation on `worker1` first and have an autograd thread block there waiting for grads for `send1`, but this is less ideal.
* To address this problem, we propose to create the `send` autograd function's task and put it in the ready queue only when its grad is received. Note that, when computing the dependency count for `add1`, the autograd engine still takes `send1` into account, so that the engine will only start computing grads for `add1` after both `add2` and `send1` finish.
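The enqueue-on-grad-arrival idea can be illustrated with a toy dependency counter (pure Python, hypothetical names — a sketch, not the engine's real data structures): `add1` waits on two producers, and nothing runs until `send1`'s grad actually arrives over the wire.

```python
# add1 depends on add2 (local) and send1 (remote grad); the send's task is
# only enqueued once its grad is received, which breaks the circular wait.
deps = {"add1": 2}           # two outstanding producers: add2 + send1
ready_queue = []

def finish(name, dependent="add1"):
    # `name` identifies which producer finished (for readability only)
    deps[dependent] -= 1
    if deps[dependent] == 0:
        ready_queue.append(dependent)

finish("add2")
assert ready_queue == []     # still waiting on send1's grad
finish("send1")              # grad received -> send enqueued -> add1 ready
assert ready_queue == ["add1"]
```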
Note that we need to record information in the forward pass and do the discovery in the backward pass because we don’t know which `send` function will be participating in the autograd computation. However, if the application can guarantee that all `send` functions will receive grad in the backward pass, we can skip all this complexity and have a more efficient version. Both scenarios are useful, so we propose to have two modes:
* **Smart Mode** supports running backward on a subgraph of the global autograd graph, but there will be extra overhead in both forward and backward pass.
* **Fast Mode** skips dependency recording in the forward pass and graph discovery in the backward pass, but the application needs to guarantee that *all send autograd functions will receive grad in the backward pass*.
The two sections below describe the two algorithms in more details.
### Distributed Autograd Algorithm Smart mode
**Forward pass:**
For every `send` **x**:
1. Find `send` functions in **x**’s lineage, by:
1. Finds all locally reachable `recv` functions from `send` **x** in the autograd graph. In the example above, `send2` finds `recv1`, `send4` finds `recv3`, and `send5` finds `recv2`.
2. Use those found `recv` functions to find globally reachable `recv` functions in `send` **x**’s lineage. Note that this can be done, because in step 2 we send enough information from `send` to `recv`. In the example above `send4` knows `send3`, and `send5` knows `send1` and `send2`.
2. Then, `send` **x** includes ids of its lineage `send` functions in the message. Intuitively, it means that if there is a grad received for `send` **x**, the backward pass must reach all `send` functions in its lineage as well. It helps a node to determine whether it should wait for a `send` grad.
```python
# pseudo code to demonstrate how send works in forward
def find_global_lineage(tensor):
    # find local lineage
    recvs = find_recvs(tensor.grad_fn)
    dep_ids = {recv.id for recv in recvs}
    # find global lineage
    dep_ids.update({dep_id for recv in recvs for dep_id in recv.dep_ids})
    return dep_ids

def send(func, tensors, on):
    msg = Message(func)
    for tensor in tensors:
        lineage = find_global_lineage(tensor)
        # connect send to autograd graph
        send = SendFunc()
        send.next = tensor.grad_fn
        # remember the send by its id
        RpcAgent.send_map[send.id] = send
        # coalesce data
        msg.data.append((tensor, send.id, lineage))
    send_msg(msg, on)

def recv(func, data, from):
    tensors = []
    for tensor, send_id, lineage in data:
        # use send_id as recv_id, and remember global lineage
        recv = RecvFunc(send_id, lineage)
        tensor.grad_fn = recv
        tensors.append(tensor)
    return func(tensors)
```
**Backward pass**:
On the node that calls `torch.distributed.backward`:
1. Find all `send` functions in the lineage of the loss tensor. In the above example, it will be all 5 `send` functions. These ids will be propagated to the `recv` functions and will be passed to the counterpart `send` functions accordingly.
1. Optimizations can be added, e.g., drop unnecessary ids in backward pass to reduce message size.
On every node:
1. Upon receiving the first message (be it a dedicated discovery message or the grad of a send), record its `autograd_context_id`, and retrieve all participating `send` ids from the message. Compute dependency counts from those `send` functions (and also from the loss `grad_fn` if the loss is on this node). Set the dependency count for `send` functions to 1. If any autograd function has a dependency count of 0, put it into the ready queue.
2. Upon receiving a `send` grad, decrement the dependency count of that `send` by 1, and add it to the ready queue. Note this is done on an `RpcAgent` thread, and some autograd engine thread will pick up the autograd function for execution.
```python
# pseudo code to demonstrate backward
graph_tasks = {}

def backward(loss):
    global graph_tasks
    autograd_context_id = gen_autograd_id()
    lineage = find_global_lineage(loss)
    # these sends will participate in the autograd pass
    roots = local_sends.intersection(lineage)
    # propagate the autograd_id and deps info to all
    # participating workers. This is non-blocking and can
    # run concurrently with the real backward computation.
    # This step is not absolutely necessary, but can help other
    # workers to kick off autograd earlier.
    disseminate(autograd_context_id, lineage)
    # below is a handwaving impl to show how it works with local autograd engine
    graph_task = GraphTask()
    graph_tasks[autograd_context_id] = graph_task
    roots.append(loss.grad_fn)
    # setup dependency count properly
    compute_dependencies(GraphRoot(roots), graph_task)
    # insert the task to local engine ready queue. Only the FunctionTask
    # for loss is inserted now, send FunctionTasks will be inserted later
    # when their grad becomes available.
    ready_queue.push_back(FunctionTask(graph_task, loss.grad_fn, ...))
    return autograd_context_id

def on_grad_send(send_id, grad, autograd_id):
    global graph_tasks
    graph_task = graph_tasks[autograd_id]
    send_func = RpcAgent.send_map[send_id]
    ready_queue.push_back(FunctionTask(graph_task, send_func, grad))
```
### Distributed Autograd Algorithm Fast mode
The problem with the above approach is that including ids in `send` / `recv` messages incurs overhead, especially when there are a lot of tensors communicated across multiple workers. And this discovery phase is only necessary when running autograd on a subgraph. For example, `f1.sum().backward()` requires the discovery phase to avoid waiting for `send3`, but it is easier for `i.sum().backward()` as all `send` functions are involved in the backward pass. So, we propose to have one additional mode for distributed autograd that bypasses `send` / `recv` dependency discovery in both forward and backward **if all `send` functions for non-leaf or `requires_grad` tensors will receive grad in the backward pass**. The mode can be toggled when initializing RPC agents:
```python
# all_requires_grad (bool): If True, the application guarantees that all
# send functions on non-leaf or requires_grad tensors will receive grad
# in the backward pass. Hence, we can skip the distributed dependency
# discovery algorithm (fast mode). If False, run smart mode, where
# messages between send/recv will contain dependency ids in both forward
# and backward pass. (default False)
torch.distributed.init_rpc(name, backend="pg", all_requires_grad=False)
```
Internally, `RpcAgent` will create a thread-local driver ID, where a driver is the worker that pieces together the autograd graph. In the above example, `Worker3` is the driver. In the forward pass, every `send` function originated from this driver will be tagged with its thread-local driver ID, and this applies to all downstream (upstream in the autograd graph) `send` functions as well. This can be done either by propagating this driver ID to RPC calls recursively, or by doing an active driver-ID discovery by walking the autograd graph before sending a tensor. If this information is ambiguous, e.g., one `send` function traces back to two upstream (downstream in the autograd graph) `recv` functions from two different drivers, it will throw an error. In the backward pass, the thread-local driver ID of the loss will be included in the entire autograd execution to identify participating `send` functions. Note that, in this mode, the application cannot keep two disjoint autograd graphs alive at the same time, as that would break the assumption that all sends (originated from the driver) will receive grad in the backward pass.
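The ambiguity check can be sketched as follows (a hypothetical, simplified simulation in pure Python — not the proposed API): each `send` remembers the first driver that tagged it, and a second, different driver makes the fast-mode assumption ambiguous and raises.

```python
# Toy model of the fast-mode driver-ID tagging: one driver per send.
def tag_send(send_tags, send_id, driver_id):
    # remember which driver first tagged this send; a conflicting
    # second driver breaks the "single driver" assumption
    prev = send_tags.setdefault(send_id, driver_id)
    if prev != driver_id:
        raise RuntimeError("ambiguous driver for send " + send_id)

tags = {}
tag_send(tags, "send4", "worker3")
tag_send(tags, "send4", "worker3")      # same driver again: fine
raised = False
try:
    tag_send(tags, "send4", "worker5")  # a second driver: error
except RuntimeError:
    raised = True
assert raised
```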
### Concurrent distributed Backward passes
```python
A = torch.rand(2, 2)
B = torch.rand(2, 2)

# on all workers
def add() -> Tensor:
    global A, B
    return A + B

# on worker0
C = torch.remote(add, on="worker2").to_here()
C.sum().backward()

# on worker1
C = torch.remote(add, on="worker2").to_here()
C.sum().backward()
```
In the above example, there are two concurrent backward passes triggered by `worker0` and `worker1` respectively, and both will reach `worker2`. To avoid races, the distributed autograd engine will use the globally unique `autograd_context_id` to create a dedicated context on every participating worker. This `autograd_context_id` is later passed to the optimizer to apply gradients. More concretely, this would work as follows:
1. Compute all the leaf nodes in the autograd graph.
2. As part of running distributed backwards, use the outputs parameter of the autograd engine to avoid executing `AccumulateGrad` for the leaf nodes we have and instead return the appropriate `output_edges` to execute for accumulating gradients.
3. Store the `output_edges` with the `autograd_context_id`. This would ensure multiple backward passes won't accumulate gradients in the same context.
4. This completes the backward pass and gradients are accumulated in the autograd engine per `autograd_context_id`.
5. Now we run the optimizer on each of the worker nodes and pass the `autograd_context_id` to the optimizer.
6. The optimizer applies all the gradients to the leaf nodes that we computed originally.
7. The context and enclosing gradients should be destroyed when the `autograd_context_id` is destructed on the caller of `backward()`.
Some pseudo-code to illustrate this:
```python
optimizer = dist.optimizer(model)
loss = model(inputs)
bw_ctx_id = dist.autograd.backward(loss, timeout=60) # timeout of 60s
optimizer.step(bw_ctx_id)
```
## RRef
(more details are described in #26759)
`RRef` is an important concept for building a distributed autograd graph. Each `RRef` is owned by a single worker (i.e., owner) and can be used by multiple users. The owner stores the real data referenced by its `RRef`s, and keeps track of the global reference counts for its `RRef`s. Every `RRef` can be uniquely identified by a global id `ref_id`, which is assigned at the time it is first created either on a user or on the owner.
The owner only keeps one `RRef` instance for each data object, while users can fork as many `RRef` instances as necessary. All usage on the owner should retrieve the `RRef` instance using the globally unique `ref_id`. A fork of `RRef` will be created when it is used as an argument or return value in a RPC call, but users don't need to worry about forking/forwarding and reference counting (RC) `RRef`s. These will be handled transparently, and every fork will also have its own `fork_id`, which is guaranteed to be unique across all `RRef` instances for the same data object.
`RRef` needs to support fast and scalable RPC. Hence, in the RC design, we avoid using any global master to keep `RRef` states. Besides, when worker X invokes RPC on worker Y, Y should be able to start immediately after receiving the RPC request, without waiting for any third-party owner Z (unless Y needs to pull real data from Z), even if neither X nor Y owns the `RRef`. We propose the following algorithm:
1. If the owner is the RPC caller, the owner will update RC for the `RRef` accordingly.
2. If the owner is the RPC callee, the owner will drop the new fork, and use the unique `RRef` id in the fork to access its singleton local `RRef` instance.
3. If the RPC is between two users:
1. The caller sends an RPC message to the callee, and also notifies the owner of the new fork.
2. The owner, upon receiving the notification, updates its local RC and then tells the callee the new fork is now known by the owner.
3. The callee can start executing the RPC as soon as it receives the RPC message from the caller, and does not need to wait for the message from the owner. However, it cannot delete its local `RRef` fork until the owner's message arrives.
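Case 3 can be illustrated with a toy simulation (pure Python, hypothetical names — a sketch of the protocol, not its implementation): the callee tracks two independent conditions, RPC completion and the owner's acknowledgement, and may only drop its fork when both hold.

```python
# The callee runs the RPC immediately but keeps its fork pinned until
# the owner confirms it has recorded the new fork in its ref count.
class Fork:
    def __init__(self):
        self.acked = False   # owner's "fork is known" message arrived
        self.done = False    # the RPC itself finished

    def owner_ack(self):
        self.acked = True

    def finish_rpc(self):
        self.done = True

    def can_delete(self):
        return self.acked and self.done

f = Fork()
f.finish_rpc()
assert not f.can_delete()   # RPC done, but owner has not confirmed yet
f.owner_ack()
assert f.can_delete()       # both conditions hold: safe to drop the fork
```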
### Reference Count
The right time to delete an `RRef` on the owner is when there are no living forks on any user and Python GC also agrees to delete the `RRef` instance on the owner. The tricky part is to determine if there are any living forks.
A user can get a fork in three situations:
1. Receiving a fork from the owner.
2. Receiving a fork from another user.
3. Creating a new `RRef` fork owned by another worker.
`#1` is the simplest case where the owner initiates the fork, and hence it can easily increase local RC. The only requirement is that any fork must notify the owner before destruction. Hence, we need the first guarantee:
*G1. The owner will be notified when any fork is deleted.*
Note that the notification might come delayed or out-of-order.
With `#2` and `#3`, it is possible that the owner only partially knows the `RRef` fork graph, or does not know it at all. For example, the `RRef` could be constructed on a user, and before the owner receives the RPC call, the creator user might have already shared the `RRef` with other users, and those users could further share the `RRef`. One invariant is that the fork graph of any `RRef` is a tree rooted at the owner, because forking an `RRef` always creates a new `RRef` instance, and hence every `RRef` has a parent. One nasty detail is that when an `RRef` is created on a user, technically the owner is not its parent, but we still consider it that way, and it does not break the argument below.
The owner's view on any node (fork) in the tree has three stages: 1) **unknown** → 2) **known** → 3) **deleted**, and the owner's view on the entire tree keeps changing. The owner deletes its `RRef` instance when it thinks there are no living forks, i.e., all the forks are either indeed deleted or unknown. Therefore, the dangerous case is when some forks are unknown and others are deleted. We only need a simple guarantee to prevent this situation:
*G2. No fork x can be deleted on a user before the owner knows x’s parent fork.*
This works because owner's view on x can only change from **known** to **deleted** when x's parent is **known** or **deleted**. If the parent is **known**, owner will not delete local `RRef`. If the parent is **deleted**, this rule recursively applies to the parent's parent, until it reaches the root (owner). To implement the guarantee, we only need to make the caller include its own `fork_id` when notifying the owner on a new fork.
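The recursion in this argument can be sketched directly (toy code, hypothetical names — an illustration of the reasoning, not the implementation): walk up the fork tree and treat a deleted fork as safe to forget only once all of its ancestors are resolved.

```python
# Fork tree: x's parent is p, p's parent is the owner (the root).
parent = {"x": "p", "p": "owner"}
state = {"x": "deleted", "p": "known", "owner": "known"}

def safe_to_free(node):
    # a deleted fork is only "really gone" from the owner's perspective
    # once every ancestor on the path to the root is deleted too
    if node == "owner":
        return True
    return state[node] == "deleted" and safe_to_free(parent[node])

assert not safe_to_free("x")   # parent p is still known: keep the value
state["p"] = "deleted"
assert safe_to_free("x")       # whole path resolved: owner may free
```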
G1 and G2 guarantee correct RC, but do not prevent a user from deleting a fork before its own prior RPC calls using that fork have finished. This should be OK, because when the caller deserializes the RPC message, it would hold a reference to that `RRef`, preventing it from being deleted.
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera | feature,triaged,module: rpc | medium | Critical |
470,609,917 | godot | Can't change color space of vertex colors in GLES2 using a SpatialMaterial | **Godot version:**
3.1 stable
**OS/device including version:**
GNU/Linux x64
**Issue description:**
Vertex colors look very different in GLES2 and the "Is Srgb" toggle in the material doesn't do anything
Note: the "Is Srgb" toggle does make a difference in GLES3, but the colors look right when it's off.
Edit: This happens when I export the model in the gltf2 format, when I export it in .dae with the default collada exporter it looks bad in gles3 without "Is Srgb" turned on so I guess the issue is that you can't change the color space of vertex colors in GLES2.
Comparisons:
Blender

Godot(GLES2)

Godot(GLES3)

| bug,topic:rendering,confirmed,topic:3d | medium | Major |
470,614,096 | TypeScript | Convert to ES2015 class do not recognize inheritance |
Issue Type: <b>Bug</b>
```js
function Derived() {
    Base.call(this, /*some base args*/);
    /* some other constructor work */
};
Derived.prototype = new Base();
Derived.prototype.constructor = Derived;
```
must be converted to
```js
class Derived extends Base {
    constructor() {
        super(/*some base args*/);
        /* some other constructor work */
    }
}
```
Right now it converted to
```js
class Derived {
    constructor() {
        Base.call(this, /*some base args*/);
        /* some other constructor work */
    }
}
;
Derived.prototype = new Base();
Derived.prototype.constructor = Derived;
```
VS Code version: Code 1.36.1 (2213894ea0415ee8c85c5eea0d0ff81ecc191529, 2019-07-08T22:59:35.033Z)
OS version: Windows_NT x64 10.0.17134
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i7-8550U CPU @ 1.80GHz (8 x 1992)|
|GPU Status|2d_canvas: enabled<br>flash_3d: enabled<br>flash_stage3d: enabled<br>flash_stage3d_baseline: enabled<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>native_gpu_memory_buffers: disabled_software<br>oop_rasterization: disabled_off<br>protected_video_decode: enabled<br>rasterization: enabled<br>skia_deferred_display_list: disabled_off<br>skia_renderer: disabled_off<br>surface_synchronization: enabled_on<br>video_decode: enabled<br>viz_display_compositor: disabled_off<br>webgl: enabled<br>webgl2: enabled|
|Load (avg)|undefined|
|Memory (System)|7.89GB (2.36GB free)|
|Process Argv||
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (14)</summary>
Extension|Author (truncated)|Version
---|---|---
search-crates-io|bel|1.2.1
better-toml|bun|0.3.2
bracket-pair-colorizer-2|Coe|0.0.28
pegjs-syntax|fut|0.1.1
nwscript|glo|0.0.2
restructuredtext|lex|112.0.0
hg|mrc|1.3.0
vscode-language-pack-ru|MS-|1.36.2
rust|rus|0.6.1
crates|ser|0.4.3
vscode-hexdump|sle|1.7.2
code-spell-checker|str|1.7.17
code-spell-checker-russian|str|0.2.2
sort-lines|Tyr|1.8.0
</details>
<!-- generated by issue reporter --> | Suggestion,Awaiting More Feedback | low | Critical |
470,614,299 | TypeScript | Feature Request: Split out JsDoc comments for constructor function when converting a function to an ES2015 class | Add the ability to convert JsDoc comments in the following situation:
```js
/**
* @class
* Some constructor description.
* @classdesc
* Some class description
*/
function Class() {}
```
must be converted to
```js
/**
* Some class description
*/
class Class {
/**
* Some constructor description.
*/
constructor() {}
}
```
Right now you get the following result:
```js
/**
* @class
* Some constructor description.
* @classdesc
* Some class description
*/
class Class {
constructor() {}
}
```
| Suggestion,Awaiting More Feedback,Domain: Refactorings | low | Minor |
470,635,592 | godot | Why is _ptr in cowdata.h mutable? | I tried compiling without the keyword and it compiles just fine... are there any use cases where this is necessary?
https://github.com/godotengine/godot/blob/e44041ae41c6b76678f52f772abb5e4834ed40a1/core/cowdata.h#L56 | discussion,topic:core | low | Minor |
470,651,082 | TypeScript | [Feature Request] Print project directory when diagnostics enabled |
## Search Terms
diagnostics, project directory
## Suggestion
I have 40+ sub-projects in a `composite` project right now, and a full build takes about 10 minutes.
It's hard to troubleshoot which projects are the bottleneck when I don't even know which projects are being built.
Enabling diagnostics only tells me how long a project took but doesn't tell me what that project is.
So, my feature request is to print the project directory when `diagnostics` is enabled.
-----
At the moment, I've modified my own `tsc.js`,
```ts
//Find this line
reportTimeStatistic("Total time", programTime + bindTime + checkTime + emitTime);
```
```ts
//Replace with this snippet
reportTimeStatistic("Total time", programTime + bindTime + checkTime + emitTime);
if (program.getRootFileNames().length > 0) {
const path = program.getRootFileNames()[0].replace(/\/[^/]+?\.[^.]+?$/, "");
reportStatisticalValue("Project", (
path.indexOf(program.getCurrentDirectory()) == 0 ?
path.substr(program.getCurrentDirectory().length) :
path
));
}
```
It works enough for me,
```
Files: 2944
Lines: 279749
Nodes: 1473021
Identifiers: 417022
Symbols: 295304
Types: 73
Memory used: 748010K
Assignability cache size: 0
Identity cache size: 0
Subtype cache size: 0
I/O Read time: 0.02s
Parse time: 0.21s
Program time: 0.84s
Bind time: 0.18s
printTime time: 0.01s
Emit time: 0.01s
I/O Write time: 0.01s
Total time: 1.03s
Project: /src/app-route-handler
```
-----
I don't really know much about the TypeScript API. There's probably a proper way to get this information, but with my limited knowledge I managed to hack that together.
## Checklist
My suggestion meets these guidelines:
* [X] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [X] This wouldn't change the runtime behavior of existing JavaScript code
* [X] This could be implemented without emitting different JS based on the types of the expressions
* [X] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [X] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback | low | Critical |
470,672,766 | godot | Confusing EditorPlugin.forward_*_gui_input Behavior |
**Godot version:** 3.1.1
**OS/device including version:** Solus Gnome 4.0
**Issue description:**
The return value of `EditorPlugin.forward_canvas_gui_input` and `EditorPlugin.forward_spatial_gui_input` is confusing given the methods' names. Let me explain.
Reading the method name, we might think it forwards the `InputEvent` to other classes when it returns `true`, right? Because if we ask ourselves "Will the `InputEvent` be forwarded?", we can check what the method returns.
But it turns out to be the opposite: the `InputEvent` is forwarded to other classes when the method returns `false`, and it is consumed (prevented from reaching other classes) when the method returns `true`.
**Steps to reproduce:**
1. Create a dummy plugin
2. Implement EditorPlugin.handles method
3. Implement EditorPlugin.forward_canvas_gui_input
4. Add the following:
```
func forward_canvas_gui_input(event):
    var forward = true
    print(event)
    return forward
```
Then try the following:
1. Add a `Node2D` to a scene
2. Try to move it
You will see that the [InputEventMouseMotion] is printed but your `Node2D` doesn't move. Set `forward = false` instead and repeat: now it moves.
**Minimal reproduction project:**
[dummy_plugin.zip](https://github.com/godotengine/godot/files/3413621/dummy_plugin.zip)
| discussion,topic:plugin,topic:input | low | Critical |
470,678,421 | rust | Provide From<E> for Box<dyn Error + Send> and Box<dyn Error + Sync> | [As explained in the Rust User forum](https://users.rust-lang.org/t/impl-e-error-from-e-for-box-dyn-error-send/30507), I noticed that the following impls are not available:
```rust
impl<'a, E: Error + Send + 'a> From<E> for Box<dyn Error + Send + 'a> {}
impl<'a, E: Error + Sync + 'a> From<E> for Box<dyn Error + Sync + 'a> {}
```
The first one is pretty useful when working with [rayon](https://crates.io/crates/rayon) and errors, which only need to be `Send` to be passed across threads. In theory, a stateful error with a `Cell` or a `RefCell` is a possible example of `dyn Error + Send + !Sync`.
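To make the gap concrete, here is a small sketch (the error type and function names are made up) showing that the `Send + Sync` box already converts via `?` thanks to its existing `From` impl, while the `Send`-only box needs an explicit boxing step today:

```rust
use std::error::Error;
use std::fmt;

// A made-up error type that is Send + Sync (it has no fields).
#[derive(Debug)]
struct MyError;

impl fmt::Display for MyError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "my error")
    }
}

impl Error for MyError {}

fn fallible() -> Result<(), MyError> {
    Err(MyError)
}

// Works today: std provides `From<E> for Box<dyn Error + Send + Sync>`,
// so `?` converts the error automatically.
fn via_send_sync() -> Result<(), Box<dyn Error + Send + Sync>> {
    fallible()?;
    Ok(())
}

// With the Send-only box there is no such `From` impl, so the
// conversion must be spelled out by hand.
fn via_send_only() -> Result<(), Box<dyn Error + Send>> {
    fallible().map_err(|e| Box::new(e) as Box<dyn Error + Send>)?;
    Ok(())
}

fn main() {
    assert!(via_send_sync().is_err());
    assert!(via_send_only().is_err());
}
```

With the proposed impls, `via_send_only` could use a bare `?` just like `via_send_sync`.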
The second impl is mainly for coherence and symmetry; I do not yet have a realistic use case in mind.
In any case, IMHO this feature should not require an RFC. If so, I can create a PR and follow these steps:
- [ ] Implement the two impls
- [ ] Adjust the docs
This is my first contribution to Rust, and I am not sure whether this feature requires a stabilization process. If it does, just let me know and I will update this document to include a feature gate and everything else needed.
Any feedback that helps me do things properly and makes your review less painful is appreciated. | T-libs-api,C-feature-request | low | Critical |
470,684,091 | rust | Library load disallowed by System Policy on macOS 10.15 beta 4 | On macOS 10.15 beta 4 (19A512f), when I try to run `rustc` (or invoke it through a command like `cargo build`) I get the following message:
```
dyld: Library not loaded: @rpath/librustc_fs_util-e4dabb5766b9af43.dylib
Referenced from: /Users/soren/.rustup/toolchains/stable-x86_64-apple-darwin/bin/rustc
Reason: no suitable image found. Did find:
/Users/soren/.rustup/toolchains/stable-x86_64-apple-darwin/bin/../lib/librustc_fs_util-e4dabb5766b9af43.dylib: code signature in (/Users/soren/.rustup/toolchains/stable-x86_64-apple-darwin/bin/../lib/librustc_fs_util-e4dabb5766b9af43.dylib) not valid for use in process using Library Validation: Library load disallowed by System Policy
/Users/soren/.rustup/toolchains/stable-x86_64-apple-darwin/bin/../lib/librustc_fs_util-e4dabb5766b9af43.dylib: stat() failed with errno=1
/Users/soren/.rustup/toolchains/stable-x86_64-apple-darwin/lib/librustc_fs_util-e4dabb5766b9af43.dylib: code signature in (/Users/soren/.rustup/toolchains/stable-x86_64-apple-darwin/lib/librustc_fs_util-e4dabb5766b9af43.dylib) not valid for use in process using Library Validation: Library load disallowed by System Policy
/Users/soren/.rustup/toolchains/stable-x86_64-apple-darwin/lib/librustc_fs_util-e4dabb5766b9af43.dylib: stat() failed with errno=1
fish: 'rustc' terminated by signal SIGABRT (Abort)
```
This happens even if I give all of the binaries in `~/.rustup/toolchains/nightly-x86_64-apple-darwin/bin` Developer Tools permission in System Preferences.
I would tell you what version of `rustc` I have installed, but I can't run it to check. | O-macos | low | Critical |
470,704,402 | rust | MaybeUninit<T> could be Copy for all T | At the moment it only implements `Copy` when `T: Copy`, but there's no memory-safety reason for it not to always be `Copy`.
| C-enhancement,T-lang,T-libs-api | low | Major |
470,708,115 | node | Memory leak with debugger running. | * **Version**: v11.15.0
* **Platform**: Linux
**Edit:** Removed information about https://github.com/nodejs/node/issues/28786 making this hard to debug, as it has been fixed in ``v12.7.0``.
This bug might also be the same as, or related to, [this bug](https://github.com/nodejs/node/issues/28420), but because I'm unsure whether it's the same and because this one contains a minimal example, I've filed a separate issue.
Run this with ``node --inspect script.js`` and attach chrome:
```js
// edit: simplified example
async function noop() {}
async function run() {
while (true) {
await noop();
}
}
run();
```
You can see memory usage getting out of hand as soon as you attach google chrome:

*Before connecting with Chrome's debugger.*

*A few seconds after connecting with Chrome's debugger.*
This is what you see if you manage to grab a heapdump before it crashes:

The eventual crash when it runs out of memory is in a comment below.
| memory,inspector | low | Critical |