id | repo | title | body | labels | priority | severity |
---|---|---|---|---|---|---|
536,923,860 | go | cmd/link: crash with -E <nonexisting symbol> | <pre>
$ go version
go version devel +3a3093d5c7 Mon Dec 9 21:50:59 2019 +0000 linux/amd64
</pre>
```
$ cat main.go
package main
func main() {
}
$ go build -ldflags="-E nonexistent" main.go
# command-line-arguments
panic: runtime error: index out of range [-1]
goroutine 1 [running]:
cmd/link/internal/ld.(*Link).pclntab(0xc000082480)
/home/elias/dev/go-tip/src/cmd/link/internal/ld/pcln.go:374 +0x1e40
cmd/link/internal/ld.Main(0x8912a0, 0x10, 0x20, 0x1, 0x7, 0x10, 0x6bddc3, 0x1b, 0x6b9e74, 0x14, ...)
/home/elias/dev/go-tip/src/cmd/link/internal/ld/main.go:243 +0xc62
main.main()
/home/elias/dev/go-tip/src/cmd/link/main.go:68 +0x1bc
``` | NeedsDecision | low | Critical |
536,957,118 | angular | XML prolog data being printed when including SVG in `templateUrl` | <!--
Oh hi there! 😄
To expedite issue processing please search open and closed issues before submitting a new one.
Existing issues often contain information about workarounds, resolution, or progress updates.
-->
# 🐞 Bug report
### Command (mark with an `x`)
<!-- Can you pin-point the command or commands that are affected by this bug? -->
<!-- ✍️edit: -->
- [ ] new
- [x] build
- [x] serve
- [ ] test
- [ ] e2e
- [ ] generate
- [ ] add
- [ ] update
- [ ] lint
- [ ] xi18n
- [ ] run
- [ ] config
- [ ] help
- [ ] version
- [ ] doc
### Is this a regression?
<!-- Did this behavior use to work in the previous version? -->
<!-- ✍️--> No
### Description
<!-- ✍️--> Since v8 we have been able to reference SVG files in the `templateUrl` of a component. However, if the SVG includes an XML declaration, Angular prints the declaration on the screen and then displays the SVG as expected. Since XML prolog data is valid in an SVG file, but not for inline SVG elements in HTML, I would expect the compiler to strip out all XML prolog data when compiling SVG files.
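For illustration, the stripping the compiler could do is roughly this (a minimal Python sketch of the idea, not Angular's actual build pipeline):

```python
import re

# Matches a leading XML declaration and an optional DOCTYPE, plus
# surrounding whitespace, at the very start of the file.
_PROLOG = re.compile(r'^\s*<\?xml[^>]*\?>\s*(<!DOCTYPE[^>]*>\s*)?')

def strip_xml_prolog(svg: str) -> str:
    # Remove prolog data so the markup is safe to inline in HTML;
    # files without a prolog pass through unchanged.
    return _PROLOG.sub('', svg, count=1)
```

A file starting with `<?xml version="1.0" encoding="UTF-8"?>` and a DOCTYPE would come out beginning directly with `<svg ...>`.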
## 🔬 Minimal Reproduction
<!--
Simple steps to reproduce this bug.
Please include: commands run (including args), packages added, related code changes.
If reproduction steps are not enough for reproduction of your issue, please create a minimal GitHub repository with the reproduction of the issue.
A good way to make a minimal reproduction is to create a new app via `ng new repro-app` and add the minimum possible code to show the problem.
Share the link to the repo below along with step-by-step instructions to reproduce the problem, as well as expected and actual behavior.
Issues that don't have enough info and can't be reproduced will be closed.
You can read more about issue submission guidelines here: https://github.com/angular/angular-cli/blob/master/CONTRIBUTING.md#-submitting-an-issue
-->
1.
```
ng new svg-app --defaults
ng g component svg --module app
```
2. Rename `svg.component.html` to `svg.component.svg` and replace contents with:
```
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" width="18" height="18" viewBox="0 0 18 18">
<defs>
<path id="a" d="M0 0h18v18H0V0z"/>
</defs>
<clipPath id="b">
<use xlink:href="#a" overflow="visible"/>
</clipPath>
<path clip-path="url(#b)" d="M15 5H4C1.8 5 0 6.8 0 9s1.8 4 4 4h10v-1H4c-1.7 0-3-1.3-3-3s1.3-3 3-3h11c1.1 0 2 .9 2 2s-.9 2-2 2H6c-.6 0-1-.4-1-1s.4-1 1-1h8V7H6c-1.1 0-2 .9-2 2s.9 2 2 2h9c1.7 0 3-1.3 3-3s-1.3-3-3-3z"/>
</svg>
```
>*SVG copied from https://github.com/google/material-design-icons/blob/master/file/svg/design/ic_attachment_18px.svg?short_path=2fb6055*
3. Update the `templateUrl` in `svg.component.ts` to be `./svg.component.svg`
4. Add the `<app-svg></app-svg>` element tag to `app.component.html`
5. Run `ng serve`
## 🔥 Exception or Error
<pre><code>
// Example output
<?xml version="1.0" encoding="UTF-8"?>
</code></pre>
## 🌍 Your Environment
<pre><code>
     _                      _                 ____ _     ___
    / \   _ __   __ _ _   _| | __ _ _ __     / ___| |   |_ _|
   / △ \ | '_ \ / _` | | | | |/ _` | '__|   | |   | |    | |
  / ___ \| | | | (_| | |_| | | (_| | |      | |___| |___ | |
 /_/   \_\_| |_|\__, |\__,_|_|\__,_|_|       \____|_____|___|
                |___/
Angular CLI: 8.3.19
Node: 12.8.1
OS: darwin x64
Angular: 8.2.14
... animations, common, compiler, compiler-cli, core, forms
... language-service, platform-browser, platform-browser-dynamic
... router
Package Version
------------------------------------------------------------
@angular-devkit/architect 0.803.19
@angular-devkit/build-angular 0.803.19
@angular-devkit/build-ng-packagr 0.803.19
@angular-devkit/build-optimizer 0.803.19
@angular-devkit/build-webpack 0.803.19
@angular-devkit/core 8.3.19
@angular-devkit/schematics 8.3.19
@angular/cdk 8.2.3
@angular/cli 8.3.19
@angular/material 8.2.3
@ngtools/webpack 8.3.19
@schematics/angular 8.3.19
@schematics/update 0.803.19
ng-packagr 5.7.1
rxjs 6.5.3
typescript 3.5.3
webpack 4.39.2
</code></pre>
**Anything else relevant?**
Same issue exists on 9.x release candidates.
Tested on Chrome and Safari (macOS)
Here is a StackBlitz repro: https://stackblitz.com/edit/angular-yvcjdp
| freq1: low,area: core,core: basic template syntax,type: use-case,cross-cutting: SVG,P4 | low | Critical |
536,970,194 | pytorch | Retain Subgraph or Save Intermediate Grad support? | ## 🚀 Feature
<!-- A clear and concise description of the feature proposal -->
Hi, I'd like to know if it is possible to support *retaining a **subgraph*** for backward calculation.
Or some way to save exactly the **intermediate gradients** that I need for later use. The gradient hook is not precise enough.
## Motivation
I've tried the currently available **retain_graph=True**; it works fine when the model is small, but causes an **out-of-memory error when the model gets bigger**. Actually, I only need some intermediate gradient values, related to 1/50 of the graph or an even smaller subgraph, to be retained — not the structure of the whole subgraph.
I also tried hooks, and hooks are just not precise enough to get exactly the gradient tensors I need, especially when a multi-head model is used. The hook always sums them up, while I need the exact values of each gradient.
<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->
## Pitch
### Two examples: one for retaining a subgraph, the other for saving a specific intermediate grad. Both should work.
#### Subgraph example:
```python
import torch
from torch.autograd import grad
input_x = torch.tensor([1.0, 2.0]) # input data
groundtruths = torch.tensor([...]) # some ground truth labels
model = build_model(...) # This model might be a very big multi-head model
preds = model(input_x) # multi-head prediction
# This is $dx$ of $\frac{dy}{dx}$
x1 = model.head_a.fc1.weight
x2 = model.head_a.fc1.bias
# multi-head loss, assume 3 heads, loss1 is $dy$ of $\frac{dy}{dx}$
loss1, loss2, loss3 = loss_eval(preds, groundtruths)
loss = sum([loss1, loss2, loss3])
# only retain exactly the needed part, not whole graph
loss.backward(retain_graph=True, retain_subgraph_y=[loss1], retain_subgraph_x=[x1, x2])
# The following gradients from part of the graph are what I need,
# but retain whole graph uses too much memory
# Here retain_graph only retain the subgraph, others released for memory efficiency.
gradient1 = grad([loss1], [x1], allow_unused=True, retain_graph=True)
gradient2 = grad([loss1], [x2], allow_unused=True, retain_graph=True)
```
<!-- A clear and concise description of what you want to happen. -->
## Alternatives
#### Example of Save Specific Intermediate Grad:
I think this is **much *simpler* and more *efficient***. The definitions of the model/input/groundtruth/intermediate variables are the same as above.
```python
# Before backward, use a dict to record the gradients need to be saved
model.save_grad_during_backward(name='g1', y=[loss1], x=[x1])
model.save_grad_during_backward(name='g2', y=[loss1], x=[x2])
# No need to save any part of the graph, only save specific gradients during backpropagation
loss.backward(retain_graph=False)
# Get exactly values of gradients
gradient1 = model.saved_grad['g1']
gradient2 = model.saved_grad['g2']
```
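To make the save-specific-gradients idea concrete outside of PyTorch, here is a toy scalar reverse-mode sketch (the `Var`/`backward` names and API are invented for illustration; a real implementation would live inside autograd):

```python
class Var:
    """A scalar value that remembers its parents and local gradients."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # tuple of (Var, local_grad) pairs
        self.grad = 0.0

    def __mul__(self, other):
        return Var(self.value * other.value,
                   ((self, other.value), (other, self.value)))

    def __add__(self, other):
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

def backward(loss, watch=()):
    # Single reverse pass; only grads of watched Vars are returned,
    # analogous to "save specific gradients during backpropagation".
    # (Naive recursion revisits shared nodes; fine for this toy.)
    def visit(node, upstream):
        node.grad += upstream
        for parent, local in node.parents:
            visit(parent, upstream * local)
    visit(loss, 1.0)
    return {name: var.grad for name, var in watch}

x1, x2 = Var(2.0), Var(3.0)
loss = x1 * x2 + x1
print(backward(loss, watch=[("g1", x1), ("g2", x2)]))  # {'g1': 4.0, 'g2': 2.0}
```

The point of the sketch is that recording named gradients during a single backward pass needs no retained graph at all — only the watched accumulators survive.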
<!-- A clear and concise description of any alternative solutions or features you've considered, if any. -->
## Additional context
<!-- Add any other context or screenshots about the feature request here. -->
cc @ezyang @SsnL @albanD @zou3519 @gqchen | module: autograd,triaged | low | Critical |
536,985,727 | flutter | Cannot run flavored application on iOS Device because flutter uses one certificate for project | We have a flavored Flutter application. Each flavor has a different bundle ID from a different Apple account.
The code signing is set up correctly in the Xcode build configurations; I can archive and deploy all my applications without a problem. However, I can only launch one flavor on an iOS device from either Android Studio or VS Code, because there is a "global" iOS signing certificate set for the project. I.e., if the "project certificate" is set to the certificate of flavor "A", when I try to run flavor "B" on an iOS device directly from either IDE, it will try to sign flavor "B" with the certificate of flavor "A" and the Xcode build will fail.
The only way I have to launch the apps on a real iOS device is to perform a `flutter build ios debug --flavor <flavor> -t <main-file.dart> --no-codesign` and then run the desired flavor directly from Xcode, but of course when doing that I have no debugging on the Dart code.
I use this flow of performing two builds when archiving and distributing the applications and it works as expected, the only problem I face is with the debug.
Since this issue is related only to code signing, the flavors also works as expected on the iOS Simulator, I can launch any flavor I want on debug on a simulator.
I could use `flutter config --clear-ios-signing-cert` to change the cert, but of course this isn't the ideal solution because I would still be stuck with one flavor at a time.
Am I doing any configuration wrong? Is there any way to configure flutter in order to get the code signing certificate for debug builds from the iOS configurations on Xcode instead of relying on the certificate set for the project?
**Steps to reproduce**
1) I run the **alpha** flavor using `flutter run -t lib/alpha-main.dart --flavor alpha`
2) It asks me to select a development certificate from a list of available certificates, and I select the certificate for **alpha**.
3) The certificate for **alpha** is now set as the default.
4) The app builds and runs on my device as expected.
5) I now try to run the **beta** flavor using `flutter run -t lib/beta-main.dart --flavor beta`
6) `flutter run` tries to sign the **beta** app with the **alpha** signing certificate.
7) The build fails.
```
flutter doctor -v
[✓] Flutter (Channel stable, v1.9.1+hotfix.6, on Mac OS X 10.15.1 19B88, locale
pt-BR)
• Flutter version 1.9.1+hotfix.6 at /Users/edisonlsm/Library/flutter
• Framework revision 68587a0916 (3 months ago), 2019-09-13 19:46:58 -0700
• Engine revision b863200c37
• Dart version 2.5.0
[✓] Android toolchain - develop for Android devices (Android SDK version 29.0.2)
• Android SDK at /Users/edisonlsm/Library/Android/sdk
• Android NDK location not configured (optional; useful for native profiling
support)
• Platform android-29, build-tools 29.0.2
• Java binary at: /Applications/Android
Studio.app/Contents/jre/jdk/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build
1.8.0_202-release-1483-b49-5587405)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 11.1)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Xcode 11.1, Build version 11A1027
• CocoaPods version 1.7.5
[✓] Android Studio (version 3.5)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin version 41.1.2
• Dart plugin version 191.8593
• Java version OpenJDK Runtime Environment (build
1.8.0_202-release-1483-b49-5587405)
[!] IntelliJ IDEA Community Edition (version 2018.3.4)
• IntelliJ at /Applications/IntelliJ IDEA CE.app
✗ Flutter plugin not installed; this adds Flutter specific functionality.
✗ Dart plugin not installed; this adds Dart specific functionality.
• For information about installing plugins, see
https://flutter.dev/intellij-setup/#installing-the-plugins
[✓] VS Code (version 1.40.2)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.7.1
[✓] Connected device (2 available)
• iPhone • <id> • ios • iOS
13.2.3
• iPhone 11 Pro Max • <id> • ios •
com.apple.CoreSimulator.SimRuntime.iOS-13-1 (simulator)
```
| platform-ios,tool,d: api docs,t: xcode,P3,team-ios,triaged-ios | low | Critical |
536,989,721 | terminal | Move shader files to `hlsl` files and package as a part of app | Follow-up from #3468.
Right now we're just packaging them in the renderer itself, as string literals. As _great_ as that is, we should probably make them their own files.
| Area-Rendering,Product-Terminal,Issue-Task | low | Minor |
537,022,260 | flutter | API should include callback functions to get soft keyboard status, including the keyboard visibility and height. | Text input is a core function of every application, so it is very important for a framework to provide rich APIs around the input experience.
But currently there is no API in Flutter to get the keyboard status.
It should be a callback listener through which we could get the keyboard visibility status and keyboard height, so that we could adjust the app's layout and add overlays beside the keyboard (for example, to dismiss the keyboard).
Even better, the overlay helper could sync its animation with the keyboard showing up.
The following picture shows a typical use of keyboard overlay helpers.
<img src="https://user-images.githubusercontent.com/3226361/70722230-dad1b180-1d31-11ea-9f52-49f705751de2.png" width="400">
| a: text input,c: new feature,framework,c: proposal,P3,team-framework,triaged-framework | low | Minor |
537,051,617 | angular | Routing in Angular Based Web Components using @angular/elements | <!--
Oh hi there! 😄
To expedite issue processing please search open and closed issues before submitting a new one.
Existing issues often contain information about workarounds, resolution, or progress updates.
-->
# 🚀 feature request
### Relevant Package
<!-- Can you pin-point one or more @angular/* packages that are relevant for this feature request? -->
<!-- ✍️edit: --> This feature request is for @angular/elements and @angular/router
### Description
<!-- ✍️--> When attempting to use an Angular app to host a second Angular app wrapped as a Web Component, routing does not work for the hosted app.
### Describe the solution you'd like
<!-- ✍️--> It would be nice if an instance of the hosting app could be passed in so that `router.forChild()` could be used to register the routes for the web component. Another option would be to rework both elements and router so that multiple `forRoot` instances of the router do not cause issues.
### Describe alternatives you've considered
<!-- ✍️--> I have tried named outlets and ui-router. UI-router got me the closest, by using the Angular Router in the hosting application and ui-router in the hosted component. The issue I ran into there was that I could not directly navigate by URL into a sub-route using ui-router.
| feature,freq1: low,area: router,area: elements,feature: under consideration,feature: votes required | high | Critical |
537,063,603 | go | x/build/cmd/coordinator: give buildlets access to suspend themselves for N seconds | Watching the debugging in #35482 (including people running out to buy laptops), I realize we could probably provide some help in the build system.
GCE VMs support suspend:
https://cloud.google.com/sdk/gcloud/reference/alpha/compute/instances/suspend
> gcloud alpha compute instances suspend is used to suspend a Google Compute Engine virtual machine. Suspending a VM is the equivalent of sleep or standby mode: the guest receives an ACPI S3 suspend signal, after which all VM state is saved to temporary storage
So we could have the coordinator do that `suspend` on behalf of buildlets, followed by a `resume` after the {buildlet/user/test}-requested duration.
I figure we'd pass an environment variable to tests containing a URL containing a secret build-specific secret, and tests could hit that URL with a `seconds` parameter to say how long they'd like to be suspended.
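A sketch of what the test side of that protocol might look like (the env-var name and URL shape here are assumptions, not an existing API):

```python
import os
import urllib.parse

def suspend_url(seconds: int) -> str:
    # Hypothetical env var name; the real one would be chosen by the
    # coordinator. The URL would already carry a build-specific secret,
    # so the test only appends how long it wants to be suspended.
    base = os.environ.get(
        "GO_BUILDER_SUSPEND_URL",
        "https://coordinator.example/suspend?key=SECRET",
    )
    sep = "&" if urllib.parse.urlparse(base).query else "?"
    return base + sep + "seconds=" + str(seconds)

print(suspend_url(30))
```

A longtest would then issue an HTTP request to that URL and, on return, compare monotonic and wall-clock elapsed time.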
We'd need to also suspend the buildlet health checking on the coordinator side so they don't fail health checks and get killed.
This would permit writing unit tests that test [program time vs real time](https://github.com/golang/go/issues/35482#issuecomment-563004219), at least in longtest mode.
I assume this would be useful, @aclements, @ianlancetaylor?
/cc @golang/osp-team | Testing,Builders,NeedsDecision,FeatureRequest | low | Critical |
537,065,588 | scrcpy | How can the Android device use the computer keyboard, like F1, F2, F3, ESC, Ctrl+C? Or keep the Android soft keyboard hidden? | question,keyboard | low | Minor |
537,068,970 | pytorch | [JIT] Slice with optional not supported | ## 🐛 Tensor Slice with optional
```python
from typing import Optional

from torch import jit

def slice_with_optional(val, start: int, end: Optional[int] = None):
    return val[start:end]

jit.script(slice_with_optional)
```
> Arguments for call are not valid.
The following operator variants are available:
aten::slice(Tensor(a) self, int dim=0, int start=0, int end=9223372036854775807, int step=1) -> (Tensor(a)):
Expected a value of type 'int' for argument 'end' but instead found type 'Optional[int]'.
...
Originally reported in https://github.com/pytorch/pytorch/issues/24256
Related issue https://github.com/pytorch/pytorch/issues/27543
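As a workaround until this is supported, the Optional can be normalized to a concrete `int` before the slice; here is the idea in plain Python (a hypothetical sketch, not verified against TorchScript):

```python
import sys
from typing import List, Optional

def slice_with_optional(val: List[int], start: int,
                        end: Optional[int] = None) -> List[int]:
    # Branch on None so the slice only ever sees a concrete int,
    # mirroring aten::slice's default end of 2**63 - 1.
    concrete_end = sys.maxsize if end is None else end
    return val[start:concrete_end]

print(slice_with_optional([1, 2, 3, 4], 1))     # [2, 3, 4]
print(slice_with_optional([1, 2, 3, 4], 1, 3))  # [2, 3]
```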
cc @suo | oncall: jit,triaged | low | Minor |
537,093,910 | flutter | [google_maps_flutter] InfoWindow doesn't support iOS DarkMode | **Issue description:**


When the dark mode is activated on iOS, the InfoWindow of a marker is displayed with a white background, the title written in white, and the snippet written in black. As the InfoWindow is not customizable at all (I hope it will soon be a widget), it seems that I can't fix it on my side.
**Target Platform:** iOS
**Target OS version/browser:** 13.2.2
**Devices:** only tested on iOS simulators (iPhone 8 Plus/ iPhone 11 Pro Max)
No problem with flutter analyze
**Flutter doctor:**
[✓] Flutter (Channel stable, v1.12.13+hotfix.5, on Mac OS X 10.14.6 18G1012, locale fr-FR)
• Flutter version 1.12.13+hotfix.5 at /Users/maud.farizon/development/flutter
• Framework revision 27321ebbad (2 days ago), 2019-12-10 18:15:01 -0800
• Engine revision 2994f7e1e6
• Dart version 2.7.0
[!] Android toolchain - develop for Android devices (Android SDK version 29.0.2)
• Android SDK at /Users/maud.farizon/Library/Android/sdk
• Android NDK location not configured (optional; useful for native profiling support)
• Platform android-29, build-tools 29.0.2
• ANDROID_HOME = /Users/maud.farizon/Library/Android/sdk
• ANDROID_SDK_ROOT = /Users/maud.farizon/Library/Android/sdk
• Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b49-5587405)
! Some Android licenses not accepted. To resolve this, run: flutter doctor --android-licenses
[✓] Xcode - develop for iOS and macOS (Xcode 11.2.1)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Xcode 11.2.1, Build version 11B500
• CocoaPods version 1.6.1
[✓] Android Studio (version 3.5)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin version 42.1.1
• Dart plugin version 191.8593
• Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b49-5587405)
[✓] Connected device (2 available)
• Nexus 5X • 01e299881cdfa202 • android-arm64 • Android 8.1.0 (API 27)
• iPhone 8 Plus • BB5B1D58-4A20-46D9-BAD3-1D84FF4A19B5 • ios • com.apple.CoreSimulator.SimRuntime.iOS-13-2 (simulator)
! Doctor found issues in 1 category.
| c: new feature,platform-ios,customer: crowd,p: maps,package,c: proposal,P3,team-ios,triaged-ios | low | Minor |
537,094,070 | flutter | [google_maps_flutter] Need My location button click listener | I want to know when the MyLocation button is clicked so that I can update the map with more relevant markers based on the user's new location.
If this is not coming in the near future, the alternative would be to implement my own location button on top of the map, add a listener, manually get the current location, move the map, and update the markers.
Let me know your plans. Thanks. | c: new feature,p: maps,package,c: proposal,team-ecosystem,P3,triaged-ecosystem | low | Major |
537,098,723 | youtube-dl | Download album cover as a separate file | - [x] I'm reporting a feature request
- [x] I've verified that I'm running youtube-dl version **2019.11.28**
- [x] I've searched the bugtracker for similar feature requests including closed ones
## Description
It would be very helpful to have the ability to download an album cover as a separate file, e.g. named `cover.jpg`, into the album folder, in services like Yandex Music and the like. I know that embedding is possible with `--embed-thumbnail`, but I'm asking for a way to download the cover as a separate file.
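Until such an option exists, a small post-processing step around `--write-thumbnail` can approximate it (a hedged sketch; the flat album-folder layout and `.jpg` extension are assumptions):

```python
import pathlib
import shutil

def rename_thumbnail_to_cover(album_dir: str) -> None:
    # Hypothetical helper: assumes --write-thumbnail saved one or more
    # <title>.jpg files next to the audio; keep the first as cover.jpg.
    d = pathlib.Path(album_dir)
    thumbs = sorted(d.glob("*.jpg"))
    if thumbs:
        shutil.move(str(thumbs[0]), str(d / "cover.jpg"))
```

Having youtube-dl do this natively would avoid the guesswork about which thumbnail belongs to the album.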
This feature request was opened because this is not possible at the moment, as stated [here](https://github.com/ytdl-org/youtube-dl/issues/23377#event-2873922227). | request | low | Critical |
537,112,404 | flutter | Missing cache artifacts (e.g. idevice_id, ideviceinfo, iproxy) | **TL;DR** this can be worked around by wiping out your cache and then running the doctor (`rm -rf flutter_repo/bin/cache && flutter doctor`), but please post here any stacktraces, your `flutter doctor -v`, and the contents of the relevant stamp file (e.g. if `idevice_id` is missing the stampfile would be in `flutter_repo/bin/cache/libimobiledevice.stamp` so that we can identify the source of the problem (I have not been able to reproduce the remaining cases).
There have been cases of the flutter tool not finding cached executables that it expected to be present. https://github.com/flutter/flutter/pull/45267 (which ensures that issuing `flutter run` will update the universal cache) and https://github.com/flutter/flutter/pull/43767 were meant to mitigate this; however, it is still occurring. | tool,P2,team-tool,triaged-tool | low | Minor |
537,130,794 | flutter | Expose capabilities/requirements from dart:ui for text edit size/transform | Today, only Flutter Web needs to know about the size and transform of editable text widgets. However, we calculate it for all platforms and send a method up.
One approach is to guard this with `kIsWeb`, which we would like to avoid.
Another approach (suggested in https://github.com/flutter/flutter/pull/46843#discussion_r357264257) would be to expose the requirement for this from dart:ui so the framework could more cleanly check if it needs to do this work.
Yet another approach would be to move the entire method channel interface into dart:ui, which should be treated separately (and may not be desireable due to introducing more compelxity/weight to dart:ui, including all the codec/channel logic that currently lives in the framework).
/cc @yjbanov @goderbauer @nturgut @mdebbar | c: new feature,framework,engine,P2,team-engine,triaged-engine | low | Major |
537,132,247 | opencv | Error while plotting colored point clouds | I'm trying to plot a set of colored 3D points using cv::viz::Viz3d.
So this is my code:
```cpp
std::vector<cv::Point3f> pointVec;
std::vector<cv::viz::Color> colorVec;
pointVec.push_back(cv::Point3f(1, 2, 3));
pointVec.push_back(cv::Point3f(5, 2, 3));
colorVec.push_back(cv::viz::Color(10, 2, 3));
colorVec.push_back(cv::viz::Color(50, 23, 3));
cv::viz::Viz3d myWindow("Coord");
myWindow.showWidget("Coord Wid", cv::viz::WCoordinateSystem());
cv::viz::WCloud cloud_widget(pointVec, colorVec);
myWindow.showWidget("cloud", cloud_widget);
myWindow.spin();
```
When I compile, I get the error at "traits.hpp" saying
> 'type' is not a member of 'cv::DataType<cv::viz::Color>".
Commenting out my code line by line, I found that the error is at `myWindow.showWidget("cloud", cloud_widget)`.
When I went to the documentation, I found this constructor, which displays the "cloud" points with the "colors" in the respective array: `cv::viz::WCloud::WCloud(InputArray cloud, InputArray colors)`
I can't find out what's wrong with this implementation.
| category: build/install,category: viz | low | Critical |
537,134,801 | go | x/exp/shiny/screen: Weird transition when uploading scaled texture in Ubuntu and macOS | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.9.7 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
NA
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/manvendra.s/go"
GORACE=""
GOROOT="/usr/local/opt/[email protected]/libexec"
GOTOOLDIR="/usr/local/opt/[email protected]/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/r2/drncpk212xxdw_zd5ns7nrrn09q2nk/T/go-build553917634=/tmp/go-build -gno-record-gcc-switches -fno-common"
CXX="clang++"
CGO_ENABLED="1"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
</pre></details>
### What did you do?
I am trying to write a CHIP-8 emulator in Go using golang.org/x/shiny/screen for displaying video buffer onto the screen.
The original resolution of the emulator screen is 32x64 pixels. So, I enlarge the graphical view of the emulator screen by scaling the existing buffer by some factor and then publishing it on the screen.
Here's the code snippet for the same:
``` go
case paint.Event:
log.Print("Paint event, re-painting the buffer..")
tex, _ := s.NewTexture(dim)
defer tex.Release()
tex.Upload(image.Point{}, drawBuff, drawBuff.Bounds())
scaledDim := image.Rectangle{
image.Point{0, 0},
image.Point{EmuWidth * EmuScale, EmuHeight * EmuScale}}
window.Scale(scaledDim, tex, drawBuff.Bounds(), draw.Over, &screen.DrawOptions{})
window.Publish()
```
### What did you expect to see?
On Windows:
https://i.stack.imgur.com/1iMZc.gif
### What did you see instead?
On Mac and Ubuntu 19.10:
https://i.stack.imgur.com/cf2kf.gif
| NeedsInvestigation | low | Critical |
537,152,820 | pytorch | Request for Lint Pass to Detect Modification on Parameters/Attributes during TorchScript Inference | ## 🚀 Feature
A lint pass to help users detect any violation of the immutability of models during inference time.
## Motivation
We assume that in most cases the model should be stateless during inference, which means the model itself is thread-safe. However, users may update the parameters/attributes accidentally. We need some way to help users identify such cases.
## Pitch
A lint pass for TorchScript models (maybe with some annotation to whitelist something)
## Additional context
This may help users detect problems at an early stage.
cc: @suo @dzhulgakov
cc @suo | oncall: jit,feature,triaged | low | Minor |
537,157,286 | TypeScript | Docs: "Type assertion" vs "asserts" keyword | Hey there!
I'm updating [Programming TypeScript](https://www.amazon.com/_/dp/1492037656) to include [assertions in control flow analysis](https://github.com/microsoft/TypeScript/pull/32695), and was looking for guidance about naming.
Type assertions (`x as T`) and assertions in control flow analysis (`asserts x is T`) are similarly named. What's a good way to refer to these features that doesn't confuse people and aligns with the way the TS team is communicating them?
A couple of ideas:
1. Rename `x as T` to "type coercion" or "type casting" (even though it's not a runtime behavior), and call `asserts x is T` a "type assertion"
2. Keep `x as T` as-is ("type assertion"), and call `asserts x is T` a "user-defined type assertion", similar to a "user-defined type guard"
Thanks! | Discussion | low | Major |
537,210,726 | terminal | Make the feature tests run in (and perhaps explode) a PTY | #3907 causes a crash when connected to a pseudoconsole; @miniksa suggested that we should make sure we're running our API tests against ptys as well.
zadjii-msft EDIT: Let's be _really_ sure that when we're making the feature tests run in conpty mode, we add a test to check #3907. The fix in #4021 is pretty small, whoever gets to this task should try reverting it and make sure that an appropriate test is added. | Product-Conhost,Issue-Task,Area-CodeHealth | low | Critical |
537,218,557 | rust | Can't return impl trait inside a type when using HRTBs | This code fails to compile:
```rust
struct Wrapper<T>(T);
fn allowed() -> impl for<'a> Fn(&'a u32) -> &'a u32 {
|x: &u32| x
}
fn not_allowed() -> Wrapper<impl for<'a> Fn(&'a u32) -> &'a u32> {
Wrapper(|x: &u32| x)
}
```
The message is:
```
error[E0271]: type mismatch resolving `for<'a> <[closure@src/lib.rs:8:13: 8:24] as std::ops::FnOnce<(&'a u32,)>>::Output == &'a u32`
--> src/lib.rs:7:29
|
7 | fn not_allowed() -> Wrapper<impl for<'a> Fn(&'a u32) -> &'a u32> {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected bound lifetime parameter 'a, found concrete lifetime
|
= note: the return type of a function must have a statically known size
```
I think this may be related to #54729 but it isn't quite clear. Certainly the workaround mentioned there does not work in this case.
| A-type-system,T-compiler,C-bug,T-types | low | Critical |
537,227,922 | rust | Tracking issue for `..X`, and `..=X` (`#![feature(half_open_range_patterns)]`) | `X..` was stabilized in #83918
`..=X` was stabilized in #102275
`...X` was removed in #68120
`..X` is tracked in #37854
`X..` patterns in slices are currently gated via `half_open_range_patterns_in_slices`
This is a tracking issue for the feature gate `#![feature(half_open_range_patterns)]`. This feature provides half-open range patterns `X..`, `..X`, `..=X`, and `...X` (last one is also deprecated like `X...Y` is, for spec & implementation simplicity). These correspond to the `RangeFrom`, `RangeTo`, `RangeToInclusive`, and `RangeToInclusive` expression forms, with the same syntaxes, respectively. The correspondence is both syntactic and semantic (in the sense that e.g. a `X..` pattern matching on a scrutinee `s` holds exactly when `(X..).contains(&s)` holds). For implementations details, see #67258.
The implementation for the feature was introduced in https://github.com/rust-lang/rust/pull/67258 and is also strongly related to `#![feature(exclusive_range_pattern)]` (`X..Y`) which is also required for `X..` and `..X` (as the `RangeEnd::Exclusive` syntax is used).
**Steps:**
- Once `half_open_range_patterns` have had some time to bake on nightly, write up an RFC specifying both `exclusive_range_pattern` and `half_open_range` and proposing their stabilization.
**Unresolved questions:**
- [x] Possibly rethink the precedences, https://github.com/rust-lang/rust/issues/48501. We could ship without fixing this and leaving in the ambiguity errors. EDIT: [Lang team considers this an orthogonal issue](https://github.com/rust-lang/rust/issues/67264#issuecomment-720711656).
- [x] The primary question to resolve is whether the lack of clarity around `..X` for signed types being from -i32::MAX -- is that OK? Too confusing? EDIT: [Lang team considers this acceptable](https://github.com/rust-lang/rust/issues/67264#issuecomment-1209771052), leaving open the possibility of future lints to help catch mistakes. | T-lang,B-unstable,C-tracking-issue,disposition-merge,finished-final-comment-period,F-half_open_range_patterns,F-exclusive_range_pattern,A-patterns,S-tracking-design-concerns | high | Critical |
537,229,938 | create-react-app | `--template` cannot load a GitHub repository | ### Describe the bug
Hey folks 👋 I'm not positive that this is a bug; talking with @iansu, this seems to be more of a feature that doesn't (yet!) exist. I was wondering if `--template owner/repo` would work in the same way as `npm i owner/repo`, where it looks for a package on GitHub. For example:
```
npm install -g facebook/create-react-app
```
Would grab the code, as-is, from `master` of this repo.
### Environment
```
[email protected]
```
### Steps to reproduce
1. Run `npx create-react-app wheeler-mode --template iansu/cra-template-wheeler-mode`
Any `owner/repo` string should work in the same way!
2. Observe the error message as shown below!
### Expected behavior
I'd expect `yarn` to resolve the template similar to how it grabs it from `npm`, when given an `owner/repo` string that maps to a GitHub repository. Aside from being able to function similar to `yarn` under the hood, this even enables _private_ templates by leveraging existing `git` credentials.
### Actual behavior
```
β― npx create-react-app wheeler-mode --template iansu/cra-template-wheeler-mode
```
Here's the output:
```
Creating a new React app in /Users/jasonetco/dev/wheeler-mode.
Installing packages. This might take a couple of minutes.
Installing react, react-dom, and react-scripts with cra-template-iansu/cra-template-wheeler-mode...
yarn add v1.17.3
[1/4] π Resolving packages...
error Command failed.
Exit code: 128
Command: git
Arguments: ls-remote --tags --heads ssh://[email protected]/cra-template-iansu/cra-template-wheeler-mode.git
Directory: /Users/jasonetco/dev/wheeler-mode
Output:
ERROR: Repository not found.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
info Visit https://yarnpkg.com/en/docs/cli/add for documentation about this command.
Aborting installation.
yarnpkg add --exact react react-dom react-scripts cra-template-iansu/cra-template-wheeler-mode --cwd /Users/jasonetco/dev/wheeler-mode has failed.
Deleting generated file... package.json
Deleting generated file... yarn.lock
Deleting wheeler-mode/ from /Users/jasonetco/dev
Done.
```
Worth noting that `yarn` tried to clone the GitHub repo, but got the SSH URL wrong:
```
ssh://[email protected]/cra-template-iansu/cra-template-wheeler-mode.git
```
Should be:
```diff
- ssh://[email protected]/cra-template-iansu/cra-template-wheeler-mode.git
+ ssh://[email protected]/iansu/cra-template-wheeler-mode.git
```
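As a sketch of that inclination — leave `owner/repo`-style template strings untouched instead of prefixing them — something like the following could work (function and behavior here are illustrative, not the actual create-react-app internals):

```javascript
// Hypothetical template-name resolution: strings containing a slash
// (GitHub shorthand "owner/repo" or scoped packages "@scope/pkg") are
// passed through as-is; bare names get the conventional prefix.
function resolveTemplateName(template) {
  if (template.includes('/')) {
    return template; // let yarn/npm resolve it as a GitHub repo or scoped package
  }
  return template.startsWith('cra-template')
    ? template
    : `cra-template-${template}`;
}

console.log(resolveTemplateName('wheeler-mode'));                    // cra-template-wheeler-mode
console.log(resolveTemplateName('iansu/cra-template-wheeler-mode')); // unchanged
```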
Let me know how I can help! If y'all are open to a PR here, my first inclination would be to look for `/` in the provided template string and just not prepend `cra-template` before giving it over to `yarn`. | issue: proposal | low | Critical |
537,247,970 | TypeScript | Feature: triple-slash directive to override "target" compiler option | ## Search Terms
tsconfig override per-file compilerOptions code emit
## Suggestion
Add a triple-slash directive to override the `target` compiler option within specific .ts code files.
## Use Cases
With the final death of IE11 coming soon, my team and (I'm sure) many other teams are finally upgrading TS projects from `"target": "ES5"` to `"ES6"` or beyond.
What many of us will discover is that there *can be* subtle (or not-so-subtle) runtime differences between ES5 and newer syntactic equivalents--like true ES6 `class` declarations and the ES5 equivalent function+prototype object.
Our team has run into a problem where a library we still need to use calls class constructor functions using `apply` (not `new`) in its own implementation of inheritance/mixins - this throws a `TypeError` at runtime (and no, it would be impossible to fix it without a prohibitive, major breaking change to the library).
For reference, here's the relevant section of the ECMAScript standard: http://www.ecma-international.org/ecma-262/6.0/#sec-ecmascript-function-objects-call-thisargument-argumentslist, specifically `2. If Fβs [[FunctionKind]] internal slot is "classConstructor", throw a TypeError exception.`.
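A minimal sketch of that runtime difference, putting a hand-written ES5-style constructor next to a real `class` (this illustrates the spec behavior, not the library's code — and note the outcome depends on this file's own emit target: compiled to ES5, the `class` is downleveled to a function and no longer throws):

```typescript
// Under ES2015+ emit, a `class` constructor is a native class and invoking
// it with .apply() (no `new`) throws a TypeError; the ES5-style
// function+prototype equivalent tolerates the same call.
class EsSixStyle {
  value = 1;
}

function EsFiveStyle(this: { value?: number }) {
  this.value = 1;
}

function throwsOnApply(ctor: Function): boolean {
  try {
    ctor.apply({});
    return false;
  } catch (e) {
    return e instanceof TypeError;
  }
}

console.log(throwsOnApply(EsSixStyle));  // true under ES2015+ emit, false under ES5
console.log(throwsOnApply(EsFiveStyle)); // false: just a function call with a bound `this`
```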
Because of this and other runtime differences that exist between different `target` values, it would be extremely helpful to allow overriding this setting per-file, for those files where problems are encountered that can't be fixed in any other way. Locating the override *within* source code makes sense, since codegen options that affect runtime have parallel concern with the source code itself.
## Alternatives?
The only alternatives I'm aware of are:
- Continue to use ES5, don't upgrade projects
- Run a multi-step build with different tsconfig.json files, specifically excluding/including the files with issues (not sure if this is really possible in a large project)
- Transpile offending files down to ES5 using babel or similar (or maybe even `tsc` itself)
- Locate offending files in a separate project (may not be possible, or may require prohibitive refactoring/reworking)
Besides the first option, the alternatives all require separating the information "these files need to be ES5" from the *content* of those files. A triple-slash TS directive is the best option because it doesn't separate these parallel concerns.
## TS Syntax
Using a triple-slash directive makes sense since this is a pattern that is already supported and used extensively. I suggest the following syntax:
```
/// <compiler-options target="ES5"/>
```
This would also allow adding support for other compiler options in the future. However, it would only make sense to consider options that *only* affect code emit.
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,In Discussion | low | Critical |
537,250,738 | angular | Align with the optional chaining spec | # π feature request
### Relevant Package
`@angular/compiler`
### Description
[Optional chaining](https://github.com/tc39/proposal-optional-chaining)[1] reached stage 4. We've been supporting similar syntax in templates for a while now, calling it the ["safe navigation operator"](https://angular.io/guide/template-syntax#safe-navigation-operator)[2]. For simplicity and smaller payload, we can consider aligning with the spec in future versions of the framework.
There are a couple of semantical and syntactical differences between optional chaining and safe navigation.
## Syntax
Optional chaining has the following syntax:
```ts
obj?.prop // optional static property access
obj?.[expr] // optional dynamic property access
func?.(...args) // optional function or method call
```
Safe navigation supports only direct property access. Optional chaining supports this as well as method calls and function calls. Function calls are particularly useful in iterators:
```ts
iterator.return?.()
```
## Semantics
With optional chaining, the expression `a?.b` will be translated to `a == null ? undefined : a.b`. In Angular, the semantics of the same expression would be `null == a ? null : a.b`.
If `a` is `null` or `undefined`, the expression `typeof a?.b` would evaluate to `"undefined"` with optional chaining and to `"object"` with Angular's safe navigation operator (since `typeof null` is `"object"`).
Except for the difference mentioned above, method calls are compiled similarly:
```ts
a?.b()
a == null ? undefined : a.b()
```
In both optional chaining and safe navigation in templates, stacking the operators is translated the same way: `a?.b.c?.d` becomes `null == a ? null : null == a.b.c ? null : a.b.c.d`.
Another difference seems to be the way parentheses are handled. The optional chaining spec defines that `(a?.b).c` should be translated to `(a == null ? undefined : a.b).c`, i.e. the short-circuit stops at the parentheses. In Angular the same expression translates to `null == a ? null : a.b.c`.
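The two desugarings can be applied by hand to make the semantic difference observable (a minimal sketch, not the compiler's actual output):

```typescript
// Hand-applied versions of the two translations for `a?.b`.
type Obj = { b?: number };

// Spec optional chaining: a?.b  =>  a == null ? undefined : a.b
function specChain(a: Obj | null | undefined): number | undefined {
  return a == null ? undefined : a.b;
}

// Angular safe navigation: a?.b  =>  null == a ? null : a.b
function safeNav(a: Obj | null | undefined): number | null | undefined {
  return null == a ? null : a.b;
}

console.log(typeof specChain(null)); // "undefined"
console.log(typeof safeNav(null));   // "object", because typeof null is "object"
console.log(specChain({ b: 1 }), safeNav({ b: 1 })); // both behave the same on a non-null receiver
```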
PS: looks like the last issue is fixed by https://github.com/angular/angular/pull/34221.
---
[1] Optional chaining spec https://github.com/tc39/proposal-optional-chaining
[2] Safe navigation https://angular.io/guide/template-syntax#safe-navigation-operator | feature,area: core,area: compiler,core: binding & interpolation | high | Critical |
537,308,513 | pytorch | [RPC] Support nn.Module pickling with share memory | ## 🚀 Feature
To share memory across processes for multiprocessing hogwild, PyTorch supports doing it at process spawn time, with special [reduce functions](https://github.com/pytorch/pytorch/blob/master/torch/multiprocessing/reductions.py).
With the introduction of `torch.distributed.rpc`, we will need to support packing shared-memory-related info while RPC `send()` pickles an `nn.Module`.
There is a hacky implementation in https://github.com/pytorch/pytorch/issues/30633, but it relies on the special reduce functions mentioned above that are supposed to work with Python's ForkingPickler.
We will need a clean solution that works for RPC.
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @rohan-varma @xush6528 | triaged,module: rpc | low | Minor |
537,321,681 | flutter | Tool crash Github template should include enabled/disabled features | For example, the results of flutter config (minus signing cert info)
```
Settings:
flutter-web: true
flutter-linux-desktop: true
enable-macos-desktop: true
~~ios-signing-cert: iPhone Developer: Jonah Williams (XXXX)~~ (But pretend this is a strikethrough)
enable-web-incremental-compiler: false
``` | c: new feature,tool,a: triage improvements,P3,team-tool,triaged-tool | low | Critical |
537,327,193 | pytorch | Default shuffle behavior of DistributedSampler | ## 🚀 Feature
The **DistributedSampler** should shuffle data without an explicit call to `set_epoch`.
## Motivation
I have found that the `shuffle` parameter of **DistributedSampler** is set to `True` by default, but the behavior does not match intuition: each GPU always loads the same data in every epoch. It still requires manually calling `sampler.set_epoch` to change the data loaded by each GPU.
## Pitch
Truly shuffle the data when `shuffle=True` in **DistributedSampler**.
## Alternatives
Just describe this behavior in the docs.
## Additional context
The test code,
`CUDA_VISIBLE_DEVICES=1,3 python -m torch.distributed.launch --nproc_per_node=2 test.py`
test.py:
```python
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader
from torch.utils.data.distributed import DistributedSampler

torch.distributed.init_process_group(backend="nccl")

input_size = 5
output_size = 2
batch_size = 2
data_size = 16

local_rank = torch.distributed.get_rank()
torch.cuda.set_device(local_rank)
device = torch.device("cuda", local_rank)

class RandomDataset(Dataset):
    def __init__(self, size, length, local_rank):
        self.len = length
        # 16 rows: [1,1,1,1,1], [2,2,2,2,2], ..., [16,16,16,16,16]
        self.data = torch.stack(
            [torch.ones(5) * (i + 1) for i in range(16)]).to('cuda')
        self.local_rank = local_rank

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return self.len

dataset = RandomDataset(input_size, data_size, local_rank)
sampler = DistributedSampler(dataset)
rand_loader = DataLoader(dataset=dataset,
                         batch_size=batch_size,
                         sampler=sampler)

e = 0
while e < 2:
    sampler.set_epoch(e)  # This is key: if this line is removed, every epoch
                          # loads the same data; with it, the data differs
                          # between epochs (but stays the same across runs).
    for data in rand_loader:
        print(data)
    e += 1
```
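The behavior can be reproduced without GPUs with a plain-Python sketch of the seeding logic; the class below mirrors the real sampler's `set_epoch` contract but is illustrative, not PyTorch's implementation:

```python
# Why DistributedSampler repeats the same order unless the epoch changes:
# the shuffle is seeded with a value derived from the epoch, and the epoch
# stays at 0 forever unless set_epoch() is called.
import random

class TinyDistributedSampler:
    def __init__(self, num_samples, shuffle=True):
        self.num_samples = num_samples
        self.shuffle = shuffle
        self.epoch = 0  # never changes unless the user calls set_epoch()

    def set_epoch(self, epoch):
        self.epoch = epoch

    def __iter__(self):
        indices = list(range(self.num_samples))
        if self.shuffle:
            # Deterministic seed: same epoch => same permutation every time.
            random.Random(self.epoch).shuffle(indices)
        return iter(indices)

sampler = TinyDistributedSampler(8)
first = list(sampler)
second = list(sampler)   # same epoch, identical order
sampler.set_epoch(1)
third = list(sampler)    # new epoch, freshly seeded permutation

print(first == second)   # True: shuffle=True alone does not vary the order
print(first, third)
```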
cc @SsnL @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @xush6528 | oncall: distributed,module: dataloader,triaged | low | Minor |
537,336,437 | angular | Not reset the template parser's error when using JIT compiler |
# 🐞 bug report
### Affected Package
`@angular/compiler`
### Is this a regression?
Not sure, but I got the same problem on `v9-rc.4` and `v8.2`.
### Description
#### Background
I am using Angular to rewrite the front end of [apache/zeppelin](https://github.com/apache/zeppelin/tree/web_angular), which allows users to enter dynamic templates and run them, so I am trying to implement this feature using the JIT compiler, but I am blocked by this problem.
#### Steps to reproduce
Open the live DEMO https://ng-jit-template-parse-errors.now.sh/ and browser's console.
Enter the following code in the textarea.
```html
<button (click)="i = i++">Click {{i}}</button>
```
Click the render button and we will see the following error in the console. This is fine, because the error is expected.
```
vendor-es2015.js:40954 ERROR Error: Template parse errors:
Parser Error: Unexpected end of expression: i = i++ at the end of the expression [i = i++] in ng:////template.html@0:17 ("<button (click)="[ERROR ->]i = i++">Click {{i}}</button>"): ng:////template.html@0:17
```
Then enter the correct template into the textarea and click the render button again.
```html
<button (click)="i = i + 1">Click {{i}}</button>
```
I expect no error, but I still get the same error.
```
vendor-es2015.js:40954 ERROR Error: Template parse errors:
Parser Error: Unexpected end of expression: i = i++ at the end of the expression [i = i++] in ng:////template.html@0:17 ("<button (click)="[ERROR ->]i = i++">Click {{i}}</button>"): ng:////template.html@0:17
```
I guess the error was cached, so I called the method `compiler.clearCache()`, but it still didn't work.
I tried using the `ɵrenderComponent` method to render the dynamic component and the error disappeared, but then I can not use other modules' features (like `FormsModule`) in this component.
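A plain-TypeScript sketch of the suspected failure mode — a compiler-style cache that memoizes the first (failed) result per component — is below. It only illustrates the symptom; it is not Angular's actual internals, and the names are made up:

```typescript
// If a compiler memoizes compilation results by component key and the first
// attempt failed, the cached failure keeps resurfacing even after the
// template text is fixed, until the cache entry is actually evicted.
type Result = { ok: true; value: string } | { ok: false; error: string };

const cache = new Map<string, Result>();

function compile(key: string, template: string): Result {
  const hit = cache.get(key);
  if (hit) return hit; // a cached failure is returned forever

  const result: Result = template.includes('i++')
    ? { ok: false, error: 'Parser Error: Unexpected end of expression' }
    : { ok: true, value: `compiled(${template})` };
  cache.set(key, result);
  return result;
}

console.log(compile('Cmp', '<button (click)="i = i++">x</button>').ok);   // false
console.log(compile('Cmp', '<button (click)="i = i + 1">x</button>').ok); // still false: stale cache
cache.delete('Cmp'); // an effective clearCache would have to do this
console.log(compile('Cmp', '<button (click)="i = i + 1">x</button>').ok); // true
```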
## 🔬 Minimal Reproduction
Stackblitz: https://stackblitz.com/github/hsuanxyz/ng-jit-template-parse-errors
Repo: https://github.com/hsuanxyz/ng-jit-template-parse-errors/
## 🔥 Exception or Error
```
vendor-es2015.js:40954 ERROR Error: Template parse errors:
Parser Error: Unexpected end of expression: i = i++ at the end of the expression [i = i++] in ng:////template.html@2:17 ("<input [(ngModel)]="name"> <span>{{name}}</span>
<br>
<button (click)="[ERROR ->]i = i++">Click {{i}}</button>"): ng:////template.html@2:17
Parser Error: Unexpected end of expression: i = i++ at the end of the expression [i = i++] in ng:////template.html@2:17 ("<input [(ngModel)]="name"> <span>{{name}}</span>
<br>
<button (click)="i = i++">[ERROR ->]Click {{i}}</button>"): ng:////template.html@2:26
at syntaxError (vendor-es2015.js:10006)
at htmlAstToRender3Ast (vendor-es2015.js:22123)
at parseTemplate (vendor-es2015.js:24817)
at CompilerFacadeImpl.compileComponent (vendor-es2015.js:25661)
at Function.get (vendor-es2015.js:70705)
at getComponentDef (vendor-es2015.js:37333)
at assertComponentType (vendor-es2015.js:37743)
at ComponentFactoryResolver$1.resolveComponentFactory (vendor-es2015.js:65766)
at AppComponent.renderDynamicComponent (main-es2015.js:66)
at AppComponent_Template_button_click_2_listener (main-es2015.js:83)
```
## 🌍 Your Environment
**Angular Version:**
```bash
Angular CLI: 9.0.0-rc.6
Node: 12.11.1
OS: darwin x64
Angular: 9.0.0-rc.6
... animations, cli, common, compiler, compiler-cli, core, forms
... language-service, platform-browser, platform-browser-dynamic
... router
Ivy Workspace: Yes
Package Version
-----------------------------------------------------------
@angular-devkit/architect 0.900.0-rc.6
@angular-devkit/build-angular 0.900.0-rc.6
@angular-devkit/build-optimizer 0.900.0-rc.6
@angular-devkit/build-webpack 0.900.0-rc.6
@angular-devkit/core 9.0.0-rc.6
@angular-devkit/schematics 9.0.0-rc.6
@ngtools/webpack 9.0.0-rc.6
@schematics/angular 9.0.0-rc.6
@schematics/update 0.900.0-rc.6
rxjs 6.5.3
typescript 3.6.4
webpack 4.41.2
```
**Anything else relevant?**
| freq1: low,area: compiler,type: use-case,P4,compiler: jit | low | Critical |
537,350,427 | go | x/image/tiff: grayscale tiled images are not decoded correctly |
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.13.4 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/ivan/.cache/go-build"
GOENV="/home/ivan/.config/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GONOPROXY=""
GONOSUMDB=""
GOOS="linux"
GOPATH="/srv/work/go"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/home/ivan/.local/share/umake/go/go-lang"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/home/ivan/.local/share/umake/go/go-lang/pkg/tool/linux_amd64"
GCCGO="gccgo"
AR="ar"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD="/srv/work/go/src/go.googlesource.com/image/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build974283161=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
When you decode a tiled grayscale TIFF image, you may find that tiles placed on the right image boundary are decoded incorrectly.
This happens because the TIFF file stores each tile compressed at its full size, while the loop iterates only over the part of it that fits within the image boundary:
```go
for y := ymin; y < rMaxY; y++ {
for x := xmin; x < rMaxX; x++ {
```
So if you read pixels from the incomplete tiles on the right edge, all lines but the first contain real data mixed with garbage (bits from outside the image boundary).
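A self-contained sketch of the stride mistake (tile layout and values are made up for illustration; this is not the actual `x/image/tiff` code):

```go
package main

import "fmt"

// decodeClipped reads an imgW-wide window out of a tile stored with stride
// tileW, indexing with the full stride per row — the correct approach.
func decodeClipped(tile []byte, tileW, tileH, imgW int) []byte {
	out := make([]byte, 0, tileH*imgW)
	for y := 0; y < tileH; y++ {
		for x := 0; x < imgW; x++ {
			out = append(out, tile[y*tileW+x])
		}
	}
	return out
}

// decodeBuggy mirrors the reported behavior: it consumes bytes sequentially
// using the clipped width as the stride, so every row after the first
// starts at the wrong offset and picks up out-of-bounds padding.
func decodeBuggy(tile []byte, tileH, imgW int) []byte {
	out := make([]byte, 0, tileH*imgW)
	off := 0
	for y := 0; y < tileH; y++ {
		for x := 0; x < imgW; x++ {
			out = append(out, tile[off])
			off++
		}
	}
	return out
}

func main() {
	// A 4x2 tile for a 3-wide image; the 9s are padding past the right edge.
	tile := []byte{
		1, 2, 3, 9,
		4, 5, 6, 9,
	}
	fmt.Println(decodeBuggy(tile, 2, 3))      // [1 2 3 9 4 5] — second row polluted
	fmt.Println(decodeClipped(tile, 4, 2, 3)) // [1 2 3 4 5 6]
}
```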
### What did you expect to see?
### What did you see instead?
| NeedsDecision | low | Critical |
537,350,532 | ant-design | [4.0] Proposal: Fieldset component | - [x] I have searched the [issues](https://github.com/ant-design/ant-design/issues) of this repository and believe that this is not a duplicate.
### What problem does this feature solve?
There's a standard HTML tag `<fieldset>` which is used to visually group form elements, see [example](https://www.w3schools.com/tags/tryit.asp?filename=tryhtml_fieldset). It would be great if antd had `<Fieldset>` with standard antd styling and maybe some configuration like title and border style.
### What does the proposed API look like?
Example: https://codesandbox.io/s/antd-reproduction-template-4924q
This example uses standard `<fieldset>` with `<legend>`, imagine if we had something like:
```
<Fieldset title="" fullBorder titlePosition="right">
```
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | π‘ Feature Request,βοΈ New Component,Inactive | low | Major |
537,356,384 | terminal | vim slower to open/close in conhost v2 compared to v1 |
# Environment
```none
Windows build number: 17763
Windows Terminal version (if applicable): N/A, conhost only
Any other software? Vim for Windows, tested with 7.4-8.1
```
# Steps to reproduce
Open a CMD window with a shortcut that specifies a large window buffer. The default in RS5 appears to be 9000, which is sufficient. Use the command line vim to open a random text file and observe startup time. Quit vim and observe exit time.
(Note that vim has a startup time debugging facility - run "vim --startuptime logfile filetoopen" and it will report the time spent in various phases of startup to the log file.)
# Expected behavior
Expecting performance of open/close to be more in line with the legacy console. Note that the change to default to 9000 lines of history is also causal.
# Actual behavior
Vim appears to take around a second or two to start and a second or two to exit, which appears to be due to saving and restoring the console screen buffer, when using conhost v2. This does not occur with v1, and can be mitigated by reducing the screen buffer size.
| Product-Conhost,Help Wanted,Needs-Repro,Area-Performance,Issue-Task | low | Critical |
537,380,733 | flutter | Camera application is crashing and throws an exception - Access denied finding property "vendor.camera.aux.packagelist" | The Lenovo TB-X605L has an issue activating/looking up the device's camera with the Flutter `camera` library.
The same code works fine (the camera can be used to capture images/video) on a Samsung S6 tablet (and also on Samsung A20 and Moto One Zoom phones).
Re-issued as per https://github.com/flutter/flutter/issues/44178
## Steps to Reproduce
1. dependency version:
`camera: ^0.5.7`
2. Code sample:
```dart
Future<Null> _fetchCameras() async {
// SystemChrome.setPreferredOrientations([DeviceOrientation.portraitUp]);
try {
List<CameraDescription> cameras = await availableCameras();
if (controller == null) {
controller =
new CameraController(cameras[0], ResolutionPreset.veryHigh);
await controller.initialize();
}
} on CameraException catch (e) {
logError(e.code, e.description);
}
}
```
3. Start the app
**Target Platform: Android**
**Target OS version/browser: 9**
**Devices: Lenovo TB-X605L & Samsung S6**
## Logs
```bash
12-11 14:38:54.037 763 16853 D NuPlayerDriver: notifyListener_l(0xecc30680), (1, 0, 0, -1), loop setting(0, 0)
12-11 14:38:54.026 16766 16766 W Binder:16766_2: type=1400 audit(0.0:466): avc: denied { read } for name="u:object_r:vendor_camera_prop:s0" dev="tmpfs" ino=6503 scontext=u:r:untrusted_app:s0:c134,c256,c512,c768 tcontext=u:object_r:vendor_camera_prop:s0 tclass=file permissive=0
12-11 14:38:54.041 16766 16779 E libc : Access denied finding property "vendor.camera.aux.packagelist"
12-11 14:38:54.041 753 7950 I CameraHardwareInterface: Opening camera 0
12-11 14:38:54.041 1203 1304 I [email protected]: Opening camera 0
12-11 14:38:54.041 1203 1304 I QCamera : <HAL><INFO> getCameraInfo: 342: Camera id 0 API version 256
12-11 14:38:54.041 1203 1304 I QCamera : <HAL><INFO> getCamInfo: 8809: camera 0 resource cost is 100
12-11 14:38:54.041 1203 1304 I QCamera : <HAL><INFO> cameraDeviceOpen: 407: Open camera id 0 API version 256
12-11 14:38:54.042 544 544 D audio_hw_primary: adev_set_parameters: enter: cameraFacing=back
12-11 14:38:54.042 544 544 D audio_hw_extn: audio_extn_set_anc_parameters: anc_enabled:0
12-11 14:38:54.042 544 544 D audio_hw_spkr_prot: audio_extn_fbsp_set_parameters: Speaker protection disabled
12-11 14:38:54.043 1203 1304 D vndksupport: Loading /vendor/lib/hw/power.qcom.so from current namespace instead of sphal namespace.
12-11 14:38:54.043 582 693 D APM_AudioPolicyManager: stopOutput() output 13, stream 1, session 105
12-11 14:38:54.046 1203 1304 I QCamera : <HAL><INFO> openCamera: 1827: [KPI Perf]: E PROFILE_OPEN_CAMERA camera id 0
12-11 14:38:54.047 570 611 E ANDR-PERF-MPCTL: Invalid profile no. 0, total profiles 0 only
12-11 14:38:54.048 781 781 I mm-camera: < INFO> 395: enable_memleak_trace: start memleak tracking.
12-11 14:38:54.048 781 781 W libc : Unable to set property "persist.camera.debug.logfile" to "0": connection failed; errno=13 (Permission denied)
12-11 14:38:54.049 781 781 I mm-camera: <MCT >< INFO> 63: mct_controller_new: Creating new mct_controller with session-id 1
12-11 14:38:54.049 781 16858 I mm-camera: <MCT >< INFO> 4529: mct_pipeline_start_session_thread: E sensor
12-11 14:38:54.049 781 16858 I mm-camera: <MCT >< INFO> 4536: mct_pipeline_start_session_thread: Calling start_session on Module sensor
12-11 14:38:54.050 781 16860 I mm-camera: <MCT >< INFO> 4529: mct_pipeline_start_session_thread: E iface
12-11 14:38:54.050 781 16860 I mm-camera: <MCT >< INFO> 4536: mct_pipeline_start_session_thread: Calling start_session on Module iface
12-11 14:38:54.051 781 16861 I mm-camera: <MCT >< INFO> 4529: mct_pipeline_start_session_thread: E isp
12-11 14:38:54.051 781 16861 I mm-camera: <MCT >< INFO> 4536: mct_pipeline_start_session_thread: Calling start_session on Module isp
12-11 14:38:54.051 781 16861 I mm-camera: <ISP >< INFO> 205: isp_module_start_session: session id 1
12-11 14:38:54.052 781 16863 I mm-camera: <MCT >< INFO> 4529: mct_pipeline_start_session_thread: E stats
12-11 14:38:54.052 781 16860 I mm-camera: <MCT >< INFO> 4539: mct_pipeline_start_session_thread: Module iface start_session rc = 1
12-11 14:38:54.052 781 16860 I mm-camera: <MCT >< INFO> 4547: mct_pipeline_start_session_thread: started_num = 1, success = 1
12-11 14:38:54.052 781 16860 I mm-camera: <MCT >< INFO> 4554: mct_pipeline_start_session_thread: X iface
12-11 14:38:54.052 781 16863 I mm-camera: <MCT >< INFO> 4536: mct_pipeline_start_session_thread: Calling start_session on Module stats
12-11 14:38:54.053 781 16864 I mm-camera: <MCT >< INFO> 4529: mct_pipeline_start_session_thread: E pproc
12-11 14:38:54.053 781 16864 I mm-camera: <MCT >< INFO> 4536: mct_pipeline_start_session_thread: Calling start_session on Module pproc
12-11 14:38:54.055 781 16865 I mm-camera: <MCT >< INFO> 4529: mct_pipeline_start_session_thread: E imglib
12-11 14:38:54.056 781 16865 I mm-camera: <MCT >< INFO> 4536: mct_pipeline_start_session_thread: Calling start_session on Module imglib
12-11 14:38:54.056 781 16864 I mm-camera: <MCT >< INFO> 4539: mct_pipeline_start_session_thread: Module pproc start_session rc = 1
12-11 14:38:54.056 781 16864 I mm-camera: <MCT >< INFO> 4547: mct_pipeline_start_session_thread: started_num = 2, success = 2
12-11 14:38:54.056 781 16864 I mm-camera: <MCT >< INFO> 4554: mct_pipeline_start_session_thread: X pproc
12-11 14:38:54.059 781 16863 E mm-camera: <STATS_AIS ><ERROR> 173: dsps_send_req: DSPS Send Request Timeout!!
12-11 14:38:54.059 781 16865 I mm-camera: <MCT >< INFO> 4539: mct_pipeline_start_session_thread: Module imglib start_session rc = 1
12-11 14:38:54.059 781 16865 I mm-camera: <MCT >< INFO> 4547: mct_pipeline_start_session_thread: started_num = 3, success = 3
12-11 14:38:54.059 781 16865 I mm-camera: <MCT >< INFO> 4554: mct_pipeline_start_session_thread: X imglib
12-11 14:38:54.060 781 16861 I mm-camera: <MCT >< INFO> 4539: mct_pipeline_start_session_thread: Module isp start_session rc = 1
12-11 14:38:54.060 781 16861 I mm-camera: <MCT >< INFO> 4547: mct_pipeline_start_session_thread: started_num = 4, success = 4
12-11 14:38:54.060 781 16861 I mm-camera: <MCT >< INFO> 4554: mct_pipeline_start_session_thread: X isp
12-11 14:38:54.061 781 16863 I mm-camera: <MCT >< INFO> 4539: mct_pipeline_start_session_thread: Module stats start_session rc = 1
12-11 14:38:54.061 781 16858 I mm-camera: <MCT >< INFO> 4539: mct_pipeline_start_session_thread: Module sensor start_session rc = 1
12-11 14:38:54.061 781 16863 I mm-camera: <MCT >< INFO> 4547: mct_pipeline_start_session_thread: started_num = 5, success = 5
12-11 14:38:54.061 781 16863 I mm-camera: <MCT >< INFO> 4554: mct_pipeline_start_session_thread: X stats
12-11 14:38:54.061 781 16858 I mm-camera: <MCT >< INFO> 4547: mct_pipeline_start_session_thread: started_num = 6, success = 6
12-11 14:38:54.061 781 16858 I mm-camera: <MCT >< INFO> 4554: mct_pipeline_start_session_thread: X sensor
12-11 14:38:54.067 781 781 I mm-camera: <MCT >< INFO> 4450: mct_pipeline_start_stream_internal: Adding session stream streamid= 0xf for session=1
12-11 14:38:54.067 781 781 I mm-camera: <MCT >< INFO> 4498: mct_pipeline_start_stream_internal: Linking session stream for session 1
12-11 14:38:54.067 781 781 I mm-camera: <MCT >< INFO> 510: mct_stream_start_link: Start linking Session-stream 0x1000f
12-11 14:38:54.068 781 781 I mm-camera: <ISP >< INFO> 801: isp_port_check_caps_reserve: port 0xec3e7c80 ide 1000f type 10 dim 0 0
12-11 14:38:54.068 781 781 I mm-camera: <PPROC >< INFO> 446: pproc_port_add_modules_to_stream: in identity 1000f stream 10 int_link = 0xec3fc400
12-11 14:38:54.068 781 781 I mm-camera: <PPROC >< INFO> 458: pproc_port_add_modules_to_stream: :LINK linking mods tmod and c2d for identity 1000f
12-11 14:38:54.068 781 781 I mm-camera: <C2D >< INFO> 1490: c2d_module_notify_add_stream: width 0, height 0, stride 0, scanline 0, is_type 0
12-11 14:38:54.068 781 781 I mm-camera: <PPROC >< INFO> 458: pproc_port_add_modules_to_stream: :LINK linking mods c2d and cpp for identity 1000f
12-11 14:38:54.068 781 781 I mm-camera: <CPP >< INFO> 2154: cpp_module_notify_add_stream: :width 0, height 0, stride 0, scanline 0, framelen 0
12-11 14:38:54.068 781 781 I mm-camera: <CPP >< INFO> 2319: cpp_module_notify_add_stream: : stream 10, fmt 1, asf_mode 0, sharpness_level 0.000000,asf mask 0, denoise 0, denoise_mask 0, dsdn mask 0,dsdn enable 0, tnr mask 0, tnr enable 0, ds_mask 0
12-11 14:38:54.068 781 781 I mm-camera: <PPROC >< INFO> 458: pproc_port_add_modules_to_stream: :LINK linking mods cpp and paaf for identity 1000f
12-11 14:38:54.069 781 781 I mm-camera: <PPROC >< INFO> 458: pproc_port_add_modules_to_stream: :LINK linking mods paaf and ezt for identity 1000f
12-11 14:38:54.069 781 781 I mm-camera: <PPROC >< INFO> 458: pproc_port_add_modules_to_stream: :LINK linking mods ezt and quadracfa for identity 1000f
12-11 14:38:54.069 781 781 E mm-camera: <STATS ><ERROR> 2822: stats_port_check_caps_reserve: Invalid Port capability type!
12-11 14:38:54.069 781 781 I chatty : uid=1006(camera) mm-qcamera-daem identical 3 lines
12-11 14:38:54.069 781 781 E mm-camera: <STATS ><ERROR> 2822: stats_port_check_caps_reserve: Invalid Port capability type!
12-11 14:38:54.070 781 781 I mm-camera: <MCT >< INFO> 4507: mct_pipeline_start_stream_internal: Session stream linked successfully session 1
12-11 14:38:54.073 555 597 I SDM : ResourceImpl::SetMaxBandwidthMode: new bandwidth mode=1
12-11 14:38:54.076 1203 1304 I Thermal-Lib: Thermal-Lib-Client: Registration successful for camera with handle:1
12-11 14:38:54.076 1203 1304 I Thermal-Lib: Thermal-Lib-Client: Registration successful for camcorder with handle:2
12-11 14:38:54.076 1203 1304 I QCamera : <HAL><INFO> openCamera: 1840: [KPI Perf]: X PROFILE_OPEN_CAMERA camera id 0, rc: 0
12-11 14:38:54.077 739 885 I ThermalEngine: Thermal-Server: Adding thermal event listener on fd 61
12-11 14:38:54.077 1203 16884 I Thermal-Lib: Thermal-Lib-Client: Client received msg camera 0
12-11 14:38:54.077 1203 16884 I Thermal-Lib: Thermal-Lib-Client: Client received msg camcorder 0
12-11 14:38:54.078 781 16879 E mm-camera: <MCT ><ERROR> 1056: mct_pipeline_decide_hw_wakeup: Couldn't find meta stream
12-11 14:38:54.080 781 16879 I chatty : uid=1006(camera) CAM_MctServ identical 1 line
12-11 14:38:54.080 781 16879 E mm-camera: <MCT ><ERROR> 1056: mct_pipeline_decide_hw_wakeup: Couldn't find meta stream
12-11 14:38:54.084 1203 1304 I QCamera : <HAL><INFO> getCameraInfo: 342: Camera id 0 API version 256
12-11 14:38:54.084 1203 1304 I QCamera : <HAL><INFO> getCamInfo: 8809: camera 0 resource cost is 100
12-11 14:38:54.087 16766 16766 E libc : Access denied finding property "vendor.camera.aux.packagelist"
12-11 14:38:54.087 1203 1304 I QCamera : <HAL><INFO> getCameraInfo: 342: Camera id 0 API version 256
12-11 14:38:54.087 1203 1304 I QCamera : <HAL><INFO> getCamInfo: 8809: camera 0 resource cost is 100
12-11 14:38:54.130 16766 16766 I CameraDeviceState: Legacy camera service transitioning to state CONFIGURING
12-11 14:38:54.131 16766 16888 I RequestThread-0: Configure outputs: 2 surfaces configured.
12-11 14:38:54.131 16766 16888 D Camera : app passed NULL surface
12-11 14:38:54.139 16766 16888 I RequestThread-0: configureOutputs - set take picture size to 1920x1080
12-11 14:38:54.171 781 16859 E mm-camera: <SENSOR><ERROR> hi556_qh_m10_eeprom_get_calibration_items: 49: is_wbc:0,is_afc:1,is_lsc:0,is_dpc:0,is_insensor:1, is_ois:0
12-11 14:38:54.171 781 16859 E mm-camera: hi556_qh_m10_insensor_get_raw_data:23,Enter
12-11 14:38:54.171 781 16859 E mm-camera: hi556_qh_m10_insensor_get_raw_data:30,Exit
12-11 14:38:54.173 1203 16855 I QCamera : <HAL><INFO> setDualCameraMode: 14872: Dual camera mode set 0
12-11 14:38:54.173 1203 16855 I QCamera : <HAL><INFO> setPreviewSize: 1513: Requested preview size 1280 x 720
12-11 14:38:54.175 781 16879 E mm-camera: <MCT ><ERROR> 1056: mct_pipeline_decide_hw_wakeup: Couldn't find meta stream
12-11 14:38:54.176 781 16859 E mm-camera: <SENSOR><ERROR> 601: actuator_load_lib: name=cn3937a
12-11 14:38:54.176 16766 16766 I CameraDeviceState: Legacy camera service transitioning to state IDLE
12-11 14:38:54.177 781 16859 E mm-camera: <SENSOR><ERROR> hi556_qh_m10_eeprom_autofocus_calibration: 323: Enter
12-11 14:38:54.177 781 16859 E mm-camera: hi556_qh_m10_eeprom_autofocus_calibration:349,liuying_af before adjust initial code 185, adjusted code_per_step: 1, qvalue: 1024
12-11 14:38:54.177 781 16859 E mm-camera: hi556_qh_m10_eeprom_autofocus_calibration:366,liuying_af otp_step_bound 120, new_step_bound 306 total_steps 255
12-11 14:38:54.177 781 16859 E mm-camera: hi556_qh_m10_eeprom_autofocus_calibration:371,liuying_af after adjust initial code 178, adjusted code_per_step: 1228, qvalue: 1024
12-11 14:38:54.177 781 16859 E mm-camera: <SENSOR><ERROR> hi556_qh_m10_eeprom_autofocus_calibration: 373: Exit
12-11 14:38:54.181 16766 16766 I RequestQueue: Repeating capture request set.
12-11 14:38:54.192 16766 16888 W LegacyRequestMapper: convertRequestMetadata - control.awbRegions setting is not supported, ignoring value
12-11 14:38:54.192 16766 16888 W LegacyRequestMapper: Only received metering rectangles with weight 0.
12-11 14:38:54.193 16766 16888 W LegacyRequestMapper: Only received metering rectangles with weight 0.
12-11 14:38:54.200 1203 16855 I QCamera : <HAL><INFO> setDualCameraMode: 14872: Dual camera mode set 0
$ `flutter analyze`
Nothing to fix
```
$ flutter doctor -v
```bash
[√] Flutter (Channel dev, v1.13.0, on Microsoft Windows [Version 10.0.18362.476], locale en-GB)
    • Flutter version 1.13.0 at c:\dev\env\flutter
    • Framework revision 09126abb22 (8 days ago), 2019-12-03 17:43:00 -0800
    • Engine revision 6179380243
    • Dart version 2.7.0 (build 2.7.0-dev.2.1 a4d799c402)
[√] Android toolchain - develop for Android devices (Android SDK version 28.0.3)
    • Android SDK at C:\dev\env\Android
    • Android NDK location not configured (optional; useful for native profiling support)
    • Platform android-29, build-tools 28.0.3
    • ANDROID_HOME = C:\dev\env\Android
    • Java binary at: C:\dev\tools\Android\Android Studio\jre\bin\java
    • Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b03)
    • All Android licenses accepted.
[√] Android Studio (version 3.5)
    • Android Studio at C:\dev\tools\Android\Android Studio
    • Flutter plugin version 42.0.1
    • Dart plugin version 191.8593
    • Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b03)
[√] VS Code, 64-bit edition (version 1.40.2)
    • VS Code at C:\Program Files\Microsoft VS Code
    • Flutter extension version 3.7.0
[√] Connected device (1 available)
    • Lenovo TB X605L • HA0K4WJ0 • android-arm64 • Android 9 (API 28)
• No issues found!
``` | e: device-specific,platform-android,p: camera,package,P2,team-android,triaged-android | low | Critical |
537,389,690 | godot | Sub-Viewport Update Mode UPDATE_ALWAYS does not update when not visible | **Godot version:**
Godot 3.2 Beta 3
**OS/device including version:**
Ubuntu 19.04
**Issue description:**
(TLDR : Update Always does not update always)
On a viewport set to Update Always, it still only updates when visible (which is the expected behaviour only of "Update When Visible").
**Steps to reproduce:**
Write a script that checks the values of a viewport texture, then try disabling the visibility of the viewport container's parent.
**Minimal reproduction project:**
[Test5.zip](https://github.com/godotengine/godot/files/3959474/Test5.zip)
Color value is printed every frame | bug,topic:core | low | Minor |
537,427,200 | react | DevTools: Profiler: Enable correlating console logs to profiler output (and vice versa) | A challenge with the React DevTools profiler is that it's too hard to correlate profiler results with console logs. This makes diagnosing and fixing render performance issues much more difficult.
For example, yesterday I was debugging an app where each render usually took 10ms-20ms except every 20-30 renders it'd take 600-800ms. It was frustrating that I didn't have a way to correlate the profiler UI (which told me which renders were problematic) with the verbose console log output that might tell me exactly what went wrong during those renders. Instead I had to comb through logs and guess which output came from "bad" renders. This was tedious and error-prone.
Anyway, my proposal is for React DevTools to make it easy to correlate profiler results with console log output. Both directions would be useful:
1) **navigate from logs to profiler** - if I see a suspicious line in the logs, I'd like to easily navigate to the profiler with the specific component/commit selected that was running when the line was logged.
2) **navigate from profiler to logs** - if I see a suspiciously long commit, I'd like to select it in the profiler pane and have an easy way to see associated logs.
I don't have a strong opinion about how this should be accomplished, but below are a few ideas to spur discussion.
A minimal solution could be something like this:
a) The profiler assigns a unique ID to each commit
b) The profiler's right pane would show the ID for each commit
c) React would add a new hook that'd return that ID
d) Userland code could include the ID in logs.
Just this minimal support would be a vast improvement.
If we wanted to make it smoother, here's a few ideas that could be layered on top.
1. **Profiler->Console Links** The ID in the profiler UI could be a hyperlink that'd open the console drawer and put the ID in the console's CMD+F search box. This would be one-click navigation from profiler to logs. I don't know if Chrome allows this kind of cross-pane control over the console UI, so this might not be practical.
2. **Console -> Profiler Links** For one-click navigation in the other direction, we could have a special URL format (e.g. `react://profiler/commit/2c1056b5-1be1-43d4-a105-1d840cf4f9c3`) that would enable userland code to emit links in the console that, when clicked, would navigate to the specific commit (in the profiler pane) that was active when the logs were emitted. Similar caveat as above: I'm not sure if Chrome extensions can be "deeplinked" like this.
3. **Log Components Where** Building on (1) and (2) above, we could enable console<->profiler linking without requiring changes to userland code. We could have a profiler setting (e.g. "log components where" with UX like "hide components where") that, when active, would emit a line to the console log at the start of each render of a matching component. The output would link back to the profiler, e.g.
`[RDT] Starting MyCoolComponent (react://profiler/commit/2c1056b51be143d4a1051d840cf4f9c3)`.
What do you think? I'm unfamiliar with React and RDT internals so there might be much better ways to solve log<->profiler correlation than my naive ideas above. But at least I wanted to call out the problem and encourage discussion about a solution.
| Type: Discussion,Component: Developer Tools,Type: Needs Investigation | low | Critical |
537,450,116 | godot | the cell index is not consequent when create tiles | **Godot version:** 3.1.2
**OS/device including version:**
**Issue description:**
When creating a tile using the new single tile button, the cell indices are not consecutive after deleting some tiles.
**Steps to reproduce:**
1. add some new tiles
2. delete some tiles
3. add some new tiles
The cell indices of the tiles are not consecutive; the index of a deleted cell is not reused.
**Minimal reproduction project:**
<img width="1174" alt="Screen Shot 2019-12-13 at 5 43 21 PM" src="https://user-images.githubusercontent.com/1130047/70790842-d4dedd80-1dd0-11ea-9b3d-8d22ef3fd2f1.png">
| discussion,topic:core,topic:2d | low | Major |
537,473,985 | react-native | [Android] Vector images from native resources is not working | Based on React Native [documentation](https://facebook.github.io/react-native/docs/images#images-from-hybrid-apps-resources), we should be able to fetch images from native resources.
This works great on iOS, but on Android you can't load [vector images](https://developer.android.com/guide/topics/graphics/vector-drawable-resources.html) using `uri`.
React Native version:
```
react-native: 0.61.5
```
## Steps To Reproduce
Placing a vector image named `image.xml` in the `drawable` folder and using this code does not work:
`<Image width={100} height={100} source={{ uri: 'image' }} style={{ width: 100, height: 100 }} />`
Placing the vector image in the `assets` folder and using this code does not work either:
`<Image width={100} height={100} source={{ uri: 'asset:/image.xml' }} style={{ width: 100, height: 100 }} />`
Note that placing `.jpg` or `.png` images works without any issues.
| Platform: Android,Component: Image,Bug,Never gets stale | medium | Major |
537,492,568 | terminal | Fullscreen mode + "Show desktop" not working as expected | # Environment
```
Windows build number: Microsoft Windows Version 10.0.18362.476
Windows Terminal version: microsoft-windows-terminal 0.7.3291.0
```
# Steps to reproduce
> Open Windows Terminal
> Toggle fullscreen (F11)
> Use "show desktop" (win + d)
> Use "show desktop" again
# Expected behavior
All windows should hide and the desktop should be shown (first trigger of "show desktop"). On the second trigger all windows should be shown again, and the terminal window should be visible on top again.
# Actual behavior
On the second trigger of "show desktop", another window gets shown on top. Also sometimes the taskbar disappears, and you can see parts of the terminal peeking through.
PS: When using win+d with an AutoHotkey remapping, this gets even more buggy. (I can provide details if necessary.)
| Help Wanted,Issue-Bug,Area-UserInterface,Product-Terminal,Priority-3 | low | Critical |
537,573,430 | pytorch | [feature request] Better handling for CUDA Out of Memory | Currently, users have little recourse when the CUDA allocator raises an OOM error due to fragmentation. Providing better handling when the allocator fails to allocate memory could alleviate some fragmentation related issues. For example, I occasionally make use of the following code to address fragmentation related issues:
```python
import gc

import torch


def refresh_cuda_memory():
    """
    Re-allocate all cuda memory to help alleviate fragmentation
    """
    # Run a full garbage collect first so any dangling tensors are released
    gc.collect()

    # Then move all tensors to the CPU
    locations = {}
    for obj in gc.get_objects():
        if not isinstance(obj, torch.Tensor):
            continue

        locations[obj] = obj.device
        obj.data = obj.data.cpu()
        if isinstance(obj, torch.nn.Parameter) and obj.grad is not None:
            obj.grad.data = obj.grad.cpu()

    # Now empty the cache to flush the allocator
    torch.cuda.empty_cache()

    # Finally move the tensors back to their associated GPUs
    for tensor, device in locations.items():
        tensor.data = tensor.to(device)
        if isinstance(tensor, torch.nn.Parameter) and tensor.grad is not None:
            tensor.grad.data = tensor.grad.to(device)
```
I have verified that this can fix some OOM problems due to fragmentation. For example, I was running into an issue when trying to save a model when using NVIDIA's [apex](https://github.com/NVIDIA/apex) package. Apparently calling `amp.state_dict()` was trying to allocate memory. I consistently got the following OOM error during the save:
```
CUDA out of memory. Tried to allocate 246.00 MiB (GPU 0; 10.76 GiB total capacity; 8.98 GiB already allocated; 212.00 MiB free; 722.71 MiB cached)
```
Clearly there was enough free memory, but fragmentation likely made it impossible to allocate a contiguous block. Adding a call to `refresh_cuda_memory` before calling `amp.state_dict()` alleviated the issue. This approach does not work when using variable-sized batches, since a user cannot know before running a batch whether it will result in an OOM error, and calling `refresh_cuda_memory` between batches is likely too slow. Rather, a user can only react after the OOM error occurs by refreshing memory and trying the batch again.
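That react-and-retry approach can be sketched as follows. This is a minimal sketch, not part of the proposal itself: `refresh` stands in for a function like `refresh_cuda_memory` above, and matching on the `"out of memory"` substring is an assumption about how the CUDA OOM `RuntimeError` message is formatted (it matches the error quoted above).

```python
def run_batch_with_retry(step, batch, refresh, max_retries=1):
    """Run step(batch); on a CUDA OOM error, refresh memory and retry."""
    for attempt in range(max_retries + 1):
        try:
            return step(batch)
        except RuntimeError as exc:
            # Assumption: a CUDA OOM surfaces as a RuntimeError whose
            # message contains "out of memory".
            if "out of memory" not in str(exc) or attempt == max_retries:
                raise
            refresh()  # e.g. refresh_cuda_memory()
```

The downside, as noted, is that this only reacts after the failed (and possibly slow) allocation attempt.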
Here are a couple of suggestions for how the CUDA caching allocator could address the issue:
1) The allocator itself might run a variant of the code above upon determining there is not enough available memory and then retry the allocation
and/or
2) Make a python hook available upon detecting OOM where users can free CUDA memory or try this refresh operation themselves. It would require care to ensure this does not result in infinite recursion.
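To illustrate the recursion concern in (2), here is a purely hypothetical sketch of what such a hook might look like from user code. None of these names exist in PyTorch; `set_oom_handler` and the handler signature are invented for illustration only:

```python
def make_oom_handler(refresh):
    """Build a hypothetical OOM hook; `refresh` frees/defragments CUDA memory."""
    state = {"in_handler": False}

    def oom_handler(device, alloc_size):
        # Guard against infinite recursion: if the refresh itself OOMs and
        # re-enters the hook, give up so the allocator raises as usual.
        if state["in_handler"]:
            return False
        state["in_handler"] = True
        try:
            refresh()
            return True  # ask the allocator to retry the failed allocation
        finally:
            state["in_handler"] = False

    return oom_handler

# Hypothetical registration call (does not exist in PyTorch today):
# torch.cuda.set_oom_handler(make_oom_handler(refresh_cuda_memory))
```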
If these seem like reasonable suggestions, I'm happy to try to implement this feature provided some guidance on what direction to take. Linking some of the people who discussed #1529. @vadimkantorov @ezyang @SsnL @VitalyFedyunin @soumith
cc @ngimel | module: cuda,triaged | low | Critical |
537,575,242 | go | cmd/compile: Finer grained visibility debug info for variables | Delve currently determines variable visibility by using a combination of DWARF lexical blocks and the variable declaration line: a variable is visible if the lexical block is active (current PC is contained in the range of the lexical block) and if the current line is greater than or equal to the declaration line. The reason for the "equal to" part is that variables declared in the header of a for statement are initialized on every iteration after the first one (there are other situations where this could happen, by manually setting breakpoints at specific addresses, but that's the most common case).
This is fine for the most part but when current line equals declaration line Delve will show uninitialized variables. As far as I can tell there is no way to improve this with the debug symbols currently exported by the compiler.
There has been [some interest from users of Delve in having finer-grained tracking of variable visibility](https://github.com/go-delve/delve/issues/1134). Since this isn't actually something Delve can do anything about, I think there should be a bug here about this.
DWARF offers two ways of doing this: the DW_AT_start_scope and loclists.
DW_AT_start_scope specifies an offset, from the start of the containing lexical block of a variable, where the variable begins being visible. For this to be useful there needs to be a rough correspondence between the order of statements in the source and the order of instructions in the compiled output. I don't think the SSA backend guarantees that.
Since the compiler already tracks the information needed for loclists and simply discards it when optimizations are disabled, I thought loclists could be an easier solution and decided to test what would happen if I [made the compiler emit loclists for non-optimized programs](https://go-review.googlesource.com/c/go/+/211278).
At first glance this works, however upon closer inspection it does not. I wrote [a program](https://github.com/aarzilli/loclist_experiment_check) (the program assumes there is an executable file called 'compile' in the current directory and that it has been compiled with optimizations disabled) to compare visibility as determined from loclists with visibility as determined from lexical blocks, and using this method we occasionally lose visibility of some variables at some statements. As a quick example, the variable argID in cmd/compile/internal/ssa.critical will not be visible during the call on line critical.go:62 (incidentally, I think this type of check could also be useful to test the coverage of loclists generated for optimized executables).
I don't know that the compiler should actually change anything, maybe Delve should just start showing variables as visible on the line after the declaration line.
cc @heschik @dr2chase @derekparker
@gopherbot label Debugging | NeedsInvestigation,Debugging,compiler/runtime | low | Critical |
537,579,397 | kubernetes | sample-apiserver returns "$type does not implement the protobuf marshalling interface and cannot be encoded to a protobuf message" error instead of falling back to second Accept type | **What happened**:
Creating an example object against the sample-apiserver returns the following error with a default client:
```
test/e2e/framework/framework.go:639
Dec 13 01:57:24.796: creating a new flunders resource
Unexpected error:
<*errors.StatusError | 0xc0011a4aa0>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {
SelfLink: "",
ResourceVersion: "",
Continue: "",
RemainingItemCount: nil,
},
Status: "Failure",
Message: "object *v1alpha1.Flunder does not implement the protobuf marshalling interface and cannot be encoded to a protobuf message",
Reason: "NotAcceptable",
Details: nil,
Code: 406,
},
}
object *v1alpha1.Flunder does not implement the protobuf marshalling interface and cannot be encoded to a protobuf message
occurred
test/e2e/apimachinery/aggregator.go:396
```
**What you expected to happen**:
The sample-apiserver would fall back to encoding the object in json
**How to reproduce it (as minimally and precisely as possible)**:
* Define a type that does not support protobuf
* Create a client that accepts protobuf,json
* Use that client to submit an object to the server
**Anything else we need to know?**:
Seen in a PR attempting to update the sample-apiserver used in e2e to 1.17.0 levels (https://github.com/kubernetes/kubernetes/pull/84735, https://prow.k8s.io/view/gcs/kubernetes-jenkins/pr-logs/pull/84735/pull-kubernetes-e2e-kind/1205296180716638211)
/sig api-machinery
/assign @smarterclayton
| kind/bug,sig/api-machinery,priority/important-longterm,lifecycle/frozen | low | Critical |
537,591,353 | go | x/crypto/ssh/knownhosts: cannot have multiple keys for same host | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.13.3 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GO111MODULE=""
GOARCH="amd64"
GOBIN="/Users/pjt/projects/go/bin"
GOCACHE="/Users/pjt/Library/Caches/go-build"
GOENV="/Users/pjt/Library/Application Support/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GONOPROXY=""
GONOSUMDB=""
GOOS="darwin"
GOPATH="/Users/pjt/projects/go"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/local/Cellar/go/1.13.3/libexec"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/local/Cellar/go/1.13.3/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
AR="ar"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/p9/y23xtnms6r90wsl5lsz2tkfh0000gq/T/go-build606914329=/tmp/go-build -gno-record-gcc-switches -fno-common"
GOROOT/bin/go version: go version go1.13.3 darwin/amd64
GOROOT/bin/go tool compile -V: compile version go1.13.3
uname -v: Darwin Kernel Version 19.0.0: Thu Oct 17 16:17:15 PDT 2019; root:xnu-6153.41.3~29/RELEASE_X86_64
ProductName: Mac OS X
ProductVersion: 10.15.1
BuildVersion: 19B88
lldb --version: lldb-1100.0.30.11
Apple Swift version 5.1.3 (swiftlang-1100.0.282.1 clang-1100.0.33.15)
</pre></details>
### What did you do?
<!--
If possible, provide a recipe for reproducing the error.
A complete runnable program is good.
A link on play.golang.org is best.
-->
I'm trying to open an ssh connection to `my-site.com:22` using `ssh.Dial` where the config uses the callback provided by `knownhosts.New("~/.ssh/known_hosts")`. I have a `known_hosts` file that looks like this:
```
my-site.com,host1.my-site.com,1.1.1.1 ecdsa-sha2-nistp256 <public-key-1>
my-site.com,host2.my-site.com,2.2.2.2 ecdsa-sha2-nistp256 <public-key-2>
```
### What did you expect to see?
Connection succeeds when either public key is provided.
### What did you see instead?
Connection only succeeds when I happen to connect to `host1.my-site.com`. If it tries to connect to `host2.my-site.com` I get a `KeyError`. I can connect to either host using the `ssh` program.
### Why did this happen?
A host key callback built with `knownhosts.New` rejects some hosts from the known_hosts file when there are multiple public keys of the same type. There is an [assertion](https://github.com/golang/crypto/blob/master/ssh/knownhosts/knownhosts.go#L304) in the `knownhosts` code which says "For each key algorithm, there can be one hostkey", which I don't believe is correct. I think we need to check keys from any line that matches the current host, rather than [only ones that have key types we haven't seen yet](https://github.com/golang/crypto/blob/master/ssh/knownhosts/knownhosts.go#L366). | NeedsFix | low | Critical |
537,616,665 | flutter | Support different dependencies when using flavors | ## Use case
When building or running with the `--flavor` option we often need to change the logic slightly, and in many cases, we need the ability to change the dependencies as well. Sometimes it's just a small library, but often times it's a large library that we would only want included in a specific flavor of the application. An ads SDK is a good example of this (perhaps a "Pro" and a "Lite" flavor where the only difference was ads).
## Proposal
Instead of spawning different versions of the same app for this (or creating our own pre-build tools to create a custom `pubspec.yaml`), we want to use flavors to specify different dependencies. After all, the vast majority of the code is the same, so it would be nice to be able to maintain it all in the same place without resorting to other build tricks or app composing strategies. Otherwise, the flavor feature is doing very little other than simply using the native builders with different build settings. In fact, if we need any special config, we still have to "roll our own" solution for that as well. So the way it's implemented right now can be vastly improved to help the Flutter community be even more productive.
NOTE: This is very similar to what is being requested in #21682 (different assets per flavor) but here I'm asking that we implement this same type of control for dependencies as well. | c: new feature,tool,c: performance,customer: crowd,c: proposal,perf: app size,P3,team-tool,triaged-tool | high | Critical |
537,628,310 | go | crypto/tls: add docs detailing the sequence before/after Read()/Write() during TLS handshake | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.13.5 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes.
### What did you expect to see?
Documentation (e.g. for Listen()) stating that the actual TLS handshake only occurs after Read(), Write(), or an explicit call to Handshake(), as described here: https://github.com/golang/go/blob/c2edcf4b1253fdebc13df8a25979904c3ef01c66/src/crypto/tls/conn.go#L1324
### What did you see instead?
Explicit information only in the Handshake() documentation: https://github.com/golang/go/blob/c2edcf4b1253fdebc13df8a25979904c3ef01c66/src/crypto/tls/conn.go#L1324
| Documentation,NeedsInvestigation | low | Minor |
537,637,032 | go | cmd/go: describe difference in `go mod verify` vs verification during 'go mod tidy' | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ gotip version
go version devel +ef3ef8fcdf Wed Dec 11 15:43:50 2019 +0000 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ gotip env
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOINSECURE=""
GONOPROXY=""
GONOSUMDB=""
GOOS="linux"
GOPATH="$HOME/go"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="$HOME/golang/go"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="$HOME/golang/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
AR="ar"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD="/tmp/s/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build777528967=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
Tainted a hash in the `go.sum` file in the workspace (e.g. by modifying the hash values manually) and ran `go mod tidy` and `go mod verify`.
### What did you expect to see?
`go mod verify` reports the problem.
### What did you see instead?
`go mod verify` reports everything is all good.
Thankfully, `go mod tidy` detects the problem.
```
$ go mod verify
all modules verified
$ go mod tidy
verifying golang.org/x/[email protected]: checksum mismatch
downloaded: h1:qgOY6WgZOaTkIIMiVjBQcw93ERBE4m30iBm00nkL0i8=
go.sum: h1:qgOY6WgZOaTkIIMiVjBQcw93ERBE4m30iBm00nkl0i8=
SECURITY ERROR
...
```
I guess it's because `go mod verify` checks only whether the version in the cache is valid. `go mod help verify` implies that already, but I found this behavior somewhat surprising. | NeedsInvestigation | low | Critical |
537,656,308 | vscode | "editor.suggestSelection": "first" does not work as described | <!-- Please search existing issues to avoid creating duplicates. -->
<!-- Also please test using the latest insiders build to make sure your issue has not already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- Use Help > Report Issue to prefill these. -->
**vscode version**
Version: 1.41.0
Commit: 9579eda04fdb3a9bba2750f15193e5fafe16b959
Date: 2019-12-11T17:58:38.338Z
Electron: 6.1.5
Chrome: 76.0.3809.146
Node.js: 12.4.0
V8: 7.6.303.31-electron.0
OS: Darwin x64 19.0.0
**system os version**
macOS 10.15.1 (19B88)
## Issue
`editor.suggestSelection` does not work as described.
When I type `lv`+Tab in the editor, the selected suggestion is not the `lv` snippet, which is at the very top; instead a function below it is selected. To select `lv` I have to use the up arrow key to change the selection.
This is particularly annoying because typing `iferr`, `for`, and `forr` previously always inserted the correct snippets, and I frequently used them; now it always inserts something random based on the current context.

I have the following user settings defined
```json
{
"editor.snippetSuggestions": "top",
"editor.suggestSelection": "first"
}
```
## Current behaviour
`lv+tab` inserts: `client.Redemptions.ListInvoice()`
## Expected behaviour
`lv+tab` should insert: `log.Printf("var: %#+v\n", var)`
<!-- Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: IntelliSense does not work at all when extensions are disabled
| snippets,suggest,under-discussion | low | Minor |
537,691,677 | flutter | [web] Shrine login page crash on assert for Chrome on Android | In order to see the console log, use the Remote Devices tab in Chrome developer tools.
Run the app:
`flutter run -d chrome --local-engine=host_debug_unopt --web-port=8080`
Go to the Shrine login page. Text editing raises an assertion exception in scene_builder.dart:
https://github.com/flutter/engine/blob/master/lib/web_ui/lib/src/engine/surface/scene_builder.dart#L227
Note: the whole page is very slow with DDC. This error is not visible in release mode since the asserts are not there.
Note: text editing on other pages still works.
| c: crash,engine,platform-web,c: rendering,P2,team-web,triaged-web | low | Critical |
537,715,677 | node | fs async functions (both callback and promises APIs) return Errors without stack trace | * **Version**: 13.3.0
* **Platform**: Windows 10 x64
```js
const fs = require('fs')
fs.readFile('nonexistentfile', (err) => console.log(err.stack))
```
**What is the expected output?**
`err.stack` should contain the error text and the stack trace.
Same as readFileSync function:
> Error: ENOENT: no such file or directory, open 'nonexistentfile'
> at Object.openSync (fs.js:446:3)
> at Object.readFileSync (fs.js:348:35)
> ...
**What do you see instead?**
`err.stack` only contains the error text:
> Error: ENOENT: no such file or directory, open 'C:\dev\projects\ntest\nonexistentfile'
`fs.writeFile` has the same problem.
Another strange thing is that the error text is also different (local/absolute path).
| confirmed-bug,fs | medium | Critical |
537,720,064 | pytorch | Script method can't call a scripted function when it is decorated with `@torch.no_grad` | ## π Bug
With the new JIT API, a script method can no longer call a scripted function when it is decorated with `@torch.no_grad`
## To Reproduce
This issue can be reproduced with the example below:
```
$ cat test1.py
import torch
import torch.nn as nn

@torch.jit.script
@torch.no_grad()
def foo(x):
    return x + 1

class Test1(torch.jit.ScriptModule):
    @torch.jit.script_method
    @torch.no_grad()
    def forward(self, x):
        return foo(x)

m1 = Test1()
print(m1.graph_for(torch.zeros(4, 3)))
$ python test1.py
graph(%self : ClassType<Test1>,
      %x.1 : Float(*, *)):
  %2 : int = prim::Constant[value=1]() # /home/selee/conda/envs/plugin/lib/python3.7/site-packages/torch/autograd/grad_mode.py:7:15
  %3 : Float(*, *) = aten::add(%x.1, %2, %2) # /home/selee/conda/envs/plugin/lib/python3.7/site-packages/torch/autograd/grad_mode.py:7:11
  return (%3)
$ cat test2.py
import torch
import torch.nn as nn

@torch.jit.script
@torch.no_grad()
def foo(x):
    return x + 1

class Test2(nn.Module):
    @torch.no_grad()
    def forward(self, x):
        return foo(x)

m2 = torch.jit.script(Test2())
print(m2.graph_for(torch.zeros(4, 3)))
$ python test2.py
Traceback (most recent call last):
  File "test2.py", line 14, in <module>
    m2 = torch.jit.script(Test2())
  File "/home/selee/conda/envs/plugin/lib/python3.7/site-packages/torch/jit/__init__.py", line 1203, in script
    return torch.jit.torch.jit._recursive.recursive_script(obj)
  File "/home/selee/conda/envs/plugin/lib/python3.7/site-packages/torch/jit/_recursive.py", line 173, in recursive_script
    return copy_to_script_module(mod, overload_stubs + stubs)
  File "/home/selee/conda/envs/plugin/lib/python3.7/site-packages/torch/jit/_recursive.py", line 95, in copy_to_script_module
    torch.jit._create_methods_from_stubs(script_module, stubs)
  File "/home/selee/conda/envs/plugin/lib/python3.7/site-packages/torch/jit/__init__.py", line 1423, in _create_methods_from_stubs
    self._c._create_methods(self, defs, rcbs, defaults)
RuntimeError:
undefined value foo:
at /home/selee/conda/envs/plugin/lib/python3.7/site-packages/torch/autograd/grad_mode.py:12:15
    @torch.no_grad()
    def forward(self, x):
        return foo(x)
               ~~~ <--- HERE
```
`test1.py` uses the deprecated JIT API, and `forward` in `Test1` can call the scripted `foo` as expected.
However, `test2.py` uses the new JIT API, and `forward` in `Test2` cannot call the scripted `foo`.
This looks like a regression with the new API.
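For what it's worth, one possible workaround sketch (not a fix for the regression itself) is to leave the function undecorated and enter `no_grad` inside the scripted body instead. This assumes a PyTorch version in which TorchScript supports `torch.no_grad()` as a context manager:

```python
import torch
import torch.nn as nn

@torch.jit.script
def foo(x):
    # Enter no_grad inside the scripted function instead of decorating it.
    with torch.no_grad():
        return x + 1

class Test2(nn.Module):
    def forward(self, x):
        return foo(x)

m = torch.jit.script(Test2())
```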
## Expected behavior
The new API is expected to support it as the old API does.
## Environment
- PyTorch Version (e.g., 1.0): 1.3.1
- OS (e.g., Linux): Ubuntu 18.04.3 LTS
- How you installed PyTorch (`conda`, `pip`, source): `conda`
- Build command you used (if compiling from source):
- Python version: 3.7
## Additional context
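To isolate the trigger, the sketch below (a variant of `test2.py` I put together, not a fix) simply drops the `@torch.no_grad()` decorator from `forward`; scripting then resolves the scripted `foo` as expected, which points at the decorator as the culprit:

```python
import torch
import torch.nn as nn

@torch.jit.script
def foo(x):
    return x + 1

class Test2NoDecorator(nn.Module):
    # Same module as Test2 but without @torch.no_grad() on forward;
    # torch.jit.script can then resolve the scripted `foo` normally.
    def forward(self, x):
        return foo(x)

m = torch.jit.script(Test2NoDecorator())
print(m(torch.zeros(4, 3)).shape)
```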
cc @suo | oncall: jit,triaged | low | Critical |
537,728,805 | pytorch | pinned memory requires DeviceGuard in multi-process envs | `THCCachingHostAllocator` mentions that:
https://github.com/pytorch/pytorch/blob/af638ad5d7c8fc9ff97f0ad1cd2bbcfa3ced514e/aten/src/THC/THCCachingHostAllocator.cpp#L85-L88
However, @baobablyh discovered that if multiple processes allocate pinned memory with the same default device, the program can hang either at pinned-memory tensor creation or at some subsequent D2H communication (see #31095 for more discussion, #28883 and #30945 for code). We haven't found the root cause yet, but it looks like pinned memory has a side effect on the current device. While we continue investigating, we should add an `N.B.` to the comments above and to the `pin_memory` API docs to warn future users/devs.
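Until the root cause is found, a defensive sketch (my assumption, not a verified fix; `alloc_pinned_on` is a hypothetical helper) is to guard the device explicitly around pinned-memory allocation in each process:

```python
import torch

def alloc_pinned_on(device_index: int, shape):
    """Hypothetical helper: allocate pinned memory with the target device
    made current, so any device-side bookkeeping done by the caching host
    allocator happens on the intended device rather than device 0."""
    if not torch.cuda.is_available():
        # Pageable fallback on CPU-only hosts; pinning needs a CUDA runtime.
        return torch.empty(shape)
    with torch.cuda.device(device_index):
        return torch.empty(shape, pin_memory=True)

t = alloc_pinned_on(0, (4, 3))
print(tuple(t.shape))
```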
cc @ngimel | module: docs,module: multiprocessing,module: cuda,triaged | low | Minor |
537,737,813 | godot | AnimationPlayer - Changing material properties does not work | **Godot version:**
3.1.2 stable official
**OS/device including version:**
Ubuntu 18
**Issue description:**
If you use an AnimationPlayer to animate a material property (for example, emission) and another track changes the material itself during the animation, the new material does not receive the animated property value.
**Steps to reproduce:**
Create an animation for a model with a certain material, animate a material property such as emission, and on another track change the material itself during the animation. | bug,topic:core | low | Major |
537,753,192 | go | math/rand: documentation on rng.go is lacking important context and information |
### Does this issue reproduce with the latest release?
Yes
### What did you expect to see?
Details about the algorithm used, links to source material, context about why this algorithm was used and how it differs from other methods.
### What did you see instead?
https://github.com/golang/go/blob/c2edcf4b1253fdebc13df8a25979904c3ef01c66/src/math/rand/rng.go#L7-L12
A single vague comment listing two names, without the title of sources used, the name of the algorithm used, or any details whatsoever. The names alone don't seem to be sufficient for finding the source material (at least based on a good amount of googling)
It seems like I'm not the only person having this issue:
https://www.seehuhn.de/blog/134.html
Could you provide some more context on what algorithm is being used, why it was chosen, and how it differs from something like the Mersenne Twister? I think it would be valuable knowledge, and a good addition to that documentation. | Documentation,NeedsInvestigation | low | Major |
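For what it's worth, `rng.go` appears to implement an additive lagged-Fibonacci generator over a 607-word state with tap 273 (its `rngLen`/`rngTap` constants). A minimal sketch of that recurrence follows; the LCG used for seeding here is purely illustrative (an assumption — Go's actual seeding procedure is different):

```python
RNG_LEN, RNG_TAP = 607, 273          # lags as in math/rand's rng.go
MASK64 = (1 << 64) - 1

class ALFG:
    """Additive lagged-Fibonacci generator sketch with rng.go's lags."""

    def __init__(self, seed: int):
        # Illustrative seeding via a 64-bit LCG (assumption, not Go's real one).
        s = seed & MASK64
        self.vec = []
        for _ in range(RNG_LEN):
            s = (s * 6364136223846793005 + 1442695040888963407) & MASK64
            self.vec.append(s)
        # Two cursors kept a fixed distance apart, as in rng.go.
        self.tap = 0
        self.feed = RNG_LEN - RNG_TAP

    def next_u64(self) -> int:
        # Additive recurrence over the 607-word state (mod 2^64); the new
        # value overwrites the oldest slot and becomes part of the state.
        self.tap = (self.tap - 1) % RNG_LEN
        self.feed = (self.feed - 1) % RNG_LEN
        x = (self.vec[self.feed] + self.vec[self.tap]) & MASK64
        self.vec[self.feed] = x
        return x

if __name__ == "__main__":
    g = ALFG(1)
    print([g.next_u64() for _ in range(3)])
```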
537,778,523 | pytorch | Failed to config caffe2_rocksdb in cmake | ## Bug
Got this today.
This should be something new, since I was able to build with the exact same script a few days ago.
RocksDB was freshly compiled with CMake from the latest release tag (6.5.2) a few hours before this.
Issue reproduced on both Ubuntu 18.04 and CentOS 7.
```
#9 537.6 CMake Error at modules/rocksdb/CMakeLists.txt:58 (add_library):
#9 537.6 Target "caffe2_rocksdb" links to target "snappy::snappy" but the target was
#9 537.6 not found. Perhaps a find_package() call is missing for an IMPORTED
#9 537.6 target, or an ALIAS target is missing?
#9 537.6
#9 537.6
#9 537.6 CMake Error at modules/rocksdb/CMakeLists.txt:58 (add_library):
#9 537.6 Target "caffe2_rocksdb" links to target "lz4::lz4" but the target was not
#9 537.6 found. Perhaps a find_package() call is missing for an IMPORTED target, or
#9 537.6 an ALIAS target is missing?
#9 537.6
#9 537.6
#9 537.6 CMake Error at modules/rocksdb/CMakeLists.txt:58 (add_library):
#9 537.6 Target "caffe2_rocksdb" links to target "zstd::zstd" but the target was not
#9 537.6 found. Perhaps a find_package() call is missing for an IMPORTED target, or
#9 537.6 an ALIAS target is missing?
#9 537.6
#9 537.6
#9 537.6 CMake Error at modules/rocksdb/CMakeLists.txt:58 (add_library):
#9 537.6 Target "caffe2_rocksdb" links to target "NUMA::NUMA" but the target was not
#9 537.6 found. Perhaps a find_package() call is missing for an IMPORTED target, or
#9 537.6 an ALIAS target is missing?
```
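As the errors themselves suggest, the imported targets referenced by RocksDB's exported CMake config are not visible when `caffe2_rocksdb` is linked. One plausible, untested direction for `modules/rocksdb/CMakeLists.txt` is sketched below; the module names and path placeholder are assumptions based on RocksDB 6.5.x's bundled find modules, which define imported targets such as `snappy::snappy`, `lz4::lz4`, `zstd::zstd`, and `NUMA::NUMA`:

```cmake
# Hypothetical sketch only: make the imported targets that RocksDB's
# exported config references resolvable before add_library(caffe2_rocksdb).
# <path-to-rocksdb-source> is a placeholder for wherever RocksDB was built.
list(APPEND CMAKE_MODULE_PATH "<path-to-rocksdb-source>/cmake/modules")
find_package(snappy REQUIRED)
find_package(lz4 REQUIRED)
find_package(zstd REQUIRED)
find_package(NUMA REQUIRED)
```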
## Environment
- PyTorch Version (e.g., 1.0): master
- OS (e.g., Linux): CentOS 7 / Ubuntu 18.04
- How you installed PyTorch (`conda`, `pip`, source): source
- Build command you used (if compiling from source): cmake+ninja+gcc8
- Python version: 3.6
- CUDA/cuDNN version: 10.2
## Additional context
Screenshot

Full log in case needed:
```
#9 515.1 ++ cmake -DATEN_NO_TEST=ON -DBLAS=MKL -DBUILD_BINARY=ON -DBUILD_CUSTOM_PROTOBUF=OFF -DBUILD_SHARED_LIBS=ON -DBUILD_TEST=ON -DCMAKE_BUILD_TYPE=Release -DCMAKE_C_COMPILER=gcc-8 -DCMAKE_CXX_COMPILER=g++-8 -DCMAKE_C_COMPILER_LAUNCHER=ccache -DCMAKE_CXX_COMPILER_LAUNCHER=ccache -DCMAKE_CUDA_COMPILER_LAUNCHER=ccache '-DCMAKE_C_FLAGS=-fdebug-prefix-map='\''/tmp/scratch'\''='\''/usr/local/src'\'' -g -march=haswell -mtune=generic' '-DCMAKE_CXX_FLAGS=-fdebug-prefix-map='\''/tmp/scratch'\''='\''/usr/local/src'\'' -g -march=haswell -mtune=generic' -DCMAKE_INSTALL_PREFIX=/tmp/scratch/pytorch/install.ha2MkvZc8O/root/usr/local -DCMAKE_POLICY_DEFAULT_CMP0003=NEW -DCMAKE_POLICY_DEFAULT_CMP0060=NEW -DCMAKE_VERBOSE_MAKEFILE=ON -DCPUINFO_BUILD_TOOLS=ON -DINSTALL_TEST=ON -DPYTHON_EXECUTABLE=/usr/bin/python3 '-DTORCH_CUDA_ARCH_LIST=Pascal;Volta' -DUSE_FBGEMM=ON -DUSE_GFLAGS=ON -DUSE_GLOG=ON -DUSE_LEVELDB=ON -DUSE_LMDB=ON -DUSE_MKLDNN=ON -DUSE_NATIVE_ARCH=OFF -DUSE_OBSERVERS=ON -DUSE_OPENCV=ON -DUSE_OPENMP=ON -DUSE_PROF=ON -DUSE_ROCKSDB=ON -DUSE_SYSTEM_EIGEN_INSTALL=ON -DUSE_SYSTEM_NCCL=ON -DUSE_TENSORRT=OFF -DUSE_ZMQ=ON -DUSE_ZSTD=OFF -DWITH_BLAS=mkl -GNinja ..
#9 515.2 -- The CXX compiler identification is GNU 8.3.0
#9 515.3 -- The C compiler identification is GNU 8.3.0
#9 515.3 -- Check for working CXX compiler: /usr/bin/g++-8
#9 515.4 -- Check for working CXX compiler: /usr/bin/g++-8 -- works
#9 515.4 -- Detecting CXX compiler ABI info
#9 515.5 -- Detecting CXX compiler ABI info - done
#9 515.6 -- Detecting CXX compile features
#9 515.6 -- Detecting CXX compile features - done
#9 515.6 -- Check for working C compiler: /usr/bin/gcc-8
#9 515.6 -- Check for working C compiler: /usr/bin/gcc-8 -- works
#9 515.6 -- Detecting C compiler ABI info
#9 515.7 -- Detecting C compiler ABI info - done
#9 515.8 -- Detecting C compile features
#9 515.8 -- Detecting C compile features - done
#9 515.8 -- Performing Test COMPILER_WORKS
#9 515.9 -- Performing Test COMPILER_WORKS - Success
#9 515.9 -- Performing Test SUPPORT_GLIBCXX_USE_C99
#9 516.1 -- Performing Test SUPPORT_GLIBCXX_USE_C99 - Success
#9 516.1 -- Performing Test CAFFE2_EXCEPTION_PTR_SUPPORTED
#9 516.3 -- Performing Test CAFFE2_EXCEPTION_PTR_SUPPORTED - Success
#9 516.3 -- std::exception_ptr is supported.
#9 516.3 -- Performing Test CAFFE2_IS_NUMA_AVAILABLE
#9 516.4 -- Performing Test CAFFE2_IS_NUMA_AVAILABLE - Success
#9 516.4 -- NUMA is available
#9 516.4 -- Performing Test CAFFE2_NEED_TO_TURN_OFF_DEPRECATION_WARNING
#9 516.7 -- Performing Test CAFFE2_NEED_TO_TURN_OFF_DEPRECATION_WARNING - Success
#9 516.7 -- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX2_EXTENSIONS
#9 517.0 -- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX2_EXTENSIONS - Success
#9 517.0 -- Current compiler supports avx2 extension. Will build perfkernels.
#9 517.0 -- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX512_EXTENSIONS
#9 517.2 -- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX512_EXTENSIONS - Success
#9 517.2 -- Current compiler supports avx512f extension. Will build fbgemm.
#9 517.2 -- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY
#9 517.6 -- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY - Success
#9 517.6 -- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY
#9 517.7 -- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY - Success
#9 517.7 -- Performing Test COMPILER_SUPPORTS_RDYNAMIC
#9 517.7 -- Performing Test COMPILER_SUPPORTS_RDYNAMIC - Success
#9 517.9 -- Found ZLIB: /usr/lib/x86_64-linux-gnu/libz.so (found version "1.2.11")
#9 517.9 -- Caffe2: Found protobuf with new-style protobuf targets.
#9 517.9 -- Caffe2 protobuf include directory: /usr/local/include
#9 517.9 -- Looking for pthread.h
#9 517.9 -- Looking for pthread.h - found
#9 517.9 -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
#9 518.0 -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
#9 518.0 -- Looking for pthread_create in pthreads
#9 518.0 -- Looking for pthread_create in pthreads - not found
#9 518.0 -- Looking for pthread_create in pthread
#9 518.1 -- Looking for pthread_create in pthread - found
#9 518.1 -- Found Threads: TRUE
#9 518.1 -- Trying to find preferred BLAS backend of choice: MKL
#9 518.1 -- MKL_THREADING = OMP
#9 518.1 -- Looking for sys/types.h
#9 518.2 -- Looking for sys/types.h - found
#9 518.2 -- Looking for stdint.h
#9 518.2 -- Looking for stdint.h - found
#9 518.2 -- Looking for stddef.h
#9 518.3 -- Looking for stddef.h - found
#9 518.3 -- Check size of void*
#9 518.4 -- Check size of void* - done
#9 518.4 -- Checking for [mkl_intel_lp64 - mkl_gnu_thread - mkl_core - gomp - pthread - m - dl]
#9 518.4 -- Library mkl_intel_lp64: /opt/intel/mkl/lib/intel64/libmkl_intel_lp64.so
#9 518.4 -- Library mkl_gnu_thread: /opt/intel/mkl/lib/intel64/libmkl_gnu_thread.so
#9 518.4 -- Library mkl_core: /opt/intel/mkl/lib/intel64/libmkl_core.so
#9 518.8 -- Library gomp: -fopenmp
#9 518.8 -- Library pthread: /usr/lib/x86_64-linux-gnu/libpthread.so
#9 518.8 -- Library m: /usr/lib/x86_64-linux-gnu/libm.so
#9 518.8 -- Library dl: /usr/lib/x86_64-linux-gnu/libdl.so
#9 518.8 -- Looking for cblas_sgemm
#9 519.0 -- Looking for cblas_sgemm - found
#9 519.1 -- MKL library found
#9 519.1 -- MKL libraries: /opt/intel/mkl/lib/intel64/libmkl_intel_lp64.so;/opt/intel/mkl/lib/intel64/libmkl_gnu_thread.so;/opt/intel/mkl/lib/intel64/libmkl_core.so;-fopenmp;/usr/lib/x86_64-linux-gnu/libpthread.so;/usr/lib/x86_64-linux-gnu/libm.so;/usr/lib/x86_64-linux-gnu/libdl.so
#9 519.1 -- MKL include directory: /opt/intel/mkl/include
#9 519.1 -- MKL OpenMP type: GNU
#9 519.1 -- MKL OpenMP library: -fopenmp
#9 519.1 -- The ASM compiler identification is GNU
#9 519.1 -- Found assembler: /usr/bin/gcc-8
#9 519.1 -- Check if compiler accepts -pthread
#9 519.2 -- Check if compiler accepts -pthread - yes
#9 519.3 -- Brace yourself, we are building NNPACK
#9 519.3 -- Performing Test NNPACK_ARCH_IS_X86_32
#9 519.4 -- Performing Test NNPACK_ARCH_IS_X86_32 - Failed
#9 519.4 -- Found PythonInterp: /usr/bin/python3 (found version "3.6.9")
#9 519.4 -- NNPACK backend is x86-64
#9 519.4 -- Caffe2: Found gflags with new-style gflags target.
#9 519.4 -- Caffe2: Found glog with new-style glog target.
#9 519.5 -- LLVM FileCheck Found: /usr/local/bin/FileCheck
#9 519.5 -- Found Git: /usr/bin/git (found version "2.17.1")
#9 519.5 -- git Version: v1.4.0-505be96a
#9 519.5 -- Version: 1.4.0
#9 519.5 -- Performing Test HAVE_CXX_FLAG_STD_CXX11
#9 519.6 -- Performing Test HAVE_CXX_FLAG_STD_CXX11 - Success
#9 519.6 -- Performing Test HAVE_CXX_FLAG_WALL
#9 519.7 -- Performing Test HAVE_CXX_FLAG_WALL - Success
#9 519.7 -- Performing Test HAVE_CXX_FLAG_WEXTRA
#9 519.7 -- Performing Test HAVE_CXX_FLAG_WEXTRA - Success
#9 519.8 -- Performing Test HAVE_CXX_FLAG_WSHADOW
#9 519.8 -- Performing Test HAVE_CXX_FLAG_WSHADOW - Success
#9 519.8 -- Performing Test HAVE_CXX_FLAG_WERROR
#9 519.9 -- Performing Test HAVE_CXX_FLAG_WERROR - Success
#9 519.9 -- Performing Test HAVE_CXX_FLAG_PEDANTIC
#9 520.0 -- Performing Test HAVE_CXX_FLAG_PEDANTIC - Success
#9 520.0 -- Performing Test HAVE_CXX_FLAG_PEDANTIC_ERRORS
#9 520.1 -- Performing Test HAVE_CXX_FLAG_PEDANTIC_ERRORS - Success
#9 520.1 -- Performing Test HAVE_CXX_FLAG_WSHORTEN_64_TO_32
#9 520.1 -- Performing Test HAVE_CXX_FLAG_WSHORTEN_64_TO_32 - Failed
#9 520.1 -- Performing Test HAVE_CXX_FLAG_WFLOAT_EQUAL
#9 520.2 -- Performing Test HAVE_CXX_FLAG_WFLOAT_EQUAL - Success
#9 520.2 -- Performing Test HAVE_CXX_FLAG_FSTRICT_ALIASING
#9 520.3 -- Performing Test HAVE_CXX_FLAG_FSTRICT_ALIASING - Success
#9 520.3 -- Performing Test HAVE_CXX_FLAG_WNO_DEPRECATED_DECLARATIONS
#9 520.3 -- Performing Test HAVE_CXX_FLAG_WNO_DEPRECATED_DECLARATIONS - Success
#9 520.3 -- Performing Test HAVE_CXX_FLAG_WSTRICT_ALIASING
#9 520.4 -- Performing Test HAVE_CXX_FLAG_WSTRICT_ALIASING - Success
#9 520.4 -- Performing Test HAVE_CXX_FLAG_WD654
#9 520.5 -- Performing Test HAVE_CXX_FLAG_WD654 - Failed
#9 520.5 -- Performing Test HAVE_CXX_FLAG_WTHREAD_SAFETY
#9 520.6 -- Performing Test HAVE_CXX_FLAG_WTHREAD_SAFETY - Failed
#9 520.6 -- Performing Test HAVE_CXX_FLAG_COVERAGE
#9 520.6 -- Performing Test HAVE_CXX_FLAG_COVERAGE - Success
#9 520.6 -- Performing Test HAVE_STD_REGEX
#9 520.6 -- Performing Test HAVE_STD_REGEX
#9 523.5 -- Performing Test HAVE_STD_REGEX -- success
#9 523.5 -- Performing Test HAVE_GNU_POSIX_REGEX
#9 523.5 -- Performing Test HAVE_GNU_POSIX_REGEX
#9 523.9 -- Performing Test HAVE_GNU_POSIX_REGEX -- failed to compile
#9 523.9 -- Performing Test HAVE_POSIX_REGEX
#9 523.9 -- Performing Test HAVE_POSIX_REGEX
#9 524.3 -- Performing Test HAVE_POSIX_REGEX -- success
#9 524.3 -- Performing Test HAVE_STEADY_CLOCK
#9 524.3 -- Performing Test HAVE_STEADY_CLOCK
#9 524.4 -- Performing Test HAVE_STEADY_CLOCK -- success
#9 524.5 -- Performing Test COMPILER_SUPPORTS_AVX512
#9 524.6 -- Performing Test COMPILER_SUPPORTS_AVX512 - Success
#9 524.6 -- Found OpenMP_C: -fopenmp (found version "4.5")
#9 524.6 -- Found OpenMP_CXX: -fopenmp (found version "4.5")
#9 524.7 -- Found OpenMP: TRUE (found version "4.5")
#9 524.7 -- Performing Test __CxxFlag__fmerge_all_constants
#9 524.8 -- Performing Test __CxxFlag__fmerge_all_constants - Success
#9 524.8 ** AsmJit Summary **
#9 524.8 ASMJIT_DIR=/tmp/scratch/pytorch/third_party/fbgemm/third_party/asmjit
#9 524.8 ASMJIT_TEST=FALSE
#9 524.8 ASMJIT_TARGET_TYPE=STATIC
#9 524.8 ASMJIT_DEPS=pthread;rt
#9 524.8 ASMJIT_LIBS=asmjit;pthread;rt
#9 524.8 ASMJIT_CFLAGS=-DASMJIT_STATIC
#9 524.8 ASMJIT_PRIVATE_CFLAGS=-Wall;-Wextra;-fno-math-errno;-fno-threadsafe-statics;-DASMJIT_STATIC
#9 524.8 ASMJIT_PRIVATE_CFLAGS_DBG=
#9 524.8 ASMJIT_PRIVATE_CFLAGS_REL=-O2;-fmerge-all-constants
#9 524.8 -- Found LMDB: /usr/local/include
#9 524.8 -- Found lmdb (include: /usr/local/include, library: /usr/local/lib/liblmdb.so)
#9 524.8 -- Found LevelDB: /usr/local/include
#9 524.8 -- Found LevelDB (include: /usr/local/include, library: /usr/local/lib/libleveldb.a)
#9 524.8 -- Found Snappy: /usr/local/include
#9 524.8 -- Found Snappy (include: /usr/local/include, library: /usr/local/lib/libsnappy.so)
#9 524.8 -- Found Numa: /usr/include
#9 524.8 -- Found Numa (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libnuma.so)
#9 525.1 -- Could NOT find ZMQ (missing: ZMQ_INCLUDE_DIR ZMQ_LIBRARIES)
#9 525.1 CMake Warning at cmake/Dependencies.cmake:618 (message):
#9 525.1 Not compiling with ZMQ. Suppress this warning with -DUSE_ZMQ=OFF
#9 525.1 Call Stack (most recent call first):
#9 525.1 CMakeLists.txt:390 (include)
#9 525.1
#9 525.1
#9 525.2 -- OpenCV found (/usr/local/lib/cmake/opencv4)
#9 525.2 -- Found system Eigen at /usr/local/include/eigen3
#9 525.2 Python 3.6.9
#9 525.2 -- Setting Python's include dir to /usr/include/python3.6m from distutils.sysconfig
#9 525.2 -- Setting Python's library to /usr/lib/python3.6
#9 525.3 -- Found PythonInterp: /usr/bin/python3 (found suitable version "3.6.9", minimum required is "2.7")
#9 525.3 -- Found PythonLibs: /usr/lib/python3.6 (found suitable version "3.6.9", minimum required is "2.7")
#9 525.8 -- Found NumPy: /usr/local/lib/python3.6/dist-packages/numpy/core/include (found version "1.17.4")
#9 525.8 -- NumPy ver. 1.17.4 found (include: /usr/local/lib/python3.6/dist-packages/numpy/core/include)
#9 525.8 -- Found PythonInterp: /usr/bin/python3 (found version "3.6.9")
#9 525.9 -- Found PythonLibs: python3.6m
#9 525.9 -- System pybind11 found
#9 525.9 -- pybind11 include dirs: /usr/local/include;/usr/include/python3.6m
#9 526.0 -- Could NOT find MPI_C (missing: MPI_C_LIB_NAMES MPI_C_HEADER_DIR MPI_C_WORKS)
#9 526.1 -- Could NOT find MPI_CXX (missing: MPI_CXX_LIB_NAMES MPI_CXX_HEADER_DIR MPI_CXX_WORKS)
#9 526.1 -- Could NOT find MPI (missing: MPI_C_FOUND MPI_CXX_FOUND)
#9 526.1 CMake Warning at cmake/Dependencies.cmake:838 (message):
#9 526.1 Not compiling with MPI. Suppress this warning with -DUSE_MPI=OFF
#9 526.1 Call Stack (most recent call first):
#9 526.1 CMakeLists.txt:390 (include)
#9 526.1
#9 526.1
#9 526.1 -- Adding OpenMP CXX_FLAGS: -fopenmp
#9 526.1 -- Will link against OpenMP libraries: /usr/lib/gcc/x86_64-linux-gnu/8/libgomp.so;/usr/lib/x86_64-linux-gnu/libpthread.so
#9 526.2 -- Found CUDA: /usr/local/cuda (found version "10.2")
#9 526.2 -- Caffe2: CUDA detected: 10.2
#9 526.2 -- Caffe2: CUDA nvcc is: /usr/local/cuda/bin/nvcc
#9 526.2 -- Caffe2: CUDA toolkit directory: /usr/local/cuda
#9 526.3 -- Caffe2: Header version is: 10.2
#9 526.3 -- Found CUDNN: /usr/lib/x86_64-linux-gnu/libcudnn.so
#9 526.3 -- Found cuDNN: v7.6.5 (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libcudnn.so)
#9 526.3 -- Added CUDA NVCC flags for: -gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70
#9 526.3 -- Found NCCL: /usr/include
#9 526.3 -- Determining NCCL version from /usr/include/nccl.h...
#9 526.3 -- Looking for NCCL_VERSION_CODE
#9 526.4 -- Looking for NCCL_VERSION_CODE - not found
#9 526.4 -- NCCL version < 2.3.5-5
#9 526.4 -- Found NCCL (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libnccl.so)
#9 526.4 -- Could NOT find CUB (missing: CUB_INCLUDE_DIR)
#9 526.4 CMake Warning (dev) at third_party/gloo/CMakeLists.txt:21 (option):
#9 526.4 Policy CMP0077 is not set: option() honors normal variables. Run "cmake
#9 526.4 --help-policy CMP0077" for policy details. Use the cmake_policy command to
#9 526.4 set the policy and suppress this warning.
#9 526.4
#9 526.4 For compatibility with older versions of CMake, option is clearing the
#9 526.4 normal variable 'BUILD_BENCHMARK'.
#9 526.4 This warning is for project developers. Use -Wno-dev to suppress it.
#9 526.4
#9 526.4 -- Found CUDA: /usr/local/cuda (found suitable version "10.2", minimum required is "7.0")
#9 526.4 -- CUDA detected: 10.2
#9 526.4 -- Could NOT find NCCL (missing: NCCL_INCLUDE_DIR)
#9 526.4 CMake Warning at third_party/gloo/cmake/Dependencies.cmake:96 (message):
#9 526.4 Not compiling with NCCL support. Suppress this warning with
#9 526.4 -DUSE_NCCL=OFF.
#9 526.4 Call Stack (most recent call first):
#9 526.4 third_party/gloo/CMakeLists.txt:56 (include)
#9 526.4
#9 526.4
#9 526.5 CMake Warning at cmake/Dependencies.cmake:1083 (find_package):
#9 526.5 By not providing "Findhtrace.cmake" in CMAKE_MODULE_PATH this project has
#9 526.5 asked CMake to find a package configuration file provided by "htrace", but
#9 526.5 CMake did not find one.
#9 526.5
#9 526.5 Could not find a package configuration file provided by "htrace" with any
#9 526.5 of the following names:
#9 526.5
#9 526.5 htraceConfig.cmake
#9 526.5 htrace-config.cmake
#9 526.5
#9 526.5 Add the installation prefix of "htrace" to CMAKE_PREFIX_PATH or set
#9 526.5 "htrace_DIR" to a directory containing one of the above files. If "htrace"
#9 526.5 provides a separate development package or SDK, be sure it has been
#9 526.5 installed.
#9 526.5 Call Stack (most recent call first):
#9 526.5 CMakeLists.txt:390 (include)
#9 526.5
#9 526.5
#9 526.5 CMake Warning at cmake/Dependencies.cmake:1087 (message):
#9 526.5 htrace not found. Caffe2 will build without htrace prof
#9 526.5 Call Stack (most recent call first):
#9 526.5 CMakeLists.txt:390 (include)
#9 526.5
#9 526.5
#9 526.5 CMake Warning at cmake/Dependencies.cmake:1106 (message):
#9 526.5 Metal is only used in ios builds.
#9 526.5 Call Stack (most recent call first):
#9 526.5 CMakeLists.txt:390 (include)
#9 526.5
#9 526.5
#9 526.5 Generated: /tmp/scratch/pytorch/build/third_party/onnx/onnx/onnx_onnx_torch-ml.proto
#9 526.5 Generated: /tmp/scratch/pytorch/build/third_party/onnx/onnx/onnx-operators_onnx_torch-ml.proto
#9 526.6 --
#9 526.6 -- ******** Summary ********
#9 526.6 -- CMake version : 3.16.1
#9 526.6 -- CMake command : /usr/local/bin/cmake
#9 526.6 -- System : Linux
#9 526.6 -- C++ compiler : /usr/bin/g++-8
#9 526.6 -- C++ compiler version : 8.3.0
#9 526.6 -- CXX flags : -fdebug-prefix-map='/tmp/scratch'='/usr/local/src' -g -march=haswell -mtune=generic -fvisibility-inlines-hidden -fopenmp -Wnon-virtual-dtor
#9 526.6 -- Build type : Release
#9 526.6 -- Compile definitions : TH_BLAS_MKL;ONNX_ML=1
#9 526.6 -- CMAKE_PREFIX_PATH : /usr/local/cuda;/usr/local/cuda
#9 526.6 -- CMAKE_INSTALL_PREFIX : /tmp/scratch/pytorch/install.ha2MkvZc8O/root/usr/local
#9 526.6 -- CMAKE_MODULE_PATH : /tmp/scratch/pytorch/cmake/Modules;/usr/local/share/cmake/pybind11;/tmp/scratch/pytorch/cmake/public/../Modules_CUDA_fix
#9 526.6 --
#9 526.6 -- ONNX version : 1.6.0
#9 526.6 -- ONNX NAMESPACE : onnx_torch
#9 526.6 -- ONNX_BUILD_TESTS : OFF
#9 526.6 -- ONNX_BUILD_BENCHMARKS : OFF
#9 526.6 -- ONNX_USE_LITE_PROTO : OFF
#9 526.6 -- ONNXIFI_DUMMY_BACKEND : OFF
#9 526.6 -- ONNXIFI_ENABLE_EXT : OFF
#9 526.6 --
#9 526.6 -- Protobuf compiler :
#9 526.6 -- Protobuf includes :
#9 526.6 -- Protobuf libraries :
#9 526.6 -- BUILD_ONNX_PYTHON : OFF
#9 526.6 --
#9 526.6 -- ******** Summary ********
#9 526.6 -- CMake version : 3.16.1
#9 526.6 -- CMake command : /usr/local/bin/cmake
#9 526.6 -- System : Linux
#9 526.6 -- C++ compiler : /usr/bin/g++-8
#9 526.6 -- C++ compiler version : 8.3.0
#9 526.6 -- CXX flags : -fdebug-prefix-map='/tmp/scratch'='/usr/local/src' -g -march=haswell -mtune=generic -fvisibility-inlines-hidden -fopenmp -Wnon-virtual-dtor
#9 526.6 -- Build type : Release
#9 526.6 -- Compile definitions : TH_BLAS_MKL;ONNX_ML=1
#9 526.6 -- CMAKE_PREFIX_PATH : /usr/local/cuda;/usr/local/cuda
#9 526.6 -- CMAKE_INSTALL_PREFIX : /tmp/scratch/pytorch/install.ha2MkvZc8O/root/usr/local
#9 526.6 -- CMAKE_MODULE_PATH : /tmp/scratch/pytorch/cmake/Modules;/usr/local/share/cmake/pybind11;/tmp/scratch/pytorch/cmake/public/../Modules_CUDA_fix
#9 526.6 --
#9 526.6 -- ONNX version : 1.4.1
#9 526.6 -- ONNX NAMESPACE : onnx_torch
#9 526.6 -- ONNX_BUILD_TESTS : OFF
#9 526.6 -- ONNX_BUILD_BENCHMARKS : OFF
#9 526.6 -- ONNX_USE_LITE_PROTO : OFF
#9 526.6 -- ONNXIFI_DUMMY_BACKEND : OFF
#9 526.6 --
#9 526.6 -- Protobuf compiler :
#9 526.6 -- Protobuf includes :
#9 526.6 -- Protobuf libraries :
#9 526.6 -- BUILD_ONNX_PYTHON : OFF
#9 526.6 -- Found CUDA with FP16 support, compiling with torch.cuda.HalfTensor
#9 526.6 -- Removing -DNDEBUG from compile flags
#9 526.6 -- MAGMA not found. Compiling without MAGMA support
#9 526.6 -- Could not find hardware support for NEON on this machine.
#9 526.6 -- No OMAP3 processor on this machine.
#9 526.6 -- No OMAP4 processor on this machine.
#9 526.7 -- Looking for cpuid.h
#9 526.7 -- Looking for cpuid.h - found
#9 526.7 -- Performing Test HAVE_GCC_GET_CPUID
#9 526.8 -- Performing Test HAVE_GCC_GET_CPUID - Success
#9 526.8 -- Performing Test NO_GCC_EBX_FPIC_BUG
#9 526.8 -- Performing Test NO_GCC_EBX_FPIC_BUG - Success
#9 526.8 -- Performing Test C_HAS_AVX_1
#9 527.0 -- Performing Test C_HAS_AVX_1 - Success
#9 527.0 -- Performing Test C_HAS_AVX2_1
#9 527.2 -- Performing Test C_HAS_AVX2_1 - Success
#9 527.2 -- Performing Test CXX_HAS_AVX_1
#9 527.3 -- Performing Test CXX_HAS_AVX_1 - Success
#9 527.3 -- Performing Test CXX_HAS_AVX2_1
#9 527.5 -- Performing Test CXX_HAS_AVX2_1 - Success
#9 527.5 -- AVX compiler support found
#9 527.5 -- AVX2 compiler support found
#9 527.5 -- Performing Test BLAS_F2C_DOUBLE_WORKS
#9 527.7 -- Performing Test BLAS_F2C_DOUBLE_WORKS - Failed
#9 527.7 -- Performing Test BLAS_F2C_FLOAT_WORKS
#9 527.9 -- Performing Test BLAS_F2C_FLOAT_WORKS - Success
#9 527.9 -- Performing Test BLAS_USE_CBLAS_DOT
#9 528.1 -- Performing Test BLAS_USE_CBLAS_DOT - Success
#9 528.1 -- Found a library with BLAS API (mkl).
#9 528.1 -- Found a library with LAPACK API (mkl).
#9 528.1 -- MIOpen not found. Compiling without MIOpen support
#9 528.1 disabling ROCM because NOT USE_ROCM is set
#9 528.1 -- MKLDNN_THREADING = OMP:COMP
#9 528.1 CMake Warning (dev) at third_party/ideep/mkl-dnn/cmake/options.cmake:33 (option):
#9 528.1 Policy CMP0077 is not set: option() honors normal variables. Run "cmake
#9 528.1 --help-policy CMP0077" for policy details. Use the cmake_policy command to
#9 528.1 set the policy and suppress this warning.
#9 528.1
#9 528.1 For compatibility with older versions of CMake, option is clearing the
#9 528.1 normal variable 'MKLDNN_ENABLE_CONCURRENT_EXEC'.
#9 528.1 Call Stack (most recent call first):
#9 528.1 third_party/ideep/mkl-dnn/cmake/utils.cmake:24 (include)
#9 528.1 third_party/ideep/mkl-dnn/CMakeLists.txt:74 (include)
#9 528.1 This warning is for project developers. Use -Wno-dev to suppress it.
#9 528.1
#9 528.2 -- Found OpenMP_C: -fopenmp (found version "4.5")
#9 528.2 -- Found OpenMP_CXX: -fopenmp (found version "4.5")
#9 528.2 -- OpenMP lib: provided by compiler
#9 528.2 -- Could NOT find Doxygen (missing: DOXYGEN_EXECUTABLE)
#9 528.2 -- VTune profiling environment is unset
#9 528.2 -- Found MKL-DNN: TRUE
#9 528.2 -- Looking for clock_gettime in rt
#9 528.3 -- Looking for clock_gettime in rt - found
#9 528.3 -- Looking for mmap
#9 528.4 -- Looking for mmap - found
#9 528.4 -- Looking for shm_open
#9 528.4 -- Looking for shm_open - found
#9 528.4 -- Looking for shm_unlink
#9 528.5 -- Looking for shm_unlink - found
#9 528.5 -- Looking for malloc_usable_size
#9 528.6 -- Looking for malloc_usable_size - found
#9 528.6 -- Performing Test C_HAS_THREAD
#9 528.6 -- Performing Test C_HAS_THREAD - Success
#9 528.6 -- GCC 8.3.0: Adding gcc and gcc_s libs to link line
#9 528.7 -- NUMA paths:
#9 528.7 -- /usr/include
#9 528.7 -- /usr/lib/x86_64-linux-gnu/libnuma.so
#9 528.7 -- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT
#9 528.8 -- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT - Success
#9 529.0 CMake Deprecation Warning at third_party/sleef/CMakeLists.txt:20 (cmake_policy):
#9 529.0 The OLD behavior for policy CMP0066 will be removed from a future version
#9 529.0 of CMake.
#9 529.0
#9 529.0 The cmake-policies(7) manual explains that the OLD behaviors of all
#9 529.0 policies are deprecated and that a policy should be set to OLD only under
#9 529.0 specific short-term circumstances. Projects should be ported to the NEW
#9 529.0 behavior and not rely on setting a policy to OLD.
#9 529.0
#9 529.0
#9 529.1 -- Found OpenSSL: /usr/lib/x86_64-linux-gnu/libcrypto.so (found version "1.1.1")
#9 529.1 -- Check size of long double
#9 529.2 -- Check size of long double - done
#9 529.2 -- Performing Test COMPILER_SUPPORTS_LONG_DOUBLE
#9 529.2 -- Performing Test COMPILER_SUPPORTS_LONG_DOUBLE - Success
#9 529.2 -- Performing Test COMPILER_SUPPORTS_FLOAT128
#9 529.3 -- Performing Test COMPILER_SUPPORTS_FLOAT128 - Success
#9 529.3 -- Performing Test COMPILER_SUPPORTS_SSE2
#9 529.5 -- Performing Test COMPILER_SUPPORTS_SSE2 - Success
#9 529.5 -- Performing Test COMPILER_SUPPORTS_SSE4
#9 529.7 -- Performing Test COMPILER_SUPPORTS_SSE4 - Success
#9 529.7 -- Performing Test COMPILER_SUPPORTS_AVX
#9 529.9 -- Performing Test COMPILER_SUPPORTS_AVX - Success
#9 529.9 -- Performing Test COMPILER_SUPPORTS_FMA4
#9 530.1 -- Performing Test COMPILER_SUPPORTS_FMA4 - Success
#9 530.1 -- Performing Test COMPILER_SUPPORTS_AVX2
#9 530.3 -- Performing Test COMPILER_SUPPORTS_AVX2 - Success
#9 530.3 -- Performing Test COMPILER_SUPPORTS_AVX512F
#9 530.5 -- Performing Test COMPILER_SUPPORTS_AVX512F - Success
#9 530.5 -- Performing Test COMPILER_SUPPORTS_OPENMP
#9 530.6 -- Performing Test COMPILER_SUPPORTS_OPENMP - Success
#9 530.6 -- Performing Test COMPILER_SUPPORTS_WEAK_ALIASES
#9 530.6 -- Performing Test COMPILER_SUPPORTS_WEAK_ALIASES - Success
#9 530.6 -- Performing Test COMPILER_SUPPORTS_BUILTIN_MATH
#9 530.7 -- Performing Test COMPILER_SUPPORTS_BUILTIN_MATH - Success
#9 530.7 -- Performing Test COMPILER_SUPPORTS_SYS_GETRANDOM
#9 531.7 -- Performing Test COMPILER_SUPPORTS_SYS_GETRANDOM - Success
#9 531.7 -- Configuring build for SLEEF-v3.4.0
#9 531.7 Target system: Linux-3.10.0-1062.1.1.el7.x86_64
#9 531.7 Target processor: x86_64
#9 531.7 Host system: Linux-3.10.0-1062.1.1.el7.x86_64
#9 531.7 Host processor: x86_64
#9 531.7 Detected C compiler: GNU @ /usr/bin/gcc-8
#9 531.7 -- Using option `-Wall -Wno-unused -Wno-attributes -Wno-unused-result -Wno-psabi -ffp-contract=off -fno-math-errno -fno-trapping-math` to compile libsleef
#9 531.7 -- Building shared libs : OFF
#9 531.7 -- MPFR : LIB_MPFR-NOTFOUND
#9 531.7 -- GMP : /usr/lib/x86_64-linux-gnu/libgmp.so
#9 531.7 -- RT : /usr/lib/x86_64-linux-gnu/librt.so
#9 531.7 -- FFTW3 : LIBFFTW3-NOTFOUND
#9 531.7 -- OPENSSL : 1.1.1
#9 531.7 -- SDE : SDE_COMMAND-NOTFOUND
#9 531.7 -- RUNNING_ON_TRAVIS : 0
#9 531.7 -- COMPILER_SUPPORTS_OPENMP : 1
#9 531.8 AT_INSTALL_INCLUDE_DIR include/ATen/core
#9 531.8 core header install: /tmp/scratch/pytorch/build/aten/src/ATen/core/TensorBody.h
#9 531.8 core header install: /tmp/scratch/pytorch/build/aten/src/ATen/core/TensorMethods.h
#9 531.8 disable test because ATEN_NO_TEST is set
#9 531.9 -- Include NCCL operators
#9 531.9 -- Including IDEEP operators
#9 531.9 -- Including image processing operators
#9 532.0 -- Excluding video processing operators due to no opencv
#9 532.0 -- MPI operators skipped due to no MPI support
#9 532.0 -- Include Observer library
#9 535.7 -- /usr/bin/g++-8 /tmp/scratch/pytorch/caffe2/../torch/abi-check.cpp -o /tmp/scratch/pytorch/build/abi-check
#9 535.9 -- Determined _GLIBCXX_USE_CXX11_ABI=1
#9 536.0 -- pytorch is compiling with OpenMP.
#9 536.0 OpenMP CXX_FLAGS: -fopenmp.
#9 536.0 OpenMP libraries: /usr/lib/gcc/x86_64-linux-gnu/8/libgomp.so;/usr/lib/x86_64-linux-gnu/libpthread.so.
#9 536.0 -- Caffe2 is compiling with OpenMP.
#9 536.0 OpenMP CXX_FLAGS: -fopenmp.
#9 536.0 OpenMP libraries: /usr/lib/gcc/x86_64-linux-gnu/8/libgomp.so;/usr/lib/x86_64-linux-gnu/libpthread.so.
#9 536.0 -- Using ATen parallel backend: OMP
#9 536.2 -- Using lib/python3/dist-packages as python relative installation path
#9 536.6 --
#9 536.6 -- ******** Summary ********
#9 536.6 -- General:
#9 536.6 -- CMake version : 3.16.1
#9 536.6 -- CMake command : /usr/local/bin/cmake
#9 536.6 -- System : Linux
#9 536.6 -- C++ compiler : /usr/bin/g++-8
#9 536.6 -- C++ compiler id : GNU
#9 536.6 -- C++ compiler version : 8.3.0
#9 536.6 -- BLAS : MKL
#9 536.6 -- CXX flags : -fdebug-prefix-map='/tmp/scratch'='/usr/local/src' -g -march=haswell -mtune=generic -fvisibility-inlines-hidden -fopenmp -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -O2 -fPIC -Wno-narrowing -Wall -Wextra -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Wno-stringop-overflow
#9 536.6 -- Build type : Release
#9 536.6 -- Compile definitions : TH_BLAS_MKL;ONNX_ML=1;ONNX_NAMESPACE=onnx_torch;IDEEP_USE_MKL;HAVE_MMAP=1;_FILE_OFFSET_BITS=64;HAVE_SHM_OPEN=1;HAVE_SHM_UNLINK=1;HAVE_MALLOC_USABLE_SIZE=1
#9 536.6 -- CMAKE_PREFIX_PATH : /usr/local/cuda;/usr/local/cuda
#9 536.6 -- CMAKE_INSTALL_PREFIX : /tmp/scratch/pytorch/install.ha2MkvZc8O/root/usr/local
#9 536.6 --
#9 536.6 -- TORCH_VERSION : 1.1.0
#9 536.6 -- CAFFE2_VERSION : 1.1.0
#9 536.6 -- BUILD_CAFFE2_MOBILE : ON
#9 536.6 -- USE_STATIC_DISPATCH : OFF
#9 536.6 -- BUILD_BINARY : ON
#9 536.6 -- BUILD_CUSTOM_PROTOBUF : OFF
#9 536.6 -- Protobuf compiler :
#9 536.6 -- Protobuf includes :
#9 536.6 -- Protobuf libraries :
#9 536.6 -- BUILD_DOCS : OFF
#9 536.6 -- BUILD_PYTHON : ON
#9 536.6 -- Python version : 3.6.9
#9 536.6 -- Python executable : /usr/bin/python3
#9 536.6 -- Pythonlibs version : 3.6.9
#9 536.6 -- Python library : python3.6m
#9 536.6 -- Python includes : /usr/include/python3.6m
#9 536.6 -- Python site-packages: lib/python3/dist-packages
#9 536.6 -- BUILD_CAFFE2_OPS : ON
#9 536.6 -- BUILD_SHARED_LIBS : ON
#9 536.6 -- BUILD_TEST : ON
#9 536.6 -- BUILD_JNI : OFF
#9 536.6 -- INTERN_BUILD_MOBILE :
#9 536.6 -- USE_ASAN : OFF
#9 536.6 -- USE_CUDA : ON
#9 536.6 -- CUDA static link : OFF
#9 536.6 -- USE_CUDNN : ON
#9 536.6 -- CUDA version : 10.2
#9 536.6 -- cuDNN version : 7.6.5
#9 536.6 -- CUDA root directory : /usr/local/cuda
#9 536.6 -- CUDA library : /usr/local/cuda/lib64/stubs/libcuda.so
#9 536.6 -- cudart library : /usr/local/cuda/lib64/libcudart.so
#9 536.6 -- cublas library : /usr/lib/x86_64-linux-gnu/libcublas.so
#9 536.6 -- cufft library : /usr/local/cuda/lib64/libcufft.so
#9 536.6 -- curand library : /usr/local/cuda/lib64/libcurand.so
#9 536.6 -- cuDNN library : /usr/lib/x86_64-linux-gnu/libcudnn.so
#9 536.6 -- nvrtc : /usr/local/cuda/lib64/libnvrtc.so
#9 536.6 -- CUDA include path : /usr/local/cuda/include
#9 536.6 -- NVCC executable : /usr/local/cuda/bin/nvcc
#9 536.6 -- CUDA host compiler : /usr/bin/gcc-8
#9 536.6 -- USE_TENSORRT : OFF
#9 536.6 -- USE_ROCM : OFF
#9 536.6 -- USE_EIGEN_FOR_BLAS :
#9 536.6 -- USE_FBGEMM : ON
#9 536.6 -- USE_FFMPEG : OFF
#9 536.6 -- USE_GFLAGS : ON
#9 536.6 -- USE_GLOG : ON
#9 536.6 -- USE_LEVELDB : ON
#9 536.6 -- LevelDB version : 1.22
#9 536.6 -- Snappy version : ..
#9 536.6 -- USE_LITE_PROTO : OFF
#9 536.6 -- USE_LMDB : ON
#9 536.6 -- LMDB version : 0.9.24
#9 536.6 -- USE_METAL : OFF
#9 536.6 -- USE_MKL : ON
#9 536.6 -- USE_MKLDNN : ON
#9 536.6 -- USE_MKLDNN_CBLAS : OFF
#9 536.6 -- USE_NCCL : ON
#9 536.6 -- USE_SYSTEM_NCCL : ON
#9 536.6 -- USE_NNPACK : ON
#9 536.6 -- USE_NUMPY : ON
#9 536.6 -- USE_OBSERVERS : ON
#9 536.6 -- USE_OPENCL : OFF
#9 536.6 -- USE_OPENCV : ON
#9 536.6 -- OpenCV version : 4.1.1
#9 536.6 -- USE_OPENMP : ON
#9 536.6 -- USE_TBB : OFF
#9 536.6 -- USE_PROF : ON
#9 536.6 -- USE_QNNPACK : ON
#9 536.6 -- USE_REDIS : OFF
#9 536.6 -- USE_ROCKSDB : ON
#9 536.6 -- USE_ZMQ : OFF
#9 536.6 -- USE_DISTRIBUTED : ON
#9 536.6 -- USE_MPI : OFF
#9 536.6 -- USE_GLOO : ON
#9 536.6 -- Public Dependencies : Threads::Threads;caffe2::mkl;glog::glog;caffe2::mkldnn
#9 536.6 --   Private Dependencies : qnnpack;pytorch_qnnpack;nnpack;cpuinfo;fbgemm;/usr/local/lib/liblmdb.so;/usr/local/lib/libleveldb.a;/usr/local/lib/libsnappy.so;/usr/lib/x86_64-linux-gnu/libnuma.so;opencv_core;opencv_highgui;opencv_imgproc;opencv_imgcodecs;opencv_optflow;opencv_videoio;opencv_video;fp16;gloo;aten_op_header_gen;foxi_loader;rt;gcc_s;gcc;dl
#9 536.9 -- Configuring done
#9 537.6 CMake Error at modules/rocksdb/CMakeLists.txt:58 (add_library):
#9 537.6 Target "caffe2_rocksdb" links to target "snappy::snappy" but the target was
#9 537.6 not found. Perhaps a find_package() call is missing for an IMPORTED
#9 537.6 target, or an ALIAS target is missing?
#9 537.6
#9 537.6
#9 537.6 CMake Error at modules/rocksdb/CMakeLists.txt:58 (add_library):
#9 537.6 Target "caffe2_rocksdb" links to target "lz4::lz4" but the target was not
#9 537.6 found. Perhaps a find_package() call is missing for an IMPORTED target, or
#9 537.6 an ALIAS target is missing?
#9 537.6
#9 537.6
#9 537.6 CMake Error at modules/rocksdb/CMakeLists.txt:58 (add_library):
#9 537.6 Target "caffe2_rocksdb" links to target "zstd::zstd" but the target was not
#9 537.6 found. Perhaps a find_package() call is missing for an IMPORTED target, or
#9 537.6 an ALIAS target is missing?
#9 537.6
#9 537.6
#9 537.6 CMake Error at modules/rocksdb/CMakeLists.txt:58 (add_library):
#9 537.6 Target "caffe2_rocksdb" links to target "NUMA::NUMA" but the target was not
#9 537.6 found. Perhaps a find_package() call is missing for an IMPORTED target, or
#9 537.6 an ALIAS target is missing?
#9 537.6
#9 537.6
#9 537.6 CMake Warning (dev) at cmake/Dependencies.cmake:1068 (add_dependencies):
#9 537.6 Policy CMP0046 is not set: Error on non-existent dependency in
#9 537.6 add_dependencies. Run "cmake --help-policy CMP0046" for policy details.
#9 537.6 Use the cmake_policy command to set the policy and suppress this warning.
#9 537.6
#9 537.6 The dependency target "nccl_external" of target "gloo_cuda" does not exist.
#9 537.6 Call Stack (most recent call first):
#9 537.6 CMakeLists.txt:390 (include)
#9 537.6 This warning is for project developers. Use -Wno-dev to suppress it.
#9 537.6
#9 542.3 -- Generating done
#9 542.3 CMake Warning:
#9 542.3 Manually-specified variables were not used by the project:
#9 542.3
#9 542.3 CMAKE_POLICY_DEFAULT_CMP0003
#9 542.3
#9 542.3
#9 542.3 CMake Generate step failed. Build files cannot be regenerated correctly.
``` | module: build,triaged | low | Critical |
537,804,489 | flutter | Readable plural messages in arb files | Currently, arb file plural messages are parsed as a single massive string as follows:
`"helloWorlds": "{count,plural, =0{Hello}=1{Hello World}=2{Hello two worlds}few{Hello {count} worlds}many{Hello all {count} worlds}other{Hello other {count} worlds}}"`
However, it would be more readable if it could be something like:
```
"helloWorlds": "{
count,
plural,
=0{Hello}
=1{Hello World}
=2{Hello two worlds}
few{Hello {count} worlds}
many{Hello all {count} worlds}
other{Hello other {count} worlds}
}"
```
The problem with this is that newlines are control characters (see [SO post](https://stackoverflow.com/questions/16690101/can-a-json-value-contain-a-multiline-string) for some clarity) so you cannot have a literal newline within the string.
This means that a workaround would be something like an array of strings:
```
"helloWorlds": [
"count",
"plural",
"=0{Hello}",
  "=1{Hello World}",
"=2{Hello two worlds}",
"few{Hello {count} worlds}",
"many{Hello all {count} worlds}",
"other{Hello other {count} worlds}"
]
```
I'd like to hear some thoughts on what our limitations are with how we can define [arb files](https://github.com/google/app-resource-bundle/wiki/ApplicationResourceBundleSpecification#plural-and-gender-support) such that it adheres to the [ICU standard for plural messages](http://userguide.icu-project.org/formatparse/messages). In other words, if there are alternative suggestions based on experiences with handling plural messages arb files, please feel free to chime in!
I tried concatenating using "+" as in the arb file spec, but using `json.decode` with this formatting resulted in an exception.
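If the array form were adopted, the generation tooling could simply join the pieces back into the single-string ICU form before parsing, sidestepping the `+`-concatenation issue. A rough sketch of that joining step (Python used purely for illustration — the real tooling is Dart, and the key names here are just examples):

```python
import json

# The proposed array-of-strings form (an illustration, not real arb tooling).
raw = """{
  "helloWorlds": [
    "count",
    "plural",
    "=0{Hello}",
    "=1{Hello World}",
    "other{Hello other {count} worlds}"
  ]
}"""

data = json.loads(raw)

def join_plural(parts):
    # The first two entries are the argument name and type; the rest are cases.
    name, kind, *cases = parts
    return "{" + name + "," + kind + ", " + "".join(cases) + "}"

message = join_plural(data["helloWorlds"])
# -> "{count,plural, =0{Hello}=1{Hello World}other{Hello other {count} worlds}}"
```

The parser would then see exactly the same single-string message it handles today.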
cc/ @rami-a @HansMuller | framework,a: internationalization,P3,team-framework,triaged-framework | medium | Major |
537,810,276 | godot | "drivers/unix/net_socket_posix.cpp:190 - Socket Error: 10054" while loading a scene that uses NativeScript | **Godot version:** v3.1.2.stable.official
**OS/device including version:** Windows 10
**Issue description:** While (I suppose) my game is loading the NativeScript, it immediately crashes and prints "drivers/unix/net_socket_posix.cpp:190 - Socket error: 10054". This isn't the first time I've used GDNative; in another project GDNative works perfectly.
**Steps to reproduce:**
1. Compile the code on the minimal reproduction project (I am using visual studio 2019, but it worked before)
2. Make a .gdnlib and a .gdns file, and assign the latter to an Area2D
**Minimal reproduction project:**
(I couldn't upload the godot_headers and include folders)
https://github.com/GbaCretin/eve_eternal_rain_bullet/ | bug,topic:gdextension,crash | low | Critical |
537,823,496 | go | x/website/_content: consider reusing repo list from x/build/repos instead of maintaining own hard-coded copy | This is an extension of #36047. /cc @bradfitz
There are hard-coded lists in the static templates for `x/website/cmd/golangorg` and `x/tools/cmd/godoc`, e.g.:
https://github.com/golang/website/blob/12a8390500dd3fedb41561111a59df98883b92b9/content/static/packageroot.html#L122-L138
In theory, `x/website` can be modified to get the list of repos from `x/build/repos` and pass that list to the template. However, that would require modifying `golang.org/x/tools/godoc` to either be able to pass data through from `x/website`, or having `golang.org/x/tools/godoc` itself get the list from `x/build`.
There may or may not be a constraint about whether we should add a requirement on `x/build` to `x/tools`, since `x/tools` contains many stable packages, while `x/build` is more internal and less oriented to be imported by library modules (e.g., see #29935).
I don't think it's worth implementing right now, because it's a lot of forced changes and not a lot of benefit. But there are changes planned in order to resolve #29206, and this may become more viable after that. Filing this issue to track this task. | NeedsInvestigation | low | Minor |
537,845,458 | flutter | Engine is sending async trace events with id collisions | Each async event tree should have a unique id (see [Trace Event Format](https://docs.google.com/document/d/1CvAClvFfyA5R-PhYUmn5OOQtYMH4h6I0nSsKchNAySU/edit#heading=h.jh64i9l3vwa1))
Events are being sent from the engine with colliding async ids
```
{name: Frame Request Pending, cat: Embedder, tid: 250009, pid: 249458, ts: 269544881397, ph: b, id: 1014a, args: {}},
{name: Frame Request Pending, cat: Embedder, tid: 250009, pid: 249458, ts: 269544897542, ph: e, id: 1014a, args: {}},
{name: PipelineItem, cat: Embedder, tid: 250009, pid: 249458, ts: 269547362701, ph: b, id: 1014a, args: {}},
{name: PipelineItem, cat: Embedder, tid: 250010, pid: 249458, ts: 269547367756, ph: e, id: 1014a, args: {}},
```
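For what it's worth, this kind of reuse is easy to detect mechanically by grouping async begin events by id and flagging ids that carry more than one event name (a sketch, not the engine's tracing code):

```python
def find_id_reuse(events):
    """Return ids whose async begin ("b") events carry more than one name."""
    names_by_id = {}
    for ev in events:
        if ev["ph"] == "b":
            names_by_id.setdefault(ev["id"], set()).add(ev["name"])
    return {i: names for i, names in names_by_id.items() if len(names) > 1}

# Simplified versions of the four events above.
events = [
    {"name": "Frame Request Pending", "ph": "b", "id": "1014a"},
    {"name": "Frame Request Pending", "ph": "e", "id": "1014a"},
    {"name": "PipelineItem", "ph": "b", "id": "1014a"},
    {"name": "PipelineItem", "ph": "e", "id": "1014a"},
]
reuse = find_id_reuse(events)
# reuse == {"1014a": {"Frame Request Pending", "PipelineItem"}}
```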
These events do not have a parent-child relationship, as you can infer from their disjoint timestamps, so they should have different ids. I have only seen this issue on a dream(g3) simulator. | engine,e: embedder,P2,team-engine,triaged-engine | low | Minor |
537,846,051 | pytorch | nn.MultiHeadAttention with different similarity measures | ## π Feature
## Motivation
## Pitch
Current nn.MultiHeadAttention uses matrix multiplication similarity, i.e., ([email protected]()), but variants of this similarity are not available directly, for example,
dot product similarity, i.e.,
```(Q*K.t())```,
additive similarity,
```(Wq*Q.t() + Wk*K.t())```,
general dot product similarity,
```(Q*W*K.t())```.
These variants should also be in PyTorch.
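To make the variants concrete, here is a small NumPy sketch of the three scoring functions — an illustration of the math only, not the proposed PyTorch API; `W`, `Wq`, `Wk`, and `v` stand in for learnable parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, d, h = 4, 5, 8, 16          # query count, key count, model dim, hidden dim
Q, K = rng.normal(size=(n, d)), rng.normal(size=(m, d))

# Matrix-multiplication similarity (what nn.MultiheadAttention computes): Q @ K^T
matmul_scores = Q @ K.T                                            # (n, m)

# General dot-product similarity: Q @ W @ K^T, with a learnable W
W = rng.normal(size=(d, d))
general_scores = Q @ W @ K.T                                       # (n, m)

# Additive similarity: v^T tanh(Wq q_i + Wk k_j), broadcast over all (i, j) pairs
Wq, Wk = rng.normal(size=(h, d)), rng.normal(size=(h, d))
v = rng.normal(size=h)
hidden = np.tanh((Q @ Wq.T)[:, None, :] + (K @ Wk.T)[None, :, :])  # (n, m, h)
additive_scores = hidden @ v                                       # (n, m)
```

All three produce an `(n, m)` score matrix, so they could in principle share the rest of the attention machinery.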
## Alternatives
## Additional context
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @zhangguanheng66 | module: nn,triaged,function request | low | Minor |
537,846,242 | flutter | Duplicate trace events are being fired | Example:
I have seen this with Duration events as well as Async events. Sometimes we see an extra begin event, an extra end event, or both.
```
{"name":"PipelineItem","cat":"Dart","tid":250009,"pid":249458,"ts":267504008453,"ph":"e","id":"1124","args":{"isolateId":"isolates/1022783825870871"}},
{"name":"PipelineItem","cat":"Dart","tid":250009,"pid":249458,"ts":267504008453,"ph":"e","id":"1124","args":{"isolateId":"isolates/1022783825870871"}},
``` | framework,engine,P2,team-engine,triaged-engine | low | Major |
537,860,240 | godot | KinematicBody2D can push other KinematicBody2D | **Godot version:**
3.2, beta 3
**OS/device including version:**
Windows 10
**Issue description:**
A kinematicbody2d can unintentionally move another kinematicbody2d when the latter should not be able to move in a given direction.
**Steps to reproduce:**
Try the example project. The block on the left is a stock KinematicBody2D and cannot be moved by the player. The block on the right is the same but with a script to implement gravity. That block can be pushed by the player although there is no code to implement its horizontal motion.
**Minimal reproduction project:**
[test_unwanted_pushblock.zip](https://github.com/godotengine/godot/files/3963412/test_unwanted_pushblock.zip)
| bug,discussion,confirmed,topic:physics | medium | Major |
537,865,599 | node | Secure memory: Yay or nay | OpenSSL has support for a concept that is referred to as _secure memory_ or _secure heap_. Essentially, every time OpenSSL allocates memory, it indicates whether that memory should be allocated from the "normal" heap using `malloc` or from the "secure" heap. Allocations on the secure heap usually have the following security properties:
- Allocated memory is never swapped to the disk and never included in core dumps, making it less likely to leak sensitive information through those.
- Allocated memory is always overwritten on deallocation, also making it less likely to leak information.
- It is far more difficult to use buffer overflows to retrieve data from these allocations. (OpenSSL causes the process to terminate if memory surrounding secure allocations is accessed.)
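One of those properties — overwriting on deallocation — is easy to picture with a toy wrapper (Python as a stand-in purely for illustration; the no-swap and guard-page guarantees need OS support such as mlock(2) and cannot be shown this way):

```python
class SecureBuffer:
    """Toy illustration: wipe the backing memory before releasing it."""

    def __init__(self, data: bytes):
        self._buf = bytearray(data)   # mutable, so it can be overwritten in place

    def close(self):
        for i in range(len(self._buf)):
            self._buf[i] = 0          # overwrite before the allocation is reused
        self._buf = bytearray(0)

key = SecureBuffer(b"very secret key material")
buf = key._buf                        # simulate a dangling view of the allocation
key.close()
assert all(b == 0 for b in buf)       # old contents are gone, not merely freed
```

OpenSSL's secure heap additionally mlocks the arena and surrounds it with guard pages, which this sketch does not attempt.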
Node.js does not use this feature, meaning that OpenSSL performs normal allocations. (OpenSSL still overwrites the memory on deallocation.) We can enable OpenSSL's implementation, but it is quite restrictive and will be difficult to configure for users of Node.js. The alternative is to provide a more suitable implementation within Node.js, which provides similar security properties while being easier to use and less restrictive. However, that requires:
- Upstream changes to OpenSSL's memory management, in order to allow overriding the built-in secure memory implementation. I have talked to some of the maintainers, and they would likely accept such changes.
- The actual secure heap implementation in Node.js. That is not super difficult, but the implementation is platform-specific and will not be easy to test.
I started working on this approach about a year ago, and never finished it. I got varying feedback at the Montreal summit, so let's discuss this in public before I either stop working on it or put in a lot more work. As someone pointed out, many cloud applications likely don't care about this level of security.
Even if we upstream the necessary changes in OpenSSL, it is unlikely to ever work with a shared OpenSSL library, so dynamic linking would disable the feature. Also, I don't want to break Electron (as I have done numerous times, sorry @codebytere!), so we would need to make sure to either propose the change to both OpenSSL and BoringSSL or to make it easy to opt out of the feature at compile time.
I also have a patch that solves https://github.com/nodejs/node/issues/18896, but relies on this particular feature, and is super dangerous to use. | memory,security | medium | Critical |
537,878,377 | flutter | [video_player] Videos don't play in Safari/Chrome on iOS | video_player_web,video_player can play in Mac safari normaly by platform web, but when I play in phone safari, it doesn't work. | customer: crowd,platform-web,p: video_player,package,has reproducible steps,customer: web10,customer: ninja,P2,found in release: 2.1,browser: safari-ios,team-web,triaged-web | low | Critical |
537,903,740 | rust | `--emit=[asm|llvm-bc|llvm-ir|obj]` not emitting allocator or metadata | At present, `rustc --emit=[asm|llvm-bc|llvm-ir|obj]` only generates artifacts of the regular module but not allocators or metadata.
While working on #64191, I found this behavior very confusing, as I expected it to output all the artifacts I need to generate the final binary. Therefore, I am proposing to make `rustc --emit=[asm|llvm-bc|llvm-ir|obj]` emit allocators and metadata, in addition to the regular module. For example:
- `rustc --emit=llvm-bc foo.rs` would output `foo.bc` and `foo.allocator.bc` [crate-type bin doesn't need metadata]
- `rustc --emit=llvm-ir foo.rs --crate-type=dylib` would output `foo.ll`, `foo.allocator.ll`, and `foo.metadata.ll`
| A-codegen,A-metadata,T-compiler,C-feature-request | low | Minor |
537,907,479 | rust | `--emit=metadata` emitting empty .rmeta file | At present, `rustc --emit=metadata foo.rs` generates an empty rmeta file named `libfoo.rmeta`. The logic behind this is that [rmeta is not needed for certain crate types](https://github.com/rust-lang/rust/blob/12307b3b08edee543a78fb9d4a837fbd6d6ac0fa/src/librustc_interface/passes.rs#L922-L939). However, I feel we should still save it to a file when the user explicitly requests it via `--emit=metadata`, especially since the metadata is always available in memory regardless of the output crate types.
| A-metadata,T-compiler | low | Major |
537,910,782 | rust | cfg(doctest) doesn't work as expected | First, create empty lib crate.
```bash
cargo init --lib foo
```
With `use crate::foo::bar`:
```rust
//! Hello
//!
//! ```
//! use crate::foo::bar;
//! bar();
//! ```
#[cfg(doctest)]
pub fn bar ()
{
println!("hello");
}
```
```text
Compiling foo v0.1.0 (/home/user/tmp/foo)
Finished test [unoptimized + debuginfo] target(s) in 0.01s
Running target/debug/deps/foo-d4d7a4ec5556181e
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Doc-tests foo
running 1 test
test src/lib.rs - (line 3) ... FAILED
failures:
---- src/lib.rs - (line 3) stdout ----
error[E0432]: unresolved import `crate::foo::bar`
--> src/lib.rs:4:5
|
4 | use crate::foo::bar;
| ^^^^^^^^^^^^^^^ no `bar` in the root
error: aborting due to previous error
For more information about this error, try `rustc --explain E0432`.
Couldn't compile the test.
failures:
src/lib.rs - (line 3)
test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out
error: test failed, to rerun pass '--doc'
```
With `use foo::bar`:
```rust
//! Hello
//!
//! ```
//! use foo::bar;
//! bar();
//! ```
#[cfg(doctest)]
pub fn bar ()
{
println!("hello");
}
```
```text
Compiling foo v0.1.0 (/home/user/tmp/foo)
Finished test [unoptimized + debuginfo] target(s) in 0.24s
Running target/debug/deps/foo-d4d7a4ec5556181e
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Doc-tests foo
running 1 test
test src/lib.rs - (line 3) ... FAILED
failures:
---- src/lib.rs - (line 3) stdout ----
error[E0432]: unresolved import `foo::bar`
--> src/lib.rs:4:5
|
4 | use foo::bar;
| ^^^^^^^^ no `bar` in the root
error: aborting due to previous error
For more information about this error, try `rustc --explain E0432`.
Couldn't compile the test.
failures:
src/lib.rs - (line 3)
test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out
error: test failed, to rerun pass '--doc'
```
With `use crate::bar`:
```rust
//! Hello
//!
//! ```
//! use crate::bar;
//! bar();
//! ```
#[cfg(doctest)]
pub fn bar ()
{
    println!("hello");
}
```
```text
Compiling foo v0.1.0 (/home/user/tmp/foo)
Finished test [unoptimized + debuginfo] target(s) in 0.23s
Running target/debug/deps/foo-d4d7a4ec5556181e
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Doc-tests foo
running 1 test
test src/lib.rs - (line 3) ... FAILED
failures:
---- src/lib.rs - (line 3) stdout ----
error[E0432]: unresolved import `crate::bar`
--> src/lib.rs:4:5
|
3 | use crate::bar;
| ^^^^^^^^^^ no `bar` in the root
error: aborting due to previous error
For more information about this error, try `rustc --explain E0432`.
Couldn't compile the test.
failures:
src/lib.rs - (line 3)
test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out
error: test failed, to rerun pass '--doc'
``` | T-rustdoc,E-hard,T-cargo,C-bug,A-doctests | medium | Critical |
537,914,048 | go | time: add ExternalNow, etc for external time and timers | **Update May 5 2021**: The current proposed API is in https://github.com/golang/go/issues/36141#issuecomment-828667067. - rsc
- - -
### Vocabulary:
- Program time: monotonic, but stops when the computer is in S3 sleep.
- Real time: monotonic, but continues to advance when the computer is in S3 sleep.
- Wall time: non-monotonic thing on your wristwatch or wall clock that NTP messes with. This one plays no role in this discussion here at all.
- Operating system: this always refers to the tuple of OS+ParticularVersion+ParticularConfiguration.
(These vocabulary terms can be nitpicked - maybe program time should be cpu time or something - but we've been using them prior in discussion, so let's continue to use them so as not to introduce confusion.)
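For concreteness: on Linux the two notions map directly onto clock IDs — `CLOCK_MONOTONIC` behaves like "program time" and `CLOCK_BOOTTIME` like "real time". A quick sketch (Python used here only because it exposes both clocks conveniently; the proposal itself is about Go):

```python
import time

prog = time.clock_gettime(time.CLOCK_MONOTONIC)        # "program time": stops during S3 suspend
# CLOCK_BOOTTIME is Linux-specific; fall back to MONOTONIC where it doesn't exist.
boottime_id = getattr(time, "CLOCK_BOOTTIME", time.CLOCK_MONOTONIC)
real = time.clock_gettime(boottime_id)                 # "real time": keeps counting across suspend
suspended = real - prog                                # roughly: total time spent suspended
print(f"program time: {prog:.1f}s  real time: {real:.1f}s  asleep: ~{suspended:.1f}s")
```

On a machine that has never suspended, the two clocks read (almost) the same; after a suspend/resume cycle, `CLOCK_BOOTTIME` runs ahead by the time spent asleep.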
### Proposal:
- Find some way to introduce "real time" semantics into Go, which currently mostly uses "program time", except on Windows, where it's always been "real time" for historical reasons.
### Motivation:
- Network protocols need to keep track of timeouts independent of whether a computer is asleep, since parties on a network exist in the real world, rather than virtualized on a CPU.
### Landscape:
- On some operating systems, the poll/select/kqueue/WaitFor*Object/futex family of functions takes a timeout that is measured in "real time", and on others measured in "program time".
- Most operating systems support a "program time" counter. Some support a "real time" counter, but some do not, depending on configuration or existence of S3.
- Most operating systems offer a notifier for resuming from sleep, though some may not, depending on configuration or existence of S3.
- Important observation: operating systems that do not offer a notifier support "program time" rather than "real time".
### Possibilities:
a. Make the existing `time.` and `time.Timer.` functions use "real time" exclusively, when possible. Introduce a function `runtime.RealtimeTimers() bool` to indicate whether Go successfully enabled "real time" timers rather than "program time" timers, the fallback.
b. Introduce additional duplicated functions to `time.` and `time.Timer.` that use "real time" rather than "program time". Introduce a function `time.RealtimeTimersAreRealTime() bool` to indicate whether Go successfully enabled "real time" timers on this new set of functions, or if the new set of functions behave identically to the old.
c. Introduce additional duplicated functions to `time.` and `time.Timer.` that use "real time" rather than "program time", and throw an error if "real time" capabilities aren't available, forcing users to introduce verbose fallback code if they only want to support "real time" opportunistically.
d. Add a function `runtime.UseRealtimeTimers() error` that attempts to change the runtime to use "real time" timers everywhere, like (a).
e. Add runtime function `runtime.UseRealtimeTimers(yes bool) error` that attempts to change the runtime to use "real time" or "program time" timers everywhere, like (a) but the ability to toggle. Add runtime function `runtime.RealtimeTimers() bool` to indicate the current state. The default start-state would be either OS-defined or "real time" or "program time", depending on what we decide.
f. Other options?
My personal preference would be (a) or (e), but I'm open to discussion.
CC @ianlancetaylor @bradfitz @aclements @rsc | Proposal,Proposal-Accepted | high | Critical |
537,914,348 | youtube-dl | --playlist--start and --playlist-end more verbosity suggestion | ## Checklist
- [x] I'm reporting a feature request
- [x] I've verified that I'm running youtube-dl version **2019.11.28**
- [x] I've searched the bugtracker for similar feature requests including closed ones
## Description
While using `youtube-dl <playlist-id> --playlist-start 10` on a playlist containing 60 videos, `youtube-dl` displays 50 videos in the playlist and counts them from 1 to 50 (_Downloading video 1 of 50_). When the user does not know the exact count of videos in the playlist, they don't know whether `--playlist-start` works or not. There is no indication of that whatsoever.
My idea is to show more verbose info when using `--playlist-start` and `--playlist-end`. Assuming one is running `youtube-dl <playlist-id> --playlist-start 10 --playlist-end 30` on the playlist mentioned earlier, the message could be as follows:
```
...
[youtube:playlist] playlist <playlist-name>: Downloading 20 out of 60 videos
[download] Downloading video 1 of 20 (playlist video 10)
...
```
This way the user gets two important pieces of information:
1. The `--playlist-start` and `--playlist-end` switches do work,
2. In case a download fails somewhere in the middle of the queue, the `playlist video nn` part of the `[download]` line tells exactly with what `--playlist-start` index the download should be re-started
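Both numbers are already available internally — the size of the selected slice and the absolute playlist index — so the formatting itself is trivial. A hypothetical sketch (function name and signature invented for illustration):

```python
def progress_lines(playlist_name, total, start, end):
    """Yield per-video progress lines for a --playlist-start/--playlist-end slice."""
    selected = end - start + 1          # inclusive slice size
    yield (f"[youtube:playlist] playlist {playlist_name}: "
           f"Downloading {selected} out of {total} videos")
    for i, playlist_index in enumerate(range(start, end + 1), start=1):
        yield (f"[download] Downloading video {i} of {selected} "
               f"(playlist video {playlist_index})")

lines = list(progress_lines("demo", total=60, start=10, end=30))
# lines[1] -> "[download] Downloading video 1 of 21 (playlist video 10)"
```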
I believe this change won't require much work, but the result would be pretty big for users. | request | low | Critical |
537,928,496 | pytorch | Intel OMP multiprocessing assertion failure: Assertion failure at z_Linux_util.cpp(2338) | ## π Bug
We observe a weird assertion failure after 4-12 hours of training an image classifier with pytorch nightly and intel omp 2019.4: one of the data loader workers fails with this error:
```
OMP: Error #13: Assertion failure at z_Linux_util.cpp(2338).
OMP: Hint Please submit a bug report with this message, compile and run commands used, and machine configuration info including native compiler and operating system versions. Faster response will be obtained by including all program sources. For information on submitting this issue, please see http://www.intel.com/software/products/support/.
...
in update
v = v.item()
File "/private/home/szagoruyko/miniconda3/envs/pytorch-nightly/lib/python3.7/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
_error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 42786) is killed by signal: Aborted.
```
downgrading to intel-omp 2019.1 seemingly fixes the issue.
Also found this issue https://github.com/ContinuumIO/anaconda-issues/issues/11294 but it looks like it's not present in 2019.4
(cc: discussed offline with @fmassa and @pietern )
## Environment
```
PyTorch version: 1.4.0.dev20191112
Is debug build: No
CUDA used to build PyTorch: 9.2
OS: Ubuntu 18.04.1 LTS
GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
CMake version: version 3.10.2
Python version: 3.7
Nvidia driver version: 410.79
cuDNN version: Could not collect
Versions of relevant libraries:
[pip] numpy==1.17.3
[pip] torch==1.4.0.dev20191112
[pip] torchfile==0.1.0
[pip] torchnet==0.0.4
[pip] torchvision==0.5.0a0+bfd4b2a
[pip] torchviz==0.0.1
[conda] blas 1.0 mkl
[conda] mkl 2019.4 243
[conda] mkl-service 2.3.0 py37he904b0f_0
[conda] mkl_fft 1.0.15 py37ha843d7b_0
[conda] mkl_random 1.1.0 py37hd6b4f25_0
[conda] pytorch 1.4.0.dev20191112 py3.7_cuda9.2.148_cudnn7.6.3_0 pytorch-nightly
[conda] torchfile 0.1.0 pypi_0 pypi
[conda] torchnet 0.0.4 pypi_0 pypi
[conda] torchvision 0.5.0.dev20191112 py37_cu92 pytorch-nightly
[conda] torchviz 0.0.1 dev_0 <develop>
```
cc @SsnL | module: dependency bug,module: multiprocessing,triaged | low | Critical |
537,975,625 | flutter | Support toggling window decoration visibility on desktop | ## Use case
A Flutter application running on desktop should be able to control its decoration or its frame, that is to show or hide the surrounding window frame which typically holds the title and close icon. | c: new feature,engine,platform-mac,platform-windows,platform-linux,a: desktop,P3,team-macos,triaged-macos | low | Major |
537,981,264 | opencv | cudaGLSetGLDevice deprecated errors | In building OpenCV 4.1.2 with OpenGL and CUDA enabled, I'm seeing warnings about the function
cudaGLSetGLDevice being deprecated. It is called in opengl.cpp. Is this a known problem, or is it something that results from a certain combination of flags?
In opengl.cpp:
```
void cv::cuda::setGlDevice(int device)
{
#ifndef HAVE_OPENGL
CV_UNUSED(device);
throw_no_ogl();
#else
#ifndef HAVE_CUDA
CV_UNUSED(device);
throw_no_cuda();
#else
cudaSafeCall( cudaGLSetGLDevice(device) );
#endif
#endif
}
```
| category: gpu/cuda (contrib) | low | Critical |
537,986,610 | godot | Joints Gizmos disappear after switching scenes tab in Editor | **Godot version:**
Godot 3.2 Beta 3
**OS/device including version:**
Windows 7 x64bit
**Issue description:**
Joints Gizmos disappear after switching scenes tab in Editor
**Steps to reproduce:**
1 - Create a physical skeleton from your skeleton
2 - Select your Physical Bones and change the joint type, for example to ConeJoint
3 - Switch to another scene tab and go back to this scene again; the joint gizmos disappear
4 - Close that scene after saving it and re-open it again; the joint gizmos are back again
537,986,655 | react | Controlled numeric input gets cleared when unfocused | <!--
Note: if the issue is about documentation or the website, please file it at:
https://github.com/reactjs/reactjs.org/issues/new
-->
**Do you want to request a *feature* or report a *bug*?**
I would like to report a bug.
**What is the current behavior?**
A numeric input field gets cleared accidentally on several occasions.
**If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem. Your bug will get fixed much faster if we can run your code and it doesn't have dependencies other than React. Paste the link to your JSFiddle (https://jsfiddle.net/Luktwrdm/) or CodeSandbox (https://codesandbox.io/s/new) example below:**
1. Open [this sandbox](https://codesandbox.io/embed/sad-rgb-mltuj)
2. Type β12.β into the field, with the trailing decimal separator
3. Unfocus the control
4. Append β.β to the fieldβs value to see β12..β
5. Unfocus the control once again and see that the number has completely disappeared
**What is the expected behavior?**
Similar to how uncontrolled inputs work (remove the `value` prop and then repeat the steps above), the input should not be cleared on blur.
**Which versions of React, and which browser / OS are affected by this issue? Did this work in previous versions of React?**
Iβm using React 16.12.0 and experienced the same behavior with the latest version of Chrome and iOS Safari. Regarding this bug, I have no experience with previous versions of React.
| Type: Bug,Component: DOM,Resolution: Backlog | low | Critical |
537,992,767 | rust | RcFromIter and ArcFromIter have unused specializations | [`RcFromIter`](https://github.com/rust-lang/rust/blob/1.39.0/src/liballoc/rc.rs#L1498-L1543) and [`ArcFromIter`](https://github.com/rust-lang/rust/blob/1.39.0/src/liballoc/sync.rs#L2061-L2106) have three impls each:
- One is for a general `I: Iterator<Item = T>` (A)
- One is for `I: TrustedLen<Item = T>` (B)
- One is for `slice::Iter<'a, T>` (C)
The way `default` is specified suggests that (C) specializes (B) and (B) specializes (A).
However, in the impl (C), the iterator `slice::Iter<'a, T>` is `Iterator<Item = &'a T>`, not `Iterator<Item = T>`. I guess (C) is in fact an unrelated implementation that is exclusive to (A)?
As the only user of `RcFromIter` is `<Rc<[T]> as FromIterator>::from_iter` and the call looks like it resolves to (A), (C) seems totally unused. I guess we want to implement `RcFromIter` for `iter::Cloned<slice::Iter<'a, T>>` and `iter::Copied<slice::Iter<'a, T>>` instead?
Moreover, the first type parameter `T` of `RcFromIter<T, I>` seems unnecessary too. | T-libs-api,A-specialization,C-bug,A-iterators | low | Minor |
538,008,243 | create-react-app | formatWebpackMessages clean up messages filename | ### Is your proposal related to a problem?
When the error/warning contains `(loader?xxxxxx)`, the filename will be displayed like this.
```
/src/App.scss (/codes/test-appt/node_modules/css-loader/dist/cjs.js??ref
--7-oneOf-5-1!/codes/test-appt/node_modules/postcss-loader/src??postcss!/codes/test-appt/node_mo
dules/resolve-url-loader??ref--7-oneOf-5-3!/codes/test-appt/node_modules/sass-loader/dist/cjs.js??ref--7-
oneOf-5-4!./config/stylelint-loader.js??ref--6-0!./src/App.scss)
```

### Describe the solution you'd like
When cleaning up the filename in
https://github.com/facebook/create-react-app/blob/f26de73e645e30a030af716caf6f61bcda24ef08/packages/react-dev-utils/formatWebpackMessages.js#L63-L64
remove the loader info:
```js
lines[0] = lines[0].replace(/\s+\(.*\)$/, ''); // remove (loader?query....)
```
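A quick check of the proposed regex (the sample path below is shortened and illustrative, not taken verbatim from the report):

```javascript
// Demonstrates the proposed cleanup regex on a loader-decorated first line
// of a webpack message: everything from " (" to the end of the line is removed.
const line =
  '/src/App.scss (/codes/test-appt/node_modules/css-loader/dist/cjs.js??ref--7-oneOf-5-1!./src/App.scss)';
const cleaned = line.replace(/\s+\(.*\)$/, ''); // remove (loader?query....)
console.log(cleaned); // -> /src/App.scss
```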
| issue: proposal,needs triage | low | Critical |
538,011,100 | go | runtime: go1.13.5/go tip worst gc pause time increase from go1.9.7 in darwin/amd64 | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.13.5 darwin/amd64
go version devel +7d30af8 Fri Dec 13 20:41:04 2019 +0000 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
yes.
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/a/Library/Caches/go-build"
GOENV="/Users/a/Library/Application Support/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOINSECURE=""
GONOPROXY=""
GONOSUMDB=""
GOOS="darwin"
GOPATH="/Users/a/go"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/local/go"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/darwin_amd64"
GCCGO="gccgo"
AR="ar"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/m9/qtbxkp6s3p96fk54rln7qhj80000gp/T/go-build165542294=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>
### What did you do?
```
package main

import (
	"fmt"
	"time"
)

const (
	windowSize = 200000
	msgCount   = 1e7
)

type (
	message []byte
	buffer  [windowSize]message
)

var worst time.Duration

func mkMessage(n int) message {
	m := make(message, 1024)
	for i := range m {
		m[i] = byte(n)
	}
	return m
}

func pushMsg(b *buffer, highID int) {
	start := time.Now()
	m := mkMessage(highID)
	(*b)[highID%windowSize] = m
	_ = m
	elapsed := time.Since(start)
	if elapsed > worst {
		worst = elapsed
	}
}

func main() {
	var b buffer
	for i := 0; i < msgCount; i++ {
		pushMsg(&b, i)
	}
	fmt.Println("Worst push time: ", worst)
}
```
Code modified from https://making.pusher.com/golangs-real-time-gc-in-theory-and-practice/
### What did you expect to see?
Worst gc pause time decrease or equal.
### What did you see instead?
Worst gc pause time increase.
```
go version go1.9.7 darwin/amd64:
Worst push time: 9.354054ms
Worst push time: 8.229477ms
Worst push time: 7.651093ms
Worst push time: 8.560363ms
go version go1.13.5 darwin/amd64:
Worst push time: 190.080638ms
Worst push time: 15.075733ms
Worst push time: 14.391396ms
Worst push time: 14.358026ms
go version devel +7d30af8 Fri Dec 13 20:41:04 2019 +0000 darwin/amd64
Worst push time: 12.889516ms
Worst push time: 14.144591ms
Worst push time: 12.965626ms
Worst push time: 14.668102ms
``` | NeedsInvestigation,compiler/runtime | low | Critical |
538,016,012 | pytorch | torch runtime error when manual link libmkldnn.so | ## π Bug
I manually link libmkldnn.so because I want to use some functions inside MKL-DNN, but PyTorch raises a runtime error when libmkldnn.so is manually linked.
## To Reproduce
Steps to reproduce the behavior:
```
import ctypes, os
mkl_lib_name="MY_MKL_PATH/lib/libmkldnn.so"
dlopen_flags = os.RTLD_NOW | os.RTLD_GLOBAL # | os.RTLD_DEEPBIND
ctypes.CDLL(mkl_lib_name, dlopen_flags)
nchw = [2, 3, 100, 100]
oihw = [4, 3, 5, 5]
import torch
m = torch.nn.Conv2d(3, 4, 5, 1, 2)
m(torch.rand(*nchw))
```
get runtime error
```
Traceback (most recent call last):
File "a.py", line 13, in <module>
m(torch.rand(*nchw))
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py", line 345, in forward
return self.conv2d_forward(input, self.weight)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py", line 342, in conv2d_forward
self.padding, self.dilation, self.groups)
RuntimeError: std::exception
```
## Expected behavior
No runtime error
## Environment
PyTorch version: 1.3.0
Is debug build: No
CUDA used to build PyTorch: 10.1.243
OS: Ubuntu 16.04.3 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609
CMake version: version 3.5.1
Python version: 3.7
Is CUDA available: No
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Versions of relevant libraries:
[pip3] numpy==1.17.3
[pip3] torch==1.3.0
[pip3] torchvision==0.4.1
[conda] Could not collect
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh | module: build,triaged,module: mkldnn | low | Critical |
538,020,609 | neovim | ext_messages: msg_show("confirm_sub") is not always cleared | <!-- Before reporting: search existing issues and check the FAQ. -->
- `nvim --version`: NVIM v0.5.0-241-g9f3d483c7
- Operating system/version: Linux 5.0.0-36-generic #39~18.04.1-Ubuntu X86_64
### Steps to reproduce using `nvim -u NORC`
1) Connect to neovim with a GUI, make sure to set `ext_messages` to true
2) Set the buffer's content to "This is a test"
3) Run `nvim_input(":%s/a/A/gc<CR>")`
Neovim will ask the GUI to draw the confirm prompt with the following message:
```json
[ "msg_show", "confirm_sub", [ [ 137, "replace with A (y/n/a/q/l/^E/^Y)?" ] ], false ]
```
Run `nvim_input("y")` and neovim will send the following message to clear the confirm prompt:
```json
[ "msg_showcmd", [] ]
```
Getting a `msg_showcmd` message instead of a `msg_clear` message is weird but I can work around that. However, if the user sets `noshowcmd`, neovim will not send a `msg_showcmd` event and the `confirm_sub` prompt will never be cleared (until a `msg_show` message is received, that is). This behavior differs from the TUI, where the prompt is always cleared, no matter what `showcmd` setting the user is using.
I wanted to write a small reproducer to help you confirm/test this bug but I wasn't sure where to start/how to implement tests for this behavior in neovim's testsuite. | bug,ui-extensibility,messages | low | Critical |
538,025,889 | TypeScript | EmitResolver cannot handle JsxOpeningLikeElement and JsxOpeningFragment that didn't originate from the parse tree |
**TypeScript Version:** 3.7.x-dev.201xxxxx
**Search Terms:**
Debug Failure. False expression., typescript transformer, jsx element
**Code**
- clone `https://github.com/madou/untitled-css-in-js-project`
- run `git checkout b229dc749e4614bb8d9194c8de340a82f10c8f8a`
- run `yarn`
- run `yarn test`
- [notice one test fails](https://github.com/madou/untitled-css-in-js-project/blob/b229dc749e4614bb8d9194c8de340a82f10c8f8a/src/transformers/__tests__/index.test.tsx#L6) `should not blow up when transforming with const`
- [notice a similar test but with](https://github.com/madou/untitled-css-in-js-project/blob/b229dc749e4614bb8d9194c8de340a82f10c8f8a/src/transformers/__tests__/index.test.tsx#L24) `var` instead of `const` passes
[the node transformation is done here](https://github.com/madou/untitled-css-in-js-project/blob/b229dc749e4614bb8d9194c8de340a82f10c8f8a/src/transformers/css-prop/visitors/visit-jsx-element-with-css-prop.tsx#L115). If I return the original JSX element node then the test passes, but that defeats the purpose of the transformer.
the code is essentially transforming
```
const Component = () => <div css={{ fontSize: '20px' }}>hello world</div>;
```
to
```
const Component = () => (
<>
<style>{'.a { font-size: 20px; } '}</style>
<div className="a">hello world</div>
</>
);
```
**Expected behavior:**
It works; no error is thrown.
**Actual behavior:**
A `"Debug Failure. False expression."` error is thrown. Also tried with the nightly TypeScript version: same error.
**Related Issues:**
https://github.com/microsoft/TypeScript/issues/24380
Would love to get this figured out! Hoping it's just something I've done wrong. Came from this twitter thread: https://twitter.com/orta/status/1206139333448732672
538,046,273 | godot | AnimationPlayer track editor is missing click-and-drag timeline scrolling that was present in Godot 3.0 | **Godot version:** 3.1.2 stable
**OS/device including version:** Win7
**Issue description:**
LMB click and drag on the timeline is no longer possible in 3.1, while it was possible in 3.0.6
3.1:

3.0.6:

**Steps to reproduce:**
Either zoom into the timeline or increase the animation length to something larger than the currently displayed time in the Animation panel, then LMB click and drag the timestamp on the timeline.
| enhancement,topic:editor,confirmed,usability,regression,topic:animation | low | Major |
538,087,816 | go | gccgo: aliases of *T should be embeddable if T is a defined type which is neither pointer nor interface | I created this issue per [@ianlancetaylor's suggestion](https://github.com/golang/go/issues/22005).
### What version of Go are you using (`go version`)?
<pre>
$ gccgo --version
gccgo (Debian 8.3.0-6) 8.3.0
</pre>
### Does this issue reproduce with the latest release?
yes.
### What did you do?
```golang
package main
type P1 = *bool
type P2 = *struct{}
type T struct {
	P1
	P2
}
func main() {}
```
### What did you expect to see?
Compiles okay.
### What did you see instead?
Doesn't compile.
| NeedsInvestigation | low | Minor |
538,123,129 | TypeScript | Codefix: removing inferable types | ## Search Terms
inferable types no-inferable-types no-inferrable-types removal
## Suggestion
It is somewhat unnecessary to add `:` type declarations for member, parameter, and/or variable types already inferable from code.
```ts
// This type declaration actually requests a lower amount of type information,
// as `number` is less specific (narrow) than `3`
const value: number = 3;
```
It'd be nice to have a codefix to remove them to clean up code.
## Use Cases
These unnecessary type declarations can easily get introduced in code as it changes during editing, and can cause visual clutter.
Note that some users prefer to include these inferable type declarations as a style choice, to increase the amount of type information visible purely from the source code.
The [@typescript-eslint/no-inferrable-types](https://github.com/typescript-eslint/typescript-eslint/blob/master/packages/eslint-plugin/docs/rules/no-inferrable-types.md) rule already exists with an auto-fixer and is a good starting reference.
## Examples
**Variables** such as the above `value` can generally have their type removed if either is true:
* The variable is `let`, contains an initializer, and the type is the same as what it would be inferred as
```ts
// We can remove `: number` here
let value: number = 0;
// We cannot remove `: number | string`, as `| string` is not added by TypeScript
let otherValue: number | string = 0;
```
* The variable is `const` and the type is less narrow than what the type is inferred as
**Parameters** can similarly be treated as `let` parameters:
```ts
// The `: string` can be safely removed
function takesStringWithDefault(input: string = "") { }
```
Parameters can also have type declarations removed if their parent function is already declared to be a type that provides the same type declaration for the parameter:
```ts
type TakesString = (input: string) => void;
// We can remove the `: string` here too
const takesString: TakesString = (input: string) => {};
```
**Class members** can be treated as `let` variables here too.
```ts
class WithValue {
  // We can remove the `: string` here
  value: string = "";
}
```
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback,Domain: Quick Fixes | low | Minor |
538,161,403 | node | `in` operator not working correctly when using Proxy as VM context |
* **Version**: v12.13.1
* **Platform**: Windows 10
The `in` operator does not work correctly when using a Proxy as a VM context.
```js
var o = {};
var p = new Proxy(o, {
has(target, key) {
console.log('has', key);
return Reflect.has(target, key);
},
get(target, key, receiver) {
console.log('get', key);
return Reflect.get(target, key, receiver);
},
});
vm.createContext(p);
vm.runInContext(`this.abcInThis = 'abc' in this`, p);
console.log(JSON.stringify(o)); // Prints {"abcInThis":false}
```
In the above code, the expected output is:
```
has abc
{"abcInThis":false}
```
but the actual output is:
```
get abc
{"abcInThis":true}
```
If we remove the `get(target, key, receiver)` function, the output is still incorrect:
```
{"abcInThis":false}
```
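For contrast, the same trap handlers behave as expected on a plain Proxy outside of a vm context. The sketch below is a minimal illustration of the expected trap routing, not part of the original report:

```javascript
// Outside of a vm context, the `in` operator routes through the `has` trap
// (never `get`), and a missing key yields `false` -- the behavior expected above.
const target = {};
const calls = [];
const proxy = new Proxy(target, {
  has(t, key) {
    calls.push('has:' + String(key));
    return Reflect.has(t, key);
  },
  get(t, key, receiver) {
    calls.push('get:' + String(key));
    return Reflect.get(t, key, receiver);
  },
});
const abcInProxy = 'abc' in proxy;
console.log(calls, abcInProxy); // -> [ 'has:abc' ] false
```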
| vm,v8 engine | low | Critical |
538,167,715 | flutter | Change camera `CameraController` `takePicture` API to optionally pass in the path where the image should be saved | ````dart
final path = join((await getTemporaryDirectory()).path, '${DateTime.now()}.png');
void value = await controller.takePicture(path);
File file = File(path);
````
Hello Flutter Team! and Community!
I just want to know if we can do something like this (code below) to reduce boilerplate:
````dart
final path = join((await getTemporaryDirectory()).path, '${DateTime.now()}.png');
File file = await controller.takePicture(path);
````
It will be great if takePicture future directly return the captured file :) | d: stackoverflow,p: camera,package,c: proposal,team-ecosystem,P3,triaged-ecosystem | low | Minor |
538,185,587 | storybook | Build: Selectively publish packages | NPM publish is failing with a higher frequency in recent weeks. To minimize this, I propose publishing a subset of packages.
## Why
NPM publishing fails intermittently due to unknown server issues. In each case, the server returns a `PUT 200` on publish, but the package is subsequently unavailable, either from the command-line NPM or the web interface, or both. This is a server issue, because an identical publish can succeed minutes later.
In recent weeks the servers seem to be less stable than usual. Also, we are publishing more packages. Recent failures include 5.3.0-beta.7, beta.17, beta.23, beta.26, beta.27. See https://github.com/storybookjs/storybook/blob/next/CHANGELOG.md for details.
This is a problem for users and also for maintainers.
- **Users** may be unable to install Storybook, or may be prompted to select valid versions.
- **Maintainers** waste time re-releasing Storybook. Each release takes longer due to the number of packages published on each release.
## What
I propose only publishing a subset of the packages:
package | count
------- | ---
@storybook/addons | 5735915
@storybook/client-logger | 5664607
@storybook/channels | 5630934
@storybook/components | 5588912
@storybook/core-events | 5324992
@storybook/theming | 4957992
@storybook/router | 4758446
@storybook/channel-postmessage | 4737895
@storybook/node-logger | 4508624
@storybook/ui | 4494679
@storybook/core | 4449319
@storybook/api | 4256386
@storybook/client-api | 4074830
@storybook/addon-actions | 3894188
@storybook/react | 3722386
@storybook/addon-knobs | 2924045
@storybook/addon-links | 2535738
@storybook/addon-viewport | 1395532
~@storybook/addon-info~ | 1269880
@storybook/addon-a11y | 813777
@storybook/addon-notes | 789766
@storybook/source-loader | 738132
@storybook/addon-storyshots | 733029
@storybook/addon-storysource | 728057
@storybook/addon-options | 643882
@storybook/addon-backgrounds | 388060
@storybook/codemod | 385430
@storybook/addon-docs | 371426
@storybook/addon-centered | 352408
@storybook/cli | 342060
@storybook/vue | 306995
@storybook/channel-websocket | 239645
~@storybook/react-native~ | 233926
@storybook/angular | 175695
@storybook/html | 108007
@storybook/addon-jest | 84762
@storybook/addon-storyshots-puppeteer | 79237
~@storybook/addon-ondevice-knobs~ | 76207
~@storybook/react-native-server~ | 72287
@storybook/addon-contexts | 49376
~@storybook/addon-ondevice-notes~ | 38904
@storybook/postinstall | 34439
~@storybook/addon-cssresources~ | 26629
@storybook/ember | 26082
~@storybook/addon-ondevice-actions~ | 23295
~@storybook/addon-events~ | 20631
~@storybook/polymer~ | 13952
~@storybook/preact~ | 10765
~@storybook/addon-graphql~ | 9915
~@storybook/addon-ondevice-backgrounds~ | 9693
@storybook/svelte | 7047
~@storybook/mithril~ | 5781
~@storybook/marko~ | 5534
~@storybook/addon-google-analytics~ | 5377
~@storybook/riot~ | 4684
@storybook/web-components | 3847
~@storybook/addon-design-assets~ | 3504
~@storybook/addon-queryparams~ | 2845
~@storybook/rax~ | 2278
~@storybook/addon-parameter~ | 1963
~@storybook/addon-roundtrip~ | 1954
~@storybook/addon-decorator~ | 1943
~@storybook/addon-preview-wrapper~ | 753
@storybook/addon-essentials | 121
This should reduce the number of packages by about 1/3, which will hopefully also reduce the publish failures by a similar amount. In a subsequent step, I'd like to remove some of the packages from the monorepo entirely.
## How
`lerna --force-publish` can take a list of packages to publish. I propose a blacklist, `skip-publish.txt`, containing the above packages, and filter those out from the publishing process. | maintenance | low | Critical |
538,215,891 | pytorch | [mac] Failure to import torch | ## π Bug
Failure to import torch in Jupyter Lab
## To Reproduce
Steps to reproduce the behavior:
1. conda install pytorch torchvision -c pytorch
2. import torch
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-2-eb42ca6e4af3> in <module>
----> 1 import torch
/Library/anaconda/lib/python3.6/site-packages/torch/__init__.py in <module>
79 del _dl_flags
80
---> 81 from torch._C import *
82
83 __all__ += [name for name in dir(_C)
ImportError: dlopen(/Library/anaconda/lib/python3.6/site-packages/torch/_C.cpython-36m-darwin.so, 9): Library not loaded: @rpath/libiomp5.dylib
Referenced from: /Library/anaconda/lib/python3.6/site-packages/torch/lib/libshm.dylib
Reason: image not found
## Expected behavior
PyTorch should be imported.
## Environment
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
OS: Mac OSX 10.15.1
GCC version: Could not collect
CMake version: Could not collect
Python version: 3.6
Is CUDA available: N/A
CUDA runtime version: Could not collect
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
Versions of relevant libraries:
[pip3] numpy==1.15.4
[pip3] numpydoc==0.8.0
[pip3] torch==1.3.1
[pip3] torchvision==0.4.2
[conda] _tflow_select 2.3.0 mkl anaconda
[conda] mkl 2019.5 intel_281 intel
[conda] mkl-service 1.1.2 py36_3 https://repo.anaconda.com/pkgs/free
[conda] pytorch 1.3.1 py3.6_0 pytorch
[conda] tensorflow 1.14.0 mkl_py36h933f829_0 anaconda
[conda] tensorflow-base 1.14.0 mkl_py36h655c25b_0 anaconda
[conda] torchvision 0.4.2 py36_cpu pytorch
## Additional context
I also checked this thread: https://github.com/pytorch/pytorch/issues/4989
However, with "conda install -c intel mkl", the same error still occurs when importing.
I had no problem with installing using conda in ubuntu. | triaged,module: macos | low | Critical |
538,261,064 | TypeScript | WebGLShader has no properties |
**TypeScript Version:** 3.7.3
**Search Terms:** `array element type checking`, `error type checking`, `stops checking function argument types`
**Code**
https://github.com/vjancik/dont-find-me-yet/blob/tsc-issue/src/index.ts#L63-L64
- fails to infer a possible Error type
- and if I set it explicitly (see Playground), fails to enforce it in
https://github.com/vjancik/dont-find-me-yet/blob/tsc-issue/src/index.ts#L146
- `createProgram` doesn't enforce `WebGLShader[]` at all on the first argument, or seemingly any of the other arguments as well. I can add any variable in there and the array accepts it (also Playground)
**Expected behavior:**
Types of fn. arguments on `createProgram` get enforced correctly.
Optional: Return type on `loadShader` & `createProgram` gets inferred correctly as `... | Error`
**Actual behavior:**
The issue doesn't resolve even if I specify all return types on all functions manually.
**Playground Link:**
[Playground link](http://www.typescriptlang.org/play/?ts=3.8.0-dev.20191214#code/PTAEBkHsGcBdQCoHkAiToCgSNUgXKAJYC2ADgE6QBuApqOTbAK7kB2oVAhgDZN2wBPUjUzZkaAgGEmcSMVA1WTeYOHRQAM0jlQAcXCLlWMOPygMGVXQDKscoVIBRcpXIAeBAD5QAXkQKAD1hFABN1Z1dQAH5QVhpaHQIEAG4LYAAqdIxQdNAANR5CEM5g9VgACzouXjpCaFYAcnhOdhoXbQAaUG1QCsoAdzLKhXbyADps3MnQAAFSTnJOeWq+cxzgDA0mVgBjWEJIdm3+xdIPTwAKFZokgEoCW3snUfPQAG9s0C-CDVArnlWhFYcBaOxokF+EW0t3eny+8N65QGHABNFSCIAvgpuNA6G9QNgaDjauwdpxcYjkQBzRjqYgtKncGghIjsCp0YrkADWoH62m4LIhvWgO1AAFpvAAjTiSoni7wiACscIR9EYLHY11A5NAjwcUPcXnR8IxqQxFisoAAcpBYAakIaugBVbx+BCBYKsMKgA3RUBO0BJVJbXb7Q6gY6nAD62ijRNxHmdl2uSS6GlYBAuCypAAY7r5vE77tbbfbHf7vB8ET8-lqgSDduDIaNbiqEQxmGxNKx-jUYTqbXbRg7ExXjV8OxqUTVtepB2XRy6zRYdtxyepJC0uNAAEJMQgCtqwhEg-aijSUYhRooXIoEOD2VhUmFV1VfHaHOCgMmsbe+UAhJAOzKIosBjDSdpMsQoE7gIACSIS3iEtzjqqNYXAAhBcP5-vWsCgk2oAABIIAAsuAm6-uSjhQaBtytm+b6Tl2cT9D6owXAARAAYpwB7Mr0kCaECLKcKAbg4eS3j9IQFRECEPgNJxoAANTyapoCcUpKFtvCzHsKxoCUdu2FbuSOmYhg5oYKu65GWZ6ivhONCcIBrDcAI34OQQJHkcZ1G0awsCpCqH7AnYTB7NoplUdAPlkRRDk0TQ0FBS+ulfBUdRjJJ6h+LlqHmtZtnQOoADqNCSvoe4HiER5OaAp6EOel5RrlMXbvFflJYFsBdJApBhsCUQEBVVUUYcwRBAAgrAdiEJKTClOljHfp+8CMv+uXgYwkiTTQQRcf0lWMpx-WDQcwIWYx6FYZteEEUKY36AASqEbRAlSe1BQdsD0Rlqr6bENBsQaXG8fxLKwEJQKyYQhQAF50GJz3gGtP1BJol5ebFnHXUx6osSDoCoxcjL46ARUrmupUk5V+jHnpLluR5oCMqN9PgG9XofU+31TcFFgImFD6RdD5DYbAAQc+N3N1Y+X37UEK2MVl0DgdwW1S4VQvwtgVpIAgjgEBDOKCayGhHkDlofi4NB7B5KrcJArnWOUrltBc0Du-L1iQCwYL3vNT5dN7HvkAgQg3LEyiyuQMIZIG7GRAAPnT41u+H6yM2+IvwGH8v-mrGs5QwJQ0Jn8tez7bSR8IFPVr8mHV1nD2Nk9nOV20-2re2hMGcTYM8XxTJQ0JOxl8EjU1+QeOoQixeMmMBdtH7Ac0C38uhzPa-kGCDfwov3A5XIpD8V3Esr-H896yYuAEHBZCULQaqdpqqK9FHAN52tZCQ0X5RspLwghfAACgsJYjBPZXy6EfMYkgkCkVAXBcAjgozWAQNNBATprAH2+E3DCH4-6jxVr3dGX4aZDkiH4OBICZ5wVYFoKAVJN7dxvrndaIxXCPH-JxX0RCz7cE+tPcOSkNJXw0lpAgyk1KUINOw1UcC6pMmCBfVh18AZ9zfsDUGHE2jcLsHgymAMgZXx1iqfWhtjagFNmUGGjCrb90-sINadsHYCFCpPGgoDKBUkWMQdRcV076AvgAbQALpdBKPNSU0ARqNWDlSCJXRnZkiGnEggShiBxwiaQ1UP8KCQD8UsABQDj4TxcsEHxRT-EXCMbdC4hTinyDbmCDu41qnNJ7mQ1+U5DJD1NgJaG34vGgCaf4ueAMr7qy0OQR
wnAdjlAuOomEPhKyaMPoA9WS9okLPKGo8ZSxt7hyMRiepTdon2FiXk1alyFozO0PMxZyy7mSi6F6AIqz1k9MUVskukoRKzRiVANJl1Gm+P8V0DZZDUklEuuoAAZAi5xhFYXpNCR88JvgfB+E4lkuOykYhovhRikIAQsWZLJR0aFvdXlGMxKcgGcChGsC5J02phziBGLEPfUAj9Ckv2tl-Va2ByhzVIHFEAdVaDO2EOMYgkAEYHjXGMbQVJgCKDFDg4AgEdjQGAGNYA01kEGs5nLXmisMawGABBdlSxwH+KgeQb+nCWVcgEjQv5wDGB2uIA6yBwQJactgV64+4A4JWgANLoMwdg3BCjbpuuZDcxiP85GjFKds4+tqIVLAYUwop4KalLHpfCH++i9RPl4b6IEYzc3yDdZ9aRGl02uAUQvUNYxlFQN9UWrp7ambaP6XolwlbnwKPNKtIGhlUa+scKwJZR8uicoplTCwQJA0aAWXQOCs763WEYIGxyKpQmFNIFaSBQcFbhJNtsPYl0rLUzsnu4txB53lBzmqVyhxWYMA0DLfQvqVRlxZp5dmwSubvQVvzX6KpcRzTaEE3dnNfUHoQ+QTAwt1rkDFtFcDqNzXQaVn1NU-6IO+pTZlUN-5GQKOLn+-8f66N-Pg0e-8bxTQqmsgiCCQKrkgrhYcC4rBL0JIVpR3pXZaGMD4wtATQ0Lj0ZoBod5kDV2PowConGf4-D+V3PuQ84wLxyGvIhTiuVJm5X-JGTgpAOrmRClp46kpNp+FRjVQzYxjNXnattXKOlnOuYjKwE4tmLiBe4DpTYd6hq9BELAaapBSAAEZwsnW4AB8AeSf7SmgM1PIbR4C4ofJ9TiqEcvkmatxRYVJeElafGV0Kn5IBMjGM7FhuX8uFa6J1nY1XODjqdowDghWDoX3-BFtrLsQhqN6wV8gJHJtLzyI4F6RsAAa6CiLTRQKtim2AEhTXG34GzdnDtjZnhTLTF4BupVgMd3k6WpuuxnhcXr-WqRdCW8fbiL1pq6FIo4K0CAts7b26hG7VI7sPdOxcSH0PLuoUsUbE2PRoByDoGXdHpIvG+qIPUJoChGHaB2MIhYixPJEhSqBFFmHb44AkKAaaIQWTy0ILQFkJV1D9EAYs0A7t1BiQY0KWSuJuAaCG-ATlE2nsVPLr20J52AgXzTDVhH4cugACZwkU2lydkLpw+3+KiwibAgERCNHgAwUg2h4CWmIHUeksBFkWLAN9naCW5r8aAoJnswbNKcCjDbvLQ08ZNfCmMmAcNDiycWsEeTl1-yco97HhPQnOKB+D9H1gkzrKlAS0l1LEWdJAA)
**Related Issues:**
No, there are multiple problems and I can't pinpoint the cause in a very specific setup. | Bug,Domain: lib.d.ts | low | Critical |
538,269,786 | angular | The possibility of setting Timeouts with HttpClient |
# 🚀 feature request
### Relevant Package
This feature request is for @angular/common/http
### Description
Angular's [HttpClient](https://angular.io/guide/http) has no timeout feature. Using alternative solutions like HttpInterceptors or RxJS for this purpose, a connection timeout cannot be achieved. See: https://stackoverflow.com/questions/59348500/is-there-an-http-connection-timeout-in-client-side-js-angular
### Describe the solution you'd like
Angular's HttpClient could use and expose [XMLHttpRequest.timeout](https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/timeout), which would make this possible.
### Describe alternatives you've considered
HTTPInterceptors, RxJS timeouts, custom HTTP client implementation
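For reference, a framework-free sketch of what the RxJS-style workarounds boil down to (illustrative code, not Angular API): a race against a timer can only bound the total request time, which is why it cannot act as a true connection timeout.

```javascript
// Bounds the *total* time of an async operation. Unlike XMLHttpRequest.timeout,
// this cannot distinguish a slow connection phase from a slow response body.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error('timeout')), ms);
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}
```

This mirrors the semantics of RxJS's `timeout` operator at the Promise level.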
| feature,area: common/http,feature: in backlog | medium | Critical |
538,319,852 | TypeScript | Transpiling async/await to generators instead of es6-Generator-function for es6 |
## Search Terms
async await es6 transpile generator
## Suggestion
I would like a compiler option that tells the compiler to emit downleveled (state-machine) generators, as in the ES5 output, instead of native ES6 `function*` generator functions when targeting ES6.
## Use Cases
I have to use 3rd-party libs that are implemented in ES6, so targeting ES5 will not work. But I can't write code with async/await when targeting ES6 because the platform the app runs on does not support `yield`/`function*` (other ES6 features are OK).
## Examples
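For context, when targeting ES2015 TypeScript currently lowers `async`/`await` into a `function*` generator driven by its `__awaiter` helper, roughly the pattern sketched below (simplified, with illustrative names). This is why the emitted code still requires `yield`/`function*` support, and why a state-machine emit would help here:

```javascript
// Simplified analogue of the __awaiter pattern: an async body becomes a
// generator whose yielded promises are settled by this driver function.
function run(genFn) {
  return new Promise((resolve, reject) => {
    const gen = genFn(); // <-- requires native function* support
    function step(method, value) {
      let result;
      try {
        result = gen[method](value);
      } catch (e) {
        return reject(e);
      }
      if (result.done) return resolve(result.value);
      Promise.resolve(result.value).then(
        (v) => step('next', v),
        (e) => step('throw', e)
      );
    }
    step('next', undefined);
  });
}
```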
## Checklist
I tried [Babel](https://babeljs.io/) and [regenerator](https://github.com/facebook/regenerator); both transpile ES6 code into ES5 (or another version), but then I can't use the 3rd-party libs. A workaround is applying the transpilation to the libs too.
My suggestion meets these guidelines:
* [ ] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [ ] This wouldn't change the runtime behavior of existing JavaScript code
* [ ] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [ ] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback | low | Critical |
538,344,565 | electron | Allow localization of 'Cancel' button in Open and Save dialogs | ### Problem Description
Hi guys!
I'm working on localization of electron application and faced a problem with localization 'Cancel' button in Open and Save dialogs. There is property `buttonLabel` for 'Save'/'Open' button, so when the language is, for example, Japanese, the 'Save'/'Open' button is translated and the 'Cancel' isn't, which looks confusing.
### Proposed Solution
Add the ability to customize the button's label.
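A sketch of what the requested option might look like. The `cancelButtonLabel` field is hypothetical and does not exist in Electron today; `buttonLabel` is the existing option:

```javascript
// Options object for dialog.showSaveDialog. `buttonLabel` exists today;
// `cancelButtonLabel` is the HYPOTHETICAL addition this issue requests.
const saveDialogOptions = {
  buttonLabel: 'Speichern', // localized "Save" -- supported today
  cancelButtonLabel: 'Abbrechen', // localized "Cancel" -- requested feature
};
console.log(saveDialogOptions.buttonLabel);
```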
### Additional Information
I've seen the similar issue https://github.com/electron/electron/issues/19663, however, it was related to Linux.
Thanks!
| enhancement :sparkles: | low | Minor |
538,359,029 | rust | `==` after unsafe block is not correctly recognized | Given the following code:
```rust
let mut a = std::mem::MaybeUninit::new(10);
unsafe { std::ptr::read(a.as_ptr()) } == 10;
```
Rust reports error:
```
error: expected expression, found `==`
--> src/main.rs:3:43
|
3 | unsafe { std::ptr::read(a.as_ptr()) } == 10;
| ^^ expected expression
```
However, if I change `==` to `+`, Rust instead mentions that a pair of parens should be added to wrap the block:
```
error: expected expression, found `+`
--> src/main.rs:3:43
|
3 | unsafe { std::ptr::read(a.as_ptr()) } + 10;
| ------------------------------------- ^ expected expression
| |
| help: parentheses are required to parse this as an expression: `(unsafe { std::ptr::read(a.as_ptr()) })`
```
I guess the error message should probably be updated for the `==` case as well.
| C-enhancement,T-compiler,A-suggestion-diagnostics | low | Critical |
538,406,238 | flutter | Flutter does not render in firefox extension popup |
## Steps to Reproduce
1. Clone https://github.com/billy1380/flutter_firefox_extension
2. Install the extension (prebuilt files located in the extension folder)
Note: the extension is installable in both firefox and chrome
3. Click on the flutter icon
In Firefox

In Chrome

**Target Platform:**
Web
**Target OS version/browser:**
Linux firefox, and Linux Chrome
## flutter analyze
```
flutter analyze
Analyzing flutter-firefox-plugin...
No issues found! (ran in 2.4s)
```
## flutter doctor -v
```
flutter doctor -v
[β] Flutter (Channel master, v1.13.3-pre.23, on Linux, locale en_GB.UTF-8)
β’ Flutter version 1.13.3-pre.23 at /home/billy1380/git/flutter-linux/flutter
β’ Framework revision c06bf6503a (3 days ago), 2019-12-13 17:42:35 -0500
β’ Engine revision e0e0ac0a68
β’ Dart version 2.8.0 (build 2.8.0-dev.0.0 45db297095)
β£½Error 1 retrieving device properties for ro.product.cpu.abi:
error: insufficient permissions for device: user in plugdev group; are your udev rules wrong?
See [http://developer.android.com/tools/device.html] for more information
[β] Android toolchain - develop for Android devices (Android SDK version 28.0.3)
β’ Android SDK at /home/billy1380/android-sdk-linux
β’ Android NDK location not configured (optional; useful for native profiling support)
β’ Platform android-28, build-tools 28.0.3
β’ ANDROID_HOME = /home/billy1380/android-sdk-linux
β’ ANDROID_SDK_ROOT = /home/billy1380/android-sdk-linux
β’ Java binary at: /home/billy1380/Downloads/android-studio/jre/bin/java
β’ Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b49-5587405)
β’ All Android licenses accepted.
[β] Chrome - develop for the web
β’ Chrome at google-chrome
[β] Linux toolchain - develop for Linux desktop
β’ clang++ 9.0.0
β’ GNU Make 4.2.1
[β] Android Studio (version 3.5)
β’ Android Studio at /home/billy1380/Downloads/android-studio
β’ Flutter plugin version 41.1.2
β’ Dart plugin version 191.8593
β’ Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b49-5587405)
[β] VS Code (version 1.40.2)
β’ VS Code at /usr/share/code
β’ Flutter extension version 3.7.1
[β] Connected device (4 available)
β’ 891Y036PE β’ 891Y036PE β’ android-arm β’ Android null (API null)
β’ Linux β’ Linux β’ linux-x64 β’ Linux
β’ Chrome β’ chrome β’ web-javascript β’ Google Chrome 79.0.3945.79
β’ Web Server β’ web-server β’ web-javascript β’ Flutter Tools
β’ No issues found!
```
| c: crash,engine,platform-web,browser: firefox,P2,team-web,triaged-web | low | Critical |
538,473,472 | pytorch | quantization - Missing operations needed for object detection | Is there any sample code for object detection quantization?
I got the following errors when I tried to quantize Faster R-CNN.
Could not run 'aten::empty_strided' with arguments from the 'QuantizedCPUTensorId' backend. 'aten::empty_strided' is only available for these backends: [CPUTensorId, VariableTensorId]
Could not run 'aten::div.Tensor' with arguments from the 'QuantizedCPUTensorId' backend. 'aten::div.Tensor' is only available for these backends: [CPUTensorId, SparseCPUTensorId, VariableTensorId].
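A minimal sketch (not from the issue) of why such errors appear: quantized tensors only implement a subset of ops, and `aten::div` is among the missing ones. The usual workaround is to dequantize, run the float op, and re-quantize; the scale/zero-point values below are illustrative.

```python
import torch

# Quantized tensors only implement a subset of ops; ops such as aten::div
# are not available on the quantized backend, which is what the traceback
# above shows for the detection model's internals.
x = torch.quantize_per_tensor(torch.rand(4), scale=0.1, zero_point=0,
                              dtype=torch.quint8)
try:
    _ = x / 2.0  # expected to fail on the quantized backend
    print("div ran on quantized tensor")
except (RuntimeError, NotImplementedError):
    print("div unsupported on quantized backend")

# Common workaround: dequantize, run the float op, re-quantize.
y = torch.quantize_per_tensor(x.dequantize() / 2.0, scale=0.1,
                              zero_point=0, dtype=torch.quint8)
print(y.dtype)
```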
cc @jerryzh168 @jianyuh @dzhulgakov @raghuramank100 @jamesr66a @vkuzo | oncall: quantization,triaged | low | Critical |
538,514,513 | TypeScript | Generated code when re-exporting a const enum inside a namespace using "preserveConstEnums": true leads to a runtime error | **TypeScript Version:** 3.8.0-dev.20191216
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:** const enum import export preserveConstEnums
**Code**
MyEnum.ts
```ts
const enum MyEnum {
FirstValue,
SecondValue
}
export default MyEnum;
```
ImportExportInNamespace.ts
```ts
import _MyEnum from "./MyEnum";
export namespace MyNamespace {
export import MyEnum = _MyEnum;
}
```
App.ts
```ts
import { MyNamespace } from "./ImportExportInNamespace";
console.log(MyNamespace.MyEnum.FirstValue);
```
Compile the above using
```json
{
"compilerOptions": {
"target": "es5",
"module": "commonjs",
"preserveConstEnums": true
}
}
```
**Expected behavior:**
ImportExportInNamespace.ts compiles to
```js
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
var MyEnum_1 = require("./MyEnum");
var MyNamespace;
(function (MyNamespace) {
MyNamespace.MyEnum = MyEnum_1.default;
})(MyNamespace = exports.MyNamespace || (exports.MyNamespace = {}));
```
**Actual behavior:**
ImportExportInNamespace.ts wrongly compiles to
```js
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
var MyNamespace;
(function (MyNamespace) {
MyNamespace.MyEnum = MyEnum_1.default;
})(MyNamespace = exports.MyNamespace || (exports.MyNamespace = {}));
```
i.e. we are missing
```js
var MyEnum_1 = require("./MyEnum");
```
If I change the `const enum` to a regular `enum`, the code compiles correctly. I'm using that as a workaround.
**Related Issues:**
[#23514](https://github.com/microsoft/TypeScript/issues/23514)
| Bug,Help Wanted | low | Critical |
538,534,004 | tensorflow | -D_GLIBCXX_USE_CXX11_ABI=1 increases a lot RAM usage | **System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
- TensorFlow installed from: pip
- TensorFlow version (use command below): 1.13.1
- Python version: python3.7
- CUDA/cuDNN version: 10.1
- GPU model and memory: GTX 1080 ti
**Describe the current behavior**
Since g++7 is now the default version on Ubuntu 18 and most distributions, most builds will use `_GLIBCXX_USE_CXX11_ABI=1`. It also seems that when TensorFlow is built with `_GLIBCXX_USE_CXX11_ABI=0`, all other libraries of the project must be recompiled with that flag, which can be inconvenient.
We noticed that building with `_GLIBCXX_USE_CXX11_ABI=1` increases RAM usage by a lot.
**Describe the expected behavior**
Both packages should consume the same amount of RAM.
**Code to reproduce the issue**
You can install tensorflow using `python3.7 -m pip install tensorflow==1.13.1` (related to https://github.com/tensorflow/tensorflow/issues/27078) and make sure
`python3.7 -c "import tensorflow; print(tensorflow.sysconfig.get_compile_flags())"` prints `-D_GLIBCXX_USE_CXX11_ABI=1`.
Then you can install it in python3.6 with `python3.6 -m pip install tensorflow==1.13.1` and make sure `python3.6 -c "import tensorflow; print(tensorflow.sysconfig.get_compile_flags())"` prints `-D_GLIBCXX_USE_CXX11_ABI=0`.
Now run this script with python3.6 and python3.7 and you will see that the second one consumes a lot more (x3 on the model I use). Any `saved_model.pb` should work.
```python
import io
import os
import sys
try:
from urllib import urlopen
except ImportError:
from urllib.request import urlopen
import numpy
import psutil
from PIL import Image
import tensorflow as tf
from tensorflow.core.protobuf import saved_model_pb2
from tensorflow.python.platform import gfile
from tensorflow.python.util import compat
process = psutil.Process(os.getpid())
def print_ram(prefix=''):
print("RAM", prefix, process.memory_info().rss / 1024. / 1024.)
if __name__ == '__main__':
if len(sys.argv) == 1:
model_filename = 'saved_model.pb'
else:
model_filename = sys.argv[1]
with gfile.FastGFile(model_filename, 'rb') as f:
data = compat.as_bytes(f.read())
sm = saved_model_pb2.SavedModel()
sm.ParseFromString(data)
if 1 != len(sm.meta_graphs):
print('More than one graph found. Not sure which to write')
sys.exit(1)
img_url = 'https://i.dailymail.co.uk/1s/2019/11/23/09/21370544-7717313-image-a-1_1574501083030.jpg'
image_data = urlopen(img_url).read()
decoded_data = numpy.array(Image.open(io.BytesIO(image_data)))
decoded_data = numpy.expand_dims(decoded_data, axis=0)
print_ram('before graph import')
graph = tf.import_graph_def(sm.meta_graphs[0].graph_def)
print_ram('before device')
with tf.device("/device:GPU:0"):
with tf.Session(graph=graph, config=None) as sess:
print_ram('after session')
output = sess.graph.get_tensor_by_name('import/predictions:0')
print_ram('before run')
for i in range(10000):
results = sess.run(output, feed_dict={"import/image_tensor:0": decoded_data})
print_ram('after run')
```
**Other info / logs**
python3.6:
```
RAM before graph import 527.55078125
RAM before device 856.59375
2019-12-16 17:51:27.400388: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-12-16 17:51:27.425979: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3399580000 Hz
2019-12-16 17:51:27.426637: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x229fee0 executing computations on platform Host. Devices:
2019-12-16 17:51:27.426655: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): <undefined>, <undefined>
RAM after session 860.25
RAM before run 860.25
RAM after run 948.12109375
RAM after run 998.0
RAM after run 1022.2265625
RAM after run 1038.984375
RAM after run 1038.984375
RAM after run 1056.0
RAM after run 1092.1953125
RAM after run 1097.09375
RAM after run 1097.09375
RAM after run 1097.09375
RAM after run 1097.09375
RAM after run 1097.09375
RAM after run 1116.42578125
RAM after run 1116.42578125
RAM after run 1116.42578125
RAM after run 1116.42578125
RAM after run 1116.42578125
```
python3.7:
```
RAM before graph import 510.33984375
RAM before device 752.87890625
2019-12-16 17:26:00.118658: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-12-16 17:26:00.141979: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3399580000 Hz
2019-12-16 17:26:00.142737: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x265aff0 executing computations on platform Host. Devices:
2019-12-16 17:26:00.142775: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): <undefined>, <undefined>
RAM after session 756.5625
RAM before run 756.5625
RAM after run 2977.8515625
RAM after run 3030.1640625
RAM after run 3047.8046875
RAM after run 3090.08203125
RAM after run 3121.296875
RAM after run 3122.0703125
RAM after run 3123.1015625
RAM after run 3123.1015625
RAM after run 3135.9921875
``` | stat:awaiting tensorflower,type:build/install,subtype: ubuntu/linux,type:performance,TF 2.9 | medium | Critical |
538,586,692 | pytorch | Ability to download docs HTML for offline use | ## π Feature
It would be excellent to have a button on the main docs page to download the docs for offline use. This is a built-in feature of Sphinx. For example, on [Flask's docs](http://flask.palletsprojects.com/en/1.1.x/), if you click the floating green version badge in the lower right, there's a "download html" option.
## Motivation
Building the source to make the docs locally is very time consuming, and sometimes you want to learn something offline. My specific use-case is programming on a plane.
| module: docs,triaged | low | Minor |
538,602,190 | flutter | Flutter should warn when attempting to flutter install to simulator | Trying to install my app on any iOS device (simulator or real device) gives me this error. I'm using flavors in this project, but when I build and install for Android everything works fine.
Using an iOS debug build and `flutter run` works fine as well.
Flutter install error:
```
Install failed
#0 throwToolExit (package:flutter_tools/src/base/common.dart:28:3)
#1 InstallCommand.runCommand (package:flutter_tools/src/commands/install.dart:44:7)
<asynchronous suspension>
#2 FlutterCommand.verifyThenRunCommand (package:flutter_tools/src/runner/flutter_command.dart:478:18)
<asynchronous suspension>
#3 FlutterCommand.run.<anonymous closure> (package:flutter_tools/src/runner/flutter_command.dart:383:33)
<asynchronous suspension>
#4 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:153:29)
<asynchronous suspension>
#5 _rootRun (dart:async/zone.dart:1124:13)
#6 _CustomZone.run (dart:async/zone.dart:1021:19)
#7 _runZoned (dart:async/zone.dart:1516:10)
#8 runZoned (dart:async/zone.dart:1463:12)
#9 AppContext.run (package:flutter_tools/src/base/context.dart:152:18)
<asynchronous suspension>
#10 FlutterCommand.run (package:flutter_tools/src/runner/flutter_command.dart:375:20)
#11 CommandRunner.runCommand (package:args/command_runner.dart:197:27)
<asynchronous suspension>
#12 FlutterCommandRunner.runCommand.<anonymous closure> (package:flutter_tools/src/runner/flutter_command_runner.dart:396:21)
<asynchronous suspension>
#13 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:153:29)
<asynchronous suspension>
#14 _rootRun (dart:async/zone.dart:1124:13)
#15 _CustomZone.run (dart:async/zone.dart:1021:19)
#16 _runZoned (dart:async/zone.dart:1516:10)
#17 runZoned (dart:async/zone.dart:1463:12)
#18 AppContext.run (package:flutter_tools/src/base/context.dart:152:18)
<asynchronous suspension>
#19 FlutterCommandRunner.runCommand (package:flutter_tools/src/runner/flutter_command_runner.dart:356:19)
<asynchronous suspension>
#20 CommandRunner.run.<anonymous closure> (package:args/command_runner.dart:112:25)
#21 new Future.sync (dart:async/future.dart:224:31)
#22 CommandRunner.run (package:args/command_runner.dart:112:14)
#23 FlutterCommandRunner.run (package:flutter_tools/src/runner/flutter_command_runner.dart:242:18)
#24 run.<anonymous closure>.<anonymous closure> (package:flutter_tools/runner.dart:62:22)
<asynchronous suspension>
#25 _rootRun (dart:async/zone.dart:1124:13)
#26 _CustomZone.run (dart:async/zone.dart:1021:19)
#27 _runZoned (dart:async/zone.dart:1516:10)
#28 runZoned (dart:async/zone.dart:1500:12)
#29 run.<anonymous closure> (package:flutter_tools/runner.dart:60:18)
<asynchronous suspension>
#30 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:153:29)
<asynchronous suspension>
#31 _rootRun (dart:async/zone.dart:1124:13)
#32 _CustomZone.run (dart:async/zone.dart:1021:19)
#33 _runZoned (dart:async/zone.dart:1516:10)
#34 runZoned (dart:async/zone.dart:1463:12)
#35 AppContext.run (package:flutter_tools/src/base/context.dart:152:18)
<asynchronous suspension>
#36 runInContext (package:flutter_tools/src/context_runner.dart:56:24)
<asynchronous suspension>
#37 run (package:flutter_tools/runner.dart:51:10)
#38 main (package:flutter_tools/executable.dart:62:9)
<asynchronous suspension>
#39 main (file:///Users/pedro_myeong/Desktop/flutter/packages/flutter_tools/bin/flutter_tools.dart:8:3)
#40 _startIsolate.<anonymous closure> (dart:isolate-patch/isolate_patch.dart:299:32)
#41 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:172:12)
```
Flutter doctor -v:
```
[β] Flutter (Channel stable, v1.7.8+hotfix.3, on Mac OS X 10.14.6 18G103, locale en-US)
β’ Flutter version 1.7.8+hotfix.3 at /Users/pedro_myeong/Desktop/flutter
β’ Framework revision b712a172f9 (5 months ago), 2019-07-09 13:14:38 -0700
β’ Engine revision 54ad777fd2
β’ Dart version 2.4.0
[!] Android toolchain - develop for Android devices (Android SDK version 28.0.3)
β’ Android SDK at /Users/pedro_myeong/Library/Android/sdk
β’ Android NDK location not configured (optional; useful for native profiling support)
β’ Platform android-28, build-tools 28.0.3
β’ Java binary at: /Library/Java/JavaVirtualMachines/jdk-10.0.1.jdk/Contents/Home/bin/java
β’ Java version Java(TM) SE Runtime Environment 18.3 (build 10.0.1+10)
β Android license status unknown.
Try re-installing or updating your Android SDK Manager.
See https://developer.android.com/studio/#downloads or visit https://flutter.dev/setup/#android-setup for detailed instructions.
[β] Xcode - develop for iOS and macOS (Xcode 10.3)
β’ Xcode at /Applications/Xcode.app/Contents/Developer
β’ Xcode 10.3, Build version 10G8
β’ CocoaPods version 1.7.5
[β] iOS tools - develop for iOS devices
β’ ios-deploy 1.9.4
[!] Android Studio (not installed)
β’ Android Studio not found; download from https://developer.android.com/studio/index.html
(or visit https://flutter.dev/setup/#android-setup for detailed instructions).
[β] Connected device (1 available)
β’ iPhone de Pedro β’ f6de5020e89defb2cb58089bb965423ada8d2054 β’ ios β’ iOS 13.1.3
! Doctor found issues in 2 categories.
``` | tool,P3,team-tool,triaged-tool | low | Critical |
538,673,280 | rust | Misleading message for [E0392] with associated types of trait bound | E0392 notifies the user that a type parameter is not being used, but it sometimes falsely identifies parameters as being unused **only when there is already another that is _actually_ unused.** I don't entirely understand expected behavior here, see the bottom for clarification.
For example, the [following code](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=0eef8d9cbffb058857a4a17141e5dc60) (on 1.39.0 stable, 1.40.0-beta6, and 1.41.0-nightly) correctly fails to compile
```rust
trait MyTrait<A> {
type Foo;
}
struct MyStructFails<A, B, M>
where
M: MyTrait<A, Foo = B>,
{
my_trait: M,
}
```
but it has the following error message
```
error[E0392]: parameter `A` is never used
--> src/lib.rs:7:22
|
7 | struct MyStructFails<A, B, M>
| ^ unused parameter
|
= help: consider removing `A` or using a marker such as `std::marker::PhantomData`
error[E0392]: parameter `B` is never used
--> src/lib.rs:7:25
|
7 | struct MyStructFails<A, B, M>
| ^ unused parameter
|
= help: consider removing `B` or using a marker such as `std::marker::PhantomData`
```
which notably includes `B` as an unused parameter. Clearly this isn't the case, because changing `MyStruct` to **only** add `PhantomData<A>` allows it to compile:
```rust
struct MyStructWorks<A, B, M>
where
M: MyTrait<A, Foo = B>,
{
my_trait: M,
_marker: std::marker::PhantomData<A>,
}
```
and only adding `PhantomData<B>` does not get rid of the error. Tuple structs produce the same error.
Also notable is that this false identification only occurs for the associated type of the trait bound that `A` is being used in; this example produces the correct error:
[playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=3ce493c9de74a12337dc59d227106db5)
```rust
trait Bar {
type Foo;
}
trait MyTrait<T> {}
struct MyStruct<F, B, T, M>
where
B: Bar<Foo = F>,
M: MyTrait<T>,
{
bar: B,
my_trait: M,
}
```
---
#### Clarification
I'm not very familiar with what the expected behavior should be. If the [following code](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=c0e52a80d118cc5b0d4da31f88f56fb5) should indeed give an error, it's just a diagnostics bug.
```rust
trait MyTrait<T> {}
struct MyStructFails<T, M: MyTrait<T>> {
inner: M,
}
``` | A-diagnostics,T-compiler,C-bug,A-variance | low | Critical |
538,674,296 | flutter | Reduce usage of context getters and globals in the flutter tool | Internal: b/139713315
The flutter_tool currently makes heavy use of context getter globals to inject objects throughout the tool, for both configuration and testability. For example:
```
Foo get foo => context.get<Foo>();
class Bar {
void doThing() {
foo.fizz(); // context global
}
}
```
This makes it more difficult than necessary to unit test tooling code. Besides mocking `foo`, we must also ensure the context injection is correct, which leads to a significant amount of boilerplate configuration for each test case. Mistakes in this boilerplate can lead to tests silently doing the wrong thing.
Previously I attempted to clean this up with the Testbed class, but this hasn't succeeded in simplifying the test cases.
Instead, we should gradually reduce our usage of these context getters which removes the need for the test configuration and makes it more explicit which instances are being used for tests. For example, rewriting the snippet above becomes:
```
class Bar {
Bar({Foo foo}) : _foo = foo;
final Foo _foo;
void doThing() {
_foo.fizz();
}
}
```
To ensure we don't regress in usage of context getters, we can use the forthcoming `testWithoutContext` test method that throws if `context.get` is invoked. See also https://github.com/flutter/flutter/pull/45739
cc @zanderso | team,tool,P3,team-tool,triaged-tool | low | Major |
538,710,423 | PowerToys | Use GetModuleFileNameW wrappers | This commit https://github.com/microsoft/PowerToys/commit/fd8fc679be2497d69a1b295e6b7dd6c66584fe34 introduced the `get_module_filename` and `get_module_folderpath` wrappers.
Use them instead of calling `GetModuleFileNameW` directly. | Area-Quality | low | Minor |
538,713,465 | flutter | code sharing/refactoring in the native Engine Layer code | Many of the native implementations of the Engine layers have very similar code cut and pasted between them. We should create helper methods and shared parent classes to consolidate much of these common implementation details:
- All Clip<Shape>Layer classes plus PhysicalShapeLayer have very similar code to manage the clipping
- Many classes have code to optionally coerce rendering to integer pixel translations for clarity (or raster cache share-ability)
- Classes that cache their (or a child) layer rendering have very similar code (Opacity, ImageFiltered, for example - and more will come over time)
| engine,P2,team-engine,triaged-engine | low | Minor |