id | repo | title | body | labels | priority | severity |
---|---|---|---|---|---|---|
540,583,857 | flutter | BouncingScrollPhysics has an enormous fling velocity limit | I have tried using `BouncingScrollPhysics` and found that it has an enormously high limit on the maximum carried (repeated) fling velocity. This is important to me because I have a long list in my application (a song list of roughly 400 songs on my device).
This is the piece of code that, as far as I can tell, governs this behaviour:
```dart
@override
double carriedMomentum(double existingVelocity) {
return existingVelocity.sign *
math.min(0.000816 * math.pow(existingVelocity.abs(), 1.967).toDouble(), 40000.0);
}
```
The problem is that with such a huge velocity limit I can jump through the entire 400-item list with just 3 flings! It also causes FPS drops (down to around 15), because my list widgets are quite heavy and contain stream builders, animations and images.
I asked a friend to open a native iOS system app, fling as fast as he could, and compare. Native scrolling actually seems to be a lot slower than it is in Flutter.
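As a workaround sketch (not a proposed fix, just an experiment), subclassing the physics and lowering the cap seems to help; the 20000 value below is only a guess:
```dart
import 'dart:math' as math;
import 'package:flutter/widgets.dart';

class CappedBouncingScrollPhysics extends BouncingScrollPhysics {
  const CappedBouncingScrollPhysics({ScrollPhysics parent}) : super(parent: parent);

  @override
  CappedBouncingScrollPhysics applyTo(ScrollPhysics ancestor) =>
      CappedBouncingScrollPhysics(parent: buildParent(ancestor));

  @override
  double carriedMomentum(double existingVelocity) {
    // Same curve as the framework code above, but capped at 20000 instead of 40000.
    return existingVelocity.sign *
        math.min(0.000816 * math.pow(existingVelocity.abs(), 1.967).toDouble(), 20000.0);
  }
}
```
It can be passed as `physics: const CappedBouncingScrollPhysics()` on the scroll view.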
[Video](https://youtu.be/WmRnpnjv4UU) - an iOS native fling (60fps)
[Video](https://youtu.be/3Tl73C9PE50) - default flutter 40000 limit (30fps)
[Video](https://youtu.be/7PjAT-FuNyw) - 20000 limit - seems at least a little closer to native to me (30fps)
Of course I have not measured this accurately, frame by frame. The limit just seems like too large a number to me | platform-ios,framework,a: fidelity,f: scrolling,c: proposal,P2,team-framework,triaged-framework | low | Major |
540,610,786 | go | encoding/json: the Decoder.Decode API lends itself to misuse | I'm observing the existence of several production servers that are buggy because the `json.Decoder.Decode` API lends itself to misuse.
Consider the following:
```go
r := strings.NewReader("{} bad data")
var m map[string]interface{}
d := json.NewDecoder(r)
if err := d.Decode(&m); err != nil {
panic(err) // not triggered
}
```
`json.NewDecoder` is often used because the user has an `io.Reader` on hand or wants to configure some of the options on `json.Decoder`. However, the common case is that the user only wants to decode a *single* JSON value. As it stands the API does not make the common case easy since `Decode` is designed with the assumption that the user will continue to decode more JSON values, which is rarely the case.
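A minimal sketch of the check callers currently have to write themselves when they only want a single value (rejecting anything but EOF after the first `Decode`):
```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"strings"
)

// decodeSingle decodes exactly one JSON value and rejects trailing data.
func decodeSingle(r io.Reader, v interface{}) error {
	d := json.NewDecoder(r)
	if err := d.Decode(v); err != nil {
		return err
	}
	// After the first value, the only acceptable thing left is EOF.
	if _, err := d.Token(); err != io.EOF {
		return fmt.Errorf("unexpected data after top-level JSON value")
	}
	return nil
}

func main() {
	var m map[string]interface{}
	fmt.Println(decodeSingle(strings.NewReader("{} bad data"), &m)) // now reports an error
}
```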
The original snippet above executes just fine: the decoder silently accepts the bad trailing input without reporting any problems. | NeedsDecision | high | Critical |
540,642,138 | terminal | Don't printf wstring_views by copying them into wstrings; use `%.*s`! | `StringCchPrintf` (and, since C99, `printf`) supports a precision specifier that indicates that a string argument is counted rather than null-terminated:
```
auto size = /* ... */;
auto data = /* ... */;
printf("%.*s", size, data);
```
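A minimal sketch of what this looks like for a `std::wstring_view` (using plain `wprintf`/`%ls` here just for illustration):
```cpp
#include <cstdio>
#include <string_view>

int main() {
    std::wstring_view view = L"no copy needed";
    // Pass an explicit length instead of building a std::wstring just to get a
    // null terminator. The precision argument for %.* must be an int.
    wprintf(L"%.*ls\n", static_cast<int>(view.size()), view.data());
    return 0;
}
```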
The `.*` tells printf to consume one additional integer argument giving how many characters the string that follows contains, instead of relying on a null terminator. | Product-Conhost,Help Wanted,Product-Terminal,Issue-Task | low | Minor |
540,673,554 | tensorflow | unbounded memory leak in tf.io.gfile.isdir() | This was discovered in debugging of https://github.com/tensorflow/tensorboard/issues/766 by a combination of @psybuzz, @wchargin, and myself. From empirical evidence from TensorBoard users it appears that this grows without bound, so in practical usage it only takes a day or so to consume dozens of GB of memory.
Calling `tf.io.gfile.isdir()` leaks memory at a rate of approximately 1 MB per 20,000 calls, and this is reproducible at TF 2.0.0 and latest tf-nightly (`tf-nightly-2.1.0.dev20191219`), on macOS, Ubuntu 16.04, and Linux Debian (a Google workstation), and with python 2.7, 3.5, and 3.7.
Here's our repro script:
```python
import gc
import os
import resource
import time
import tensorflow as tf
print("PID: %d\n" % (os.getpid(),))
prev = 0
while True:
peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print("peak memory = %d (+%d) in kb (Linux) or b (macOS)" % (peak, peak-prev))
prev = peak
for _ in range(20000):
tf.io.gfile.isdir(b"/tmp/nonexistent-file-for-tf-memory-leak")
gc.collect()
time.sleep(1.0)
```
Sample output of `python repro.py`:
```
PID: 153611
peak memory = 226796 (+226796) in kb (Linux) or b (macOS)
peak memory = 228108 (+1312) in kb (Linux) or b (macOS)
peak memory = 229132 (+1024) in kb (Linux) or b (macOS)
peak memory = 229900 (+768) in kb (Linux) or b (macOS)
peak memory = 230924 (+1024) in kb (Linux) or b (macOS)
peak memory = 231948 (+1024) in kb (Linux) or b (macOS)
peak memory = 232716 (+768) in kb (Linux) or b (macOS)
peak memory = 233740 (+1024) in kb (Linux) or b (macOS)
peak memory = 234764 (+1024) in kb (Linux) or b (macOS)
peak memory = 235788 (+1024) in kb (Linux) or b (macOS)
peak memory = 236556 (+768) in kb (Linux) or b (macOS)
peak memory = 237580 (+1024) in kb (Linux) or b (macOS)
...
```
Our initial attempt to find a root cause led us to suspect the fact that `is_directory_v2` uses ScopedTFStatus while the rest of the `gfile` API does not (we spot-checked a few other APIs, including `tf.io.gfile.stat()`, and did not see the same issue).
Here's the code from v2.0.0 (file_io.py was just converted to PyBind11 today so it's possible this actually fixes the issue, but there is not yet a nightly with the change to check):
https://github.com/tensorflow/tensorflow/blob/v2.0.0/tensorflow/python/lib/io/file_io.py#L585-L596
We attempted to debug further by deconstructing the calls to `isdir()` into the two lines, one that creates `ScopedTFStatus` and one that calls `pywrap_tensorflow.IsDirectory()`, and it seemed to be the case that the memory leak is proportional to the number of times `IsDirectory()` is called with a *distinct* `ScopedTFStatus` pointer (calling it over and over with the same status doesn't seem to leak at a proportional rate; reusing the status here seemed fine for testing this because `IsDirectory()` does not actually touch the status in the codepath for a nonexistent file). So we suspect maybe there's a weird interaction at the SWIG boundary that results in the leak.
Furthermore, it also seems to leak when the argument is an existing filename; the repro uses a nonexistent one for simplicity and because that makes the codepath slightly simpler (since then `IsDirectory()` exits early on file nonexistence via the `access()` syscall and never even calls `stat()`). Also, the leak still occurs when the `gc.collect()` is omitted; it's also just there to isolate possible causes of the leak. | comp:tensorboard,stat:awaiting tensorflower | low | Critical |
540,680,125 | go | x/build/cmd/coordinator: runSubrepoTests (golang.org/x repo tests) should also check maxTestExecErrors constant | In my investigation at https://github.com/golang/go/issues/35581#issuecomment-567275354, I wrote:
> It is intentional to keep retrying "communications failures" forever, because the expectation is that they should eventually succeed.
I'm seeing now that this isn't quite true. There is a constant defined:
```
// maxTestExecError is the number of test execution failures at which
// we give up and stop trying and instead permanently fail the test.
// Note that this is not related to whether the test failed remotely,
// but whether we were unable to start or complete watching it run.
// (A communication error)
const maxTestExecErrors = 3
```
The `runTestsOnBuildlet` method, which is called by the `runTests` method, has a block that checks whether `ti.numFail` has reached `maxTestExecErrors`:
```Go
if err != nil {
bc.MarkBroken() // prevents reuse
for _, ti := range tis {
ti.numFail++
st.logf("Execution error running %s on %s: %v (numFails = %d)", ti.name, bc, err, ti.numFail)
if err == buildlet.ErrTimeout {
ti.failf("Test %q ran over %v limit (%v); saw output:\n%s", ti.name, timeout, execDuration, buf.Bytes())
} else if ti.numFail >= maxTestExecErrors {
ti.failf("Failed to schedule %q test after %d tries.\n", ti.name, maxTestExecErrors)
} else {
ti.retry()
}
}
return
}
```
However, the `runTests` method is only used for the main Go repository, not golang.org/x repos:
```Go
if st.IsSubrepo() {
remoteErr, err = st.runSubrepoTests()
} else {
remoteErr, err = st.runTests(st.getHelpers())
}
```
So this bug is about making the golang.org/x repos path also use the `maxTestExecErrors` constant and give up after some number of tries.
It's low value to fix because we rarely run into a situation where communication errors happen 3 times or more; that happens most often due to other bugs which we need to fix anyway.
/cc @bradfitz @cagedmantis @toothrot | Builders,NeedsFix,FeatureRequest | low | Critical |
540,684,380 | nvm | Add "rm" alias for "uninstall" | As a developer more regularly engaged with `npm`'s command aliases, I would like a way to use the same kinds of shortened syntax with `nvm`. For example:
- `npm i` == `npm install`
- `npm rm` == `npm uninstall`
- etc.
It seems odd to me that `nvm` doesn't consistently use long- or short-form verbs for commands in the documentation. For example:
- `nvm ls` instead of `nvm list`
- `nvm uninstall` instead of `nvm rm` (`nvm rm` doesn't even work)
How can I get better command consistency with `nvm`? I'd be happy to consider working on a PR for this, but I don't know where to start. | feature requests | low | Minor |
540,714,853 | ant-design | Custom line in Tree's showLine | - [x] I have searched the [issues](https://github.com/ant-design/ant-design/issues) of this repository and believe that this is not a duplicate.
### What problem does this feature solve?
In `Tree` Component, currently I can set `showLine` to show the line, but I can't customize this line's style.
### What does the proposed API look like?
It would be awesome if you could add a property such as `line: React.ReactNode` to `Tree`.
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | Inactive | low | Minor |
540,774,455 | flutter | Callback to be run when StreamBuilder rebuilds? |
## Use case
I want to start an animation when my `StreamBuilder` rebuilds. This can be done in regular stateful widgets by overriding `didUpdateWidget`.
`didUpdateWidget` _does_ run when the `StreamBuilder` rebuilds, but there's no way to check if the update was caused by the `StreamBuilder`.
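A sketch of the workaround available today: skip `StreamBuilder` and listen to the stream directly in a `State`, driving the animation from the subscription (the widget and field names are only illustrative):
```dart
import 'dart:async';
import 'package:flutter/widgets.dart';

class AnimateOnData extends StatefulWidget {
  const AnimateOnData({Key key, this.stream}) : super(key: key);
  final Stream<int> stream;

  @override
  _AnimateOnDataState createState() => _AnimateOnDataState();
}

class _AnimateOnDataState extends State<AnimateOnData>
    with SingleTickerProviderStateMixin {
  AnimationController _controller;
  StreamSubscription<int> _sub;
  int _latest;

  @override
  void initState() {
    super.initState();
    _controller = AnimationController(
        vsync: this, duration: const Duration(milliseconds: 300));
    _sub = widget.stream.listen((value) {
      setState(() => _latest = value);
      _controller.forward(from: 0.0); // restart the animation on every new event
    });
  }

  @override
  void dispose() {
    _sub?.cancel();
    _controller.dispose();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) =>
      FadeTransition(opacity: _controller, child: Text('$_latest'));
}
```
This works, but it means giving up `StreamBuilder` entirely just to know when new data arrived.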
## Proposal
I propose adding a new callback parameter to `StreamBuilder` that gets called when new stream data is received.
This would allow us to start animations when the `StreamBuilder` rebuilds. | c: new feature,framework,c: proposal,P3,team-framework,triaged-framework | low | Critical |
540,781,824 | flutter | Typography().dense and TextBaseline.ideographic cannot work with Chinese? |
When I write a widget, just like this:
```dart
Row(crossAxisAlignment: CrossAxisAlignment.center, mainAxisSize: MainAxisSize.max, children: [
Text('我是中文'), Spacer() ,Text('123456'),
])
```
The first `Text` and the second `Text` cannot be vertically centered: the Chinese text is offset lower than the numbers or English text. I applied Typography().dense to the app theme and also set `TextBaseline.ideographic` on the text, but it still does not work well. What should I do? Is it a bug? Thank you for your answer. | engine,a: internationalization,a: typography,has reproducible steps,P2,found in release: 3.3,found in release: 3.7,team-engine,triaged-engine | low | Critical |
540,785,213 | opencv | OE-30 "Color Calibration" Discussion | Please see https://github.com/opencv/opencv/wiki/OE-30.-Color-Calibration
I think we need to:
- Create a function that will robustly find common Macbeth charts (allowing for partial occlusion by, say, hands holding the corners) and their homography
- [This one](https://www.bhphotovideo.com/c/product/1014557-REG/dgk_color_tools_dkk_set_of_2_dkk_poly_bag_2.html/?ap=y&ap=y&smp=y&smp=y&lsft=BI%3A514&gclid=CjwKCAiA3OzvBRBXEiwALNKDP9L-mJu6IEZO3qlDo4SD_CHNb6MevNkPl4AuVk5dxSzfcrYr2IhghRoC4bcQAvD_BwE)
- and the [standard one](https://www.edmundoptics.com/p/large-x-rite-colorchecker/4243?gclid=CjwKCAiA3OzvBRBXEiwALNKDPy5GoLSCEXkDR5sDTSyf5GnrYpKjdcMKPqIIvISZ_ljhwD8_WLjXWxoCx9YQAvD_BwE)
- Rectify the chart and find each color value in order (detecting partial occlusion by, say, hands holding the corners)
- Apply a color correction algorithm
- [Linear correction matrix](http://www.imatest.com/docs/colormatrix/) (a rough least-squares sketch of this is shown below)
- [More extensive list of linear and polynomial corrections](http://im.snibgo.com/col2mp.htm)
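A rough sketch of the simplest variant of that correction step, fitting an affine (3x3 matrix plus offset) correction by least squares; the `measured`/`reference` arrays here are only placeholders for patch colours sampled from a detected chart:
```python
import numpy as np

# measured:  Nx3 RGB values sampled from the detected chart patches
# reference: Nx3 ground-truth RGB values of the same patches
measured = np.random.rand(24, 3)
reference = np.random.rand(24, 3)

# Fit a 3x4 affine correction (3x3 matrix plus offset) by least squares.
A = np.hstack([measured, np.ones((len(measured), 1))])
M, *_ = np.linalg.lstsq(A, reference, rcond=None)

corrected = A @ M  # apply the correction to the measured colours
```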
See the wikipedia on [Macbeth ColorChecker charts](https://en.wikipedia.org/wiki/ColorChecker) | evolution | low | Minor |
540,865,195 | go | cmd/trace: grey colour is undocumented in goroutine analysis | ### What version of Go are you using (`go version`)?
1.13.4
### Does this issue reproduce with the latest release?
yes
### What operating system and processor architecture are you using (`go env`)?
Not relevant.
### What did you do?
I used `go tool trace` to analyze goroutines.
### What did you expect to see?
Each part of the goroutine time breakdown clearly identified and documented.
### What did you see instead?
An [unknown part shown in grey](https://github.com/RealBar/Blogs/blob/master/images/WeWork%20Helper20191220044708.png) that is not documented anywhere.
| Documentation,NeedsDecision,compiler/runtime | low | Minor |
540,887,056 | opencv | Support OpenCV Swift for macOS Catalina | At the moment we support the latest version of Xcode on iOS, but it would be really good to also support macOS Catalina, as it is already compatible with iOS apps. | feature,platform: ios/osx | low | Minor |
540,912,539 | TypeScript | Invalid output emitted as the result of JavaScript file compilation: amd, default export, jsdoc | **TypeScript Version:** 3.7.3
**Search Terms:** compile javascript default function
**Code**
./index.js
```js
/**
* @typedef {Does_Not_Matter} <does.not.matter> does_not_matter
*/
export default function MyClass(){
}
MyClass.data = [];
```
**Expected result:**
./out/index.js
```js
define(["require", "exports"], function (require, exports) {
"use strict";
exports.__esModule = true;
/**
* @typedef {Does_Not_Matter} <does.not.matter> does_not_matter
*/
function MyClass() {
}
exports["default"] = MyClass;
MyClass.data = [];
});
```
**Actual result:**
./out/index.js
```js
define(["require", "exports"], function (require, exports) {
"use strict";
exports.__esModule = true;
/**
* @typedef {Does_Not_Matter} <does.not.matter> does_not_matter
*/
function MyClass() {
}
exports["default"] = MyClass;
exports.MyClass.data = [];
// ^^^^^^^ HERE IS THE ERROR!
// expecting: MyClass.data = [];
});
```
**Compiler options:**
```json
{
"compilerOptions": {
"outDir": "./out",
"allowJs": true,
"checkJs": false,
"module": "amd",
},
"include": [
"index.js"
]
}
```
| Bug,Help Wanted | low | Critical |
541,016,339 | kubernetes | Upgrades of large clusters should be scalability tested | As shown in https://github.com/kubernetes/kubernetes/issues/86483, performance of up-and-running large clusters was fine in 1.17, but upgrade of such cluster has high probability of killing the cluster.
We should add "control plane upgrade" scalability test to release blocking tests.
@kubernetes/sig-scalability-bugs
@liggitt @mm4tt | kind/bug,priority/important-soon,sig/scalability,lifecycle/frozen | medium | Critical |
541,016,373 | scrcpy | stop recording option is not available in scrcpy | I am trying to build a script that can automatically capture video from scrcpy without opening the scrcpy application. Is there any way I can do this? Is there a way for scrcpy to record video for a specific time, let's say 5-10 seconds, and then automatically exit? Or at least a command that can stop the recording without manual intervention? | feature request | medium | Critical |
541,110,830 | TypeScript | __decorate helper should not use `this` when targeting modules |
**TypeScript Version:** 3.7.x-dev.201xxxxx
**Search Terms:** __decorate this module
**Code**
```ts
class Foo {
@property() x = 1;
}
```
**Expected behavior:**
Top-level `this` reference is not emitted.
**Actual behavior:**
The `__decorate` variable is declared like:
```ts
var __decorate = (this && this.__decorate) || function (decorators, target, key, desc) {
```
Since top-level `this` is always `undefined` in modules, `(this && this.__decorate) || ` can be omitted. It's only a few bytes, but the presence of top-level `this` also causes warnings in other downstream tools like Rollup.
A few other things could be fixed for an ES6+ helper, btw:
- Use `const` instead of `var` (allows for some VM optimizations)
- Remove `Reflect.decorate`
- Don't use `arguments`
**Playground Link:** https://www.typescriptlang.org/play/?ts=Nightly#code/MYGwhgzhAEBiD29oG8BQ1oAEAOAne2AprgC4CeAFAJTQAe0AvNAIwDcqAvkA
**Related Issues:** None?
| Suggestion,Awaiting More Feedback | low | Critical |
541,134,150 | godot | Opening more than one editor at the same time shows errors | **Godot version:**
3.2 beta 4
**OS/device including version:**
Ubuntu 19.10
**Issue description:**
**Steps to reproduce:**
1. Download the entire https://github.com/godotengine/godot-demo-projects repository
2. Scan it
3. Open ~10 projects at the same time
This error should appear
```
ERROR: move_child: Parent node is busy setting up children, move_child() failed. Consider using call_deferred("move_child") instead (or "popup" if this is from a popup).
At: scene/main/node.cpp:332.
ERROR: close: Condition ' rename_error != 0 ' is true.
At: drivers/unix/file_access_unix.cpp:172.
``` | bug,topic:editor,confirmed | low | Critical |
541,142,122 | PowerToys | Add Recycle Bin to Windows Task Bar | I do this manually with every system I configure. Many coworkers and family members ask how I managed to get the Recycle Bin into my task bar, so it's proven to be very popular! Super handy for dragging files to delete and for file recovery. I think every Windows user would love to have this automated.
The PowerToy should have an option which will create a folder containing a link to the Recycle Bin and then add that new folder as a toolbar in the Task Bar.
Settings can control whether the icon is large or small and whether text is shown below the icon. There should also be a setting to remove Recycle Bin from the Desktop if it's currently shown.

Here are the manual steps I usually use. I'm hoping you'll automate this.
1. Open Documents, right-click > New Folder. Name it RecycleToolbar.
2. Open RecycleToolbar
3. Drag the recycle bin shortcut from Desktop into RecycleToolbar and close the folder.
4. Right-click Task Bar and unlock.
5. Right-click Task Bar > Toolbars > New Toolbar...
6. Select your new RecycleToolbar
7. Right-click RecycleToolbar grab handle in the Task Bar and change settings (I prefer): View > Large Icons, View > Show Text (un-check it), View > Show Title (un-check)
8. Right-click Task Bar and Lock.
9. In Settings, turn off the Recycle Bin on the Desktop, it's visual clutter now.
10. Enjoy! | Idea-New PowerToy,Product-Tweak UI Design | medium | Critical |
541,143,426 | kubernetes | Enable deleting API objects even when storage-level decryption is not working properly |
**What happened**:
Users are unable to delete secrets when kms provider (which originally encrypted such secrets) can no longer decrypt them. There may be several reasons why kms provider would fail to decrypt secrets, the most common one is that users deleted/disabled the version of the key that was used to originally encrypt secrets.
**What you expected to happen**:
Secrets to be deleted.
**How to reproduce it (as minimally and precisely as possible)**:
1. [Setup](https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/#encrypting-your-data-with-the-kms-provider) a cluster with a kms provider of your choice.
2. Create a secret, [validate](https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/#encrypting-your-data-with-the-kms-provider) that the secret is encrypted
3. Reboot the cluster (this is required to clear the cache of Key Encryption Keys).
4. Disable the key or key version that was used by the provider to encrypt the key in step 2.
5. Attempt to delete the secret. You should get an internal error that wraps the kms-plugin's specific error (the error will vary based on the plugin).
Note: the issue is probably not unique to kms provider, but will manifest itself in any provider when the key that was used to encrypt the secret is no longer available.
**Anything else we need to know?**:
I believe that the cause of this behaviour is the fact that objects' metadata needs to be updated prior to deletion, which implies the need to transform from storage. However, such transformation is not possible due to the unavailability of the KEK.
To address this issue we would need to read the metadata of the object (while processing a delete) even if the KEK is not available - after all, during a delete, we don't care about the payload.
Therefore, to enable this scenario we would need to move away from encrypting the whole object. Concretely, parts of the metadata should remain in cleartext.
I realize that this opens-up a lot of questions, and I could follow this issue up with a KEP.
**Environment**:
- Kubernetes version (use `kubectl version`):
1.14
/cc @liggitt @mikedanese @enj
/sig auth
| kind/feature,sig/auth,lifecycle/frozen,needs-triage | low | Critical |
541,152,999 | godot | Invalid read in audio demo | **Godot version:**
Godot 3.2 beta 4
**OS/device including version:**
Ubuntu 19.10
**Issue description:**
Sanitizer Log(asan)
```
==7635==ERROR: AddressSanitizer: heap-use-after-free on address 0x631006a5480c at pc 0x00000163132e bp 0x7ffc6bce04d0 sp 0x7ffc6bce04c0
READ of size 4 at 0x631006a5480c thread T0
#0 0x163132d in CowData<float>::size() const core/cowdata.h:128
#1 0x16317bc in CowData<float>::get(int) const core/cowdata.h:152
#2 0x162d8cf in Vector<float>::operator[](int) const core/vector.h:85
#3 0xda3dbef in AudioEffectRecord::get_recording() const servers/audio/effects/audio_effect_record.cpp:238
#4 0xda4d2d4 in MethodBind0RC<Ref<AudioStreamSample> >::call(Object*, Variant const**, int, Variant::CallError&) core/method_bind.gen.inc:593
#5 0xe52a5cc in Object::call(StringName const&, Variant const**, int, Variant::CallError&) core/object.cpp:921
#6 0xe79d090 in Variant::call_ptr(StringName const&, Variant const**, int, Variant*, Variant::CallError&) core/variant_call.cpp:1112
#7 0x1a182e9 in GDScriptFunction::call(GDScriptInstance*, Variant const**, int, Variant::CallError&, GDScriptFunction::CallState*) modules/gdscript/gdscript_function.cpp:1078
#8 0x18549c3 in GDScriptInstance::call(StringName const&, Variant const**, int, Variant::CallError&) modules/gdscript/gdscript.cpp:1173
#9 0xe52a139 in Object::call(StringName const&, Variant const**, int, Variant::CallError&) core/object.cpp:900
#10 0xe533fc4 in Object::emit_signal(StringName const&, Variant const**, int) core/object.cpp:1219
#11 0xe535f8b in Object::emit_signal(StringName const&, Variant const&, Variant const&, Variant const&, Variant const&, Variant const&) core/object.cpp:1276
#12 0x960abeb in BaseButton::_pressed() scene/gui/base_button.cpp:135
#13 0x960e558 in BaseButton::on_action_event(Ref<InputEvent>) scene/gui/base_button.cpp:169
#14 0x96074c5 in BaseButton::_gui_input(Ref<InputEvent>) scene/gui/base_button.cpp:64
#15 0x647ce71 in MethodBind1<Ref<InputEvent> >::call(Object*, Variant const**, int, Variant::CallError&) core/method_bind.gen.inc:775
#16 0xe5261e0 in Object::call_multilevel(StringName const&, Variant const**, int) core/object.cpp:763
#17 0xe5292eb in Object::call_multilevel(StringName const&, Variant const&, Variant const&, Variant const&, Variant const&, Variant const&) core/object.cpp:863
#18 0x950d202 in Viewport::_gui_call_input(Control*, Ref<InputEvent> const&) scene/main/viewport.cpp:1634
#19 0x951c9f5 in Viewport::_gui_input_event(Ref<InputEvent>) scene/main/viewport.cpp:2014
#20 0x9540d8d in Viewport::input(Ref<InputEvent> const&) scene/main/viewport.cpp:2790
#21 0x9500b3a in Viewport::_vp_input(Ref<InputEvent> const&) scene/main/viewport.cpp:1411
#22 0x234cd1d in MethodBind1<Ref<InputEvent> const&>::call(Object*, Variant const**, int, Variant::CallError&) core/method_bind.gen.inc:775
#23 0xe52a5cc in Object::call(StringName const&, Variant const**, int, Variant::CallError&) core/object.cpp:921
#24 0xe528d1d in Object::call(StringName const&, Variant const&, Variant const&, Variant const&, Variant const&, Variant const&) core/object.cpp:847
#25 0x93efa59 in SceneTree::call_group_flags(unsigned int, StringName const&, StringName const&, Variant const&, Variant const&, Variant const&, Variant const&, Variant const&) scene/main/scene_tree.cpp:275
#26 0x93f4ee4 in SceneTree::input_event(Ref<InputEvent> const&) scene/main/scene_tree.cpp:430
#27 0x14e98d8 in InputDefault::_parse_input_event_impl(Ref<InputEvent> const&, bool) main/input_default.cpp:442
#28 0x14deee6 in InputDefault::parse_input_event(Ref<InputEvent> const&) main/input_default.cpp:259
#29 0x14f1ec4 in InputDefault::flush_accumulated_events() main/input_default.cpp:678
#30 0x147077d in OS_X11::process_xevents() platform/x11/os_x11.cpp:2687
#31 0x1482ed2 in OS_X11::run() platform/x11/os_x11.cpp:3251
#32 0x13fe299 in main platform/x11/godot_x11.cpp:56
#33 0x7fd3a1bfe1e2 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x271e2)
#34 0x13fdead in _start (/usr/bin/godots+0x13fdead)
0x631006a5480c is located 12 bytes inside of 65552-byte region [0x631006a54800,0x631006a64810)
freed by thread T5 here:
#0 0x7fd3a3122f1e in __interceptor_realloc (/lib/x86_64-linux-gnu/libasan.so.5+0x10df1e)
#1 0xea53d97 in Memory::realloc_static(void*, unsigned long, bool) core/os/memory.cpp:137
#2 0x1630ebe in CowData<float>::resize(int) (/usr/bin/godots+0x1630ebe)
#3 0x162d2e8 in Vector<float>::resize(int) core/vector.h:84
#4 0x2b77d35 in Vector<float>::push_back(float const&) core/vector.h:152
#5 0xda394e5 in AudioEffectRecordInstance::_io_store_buffer() servers/audio/effects/audio_effect_record.cpp:95
#6 0xda37e33 in AudioEffectRecordInstance::_update_buffer() servers/audio/effects/audio_effect_record.cpp:55
#7 0xda3868c in AudioEffectRecordInstance::_io_thread_process() servers/audio/effects/audio_effect_record.cpp:77
#8 0xda39918 in AudioEffectRecordInstance::_thread_callback(void*) servers/audio/effects/audio_effect_record.cpp:107
#9 0x4f821a5 in ThreadPosix::thread_callback(void*) drivers/unix/thread_posix.cpp:74
#10 0x7fd3a2a98668 in start_thread /build/glibc-4WA41p/glibc-2.30/nptl/pthread_create.c:479
previously allocated by thread T5 here:
#0 0x7fd3a3122f1e in __interceptor_realloc (/lib/x86_64-linux-gnu/libasan.so.5+0x10df1e)
#1 0xea53d97 in Memory::realloc_static(void*, unsigned long, bool) core/os/memory.cpp:137
#2 0x1630ebe in CowData<float>::resize(int) (/usr/bin/godots+0x1630ebe)
#3 0x162d2e8 in Vector<float>::resize(int) core/vector.h:84
#4 0x2b77d35 in Vector<float>::push_back(float const&) core/vector.h:152
#5 0xda3957f in AudioEffectRecordInstance::_io_store_buffer() servers/audio/effects/audio_effect_record.cpp:96
#6 0xda37e33 in AudioEffectRecordInstance::_update_buffer() servers/audio/effects/audio_effect_record.cpp:55
#7 0xda3868c in AudioEffectRecordInstance::_io_thread_process() servers/audio/effects/audio_effect_record.cpp:77
#8 0xda39918 in AudioEffectRecordInstance::_thread_callback(void*) servers/audio/effects/audio_effect_record.cpp:107
#9 0x4f821a5 in ThreadPosix::thread_callback(void*) drivers/unix/thread_posix.cpp:74
#10 0x7fd3a2a98668 in start_thread /build/glibc-4WA41p/glibc-2.30/nptl/pthread_create.c:479
Thread T5 created by T0 here:
#0 0x7fd3a304f805 in pthread_create (/lib/x86_64-linux-gnu/libasan.so.5+0x3a805)
#1 0x4f826a0 in ThreadPosix::create_func_posix(void (*)(void*), void*, Thread::Settings const&) drivers/unix/thread_posix.cpp:90
#2 0xea676b1 in Thread::create(void (*)(void*), void*, Thread::Settings const&) core/os/thread.cpp:51
#3 0xda39fe8 in AudioEffectRecordInstance::init() servers/audio/effects/audio_effect_record.cpp:122
#4 0xda3c21e in AudioEffectRecord::set_recording_active(bool) servers/audio/effects/audio_effect_record.cpp:196
#5 0x1eaaf81 in MethodBind1<bool>::call(Object*, Variant const**, int, Variant::CallError&) core/method_bind.gen.inc:775
#6 0xe52a5cc in Object::call(StringName const&, Variant const**, int, Variant::CallError&) core/object.cpp:921
#7 0xe79d090 in Variant::call_ptr(StringName const&, Variant const**, int, Variant*, Variant::CallError&) core/variant_call.cpp:1112
#8 0x1a1836a in GDScriptFunction::call(GDScriptInstance*, Variant const**, int, Variant::CallError&, GDScriptFunction::CallState*) modules/gdscript/gdscript_function.cpp:1081
#9 0x18549c3 in GDScriptInstance::call(StringName const&, Variant const**, int, Variant::CallError&) modules/gdscript/gdscript.cpp:1173
#10 0xe52a139 in Object::call(StringName const&, Variant const**, int, Variant::CallError&) core/object.cpp:900
#11 0xe533fc4 in Object::emit_signal(StringName const&, Variant const**, int) core/object.cpp:1219
#12 0xe535f8b in Object::emit_signal(StringName const&, Variant const&, Variant const&, Variant const&, Variant const&, Variant const&) core/object.cpp:1276
#13 0x960abeb in BaseButton::_pressed() scene/gui/base_button.cpp:135
#14 0x960e558 in BaseButton::on_action_event(Ref<InputEvent>) scene/gui/base_button.cpp:169
#15 0x96074c5 in BaseButton::_gui_input(Ref<InputEvent>) scene/gui/base_button.cpp:64
#16 0x647ce71 in MethodBind1<Ref<InputEvent> >::call(Object*, Variant const**, int, Variant::CallError&) core/method_bind.gen.inc:775
#17 0xe5261e0 in Object::call_multilevel(StringName const&, Variant const**, int) core/object.cpp:763
#18 0xe5292eb in Object::call_multilevel(StringName const&, Variant const&, Variant const&, Variant const&, Variant const&, Variant const&) core/object.cpp:863
#19 0x950d202 in Viewport::_gui_call_input(Control*, Ref<InputEvent> const&) scene/main/viewport.cpp:1634
#20 0x951c9f5 in Viewport::_gui_input_event(Ref<InputEvent>) scene/main/viewport.cpp:2014
#21 0x9540d8d in Viewport::input(Ref<InputEvent> const&) scene/main/viewport.cpp:2790
#22 0x9500b3a in Viewport::_vp_input(Ref<InputEvent> const&) scene/main/viewport.cpp:1411
#23 0x234cd1d in MethodBind1<Ref<InputEvent> const&>::call(Object*, Variant const**, int, Variant::CallError&) core/method_bind.gen.inc:775
#24 0xe52a5cc in Object::call(StringName const&, Variant const**, int, Variant::CallError&) core/object.cpp:921
#25 0xe528d1d in Object::call(StringName const&, Variant const&, Variant const&, Variant const&, Variant const&, Variant const&) core/object.cpp:847
#26 0x93efa59 in SceneTree::call_group_flags(unsigned int, StringName const&, StringName const&, Variant const&, Variant const&, Variant const&, Variant const&, Variant const&) scene/main/scene_tree.cpp:275
#27 0x93f4ee4 in SceneTree::input_event(Ref<InputEvent> const&) scene/main/scene_tree.cpp:430
#28 0x14e98d8 in InputDefault::_parse_input_event_impl(Ref<InputEvent> const&, bool) main/input_default.cpp:442
#29 0x14deee6 in InputDefault::parse_input_event(Ref<InputEvent> const&) main/input_default.cpp:259
#30 0x14f1ec4 in InputDefault::flush_accumulated_events() main/input_default.cpp:678
#31 0x147077d in OS_X11::process_xevents() platform/x11/os_x11.cpp:2687
#32 0x1482ed2 in OS_X11::run() platform/x11/os_x11.cpp:3251
#33 0x13fe299 in main platform/x11/godot_x11.cpp:56
#34 0x7fd3a1bfe1e2 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x271e2)
```
**Steps to reproduce:**
1. Run project
2. Click "Record"
3. Click "Stop"
**Minimal reproduction project:**
https://github.com/godotengine/godot-demo-projects/tree/master/audio/mic_record | bug,topic:audio | low | Critical |
541,171,498 | godot | Memory leak when reverting a type change | **Godot version:**
3.2 beta 4
**OS/device including version:**
Ubuntu 19.10
**Issue description:**
Leak info from Leak Sanitizer
```
Indirect leak of 72 byte(s) in 1 object(s) allocated from:
#0 0x7f843360dae8 in malloc (/lib/x86_64-linux-gnu/libasan.so.5+0x10dae8)
#1 0xea532ff in Memory::alloc_static(unsigned long, bool) core/os/memory.cpp:82
#2 0x140886a in DefaultAllocator::alloc(unsigned long) core/os/memory.h:65
#3 0xea53221 in operator new(unsigned long, void* (*)(unsigned long)) core/os/memory.cpp:47
#4 0x9ca5135 in Set<Range*, Comparator<Range*>, DefaultAllocator>::_insert(Range* const&) core/set.h:333
#5 0x9ca3bbf in Set<Range*, Comparator<Range*>, DefaultAllocator>::insert(Range* const&) core/set.h:536
#6 0x9ca2332 in Range::Range() scene/gui/range.cpp:335
#7 0x9dd3f9e in ScrollBar::ScrollBar(Orientation) scene/gui/scroll_bar.cpp:662
#8 0x717301a in VScrollBar::VScrollBar() scene/gui/scroll_bar.h:128
#9 0x9a793f3 in ItemList::ItemList() scene/gui/item_list.cpp:1598
#10 0x9247ff6 in Object* ClassDB::creator<ItemList>() (/usr/bin/godots+0x9247ff6)
#11 0xe2c5fd6 in ClassDB::instance(StringName const&) core/class_db.cpp:541
#12 0x583e455 in CreateDialog::instance_selected() editor/create_dialog.cpp:547
#13 0x6900e26 in SceneTreeDock::_do_create(Node*) editor/scene_tree_dock.cpp:1934
#14 0x6908cf2 in SceneTreeDock::_create() editor/scene_tree_dock.cpp:1999
#15 0x178d360 in MethodBind0::call(Object*, Variant const**, int, Variant::CallError&) core/method_bind.gen.inc:59
#16 0xe52a5cc in Object::call(StringName const&, Variant const**, int, Variant::CallError&) core/object.cpp:921
#17 0xe533fc4 in Object::emit_signal(StringName const&, Variant const**, int) core/object.cpp:1219
#18 0xe535f8b in Object::emit_signal(StringName const&, Variant const&, Variant const&, Variant const&, Variant const&, Variant const&) core/object.cpp:1276
#19 0x583978e in CreateDialog::_confirmed() editor/create_dialog.cpp:463
#20 0x178d360 in MethodBind0::call(Object*, Variant const**, int, Variant::CallError&) core/method_bind.gen.inc:59
#21 0xe52a5cc in Object::call(StringName const&, Variant const**, int, Variant::CallError&) core/object.cpp:921
#22 0xe533fc4 in Object::emit_signal(StringName const&, Variant const**, int) core/object.cpp:1219
#23 0xe535f8b in Object::emit_signal(StringName const&, Variant const&, Variant const&, Variant const&, Variant const&, Variant const&) core/object.cpp:1276
#24 0xa1f515f in Tree::_gui_input(Ref<InputEvent>) scene/gui/tree.cpp:2720
#25 0x647ce71 in MethodBind1<Ref<InputEvent> >::call(Object*, Variant const**, int, Variant::CallError&) core/method_bind.gen.inc:775
#26 0xe5261e0 in Object::call_multilevel(StringName const&, Variant const**, int) core/object.cpp:763
#27 0xe5292eb in Object::call_multilevel(StringName const&, Variant const&, Variant const&, Variant const&, Variant const&, Variant const&) core/object.cpp:863
#28 0x950d202 in Viewport::_gui_call_input(Control*, Ref<InputEvent> const&) scene/main/viewport.cpp:1634
#29 0x951958e in Viewport::_gui_input_event(Ref<InputEvent>) scene/main/viewport.cpp:1944
Indirect leak of 1256 byte(s) in 1 object(s) allocated from:
#0 0x7f843360dae8 in malloc (/lib/x86_64-linux-gnu/libasan.so.5+0x10dae8)
#1 0xea532ff in Memory::alloc_static(unsigned long, bool) core/os/memory.cpp:82
#2 0xea531fe in operator new(unsigned long, char const*) core/os/memory.cpp:42
#3 0x9a793c6 in ItemList::ItemList() scene/gui/item_list.cpp:1598
#4 0x9247ff6 in Object* ClassDB::creator<ItemList>() (/usr/bin/godots+0x9247ff6)
#5 0xe2c5fd6 in ClassDB::instance(StringName const&) core/class_db.cpp:541
#6 0x583e455 in CreateDialog::instance_selected() editor/create_dialog.cpp:547
#7 0x6900e26 in SceneTreeDock::_do_create(Node*) editor/scene_tree_dock.cpp:1934
#8 0x6908cf2 in SceneTreeDock::_create() editor/scene_tree_dock.cpp:1999
#9 0x178d360 in MethodBind0::call(Object*, Variant const**, int, Variant::CallError&) core/method_bind.gen.inc:59
#10 0xe52a5cc in Object::call(StringName const&, Variant const**, int, Variant::CallError&) core/object.cpp:921
#11 0xe533fc4 in Object::emit_signal(StringName const&, Variant const**, int) core/object.cpp:1219
#12 0xe535f8b in Object::emit_signal(StringName const&, Variant const&, Variant const&, Variant const&, Variant const&, Variant const&) core/object.cpp:1276
#13 0x583978e in CreateDialog::_confirmed() editor/create_dialog.cpp:463
#14 0x178d360 in MethodBind0::call(Object*, Variant const**, int, Variant::CallError&) core/method_bind.gen.inc:59
#15 0xe52a5cc in Object::call(StringName const&, Variant const**, int, Variant::CallError&) core/object.cpp:921
#16 0xe533fc4 in Object::emit_signal(StringName const&, Variant const**, int) core/object.cpp:1219
#17 0xe535f8b in Object::emit_signal(StringName const&, Variant const&, Variant const&, Variant const&, Variant const&, Variant const&) core/object.cpp:1276
#18 0xa1f515f in Tree::_gui_input(Ref<InputEvent>) scene/gui/tree.cpp:2720
#19 0x647ce71 in MethodBind1<Ref<InputEvent> >::call(Object*, Variant const**, int, Variant::CallError&) core/method_bind.gen.inc:775
#20 0xe5261e0 in Object::call_multilevel(StringName const&, Variant const**, int) core/object.cpp:763
#21 0xe5292eb in Object::call_multilevel(StringName const&, Variant const&, Variant const&, Variant const&, Variant const&, Variant const&) core/object.cpp:863
#22 0x950d202 in Viewport::_gui_call_input(Control*, Ref<InputEvent> const&) scene/main/viewport.cpp:1634
#23 0x951958e in Viewport::_gui_input_event(Ref<InputEvent>) scene/main/viewport.cpp:1944
#24 0x9540d8d in Viewport::input(Ref<InputEvent> const&) scene/main/viewport.cpp:2790
#25 0x9500b3a in Viewport::_vp_input(Ref<InputEvent> const&) scene/main/viewport.cpp:1411
#26 0x234cd1d in MethodBind1<Ref<InputEvent> const&>::call(Object*, Variant const**, int, Variant::CallError&) core/method_bind.gen.inc:775
#27 0xe52a5cc in Object::call(StringName const&, Variant const**, int, Variant::CallError&) core/object.cpp:921
#28 0xe528d1d in Object::call(StringName const&, Variant const&, Variant const&, Variant const&, Variant const&, Variant const&) core/object.cpp:847
#29 0x93efa59 in SceneTree::call_group_flags(unsigned int, StringName const&, StringName const&, Variant const&, Variant const&, Variant const&, Variant const&, Variant const&) scene/main/scene_tree.cpp:275
```
**Steps to reproduce:**
1. Create ItemList
2. Save scene
3. Change type of ItemList to Label
4. Revert change with CTRL + Z
5. Close Godot | bug,topic:core,confirmed,topic:gui | low | Critical |
541,176,103 | go | cmd/go: two possible canonical states when go.sum is empty, the file may or may not exist |
### What version of Go are you using (`go version`)?
go version go1.13.5 darwin/amd64
### Does this issue reproduce with the latest release?
using latest
### What operating system and processor architecture are you using (`go env`)?
macOS Catalina
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
### What did you do?
- Created simple hello world application without dependencies consisting of main.go and a go.mod file containing:
```
module github.com/stephanwesten/hellojob
go 1.13
```
- Executed go build and noticed that no go.sum was generated (unexpected)
- added a dummy dependency to go.mod, re-ran go build and a go.sum was generated.
- removed dependency from go.mod, so it is back to original above, and re-ran go build and go mod tidy
- go.sum was still there.
### What did you expect to see?
(Note that this is the first time for me using go modules so my formulation might not be 100% accurate)
Either go.sum should be removed, or it should have been generated by the first `go build`. That would give symmetric behaviour.
I would prefer that go.sum is always generated. The reason this was a problem for me is that I used a Dockerfile template that assumed a go.sum file would be there.
### What did you see instead?
see above
BTW I entered 'go bug' to file this issue and I got a github error. Perhaps temporary problem?
| NeedsInvestigation,GoCommand,modules | low | Critical |
541,198,170 | javascript | Rules for nullish coalescing and optional chaining | I noticed that babel-preset-airbnb added the following stage 4 proposals in [v4.2.0](https://github.com/airbnb/babel-preset-airbnb/blob/master/CHANGELOG.md#420---20191114):
- @babel/plugin-proposal-nullish-coalescing-operator
- @babel/plugin-proposal-optional-chaining
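For context, a quick sketch of what the two operators do:
```js
const config = { retries: 0 };

// Nullish coalescing: falls back only when the left side is null/undefined,
// so 0 and '' are preserved (unlike ||).
const retries = config.retries ?? 3; // 0

// Optional chaining: short-circuits to undefined instead of throwing
// when an intermediate value is null/undefined.
const region = config.server?.region; // undefined
```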
Are there any rules/guidelines for these features? | semver-breaking: guide change,needs eslint rule change/addition | low | Major |
541,204,789 | godot | Main screen EditorPlugins don't set the bottom anchor | **Godot version:**
Checked in v3.1.2-stable, and latest master
**OS/device including version:**
linux, 4.19.84-1-MANJARO
**Issue description:**
If you add a GraphEdit node to the panel used in a MainScreen in an EditorPlugin, it won't appear.
Every other node I've tested shows up fine.
**Steps to reproduce:**
1. Make an EditorPlugin and define that it has a main screen
2. Instance a panel with a GraphEdit as a child, optionally with a few GraphNodes
3. Switch to the main screen: the GraphEdit is not there
**Minimal reproduction project:**
[grapheditplugin.zip](https://github.com/godotengine/godot/files/3990334/grapheditplugin.zip) | topic:plugin,topic:gui | low | Minor |
541,221,374 | node | Rscript throws error "missing value where TRUE/FALSE needed" when analyzing benchmarks | * **Version**: master
* **Platform**: Ubuntu
* **Subsystem**: http2
Followed documentation to create [comparison benchmarks](https://github.com/nodejs/node/blob/master/benchmark/writing-and-running-benchmarks.md#comparing-nodejs-versions)
Command run:
```console
$ node benchmark/compare.js --old ../node-master --new ../node-http2-for-of --set benchmarker=wrk http2 > compare-http2.csv
[05:52:54|% 100| 5/5 files | 60/60 runs | 12/12 configs]: Done
```
Output file (rename extension to CSV): [compare-http2.txt](https://github.com/nodejs/node/files/3990451/compare-http2.txt)
I receive the following error when trying to analyze benchmarks:
```console
$ cat compare-http2.csv | Rscript benchmark/compare.R
Error in if (w$p.value < 0.001) { : missing value where TRUE/FALSE needed
Calls: ddply ... llply -> loop_apply -> .Call -> <Anonymous> -> .fun
Execution halted
``` | benchmark | low | Critical |
541,225,305 | pytorch | cuCtxGetDevice error and seg fault with DDP and OpenMPI | ## 🐛 Bug
When using DistributedDataParallel with OpenMPI+UCX for certain tested models I'm getting this error in the DDP constructor (I believe it's during the model broadcast):
```
0: [1576565271.426514] [cgpu06:45376:0] cuda_ipc_md.c:62 UCX ERROR cuCtxGetDevice(&cu_device) is failed. ret:invalid device context
0: [1576565271.426547] [cgpu06:45376:0] ucp_rkey.c:250 UCX ERROR Failed to unpack remote key from remote md[4]: Input/output error
0: [cgpu06:45376:0:45523] Caught signal 11 (Segmentation fault: address not mapped to object at address 0x20)
```
I observed this error using a resnet50 model but no error if I instead used a very small single-layer CNN.
## To Reproduce
The script is here:
https://github.com/sparticlesteve/nersc-pytorch-build/blob/911fc67b6667d3c6e3be972169e30e34c1a33af5/test_ddp.py
I submit via slurm a single-node job with 8 MPI ranks for 8 V100 gpus, something like:
`srun --ntasks-per-node 8 -u -l python test_ddp.py --backend mpi`
My full log with stack trace is here:
https://gist.github.com/sparticlesteve/7307694f89329c277e16e452b524fefa
## Environment
PyTorch version: 1.3.1
Is debug build: No
OpenMPI: 4.0.1 with UCX 1.6
CUDA used to build PyTorch: 10.1.168
OS: openSUSE Leap 15.0
GCC version: (GCC) 7.3.0 20180125 (Cray Inc.)
CMake version: version 3.14.0
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 10.1.168
GPU models and configuration:
GPU 0: Tesla V100-SXM2-16GB
GPU 1: Tesla V100-SXM2-16GB
GPU 2: Tesla V100-SXM2-16GB
GPU 3: Tesla V100-SXM2-16GB
GPU 4: Tesla V100-SXM2-16GB
GPU 5: Tesla V100-SXM2-16GB
GPU 6: Tesla V100-SXM2-16GB
GPU 7: Tesla V100-SXM2-16GB
Nvidia driver version: 440.33.01
cuDNN version: Could not collect
cc @ngimel @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @xush6528 | oncall: distributed,module: cuda,triaged | low | Critical |
541,230,018 | node | http2/write.js benchmark throws "ERR_HTTP2_ERROR" | * **Version**: v10.18.0, v12.14.0
* **Platform**: Ubuntu
* **Subsystem**: http2
Command run:
```console
$ h2load --version
h2load nghttp2/1.41.0-DEV
$ node benchmark/run.js --filter write http2
http2/write.js
http2/write.js benchmarker="h2load" size=100000 length=65536 streams=100: 1,898.23
http2/write.js benchmarker="h2load" size=100000 length=131072 streams=100: 1,038.20
events.js:174
throw er; // Unhandled 'error' event
^
Error [ERR_HTTP2_ERROR]: The user callback function failed
at Http2Session.onSessionInternalError [as error] (internal/http2/core.js:734:26)
Emitted 'error' event at:
at emitErrorNT (internal/streams/destroy.js:91:8)
at emitErrorAndCloseNT (internal/streams/destroy.js:59:3)
at process._tickCallback (internal/process/next_tick.js:63:19)
```
| http2 | low | Critical |
541,251,688 | go | x/tools/internal/event: improve package documentation | I know it's an internal package, but the package doc for internal/telemetry is poorly conceived:
```
Package telemetry provides an opinionated set of packages that cover the
main concepts of telemetry in an implementation agnostic way. As a library
author you should look at...
```
It doesn't say what the package does. It claims that the package is "opinionated" (a passive-aggressive way to say "we don't care what you think about the design") but gives no hint at what that opinion leads to. Avoid editorializing, just say what the package does. And it wouldn't hurt to say what "telemetry" is and what it's measuring, and for whom.
| Documentation,NeedsInvestigation,Tools | low | Major |
541,265,436 | terminal | TermControl key down handling investigation and refactoring |
In the current TermControl implementation, key-down events are handled by the PreviewKeyDown event. PreviewKeyDown uses the tunneling strategy, so key-down events are first sent to the XAML root element, which is TermControl.
SearchBoxControl is a child element of TermControl, so key input events on SearchBoxControl are sent to TermControl first. There is currently an early-return check in TermControl::_KeyDownHandler which prevents key input from the search box from being processed by TermControl, but this is not a good implementation.
We need to think about using the KeyDown event handler instead of PreviewKeyDown in TermControl, but we need to make sure this change won't break anything. | Area-TerminalControl,Product-Terminal,Issue-Task | low | Critical |
541,271,126 | godot | [Modifier + Key] action is fired when pressing Key without modifier | Sorry if this is a duplicate. It's not that easy finding anything in 5000+ open issues, though.
**Godot version:**
3.1.2
**OS/device including version:**
Manjaro with kernel 5.4.2
**Issue description:**
When creating an action with a modifier key, `event.is_action_released()` returns `true` when pressing only the key, without the modifier.
There are a lot of issues about the opposite problem, but I didn't find one about this direction.
As seen in the relevant code (https://github.com/godotengine/godot/blob/3.1/core/os/input_event.cpp#L312):
```
bool match = get_scancode() == key->get_scancode() && (!key->is_pressed() || (code & event_code) == code);
```
modifiers are simply ignored when the key isn't pressed (key released).
**Steps to reproduce:**
Create action with an modifier. Ctrl+A for example.
In an `_unhandled_input` function do:
```
if event.is_action_released("my_ctrl_a_action"):
print("ctrl+a action called")
```
and press "A".
| bug,confirmed,topic:input | low | Minor |
541,305,813 | godot | C# threads / tasks don't work on the web (async / await) | **Godot version:**
3.2 beta 4
**OS/device including version:**
Windows 10
**Issue description:**
Threading and tasks don't seem to work at all when exporting mono projects to the web.
**Steps to reproduce:**
1. Create a new project.
2. Create a new thread or task in a C# script that does something (I tried printing to the console via GD.Print; a sketch is shown after these steps)
3. Export for the web
4. Run the exported project in the web browser.
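For reference, a minimal sketch of the kind of script I mean (the class name is just an example):
```csharp
using Godot;
using System.Threading.Tasks;

public class ThreadTest : Node
{
    public override void _Ready()
    {
        // Schedule work on a background task; on desktop this prints,
        // but in the web export it never does.
        Task.Run(() => GD.Print("it worked"));
    }
}
```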
**Minimal reproduction project:**
https://1drv.ms/u/s!AvsEaC7oMEMagstwXvyFlCMdOOMO2A?e=uuytsD
The label should change text to "it worked" when the program starts. | enhancement,platform:web,topic:dotnet | low | Major |
541,308,437 | vue | Inline style binding is NOT consistent with the web standard | ### Version
2.6.11
### Reproduction link
[https://codesandbox.io/s/thirsty-heisenberg-elvz6?fontsize=14&hidenavigation=1&theme=dark](https://codesandbox.io/s/thirsty-heisenberg-elvz6?fontsize=14&hidenavigation=1&theme=dark)
### Steps to reproduce
1. open the repo
2. check the style of the green box
### What is expected?
The green div's height should be 101px
### What is actually happening?
the green div's height is 22px, because the inline style didn't work on this component
---
I checked the web standard: if I assign a string to a node's style with `node.style.cssText`, or add the style directly in devtools as `"height: 100px; height:"`, the final height is 100px. The invalid trailing `height:` declaration is simply dropped.
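A minimal sketch of the browser behaviour I'm comparing against (run in any browser console):
```js
const el = document.createElement('div');
el.style.cssText = 'height: 100px; height:';
console.log(el.style.height); // "100px": the trailing invalid declaration is dropped
```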
Then I checked the source code and found that the function `parseStyleText` only uses a regex to split the string and then assigns the pieces to the result value by value, regardless of whether each value is valid. That is why the green box's height is incorrect.
I have created a PR that tries to fix this issue in web rendering, but not in SSR.
<!-- generated by vue-issues. DO NOT REMOVE --> | improvement | medium | Minor |
541,313,660 | svelte | Flip animations skipped in Firefox and Chrome, but not in Safari | **Describe the bug**
In rare instances, flip animations are skipped (in browsers other than Safari).
I'm making a sliding puzzle. I have a fixed start position. As shown in the attached GIF, when I click 2 followed by 3, the sliding animation is skipped for the second click (i.e. the 3-piece teleports to its end position). If I slide the pieces back (by clicking 3 and then 2) and repeat the process, everything works.

The bug appears with some opening sequences, but not others. E.g.: If you click 4 and then 1, the bug appears, but not 5 and then 3.
The bug appears in Firefox and Chrome, but not Safari.
**Logs**
*Not included*
**To Reproduce**
See this REPL; reload the page between each test: https://svelte.dev/repl/c0fdc09f46784474be2bcca4d56bd1ce?version=3.16.5
I have tested all possible opening sequences, and the bug appears in the following:
- 2-3
- 2-1
- 4-1
- 4-7
... but not in the following:
- 8-7
- 8-6
- 5-6
- 5-3
My suspicion is that the first tile is the deciding factor, because the bug appears when clicking 2 or 4 first, but not 8 or 5.
If you click the second piece before the first piece's slide animation has finished, the bug disappears. (This is easier to reproduce if you increase the animation duration.)
**Expected behavior**
I expect the animation to work in all cases (in all browsers).
**Stacktraces**
*Not included*
**Information about your Svelte project:**
- Firefox 71.0, Chrome 77.0.3865.90 and Safari 13.0.4 (15608.4.9.1.3)
- MacOS 10.15.2 (19C57)
- Svelte version 3.16.5
**Severity**
This is just annoying and mysterious. In other words, not very severe.
**Additional context**
No other context
| bug,temp-stale | low | Critical |
541,322,847 | youtube-dl | Has the support for decko been implemented yet? |
## Checklist
- [X] I'm asking a question
- [X] I've looked through the README and FAQ for similar questions
- [x] I've searched the bugtracker for similar questions including closed ones
## Question
Hi,
I tried to download from https://decko.ceskatelevize.cz/ but youtube-dl says the URL is not supported.
I found #12713 and #12719 but the mentioned changes are not part of the code nor the release.
What is the status of this feature.
My youtube-dl version is 2019.09.28-1 but I did not find the support for decko either when I searched the code here on github.
Thanks
| question | low | Critical |
541,322,934 | flutter | Support protected GL contexts | This is related to #32082
Support of [EGL_EXT_protected_content](https://www.khronos.org/registry/EGL/extensions/EXT/EGL_EXT_protected_content.txt) and [GL_EXT_protected_textures](https://www.khronos.org/registry/OpenGL/extensions/EXT/EXT_protected_textures.txt) seems to be an alternative way to resolve #32082
In this case, DRM-protected video could be rendered to a Surface built on top of a protected SurfaceTexture and referenced from Flutter via [Texture](https://api.flutter.dev/flutter/widgets/Texture-class.html) | c: new feature,platform-android,engine,a: video,customer: crowd,a: platform-views,p: video_player,package,c: proposal,P2,team-android,triaged-android | low | Major |
541,336,998 | rust | [mir-opt] Simplify `SwitchInt`s of explicitly written ranges to range tests | Consider the program:
```rust
fn program(x: u8) -> u8 {
match x {
1 | 2 | 3 | 4 => 0,
_ => 1,
}
}
```
We will generate the following `SwitchInt`:
```rust
switchInt(_1) -> [1u8: bb2, 2u8: bb2, 3u8: bb2, 4u8: bb2, otherwise: bb1];
```
Ostensibly, something like this would be more profitable:
```rust
_2 = Le(const 1u8, _1);
switchInt(move _2) -> [false: bb2, otherwise: bb3];
```
This is the code you get from the range pattern `0..=4`.
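For reference, a minimal sketch of the range-pattern form referred to above (written with `1..=4`, the range equivalent of the `1 | 2 | 3 | 4` arm in this match):
```rust
fn program_range(x: u8) -> u8 {
    match x {
        1..=4 => 0,
        _ => 1,
    }
}
```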
cc @wesleywiser @oli-obk | I-slow,C-enhancement,T-compiler,A-mir-opt | low | Minor |
541,343,568 | godot | Exported mac executables don't have proper permissions to run when exported from Mac | **Godot version:**
3.2 beta4 **(mono)**
**OS/device including version:**
Mac OSX (Mojave)
**Issue description:**
When exporting mac applets, they don't run because they don't have the correct permissions when exported on a mac.
**Steps to reproduce:**
1. Create basic project, set main scene
2. Export for mac
3. Try running the applet, get prompted with permission error
Note: fixing the permissions allows the executable to run fine. | bug,platform:macos,topic:porting | low | Critical |
541,344,656 | pytorch | IndexExpressions (or slice) for jit.script functions | ## 🚀 Feature
We should be able to do something like:
```py
import numpy as np
import torch
from torch import jit

@jit.script
def foo(x: torch.Tensor, b: int):
    bs = x.size(0) // b
    for i in range(bs):
        bi = np.s_[i*b:(i+1)*b]  # or slice(i*b, (i+1)*b)
        bar(x[bi])  # `bar` stands in for arbitrary per-block work
```
## Motivation
Giving names to long/repeated index expressions makes kernels a lot easier to read and reason about. There's currently no way around this besides copy-pasting the expression at every indexing site.
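For illustration, a minimal self-contained sketch (with made-up shapes and a trivial body) of the status quo: inside a scripted function the slice bounds have to be repeated at every indexing site.

```py
import torch

@torch.jit.script
def foo(x: torch.Tensor, b: int) -> torch.Tensor:
    out = torch.zeros_like(x[0:b])
    bs = x.size(0) // b
    for i in range(bs):
        # the i*b:(i+1)*b expression is copy-pasted wherever the block is needed
        out = out + x[i * b:(i + 1) * b]
    return out
```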
## Pitch
Add an analogue to `numpy.IndexExpression` and an associated constructor (like `numpy.s_`) to PyTorch with support in TorchScript.
## Alternatives
Not sure.
## Additional context
https://docs.scipy.org/doc/numpy/reference/generated/numpy.s_.html
cc @suo | oncall: jit,triaged | low | Minor |
541,347,892 | vscode | [Feature Request] Create collapsed and expanded elements in tooltips | ### Description
I am trying to create my own extension using the VS Code extension API, but I didn't find any methods for creating collapsed / expanded elements in a tooltip.
> I tried searching for methods with keywords like `collapse`, `collapsed`, etc. I only found **treeItems**, but I think that is something different.
**So I want to request adding this ability to the API (or a method for it), if it does not already exist.**
### Example
As we know, VS Code has a debug mode.
If we enable it and use **breakpoints**, we can inspect what is stored in variables (and so on).
Example:

As we can see, `__proto__` can be **expanded**/**collapsed** on click:

### Suggestion
So, as I wrote above:
currently we can create tooltips with a **markdown string** or a plain **string**,
but I would like to create a bit more, like these "spoilers" (expanded / collapsed elements).
> If possible, I would also like to use highlighting (changing color, bold and other styles) for any created elements, not just inside a code block.
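For reference, a minimal sketch of what the current API allows (the `plaintext` selector and the contents are just illustrative): hover contents are a `MarkdownString`, with no way to add collapsible sections.
```ts
import * as vscode from 'vscode';

export function activate(context: vscode.ExtensionContext) {
    context.subscriptions.push(
        vscode.languages.registerHoverProvider('plaintext', {
            provideHover(document, position) {
                // Today the tooltip body is static markdown: no expand/collapse.
                const md = new vscode.MarkdownString();
                md.appendMarkdown('**Static tooltip** with no expand/collapse support');
                md.appendCodeblock('const x = 42;', 'javascript');
                return new vscode.Hover(md);
            },
        })
    );
}
```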
| markdown,under-discussion | low | Critical |
541,348,610 | pytorch | [Feature Request] Make torch.solve output NaN for singular matrix | ## 🚀 Feature
Make torch.solve output NaNs instead of crashing for singular matrix
```python
import torch
sol, lu = torch.solve(torch.zeros(3,1), torch.ones(3,3).unsqueeze(0))
```
Now it leads to crash:
> Traceback (most recent call last):
> File "<stdin>", line 1, in <module>
> RuntimeError: solve_cpu: For batch 0: U(2,2) is zero, singular U.
## Motivation
I am using torch.solve for quadratic interpolation of local response maximum in kornia library, similar to way it is done in OpenCV for SIFT
https://github.com/kornia/kornia/blob/master/kornia/geometry/spatial_soft_argmax.py#L602
```python3
# to determine the location we are solving system of linear equations Ax = b, where b is 1st order gradient
# and A is Hessian matrix
b: torch.Tensor = kornia.filters.spatial_gradient3d(input, order=1, mode='diff') #
b = b.permute(0, 1, 3, 4, 5, 2).reshape(-1, 3, 1)
A: torch.Tensor = kornia.filters.spatial_gradient3d(input, order=2, mode='diff')
A = A.permute(0, 1, 3, 4, 5, 2).reshape(-1, 6)
dxx = A[..., 0]
dyy = A[..., 1]
dss = A[..., 2]
dxy = A[..., 3]
dys = A[..., 4]
dxs = A[..., 5]
# for the Hessian
Hes = torch.stack([dxx, dxy, dxs, dxy, dyy, dys, dxs, dys, dss]).view(-1, 3, 3)
nms_mask: torch.Tensor = kornia.feature.nms3d(input, (3, 3, 3), True)
x_solved: torch.Tensor = torch.zeros_like(b)
x_solved_masked, _ = torch.solve(b[nms_mask.view(-1)], Hes[nms_mask.view(-1)])
```
OpenCV: https://github.com/opencv/opencv_contrib/blob/master/modules/xfeatures2d/src/sift.cpp#L516
```
Vec3f X = H.solve(dD, DECOMP_LU);
xi = -X[2];
xr = -X[1];
xc = -X[0];
if( std::abs(xi) < 0.5f && std::abs(xr) < 0.5f && std::abs(xc) < 0.5f )
break;
if( std::abs(xi) > (float)(INT_MAX/3) ||
std::abs(xr) > (float)(INT_MAX/3) ||
std::abs(xc) > (float)(INT_MAX/3) )
return false;
```
However, for some points the system can be singular. In OpenCV such points are simply rejected without crash
## Pitch
I would like to have the following output instead:
```python
import torch
sol, lu = torch.solve(torch.zeros(3,1), torch.ones(3,3).unsqueeze(0))
print (sol, lu)
```
```
tensor([[[nan],
[nan],
[nan]]]) tensor([[[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan]]])
```
## Alternatives
Try/except within a for-loop, but that is crazy slow. Another alternative would be to use least squares (torch.lstsq) instead of LU, but it doesn't yet have a batched version: https://github.com/pytorch/pytorch/issues/27749
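A rough sketch of that per-sample try/except workaround (illustrative only, which is exactly why it is slow):
```python
import torch

def solve_or_nan(b, A):
    # b: (N, 3, 1), A: (N, 3, 3); singular systems are left as NaN
    out = torch.full_like(b, float('nan'))
    for i in range(A.shape[0]):
        try:
            out[i] = torch.solve(b[i], A[i])[0]
        except RuntimeError:
            pass  # singular matrix for this sample: keep the NaNs
    return out
```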
cc @vincentqb @vishwakftw @jianyuh @nikitaved @pearu | triaged,module: linear algebra | low | Critical |
541,363,340 | rust | Switching to opt-level=z on i686-windows-msvc triggers STATUS_ACCESS_VIOLATION | I have a crate which compiles and tests fine, but as soon as I switch `opt-level=z` it triggers `STATUS_ACCESS_VIOLATION` in some tests, but only on the windows targets. Tested on
- nightly-2019-11-06
- nightly-2019-12-20
The corresponding log outputs can be found here https://ci.appveyor.com/project/dignifiedquire/deltachat-core-rust/builds/29699168
and the code https://github.com/deltachat/deltachat-core-rust/pull/1087
I am sorry I don't have a more reduced testcase, but I am quite clueless on how to debug this tbh. | A-LLVM,O-windows,P-high,T-compiler,I-unsound,C-bug,E-needs-mcve,O-x86_32,E-needs-investigation | medium | Critical |
541,363,850 | TypeScript | Support for all type features in declaration files. | I'm on TS 3.7.3 at time of writing this.
It seems like this issue should simply be focused on and fixed before continuing to add more and more features to the language and widening the source vs declaration gap.
This is quite a problem!
## Search terms
"typescript declaration file limitations"
"has or is using private name"
"exported class expression may not be private or protected"
"Property 'foo' of exported class expression may not be private or protected. ts(4094)"
"Return type of exported function has or is using private name 'foo'. ts(4060)"
Related issues (a small fraction of the results on Google) in no particular order:
- https://github.com/microsoft/TypeScript/issues/29872 (@canonic-epicure)
- https://github.com/microsoft/TypeScript/issues/17744 (@fr0)
- https://github.com/Microsoft/TypeScript/issues/18791 (@johnnyreilly)
- https://github.com/microsoft/TypeScript/issues/30355 (@a-student)
- https://github.com/Microsoft/TypeScript/issues/17293 (@saschanaz)
- https://github.com/Microsoft/TypeScript/issues/24226 (@rjamesnw)
- https://github.com/gluons/vue-thailand-address/issues/5 (@gluons)
- https://github.com/types/sequelize/issues/176 (@billybarnum)
- https://github.com/microsoft/TypeScript/issues/6307 (@jeffschwartz)
- https://github.com/Microsoft/TypeScript/issues/23110 (@evil-shrike)
- https://github.com/typed-ember/ember-cli-typescript/issues/133 (@dfreeman)
- https://github.com/microsoft/TypeScript/issues/15947 (@ryansmith94)
- https://github.com/vuejs/vue/issues/6999 (@gluons)
- https://github.com/Microsoft/TypeScript/issues/16440 (@Manusan42)
- https://github.com/microsoft/TypeScript/issues/28754 (@trusktr)
- https://github.com/Microsoft/TypeScript/issues/5284 (@tommyZZM)
- https://github.com/Microsoft/TypeScript/issues/23280 (@mjbvz)
- https://github.com/gr2m/javascript-plugin-architecture-with-typescript-definitions/issues/1 (@gr2m)
- **And the list on Google _literally_ goes on and on and on.** (I tried to skip any that were actual recognized bugs, fixed or not, but it may be possible I missed one or two, though the vast majority appear to be limitations of declaration files)
## Workaround
One way to work around all of the above issues, for libraries, is to have library authors point their `types` field in `package.json` to their source entrypoint. This will eliminate the problems in the listed issues, but has some big downsides:
1. If a downstream consumer uses a library without declaration files (e.g. without `dist/index.d.ts`) but with `types` pointing to a source file (e.g. `src/index.ts`), and the consumer's `tsconfig.json` settings are not the same as the library's, this may cause type errors (e.g. the library had `strict` set to `false` while being developed, but the consumer sets `strict` to `true` and TypeScript picks up all the strict type errors in the library code).
- This is documented in https://github.com/microsoft/TypeScript/issues/35744
1. If `skipLibCheck` is set to true, the consumer project's compilation will still type-check the (non-declaration) library code regardless.
- This is documented in https://github.com/microsoft/TypeScript/issues/41883
With those two problems detailed, if you know you will be working in a system where you can guarantee that all libraries and consumers of those libraries will be compiled with the same compiler (`tsconfig.json`) settings (i.e. similar to Deno that defines compiler options for everyone), then pointing the `package.json` `types` field to a source file (because declaration output is not possible) is currently the workaround that allows all the unsupported features of declaration files to be usable. **But this will fail badly if a library author does not know what compiler settings library consumers will have: everything may work fine during development and testing within external examples, but a downstream user *will* eventually report type errors if they compile their program with different settings while depending on the declaration-less libraries!**
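For concreteness, the workaround described above boils down to something like this in `package.json` (a sketch; the package name and paths are illustrative):
```json
{
  "name": "my-library",
  "main": "dist/index.js",
  "types": "src/index.ts"
}
```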
## Suggestion
Support for all type features in declaration files.
**_Please._** 🙏 I love you TS team, you have enabled so much better code dev with TypeScript. :heart: :blush: TS just needs declaration love for support of _all_ features of the language.
Some work is being done, e.g. https://github.com/microsoft/TypeScript/issues/23127, but overall I feel that too much effort is being put into new language features while declaration output is left behind.
This creates a poor developer experience when people's working code doesn't work (the moment they wish to publish it as a library and turn on `declaration: true`).
I hope the amazing and wonderful TS team realize how frustrating it may feel for someone to work on a project for months, only to end with un-fixable type errors the moment they want to publish the code as a library with `declaration: true`, then having to abandon features and re-write their code, or try to point package.json `"types"` to `.ts` files only to face [other](https://github.com/microsoft/TypeScript/issues/35744) [issues](https://github.com/microsoft/TypeScript/issues/41883).
I wish every new feature of the language came paired with working tests for equivalent declaration emit. Can this be made a requirement for every new language feature, just like unit tests are a requirement?
## Use Case
Use any language feature, publish your code, then happily move along, without facing issues like
```
src/html/DeclarativeBase.ts:25:10 - error TS4094: Property 'onChildSlotChange' of exported class expression may not be private or protected.
25 function makeDeclarativeBase() {
~~~~~~~~~~~~~~~~~~~
src/html/WebComponent.ts:32:10 - error TS4060: Return type of exported function has or is using private name 'WebComponent'.
32 function WebComponentMixin<T extends Constructor<HTMLElement>>(Base: T) {
~~~~~~~~~~~~~~~~~
```
I worked hard for months to get the typings in the above code working, then I turned on `"declaration": true` and TypeScript said "today's not the day!".
I hope you the team can see that these issues are a pain, and realize the **need** to bring declaration emit to parity with language features and not letting the gap between language features and declaration output widen.
## Examples
The issue that sparked me to write this was https://github.com/microsoft/TypeScript/issues/35744.
In that issue there's an example of what implicit return types might look like:
```ts
export declare function AwesomeMixin<T extends Constructor>(Base: T) {
type Foo = SomeOtherType<T> // only types allowed in here.
// only special return statements allowed, or something.
return declare class Awesome extends Base {
method(): Foo
}
}
```
We'd need solutions for other problems like the `protected`/`private` error above, etc.
Of course it'll take some imagination, but more importantly it will take some discipline: disallow new features without declaration parity.
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Meta-Issue | high | Critical |
541,364,094 | pytorch | c++ PReLUFuncOptions declared, not used or valid | ## 🐛 Bug
Nothing serious: the parameterized ReLU's functional form requires the input tensor and the weight(s), not the _num_parameters_ & _init_ arguments of the module:
```
struct TORCH_API PReLUOptions {
/// number of `a` to learn. Although it takes an int as input, there is only
/// two values are legitimate: 1, or the number of channels at input. Default: 1
TORCH_ARG(int64_t, num_parameters) = 1;
/// the initial value of `a`. Default: 0.25
TORCH_ARG(double, init) = 0.25;
};
TORCH_NN_FUNCTIONAL_USE_MODULE_OPTIONS(PReLU, PReLUFuncOptions)
```
_PReLUFuncOptions_ as declared is never used, and it wouldn't fit the functional form anyway. | triaged | low | Critical |
541,375,751 | pytorch | TorchBind broken on rocm | with https://github.com/pytorch/pytorch/pull/29501
```
20:15:47 ======================================================================
20:15:47 ERROR: test_torchbind (__main__.TestScript)
20:15:47 ----------------------------------------------------------------------
20:15:47 Traceback (most recent call last):
20:15:47 File "test_jit.py", line 4561, in test_torchbind
20:15:47 test_equality(f, lambda x: x)
20:15:47 File "test_jit.py", line 4553, in test_equality
20:15:47 obj1 = f()
20:15:47 File "test_jit.py", line 4558, in f
20:15:47 val = torch.classes._TorchScriptTesting_Foo(5, 3)
20:15:47 File "/var/lib/jenkins/.local/lib/python3.6/site-packages/torch/_classes.py", line 9, in __getattr__
20:15:47 proxy = torch._C._get_custom_class_python_wrapper(attr)
20:15:47 RuntimeError: Class _TorchScriptTesting_Foo not registered!
20:15:47
20:15:47 ----------------------------------------------------------------------
```
cc @suo | oncall: jit,triaged | low | Critical |
541,377,842 | terminal | terminal/parser: determine what to do when split escape sequence writes show up | # Environment
```none
Windows build number: Version 10.0.18363.535
(reproducible with OpenConsole from master)
Windows Terminal version (if applicable): n/a
Any other software?
- Simple ConPTY test code
```
# Steps to reproduce
- Create pseudo console as seen in examples
- Copy I/O to and from serial / COM port; alternatively manually write CSI or escape sequence in multiple write operations.
Example for arrow-up key (as seen during development with serial ports):
Write 1: 0x1b
Write 2: 0x5b 0x41
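For reference, a minimal sketch (not the reporter's exact code) of feeding the arrow-up sequence to the ConPTY input pipe in two separate writes; `hPipeIn` is assumed to be the write end of the input pipe handed to `CreatePseudoConsole`.
```cpp
#include <windows.h>

void SendArrowUpSplit(HANDLE hPipeIn)
{
    DWORD written = 0;

    const char esc[] = { '\x1b' };      // write 1: ESC alone
    WriteFile(hPipeIn, esc, 1, &written, nullptr);

    const char csiA[] = { '[', 'A' };   // write 2: '[' then 'A'
    WriteFile(hPipeIn, csiA, 2, &written, nullptr);
}
```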
# Expected behavior
- Working escape sequences over multiple write operations to PTY input pipe
Example for arrow-up key:
```
$ showkey -a
Press any keys - Ctrl-D will terminate this program
^[[A 27 0033 0x1b
91 0133 0x5b
65 0101 0x41
```
# Actual behavior
Due to [```FlushAtEndOfString```](https://github.com/microsoft/terminal/blob/a322ff06f8a299a8215d85c16aa3b434cf4b3081/src/terminal/parser/InputStateMachineEngine.cpp#L905) being true, the escape sequence is split up and rendered wrong. E.g. arrow-up becomes:
```
^[ 27 0033 0x1b
[A 91 0133 0x5b
65 0101 0x41
```
which only renders "[A" to the PTY without the desired effect.
This is 100% reproducible, i.e. the arrow keys never work because they are always received as two reads from the (virtual) serial port.
Note: A custom build of conhost.exe (OpenConsole.exe) with ```FlushAtEndOfString``` changed to return false does correctly process split escape sequence writes.
# Comments
The comment in the caller in [stateMachine.cpp](https://github.com/microsoft/terminal/blob/a322ff06f8a299a8215d85c16aa3b434cf4b3081/src/terminal/parser/stateMachine.cpp#L1288) says the following:
```
// <kbd>alt+[</kbd>, <kbd>A</kbd> would be processed like `\x1b[A`,
// which is _wrong_).
//
// Fortunately, for VT input, each keystroke comes in as an individual
// write operation.
```
I think both statements here are wrong, or at least not universally true. Typing Alt+[ and A on a VTE based Linux terminal emulator (e.g. gnome-terminal or xfce4-terminal) will result in the same reaction as pressing the arrow-up key.
Is there any standard that guarantees VT input sequences to happen in single writes? I tested Linux support of split escape sequence writes - it worked fine on Linux 5.4 framebuffer getty TTY and on all tested PTY terminal emulators (xterm, VTE, etc.).
Related: https://github.com/microsoft/terminal/pull/2823 | Product-Conpty,Area-Input,Issue-Task,Priority-3 | medium | Major |
541,387,273 | pytorch | allow setting different batch size splits for data_parallel.py and distributed.py | ## 🚀 Feature
Allow a second parameter when assigning devices to parallel implementations, to control how the batch will be split between the devices involved.
## Motivation
The main limitation I have encountered in any multi-GPU or multi-system PyTorch training setup is that each GPU must be of the same size, or you risk slowdowns and memory overruns during training.
## Pitch
Add a new parameter to data_parallel and distributed that sets the batch size allocated to each device involved (see the sketch below).
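A sketch of what the proposed interface could look like (the `chunk_sizes` argument is hypothetical and does not exist in `torch.nn.DataParallel` today):
```python
import torch.nn as nn

model = nn.Linear(128, 10)
parallel_model = nn.DataParallel(
    model,
    device_ids=[0, 1],
    chunk_sizes=[24, 8],  # hypothetical: 24 samples to GPU 0, 8 to the smaller GPU 1
)
```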
## Alternatives
My current workaround is assigning multiple instances of a particular GPU to DataParallel, but this is not ideal because it still carries a significant speed and batch size overhead on large models.
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @xush6528 | oncall: distributed,feature,triaged,module: data parallel | low | Major |
541,396,519 | flutter | Server-side rendering for Flutter web | Status: If your interest is in SEO, see https://github.com/flutter/flutter/issues/46789; otherwise, see https://github.com/flutter/flutter/issues/47600#issuecomment-1016920547.
----
As an app/web developer, I think Server-side rendering may offer a few advantages:
1. Easier search engine optimization.
2. Better loading time in some cases. Older devices may take a little longer to run the Javascript code. Also for applications that need to call a few REST APIs before rendering content, SSR may use caching to significantly reduce delay.
3. Better compatibility. Browsers still use different JS engines, therefore, rendering content from JS code on the front-end may sometimes get different results on different devices. Plain HTML will be much more likely to get rendered consistently.
Is there any chance that Flutter web will support SSR in the future?
| c: new feature,customer: crowd,platform-web,c: proposal,P3,team-web,triaged-web | high | Major |
541,407,765 | storybook | How to show viewport on the docs page? | **Describe the bug**
How can I show the viewport options on the docs page? The canvas page has viewport options, but the documentation page does not. Is this a bug, or does it require additional configuration?
**Screenshots**
The canvas page has viewport options.

But the documentation page is not。

**Code snippets**
```
import { configure, addParameters } from '@storybook/react'
import { DocsPage, DocsContainer } from '@storybook/addon-docs/blocks'
import { INITIAL_VIEWPORTS } from '@storybook/addon-viewport'
import '../src/index.less'
addParameters({
viewport: {
viewports: INITIAL_VIEWPORTS
},
docs: {
container: DocsContainer,
page: DocsPage
}
})
```
| help wanted,addon: viewport,addon: docs,medium | low | Critical |
541,413,610 | TypeScript | Assigning to a string fails if it was previously compared to an enum | **TypeScript Version:** What is installed on the TypeScript playground
**Assigning to a string fails if it was previously compared to an enum**
If you run the code below in the TypeScript playground, line `widgetName += 'AddSomething'` errors out with `Type 'string' cannot be assigned to type 'Widgets'`.
However, if I cast the enum on the previous line to string (`widgetName == <string> Widgets.ComparableHomeSales`), everything compiles fine.
In addition, if I replace `widgetName += 'AddSomething';` with `widgetName = widgetName + 'AddSomething';`, it also compiles fine.
**Code**
```ts
enum Widgets {
ComparableHomeSales = 'ComparableHomeSales',
PriceHistory = 'PriceHistory'
}
class WidgetDescription {
private name: Widgets;
constructor(name: Widgets) {
this.name = name;
}
get Name(): Widgets {
return this.name;
}
}
// code implementation
let widgetDescription: WidgetDescription = new WidgetDescription(Widgets.ComparableHomeSales);
let widgetName: string = widgetDescription.Name;
if (widgetName == Widgets.ComparableHomeSales) {
widgetName += 'AddSomething'; // <-- ERROR
}
```
**Expected behavior:**
`widgetName += 'AddSomething'; ` should work
**Actual behavior:**
`widgetName += 'AddSomething'; ` does not work
**Playground Link:**
[Link](https://www.typescriptlang.org/play/?ssl=24&ssc=6&pln=1&pc=1#code/AQ4UwOwVwW2B1AlgEwOZgC4GdgG8BQowAwgPYwAOAhgE5UBGANmABLlgDKVzOAvMAHIylWg2ZsYnbmCwCANIVAAFGogDGrRFgykaAT2D8BK9Zu269AxSAC++a8DWMqWHEjSYAIjLWqKGRFIIPAciClUANyoMMGAIKkkALgQUdGwAbnsiIjUg7RooNR0aAAp4pJSPbABKEOz64AwACy0AOnLY-g7MhuA7UNA04AA5BLAS6uT3NJwCXqIaTCgaYOa27oHbB36iAHpdx1JkWMRKZkkIDGjAiAdmDGAAd1SvHz8AoKmXjG8sX0R-DdDHEwI9Kmlfv9AUEStNMFhWsJqHQmKx2FweNUeqB7k9vqMKvlEBBUMDnlVIe8bq0CWBMg5EAAzYAlclpWmGfhw7CI8jIsRoyQYmS1OYNNmYDkAaiMAEFkMgOOw1iSBNitvZ8EA)
| Bug,Fix Available,Rescheduled | low | Critical |
541,447,153 | godot | UniformTexture in VisualShader Not Being Applied / Saved in Some Situations | **Godot version:** 3.2 beta 4
**OS/device including version:** Windows 10
**Issue description:** UniformTexture in VisualShader is not being applied / saved in some situations. It is shown in the viewport, but running the game has no effect, and when I reload the project the texture I placed no longer appears. I noticed that it only saves successfully the first time; if I have several meshes with the material, after changing them (they are unique) the others do not save.
**Steps to reproduce:**
1. Add an InstanceMesh
2. Load a mesh and create a VisualShader for the surface of the mesh
3. Make a UniformTexture and put it in a channel (in my case metal and roughness), then Save Material
4. Duplicate the InstanceMesh and load the material, make it unique
5. Change the UniformTexture of the InstanceMesh copy and run
6. The editor shows the change, but the game keeps the old texture (or shows it empty)
7. When I reload the project, the InstanceMesh copy has no changes
This happens every time I follow these steps.
**Minimal reproduction project:**
| bug,topic:editor,topic:shaders | low | Minor |
541,464,800 | pytorch | [docs] F.ctc_loss docs to warn clearly about invalid inf-causing inputs; zero_infinity to become enabled by default | **UPD** summary of all the long discussion for further discoverability:
1. F.ctc_loss will produce inf loss if presented with invalid unalignable examples
2. Such invalid examples may be generated by the official usage code example if one is extremely unlucky or if one twists the dimension sizes a little bit
3. When presented with invalid examples, sum and mean reduction modes by default cause the whole batch loss to be **inf**
Proposals:
1. Have docs warn clearly about conditions on valid examples
2. Have docs warn clearly that the official usage example may produce invalid examples / fix the official code example
3. Enable zero_infinity = True by default, or at least in reduction modes sum/mean (see the sketch below)
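For reference, a self-contained sketch of the existing opt-in that proposal 3 refers to (shapes are deliberately chosen so the example is unalignable):
```python
import torch
import torch.nn.functional as F

log_probs = torch.randn(2, 1, 2).log_softmax(dim=-1)  # (T, B, C): too short to align two repeated targets
targets = torch.tensor([[1, 1]])
input_lengths = torch.tensor([2])
target_lengths = torch.tensor([2])

loss = F.ctc_loss(log_probs, targets, input_lengths, target_lengths, zero_infinity=True)
print(loss)  # 0.0 instead of inf
```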
**BELOW IS THE ORIGINAL ISSUE DESCRIPTION**
```python
import torch
import torch.nn.functional as F
def test_ctc(C, B = 1, T = 2, full = False):
log_probs = torch.randn(B, C, T).log_softmax(dim = 1)
input_lengths = torch.full((B,), T, dtype=torch.long)
target_lengths = torch.full((B,), T, dtype = torch.long)
if full:
targets = torch.full((B, T), C - 1, dtype = torch.long)
else:
targets = torch.randint(1, C, (B, T), dtype=torch.long)
loss = F.ctc_loss(log_probs.permute(2, 0, 1), targets, input_lengths, target_lengths)
print(float(loss))
test_ctc(C = 64, full = False)
# 4.894557952880859
test_ctc(C = 2, full = False)
# inf
test_ctc(C = 64, full = True)
# inf
```
Docs specify that targets can't be blank. If same consecutive labels are not supported, I think it should be explicitly mentioned in docs. And maybe docs should specify some workaround to encode consecutive same-valued targets (given that blank to separate them isn't allowed by docs).
Current docs: `Each element in the target sequence is a class index. And the target index cannot be blank (default=0)` | module: docs,triaged | low | Major |
541,470,592 | flutter | [google_maps_flutter] MissingPluginException When using in PageView | ## Steps to Reproduce
I have an issue combining PageView and Google maps, as follows:
1. Install [google_maps_flutter: ^0.5.21+15]
2. Run attached app
3. Google maps will appear, swtich to next page by clicking Font size icon
4. Exception will be thrown `MissingPluginException (MissingPluginException(No implementation found for method map#update on channel plugins.flutter.io/google_maps_0)`
**Target Platform:** Android 9.1
**Devices:** Huawei Mate 10 Pro
<details>
<summary>Sample App</summary>
```dart
import 'package:flutter/material.dart';
import 'package:google_maps_flutter/google_maps_flutter.dart';
void main() => runApp(MyApp());
class MyApp extends StatelessWidget {
// This widget is the root of your application.
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
home: Home(),
);
}
}
class Home extends StatefulWidget {
@override
_HomeState createState() => _HomeState();
}
class _HomeState extends State<Home> {
PageController _pageController =
PageController(initialPage: 0, keepPage: true);
@override
Widget build(BuildContext context) {
return Scaffold(
body: PageView(
controller: _pageController,
physics: NeverScrollableScrollPhysics(),
onPageChanged: (int) {
print('Page Changes to index $int');
},
children: <Widget>[
GoogleMap(
initialCameraPosition: CameraPosition(
target: LatLng(26.3721621, 50.0911194),
zoom: 11.4746,
),
),
Container(
child: Center(
child: Text("I love Flutter!"),
),
)
],
),
bottomNavigationBar: BottomAppBar(
shape: CircularNotchedRectangle(),
color: Theme.of(context).primaryColor,
child: Row(
mainAxisSize: MainAxisSize.max,
mainAxisAlignment: MainAxisAlignment.spaceBetween,
children: <Widget>[
IconButton(
icon: Icon(Icons.map),
onPressed: () => setState(() => _pageController.jumpToPage(0)),
),
IconButton(
icon: Icon(Icons.text_fields),
onPressed: () => setState(() => _pageController.jumpToPage(1)),
),
],
),
),
);
}
}
```
</details>
## Logs
```bash
flutter analyze
Analyzing maps_pageview...
No issues found! (ran in 9.7s)
```
```console
Doctor summary (to see all details, run flutter doctor -v):
[√] Flutter (Channel stable, v1.12.13+hotfix.5, on Microsoft Windows [Version 10.0.17763.914], locale en-US)
[√] Android toolchain - develop for Android devices (Android SDK version 29.0.2)
[!] Android Studio (version 3.5)
X Flutter plugin not installed; this adds Flutter specific functionality.
X Dart plugin not installed; this adds Dart specific functionality.
[√] VS Code (version 1.41.0)
[√] Connected device (1 available)
```
| c: crash,framework,p: maps,package,team-ecosystem,has reproducible steps,P2,found in release: 2.3,triaged-ecosystem | low | Major |
541,470,981 | youtube-dl | Fallback to import cryptodome if crypto is missing | ## Checklist
- [x] I'm asking a question
- [x] I've looked through the README and FAQ for similar questions
- [x] I've searched the bugtracker for similar questions including closed ones
## Question
Hello, as part of updating a project that uses youtube-dl to Alpine 3.11, I noticed that Alpine has removed the py-crypto package. The recommendation seems to be to use the cryptodome package instead. So to my question: would it make sense to make youtube-dl fall back to cryptodome if crypto is missing? The only instance of this import I can find is https://github.com/ytdl-org/youtube-dl/blob/c712b16dc41b792757ee8e13a59bce9ab3b4e5b4/youtube_dl/downloader/hls.py#L6
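A rough sketch of the requested fallback (not actual youtube-dl code): keep the existing PyCrypto import and fall back to PyCryptodome's `Cryptodome` namespace when it is missing.
```python
try:
    from Crypto.Cipher import AES
    can_decrypt_frag = True
except ImportError:
    try:
        from Cryptodome.Cipher import AES
        can_decrypt_frag = True
    except ImportError:
        can_decrypt_frag = False
```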
| question | low | Critical |
541,472,510 | TypeScript | Named imports should respect Record<string, any> types | <!-- 🚨 STOP 🚨 𝗦𝗧𝗢𝗣 🚨 𝑺𝑻𝑶𝑷 🚨
Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Even if you think you've found a *bug*, please read the FAQ first, especially the Common "Bugs" That Aren't Bugs section!
Please help us by doing the following steps before logging an issue:
* Search: https://github.com/Microsoft/TypeScript/search?type=Issues
* Read the FAQ: https://github.com/Microsoft/TypeScript/wiki/FAQ
Please fill in the *entire* template below.
-->
**TypeScript Version:** 3.7.x-dev.201xxxxx
**Search Terms: dynamic named imports, commonjs named imports**
**Code**
```ts
declare module '*.wasm' {
type WasmProxy = Record<string, (...args: any[]) => Promise<any>>;
const wasmProxy: WasmProxy;
export = wasmProxy;
}
```
**Expected behaviour:**
I would expect typescript to allow **any** named import such as:
```ts
import {anything, hello, hola } from './program.wasm';
```
Since the exported namespace is a Record, where any string is valid.
**Actual behavior:**
The following code does NOT work:
```ts
// Does not work
import {anything, hello, hola } from './program.wasm';
```
Using the wildcard import works:
```ts
// Does work
import * as ns from './program.wasm';
ns.anything;
ns.hello;
ns.hola;
```
| Bug | low | Critical |
541,483,215 | pytorch | More dynamic PyTorch APIs | ## 🚀 Feature
Implement some of the pytorch APIs taking tensors as inputs, instead of python scalars.
## Motivation
Many of the PyTorch APIs take some of their inputs as Python scalars.
Example:
https://pytorch.org/docs/stable/tensors.html#torch.Tensor.narrow
This is OK when the inputs are step constants (either real constants, or command line arguments), but in case these are dynamic, they force model code to perform `item()` calls on tensors.
For PyTorch CPU/GPU an `item()` call is relatively cheap, but for backends (like XLA) which like to work within a *Graph World*, such calls force explicit execution at intermediate graph locations and hence considerably lower performance.
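For concreteness, a minimal sketch (with made-up shapes) of the pattern described above: the offset lives in a tensor, so today it has to be materialized with `item()` before calling `narrow()`.
```python
import torch

x = torch.randn(16, 8)
start = torch.tensor(4)                # dynamic offset produced by the model
y = x.narrow(0, int(start.item()), 4)  # item() forces execution on graph-based backends
```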
It would be nice if some of the pytorch APIs would take some of the inputs as tensors.
For example, `torch.narrow()` could take `start` (and maybe even `length`) as tensor, which would map to:
```Python
at::Tensor narrow(const at::Tensor& self, const at::Tensor& start, int64_t lenght);
```
For CPU/GPU this can be easily mapped to the existing, static, `narrow()` (via a simple `item()` call), while XLA could override that and map it to `DynamicSlice()`.
Similarly, it would be nice to have APIs which return the size of tensors as tensors.
Example:
```Python
at::Tensor dimsize(const at::Tensor& self, int dim);
```
This came up a few times already, like in:
https://github.com/pytorch/xla/issues/1467
https://github.com/pytorch/xla/issues/1379
In that case a wavenet like model needs to slice tensors based on offsets (`start`) values coming off training.
cc @ezyang @gchanan @zou3519 | high priority,triaged,module: ux | medium | Major |
541,489,101 | vscode | Reserve shortcut prefix for user | While installing multiple third-party extensions my shortcuts quickly become messed up.
I found that almost all direct ctrl-key sequences on my Linux-based system were bound, mostly by default VS Code commands, with the single exception of `ctrl-e`.
I started adding the individual commands I needed under that prefix in order to not remove default bindings, so if I wanted something being run with `ctrl-r` I just put it under `ctrl-e ctrl-r`.
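For reference, a user binding under that prefix looks like this in `keybindings.json` (the command is just an example):
```json
[
    {
        "key": "ctrl+e ctrl+r",
        "command": "workbench.action.tasks.runTask"
    }
]
```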
I propose to reserve a prefix to user shortcuts in a similar way as [Spacemacs handily do](https://develop.spacemacs.org/doc/DOCUMENTATION#reserved-prefix-command-for-user) thus providing user a "safe harbor" to organize one's shortcuts with freedom.
| feature-request | low | Minor |
541,497,141 | create-react-app | Cannot find module './logo.svg' using custom react-scripts and Typescript template | ### Is your proposal related to a problem?
Yes.
When I forked this repo and re-published version 3.3.0 of react-scripts (WITHOUT ANY CHANGES!) under my own organization on npmjs, I was unable to successfully build/start a project using these custom scripts.
Steps used:
1. Execute `npx create-react-app my-app --template typescript --scripts-version my-custom-scripts-on-npm`
2. run `yarn start`
3. See error:
```
D:/Development/custom-cra-tests/my-app/src/App.tsx
TypeScript error in D:/Development/custom-cra-tests/my-app/src/App.tsx(2,18):
Cannot find module './logo.svg'. TS2307
1 | import React from 'react';
> 2 | import logo from './logo.svg';
| ^
3 | import './App.css';
4 |
5 | const App: React.FC = () => {
```
### Describe the solution you'd like
After wandering around, I managed to fix this by doing the following:
1. Locate `react-app-env.d.ts` in the `src` folder of your CRA project
2. Rename `/// <reference types="react-scripts" />` to `/// <reference types="your-custom-published-react-scripts-name-here" />`
### Describe alternatives you've considered
/
### Additional context
A pr has already been proposed to fix this issue, but it has been closed
Reference issue: https://github.com/facebook/create-react-app/issues/5875
Reference PR: https://github.com/facebook/create-react-app/pull/5827
I am not sure why this change didn't come through, but it remains an issue to date. | issue: proposal | low | Critical |
541,507,732 | youtube-dl | Please add support for likemoi.telequebec.tv | ## Checklist
- [x] I'm reporting a new site support request
- [x] I've verified that I'm running youtube-dl version **2019.11.28**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that none of provided URLs violate any copyrights
- [x] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
- Single episode: http://likemoi.telequebec.tv/episodes/44-episode-44
- Single skit: http://likemoi.telequebec.tv/episodes/44-episode-44/sketchs/5-la-paille
- Latest episode: http://likemoi.telequebec.tv/episodes
## Description
Comedy show on Télé-Québec.
Probably very similar to https://github.com/ytdl-org/youtube-dl/pull/22482 or https://github.com/ytdl-org/youtube-dl/commit/05446d483d089d0bc7fa3037900dadc856d3e687.
| site-support-request,geo-restricted | low | Critical |
541,513,863 | rust | In the lexer, accept number suffixes that start with `e`. | Currently, the lexer will reject tokens like `1.0etest` expecting an exponent after the `e`. It could alternatively accept this token as `LitFloat { value: 1.0, suffix: "etest" }`. This would mean that proc macros could use suffixes that start with an `e` (although not the suffix `e` itself of course). | A-parser,T-lang,C-feature-request,A-proc-macros | low | Minor |
541,522,391 | rust | Don't emit "value assigned to ... is never read" for each instance? | Scrolling through warnings when compiling projects during the early development stages can be a bit of a chore, but there's not much the compiler can do about it as it doesn't know whether you just haven't gotten around to something or if you forgot about it. I typically add `#![allow(unused_code)]` and `#![allow(dead_code)]` to new projects and take them out when the codebase nears completion of v1, but it occurs to me that there's no real value in emitting repeated warnings for unused assignments of the same variable.
e.g.

There are different heuristics here depending on how important it is to generate all warnings in advance, including
* a bloom filter for the variable (by ast path, not name) with the corresponding buckets filled when the first warning is emitted
This means code like
```rust
{
let foo = bar;
let foo = bar;
}
```
continues to emit two warnings as they are two different variables being unread, *but* this wouldn't emit two warnings:
```rust
{
let mut foo = "bar";
foo = "baz";
}
```
as we are deduplicating by variable and not by unread write so only the first assignment will generate the warning, which is *probably* ok since addressing that error by adding either of
```rust
{
let mut foo = "bar";
if (foo == "bar") { ... }
foo = "baz";
}
```
or
```rust
{
let mut foo = "bar";
foo = "baz";
if (foo == "bar") { ... }
}
```
will go back to generating the warning for the other instance of unused write.
* actually tracking individual code paths for each variable and only warning for writes that are not provably exclusive:
```rust
{
let mut foo = "bar";
foo = "baz";
}
```
The above would continue to generate two warnings, but the following would generate only one:
```rust
let mut bar;
let foo = "hello";
match foo {
"bar" => foo = "bar",
"baz" => foo = "qux",
}
```
as no value was overwritten at any point and a single read would suffice to address both instances of unused writes.
Personally, I think the latter is overkill and might slow things down and increase memory usage in the compiler without much benefit. The former is both fast and easy (with no allocations and predetermined memory consumption based off the size of the bloom filter). | C-enhancement,A-diagnostics,T-compiler,D-verbose | low | Critical |
541,563,614 | go | cmd/asm: doesn't handle register offset correctly when GOARCH=arm | ### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.13.5 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/dd/Library/Caches/go-build"
GOENV="/Users/dd/Library/Application Support/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GONOPROXY=""
GONOSUMDB=""
GOOS="darwin"
GOPATH="/Users/dd/dev/source/golang/gopath"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/local/go"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/darwin_amd64"
GCCGO="gccgo"
AR="ar"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/n5/26_m50tn4_j2n3gmvt19300h0000gn/T/go-build651869539=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>
### What did you do?
main.go:
```
package main
import (
"fmt"
)
func b()
func print32(i uint32) {
fmt.Printf("0x%x\n", i)
}
func main() {
b()
}
```
asm_arm.s:
```
#include "textflag.h"
TEXT ·b(SB), NOSPLIT, $4
MOVW $1, R0
MOVW $4, R1
MOVW R0, (R13)(R1)
CALL ·print32(SB)
RET
```
then run,
```
objdump -d -df 'main.b' ./demo
```
and
```
go tool objdump -S -s 'main\.b' ./demo
```
./demo is the target executable file.
### What did you expect to see?
```
main.b:
b8e18: 08 e0 2d e5 str lr, [sp, #-8]!
b8e1c: 01 00 a0 e3 mov r0, #1
b8e20: 04 10 a0 e3 mov r1, #4
b8e24: 01 00 8d e7 str r0, [sp, r1]
b8e28: c5 ff ff eb bl #-236 <main.print32>
b8e2c: 08 f0 9d e4 ldr pc, [sp], #8687
```
and
```
TEXT main.b(SB)
0xb8e18 e52de008 MOVW.W R14, -0x8(R13)
0xb8e1c e3a00001 MOVW $1, R0
0xb8e20 e3a01004 MOVW $4, R1
0xb8e24 e78d0001 MOVW R0, (R13)(R1)
0xb8e28 ebffffc5 BL main.print32(SB)
0xb8e2c e49df008 RET #8
```
### What did you see instead?
```
main.b:
b8e18: 08 e0 2d e5 str lr, [sp, #-8]!
b8e1c: 01 00 a0 e3 mov r0, #1
b8e20: 04 10 a0 e3 mov r1, #4
b8e24: 00 00 8d e5 str r0, [sp]
b8e28: c5 ff ff eb bl #-236 <main.print32>
b8e2c: 08 f0 9d e4 ldr pc, [sp], #8
```
and
```
TEXT main.b(SB)
0xb8e18 e52de008 MOVW.W R14, -0x8(R13)
0xb8e1c e3a00001 MOVW $1, R0
0xb8e20 e3a01004 MOVW $4, R1
0xb8e24 e58d0000 MOVW R0, (R13)
0xb8e28 ebffffc5 BL main.print32(SB)
0xb8e2c e49df008 RET #8
```
I don't know why the register offset was dropped:
```
MOVW R0, (R13)(R1)
```
was assembled as
```
MOVW R0, (R13)
``` | help wanted,NeedsInvestigation,compiler/runtime | low | Critical |
541,599,878 | electron | Cascading of windows | Does Electron have any plan to provide a flag for cascading windows based on the OS? It would be helpful in many applications if such a flag were provided.
### Problem Description
I have a use case in my application to cascade windows based on the last opened window. We implemented it using the screen module provided in Electron, but the issue is that with multiple screens we need to consider many scenarios, so it would be helpful if Electron provided such a flag; it would save a lot of time for developers.
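A rough sketch (not our actual code) of the manual cascading that the screen module currently requires, and which the proposed flag could replace:
```js
const { BrowserWindow, screen } = require('electron');

let lastBounds = null;

// call after app.whenReady(), since the screen module needs the app to be ready
function openCascaded() {
  const area = screen.getPrimaryDisplay().workArea;
  const offset = 30;
  const x = lastBounds ? lastBounds.x + offset : area.x;
  const y = lastBounds ? lastBounds.y + offset : area.y;
  const win = new BrowserWindow({ x, y, width: 800, height: 600 });
  lastBounds = win.getBounds();
}
```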
| enhancement :sparkles: | low | Minor |
541,646,027 | youtube-dl | Please add support for play.nova.bg | ## Checklist
- [x] I'm reporting a new site support request
- [x] I've verified that I'm running youtube-dl version **2019.11.28**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that none of provided URLs violate any copyrights
- [x] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
- Single video: https://play.nova.bg/video/midnight-sun/midnight-sun-2019-12-18
## Description
A Bulgarian TV show and movie platform, open access for everyone with no registration needed. You may need a Bulgarian VPN server to access it if you are outside Bulgaria, but some free VPNs, like TunnelBear, provide access to such servers for free. | site-support-request | low | Critical |
541,662,945 | pytorch | Force libtorch to use CUDA context | I'm trying to integrate C++ libtorch to load a model into my application. My application does a lot of CUDA work before I load the model with libtorch. Thus, the CUDA context has already been created.
For some reason, even the CUDA context has already been created and the calling thread has already a valid context, when I call
torch::jit::script::Module module = torch::jit::load("test.pt");
module.to(at::kCUDA);
a new context is created by libtorch. The new context is not even pushed onto the stack of contexts; it overwrites the current context. I know because if, after the module.to(at::kCUDA), I call cuCtxPopCurrent, the current context is null.
This causes a lot of problems because I cannot interact with current allocated memory I have in my context.
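A minimal sketch of the check described above (it assumes the CUDA driver API and a context already created by the application on the calling thread):
```cpp
#include <cuda.h>
#include <torch/script.h>

void check_context_clobbered() {
    CUcontext before = nullptr;
    cuCtxGetCurrent(&before);            // the application's own context

    torch::jit::script::Module module = torch::jit::load("test.pt");
    module.to(at::kCUDA);                // libtorch replaces the current context

    CUcontext popped = nullptr;
    cuCtxPopCurrent(&popped);            // pops libtorch's context...
    CUcontext current = nullptr;
    cuCtxGetCurrent(&current);           // ...and the current context is now null, not `before`
}
```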
#jit #cuda #c++
cc @suo | oncall: jit,triaged | low | Minor |
541,741,505 | TypeScript | Generator helper should include [Symbol.toStringTag] | **TypeScript Version:** `3.8.0-dev.20191223` and `3.7.3`
**Search Terms:**
Generator, toStringTag, transformer, helper, tslib
**Code**
The following code will error in when using `compilerOptions.target = es5`, but works correctly when compiled with `target = es2015` or greater.
```ts
// index.ts
function* f() {}
if (f()[Symbol.toStringTag] !== 'Generator') {
throw new Error("Bad generator!")
}
console.log('👍 Works well!');
```
```sh
tsc --target es5 --lib dom,es5,es2015 ./index.ts && node index.js
# Error: Bad generator!
tsc --target es2015 ./index.ts && node index.js
# 👍Works well!
```
**Expected behavior:**
The property should be set as expected via the helpers.
Generators should include the `Symbol.toStringTag` property `"Generator"` according to [spec](https://www.ecma-international.org/ecma-262/6.0/#sec-generator.prototype-@@tostringtag).
Some libraries check for a generator by inspecting this property. For example, [`@wordpress/redux-routine`](https://github.com/WordPress/gutenberg/blob/3ca05a7d0ef966df724ebe744b04efd4330e7c20/packages/redux-routine/src/is-generator.js#L10-L15) checks for this property and does not correctly handle generators produced by the helper.
[`regenerator-runtime`](https://www.npmjs.com/package/regenerator-runtime) is a widely used generator helper which [does set this property](https://github.com/facebook/regenerator/blob/6e9e8d7747c2ab49927bdd9dd6261753181faec1/packages/regenerator-runtime/runtime.js#L386).
**Actual behavior:**
`Symbol.toStringTag` is not included on the generator.
**Playground Link:**
[Broken with target=es5](http://www.typescriptlang.org/play/?target=1&ts=3.8.0-dev.20191221#code/GYVwdgxgLglg9mAVAAmACgJTIN4F8BQMwya6GA2gMoCeAtgEZwA2AdFHJVAE4xgDmAFQCGfALrIAhAF4pyAOQBxAKZglXIey5ys2fMmRQAFlzgB3ZKvMBRLia5oARACEhAE2R8VajXC4SHGPgEEAgAzsxKLExwfGhyAOq+ANahyKZKTEwS2gDcQA)
[Working with target=es2015](http://www.typescriptlang.org/play/?target=2&ts=3.8.0-dev.20191221#code/GYVwdgxgLglg9mAVAAmACgJTIN4F8BQMwya6GA2gMoCeAtgEZwA2AdFHJVAE4xgDmAFQCGfALrIAhAF4pyAOQBxAKZglXIey5ys2fMmRQAFlzgB3ZKvMBRLia5oARACEhAE2R8VajXC4SHGPgEEAgAzsxKLExwfGhyAOq+ANahyKZKTEwS2gDcQA)
**Related Issues:**
#19006
| Needs Investigation | low | Critical |
541,741,872 | terminal | no { with German Keyboard via RDP (Remote Desktop APPx App only) | Version: 0.7.3451.0
local machine: Microsoft Windows [Version 10.0.18363.535]
RDP (PAW): Microsoft Windows [Version 10.0.18363.535]
WinPS 5.1 and PSCore 7 rc1
I am using the Remote Desktop and Remote Desktop Preview APPx apps to connect to my virtual PAW where I run Windows Terminal Preview (the version above and the previous one). Only in this configuration with a German keyboard, I get "^_" when I want to enter "{". "}" works; it also works with an English keyboard, with the classic RDP client, and of course on the local machine.
I wonder if this should be an issue for the Remote Desktop App, but since the issue ONLY happens together with Windows Terminal Preview, I'd rather have it here. | Help Wanted,Area-Input,Issue-Bug,Area-TerminalControl,Product-Terminal,Priority-2 | low | Minor |
541,774,651 | flutter | [webview_flutter] on IOS onPageFinished is never called (while being offline) | If my iPhone is in Airplane mode `onPageFinished` is never called.
For Android it works as expected (`onPageFinished` is called and "Web page not available" is shown)
Implementation:
```
WebView(
onWebViewCreated: (WebViewController controller) {
webViewController = controller;
},
initialUrl: "some_url",
javascriptMode: JavascriptMode.unrestricted,
onPageFinished: (String url) async {
String title = await webViewController.getTitle();
// do something with title
},
)
```
Doctor:
```
[✓] Flutter (Channel stable, v1.9.1+hotfix.6, on Mac OS X 10.14.5 18F132, locale en-LT)
[!] Android toolchain - develop for Android devices (Android SDK version 28.0.3)
! Some Android licenses not accepted. To resolve this, run: flutter doctor --android-licenses
[✓] Xcode - develop for iOS and macOS (Xcode 11.1)
[✓] Android Studio (version 3.5)
[!] IntelliJ IDEA Community Edition (version 2019.1.3)
✗ Flutter plugin not installed; this adds Flutter specific functionality.
✗ Dart plugin not installed; this adds Dart specific functionality.
[✓] Connected device (1 available)
```
| platform-ios,customer: crowd,p: webview,package,has reproducible steps,P2,found in release: 2.2,found in release: 2.5,team-ios,triaged-ios | low | Major |
541,778,888 | pytorch | Can't pin storage memory | ## 🐛 Bug
Can't pin storage memory.
Not sure if this is planned or common behavior, but if this is part of the interface I guess it shouldn't fail.
## To Reproduce
Steps to reproduce the behavior:
1. ```import torch; torch.randn(10).storage().pin_memory()```
AttributeError: module 'torch.cuda' has no attribute '_host_allocator'
## Expected behavior
Returns a copy of storage which is pinned.
## Environment
(built from source)
Collecting environment information...
PyTorch version: nightly
Is debug build: No
CUDA used to build PyTorch: 10.1
OS: Ubuntu 18.04.3 LTS
GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
CMake version: Could not collect
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: GeForce RTX 2080 Ti
GPU 1: GeForce RTX 2080 Ti
GPU 2: GeForce RTX 2080 Ti
GPU 3: GeForce RTX 2080 Ti
GPU 4: GeForce RTX 2080 Ti
GPU 5: GeForce RTX 2080 Ti
GPU 6: GeForce RTX 2080 Ti
Nvidia driver version: 440.44
cuDNN version: Could not collect
Versions of relevant libraries:
[pip] numpy==1.17.4
[pip] torch==nightly
[pip] torchvision==0.5.0a0+2d7c066
[conda] blas 1.0 mkl
[conda] mkl 2019.4 243
[conda] mkl-service 2.3.0 py37he904b0f_0
[conda] mkl_fft 1.0.15 py37ha843d7b_0
[conda] mkl_random 1.1.0 py37hd6b4f25_0
[conda] pytorch nightly py3.7_cuda10.1.243_cudnn7.6.5_1 saareliad
[conda] torchvision 0.5.0a0+2d7c066 pypi_0 pypi## Additional context
<!-- Add any other context about the problem here. -->
(What I'm really trying to do is find some hack to create pinned memory which is also shared. Currently I've only managed to do so with pin_memory() and then os.fork() (before CUDA is initialized); I'd like to do it on Storage too and not just the tensor itself.)
cc @ngimel | module: docs,module: cuda,triaged | low | Critical |
541,783,771 | pytorch | Bug in ForkingPickler for multiprocessing spawn context for shared storages on Linux | Hello!
This code
```python
import torch
import multiprocessing
def _mp_foo(*args, **kwargs):
pass
if __name__ == '__main__':
ctx = multiprocessing.get_context("spawn")
t = torch.randn(100)
t1 = t.narrow(0, 0, 10)
t2 = t.narrow(0, 0, 10)
t3 = t.narrow(0, 10, 20)
p = ctx.Process(
target=_mp_foo,
args=(t1, t2, t3),
daemon=True
)
p.start()
p.join()
```
Ends up with
```
Traceback (most recent call last):
File "spawn_dup.py", line 22, in <module>
p.start()
File "/home/alxmopo3ov/anaconda3/lib/python3.7/multiprocessing/process.py", line 112, in start
self._popen = self._Popen(self)
File "/home/alxmopo3ov/anaconda3/lib/python3.7/multiprocessing/context.py", line 284, in _Popen
return Popen(process_obj)
File "/home/alxmopo3ov/anaconda3/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/home/alxmopo3ov/anaconda3/lib/python3.7/multiprocessing/popen_fork.py", line 20, in __init__
self._launch(process_obj)
File "/home/alxmopo3ov/anaconda3/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 58, in _launch
cmd, self._fds)
File "/home/alxmopo3ov/anaconda3/lib/python3.7/multiprocessing/util.py", line 420, in spawnv_passfds
False, False, None)
ValueError: bad value(s) in fds_to_keep
```
I have not completely investigated the problem, but it looks like the issue is in ForkingPickler.
When spawning a new process, the forking pickler for tensors sends several file descriptors for shared tensors. Here https://github.com/google/python-subprocess32/blob/master/_posixsubprocess.c#L129 CPython checks that the file descriptor list does not have duplicates.
If we insert prints here https://github.com/python/cpython/blob/master/Lib/multiprocessing/popen_spawn_posix.py#L47
before and after reduction.dump (which is actually a call to ForkingPickler), for this script you will see it ends up passing three identical fds, which correspond to the same storage of the three passed tensors.
System:
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 14.04.6 LTS
Release: 14.04
Codename: trusty
I have not read the tensors reduction code yet, but I guess this issue should not be hard to fix | module: multiprocessing,triaged | low | Critical |
541,829,352 | pytorch | attribute and register_buffer are not the same on gpu | ## 🐛 Bug
According to the [jit documentation](https://pytorch.org/docs/stable/jit.html), "2. register_buffer - Values wrapped in register_buffer will work as they do on nn.Modules. This is equivalent to an attribute (see 4) of type Tensor."
However, when using attribute, an error is raised, see below for an example. This does not occur when using `register_buffer` in pytorch/audio#369.
```
======================================================================
ERROR: test_scriptmodule_MelSpectrogram (__main__.Tester)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test_transforms.py", line 109, in test_scriptmodule_MelSpectrogram
_test_script_module(transforms.MelSpectrogram, tensor)
File "test_transforms.py", line 41, in _test_script_module
py_out = py_method(tensor)
File "/private/home/vincentqb/anaconda3/envs/pytorch-from-source/lib/python3.7/site-packages/torc
h/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/private/home/vincentqb/anaconda3/envs/pytorch-from-source/lib/python3.7/site-packages/torc
haudio-0.4.0a0+774ebc7-py3.7-linux-x86_64.egg/torchaudio/transforms.py", line 215, in forward
specgram = self.spectrogram(waveform)
File "/private/home/vincentqb/anaconda3/envs/pytorch-from-source/lib/python3.7/site-packages/torc
h/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/private/home/vincentqb/anaconda3/envs/pytorch-from-source/lib/python3.7/site-packages/torc
haudio-0.4.0a0+774ebc7-py3.7-linux-x86_64.egg/torchaudio/transforms.py", line 70, in forward
self.win_length, self.power, self.normalized)
File "/private/home/vincentqb/anaconda3/envs/pytorch-from-source/lib/python3.7/site-packages/torc
haudio-0.4.0a0+774ebc7-py3.7-linux-x86_64.egg/torchaudio/functional.py", line 259, in spectrogram
waveform, n_fft, hop_length, win_length, window, True, "reflect", False, True
File "/private/home/vincentqb/anaconda3/envs/pytorch-from-source/lib/python3.7/site-packages/torc
haudio-0.4.0a0+774ebc7-py3.7-linux-x86_64.egg/torchaudio/functional.py", line 52, in _stft
onesided,
File "/private/home/vincentqb/anaconda3/envs/pytorch-from-source/lib/python3.7/site-packages/torc
h/functional.py", line 393, in stft
return torch._C._VariableFunctions.stft(input, n_fft, hop_length, win_length, window, normalize
d, onesided)
RuntimeError: expected device cuda:0 but got device cpu
```
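For context, a minimal eager-mode sketch of mine (not taken from torchaudio) of the behavior behind that device mismatch: a tensor registered as a buffer follows `.cuda()`, while a plain tensor attribute stays on the CPU.
```python
import torch
import torch.nn as nn

class WithBuffer(nn.Module):
    def __init__(self):
        super().__init__()
        self.register_buffer("window", torch.hann_window(400))

class WithAttribute(nn.Module):
    def __init__(self):
        super().__init__()
        self.window = torch.hann_window(400)  # plain attribute, not a buffer

if torch.cuda.is_available():
    print(WithBuffer().cuda().window.device)     # cuda:0 -- buffer moves with the module
    print(WithAttribute().cuda().window.device)  # cpu -- leads to the mismatch inside stft
```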
## To Reproduce
Steps to reproduce the behavior:
1. Compile torchaudio without pytorch/audio#369.
1. Run `test_transforms.py` and error occurs
1. Compile torchaudio with pytorch/audio#369.
1. Run `test_transforms.py` and error does not occur.
## Environment
Collecting environment information...
PyTorch version: 1.4.0a0+cd20ecb
Is debug build: No
CUDA used to build PyTorch: 10.0
OS: Ubuntu 18.04.1 LTS
GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
CMake version: version 3.14.0
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.0.130
GPU models and configuration:
GPU 0: Quadro GP100
GPU 1: Quadro GP100
Nvidia driver version: 410.79
cuDNN version: Could not collect
Versions of relevant libraries:
[pip3] numpy==1.17.4
[pip3] torch==1.3.1
[conda] blas 1.0 mkl
[conda] magma-cuda100 2.5.1 1 pytorch
[conda] mkl 2019.4 243
[conda] mkl-include 2019.4 243
[conda] mkl-service 2.3.0 py37he904b0f_0
[conda] mkl_fft 1.0.15 py37ha843d7b_0
[conda] mkl_random 1.1.0 py37hd6b4f25_0
[conda] torch 1.4.0a0+cd20ecb pypi_0 pypi
[conda] torchaudio 0.4.0a0+ed8e818 pypi_0 pypi
cc @suo | oncall: jit,triaged | low | Critical |
541,833,715 | TypeScript | Undetected unreachable code | These are obvious mistakes.
<!--
Please try to reproduce the issue with the latest published version. It may have already been fixed.
For npm: `typescript@next`
This is also the 'Nightly' version in the playground: http://www.typescriptlang.org/play/?ts=Nightly
-->
**TypeScript Version:** 3.7.x-dev.20191221
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:**
**Code**
```ts
declare const a: boolean;
(() => {
if (a) return;
if (a) {
return; // Unreachable
}
});
while (1) {
if (a) break;
if (a) {
break; // Unreachable
}
}
```
**Expected behavior:**
Detect unreachable code.
**Actual behavior:**
not detected.
**Playground Link:** http://www.typescriptlang.org/play/index.html?ts=3.8.0-dev.20191221#code/CYUwxgNghgTiAEYD2A7AzgF3lAXPARkkhCFCgNwCwAUABS0CU8AvAHzwDeN8P8AlgDN4tKEzgYArjArdeg4aM6zevcVIrwA9JvgBVFHChgAFlHwll8AL40rDKtQDuxviWEBGJl2or5IpviGANYOvkL+Sj4qvIGkIVo6+oYmZhZRPDbUVkA
**Related Issues:** <!-- Did you find other bugs that looked similar? -->
| Suggestion,In Discussion | low | Critical |
541,840,789 | youtube-dl | subtitles urls only listed when used with "writesubtitles" in extract_info() | ## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- Look through the README (http://yt-dl.org/readme) and FAQ (http://yt-dl.org/faq) for similar questions
- Search the bugtracker for similar questions: http://yt-dl.org/search-issues
- Finally, put x into all relevant boxes (like this [x])
-->
- [x] I'm asking a question
- [x] I've looked through the README and FAQ for similar questions
- [x] I've searched the bugtracker for similar questions including closed ones
## Question
I am trying to gather only subtitle information from YouTube videos and therefore use `skip_download` in my YoutubeDL instance. I only need the links to the VTT subtitles of the YouTube videos, but those only get attached to the result dict if I also use `writesubtitles`.
Is there a way to get the subtitle links without having to download the subtitles as files?
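For reference, this is roughly what I am doing (a sketch; the video URL is a placeholder). As far as I can tell, `requested_subtitles` is only populated when `writesubtitles` is set, even with `download=False`:
```python
import youtube_dl

# Sketch of the approach described above; VIDEO_ID is a placeholder.
opts = {
    "skip_download": True,
    "writesubtitles": True,
    "allsubtitles": True,
    "subtitlesformat": "vtt",
}
with youtube_dl.YoutubeDL(opts) as ydl:
    info = ydl.extract_info("https://www.youtube.com/watch?v=VIDEO_ID", download=False)
    for lang, sub in (info.get("requested_subtitles") or {}).items():
        print(lang, sub.get("url"))  # vtt links; nothing is written to disk
```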
| question | low | Critical |
541,846,946 | flutter | flutter assembly fails if the project path contains ! (exclamation mark) | flutter assemble (and hence - build and everything) fails if the project path includes a ! character (an exclamation mark). It raises error:
```
--output directory is required for assemble.
```
Discovered today (rather painfully), after an upgrade to 1.12.
## Steps to Reproduce
<!-- Please tell us exactly how to reproduce the problem you are running into. -->
1. Create or open a project from a path containing an exclamation mark.
2. Run 'flutter build apk'
## Logs
<!--
Include the full logs of the commands you are running between the lines
with the backticks below. If you are running any "flutter" commands,
please include the output of running them with "--verbose"; for example,
the output of running "flutter --verbose create foo".
-->
```
C:\Users\<cut>\Documents\test-new-version>flutter build apk --debug
Running Gradle task 'assembleDebug'...
Running Gradle task 'assembleDebug'... Done 48.6s
√ Built build\app\outputs\apk\debug\app-debug.apk.
C:\Users\<cut>\Documents\test-new-version>cd..
// here changing the dir name to include !
C:\Users\<cut>\Documents>cd "!test-new-version"
C:\Users\<cut>\Documents\!test-new-version>flutter build apk --debug
--output directory is required for assemble.
FAILURE: Build failed with an exception.
* Where:
Script 'C:\Software\flutter\packages\flutter_tools\gradle\flutter.gradle' line: 780
* What went wrong:
Execution failed for task ':app:compileFlutterBuildDebug'.
> Process 'command 'C:\Software\flutter\bin\flutter.bat'' finished with non-zero exit value 1
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.
* Get more help at https://help.gradle.org
BUILD FAILED in 11s
Running Gradle task 'assembleDebug'...
Running Gradle task 'assembleDebug'... Done 12.5s
Gradle task assembleDebug failed with exit code 1
C:\Users\<cut>\Documents\!test-new-version>
```
<!-- If possible, paste the output of running `flutter doctor -v` here. -->
```
Doctor summary (to see all details, run flutter doctor -v):
[√] Flutter (Channel stable, v1.12.13+hotfix.5, on Microsoft Windows [Version 10.0.18362.535], locale en-US)
[!] Android toolchain - develop for Android devices (Android SDK version 29.0.2)
! Some Android licenses not accepted. To resolve this, run: flutter doctor --android-licenses
[√] Android Studio (version 3.5)
[√] VS Code, 64-bit edition (version 1.41.0)
[!] Connected device
! No devices available
```
| c: regression,tool,platform-windows,P2,team-tool,triaged-tool | low | Critical |
541,857,538 | tauri | Tracking : accessibility (a11y) | Hi, pretty new here, but as a discussion where ongoing on the project Discord, doing a tracking issue for this seemed like a good idea (Disclaimer : I'm not an accessibility expert at all, let alone a Tauri one, so suggestions are highly encouraged)
Accessibility is very important: it allows people with vision problems and movement difficulties to use our applications.
Electron covers this by using Chromium, which, since it is a fully featured web browser, already has accessibility features built in.
Tauri, on the other hand, uses the various system webviews, so we must make sure to exploit all of their built-in accessibility features where possible.
While Tauri can't force everyone into building accessible applications, we must make sure that several points are covered by it:
- Make sure screen readers have access to the website content, i.e. that they can read it like a regular website.
- Make sure other inputs aren't overridden by Tauri, like arrows, tab and shift+tab, so keyboard navigation on websites remains possible. | type: documentation,type: feature request | medium | Major |
541,863,901 | godot | BUTTON_MASK_MIDDLE and BUTTON_WHEEL_UP both have a value of 4 | **Godot version:**
3.1.2-mono
**OS/device including version:**
Windows 10 LTSC Version 1809 and Manjaro Linux
**Issue description:**
I'm not sure if it's intentional but BUTTON_MASK_MIDDLE and BUTTON_WHEEL_UP in the ButtonList enum both have a value of 4.
BUTTON_MIDDLE has a value of 3 so I would assume BUTTON_MASK_MIDDLE would have a value of 3
**Steps to reproduce:**
Put this C# code anywhere in your project
```cs
foreach (ButtonList item in Enum.GetValues(typeof(ButtonList)))
{
GD.Print(Convert.ToInt32(item) + " = " + item);
// Doesn't print WheelUp since it has the same value as MaskMiddle
}
GD.Print(ButtonList.WheelUp + " has a value of " + Convert.ToInt32(ButtonList.WheelUp));
// Should print "WheelUp has a value of 4"
// Prints "MaskMiddle has a value of 4"
```
**Minimal reproduction project:**
[Download from Google Drive](https://drive.google.com/file/d/1mwlu6gneU_b3qYeFnVFQy61HNGmmY3gu/view?usp=sharing)
| discussion,topic:input | low | Minor |
541,907,426 | pytorch | Option to apply weights to gradients when using DistributedDataParallel | ## 🚀 Feature
I would like to propose that the API expose an option to weight the gradients during backpropagation when using DistributedDataParallel.
## Motivation
Let's say I have 4 GPUs and I am training a semantic segmentation network on a dataset with an ignore class. As I understand it, in the DataParallel setting, the outputs are aggregated on GPU0, the loss computed, and then the gradient is backpropagated through each GPU's model. In the DistributedDataParallel case, L0, L1, L2, L3 are each computed for each GPU's share of the batch, the losses are backpropagated through their respective GPU's model, and the gradients are averaged along the way.
Using DataParallel, the presence of an ignore class makes no difference. Even if one GPU’s mini-batch has a lopsided amount of ignore pixels, the total loss is computed as the weighted average before any backprop occurs. However, what happens when you have a lopsided distribution of ignore pixels on one GPU using DistributedDataParallel? There does not seem to be any mechanism for weighting the average of the gradients; it seems that the averaging is unweighted. Yet in this case, L0, L1, L2, and L3 ought to have their contributions weighted by the ratio of valid pixels when averaging gradients during backpropagation.
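To illustrate what I mean, here is a rough workaround sketch of mine (not an existing API): rescale each replica's loss before calling backward so that DDP's unweighted gradient average becomes a valid-pixel-weighted average.
```python
import torch
import torch.distributed as dist

def weighted_backward(loss, num_valid):
    """Hypothetical helper: `loss` is this replica's mean loss over its valid pixels,
    `num_valid` its number of non-ignore pixels (process group already initialized)."""
    world_size = dist.get_world_size()
    valid = torch.tensor([float(num_valid)], device=loss.device)
    total_valid = valid.clone()
    dist.all_reduce(total_valid, op=dist.ReduceOp.SUM)  # sum of valid counts across replicas
    # DDP averages gradients as (1/N) * sum_i g_i; scaling L_i by N * v_i / sum_j v_j
    # turns that into sum_i (v_i / sum_j v_j) * g_i, i.e. the weighted average.
    scale = world_size * valid / total_valid
    (loss * scale).backward()
```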
## Pitch
I would like to see an optional field to `backward()` to provide weights for each backpropagation channel in DistributedDataParallel.
## Alternatives
Unclear what the alternatives are.
cc @ezyang @SsnL @albanD @zou3519 @gqchen @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @aazzolini @xush6528 | oncall: distributed,feature,module: autograd,triaged,module: data parallel | low | Major |
541,920,611 | flutter | Add ability to consume XML fonts <font-family> on Android | [XML fonts](https://developer.android.com/guide/topics/ui/look-and-feel/fonts-in-xml) allow Android apps not to bundle the font assets. Internally, apps use these to pull fonts out of play services.
Would it be possible to consume these from Flutter as well? This could significantly reduce the assets that need to be bundled with Flutter apps on Android. | c: new feature,platform-android,engine,a: typography,P2,team-android,triaged-android | low | Minor |
541,975,224 | ant-design | TreeSelect - TreeNode `selectable` false is same style as other one | - [x] I have searched the [issues](https://github.com/ant-design/ant-design/issues) of this repository and believe that this is not a duplicate.
### What problem does this feature solve?
Design should update:
https://codesandbox.io/s/r5ff9
### What does the proposed API look like?
N/A
ref: https://github.com/ant-design/ant-design/issues/20399#issuecomment-568638878
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | Inactive | low | Minor |
542,115,623 | rust | illegal instruction on x86_64-linux-android since 1.40.0 | It looks like when running the tests on x86_64-linux-android through qemu (via cross) they seem to fail due to an illegal instruction since the Rust 1.40 release. This happens consistently.
https://github.com/LiveSplit/livesplit-core/runs/362238400#step:8:141 | I-crash,O-android,O-x86_64,P-medium,T-compiler,regression-from-stable-to-stable,C-bug | low | Minor |
542,120,950 | pytorch | pytorch forward hangs in multiprocess environment | ## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
This a very peculiar bug I am encountering. I am using MTCNN face detector from https://github.com/TreB1eN/InsightFace_Pytorch/ on pytorch 1.3.1+cpu
The face detection works fine when API is called from __main__ thread. But if I forked a new process and initialized MTCNN as well as face detector model (MobileFaceNet), it hangs in first net forward call (in MTCNN). If load_state_dict for MobileFaceNet is commented out, again it works fine.
## To Reproduce
Steps to reproduce the behavior:
1. Clone https://github.com/TreB1eN/InsightFace_Pytorch/
2. Try following code
```
from mtcnn import MTCNN
from model import MobileFaceNet
import torch
from PIL import Image
from torch.multiprocessing import Process
import cv2
import time
# mtcnn = MTCNN()
myfacenet = MobileFaceNet(512)
facenet_dict = torch.load('./model_mobilefacenet.pth', map_location = torch.device('cpu'))
out = myfacenet.load_state_dict(facenet_dict)
def test():
mtcnn = MTCNN()
img = cv2.imread('29.png')
img2 = Image.fromarray(img)
print(mtcnn.align_multi(img2, 1, 100))
if __name__ == '__main__':
p = Process(target=test)
p.start()
p.join()
```
3. If test function is called in __main__ thread, this code works fine.
4. If `out = myfacenet.load_state_dict(facenet_dict)` line is removed, again code works fine
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
## Environment
pkg | version
--- | ---
Python | 3.7.4
torch | 1.3.1+cpu (pip)
Pillow | 5.4.1
opencv-python | 4.1.1.26
OS | ubuntu 18.04, linux 4.15.0-72-generic
| module: multiprocessing,triaged | low | Critical |
542,127,484 | youtube-dl | Support request for app.myoutdoortv.com | <!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2019.11.28. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that site you are requesting is not dedicated to copyright infringement, see https://yt-dl.org/copyright-infringement. youtube-dl does not support such sites. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
- Search the bugtracker for similar site support requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Finally, put x into all relevant boxes (like this [x])
-->
- [x] I'm reporting a new site support request
- [x] I've verified that I'm running youtube-dl version **2019.11.28**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that none of provided URLs violate any copyrights
- [x] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
<!--
Provide all kinds of example URLs support for which should be included. Replace following example URLs by yours.
-->
- Single video (Require paid account) : https://app.myoutdoortv.com/videos/1568998234677-ott-dreambucks-2018-02-romania
- Single video (Free) : https://app.myoutdoortv.com/fr/videos/1572370082204-ott-hvms-9022-redstags-01-brianbassesdreamcometrue
## Description
<!--
Provide any additional information.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->
Some videos are free and do not require any account; others need an account and a subscription (a 7-day free trial exists).
Regards,
| site-support-request | low | Critical |
542,133,411 | godot | Viewport sliders don't properly update size when dragged manually | **Godot version:**
3.2 beta4
**Issue description:**
When you pan viewport outside view, the slider will shrink, but it won't expand again when dragged manually, which later results in the grabber "teleporting" on zoom.

Here's how it works in GIMP:

| bug,topic:editor,confirmed | low | Minor |
542,168,293 | pytorch | ATen not compiled with MKL support | I want to run torchaudio with this code
```
import torch
import torchaudio
import matplotlib.pyplot as plt
filename = "./steam-train-whistle-daniel_simon-converted-from-mp3.wav"
waveform, sample_rate = torchaudio.load(filename)
print("Shape of waveform: {}".format(waveform.size()))
print("Sample rate of waveform: {}".format(sample_rate))
plt.figure()
plt.plot(waveform.t().numpy())
specgram = torchaudio.transforms.Spectrogram()(waveform)
print("Shape of spectrogram: {}".format(specgram.size()))
```
but I get this error
```
File "main.py", line 15, in <module>
specgram = torchaudio.transforms.Spectrogram()(waveform)
File "/home/mnaderan/.local/lib/python2.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "build/bdist.linux-x86_64/egg/torchaudio/transforms.py", line 70, in forward
File "build/bdist.linux-x86_64/egg/torchaudio/functional.py", line 259, in spectrogram
File "build/bdist.linux-x86_64/egg/torchaudio/functional.py", line 52, in _stft
File "<__torch_function__ internals>", line 6, in stft
File "/home/mnaderan/.local/lib/python2.7/site-packages/torch/_overrides.py", line 140, in _implement_torch_function
return implementation(*args, **kwargs)
File "/home/mnaderan/.local/lib/python2.7/site-packages/torch/functional.py", line 427, in stft
return torch._C._VariableFunctions.stft(input, n_fft, hop_length, win_length, window, normalized, onesided)
RuntimeError: fft: ATen not compiled with MKL support
```
The error message is not meaningful to me, and I don't know what is missing. MKL? ATen?
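For reference, I believe one can check whether the installed build actually has MKL support (which, as far as I know, torch.stft needs on CPU) with:
```python
import torch

print(torch.backends.mkl.is_available())  # whether this build has MKL
print(torch.__config__.show())            # full build configuration, including BLAS/MKL flags
```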
Here is the env information for pytorch
```
$ python collect_env.py
Collecting environment information...
PyTorch version: 1.4.0a0+2488231
Is debug build: No
CUDA used to build PyTorch: 10.1
OS: Ubuntu 18.04.1 LTS
GCC version: (Ubuntu 7.3.0-16ubuntu3) 7.3.0
CMake version: version 3.10.2
Python version: 2.7
Is CUDA available: Yes
CUDA runtime version: 10.1.168
GPU models and configuration:
GPU 0: GeForce RTX 2080 Ti
GPU 1: TITAN V
Nvidia driver version: 418.56
cuDNN version: Could not collect
Versions of relevant libraries:
[pip] numpy==1.16.5
[pip] torch==1.4.0a0+2488231
[pip] torchaudio==0.4.0a0+774ebc7
[pip] torchtext==0.4.0
[pip] torchvision==0.5.0a0+5c03d59
[conda] Could not collect
```
The variables set according to CMakeCache.txt are
```
# This is the CMakeCache file.
# For build in directory: /mnt/local/mnaderan/pt/pytorch/build
# It was generated by CMake: /usr/bin/cmake
# You can edit this file to change values found and used by cmake.
# If you do not want to change any of the values, simply exit the editor.
# If you do want to change a value, simply edit, save, and exit the editor.
# The syntax for the file is as follows:
# KEY:TYPE=VALUE
# KEY is the name of a variable in the cache.
# TYPE is a hint to GUIs for the type of VALUE, DO NOT EDIT TYPE!.
# VALUE is the current value for the KEY.
########################
# EXTERNAL cache entries
########################
ARCH_OPT_FLAGS:STRING=-msse4
//Path to a program.
ARMIE_COMMAND:FILEPATH=ARMIE_COMMAND-NOTFOUND
//ASIMD/NEON available on host
ASIMD_FOUND:BOOL=false
//Build ARM backends
ASMJIT_BUILD_ARM:BOOL=FALSE
//Build X86 backends (X86 and X86_64)
ASMJIT_BUILD_X86:BOOL=FALSE
//Location of 'asmjit'
ASMJIT_DIR:PATH=/mnt/local/mnaderan/pt/pytorch/third_party/fbgemm/third_party/asmjit
//Embed 'asmjit' library (no targets)
ASMJIT_EMBED:BOOL=FALSE
//Build with C/C++ sanitizers enabled
ASMJIT_SANITIZE:BOOL=
//asmjit source directory from submodules
ASMJIT_SRC_DIR:STRING=/mnt/local/mnaderan/pt/pytorch/third_party/fbgemm/third_party/asmjit
//Build 'asmjit' library as static
ASMJIT_STATIC:BOOL=ON
//Build 'asmjit' test applications
ASMJIT_TEST:BOOL=FALSE
//ATen install binary subdirectory
ATEN_INSTALL_BIN_SUBDIR:PATH=bin
//ATen install include subdirectory
ATEN_INSTALL_INCLUDE_SUBDIR:PATH=include
//ATen install library subdirectory
ATEN_INSTALL_LIB_SUBDIR:PATH=lib
//Do not build ATen test binaries
ATEN_NO_TEST:BOOL=OFF
//ATen parallel backend
ATEN_THREADING:STRING=OMP
//AT install binary subdirectory
AT_INSTALL_BIN_DIR:PATH=bin
//AT install include subdirectory
AT_INSTALL_INCLUDE_DIR:PATH=include
//AT install library subdirectory
AT_INSTALL_LIB_DIR:PATH=lib
//AT install include subdirectory
AT_INSTALL_SHARE_DIR:PATH=share
//enables rdpms counter to report precise cpu frequency in benchdnn.
//\n CAUTION: may not work on all cpus (hence disabled by default)
BENCHDNN_USE_RDPMC:BOOL=OFF
//Build a 32 bit version of the library.
BENCHMARK_BUILD_32_BITS:BOOL=OFF
//Flags used by the C++ compiler during coverage builds.
BENCHMARK_CXX_FLAGS_COVERAGE:STRING=-g
//Allow the downloading and in-tree building of unmet dependencies
BENCHMARK_DOWNLOAD_DEPENDENCIES:BOOL=OFF
//Enable building and running the assembly tests
BENCHMARK_ENABLE_ASSEMBLY_TESTS:BOOL=OFF
//Enable the use of exceptions in the benchmark library.
BENCHMARK_ENABLE_EXCEPTIONS:BOOL=ON
//Enable building the unit tests which depend on gtest
BENCHMARK_ENABLE_GTEST_TESTS:BOOL=ON
//Enable installation of benchmark. (Projects embedding benchmark
// may want to turn this OFF.)
BENCHMARK_ENABLE_INSTALL:BOOL=OFF
//Enable link time optimisation of the benchmark library.
BENCHMARK_ENABLE_LTO:BOOL=OFF
//Enable testing of the benchmark library.
BENCHMARK_ENABLE_TESTING:BOOL=OFF
//Flags used for linking binaries during coverage builds.
BENCHMARK_EXE_LINKER_FLAGS_COVERAGE:STRING=
//Flags used by the shared libraries linker during coverage builds.
BENCHMARK_SHARED_LINKER_FLAGS_COVERAGE:STRING=
//Build and test using libc++ as the standard library.
BENCHMARK_USE_LIBCXX:BOOL=OFF
//Selected BLAS library
BLAS:STRING=MKL
//Path to a library.
BLAS_Accelerate_LIBRARY:FILEPATH=BLAS_Accelerate_LIBRARY-NOTFOUND
//Marks whether BLAS was manually set by user or auto-detected
BLAS_SET_BY_USER:STRING=FALSE
//Path to a library.
BLAS_acml_LIBRARY:FILEPATH=BLAS_acml_LIBRARY-NOTFOUND
//Path to a library.
BLAS_blas_LIBRARY:FILEPATH=BLAS_blas_LIBRARY-NOTFOUND
//Path to a library.
BLAS_blis_LIBRARY:FILEPATH=BLAS_blis_LIBRARY-NOTFOUND
//Path to a library.
BLAS_goto2_LIBRARY:FILEPATH=BLAS_goto2_LIBRARY-NOTFOUND
//Path to a library.
BLAS_openblas_LIBRARY:FILEPATH=BLAS_openblas_LIBRARY-NOTFOUND
//Path to a library.
BLAS_ptf77blas_LIBRARY:FILEPATH=BLAS_ptf77blas_LIBRARY-NOTFOUND
//Path to a library.
BLAS_vecLib_LIBRARY:FILEPATH=BLAS_vecLib_LIBRARY-NOTFOUND
//Tell cmake if Caffe2 is being built alongside torch libs
BUILDING_WITH_TORCH_LIBS:BOOL=ON
//Build benchmark binary (requires hiredis)
BUILD_BENCHMARK:BOOL=OFF
//Build C++ binaries
BUILD_BINARY:BOOL=OFF
//Build libcaffe2 for mobile (deprecating)
BUILD_CAFFE2_MOBILE:BOOL=ON
//Build Caffe2 operators
BUILD_CAFFE2_OPS:BOOL=ON
//Build and use Caffe2's own protobuf under third_party
BUILD_CUSTOM_PROTOBUF:BOOL=ON
//libsleefdft will be built.
BUILD_DFT:BOOL=OFF
//Build Caffe2 documentation
BUILD_DOCS:BOOL=OFF
//Build examples
BUILD_EXAMPLES:BOOL=OFF
//Builds the googlemock subproject
BUILD_GMOCK:BOOL=ON
//libsleefgnuabi will be built.
BUILD_GNUABI_LIBS:BOOL=OFF
//Build JNI bindings
BUILD_JNI:BOOL=OFF
//libsleef will be built.
BUILD_LIBM:BOOL=ON
//Experimental: compile with namedtensor support
BUILD_NAMEDTENSOR:BOOL=OFF
//Build Python binaries
BUILD_ONNX_PYTHON:BOOL=OFF
//Build Python binaries
BUILD_PYTHON:BOOL=True
//libsleefquad will be built.
BUILD_QUAD:BOOL=OFF
//Build shared libs
BUILD_SHARED_LIBS:BOOL=ON
//Build tests
BUILD_TEST:BOOL=True
//Build tests
BUILD_TESTS:BOOL=
//If set, build protobuf inside libcaffe2.so.
CAFFE2_LINK_LOCAL_PROTOBUF:BOOL=ON
//Statically link CUDA libraries
CAFFE2_STATIC_LINK_CUDA:BOOL=OFF
//A whitelist file of files that one should build.
CAFFE2_WHITELIST:STRING=
//already checked for OpenMP
CHECKED_OPENMP:BOOL=ON
//Path to a program.
CLANG_EXE_PATH:FILEPATH=CLANG_EXE_PATH-NOTFOUND
//Build clog tests
CLOG_BUILD_TESTS:BOOL=OFF
//Log errors, warnings, and information to stdout/stderr
CLOG_LOG_TO_STDIO:BOOL=ON
CLOG_RUNTIME_TYPE:STRING=
//Path to a program.
CMAKE_AR:FILEPATH=/usr/bin/ar
//The ASM compiler
CMAKE_ASM_COMPILER:FILEPATH=/usr/bin/cc
//A wrapper around 'ar' adding the appropriate '--plugin' option
// for the GCC compiler
CMAKE_ASM_COMPILER_AR:FILEPATH=/usr/bin/gcc-ar
//A wrapper around 'ranlib' adding the appropriate '--plugin' option
// for the GCC compiler
CMAKE_ASM_COMPILER_RANLIB:FILEPATH=/usr/bin/gcc-ranlib
//Flags used by the assembler during all build types.
CMAKE_ASM_FLAGS:STRING=
//Flags used by the assembler during debug builds.
CMAKE_ASM_FLAGS_DEBUG:STRING=-g
//Flags used by the assembler during release minsize builds.
CMAKE_ASM_FLAGS_MINSIZEREL:STRING=-Os -DNDEBUG
//Flags used by the assembler during release builds.
CMAKE_ASM_FLAGS_RELEASE:STRING=-O3 -DNDEBUG
//Flags used by the assembler during Release with Debug Info builds.
CMAKE_ASM_FLAGS_RELWITHDEBINFO:STRING=-O2 -g -DNDEBUG
//Choose the type of build, options are: None(CMAKE_CXX_FLAGS or
// CMAKE_C_FLAGS used) Debug Release RelWithDebInfo MinSizeRel.
CMAKE_BUILD_TYPE:STRING=Release
//Enable/Disable color output during build.
CMAKE_COLOR_MAKEFILE:BOOL=ON
//CXX compiler
CMAKE_CXX_COMPILER:FILEPATH=/usr/bin/c++
//A wrapper around 'ar' adding the appropriate '--plugin' option
// for the GCC compiler
CMAKE_CXX_COMPILER_AR:FILEPATH=/usr/bin/gcc-ar-7
//A wrapper around 'ranlib' adding the appropriate '--plugin' option
// for the GCC compiler
CMAKE_CXX_COMPILER_RANLIB:FILEPATH=/usr/bin/gcc-ranlib-7
//Flags used by the compiler during all build types.
CMAKE_CXX_FLAGS:STRING=
//Flags used by the compiler during debug builds.
CMAKE_CXX_FLAGS_DEBUG:STRING=-g
//Flags used by the compiler during release builds for minimum
// size.
CMAKE_CXX_FLAGS_MINSIZEREL:STRING=-Os -DNDEBUG
//Flags used by the compiler during release builds.
CMAKE_CXX_FLAGS_RELEASE:STRING=-O3 -DNDEBUG
//Flags used by the compiler during release builds with debug info.
CMAKE_CXX_FLAGS_RELWITHDEBINFO:STRING=-O2 -g -DNDEBUG
//C compiler
CMAKE_C_COMPILER:FILEPATH=/usr/bin/cc
//A wrapper around 'ar' adding the appropriate '--plugin' option
// for the GCC compiler
CMAKE_C_COMPILER_AR:FILEPATH=/usr/bin/gcc-ar-7
//A wrapper around 'ranlib' adding the appropriate '--plugin' option
// for the GCC compiler
CMAKE_C_COMPILER_RANLIB:FILEPATH=/usr/bin/gcc-ranlib-7
//Flags used by the compiler during all build types.
CMAKE_C_FLAGS:STRING=
//Flags used by the compiler during debug builds.
CMAKE_C_FLAGS_DEBUG:STRING=-g
//Flags used by the compiler during release builds for minimum
// size.
CMAKE_C_FLAGS_MINSIZEREL:STRING=-Os -DNDEBUG
//Flags used by the compiler during release builds.
CMAKE_C_FLAGS_RELEASE:STRING=-O3 -DNDEBUG
//Flags used by the compiler during release builds with debug info.
CMAKE_C_FLAGS_RELWITHDEBINFO:STRING=-O2 -g -DNDEBUG
//Flags used by the linker.
CMAKE_EXE_LINKER_FLAGS:STRING=
//Flags used by the linker during debug builds.
CMAKE_EXE_LINKER_FLAGS_DEBUG:STRING=
//Flags used by the linker during release minsize builds.
CMAKE_EXE_LINKER_FLAGS_MINSIZEREL:STRING=
//Flags used by the linker during release builds.
CMAKE_EXE_LINKER_FLAGS_RELEASE:STRING=
//Flags used by the linker during Release with Debug Info builds.
CMAKE_EXE_LINKER_FLAGS_RELWITHDEBINFO:STRING=
//Enable/Disable output of compile commands during generation.
CMAKE_EXPORT_COMPILE_COMMANDS:BOOL=OFF
//User executables (bin)
CMAKE_INSTALL_BINDIR:PATH=bin
//Directory relative to CMAKE_INSTALL to install the cmake configuration
// files
CMAKE_INSTALL_CMAKEDIR:STRING=lib/cmake/protobuf
//Read-only architecture-independent data (DATAROOTDIR)
CMAKE_INSTALL_DATADIR:PATH=
//Read-only architecture-independent data root (share)
CMAKE_INSTALL_DATAROOTDIR:PATH=share
//Documentation root (DATAROOTDIR/doc/PROJECT_NAME)
CMAKE_INSTALL_DOCDIR:PATH=
//C header files (include)
CMAKE_INSTALL_INCLUDEDIR:PATH=include
//Info documentation (DATAROOTDIR/info)
CMAKE_INSTALL_INFODIR:PATH=
//Object code libraries (lib)
CMAKE_INSTALL_LIBDIR:PATH=lib
//Program executables (libexec)
CMAKE_INSTALL_LIBEXECDIR:PATH=libexec
//Locale-dependent data (DATAROOTDIR/locale)
CMAKE_INSTALL_LOCALEDIR:PATH=
//Modifiable single-machine data (var)
CMAKE_INSTALL_LOCALSTATEDIR:PATH=var
//Man documentation (DATAROOTDIR/man)
CMAKE_INSTALL_MANDIR:PATH=
//C header files for non-gcc (/usr/include)
CMAKE_INSTALL_OLDINCLUDEDIR:PATH=/usr/include
//Install path prefix, prepended onto install directories.
CMAKE_INSTALL_PREFIX:PATH=/mnt/local/mnaderan/pt/pytorch/torch
//Run-time variable data (LOCALSTATEDIR/run)
CMAKE_INSTALL_RUNSTATEDIR:PATH=
//System admin executables (sbin)
CMAKE_INSTALL_SBINDIR:PATH=sbin
//Modifiable architecture-independent data (com)
CMAKE_INSTALL_SHAREDSTATEDIR:PATH=com
//Read-only single-machine data (etc)
CMAKE_INSTALL_SYSCONFDIR:PATH=etc
//Path to a program.
CMAKE_LINKER:FILEPATH=/usr/bin/ld
//Path to a program.
CMAKE_MAKE_PROGRAM:FILEPATH=/usr/bin/make
//Flags used by the linker during the creation of modules.
CMAKE_MODULE_LINKER_FLAGS:STRING=
//Flags used by the linker during debug builds.
CMAKE_MODULE_LINKER_FLAGS_DEBUG:STRING=
//Flags used by the linker during release minsize builds.
CMAKE_MODULE_LINKER_FLAGS_MINSIZEREL:STRING=
//Flags used by the linker during release builds.
CMAKE_MODULE_LINKER_FLAGS_RELEASE:STRING=
//Flags used by the linker during Release with Debug Info builds.
CMAKE_MODULE_LINKER_FLAGS_RELWITHDEBINFO:STRING=
//Path to a program.
CMAKE_NM:FILEPATH=/usr/bin/nm
//Path to a program.
CMAKE_OBJCOPY:FILEPATH=/usr/bin/objcopy
//Path to a program.
CMAKE_OBJDUMP:FILEPATH=/usr/bin/objdump
//No help, variable specified on the command line.
CMAKE_PREFIX_PATH:UNINITIALIZED=/usr/lib/python3/dist-packages
//Value Computed by CMake
CMAKE_PROJECT_NAME:STATIC=Caffe2
//Path to a program.
CMAKE_RANLIB:FILEPATH=/usr/bin/ranlib
//Flags used by the linker during the creation of dll's.
CMAKE_SHARED_LINKER_FLAGS:STRING=
//Flags used by the linker during debug builds.
CMAKE_SHARED_LINKER_FLAGS_DEBUG:STRING=
//Flags used by the linker during release minsize builds.
CMAKE_SHARED_LINKER_FLAGS_MINSIZEREL:STRING=
//Flags used by the linker during release builds.
CMAKE_SHARED_LINKER_FLAGS_RELEASE:STRING=
//Flags used by the linker during Release with Debug Info builds.
CMAKE_SHARED_LINKER_FLAGS_RELWITHDEBINFO:STRING=
//If set, runtime paths are not added when installing shared libraries,
// but are added when building.
CMAKE_SKIP_INSTALL_RPATH:BOOL=NO
//If set, runtime paths are not added when using shared libraries.
CMAKE_SKIP_RPATH:BOOL=NO
//Flags used by the linker during the creation of static libraries.
CMAKE_STATIC_LINKER_FLAGS:STRING=
//Flags used by the linker during debug builds.
CMAKE_STATIC_LINKER_FLAGS_DEBUG:STRING=
//Flags used by the linker during release minsize builds.
CMAKE_STATIC_LINKER_FLAGS_MINSIZEREL:STRING=
//Flags used by the linker during release builds.
CMAKE_STATIC_LINKER_FLAGS_RELEASE:STRING=
//Flags used by the linker during Release with Debug Info builds.
CMAKE_STATIC_LINKER_FLAGS_RELWITHDEBINFO:STRING=
//Path to a program.
CMAKE_STRIP:FILEPATH=/usr/bin/strip
//If this value is on, makefiles will be generated without the
// .SILENT directive, and all commands will be echoed to the console
// during the make. This is useful for debugging only. With Visual
// Studio IDE projects all commands are done without /nologo.
CMAKE_VERBOSE_MAKEFILE:BOOL=FALSE
//Colorize output during compilation
COLORIZE_OUTPUT:BOOL=ON
//Confu-style dependencies binary directory
CONFU_DEPENDENCIES_BINARY_DIR:PATH=/mnt/local/mnaderan/pt/pytorch/build/confu-deps
//Confu-style dependencies source directory
CONFU_DEPENDENCIES_SOURCE_DIR:PATH=/mnt/local/mnaderan/pt/pytorch/build/confu-srcs
//OMAP3 available on host
CORTEXA8_FOUND:BOOL=false
//OMAP4 available on host
CORTEXA9_FOUND:BOOL=false
//Build cpuinfo micro-benchmarks
CPUINFO_BUILD_BENCHMARKS:BOOL=OFF
//Build cpuinfo mock tests
CPUINFO_BUILD_MOCK_TESTS:BOOL=OFF
//Build command-line tools
CPUINFO_BUILD_TOOLS:BOOL=OFF
//Build cpuinfo unit tests
CPUINFO_BUILD_UNIT_TESTS:BOOL=OFF
CPUINFO_LIBRARY_TYPE:STRING=static
CPUINFO_LOG_LEVEL:STRING=error
//Type of runtime library (shared, static, or default) to use
CPUINFO_RUNTIME_TYPE:STRING=default
//cpuinfo source directory
CPUINFO_SOURCE_DIR:STRING=/mnt/local/mnaderan/pt/pytorch/third_party/cpuinfo
//The directory where CUB includes reside
CUB_INCLUDE_DIR:PATH=CUB_INCLUDE_DIR-NOTFOUND
//Compile device code in 64 bit mode
CUDA_64_BIT_DEVICE_CODE:BOOL=ON
//Attach the build rule to the CUDA source file. Enable only when
// the CUDA source file is added to at most one target.
CUDA_ATTACH_VS_BUILD_RULE_TO_CUDA_FILE:BOOL=ON
//Generate and parse .cubin files in Device mode.
CUDA_BUILD_CUBIN:BOOL=OFF
//Build in Emulation mode
CUDA_BUILD_EMULATION:BOOL=OFF
//"cudart" library
CUDA_CUDART_LIBRARY:FILEPATH=/home/mnaderan/cuda-10.1.168/lib64/libcudart.so
//Path to a library.
CUDA_CUDA_LIB:FILEPATH=/home/mnaderan/cuda-10.1.168/lib64/stubs/libcuda.so
//"cuda" library (older versions only).
CUDA_CUDA_LIBRARY:FILEPATH=/usr/lib/x86_64-linux-gnu/libcuda.so
//Directory to put all the output files. If blank it will default
// to the CMAKE_CURRENT_BINARY_DIR
CUDA_GENERATED_OUTPUT_DIR:PATH=
//Generated file extension
CUDA_HOST_COMPILATION_CPP:BOOL=ON
//Host side compiler used by NVCC
CUDA_HOST_COMPILER:FILEPATH=/usr/bin/cc
//Path to a program.
CUDA_NVCC_EXECUTABLE:FILEPATH=/home/mnaderan/cuda-10.1.168/bin/nvcc
//Semi-colon delimit multiple arguments. during all build types.
CUDA_NVCC_FLAGS:STRING=
//Semi-colon delimit multiple arguments. during DEBUG builds.
CUDA_NVCC_FLAGS_DEBUG:STRING=
//Semi-colon delimit multiple arguments. during MINSIZEREL builds.
CUDA_NVCC_FLAGS_MINSIZEREL:STRING=
//Semi-colon delimit multiple arguments. during RELEASE builds.
CUDA_NVCC_FLAGS_RELEASE:STRING=
//Semi-colon delimit multiple arguments. during RELWITHDEBINFO
// builds.
CUDA_NVCC_FLAGS_RELWITHDEBINFO:STRING=
//Path to a library.
CUDA_NVRTC_LIB:FILEPATH=/home/mnaderan/cuda-10.1.168/lib64/libnvrtc.so
//Propagate C/CXX_FLAGS and friends to the host compiler via -Xcompile
CUDA_PROPAGATE_HOST_FLAGS:BOOL=ON
//Blacklisted flags to prevent propagation
CUDA_PROPAGATE_HOST_FLAGS_BLACKLIST:STRING=
//Path to a file.
CUDA_SDK_ROOT_DIR:PATH=CUDA_SDK_ROOT_DIR-NOTFOUND
//Compile CUDA objects with separable compilation enabled. Requires
// CUDA 5.0+
CUDA_SEPARABLE_COMPILATION:BOOL=OFF
//Path to a file.
CUDA_TOOLKIT_INCLUDE:PATH=/home/mnaderan/cuda-10.1.168/include
//Toolkit location.
CUDA_TOOLKIT_ROOT_DIR:PATH=/home/mnaderan/cuda-10.1.168
//Print out the commands run while compiling the CUDA source file.
// With the Makefile generator this defaults to VERBOSE variable
// specified on the command line, but can be forced on with this
// option.
CUDA_VERBOSE_BUILD:BOOL=OFF
//Version of CUDA as computed from nvcc.
CUDA_VERSION:STRING=10.1
//"cublas" library
CUDA_cublas_LIBRARY:FILEPATH=/home/mnaderan/cuda-10.1.168/lib64/libcublas.so
//"cudadevrt" library
CUDA_cudadevrt_LIBRARY:FILEPATH=/home/mnaderan/cuda-10.1.168/lib64/libcudadevrt.a
//static CUDA runtime library
CUDA_cudart_static_LIBRARY:FILEPATH=/home/mnaderan/cuda-10.1.168/lib64/libcudart_static.a
//"cufft" library
CUDA_cufft_LIBRARY:FILEPATH=/home/mnaderan/cuda-10.1.168/lib64/libcufft.so
//"cupti" library
CUDA_cupti_LIBRARY:FILEPATH=/home/mnaderan/cuda-10.1.168/extras/CUPTI/lib64/libcupti.so
//"curand" library
CUDA_curand_LIBRARY:FILEPATH=/home/mnaderan/cuda-10.1.168/lib64/libcurand.so
//"cusolver" library
CUDA_cusolver_LIBRARY:FILEPATH=/home/mnaderan/cuda-10.1.168/lib64/libcusolver.so
//"cusparse" library
CUDA_cusparse_LIBRARY:FILEPATH=/home/mnaderan/cuda-10.1.168/lib64/libcusparse.so
//"nppc" library
CUDA_nppc_LIBRARY:FILEPATH=/home/mnaderan/cuda-10.1.168/lib64/libnppc.so
//"nppial" library
CUDA_nppial_LIBRARY:FILEPATH=/home/mnaderan/cuda-10.1.168/lib64/libnppial.so
//"nppicc" library
CUDA_nppicc_LIBRARY:FILEPATH=/home/mnaderan/cuda-10.1.168/lib64/libnppicc.so
//"nppicom" library
CUDA_nppicom_LIBRARY:FILEPATH=/home/mnaderan/cuda-10.1.168/lib64/libnppicom.so
//"nppidei" library
CUDA_nppidei_LIBRARY:FILEPATH=/home/mnaderan/cuda-10.1.168/lib64/libnppidei.so
//"nppif" library
CUDA_nppif_LIBRARY:FILEPATH=/home/mnaderan/cuda-10.1.168/lib64/libnppif.so
//"nppig" library
CUDA_nppig_LIBRARY:FILEPATH=/home/mnaderan/cuda-10.1.168/lib64/libnppig.so
//"nppim" library
CUDA_nppim_LIBRARY:FILEPATH=/home/mnaderan/cuda-10.1.168/lib64/libnppim.so
//"nppist" library
CUDA_nppist_LIBRARY:FILEPATH=/home/mnaderan/cuda-10.1.168/lib64/libnppist.so
//"nppisu" library
CUDA_nppisu_LIBRARY:FILEPATH=/home/mnaderan/cuda-10.1.168/lib64/libnppisu.so
//"nppitc" library
CUDA_nppitc_LIBRARY:FILEPATH=/home/mnaderan/cuda-10.1.168/lib64/libnppitc.so
//"npps" library
CUDA_npps_LIBRARY:FILEPATH=/home/mnaderan/cuda-10.1.168/lib64/libnpps.so
//Folder containing NVIDIA cuDNN header files
CUDNN_INCLUDE_DIR:PATH=/home/mnaderan/cudnn-7.6.2/include
//Path to a file.
CUDNN_INCLUDE_PATH:PATH=/home/mnaderan/cudnn-7.6.2/include
//Path to the cudnn library file (e.g., libcudnn.so)
CUDNN_LIBRARY:PATH=/home/mnaderan/cudnn-7.6.2/lib64
//Path to a library.
CUDNN_LIBRARY_PATH:FILEPATH=/home/mnaderan/cudnn-7.6.2/lib64/libcudnn.so
//Folder containing NVIDIA cuDNN
CUDNN_ROOT:PATH=
//Look for static CUDNN
CUDNN_STATIC:BOOL=OFF
//CXX AVX2 flags
CXX_AVX2_FLAGS:STRING=-mavx2 -mfma
//CXX AVX2 support
CXX_AVX2_FOUND:BOOL=TRUE
//CXX AVX flags
CXX_AVX_FLAGS:STRING=-mavx
//CXX AVX support
CXX_AVX_FOUND:BOOL=TRUE
//C AVX2 flags
C_AVX2_FLAGS:STRING=-mavx2 -mfma
//C AVX2 support
C_AVX2_FOUND:BOOL=TRUE
//C AVX flags
C_AVX_FLAGS:STRING=-mavx
//C AVX support
C_AVX_FOUND:BOOL=TRUE
//Value Computed by CMake
Caffe2_BINARY_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/build
//Value Computed by CMake
Caffe2_SOURCE_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch
//Dependencies for the target
Caffe2_perfkernels_avx2_LIB_DEPENDS:STATIC=general;c10;
//Dependencies for the target
Caffe2_perfkernels_avx512_LIB_DEPENDS:STATIC=general;c10;
//Dependencies for the target
Caffe2_perfkernels_avx_LIB_DEPENDS:STATIC=general;c10;
//Disable AVX2
DISABLE_AVX2:BOOL=OFF
//Disable AVX512F
DISABLE_AVX512F:BOOL=OFF
//Disable float128
DISABLE_FLOAT128:BOOL=OFF
//Disable FMA4
DISABLE_FMA4:BOOL=OFF
//Disable long double
DISABLE_LONG_DOUBLE:BOOL=OFF
//Disable OPENMP
DISABLE_OPENMP:BOOL=OFF
//Disable SSE2
DISABLE_SSE2:BOOL=OFF
//Disable AVX
DISABLE_SSE4:BOOL=OFF
//Disable SVE
DISABLE_SVE:BOOL=OFF
//Disable VSX
DISABLE_VSX:BOOL=OFF
//Dot tool for use with Doxygen
DOXYGEN_DOT_EXECUTABLE:FILEPATH=/usr/bin/dot
//Doxygen documentation generation tool (http://www.doxygen.org)
DOXYGEN_EXECUTABLE:FILEPATH=/usr/bin/doxygen
//Build fails if AVX2 is not supported by the compiler
ENFORCE_AVX2:BOOL=OFF
//Build fails if AVX512F is not supported by the compiler
ENFORCE_AVX512F:BOOL=OFF
//Build fails if float128 is not supported by the compiler
ENFORCE_FLOAT128:BOOL=OFF
//Build fails if FMA4 is not supported by the compiler
ENFORCE_FMA4:BOOL=OFF
//Build fails if long double is not supported by the compiler
ENFORCE_LONG_DOUBLE:BOOL=OFF
//Build fails if OPENMP is not supported by the compiler
ENFORCE_OPENMP:BOOL=OFF
//Build fails if SSE2 is not supported by the compiler
ENFORCE_SSE2:BOOL=OFF
//Build fails if AVX is not supported by the compiler
ENFORCE_SSE4:BOOL=OFF
//Build fails if SVE is not supported by the compiler
ENFORCE_SVE:BOOL=OFF
//Build fails if tester3 is not built
ENFORCE_TESTER3:BOOL=OFF
//Build fails if VSX is not supported by the compiler
ENFORCE_VSX:BOOL=OFF
//Experimental option to use a single thread pool for inter- and
// intra-op parallelism
EXPERIMENTAL_SINGLE_THREAD_POOL:STRING=0
//Build fbgemm benchmarks
FBGEMM_BUILD_BENCHMARKS:BOOL=OFF
//Build fbgemm unit tests
FBGEMM_BUILD_TESTS:BOOL=OFF
FBGEMM_LIBRARY_TYPE:STRING=static
//FBGEMM source directory
FBGEMM_SOURCE_DIR:STRING=/mnt/local/mnaderan/pt/pytorch/third_party/fbgemm
//Path to a file.
FFTW3_INCLUDE_DIR:PATH=FFTW3_INCLUDE_DIR-NOTFOUND
//Build with Werror
FOXI_WERROR:BOOL=OFF
//Value Computed by CMake
FP16_BINARY_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/build/confu-deps/fp16
//Build FP16 micro-benchmarks
FP16_BUILD_BENCHMARKS:BOOL=OFF
//Build FP16 unit tests
FP16_BUILD_TESTS:BOOL=OFF
//Value Computed by CMake
FP16_SOURCE_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/third_party/FP16
//Build FXdiv micro-benchmarks
FXDIV_BUILD_BENCHMARKS:BOOL=OFF
//Build FXdiv unit tests
FXDIV_BUILD_TESTS:BOOL=OFF
//FXdiv source directory
FXDIV_SOURCE_DIR:STRING=/mnt/local/mnaderan/pt/pytorch/third_party/FXdiv
//Allow use of inline assembly in FXdiv
FXDIV_USE_INLINE_ASSEMBLY:BOOL=OFF
//Value Computed by CMake
FXdiv_BINARY_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/build/confu-deps/fxdiv
//Value Computed by CMake
FXdiv_SOURCE_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/third_party/FXdiv
//Git command line client
GIT_EXECUTABLE:FILEPATH=/usr/bin/git
GLOO_INSTALL:BOOL=ON
GLOO_STATIC_OR_SHARED:STRING=STATIC
//Google Test source directory
GOOGLETEST_SOURCE_DIR:STRING=/mnt/local/mnaderan/pt/pytorch/third_party/googletest
//Path to a file.
IDEEP_INCLUDE_DIR:PATH=/mnt/local/mnaderan/pt/pytorch/third_party/ideep/include
//Enable installation of googletest. (Projects embedding googletest
// may want to turn this OFF.)
INSTALL_GTEST:BOOL=OFF
//Install test binaries if BUILD_TEST is on
INSTALL_TEST:BOOL=ON
//Root directory of the Intel Compiler Suite (contains ipp, mkl,
// etc.)
INTEL_COMPILER_DIR:STRING=/opt/intel
//Root directory of the Intel MKL (standalone)
INTEL_MKL_DIR:STRING=/opt/intel/mkl
//Value Computed by CMake
Intel(R) MKL-DNN_BINARY_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/build/third_party/ideep/mkl-dnn
//Value Computed by CMake
Intel(R) MKL-DNN_SOURCE_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/third_party/ideep/mkl-dnn
//Path to a library.
LIBFFTW3:FILEPATH=LIBFFTW3-NOTFOUND
//Path to a library.
LIBGMP:FILEPATH=LIBGMP-NOTFOUND
//Path to a library.
LIBM:FILEPATH=/usr/lib/x86_64-linux-gnu/libm.so
//Path to a library.
LIBNVTOOLSEXT:FILEPATH=/home/mnaderan/cuda-10.1.168/lib64/libnvToolsExt.so
//Path to a library.
LIBRT:FILEPATH=/usr/lib/x86_64-linux-gnu/librt.so
//libshm install library directory
LIBSHM_INSTALL_LIB_SUBDIR:PATH=lib
//Path to a library.
LIB_MPFR:FILEPATH=LIB_MPFR-NOTFOUND
//Path to a program.
LLVM_FILECHECK_EXE:FILEPATH=LLVM_FILECHECK_EXE-NOTFOUND
//Path to a file.
MAGMA_INCLUDE_DIR:PATH=MAGMA_INCLUDE_DIR-NOTFOUND
//Path to a library.
MAGMA_LIBRARIES:FILEPATH=MAGMA_LIBRARIES-NOTFOUND
//disables sharing a common scratchpad between primitives.
//\n This option must be turned on if there is a possibility
// of concurrent
//\n execution of primitives that were created in the same thread.
//\n CAUTION: enabling this option increases memory consumption
MKLDNN_ENABLE_CONCURRENT_EXEC:BOOL=OFF
//Path to a file.
MKLDNN_INCLUDE_DIR:PATH=/mnt/local/mnaderan/pt/pytorch/third_party/ideep/mkl-dnn/include
//specifies installation mode; supports DEFAULT or BUNDLE.
//\n
//\n When BUNDLE option is set MKL-DNN will be installed as a
// bundle
//\n which contains examples and benchdnn.
//\n The BUNDLE option requires MKLDNN_USE_MKL be set to FULL:STATIC.
MKLDNN_INSTALL_MODE:STRING=DEFAULT
MKLDNN_LIBRARY_TYPE:STRING=STATIC
MKLDNN_THREADING:STRING=OMP:COMP
//instructs build system to use a Clang sanitizer. Possible values:
//\n Address: enables AddressSanitizer
//\n Memory: enables MemorySanitizer
//\n MemoryWithOrigin: enables MemorySanitizer with origin tracking
//\n Undefined: enables UndefinedBehaviourSanitizer
//\n This feature is experimental and is only available on Linux.
MKLDNN_USE_CLANG_SANITIZER:STRING=
MKLDNN_USE_MKL:STRING=NONE
//allows Intel(R) MKL-DNN be verbose whenever MKLDNN_VERBOSE
//\n environment variable set to 1
MKLDNN_VERBOSE:BOOL=ON
//treat warnings as errors
MKLDNN_WERROR:BOOL=OFF
//Path to a library.
MKL_LIBRARIES_mkl_LIBRARY:FILEPATH=MKL_LIBRARIES_mkl_LIBRARY-NOTFOUND
//Path to a library.
MKL_LIBRARIES_mkl_gf_LIBRARY:FILEPATH=MKL_LIBRARIES_mkl_gf_LIBRARY-NOTFOUND
//Path to a library.
MKL_LIBRARIES_mkl_gf_lp64_LIBRARY:FILEPATH=MKL_LIBRARIES_mkl_gf_lp64_LIBRARY-NOTFOUND
//Path to a library.
MKL_LIBRARIES_mkl_intel_LIBRARY:FILEPATH=MKL_LIBRARIES_mkl_intel_LIBRARY-NOTFOUND
//Path to a library.
MKL_LIBRARIES_mkl_intel_lp64_LIBRARY:FILEPATH=MKL_LIBRARIES_mkl_intel_lp64_LIBRARY-NOTFOUND
//MKL flavor: SEQ, TBB or OMP (default)
MKL_THREADING:STRING=OMP
//Path to a file.
MPFR_INCLUDE_DIR:PATH=MPFR_INCLUDE_DIR-NOTFOUND
//Executable for running MPI programs.
MPIEXEC_EXECUTABLE:FILEPATH=/usr/bin/mpiexec
//Maximum number of processors available to run MPI applications.
MPIEXEC_MAX_NUMPROCS:STRING=24
//Flag used by MPI to specify the number of processes for mpiexec;
// the next option will be the number of processes.
MPIEXEC_NUMPROC_FLAG:STRING=-n
//These flags will be placed after all flags passed to mpiexec.
MPIEXEC_POSTFLAGS:STRING=
//These flags will be directly before the executable that is being
// run by mpiexec.
MPIEXEC_PREFLAGS:STRING=
//MPI CXX additional include directories
MPI_CXX_ADDITIONAL_INCLUDE_DIRS:STRING=/usr/lib/x86_64-linux-gnu/openmpi/include/openmpi;/usr/lib/x86_64-linux-gnu/openmpi/include/openmpi/opal/mca/event/libevent2022/libevent;/usr/lib/x86_64-linux-gnu/openmpi/include/openmpi/opal/mca/event/libevent2022/libevent/include
//MPI compiler for CXX
MPI_CXX_COMPILER:FILEPATH=/usr/bin/mpicxx
//MPI CXX compilation definitions
MPI_CXX_COMPILE_DEFINITIONS:STRING=
//MPI CXX compilation options
MPI_CXX_COMPILE_OPTIONS:STRING=-pthread
//Path to a file.
MPI_CXX_HEADER_DIR:PATH=/usr/lib/x86_64-linux-gnu/openmpi/include
//MPI CXX libraries to link against
MPI_CXX_LIB_NAMES:STRING=mpi_cxx;mpi
//MPI CXX linker flags
MPI_CXX_LINK_FLAGS:STRING=-pthread
//If true, the MPI-2 C++ bindings are disabled using definitions.
MPI_CXX_SKIP_MPICXX:BOOL=FALSE
//MPI C additional include directories
MPI_C_ADDITIONAL_INCLUDE_DIRS:STRING=/usr/lib/x86_64-linux-gnu/openmpi/include/openmpi;/usr/lib/x86_64-linux-gnu/openmpi/include/openmpi/opal/mca/event/libevent2022/libevent;/usr/lib/x86_64-linux-gnu/openmpi/include/openmpi/opal/mca/event/libevent2022/libevent/include
//MPI compiler for C
MPI_C_COMPILER:FILEPATH=/usr/bin/mpicc
//MPI C compilation definitions
MPI_C_COMPILE_DEFINITIONS:STRING=
//MPI C compilation options
MPI_C_COMPILE_OPTIONS:STRING=-pthread
//Path to a file.
MPI_C_HEADER_DIR:PATH=/usr/lib/x86_64-linux-gnu/openmpi/include
//MPI C libraries to link against
MPI_C_LIB_NAMES:STRING=mpi
//MPI C linker flags
MPI_C_LINK_FLAGS:STRING=-pthread
//Location of the mpi library for MPI
MPI_mpi_LIBRARY:FILEPATH=/usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi.so
//Location of the mpi_cxx library for MPI
MPI_mpi_cxx_LIBRARY:FILEPATH=/usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi_cxx.so
//Path to a file.
NCCL_INCLUDE_DIR:PATH=/home/mnaderan/nccl_2.4.8/include
//Path to a library.
NCCL_LIBRARY:FILEPATH=/home/mnaderan/nccl_2.4.8/lib/libnccl.so
//Folder contains NVIDIA NCCL
NCCL_ROOT_DIR:PATH=/home/mnaderan/nccl_2.4.8
//disable asserts (WARNING: this may result in silent UB e.g. with
// out-of-bound indices)
NDEBUG:BOOL=OFF
//NEON available on host
NEON_FOUND:BOOL=false
//Backend for micro-kernels implementation
NNPACK_BACKEND:STRING=auto
//Value Computed by CMake
NNPACK_BINARY_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/build/confu-deps/NNPACK
NNPACK_BUILD_BENCHMARKS:BOOL=OFF
//Build NNPACK unit tests
NNPACK_BUILD_TESTS:BOOL=OFF
//Build only NNPACK functions for convolutional layer
NNPACK_CONVOLUTION_ONLY:BOOL=OFF
//Build NNPACK for custom thread pool
NNPACK_CUSTOM_THREADPOOL:BOOL=ON
//Build only NNPACK functions for inference
NNPACK_INFERENCE_ONLY:BOOL=OFF
NNPACK_LIBRARY_TYPE:STRING=static
//Value Computed by CMake
NNPACK_SOURCE_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/third_party/NNPACK
//No help, variable specified on the command line.
NUMPY_INCLUDE_DIR:UNINITIALIZED=/home/mnaderan/.local/lib/python3.6/site-packages/numpy/core/include
//Path to a file.
Numa_INCLUDE_DIR:PATH=/usr/include
//Path to a library.
Numa_LIBRARIES:FILEPATH=/usr/lib/x86_64-linux-gnu/libnuma.so
//Path to a program.
OMPI_INFO:FILEPATH=/usr/bin/ompi_info
//Use dummy backend in onnxifi test driver.
ONNXIFI_DUMMY_BACKEND:BOOL=OFF
//Enable onnxifi extensions.
ONNXIFI_ENABLE_EXT:BOOL=OFF
//Build ONNX micro-benchmarks
ONNX_BUILD_BENCHMARKS:BOOL=OFF
//Build ONNX C++ APIs Tests
ONNX_BUILD_TESTS:BOOL=OFF
//Build with coverage instrumentation
ONNX_COVERAGE:BOOL=OFF
//Generate protobuf python type stubs
ONNX_GEN_PB_TYPE_STUBS:BOOL=ON
//A namespace for ONNX; needed to build with other frameworks that
// share ONNX.
ONNX_NAMESPACE:STRING=onnx_torch
//Use lite protobuf instead of full.
ONNX_USE_LITE_PROTO:BOOL=OFF
//Build ONNX using protobuf shared library. Sets PROTOBUF_USE_DLLS
// CMAKE Flag
ONNX_USE_PROTOBUF_SHARED_LIBS:BOOL=OFF
//Generate code by proto3
ONNX_VERIFY_PROTO3:BOOL=OFF
//Build with Werror
ONNX_WERROR:BOOL=OFF
//OpenMP Support found
OPENMP_FOUND:BOOL=TRUE
//Path to a library.
OPENSSL_CRYPTO_LIBRARY:FILEPATH=OPENSSL_CRYPTO_LIBRARY-NOTFOUND
//Path to a file.
OPENSSL_INCLUDE_DIR:PATH=OPENSSL_INCLUDE_DIR-NOTFOUND
//Path to a library.
OPENSSL_SSL_LIBRARY:FILEPATH=OPENSSL_SSL_LIBRARY-NOTFOUND
//CXX compiler flags for OpenMP parallelization
OpenMP_CXX_FLAGS:STRING=-fopenmp
//CXX compiler libraries for OpenMP parallelization
OpenMP_CXX_LIB_NAMES:STRING=gomp;pthread
//C compiler flags for OpenMP parallelization
OpenMP_C_FLAGS:STRING=-fopenmp
//C compiler libraries for OpenMP parallelization
OpenMP_C_LIB_NAMES:STRING=gomp;pthread
//Path to the gomp library for OpenMP
OpenMP_gomp_LIBRARY:FILEPATH=/usr/lib/gcc/x86_64-linux-gnu/7/libgomp.so
//Path to the pthread library for OpenMP
OpenMP_pthread_LIBRARY:FILEPATH=/usr/lib/x86_64-linux-gnu/libpthread.so
//pkg-config executable
PKG_CONFIG_EXECUTABLE:FILEPATH=/usr/bin/pkg-config
//PSimd source directory
PSIMD_SOURCE_DIR:STRING=/mnt/local/mnaderan/pt/pytorch/third_party/psimd
//Build pthreadpool micro-benchmarks
PTHREADPOOL_BUILD_BENCHMARKS:BOOL=OFF
//Build pthreadpool unit tests
PTHREADPOOL_BUILD_TESTS:BOOL=OFF
PTHREADPOOL_LIBRARY_TYPE:STRING=static
//pthreadpool source directory
PTHREADPOOL_SOURCE_DIR:STRING=/mnt/local/mnaderan/pt/pytorch/third_party/pthreadpool
//enum34 (Python package) source directory
PYTHON_ENUM_SOURCE_DIR:STRING=/mnt/local/mnaderan/pt/pytorch/third_party/python-enum
//Path to a program.
PYTHON_EXECUTABLE:FILEPATH=/usr/bin/python3
//Path to a file.
PYTHON_INCLUDE_DIR:PATH=/usr/include/python3.6m
//Path to a library.
PYTHON_LIBRARY:FILEPATH=/usr/lib/libpython3.6m.so.1.0
//Path to a library.
PYTHON_LIBRARY_DEBUG:FILEPATH=PYTHON_LIBRARY_DEBUG-NOTFOUND
//Python installation path (relative to CMake installation prefix)
PYTHON_LIB_REL_PATH:STRING=lib/python3/dist-packages
//PeachPy (Python package) source directory
PYTHON_PEACHPY_SOURCE_DIR:STRING=/mnt/local/mnaderan/pt/pytorch/third_party/python-peachpy
//six (Python package) source directory
PYTHON_SIX_SOURCE_DIR:STRING=/mnt/local/mnaderan/pt/pytorch/third_party/python-six
//Value Computed by CMake
PYTORCH_QNNPACK_BINARY_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/build/confu-deps/pytorch_qnnpack
//Build QNNPACK benchmarks
PYTORCH_QNNPACK_BUILD_BENCHMARKS:BOOL=OFF
//Build QNNPACK unit tests
PYTORCH_QNNPACK_BUILD_TESTS:BOOL=OFF
//Build QNNPACK for custom thread pool
PYTORCH_QNNPACK_CUSTOM_THREADPOOL:BOOL=ON
PYTORCH_QNNPACK_LIBRARY_TYPE:STRING=static
//Value Computed by CMake
PYTORCH_QNNPACK_SOURCE_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/aten/src/ATen/native/quantized/cpu/qnnpack
//Value Computed by CMake
QNNPACK_BINARY_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/build/confu-deps/QNNPACK
//Build QNNPACK benchmarks
QNNPACK_BUILD_BENCHMARKS:BOOL=OFF
//Build QNNPACK unit tests
QNNPACK_BUILD_TESTS:BOOL=OFF
//Build QNNPACK for custom thread pool
QNNPACK_CUSTOM_THREADPOOL:BOOL=ON
QNNPACK_LIBRARY_TYPE:STRING=static
//Value Computed by CMake
QNNPACK_SOURCE_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/third_party/QNNPACK
//Path to a program.
SDE_COMMAND:FILEPATH=SDE_COMMAND-NOTFOUND
//Path to the yaml file that contains the list of operators to
// include for custom build. Include all operators by default.
SELECTED_OP_LIST:STRING=
//Value Computed by CMake
SLEEF_BINARY_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/build/sleef
//Path for finding sleef specific cmake scripts
SLEEF_SCRIPT_PATH:PATH=/mnt/local/mnaderan/pt/pytorch/third_party/sleef/cmake/Scripts
//Show SLEEF configuration status messages.
SLEEF_SHOW_CONFIG:BOOL=ON
//Show cmake error log.
SLEEF_SHOW_ERROR_LOG:BOOL=OFF
//Value Computed by CMake
SLEEF_SOURCE_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/third_party/sleef
//List of SIMD architectures supported by libsleef.
SLEEF_SUPPORTED_EXTENSIONS:STRING=AVX512F;AVX512FNOFMA;AVX2;AVX2128;FMA4;AVX;SSE4;SSE2;ADVSIMD;ADVSIMDNOFMA;SVE;SVENOFMA;NEON32;NEON32VFPV4;VSX;VSXNOFMA;PUREC_SCALAR;PURECFMA_SCALAR
//List of SIMD architectures supported by libsleef for GNU ABI.
SLEEF_SUPPORTED_GNUABI_EXTENSIONS:STRING=SSE2;AVX;AVX2;AVX512F;ADVSIMD;SVE
//Perform tests on implementations with all vector extensions
SLEEF_TEST_ALL_IUT:BOOL=OFF
//Torch build version
TORCH_BUILD_VERSION:STRING=1.4.0a0+2488231
//Use Address Sanitizer
USE_ASAN:BOOL=OFF
//USE C10D GLOO
USE_C10D_GLOO:BOOL=ON
//USE C10D MPI
USE_C10D_MPI:BOOL=ON
//USE C10D NCCL
USE_C10D_NCCL:BOOL=ON
//Use CUDA
USE_CUDA:BOOL=ON
//Use cuDNN
USE_CUDNN:BOOL=ON
//Use distributed
USE_DISTRIBUTED:BOOL=ON
//Use FBGEMM (quantized 8-bit server operators)
USE_FBGEMM:BOOL=ON
//Use ffmpeg
USE_FFMPEG:BOOL=OFF
//Use GFLAGS
USE_GFLAGS:BOOL=OFF
//Use GLOG
USE_GLOG:BOOL=OFF
//Use Gloo. Only available if USE_DISTRIBUTED is on.
USE_GLOO:BOOL=ON
//Support ibverbs transport
USE_IBVERBS:BOOL=OFF
//Use LEVELDB
USE_LEVELDB:BOOL=OFF
//Build libuv transport
USE_LIBUV:BOOL=OFF
//Use lite protobuf instead of full.
USE_LITE_PROTO:BOOL=OFF
//Use LMDB
USE_LMDB:BOOL=OFF
//Use Metal for iOS build
USE_METAL:BOOL=OFF
//Use MKLDNN. Only available on x86 and x86_64.
USE_MKLDNN:BOOL=ON
//Use CBLAS in MKLDNN
USE_MKLDNN_CBLAS:BOOL=OFF
//Use MPI for Caffe2. Only available if USE_DISTRIBUTED is on.
USE_MPI:BOOL=ON
//Use -march=native
USE_NATIVE_ARCH:BOOL=OFF
//Support using NCCL for local collectives
USE_NCCL:BOOL=ON
//Use NNAPI
USE_NNAPI:BOOL=OFF
//Use NNPACK
USE_NNPACK:BOOL=ON
//Use NUMA. Only available on Linux.
USE_NUMA:BOOL=ON
//Use NumPy
USE_NUMPY:BOOL=ON
//Use NVRTC. Only available if USE_CUDA is on.
USE_NVRTC:BOOL=OFF
//Use observers module.
USE_OBSERVERS:BOOL=ON
//Use OpenCL
USE_OPENCL:BOOL=OFF
//Use OpenCV
USE_OPENCV:BOOL=OFF
//Use OpenMP for parallel code
USE_OPENMP:BOOL=ON
//Use profiling
USE_PROF:BOOL=OFF
//Use ATen/QNNPACK (quantized 8-bit operators)
USE_PYTORCH_QNNPACK:BOOL=ON
//Use QNNPACK (quantized 8-bit operators)
USE_QNNPACK:BOOL=ON
//Support using RCCL for local collectives
USE_RCCL:BOOL=OFF
//Support using Redis for rendezvous
USE_REDIS:BOOL=OFF
//Use RocksDB
USE_ROCKSDB:BOOL=OFF
//Use ROCm
USE_ROCM:BOOL=OFF
//Use Qualcomm's SNPE library
USE_SNPE:BOOL=OFF
//Use cuDNN static libraries
USE_STATIC_CUDNN:BOOL=OFF
//Use static dispatch for ATen operators
USE_STATIC_DISPATCH:BOOL=OFF
//Use static NCCL
USE_STATIC_NCCL:BOOL=OFF
//Use system Eigen instead of the one under third_party
USE_SYSTEM_EIGEN_INSTALL:BOOL=OFF
//Use system-wide NCCL
USE_SYSTEM_NCCL:BOOL=OFF
//Use TBB
USE_TBB:BOOL=OFF
//Using Nvidia TensorRT library
USE_TENSORRT:BOOL=OFF
//Use ZMQ
USE_ZMQ:BOOL=OFF
//Use ZSTD
USE_ZSTD:BOOL=OFF
//path to Intel(R) VTune(tm) Amplifier.
//\n Required to register Intel(R) MKL-DNN kernels that are generated
// at
//\n runtime, otherwise the profile would not be able to track
// the kernels and
//\n would report `outside any known module`.
VTUNEROOT:STRING=
//Blas type [mkl/open/goto/acml/atlas/accelerate/veclib/generic]
WITH_BLAS:STRING=
//builds examples
WITH_EXAMPLE:BOOL=FALSE
//OpenMP support if available?
WITH_OPENMP:BOOL=ON
//builds tests
WITH_TEST:BOOL=FALSE
//Path to a file.
__header_dir:PATH=__header_dir-NOTFOUND
//Dependencies for the target
asmjit_LIB_DEPENDS:STATIC=general;pthread;general;rt;
//Value Computed by CMake
benchmark_BINARY_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/build/third_party/benchmark
//Dependencies for the target
benchmark_LIB_DEPENDS:STATIC=general;-pthread;general;/usr/lib/x86_64-linux-gnu/librt.so;
//Value Computed by CMake
benchmark_SOURCE_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/third_party/benchmark
//Dependencies for the target
benchmark_main_LIB_DEPENDS:STATIC=general;benchmark;
//Value Computed by CMake
c10_BINARY_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/build/c10
//Dependencies for the target
c10_LIB_DEPENDS:STATIC=general;/usr/lib/x86_64-linux-gnu/libnuma.so;
//Value Computed by CMake
c10_SOURCE_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/c10
//Dependencies for the target
c10_cuda_LIB_DEPENDS:STATIC=general;/home/mnaderan/cuda-10.1.168/lib64/libcudart.so;general;c10;
//Dependencies for the target
c10d_LIB_DEPENDS:STATIC=general;torch;general;/usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi_cxx.so;general;/usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi.so;general;gloo;general;gloo_cuda;
//Dependencies for the target
c10d_cuda_test_LIB_DEPENDS:STATIC=general;/home/mnaderan/cuda-10.1.168/lib64/libcudart.so;general;c10d;
//Dependencies for the target
caffe2_detectron_ops_gpu_LIB_DEPENDS:STATIC=general;/home/mnaderan/cuda-10.1.168/lib64/libcudart.so;general;torch;general;/usr/lib/gcc/x86_64-linux-gnu/7/libgomp.so;general;/usr/lib/x86_64-linux-gnu/libpthread.so;
//Dependencies for target
caffe2_module_test_dynamic_LIB_DEPENDS:STATIC=
//Dependencies for the target
caffe2_nvrtc_LIB_DEPENDS:STATIC=general;/home/mnaderan/cuda-10.1.168/lib64/stubs/libcuda.so;general;/home/mnaderan/cuda-10.1.168/lib64/libnvrtc.so;
//Dependencies for target
caffe2_observers_LIB_DEPENDS:STATIC=
//Dependencies for the target
caffe2_protos_LIB_DEPENDS:STATIC=general;protobuf::libprotobuf;
//Dependencies for target
caffe2_pybind11_state_LIB_DEPENDS:STATIC=
//Dependencies for target
caffe2_pybind11_state_gpu_LIB_DEPENDS:STATIC=
//Value Computed by CMake
clog_BINARY_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/build/confu-deps/clog
//Dependencies for target
clog_LIB_DEPENDS:STATIC=
//Value Computed by CMake
clog_SOURCE_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/third_party/QNNPACK/deps/clog
//Value Computed by CMake
cpuinfo_BINARY_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/build/confu-deps/cpuinfo
//Dependencies for the target
cpuinfo_LIB_DEPENDS:STATIC=general;-pthread;general;clog;
//Value Computed by CMake
cpuinfo_SOURCE_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/third_party/cpuinfo
//Dependencies for the target
cpuinfo_internals_LIB_DEPENDS:STATIC=general;-pthread;general;clog;
//Value Computed by CMake
fbgemm_BINARY_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/build/third_party/fbgemm
//Dependencies for the target
fbgemm_LIB_DEPENDS:STATIC=general;asmjit;general;cpuinfo;
//Value Computed by CMake
fbgemm_SOURCE_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/third_party/fbgemm
//Value Computed by CMake
foxi_BINARY_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/build/third_party/foxi
//Value Computed by CMake
foxi_SOURCE_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/third_party/foxi
//Dependencies for the target
foxi_dummy_LIB_DEPENDS:STATIC=general;dl;
//Dependencies for the target
foxi_loader_LIB_DEPENDS:STATIC=general;dl;
//Dependencies for the target
foxi_wrapper_LIB_DEPENDS:STATIC=general;foxi_loader;
//Value Computed by CMake
gloo_BINARY_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/build/third_party/gloo
//Dependencies for the target
gloo_LIB_DEPENDS:STATIC=general;/usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi_cxx.so;general;/usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi.so;general;/home/mnaderan/cuda-10.1.168/lib64/libcudart.so;general;pthread;
//Value Computed by CMake
gloo_SOURCE_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/third_party/gloo
//Dependencies for the target
gloo_cuda_LIB_DEPENDS:STATIC=general;/home/mnaderan/cuda-10.1.168/lib64/libcudart.so;general;gloo;general;/home/mnaderan/nccl_2.4.8/lib/libnccl.so;general;dl;general;rt;
//Value Computed by CMake
gmock_BINARY_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/build/third_party/googletest/googlemock
//Dependencies for the target
gmock_LIB_DEPENDS:STATIC=general;gtest;
//Value Computed by CMake
gmock_SOURCE_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/third_party/googletest/googlemock
//Build all of Google Mock's own tests.
gmock_build_tests:BOOL=OFF
//Dependencies for the target
gmock_main_LIB_DEPENDS:STATIC=general;gmock;
//Value Computed by CMake
googletest-distribution_BINARY_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/build/third_party/googletest
//Value Computed by CMake
googletest-distribution_SOURCE_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/third_party/googletest
//Value Computed by CMake
gtest_BINARY_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/build/third_party/googletest/googlemock/gtest
//Dependencies for target
gtest_LIB_DEPENDS:STATIC=
//Value Computed by CMake
gtest_SOURCE_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/third_party/googletest/googletest
//Build gtest's sample programs.
gtest_build_samples:BOOL=OFF
//Build all of gtest's own tests.
gtest_build_tests:BOOL=OFF
//Disable uses of pthreads in gtest.
gtest_disable_pthreads:BOOL=OFF
//Use shared (DLL) run-time lib even when Google Test is built
// as static lib.
gtest_force_shared_crt:BOOL=ON
//Build gtest with internal symbols hidden in shared libraries.
gtest_hide_internal_symbols:BOOL=OFF
//Dependencies for the target
gtest_main_LIB_DEPENDS:STATIC=general;gtest;
//Dependencies for the target
libprotobuf-lite_LIB_DEPENDS:STATIC=general;-lpthread;
//Dependencies for the target
libprotobuf_LIB_DEPENDS:STATIC=general;-lpthread;
//Dependencies for the target
libprotoc_LIB_DEPENDS:STATIC=general;libprotobuf;
//Value Computed by CMake
libshm_BINARY_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/build/caffe2/torch/lib/libshm
//Value Computed by CMake
libshm_SOURCE_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/torch/lib/libshm
//Dependencies for target
mkldnn_LIB_DEPENDS:STATIC=
//Value Computed by CMake
modules_BINARY_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/build/modules
//Value Computed by CMake
modules_SOURCE_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/modules
//Dependencies for the target
nnpack_LIB_DEPENDS:STATIC=general;cpuinfo;
//Dependencies for the target
nnpack_reference_layers_LIB_DEPENDS:STATIC=general;pthreadpool;
//Value Computed by CMake
onnx_BINARY_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/build/third_party/onnx
//Dependencies for the target
onnx_LIB_DEPENDS:STATIC=general;onnx_proto;
//Value Computed by CMake
onnx_SOURCE_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/third_party/onnx
//Dependencies for the target
onnx_proto_LIB_DEPENDS:STATIC=general;protobuf::libprotobuf;
//Dependencies for the target
onnxifi_dummy_LIB_DEPENDS:STATIC=general;dl;
//Dependencies for the target
onnxifi_loader_LIB_DEPENDS:STATIC=general;dl;
//Dependencies for the target
onnxifi_wrapper_LIB_DEPENDS:STATIC=general;onnxifi_loader;
//Value Computed by CMake
protobuf_BINARY_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/build/third_party/protobuf/cmake
//Build examples
protobuf_BUILD_EXAMPLES:BOOL=OFF
//Build libprotoc and protoc compiler
protobuf_BUILD_PROTOC_BINARIES:BOOL=ON
//Build Shared Libraries
protobuf_BUILD_SHARED_LIBS:BOOL=OFF
//Build tests
protobuf_BUILD_TESTS:BOOL=OFF
//Default debug postfix
protobuf_DEBUG_POSTFIX:STRING=d
//Install the examples folder
protobuf_INSTALL_EXAMPLES:BOOL=OFF
//CMake build-in FindProtobuf.cmake module compatible
protobuf_MODULE_COMPATIBLE:BOOL=OFF
//Link static runtime libraries
protobuf_MSVC_STATIC_RUNTIME:BOOL=OFF
//Value Computed by CMake
protobuf_SOURCE_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/third_party/protobuf/cmake
//Enable for verbose output
protobuf_VERBOSE:BOOL=OFF
//Build with zlib support
protobuf_WITH_ZLIB:BOOL=OFF
//Value Computed by CMake
psimd_BINARY_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/build/confu-deps/psimd
//Value Computed by CMake
psimd_SOURCE_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/third_party/psimd
//Value Computed by CMake
pthreadpool_BINARY_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/build/confu-deps/pthreadpool
//Dependencies for the target
pthreadpool_LIB_DEPENDS:STATIC=general;-pthread;
//Value Computed by CMake
pthreadpool_SOURCE_DIR:STATIC=/mnt/local/mnaderan/pt/pytorch/third_party/pthreadpool
//The directory containing a CMake configuration file for pybind11.
pybind11_DIR:PATH=pybind11_DIR-NOTFOUND
//The directory where pybind11 includes reside
pybind11_INCLUDE_DIR:PATH=pybind11_INCLUDE_DIR-NOTFOUND
//Dependencies for the target
pytorch_qnnpack_LIB_DEPENDS:STATIC=general;clog;general;cpuinfo;
//Dependencies for the target
qnnpack_LIB_DEPENDS:STATIC=general;clog;general;cpuinfo;
//Dependencies for the target
shm_LIB_DEPENDS:STATIC=general;torch;general;c10;general;rt;
//Dependencies for target
sleef_LIB_DEPENDS:STATIC=
//Dependencies for target
torch_LIB_DEPENDS:STATIC=
//Dependencies for the target
torch_cpu_LIB_DEPENDS:STATIC=general;/usr/lib/gcc/x86_64-linux-gnu/7/libgomp.so;general;/usr/lib/x86_64-linux-gnu/libpthread.so;general;/usr/lib/gcc/x86_64-linux-gnu/7/libgomp.so;general;/usr/lib/x86_64-linux-gnu/libpthread.so;general;c10;general;qnnpack;general;pytorch_qnnpack;general;nnpack;general;cpuinfo;general;fbgemm;general;/usr/lib/x86_64-linux-gnu/libnuma.so;general;/usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi_cxx.so;general;/usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi.so;general;gloo;general;foxi_loader;general;rt;general;gcc_s;general;gcc;general;dl;general;rt;general;m;general;nnpack;general;mkldnn;general;cpuinfo;general;sleef;
//Dependencies for the target
torch_cuda_LIB_DEPENDS:STATIC=general;/home/mnaderan/cuda-10.1.168/lib64/libcudart.so;general;c10_cuda;general;/home/mnaderan/cuda-10.1.168/lib64/libnvToolsExt.so;general;/home/mnaderan/cuda-10.1.168/lib64/libcudart.so;general;gloo_cuda;general;/home/mnaderan/cuda-10.1.168/lib64/libcudart.so;general;/home/mnaderan/cuda-10.1.168/lib64/libcusparse.so;general;/home/mnaderan/cuda-10.1.168/lib64/libcurand.so;general;caffe2::curand;general;caffe2::cudnn;
//Dependencies for the target
torch_python_LIB_DEPENDS:STATIC=general;torch_library;general;shm;general;/home/mnaderan/cuda-10.1.168/lib64/libnvToolsExt.so;general;c10d;
########################
# INTERNAL cache entries
########################
//ADVANCED property for variable: BENCHMARK_CXX_FLAGS_COVERAGE
BENCHMARK_CXX_FLAGS_COVERAGE-ADVANCED:INTERNAL=1
//ADVANCED property for variable: BENCHMARK_EXE_LINKER_FLAGS_COVERAGE
BENCHMARK_EXE_LINKER_FLAGS_COVERAGE-ADVANCED:INTERNAL=1
//ADVANCED property for variable: BENCHMARK_SHARED_LINKER_FLAGS_COVERAGE
BENCHMARK_SHARED_LINKER_FLAGS_COVERAGE-ADVANCED:INTERNAL=1
//STRINGS property for variable: BLAS
BLAS-STRINGS:INTERNAL=Eigen;ATLAS;OpenBLAS;MKL;vecLib;FLAME
//ADVANCED property for variable: BLAS_Accelerate_LIBRARY
BLAS_Accelerate_LIBRARY-ADVANCED:INTERNAL=1
//ADVANCED property for variable: BLAS_acml_LIBRARY
BLAS_acml_LIBRARY-ADVANCED:INTERNAL=1
//ADVANCED property for variable: BLAS_blas_LIBRARY
BLAS_blas_LIBRARY-ADVANCED:INTERNAL=1
//ADVANCED property for variable: BLAS_blis_LIBRARY
BLAS_blis_LIBRARY-ADVANCED:INTERNAL=1
//ADVANCED property for variable: BLAS_goto2_LIBRARY
BLAS_goto2_LIBRARY-ADVANCED:INTERNAL=1
//ADVANCED property for variable: BLAS_openblas_LIBRARY
BLAS_openblas_LIBRARY-ADVANCED:INTERNAL=1
//ADVANCED property for variable: BLAS_ptf77blas_LIBRARY
BLAS_ptf77blas_LIBRARY-ADVANCED:INTERNAL=1
//ADVANCED property for variable: BLAS_vecLib_LIBRARY
BLAS_vecLib_LIBRARY-ADVANCED:INTERNAL=1
//Test CAFFE2_COMPILER_SUPPORTS_AVX2_EXTENSIONS
CAFFE2_COMPILER_SUPPORTS_AVX2_EXTENSIONS:INTERNAL=1
//Test CAFFE2_COMPILER_SUPPORTS_AVX512_EXTENSIONS
CAFFE2_COMPILER_SUPPORTS_AVX512_EXTENSIONS:INTERNAL=1
//Test CAFFE2_EXCEPTION_PTR_SUPPORTED
CAFFE2_EXCEPTION_PTR_SUPPORTED:INTERNAL=1
//Test CAFFE2_IS_NUMA_AVAILABLE
CAFFE2_IS_NUMA_AVAILABLE:INTERNAL=1
//Test CAFFE2_NEED_TO_TURN_OFF_DEPRECATION_WARNING
CAFFE2_NEED_TO_TURN_OFF_DEPRECATION_WARNING:INTERNAL=
//STRINGS property for variable: CLOG_RUNTIME_TYPE
CLOG_RUNTIME_TYPE-STRINGS:INTERNAL=default;static;shared
//ADVANCED property for variable: CMAKE_AR
CMAKE_AR-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_ASM_COMPILER
CMAKE_ASM_COMPILER-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_ASM_COMPILER_AR
CMAKE_ASM_COMPILER_AR-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_ASM_COMPILER_RANLIB
CMAKE_ASM_COMPILER_RANLIB-ADVANCED:INTERNAL=1
CMAKE_ASM_COMPILER_WORKS:INTERNAL=1
//ADVANCED property for variable: CMAKE_ASM_FLAGS
CMAKE_ASM_FLAGS-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_ASM_FLAGS_DEBUG
CMAKE_ASM_FLAGS_DEBUG-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_ASM_FLAGS_MINSIZEREL
CMAKE_ASM_FLAGS_MINSIZEREL-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_ASM_FLAGS_RELEASE
CMAKE_ASM_FLAGS_RELEASE-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_ASM_FLAGS_RELWITHDEBINFO
CMAKE_ASM_FLAGS_RELWITHDEBINFO-ADVANCED:INTERNAL=1
//This is the directory where this CMakeCache.txt was created
CMAKE_CACHEFILE_DIR:INTERNAL=/mnt/local/mnaderan/pt/pytorch/build
//Major version of cmake used to create the current loaded cache
CMAKE_CACHE_MAJOR_VERSION:INTERNAL=3
//Minor version of cmake used to create the current loaded cache
CMAKE_CACHE_MINOR_VERSION:INTERNAL=10
//Patch version of cmake used to create the current loaded cache
CMAKE_CACHE_PATCH_VERSION:INTERNAL=2
//ADVANCED property for variable: CMAKE_COLOR_MAKEFILE
CMAKE_COLOR_MAKEFILE-ADVANCED:INTERNAL=1
//Path to CMake executable.
CMAKE_COMMAND:INTERNAL=/usr/bin/cmake
//Path to cpack program executable.
CMAKE_CPACK_COMMAND:INTERNAL=/usr/bin/cpack
//Path to ctest program executable.
CMAKE_CTEST_COMMAND:INTERNAL=/usr/bin/ctest
//ADVANCED property for variable: CMAKE_CXX_COMPILER
CMAKE_CXX_COMPILER-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_CXX_COMPILER_AR
CMAKE_CXX_COMPILER_AR-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_CXX_COMPILER_RANLIB
CMAKE_CXX_COMPILER_RANLIB-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_CXX_FLAGS
CMAKE_CXX_FLAGS-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_CXX_FLAGS_DEBUG
CMAKE_CXX_FLAGS_DEBUG-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_CXX_FLAGS_MINSIZEREL
CMAKE_CXX_FLAGS_MINSIZEREL-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_CXX_FLAGS_RELEASE
CMAKE_CXX_FLAGS_RELEASE-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_CXX_FLAGS_RELWITHDEBINFO
CMAKE_CXX_FLAGS_RELWITHDEBINFO-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_C_COMPILER
CMAKE_C_COMPILER-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_C_COMPILER_AR
CMAKE_C_COMPILER_AR-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_C_COMPILER_RANLIB
CMAKE_C_COMPILER_RANLIB-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_C_FLAGS
CMAKE_C_FLAGS-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_C_FLAGS_DEBUG
CMAKE_C_FLAGS_DEBUG-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_C_FLAGS_MINSIZEREL
CMAKE_C_FLAGS_MINSIZEREL-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_C_FLAGS_RELEASE
CMAKE_C_FLAGS_RELEASE-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_C_FLAGS_RELWITHDEBINFO
CMAKE_C_FLAGS_RELWITHDEBINFO-ADVANCED:INTERNAL=1
//Executable file format
CMAKE_EXECUTABLE_FORMAT:INTERNAL=ELF
//ADVANCED property for variable: CMAKE_EXE_LINKER_FLAGS
CMAKE_EXE_LINKER_FLAGS-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_EXE_LINKER_FLAGS_DEBUG
CMAKE_EXE_LINKER_FLAGS_DEBUG-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_EXE_LINKER_FLAGS_MINSIZEREL
CMAKE_EXE_LINKER_FLAGS_MINSIZEREL-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_EXE_LINKER_FLAGS_RELEASE
CMAKE_EXE_LINKER_FLAGS_RELEASE-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_EXE_LINKER_FLAGS_RELWITHDEBINFO
CMAKE_EXE_LINKER_FLAGS_RELWITHDEBINFO-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_EXPORT_COMPILE_COMMANDS
CMAKE_EXPORT_COMPILE_COMMANDS-ADVANCED:INTERNAL=1
//Name of external makefile project generator.
CMAKE_EXTRA_GENERATOR:INTERNAL=
//Name of generator.
CMAKE_GENERATOR:INTERNAL=Unix Makefiles
//Name of generator platform.
CMAKE_GENERATOR_PLATFORM:INTERNAL=
//Name of generator toolset.
CMAKE_GENERATOR_TOOLSET:INTERNAL=
//Have symbol pthread_create
CMAKE_HAVE_LIBC_CREATE:INTERNAL=
//Have library pthreads
CMAKE_HAVE_PTHREADS_CREATE:INTERNAL=
//Have library pthread
CMAKE_HAVE_PTHREAD_CREATE:INTERNAL=1
//Have include pthread.h
CMAKE_HAVE_PTHREAD_H:INTERNAL=1
//Source directory with the top level CMakeLists.txt file for this
// project
CMAKE_HOME_DIRECTORY:INTERNAL=/mnt/local/mnaderan/pt/pytorch
//ADVANCED property for variable: CMAKE_INSTALL_BINDIR
CMAKE_INSTALL_BINDIR-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_INSTALL_CMAKEDIR
CMAKE_INSTALL_CMAKEDIR-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_INSTALL_DATADIR
CMAKE_INSTALL_DATADIR-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_INSTALL_DATAROOTDIR
CMAKE_INSTALL_DATAROOTDIR-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_INSTALL_DOCDIR
CMAKE_INSTALL_DOCDIR-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_INSTALL_INCLUDEDIR
CMAKE_INSTALL_INCLUDEDIR-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_INSTALL_INFODIR
CMAKE_INSTALL_INFODIR-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_INSTALL_LIBDIR
CMAKE_INSTALL_LIBDIR-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_INSTALL_LIBEXECDIR
CMAKE_INSTALL_LIBEXECDIR-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_INSTALL_LOCALEDIR
CMAKE_INSTALL_LOCALEDIR-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_INSTALL_LOCALSTATEDIR
CMAKE_INSTALL_LOCALSTATEDIR-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_INSTALL_MANDIR
CMAKE_INSTALL_MANDIR-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_INSTALL_OLDINCLUDEDIR
CMAKE_INSTALL_OLDINCLUDEDIR-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_INSTALL_RUNSTATEDIR
CMAKE_INSTALL_RUNSTATEDIR-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_INSTALL_SBINDIR
CMAKE_INSTALL_SBINDIR-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_INSTALL_SHAREDSTATEDIR
CMAKE_INSTALL_SHAREDSTATEDIR-ADVANCED:INTERNAL=1
//Install .so files without execute permission.
CMAKE_INSTALL_SO_NO_EXE:INTERNAL=1
//ADVANCED property for variable: CMAKE_INSTALL_SYSCONFDIR
CMAKE_INSTALL_SYSCONFDIR-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_LINKER
CMAKE_LINKER-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_MAKE_PROGRAM
CMAKE_MAKE_PROGRAM-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_MODULE_LINKER_FLAGS
CMAKE_MODULE_LINKER_FLAGS-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_MODULE_LINKER_FLAGS_DEBUG
CMAKE_MODULE_LINKER_FLAGS_DEBUG-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_MODULE_LINKER_FLAGS_MINSIZEREL
CMAKE_MODULE_LINKER_FLAGS_MINSIZEREL-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_MODULE_LINKER_FLAGS_RELEASE
CMAKE_MODULE_LINKER_FLAGS_RELEASE-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_MODULE_LINKER_FLAGS_RELWITHDEBINFO
CMAKE_MODULE_LINKER_FLAGS_RELWITHDEBINFO-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_NM
CMAKE_NM-ADVANCED:INTERNAL=1
//number of local generators
CMAKE_NUMBER_OF_MAKEFILES:INTERNAL=104
//ADVANCED property for variable: CMAKE_OBJCOPY
CMAKE_OBJCOPY-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_OBJDUMP
CMAKE_OBJDUMP-ADVANCED:INTERNAL=1
//Platform information initialized
CMAKE_PLATFORM_INFO_INITIALIZED:INTERNAL=1
//ADVANCED property for variable: CMAKE_RANLIB
CMAKE_RANLIB-ADVANCED:INTERNAL=1
//Path to CMake installation.
CMAKE_ROOT:INTERNAL=/usr/share/cmake-3.10
//ADVANCED property for variable: CMAKE_SHARED_LINKER_FLAGS
CMAKE_SHARED_LINKER_FLAGS-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_SHARED_LINKER_FLAGS_DEBUG
CMAKE_SHARED_LINKER_FLAGS_DEBUG-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_SHARED_LINKER_FLAGS_MINSIZEREL
CMAKE_SHARED_LINKER_FLAGS_MINSIZEREL-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_SHARED_LINKER_FLAGS_RELEASE
CMAKE_SHARED_LINKER_FLAGS_RELEASE-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_SHARED_LINKER_FLAGS_RELWITHDEBINFO
CMAKE_SHARED_LINKER_FLAGS_RELWITHDEBINFO-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_SKIP_INSTALL_RPATH
CMAKE_SKIP_INSTALL_RPATH-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_SKIP_RPATH
CMAKE_SKIP_RPATH-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_STATIC_LINKER_FLAGS
CMAKE_STATIC_LINKER_FLAGS-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_STATIC_LINKER_FLAGS_DEBUG
CMAKE_STATIC_LINKER_FLAGS_DEBUG-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_STATIC_LINKER_FLAGS_MINSIZEREL
CMAKE_STATIC_LINKER_FLAGS_MINSIZEREL-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_STATIC_LINKER_FLAGS_RELEASE
CMAKE_STATIC_LINKER_FLAGS_RELEASE-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_STATIC_LINKER_FLAGS_RELWITHDEBINFO
CMAKE_STATIC_LINKER_FLAGS_RELWITHDEBINFO-ADVANCED:INTERNAL=1
//ADVANCED property for variable: CMAKE_STRIP
CMAKE_STRIP-ADVANCED:INTERNAL=1
//uname command
CMAKE_UNAME:INTERNAL=/bin/uname
//ADVANCED property for variable: CMAKE_VERBOSE_MAKEFILE
CMAKE_VERBOSE_MAKEFILE-ADVANCED:INTERNAL=1
//Test COMPILER_SUPPORTS_AVX
COMPILER_SUPPORTS_AVX:INTERNAL=1
//Test COMPILER_SUPPORTS_AVX2
COMPILER_SUPPORTS_AVX2:INTERNAL=1
//Test COMPILER_SUPPORTS_AVX512
COMPILER_SUPPORTS_AVX512:INTERNAL=1
//Test COMPILER_SUPPORTS_AVX512F
COMPILER_SUPPORTS_AVX512F:INTERNAL=1
//Test COMPILER_SUPPORTS_BUILTIN_MATH
COMPILER_SUPPORTS_BUILTIN_MATH:INTERNAL=1
//Test COMPILER_SUPPORTS_FLOAT128
COMPILER_SUPPORTS_FLOAT128:INTERNAL=1
//Test COMPILER_SUPPORTS_FMA4
COMPILER_SUPPORTS_FMA4:INTERNAL=1
//Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY
COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY:INTERNAL=1
//Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY
COMPILER_SUPPORTS_HIDDEN_VISIBILITY:INTERNAL=1
//Test COMPILER_SUPPORTS_LONG_DOUBLE
COMPILER_SUPPORTS_LONG_DOUBLE:INTERNAL=1
//Test COMPILER_SUPPORTS_NO_AVX256_SPLIT
COMPILER_SUPPORTS_NO_AVX256_SPLIT:INTERNAL=1
//Test COMPILER_SUPPORTS_OPENMP
COMPILER_SUPPORTS_OPENMP:INTERNAL=1
//Test COMPILER_SUPPORTS_RDYNAMIC
COMPILER_SUPPORTS_RDYNAMIC:INTERNAL=1
//Test COMPILER_SUPPORTS_SSE2
COMPILER_SUPPORTS_SSE2:INTERNAL=1
//Test COMPILER_SUPPORTS_SSE4
COMPILER_SUPPORTS_SSE4:INTERNAL=1
//Test COMPILER_SUPPORTS_SYS_GETRANDOM
COMPILER_SUPPORTS_SYS_GETRANDOM:INTERNAL=1
//Test COMPILER_SUPPORTS_WEAK_ALIASES
COMPILER_SUPPORTS_WEAK_ALIASES:INTERNAL=1
//Test COMPILER_WORKS
COMPILER_WORKS:INTERNAL=1
//Result of TRY_COMPILE
COMPILER_WORKS_COMPILED:INTERNAL=TRUE
//Result of TRY_RUN
COMPILER_WORKS_EXITCODE:INTERNAL=0
//Result of TRY_COMPILE
COMPILE_HAVE_GNU_POSIX_REGEX:INTERNAL=FALSE
//Result of TRY_COMPILE
COMPILE_HAVE_POSIX_REGEX:INTERNAL=TRUE
//Result of TRY_COMPILE
COMPILE_HAVE_STD_REGEX:INTERNAL=TRUE
//Result of TRY_COMPILE
COMPILE_HAVE_STEADY_CLOCK:INTERNAL=TRUE
//STRINGS property for variable: CPUINFO_LIBRARY_TYPE
CPUINFO_LIBRARY_TYPE-STRINGS:INTERNAL=default;static;shared
//STRINGS property for variable: CPUINFO_LOG_LEVEL
CPUINFO_LOG_LEVEL-STRINGS:INTERNAL=default;debug;info;warning;error;fatal;none
//STRINGS property for variable: CPUINFO_RUNTIME_TYPE
CPUINFO_RUNTIME_TYPE-STRINGS:INTERNAL=default;static;shared
//ADVANCED property for variable: CUDA_64_BIT_DEVICE_CODE
CUDA_64_BIT_DEVICE_CODE-ADVANCED:INTERNAL=1
//List of intermediate files that are part of the cuda dependency
// scanning.
CUDA_ADDITIONAL_CLEAN_FILES:INTERNAL=/mnt/local/mnaderan/pt/pytorch/build/third_party/gloo/gloo/CMakeFiles/gloo_cuda.dir//gloo_cuda_generated_cuda.cu.o.depend;/mnt/local/mnaderan/pt/pytorch/build/third_party/gloo/gloo/CMakeFiles/gloo_cuda.dir//gloo_cuda_generated_cuda_private.cu.o.depend;/mnt/local/mnaderan/pt/pytorch/build/third_party/gloo/gloo/CMakeFiles/gloo_cuda.dir/nccl/gloo_cuda_generated_nccl.cu.o.depend;/mnt/local/mnaderan/pt/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/THC/torch_cuda_generated_THCReduceApplyUtils.cu.o.depend;/mnt/local/mnaderan/pt/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/THC/torch_cuda_generated_THCBlas.cu.o.depend;/mnt/local/mnaderan/pt/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/THC/torch_cuda_generated_THCSleep.cu.o.depend;/mnt/local/mnaderan/pt/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/THC/torch_cuda_generated_THCStorage.cu.o.depend;/mnt/local/mnaderan/pt/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/THC/torch_cuda_generated_THCStorageCopy.cu.o.depend;/mnt/local/mnaderan/pt/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/THC/torch_cuda_generated_THCTensor.cu.o.depend;/mnt/local/mnaderan/pt/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/THC/torch_cuda_generated_THCTensorCopy.cu.o.depend;/mnt/local/mnaderan/pt/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/THC/torch_cuda_generated_THCTensorMath.cu.o.depend;/mnt/local/mnaderan/pt/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/THC/torch_cuda_generated_THCTensorMathBlas.cu.o.depend;/mnt/local/mnaderan/pt/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/THC/torch_cuda_generated_THCTensorMathMagma.cu.o.depend;/mnt/local/mnaderan/pt/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/THC/torch_cuda_generated_THCTensorMathPairwise.cu.o.depend;/mnt/local/mnaderan/pt/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/THC/torch_cuda_generated_THCTensorMathReduce.cu.o.depend;/mnt/local/mnaderan/pt/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/THC/torch_cuda_generated_THCTensorMathScan.cu.o.depend;/mnt/local/mnaderan/pt/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/THC/torch_cuda_generated_THCTensorIndex.cu.o.depend;/mnt/local/mnaderan/pt/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/THC/torch_cuda_generated_THCTensorRandom.cu.o.depend;/mnt/local/mnaderan/pt/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/THC/torch_cuda_generated_THCTensorScatterGather.cu.o.depend;/mnt/local/mnaderan/pt/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/THC/torch_cuda_generated_THCTensorTopK.cu.o.depend;/mnt/local/mnaderan/pt/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/THC/torch_cuda_generated_THCTensorSort.cu.o.depend;/mnt/local/mnaderan/pt/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/THC/torch_cuda_generated_THCSortUtils.cu.o.depend;/mnt/local/mnaderan/pt/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/THC/torch_cuda_generated_THCTensorMode.cu.o.depend;/mnt/local/mnaderan/pt/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/THC/generated/torch_cuda_generated_THCTensorSortByte.cu.o.depend;/mnt/local/mnaderan/pt/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/THC/generated/torch_cuda_generated_THCTensorMathPointwiseByte.cu.o.depend;/mnt/local/mnaderan/pt/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/THC/generated/torch_cuda_generated_THCTensorMathReduceByte.cu.o.depe
nd;/mnt/local/mnaderan/pt/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/THC/gen
``` | module: build,triaged,module: mkl | low | Critical |
542,171,675 | flutter | Need support for placeholder widget in FadeInImage |
## It would be good to show a placeholder widget like Flutter's CircularProgressIndicator when images are loading from the network.
Although FadeInImage does provide a placeholder ImageProvider, it only supports images/gifs. Placeholder images really don't make for a good user experience, and making or finding a loading gif that fits the app design is a challenging task. Although there's a [hack](https://flutter.dev/docs/cookbook/images/fading-in-images#in-memory) mentioned on the official website, it doesn't work well in the case of images containing transparency.
## If we could have an additional optional parameter for providing a placeholder widget, we could use widgets like CircularProgressIndicator or any other animated widget consistent with the design of the rest of the application while the images are loading.
| c: new feature,framework,a: images,c: proposal,P3,team-framework,triaged-framework | low | Critical |
542,213,054 | flutter | TextEditingController should have a keepValue field just like ScrollController has a keepScrollOffset value | c: new feature,framework,a: quality,P2,team-framework,triaged-framework | low | Minor |
542,255,141 | TypeScript | Suggestion: type narrowing based on user-defined type guards against properties |
## Search Terms
user-defined type guards, type predicates, type narrowing, object properties
## Suggestion
TS can narrow object types in `if` statements and other conditional control flows by various means. Particularly, if the object's type is a union type distinguishable by a "tag" (like `Option<T>` in the below example), we can narrow the type by checking the value of the tag. However, if the check for the tag is done by user-defined type guards against the property (but not the object itself), the object's type is not narrowed.
The suggestion is to make it work even if user-defined type guards are applied to object properties
## Use Cases
The "tagged union" pattern is widely used in TS codebases. If the tags themselves are complicated, we want to utilize user-defined type guards to check them.
## Examples
```ts
type Option<T> = {
type: "some"
value: T
} | {
type: "none"
}
const isSome = (type: "some" | "none"): type is "some" => type === "some"
declare const option: Option<number>
// Good: option is narrowed in the if block
if (option.type === "some") {
option.value
}
// Bad: option isn't narrowed
if (isSome(option.type)) {
// Error: Property 'value' does not exist on type 'Option<number>'.
option.value
}
// Available workaround
const isSomeObject = <T>(option: Option<T>): option is Extract<Option<T>, { type: "some" }> => option.type === "some"
if (isSomeObject(option)) {
option.value
}
```
[playground](https://www.typescriptlang.org/play/#code/C4TwDgpgBA8mwEsD2A7APAFQHxQLxQG8BYAKCilEgC4oAiAZyQFsJbTyA3AQwBsBXCDQykAvlAA+hdhXCC6KVK1GlSAY1T1gUBPQDKzaPgAUlOQwO0J8xbQCUNU9vp1GLS7hyPc3lxZUkAEwhVHi4AJ2h1FE0oJHhkFBo4RFQ0FD4mACMIMKx-AHp8qABxJCQAmjiUlCcoFHCwpAB3CADtGuAAC2gEADMoTJ4kVQBrUj6oIyqEgDovH3M3WykyWPjUGe5+CGUSUkKoACEuCrXqpxQAci16sMaWgPH+ox19Fin1lDnZW2XiVYOAFE7kgwjQAAqNSBhUBQS5bASXKABJAQZwKLQQAAeOi0qBkkDhyQSaQy2Vylxm0mmGwROxIIgKRQAgtwEKFBtAmqCRuEkHwUI8SFEYq8DDBMgArYJafCYLAfapJT7y+xnBK1QFY4BhLiqYBoYmpbAAGkIBLMrlYUBEOA86o283wiyUexIExeenFUplioSvxW5BpXzpoiAA)
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,In Discussion | low | Critical |
542,283,057 | pytorch | Typo in `torch.utils.tensorboard.add_image` | ## 📚 Documentation
In [`torch.utils.tensorboard.add_image`](https://pytorch.org/docs/stable/tensorboard.html#torch.utils.tensorboard.writer.SummaryWriter.add_image) there is a small typo:
> ... is also suitible as long as corresponding `dataformats` argument is passed. e.g. CHW, HWC, HW.
"suitible" should be "suitable" | oncall: visualization | low | Minor |
542,286,998 | pytorch | No auto-suggest capacity for Transformer | ## 🐛 Bug
1. I use PyCharm as my development tool.
1. When I typed `from torch.nn import Trans`, I couldn't get an auto-completion suggestion for Transformer.
1. Then I checked the source code in [torch.nn.modules.__init__.pyi.in](https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/__init__.pyi.in) and could not find a corresponding entry for Transformer (see the sketch below).
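As an editorial illustration only (not a confirmed patch): the classes import fine at runtime, so the missing suggestion comes from the type stub rather than the package itself. The module path and names in the commented stub line below are assumptions.

```python
# Runtime check: the classes exist in torch.nn (PyTorch >= 1.2 assumed), so the
# missing auto-suggestion is a gap in the .pyi stub that IDEs use for completion.
import torch.nn as nn

print(nn.Transformer, nn.TransformerEncoderLayer)

# Hypothetical stub entry of the kind the reporter expected to find in
# torch/nn/modules/__init__.pyi.in (illustrative only; names are assumptions):
#
#     from .transformer import Transformer as Transformer, \
#         TransformerEncoder as TransformerEncoder, \
#         TransformerDecoder as TransformerDecoder
```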
| triaged,enhancement | low | Critical |
542,312,735 | flutter | Duration and curve for FocusTraversalPolicy |
## Use case
I want to be able to set the duration and curve for animated scrolling to the focused element inside FocusTraversalPolicy.
## Proposal
Add duration and curve parameters to **FocusTraversalPolicy** and pass them to the `_focusAndEnsureVisible` method.
| framework,c: proposal,a: desktop,f: focus,P3,team-framework,triaged-framework | low | Critical |
542,322,061 | TypeScript | Optional chaining not working with void type | # Demo
## [playground demo](http://www.typescriptlang.org/play/?ts=3.8.0-dev.20191224#code/C4TwDgpgBAggJnAThAziqBeKBvAUFAqAYwEtQAuKFYREgOwHNcBfXXUSKAVRQkUxz5CACwD2AWwiV4SVOgA+UAK504EAGb0IcIQQDuoxAGtpCZGiiKAbqJI7WuNUQA2AQ2RRnEYMso8+bEoAdGKSAPxBpKC4wQbGEVEgbEA)
### TypeScript source with nightly version 3.8.0-dev.20191224
```ts
type Address = {
city: string
}
type User = {
home: Address | undefined
work: Address | void
}
declare let u: User
u.home?.city
u.work?.city
```
### JavaScript output `the same for both "work" and "home" properties`
```js
"use strict";
var _a, _b;
(_a = u.work) === null || _a === void 0 ? void 0 : _a.city;
(_b = u.home) === null || _b === void 0 ? void 0 : _b.city;
```
# Bug
If you run the code above, you will see that for `u.home?.city` there are no errors.
But for `u.work?.city` there is an error:
## `Error: Property 'city' does not exist on type 'void | Address'`
# Expectation
## `Optional chaining works for both 'undefined' and 'void' types the same way.` | Bug | medium | Critical |
542,328,528 | youtube-dl | Please add support for the onion URL of invidious. |
## Checklist
- [x] I'm reporting a new site support request
- [x] I've verified that I'm running youtube-dl version **2019.12.25**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that none of provided URLs violate any copyrights
- [x] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
- Single video: https://www.youtube.com/watch?v=BaW_jenozKc
- Single video: https://youtu.be/BaW_jenozKc
- Playlist: https://www.youtube.com/playlist?list=PL4lCao7KL_QFVb7Iudeipvc2BCavECqzc
Since Invidious is a middleman for YouTube, everything but the URL is the same as YouTube. That URL is "kgg2m7yk5aybusll.onion".
## Description
I would like the onion URL for invidious ( kgg2m7yk5aybusll.onion ) to be supported.
| site-support-request | low | Critical |
542,334,036 | youtube-dl | Support to https://live.fc2.com/ |
## Checklist
- [X ] I'm reporting a new site support request
- [X ] I've verified that I'm running youtube-dl version **2019.12.25**
- [X ] I've checked that all provided URLs are alive and playable in a browser
- [X ] I've checked that none of provided URLs violate any copyrights
- [X ] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
Any current live can be used as example: https://live.fc2.com/
Live example: https://live.fc2.com/77598545/
## Description
I would like to be able to download live streams from https://live.fc2.com/. I looked through the bugtracker and the only relevant issue there was from 2013, so I am requesting support for the site now. I tried grabbing links from the network tab in dev mode, but that didn't work.
| site-support-request | low | Critical |
542,382,023 | rust | [Rustdoc] A way to disambiguate between Trait methods and implemented ones in search? | 
Maybe a duplicate of #48069.
| T-rustdoc,C-feature-request,A-rustdoc-search | low | Minor |
542,389,720 | neovim | Setting langmap doesn't handle macros. | - `nvim --version`:
```
NVIM v0.4.3
Build type: Release
LuaJIT 2.1.0-beta3
Compilation: /usr/lib/ccache/bin/cc -fstack-clash-protection -D_FORTIFY_SOURCE=2 -mtune=generic -O2 -pipe -g -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=1 -O2 -DNDEBUG -DMIN_LOG_LEVEL=3 -Wall -Wextra -pedantic -Wno-unused-parameter -Wstrict-prototypes -std=gnu99 -Wshadow -Wconversion -Wmissing-prototypes -Wimplicit-fallthrough -Wvla -fstack-protector-strong -fdiagnostics-color=auto -DINCLUDE_GENERATED_DECLARATIONS -D_GNU_SOURCE -DNVIM_MSGPACK_HAS_FLOAT32 -DNVIM_UNIBI_HAS_VAR_FROM -I/builddir/neovim-0.4.3/build/config -I/builddir/neovim-0.4.3/src -I/usr/include -I/builddir/neovim-0.4.3/build/src/nvim/auto -I/builddir/neovim-0.4.3/build/include
Compiled by void-buildslave@a-hel-fi
Features: +acl +iconv +tui
See ":help feature-compile"
system vimrc file: "$VIM/sysinit.vim"
fall-back for $VIM: "/usr/share/nvim"
Run :checkhealth for more info
```
- `vim -u DEFAULTS` (version: ) behaves differently?
```
Nope.
```
- Operating system/version:
```
Void GNU+Linux (glib) (it is rolling release)
```
- Terminal name/version:
```
My own fork of ST 6.2
```
- `$TERM`:
```
st-256color
```
### Steps to reproduce using `nvim -u NORC`
1. Set `langmap` like this: `set langmap=kh,mj,lk,yl`
2. Create a 3-by-3 area of random characters, then go to the middle character of the middle (second) line, like this:
```
asd
asd <-- Cursor should be on this line and on the "s" character.
asd
```
3. Record a macro with your favorite register and, after that, just press the `l` key.
4. Run the macro.
### Actual behavior
When you press the `l` key, it moves your cursor up, but when you run the macro, it moves your cursor to the right (the default behavior of `l` in neovim/vim).
### Expected behavior
It should move the cursor up again.
### Side note
You can "fix" this behavior with `set langremap`, ***but*** it will break all of the plug-ins you have, which is not something you would ever want; I learned this the hard way. :) | bug-vim | medium | Critical |
542,396,057 | godot | Scene opens as [Unsaved] when it's already opened in Script Editor | **Godot version:**
3.2 beta4
**Issue description:**
This one is complicated, but well, a bug is a bug. Related to #33320
**Steps to reproduce:**
1. Add .tscn to `editor/search_in_file_extensions`
2. Search for something in a .tscn file (make sure it's not currently opened)
3. Click on the result
4. The .tscn file should be opened as text in Script Editor (this is important)
5. Open that scene from file explorer
6. The scene appears as an [Unsaved] tab | bug,topic:editor,confirmed | low | Critical |
542,404,799 | opencv | 3.4.9 build error with matlab2019b | Ubuntu 16.04.06, OpenCV 3.4.9 with modules, built with MATLAB 2019b.
```
[ 99%] Built target opencv_test_videostab
CMake Error at /home/<USER_NAME>/opencv/opencv_contrib-3.4.9/modules/matlab/compile.cmake:54 (message):
Failed to compile CamShift: In file included from
/home/<USER_NAME>/opencv/opencv-3.4.9/build/modules/matlab/src/CamShift.cpp:13:0:
/home/<USER_NAME>/opencv/opencv_contrib-3.4.9/modules/matlab/include/opencv2/matlab/bridge.hpp:95:17:
error: ‘TonemapDurand’ was not declared in this scope
typedef cv::Ptr<TonemapDurand> Ptr_TonemapDurand;
^
/home/<USER_NAME>/opencv/opencv_contrib-3.4.9/modules/matlab/include/opencv2/matlab/bridge.hpp:95:30:
error: template argument 1 is invalid
typedef cv::Ptr<TonemapDurand> Ptr_TonemapDurand;
^
/home/<USER_NAME>/opencv/opencv_contrib-3.4.9/modules/matlab/include/opencv2/matlab/bridge.hpp:512:11:
error: ‘cv::bridge::Bridge& cv::bridge::Bridge::operator=(const
Ptr_TonemapDurand&)’ cannot be overloaded
Bridge& operator=(const Ptr_TonemapDurand& ) { return *this; }
^
/home/<USER_NAME>/opencv/opencv_contrib-3.4.9/modules/matlab/include/opencv2/matlab/bridge.hpp:292:11:
error: with ‘cv::bridge::Bridge& cv::bridge::Bridge::operator=(const
int&)’
Bridge& operator=(const int& ) { return *this; }
^
modules/matlab/CMakeFiles/opencv_matlab.dir/build.make:60: recipe for target 'modules/matlab/compile.proxy' failed
make[2]: *** [modules/matlab/compile.proxy] Error 1
CMakeFiles/Makefile2:9484: recipe for target 'modules/matlab/CMakeFiles/opencv_matlab.dir/all' failed
make[1]: *** [modules/matlab/CMakeFiles/opencv_matlab.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
``` | priority: low,category: build/install,category: contrib,category: matlab bindings | low | Critical |
542,407,070 | pytorch | 'torch.load' report 'bad pickle data' | ## 🐛 Bug
## To Reproduce
Hello, everyone. I am using pytorch1.0.1 to load pretrained models.
According to torch/serialization.py,
```
Example:
>>> torch.load('tensors.pt')
# Load all tensors onto the CPU
>>> torch.load('tensors.pt', map_location=torch.device('cpu'))
# Load all tensors onto the CPU, using a function
>>> torch.load('tensors.pt', map_location=lambda storage, loc: storage)
# Load all tensors onto GPU 1
>>> torch.load('tensors.pt', map_location=lambda storage, loc: storage.cuda(1))
# Map tensors from GPU 1 to GPU 0
>>> torch.load('tensors.pt', map_location={'cuda:1':'cuda:0'})
# Load tensor from io.BytesIO object
>>> with open('tensor.pt') as f:
buffer = io.BytesIO(f.read())
>>> torch.load(buffer)
```
This means that I can read pth files as bytes and then call io.BytesIO(); 'torch.load' can load the buffer because 'io.BytesIO' is a file-like object.
But when I tested the [pth file](https://download.pytorch.org/models/alexnet-owt-4df8aa71.pth), it failed; the detailed information is
```
Traceback (most recent call last):
File "test1.py", line 24, in <module>
model_params = torch.load(pth_buf, map_location='cpu')
File "/usr/local/lib/python3.5/dist-packages/torch/serialization.py", line 368, in load
return _load(f, map_location, pickle_module)
File "/usr/local/lib/python3.5/dist-packages/torch/serialization.py", line 532, in _load
magic_number = pickle_module.load(f)
_pickle.UnpicklingError: bad pickle data
```
My code is:
```
with open('./model/alexnet/ch/alexnet-owt-4df8aa71.pth', 'rb') as f:
file_buf = f.read()
pth_buf = io.BytesIO(file_buf)
model_params = torch.load(pth_buf, map_location='cpu')
```
Notice that I must use the pth buffer rather than the pth file.
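For reference, here is a minimal, self-contained sketch (an editorial addition, not the reporter's script) of the documented `io.BytesIO` round trip; it exercises the same code path with a freshly saved checkpoint, independent of the downloaded alexnet file. Note that the buffer has to be rewound with `seek(0)` before loading.

```python
import io

import torch

buf = io.BytesIO()
torch.save({"w": torch.zeros(3)}, buf)       # write a tiny checkpoint into the buffer
buf.seek(0)                                  # rewind before reading it back
state = torch.load(buf, map_location="cpu")  # load from the file-like object
print(state["w"].shape)                      # expected: torch.Size([3])
```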
## Environment
Please copy and paste the output from our
[environment collection script](https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py)
(or fill out the checklist below manually).
You can get the script and run it with:
```
wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
```
- PyTorch Version (e.g., 1.0): 1.0.1
- OS (e.g., Linux): ubuntu16.04
- How you installed PyTorch (`conda`, `pip`, source): pip
- Build command you used (if compiling from source):
- Python version: 3.5.2
- CUDA/cuDNN version: no
- GPU models and configuration: no
- Any other relevant information:
| module: serialization,triaged | low | Critical |
542,433,178 | rust | Wrong compiler error message. | ```rust
trait Baz {}
trait Bar2<A> { type X; }
struct Foo<'a>(&'a str);
impl<'a, A, B> Baz for B where B: Bar2<Foo<'a>, X=A> {}
```
The current error message is
"error[E0207]: the type parameter `A` is not constrained by the impl trait, self type, or predicates"
but the actual error in the code is that `'a` is not constrained. `A` is used in an associated type, which is correct according to the documentation of `E0207`. The correct fix would be to move `'a` into a HRTB, as in `for<'a> Bar2<Foo<'a>, X=A>`.
| A-diagnostics,T-compiler,C-bug,D-incorrect | low | Critical |
542,455,573 | svelte | Allow to specify custom base class for Svelte Components | Could we have an option to specify a custom base class for Svelte components?
```html
<script>
import {BaseSvelteComponent} from './base';
</script>
<svelte:options base="BaseSvelteComponent"/>
```
base.js:
```
import {SvelteComponent} from 'svelte';
export class BaseSvelteComponent extends SvelteComponent {
  /* or could be your own completely custom implementation */
}
```
It would be a step toward making Svelte a more generic compiler and could move many contested decisions out of the scope of the Svelte project.
It allows end-users of the compiler to make their own decisions. I think it could resolve things like [Component inheritance](https://github.com/sveltejs/svelte/issues/192), move towards a solution for [Custom element without shadow DOM](https://github.com/sveltejs/svelte/issues/1748) and [Support attachShadow({mode: 'closed'})](https://github.com/sveltejs/svelte/issues/2972), handle [Native HTML elements](https://github.com/sveltejs/svelte/issues/1869) and
[Form-associated custom elements](https://html.spec.whatwg.org/multipage/custom-elements.html#custom-elements-face-example) and minor things like [Define some attributes on Custom Elements ](https://github.com/sveltejs/svelte/issues/3919)
And it seems pretty easy to implement - https://github.com/kmmbvnr/svelte/commit/4a7f3171050dea3d676bf928bbfb6644d09e0359
| feature request,custom element | medium | Minor |
542,503,964 | flutter | Horizontal Stepper in flutter | I am trying to create a horizontal stepper, but I am failing to do so.
I have raised a Stack Overflow question here as well:
https://stackoverflow.com/questions/59484581/horizontal-stepper-in-flutter/59485333#59485333
Please let me know where I am wrong. | framework,d: examples,would be a good package,c: proposal,P3,team-framework,triaged-framework | low | Minor |