id | repo | title | body | labels | priority | severity
---|---|---|---|---|---|---|
598,275,239 |
rust
|
Auto-fix wrongly suggesting scope issues during struct manipulations
|
<!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
Hi, there's an auto-fix bug. I'm using VS Code. The code explains itself:
I tried this code:
```rust
use std::fmt;
use std::fmt::Error;
use std::fmt::Formatter;
struct Structure(i32);
#[derive(Debug)]
struct Point {
x : i32,
y : i32,
}
impl fmt::Display for Structure {
fn fmt(&self, f: &mut Formatter) -> Result<(), Error> {
write!(f, "The value inside the structure is: {}", self.0)
}
}
fn main() {
let s = Structure(12);
println!("{}", s);
let point = {x = 12; y = 45};
println!("{}")
}
```
I expected the auto-fix to suggest ways to construct and print `Point` correctly.
Instead, the auto-fix in VS Code suggests changing `x` and `y` to `s` on line 23.
However, the error message from rustc itself displays fine.
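For comparison, a minimal corrected version of the relevant part of the snippet (my own sketch, assuming the intent was to build a `Point` struct literal and print it via `Debug`) would be:
```rust
#[derive(Debug)]
struct Point {
    x: i32,
    y: i32,
}

fn main() {
    // Construct the struct with a struct literal instead of `{x = 12; y = 45}`.
    let point = Point { x: 12, y: 45 };
    // `Point` only derives `Debug`, so print it with `{:?}`.
    println!("{:?}", point);
}
```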
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.42.0 (b8cedc004 2020-03-09)
binary: rustc
commit-hash: b8cedc00407a4c56a3bda1ed605c6fc166655447
commit-date: 2020-03-09
host: x86_64-unknown-linux-gnu
release: 1.42.0
LLVM version: 9.0
```
<!--
Include a backtrace in the code block by setting `RUST_BACKTRACE=1` in your
environment. E.g. `RUST_BACKTRACE=1 cargo build`.
-->
<details><summary>Backtrace</summary>
<p>
```
RUST_BACKTRACE=1 cargo build
Compiling hello_world v0.1.0 (/home/jerry/rust-experiment/hello_world)
error: 1 positional argument in format string, but no arguments were given
--> src/main.rs:26:15
|
26 | println!("{}")
| ^^
error[E0425]: cannot find value `x` in this scope
--> src/main.rs:23:9
|
23 | x = 12;
| ^ help: a local variable with a similar name exists: `s`
error[E0425]: cannot find value `y` in this scope
--> src/main.rs:24:9
|
24 | y = 45;
| ^ help: a local variable with a similar name exists: `s`
error: aborting due to 3 previous errors
For more information about this error, try `rustc --explain E0425`.
error: could not compile `hello_world`.
```
</p>
</details>
|
C-enhancement,A-diagnostics,T-compiler
|
low
|
Critical
|
598,282,397 |
opencv
|
opencv.js detectMultiScale piles up a lot of memory when using large images
|
##### System information (version)
- OpenCV => 4.3 (problem also exists for older versions)
- Operating System / Platform => Linux (with Mozilla Firefox 74.0 (64-bit))
- Compiler => using opencv.js from demo page
##### Detailed description
When detectMultiScale from opencv.js runs on large photos (e.g. from common DSLRs), a lot of memory is used and not freed after use.
Running the demo (https://docs.opencv.org/4.3.0/js_face_detection.html) on an 8 MB JPEG file results in >1 GB of used memory, which is not released after detectMultiScale has finished or after loading a different image.
I am not really sure, but it could be related to this issue: https://github.com/opencv/opencv/issues/15060 !?
##### Steps to reproduce
1. Go to: https://docs.opencv.org/4.3.0/js_face_detection.html
2. Load a large photo which contains faces (I used 6000x4000px, ~8MB, jpeg)
3. Check the memory of the tab (e.g. by using the browser's dev functions)
4. Load a different photo
5. Check memory again, and see that it is even larger than in step 3.
|
category: javascript (js)
|
low
|
Minor
|
598,285,696 |
flutter
|
Add consistent across all platforms onDonePressed callback to TextField
|
This issue is a follow up of https://github.com/flutter/flutter/issues/49785
Please check it out.
TextField's **onSubmitted** and **onEditingComplete** behave differently on different platforms.
Currently, there are the following differences:
On mobile - **onSubmitted** is **not** fired if you tap another TextField or blank space.
On web - **onSubmitted** **is** fired if you tap another TextField or blank space (undesired).
On mobile web - **onSubmitted** **is** fired when the virtual keyboard is closed and when the app is sent to the background (undesired).
I suggest adding an **onDonePressed** callback that would be invoked **only** when the user presses **Enter**.
It would allow having the same behavior on mobile and web without changing code or creating workarounds.
Thanks!
|
a: text input,c: new feature,framework,f: material design,a: quality,c: proposal,P3,team-text-input,triaged-text-input
|
low
|
Minor
|
598,293,384 |
pytorch
|
After `create_graph=True`, calculating `backward()` on sparse Tensor fails
|
## 🐛 Bug
After retaining the calculation graph of the gradients ( `create_graph=True`), `backward()` fails on sparse Tensor.
The following error occurs:
`RuntimeError: calculating the gradient of a sparse Tensor argument to mm is not supported.`
## To Reproduce
```
import torch
batch_size = 10
inp_size = 1000
out_size= 40
X = torch.randn(batch_size, inp_size).to_sparse().requires_grad_(True)
y = torch.randn(batch_size, out_size).requires_grad_(True)
weight = torch.nn.Parameter(torch.randn(inp_size, out_size))
y_out = torch.sparse.mm(X, weight)
criterion = torch.nn.MSELoss()
loss = criterion(y_out, y)
grad = torch.autograd.grad(loss, [weight], create_graph=True)[0]
expected_grad = torch.randn(grad.size())
L = ((grad - expected_grad) ** 2).sum()
L.backward()
```
## Expected behavior
After `L.backward()`, dL/dX and dL/dy should be calculated based on the gradient's calculation graph.
## Environment
```
PyTorch version: 1.4.0
Is debug build: No
CUDA used to build PyTorch: 10.1
OS: Ubuntu 18.04.4 LTS
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
CMake version: Could not collect
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 9.1.85
GPU models and configuration:
GPU 0: Tesla V100-PCIE-32GB
GPU 1: Tesla V100-PCIE-32GB
Nvidia driver version: 440.33.01
cuDNN version: Could not collect
Versions of relevant libraries:
[pip] Could not collect
[conda] Could not collect
```
- pip 20.0.2
- numpy 1.17.0
- torch 1.4.0
- torchvision 0.5.0
|
triaged,enhancement
|
low
|
Critical
|
598,300,467 |
pytorch
|
Issue when linking C++ code with libtorch_cpu: cuda not detected
|
## 🐛 Bug
(not sure how much of a bug this is, maybe it's expected)
I'm currently working on adapting some [rust bindings](https://github.com/LaurentMazare/tch-rs) for PyTorch and in the process came across the following issue, for which I don't have a good solution. The issue can be shown with plain C++ code.
- When linking the final binary with `-ltorch -ltorch_cpu -lc10`, `torch::cuda::is_available()` returns false.
- When linking the final binary with `-Wl,--no-as-needed -ltorch -ltorch_cpu -lc10`, `torch::cuda::is_available()` returns true.
- When linking without `-ltorch_cpu`, I get a missing symbol error for: `c10::Dispatcher::singleton`.
I tried compiling some C++ pytorch code with cmake and it seems to use --no-as-needed to get this to work.
Is there a way to get some external code to compile without libtorch_cpu?
One difficulty is that the rust build system does not let you specify arbitrary linker flags so I cannot easily set `-Wl,--no-as-needed`.
## To Reproduce
The issue can be reproduced using the C++ code below.
```c++
#include <torch/torch.h>
#include <iostream>
int main() {
std::cout << torch::cuda::is_available() << std::endl;
}
```
Then:
- `g++ test.cpp -std=gnu++14 -ltorch -ltorch_cpu -lc10 && ./a.out` prints 0.
- `g++ test.cpp -std=gnu++14 -Wl,--no-as-needed -ltorch -ltorch_cpu -lc10 && ./a.out` prints 1.
## Expected behavior
I would have hoped for cuda to be reported as available without the `-Wl,--no-as-needed` flag.
## Environment
- PyTorch Version (e.g., 1.0): release/1.5 branch as of 2020-04-11.
- OS (e.g., Linux): Linux (ubuntu 18.04)
- How you installed PyTorch (`conda`, `pip`, source): source, compiled with cuda support
- Build command you used (if compiling from source): `python setup.py build`
- Python version: 3.7.1.
- CUDA/cuDNN version: 10.0/none.
- GPU models and configuration: 1x GeForce RTX 2080.
- Any other relevant information: gcc/g++ 7.5.0, ld 2.3.0
## Additional context
cc @yf225
|
module: build,module: cpp,triaged
|
low
|
Critical
|
598,329,987 |
create-react-app
|
Update workbox plugin to 5.1.2 resolving ServiceWorker quota exceeded errors
|
### Is your proposal related to a problem?
Currently our e-commerce site is experiencing millions of ServiceWorker `DOMException: QuotaExceededError` errors ([we're not the only ones](https://github.com/GoogleChrome/workbox/pull/1505)), which is almost blowing through our monthly Rollbar limit 😬. I did some brief research and found that Google is aware of this issue and [recommends `purgeOnQuotaError`](https://developers.google.com/web/tools/workbox/guides/storage-quota#purgeonquotaerror) to potentially fix most of these runtime quota issues.
I could be mistaken, but it looks as if [CRA is using workbox 4.3.1](https://github.com/facebook/create-react-app/blob/c5b96c2853671baa3f1f297ec3b36d7358898304/packages/react-scripts/package.json#L84), and `purgeOnQuotaError` was [added in 5.0.0 and enabled by default](https://github.com/GoogleChrome/workbox/blob/v5.0.0/packages/workbox-build/src/options/defaults.js#L29).
Since CRA follows best practices, I highly suggest we update the workbox plugin ASAP so we can be on par with Google's best practices for handling ServiceWorkers.
Thanks and love this app!
### Describe the solution you'd like
[Update `workbox-webpack-plugin` to 5.1.2](https://github.com/facebook/create-react-app/pull/8822)
### Describe alternatives you've considered
None possible without ejecting.
|
issue: proposal,needs triage
|
low
|
Critical
|
598,334,041 |
go
|
x/tools/gopls: fuzzy completion does not return result when expected
|
<!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version devel +801cd7c84d Thu Apr 2 09:00:44 2020 +0000 linux/amd64
$ go list -m golang.org/x/tools
golang.org/x/tools v0.0.0-20200408132156-9ee5ef7a2c0d => github.com/myitcvforks/tools v0.0.0-20200408225201-7e808beafd9f
$ go list -m golang.org/x/tools/gopls
golang.org/x/tools/gopls v0.0.0-20200408132156-9ee5ef7a2c0d => github.com/myitcvforks/tools/gopls v0.0.0-20200408225201-7e808beafd9f
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/myitcv/.cache/go-build"
GOENV="/home/myitcv/.config/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOINSECURE=""
GONOPROXY=""
GONOSUMDB=""
GOOS="linux"
GOPATH="/home/myitcv/gostuff"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/home/myitcv/gos"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/home/myitcv/gos/pkg/tool/linux_amd64"
GCCGO="gccgo"
AR="ar"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD="/home/myitcv/.vim/plugged/govim/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build227804233=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
```
-- go.mod --
module github.com/myitcv/playground
go 1.12
-- main.go --
package main
import (
"fmt"
)
func main() {
fmt.Prnf
}
```
### What did you expect to see?
Triggering completion at the end of `fmt.Prnf` to return a single result, `fmt.Printf`
### What did you see instead?
Triggering completion at the end of `fmt.Prnf` did not return any results.
See the following `gopls` log: [bad.log](https://github.com/golang/go/files/4465248/bad.log)
---
cc @stamblerre @muirdm
FYI @leitzler
|
NeedsInvestigation,gopls,Tools
|
low
|
Critical
|
598,339,833 |
rust
|
Typo suggestion doesn't account for types
|
<!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
When using multiple transmitters with a single receiver across multiple threads, the `help:` message is totally bonkers. Refer to the backtrace, lines 10 and 16:
```rust
use std::sync::mpsc::channel;
use std::thread;
fn main() {
let (tx1, rx) = channel();
let tx2 = tx1.clone();
thread::spawn(move || {
for i in 1..10 {
tx.send(i);
}
});
thread::spawn(move || {
for i in 1..10 {
tx.send(i);
}
});
for i in 1..10 {
println!("{}", rx.recv().unwrap());
}
}
```
The auto-fix seems to be too ambitious at the moment, suggesting wrong fixes. Let me know if I need to file an RFC for this. Java's auto-fix seems smarter, and Rust could definitely borrow some of its approach.
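For reference, here is a version of the snippet that compiles (my own sketch, assuming the intent was to move `tx1` into the first thread and `tx2` into the second):
```rust
use std::sync::mpsc::channel;
use std::thread;

fn main() {
    let (tx1, rx) = channel();
    let tx2 = tx1.clone();
    thread::spawn(move || {
        for i in 1..10 {
            // Ignore the send result in case the receiver has already exited.
            let _ = tx1.send(i);
        }
    });
    thread::spawn(move || {
        for i in 1..10 {
            let _ = tx2.send(i);
        }
    });
    for _ in 1..10 {
        println!("{}", rx.recv().unwrap());
    }
}
```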
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.42.0 (b8cedc004 2020-03-09)
binary: rustc
commit-hash: b8cedc00407a4c56a3bda1ed605c6fc166655447
commit-date: 2020-03-09
host: x86_64-unknown-linux-gnu
release: 1.42.0
LLVM version: 9.0
```
<!--
Include a backtrace in the code block by setting `RUST_BACKTRACE=1` in your
environment. E.g. `RUST_BACKTRACE=1 cargo build`.
-->
<details><summary>Backtrace</summary>
<p>
```
Compiling hello_world v0.1.0 (/home/jerry/rust-experiment/hello_world)
error[E0425]: cannot find value `tx` in this scope
--> src/main.rs:10:13
|
10 | tx.send(i);
| ^^ help: a local variable with a similar name exists: `rx`
error[E0425]: cannot find value `tx` in this scope
--> src/main.rs:16:13
|
16 | tx.send(i);
| ^^ help: a local variable with a similar name exists: `rx`
error: aborting due to 2 previous errors
For more information about this error, try `rustc --explain E0425`.
error: could not compile `hello_world`.
To learn more, run the command again with --verbose.
```
</p>
</details>
|
C-enhancement,A-diagnostics,T-compiler,A-suggestion-diagnostics,D-papercut
|
low
|
Critical
|
598,342,715 |
vscode
|
Task terminal truncates long/quick output.
|
<!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Also please test using the latest insiders build to make sure your issue has not already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- Use Help > Report Issue to prefill these. -->
VS Code version: Code 1.43.0 (78a4c91400152c0f27ba4d363eb56d2835f9903a, 2020-03-09T19:44:52.965Z)
OS version: Linux x64 5.4.25
### Steps to Reproduce
Run "test" task using this `.vscode/tasks.json`:
```json
{
"version": "2.0.0",
"tasks": [
{
"label": "test",
"type": "shell",
"command": "seq 1000 | sed 's/$/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/'",
}
]
}
```
### Expected behavior
All 1000 lines are printed, the last 3 lines being:
(this is how it works when run in a regular VS Code terminal)
```
998XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
999XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
1000XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
```
### Actual behavior
Task output is truncated randomly, these being two example runs' last 3 lines:
```
741XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
742XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
743XXXXXXXXXXXXXXXXX
```
```
754XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
755XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
756XXXXXXXXXXXXXXX
```
### Additional background
Reading the description of #75139, I'm *pretty sure* it's the same bug.
It was marked as a duplicate of #38137, but I'm not sure that's correct.
I've only seen this in the last few months, *after* #38137 was supposedly fixed, and assumed the problem was a change in the process producing the output, or some bug/setting in SSH, which is why I haven't reported it earlier.
However, after coming across #75139, I did some experimentation, and even a task that only does `cat build-log` results in truncation (`build-log` is 128kB and has 2665 lines, the current output from a real workload).
And then I simplified that to the command above, which reproduces easily.
Even `seq 1000` alone was truncated once, at `841`.
And `seq 10000` truncates even more easily; example last 3 lines from some runs:
```
9595
9596
959
```
```
7437
7438
7439
```
```
8143
8144
814
```
|
bug,tasks,terminal-process
|
low
|
Critical
|
598,343,409 |
godot
|
node_added isn't triggered for nodes in the Main Scene
|
**Godot version:**
v3.2.1.stable.mono.official
**OS/device including version:**
Windows 10
**Issue description:**
I have an auto-loaded `Node` that connects to `node_added` in `_Ready()`.
My connected function fires for nodes that are added via AddChild (even for nodes added in the Main Scene's _Ready()), but it is never triggered for the nodes that already exist in the Main Scene itself.
By logging, I confirmed the `node_added` signal is connected before the Main Scene's `_Ready()` is called.
**Steps to reproduce:**
(All in C#)
1. Create a node and have it connect to `node_added` in its `_Ready()` method.
2. Set that node to Auto Load in your project settings
3. Setup a scene with multiple nodes
4. Make that scene your Main Scene in your project settings
5. Run the project and notice that node_added is not triggered for any of the nodes in your Main Scene.
**Minimal reproduction project:**
[NodeAddedIssue.zip](https://github.com/godotengine/godot/files/4465343/NodeAddedIssue.zip)
|
bug,topic:core
|
low
|
Minor
|
598,350,725 |
rust
|
Lint exported_private_dependencies misses public dependency via trait impl
|
Tracking issue: #44663, RFC: rust-lang/rfcs#1977
Cargo.toml:
```toml
cargo-features = ["public-dependency"]
[package]
name = "playground"
version = "0.0.0"
edition = "2018"
[dependencies]
num-traits = "0.2"
```
lib.rs:
```rust
pub struct S;
impl std::ops::Add for S {
type Output = S;
fn add(self, _: Self) -> Self::Output {
unimplemented!()
}
}
impl num_traits::Zero for S {
fn zero() -> Self {
unimplemented!()
}
fn is_zero(&self) -> bool {
unimplemented!()
}
}
```
A plain `pub use` seems to be missed as well.
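For illustration, the `pub use` case I mean is re-exporting an item from the private dependency directly (a minimal sketch on top of the `Cargo.toml` above):
```rust
// In lib.rs: re-export a trait from the non-public `num-traits` dependency.
// One would expect exported_private_dependencies to fire here as well, but it doesn't.
pub use num_traits::Zero;
```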
|
A-lints,A-visibility,T-compiler,C-bug,F-public_private_dependencies
|
low
|
Minor
|
598,355,312 |
flutter
|
Ensure that macOS rebuilds are fast
|
We need to audit the macOS build pipeline and ensure that we aren't doing excessive work on what should be minimal rebuilds.
I believe there are still some known issues in how `assemble` interacts with the Xcode build. @jonahwilliams are there existing bugs for that I should list as blockers?
|
tool,platform-mac,a: desktop,a: build,P2,team-macos,triaged-macos
|
low
|
Critical
|
598,355,627 |
flutter
|
Ensure that Windows rebuilds are fast
|
We need to audit the Windows build pipeline and ensure that we aren't doing excessive work on what should be minimal rebuilds.
Some areas I'm aware of needing investigation:
- The interaction with the Flutter build step.
- Unconditionally writing config files, which may cause VS to rebuild everything even when their contents haven't changed.
|
tool,platform-windows,a: desktop,a: build,P2,team-windows,triaged-windows
|
low
|
Minor
|
598,355,801 |
flutter
|
Ensure that Linux rebuilds are fast
|
We need to audit the Linux build pipeline and ensure that we aren't doing excessive work on what should be minimal rebuilds.
This is something we should wait until we've switched to `CMake` to investigate, as there's no reason to tune the current Make builds.
|
tool,platform-linux,a: desktop,a: build,P3,team-linux,triaged-linux
|
low
|
Minor
|
598,370,573 |
godot
|
godot 3.2 VehicleBody and VehicleWheel engine_force strange behavior
|
**Some backstory:**
In Godot 3.1, if I apply an engine force it works just about the way I expect it to.
In Godot 3.2, the same project starts behaving strangely; I noticed a reduction in the power of vehicles.
After some investigation I found that the engine force applied is not what I set, but is actually divided by (number_of_wheels / number_of_traction_wheels). I only tested 4-wheel cars. In my case, if I make an RWD car I need twice the engine force so the car accelerates as I want. If I make a 1WD car with 4 wheels in total, then I need to multiply the engine force by 4.
I thought it might work correctly with the new "Pre-Wheel Motion", but it works the same way: I need to apply an engine force to the wheel multiplied by (number_of_wheels / number_of_traction_wheels) to make it work properly.
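A worked example of the behavior described above (my reading of the report, assuming a 4-wheel car with 2 traction wheels):

$$F_\mathrm{applied} = \frac{F_\mathrm{engine}}{n_\mathrm{wheels} / n_\mathrm{traction}} = \frac{F_\mathrm{engine}}{4 / 2} = \frac{F_\mathrm{engine}}{2},$$

so the engine force has to be doubled to get the acceleration that was actually requested.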
**How I expect it to work:**
If I set engine_force on a VehicleBody and it has at least one VehicleWheel with use_as_traction=true that is in contact with the ground, then the VehicleBody is accelerated with the engine_force I set.
OR
If I set engine_force on a VehicleWheel with use_as_traction=true that is in contact with the ground, then this VehicleWheel is accelerated with the engine_force I set.
**How it actually works:**
If I set engine_force on a VehicleBody or VehicleWheel, then the actual acceleration that is applied is divided by (number_of_wheels / number_of_traction_wheels).
|
bug,topic:physics
|
low
|
Minor
|
598,377,751 |
pytorch
|
Comparison ops for Complex Tensors
|
This issue is intended to roll-up several conversations about how PyTorch should compare complex values and how functions that logically rely on comparisons, like min, max, sort, and clamp, should work when given complex inputs. See https://github.com/pytorch/pytorch/issues/36374, which discussed complex min and max, and https://github.com/pytorch/pytorch/issues/33568, which discussed complex clamp. The challenge of comparing complex numbers is not limited to PyTorch, either, see https://github.com/numpy/numpy/issues/15630 for NumPy's discussion of complex clip.
Comparing complex numbers is challenging because the complex numbers aren't part of any ordered field. In NumPy, they're typically compared lexicographically: comparing the real part and only comparing the imaginary part if the real parts are equal. C++ and Python, on the other hand, do not support comparison ops on complex numbers.
Let's use this issue to enumerate complex comparison options as well as their pros and cons.
The current options are:
- No default complex comparison
- Pros:
- Consistent with C++ and Python
- Behavior is always clear since the user must specify the type of comparison
- Cons:
- Divergent from NumPy, but a clear error
- Possibly inconvenient to always specify the comparison
- Lexicographic comparison
- Pros:
- Consistent with NumPy
- Cons:
- Clamp (clip) behavior seems strange: (3 - 100j) clamped below to (2 + 5j) is unchanged, clamped above to (2 + 5j) becomes (2 + 5j)
- Some users report wanting to compare complex by absolute value
- Absolute value comparison
- Pros:
- Some applications naturally compare complex numbers using their absolute values
- Cons:
- Divergent from NumPy, possibly a silent break
cc. @rgommers @dylanbespalko @ezyang @mruberry
cc @ezyang @anjali411 @dylanbespalko
|
triaged,module: complex,module: numpy
|
low
|
Critical
|
598,410,097 |
godot
|
Missing dependencies after renaming a placeholder scene
|
**Godot version:**
- v3.2.1.stable.official
- v4.0.dev.custom_build.9dc19f761
**OS/device including version:** macOS Catalina 10.15.4
**Issue description:**
Renaming / drag & dropping a scene to another folder may cause missing dependencies if the scene is marked "Load as Placeholder".
* If the main scene is still open, it can not be saved until I remove and readd the placeholder node.
* If the main scene is not open, I won't be able to open the scene again. The only way to fix it seems to be manually changing the `instance_placeholder="res://XXX.tscn"` field in the `tscn` file.
**Steps to reproduce:**
1. Create a new 2D scene, save as 'Sprite.tscn'
2. Create a new 2D scene, save as 'Main.tscn'
3. Instance Sprite in Main
4. Right click the Sprite node, and check 'Load as Placeholder'
5. Two cases:
1. Close the Main scene
* Rename 'Sprite.tscn' to 'Other.tscn' in FileSystem dock
* Try to open 'Main.tscn'
* Alert: Missing 'Main.tscn' or its dependencies.
2. Keep the Main scene open
* Rename 'Sprite.tscn' to 'Other.tscn' in FileSystem dock
* Try to save 'Main.tscn'
* Alert: Couldn't save scene. Likely dependencies (instances or inheritance) couldn't be satisfied.
**Minimal reproduction project:** N/A
|
bug,topic:editor,confirmed
|
low
|
Major
|
598,455,745 |
rust
|
no associated item found for struct in the current scope while it exists
|
The following code ([playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=e1b64e86edb39ace6b60a2cd4628a71b)):
```rust
trait Trait {
type Associated;
fn instance() -> Self::Associated;
}
struct Associated;
struct Struct;
impl Trait for Struct {
type Associated = Associated;
fn instance() -> Self::Associated {
Self::Associated
}
}
```
Fails with this error:
```
error[E0599]: no associated item named `Associated` found for struct `Struct` in the current scope
--> src/lib.rs:14:15
|
8 | struct Struct;
| -------------- associated item `Associated` not found for this
...
14 | Self::Associated
| ^^^^^^^^^^ associated item not found in `Struct`
```
However, if we alter `instance()` slightly to either of these, the code compiles successfully:
```rust
fn instance() -> Self::Associated {
Self::Associated {} // {} even though we have `struct S;`, not `struct S {}`
}
```
```rust
fn instance() -> Self::Associated {
Associated // outer scope struct definition
}
```
---
It is worth mentioning that explicitly using `as Trait` does **not** solve the issue, and fails with a different error:
```rust
fn instance() -> Self::Associated {
<Self as Trait>::Associated
}
```
Error:
```
error[E0575]: expected method or associated constant, found associated type `Trait::Associated`
--> src/lib.rs:14:9
|
14 | <Self as Trait>::Associated
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= note: can't use a type alias as a constructor
```
Adding `{}`:
```rust
fn instance() -> Self::Associated {
<Self as Trait>::Associated {}
}
```
Fails with:
```
error: expected one of `.`, `::`, `;`, `?`, `}`, or an operator, found `{`
--> src/lib.rs:14:37
|
14 | <Self as Trait>::Associated {}
| ^ expected one of `.`, `::`, `;`, `?`, `}`, or an operator
```
|
A-associated-items,T-lang,T-compiler,C-bug
|
low
|
Critical
|
598,458,429 |
rust
|
Support positional vectored IO for unix files
|
Next to the existing `write_vectored` and `read_vectored` APIs, it would be great to have `write_vectored_at` and `read_vectored_at` methods for `File`s on UNIX systems, corresponding to the `pwritev` and `preadv` syscalls. This would be an easy way to add means for high performance file IO using the standard library `File` type.
As far as I understand, the equivalent does not exist on Windows, so this would probably have to live in the [unix `FileExt` trait](https://doc.rust-lang.org/std/os/unix/fs/trait.FileExt.html#tymethod.write_at).
If this is deemed desirable, I'd be happy to send a PR for this.
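To make the proposal concrete, here is a rough sketch of what the additions could look like (the trait name and exact signatures are my assumptions, modelled on the existing `read_at`/`write_at` and `read_vectored`/`write_vectored` APIs, not a final design):
```rust
use std::io::{IoSlice, IoSliceMut, Result};

/// Hypothetical extension methods for unix files; in practice these would
/// probably be added to `std::os::unix::fs::FileExt` itself.
pub trait FileExtVectored {
    /// Read into the buffers starting at `offset`, like `preadv(2)`.
    fn read_vectored_at(&self, bufs: &mut [IoSliceMut<'_>], offset: u64) -> Result<usize>;
    /// Write the buffers starting at `offset`, like `pwritev(2)`.
    fn write_vectored_at(&self, bufs: &[IoSlice<'_>], offset: u64) -> Result<usize>;
}
```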
|
T-libs-api,C-feature-request
|
low
|
Major
|
598,461,506 |
bitcoin
|
scripts: check for .text.startup sections
|
From #18553:
[theuni](https://github.com/bitcoin/bitcoin/pull/18553#issuecomment-611717976)
> Sidenote: we could potentially add a check for illegal instructions in the .text.startup section in one of our python binary checking tools. Though we'd have to create a per-arch blacklist/whitelist to define "illegal".
[laanwj](https://github.com/bitcoin/bitcoin/pull/18553#issuecomment-611733956)
> My idea was to forbid .text.startup sections completely in all 'special' compilation units, e.g. those compiled with non-default instruction sets. I think that's easier to implement a check for than instruction white/blacklists.
|
Scripts and tools
|
low
|
Minor
|
598,476,455 |
rust
|
Adding a `Send` bound generates `the parameter type `S` must be valid for any other region`
|
I'm using `cargo 1.43.0-nightly (bda50510d 2020-03-02)` / `rustc 1.43.0-nightly (c20d7eecb 2020-03-11)`.
With this code (simplified from [this crate](https://github.com/Ekleog/yuubind/tree/7dbebd924fe73e7de905c24a46564f631b7e46c1), edition 2018 with as only dependency `futures = "0.3.4"`):
```rust
use futures::{prelude::*, Future, Stream};
use std::{
pin::Pin,
task::{Context, Poll},
};
pub struct Foo {}
impl Foo {
pub fn foo<'a, S>(
&'a mut self,
reader: &'a mut StreamWrapperOuter<S>,
) -> Pin<Box<dyn 'a + Send + Future<Output = ()>>>
where
S: 'a + Unpin + Send + Stream<Item = Vec<()>>,
{
Box::pin(async move {
let _res = reader.concat().await;
unimplemented!()
})
}
}
pub struct StreamWrapperOuter<'a, S>
where
S: Unpin + Stream<Item = Vec<()>>,
{
_source: &'a mut StreamWrapperInner<S>,
}
impl<'a, S> Stream for StreamWrapperOuter<'a, S>
where
S: Unpin + Stream<Item = Vec<()>>,
{
type Item = Vec<()>;
fn poll_next(self: Pin<&mut Self>, _ctx: &mut Context) -> Poll<Option<Self::Item>> {
unimplemented!()
}
}
struct StreamWrapperInner<S: Stream> {
_stream: S,
}
impl<S: Stream> Stream for StreamWrapperInner<S> {
type Item = S::Item;
fn poll_next(self: Pin<&mut Self>, _ctx: &mut Context) -> Poll<Option<S::Item>> {
unimplemented!()
}
}
```
I get the following error:
```rust
error[E0311]: the parameter type `S` may not live long enough
--> src/lib.rs:17:9
|
17 | / Box::pin(async move {
18 | | let _res = reader.concat().await;
19 | | unimplemented!()
20 | | })
| |__________^
|
= help: consider adding an explicit lifetime bound for `S`
= note: the parameter type `S` must be valid for any other region...
note: ...so that the type `StreamWrapperInner<S>` will meet its required lifetime bounds
--> src/lib.rs:17:9
|
17 | / Box::pin(async move {
18 | | let _res = reader.concat().await;
19 | | unimplemented!()
20 | | })
| |__________^
error: aborting due to previous error
```
However, after removing the `Send` bounds on both the return type of `Foo::foo` and its `S` type parameter, it compiles perfectly.
I'm pretty surprised that adding a `Send` bound apparently changes the lifetime requirements; and the fact that `S` "must be valid for any other region" makes me guess it's a stray `for<'r>`, like has already happened with async/await. What do you think about this?
|
A-lifetimes,T-compiler,C-bug
|
low
|
Critical
|
598,495,342 |
youtube-dl
|
clickfunnels
|
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2020.03.24. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that site you are requesting is not dedicated to copyright infringement, see https://yt-dl.org/copyright-infringement. youtube-dl does not support such sites. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
- Search the bugtracker for similar site support requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Finally, put x into all relevant boxes (like this [x])
-->
- [x] I'm reporting a new site support request
- [x] I've verified that I'm running youtube-dl version **2020.03.24**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that none of provided URLs violate any copyrights
- [x] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
<!--
Provide all kinds of example URLs support for which should be included. Replace following example URLs by yours.
-->
- URL : https://hkowableyoutubedl.clickfunnels.com/membership-area1586699963857
(the URL can be on a custom domain too, like for the script you made for the Teachable course)
You need to log in.
I have created a ClickFunnels membership that will work for 14 days.
login : [email protected]
password: youtubedl2020
## Description
Videos can be hosted on YouTube, Vimeo, Wistia, or through a direct custom video file link.
Thank you and have a good day :)
|
site-support-request
|
low
|
Critical
|
598,503,366 |
PowerToys
|
[Shortcut Guide] Dynamically add a shortcut to swap keyboard layouts when multiple are installed
|
If a user has multiple keyboard layouts installed, we can detect this, and showing the layout-switching shortcut would be a nice reminder that it exists.
Since this isn't relevant if you only have one layout, the entry should be dynamic.
|
Idea-Enhancement,Product-Shortcut Guide
|
low
|
Minor
|
598,520,405 |
TypeScript
|
[Feature Request] Ability to auto-generate a type file for local JS module
|
I'm facing the following issue when importing a JS file into a TS project (screenshot omitted):
The solution is to create a `@types/cognitoAuth/index.d.ts` file for the types, or to convert the file to TypeScript.
It would be great if in VS Code we could use the code action feature (Ctrl + .) to either generate an empty types file with boilerplate, or try to convert the file to TS.
|
Suggestion,Awaiting More Feedback
|
low
|
Minor
|
598,526,536 |
rust
|
Filling short slices is slow even if they are provably short
|
Take this method as baseline ([playground](https://play.rust-lang.org/?version=stable&mode=release&edition=2018&gist=6eed91dbb458e8eef083d0d24a958182)):
```rust
pub fn add_padding(input_len: usize, output: &mut [u8]) -> usize {
let rem = input_len % 3;
let padding_length = (3 - rem) % 3;
for i in 0..padding_length {
output[i] = b'=';
}
padding_length
}
```
padding_length can take on values in the range [0, 2], so we have to write either zero, one, or two bytes into our slice.
Benchmarking all three cases gives us these timings:
```
0: time: [2.8673 ns 2.8692 ns 2.8714 ns]
1: time: [3.2384 ns 3.2411 ns 3.2443 ns]
2: time: [3.7454 ns 3.7478 ns 3.7507 ns]
```
Chasing more idiomatic code we switch to iterators for the loop ([playground](https://play.rust-lang.org/?version=stable&mode=release&edition=2018&gist=d39fbb7d5c5bd2914ae14e538e589460)):
```rust
pub fn add_padding(input_len: usize, output: &mut [u8]) -> usize {
let rem = input_len % 3;
let padding_length = (3 - rem) % 3;
for byte in output[..padding_length].iter_mut() {
*byte = b'=';
}
padding_length
}
```
Given that this loop barely does any iterations, and thus has little performance potential in avoiding bounds checks, we expect about the same runtime.
```
absolute:
0: time: [3.2053 ns 3.2074 ns 3.2105 ns]
1: time: [5.4453 ns 5.4475 ns 5.4501 ns]
2: time: [6.0211 ns 6.0254 ns 6.0302 ns]
relative compared to baseline:
0: time: [+11.561% +11.799% +12.030%]
1: time: [+67.946% +68.287% +68.647%]
2: time: [+60.241% +60.600% +60.932%]
```
Oof, up to 68% slower...
**Let's see what's happening**
The baseline version copies one byte at a time, which isn't that bad when you copy at most two bytes:
```assembly
playground::add_padding:
pushq %rax
movq %rdx, %rcx
movabsq $-6148914691236517205, %r8
movq %rdi, %rax
mulq %r8
shrq %rdx
leaq (%rdx,%rdx,2), %rax
subq %rax, %rdi
xorq $3, %rdi
movq %rdi, %rax
mulq %r8
shrq %rdx
leaq (%rdx,%rdx,2), %rax
subq %rax, %rdi
je .LBB0_4
xorl %eax, %eax
.LBB0_2:
cmpq %rax, %rcx
je .LBB0_5
movb $61, (%rsi,%rax)
addq $1, %rax
cmpq %rdi, %rax
jb .LBB0_2
.LBB0_4:
movq %rdi, %rax
popq %rcx
retq
[snip panic code]
```
In comparison the iterator version does a full memset:
```assembly
playground::add_padding:
pushq %rbx
movq %rdx, %rcx
movq %rdi, %rbx
movabsq $-6148914691236517205, %rdi
movq %rbx, %rax
mulq %rdi
shrq %rdx
leaq (%rdx,%rdx,2), %rax
subq %rax, %rbx
xorq $3, %rbx
movq %rbx, %rax
mulq %rdi
shrq %rdx
leaq (%rdx,%rdx,2), %rax
subq %rax, %rbx
cmpq %rcx, %rbx
ja .LBB0_4
testq %rbx, %rbx
je .LBB0_3
movq %rsi, %rdi
movl $61, %esi
movq %rbx, %rdx
callq *memset@GOTPCREL(%rip)
.LBB0_3:
movq %rbx, %rax
popq %rbx
retq
[snip panic code]
```
For long slices memset would be a good choice, but for just a few bytes the overhead is simply too big. When using a constant range for testing, we see the compiler emitting different combinations of `movb`, `movw`, `movl`, `movabsq`, `movaps+movups` up to a length of 256 bytes. Only for slices longer than that is a memset used.
At some point the compiler already realizes that `padding_length` is always `< 3` as an `assert!(padding_length < 3);` gets optimized out completely. Whether this information is not available at the right place or is simply not utilized, I can't tell.
Wrapping the iterator version's loop in a `match` results in two things - the fastest version and a monstrosity ([playground](https://play.rust-lang.org/?version=stable&mode=release&edition=2018&gist=afa4e27d8e035a1419f784fb0b889a5a)).
```rust
pub fn add_padding(input_len: usize, output: &mut [u8]) -> usize {
let rem = input_len % 3;
let padding_length = (3 - rem) % 3;
match padding_length {
0 => {
for byte in output[..padding_length].iter_mut() {
*byte = b'=';
}
},
1 => {
for byte in output[..padding_length].iter_mut() {
*byte = b'=';
}
}
2 => {
for byte in output[..padding_length].iter_mut() {
*byte = b'=';
}
},
_ => unreachable!()
}
padding_length
}
```
```
absolute:
0: time: [2.8705 ns 2.8749 ns 2.8797 ns]
1: time: [3.2446 ns 3.2470 ns 3.2499 ns]
2: time: [3.4626 ns 3.4753 ns 3.4894 ns]
relative compared to baseline:
0: time: [-0.1432% +0.0826% +0.3052%]
1: time: [+0.0403% +0.2527% +0.4629%]
2: time: [-7.6693% -7.3060% -6.9496%]
```
It uses a `movw` when writing two bytes, which explains why this version is faster than baseline only in that case.
All measurements taken with criterion.rs and Rust 1.42.0 on an i5-3450. Care has been taken to ensure a low noise environment with reproducible results.
|
I-slow,C-enhancement,T-compiler,A-iterators
|
low
|
Major
|
598,529,555 |
flutter
|
[web] Web application not working on Legacy Edge and IE
|
## Steps to Reproduce
1. Run `flutter build web --release` and serve it using Python
1. Open the page with Edge or Firefox
1. A blank grey screen is shown (example here https://drive.getbigger.io)
The output of `flutter doctor` is the following:
```
flutter doctor -v
[✓] Flutter (Channel beta, v1.17.0, on Mac OS X 10.15.1 19B88, locale fr-FR)
• Flutter version 1.17.0 at
• Framework revision d3ed9ec945 (6 days ago), 2020-04-06 14:07:34 -0700
• Engine revision c9506cb8e9
• Dart version 2.8.0 (build 2.8.0-dev.18.0 eea9717938)
[✓] Android toolchain - develop for Android devices (Android SDK version 29.0.3)
• Android SDK
• Platform android-29, build-tools 29.0.3
• Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 1.8.0_212-release-1586-b4-5784211)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 11.3.1)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Xcode 11.3.1, Build version 11C504
• CocoaPods version 1.9.1
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 3.6)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin version 44.0.2
• Dart plugin version 192.7761
• Java version OpenJDK Runtime Environment (build 1.8.0_212-release-1586-b4-5784211)
[✓] VS Code (version 1.44.0)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.9.1
[✓] Connected device (3 available)
• iPhone 11 Pro Max • 3086AEFA-A07A-4A91-B5D0-93D8D059A96E • ios • com.apple.CoreSimulator.SimRuntime.iOS-13-3 (simulator)
• Chrome • chrome • web-javascript • Google Chrome 80.0.3987.163
• Web Server • web-server • web-javascript • Flutter Tools
• No issues found!
```
|
engine,dependency: dart,platform-web,a: production,found in release: 1.17,browser: firefox,P2,team-web,triaged-web
|
low
|
Major
|
598,551,015 |
rust
|
rustc should suggest using async version of Mutex
|
If one accidentally uses `std::sync::Mutex` in asynchronous code and holds a `MutexGuard` across an await, then the future is marked `!Send`, and you can't spawn it off to run on another thread - all correct. Rustc even gives you an excellent error message, pointing out the `MutexGuard` as the reason the future is not `Send`.
But people new to asynchronous programming are not going to immediately realize that there is such a thing as an 'async-friendly mutex'. You need to be aware that ordinary mutexes insist on being unlocked on the same thread that locked them; and that executors move tasks from one thread to another; and that the solution is not to make ordinary mutexes more complex but to create a new mutex type altogether. These make sense in hindsight, but I'll bet that they leap to mind only for a small group of elite users. (But probably a majority of the people who will ever read this bug. Ahem.)
So I think rustc should provide extra help when the value held across an `await`, and thus causing a future not to be `Send`, is a `MutexGuard`, pointing out that one must use an asynchronous version of `Mutex` if one needs to hold guards across an await. It's awkward to suggest `tokio::sync::Mutex` or `async_std::sync::Mutex`, but surely there's some diplomatic way to phrase it that is still explicit enough to be helpful.
Perhaps this could be generalized to other types. For example, if the offending value is an `Rc`, the help should suggest `Arc`.
Here's an illustration of what I mean:
```
use std::future::Future;
use std::sync::Mutex;
fn fake_spawn<F: Future + Send + 'static>(f: F) { }
async fn wrong_mutex() {
let m = Mutex::new(1);
let mut guard = m.lock().unwrap();
(async { }).await;
*guard += 1;
}
fn main() {
fake_spawn(wrong_mutex());
//~^ERROR: future cannot be sent between threads safely
}
```
The error message is great:
```
error: future cannot be sent between threads safely
--> src/main.rs:14:5
|
4 | fn fake_spawn<F: Future + Send + 'static>(f: F) { }
| ---------- ---- required by this bound in `fake_spawn`
...
14 | fake_spawn(wrong_mutex());
| ^^^^^^^^^^ future returned by `wrong_mutex` is not `Send`
|
= help: within `impl std::future::Future`, the trait `std::marker::Send` is not implemented for `std::sync::MutexGuard<'_, i32>`
note: future is not `Send` as this value is used across an await
--> src/main.rs:9:5
|
8 | let mut guard = m.lock().unwrap();
| --------- has type `std::sync::MutexGuard<'_, i32>`
9 | (async { }).await;
| ^^^^^^^^^^^^^^^^^ await occurs here, with `mut guard` maybe used later
10 | *guard += 1;
11 | }
| - `mut guard` is later dropped here
```
I just wish it included:
```
help: If you need to hold a mutex guard while you're awaiting, you must use an async-aware version of the `Mutex` type.
help: Many asynchronous foundation crates provide such a `Mutex` type.
```
<!-- TRIAGEBOT_START -->
<!-- TRIAGEBOT_ASSIGN_START -->
This issue has been assigned to @LucioFranco via [this comment](https://github.com/rust-lang/rust/issues/71072#issuecomment-658306268).
<!-- TRIAGEBOT_ASSIGN_DATA_START$${"user":"LucioFranco"}$$TRIAGEBOT_ASSIGN_DATA_END -->
<!-- TRIAGEBOT_ASSIGN_END -->
<!-- TRIAGEBOT_END -->
|
C-enhancement,A-diagnostics,T-compiler,A-async-await,A-suggestion-diagnostics,AsyncAwait-Triaged
|
medium
|
Critical
|
598,562,213 |
godot
|
Output spammed when using DLL
|
**Godot version:** 3.2
**Issue description:** The output window from the Godot editor gets spammed with this message:

> Condition "!script_data" is true.
I know my DLL is linking properly to my Player scene because the code gets executed and everything works when you start the game. But you get this message in the output log every time you tab out of and back into the editor (or click outside and then click anywhere in the editor) while the Player scene is open in the editor. The name of the Player class in my C++ code is the same as the resource_name and class_name of the native script. Here's how the DLL linking looks in my Player scene:
(screenshot omitted)
**Minimal reproduction project:**
[Project.zip](https://github.com/godotengine/godot/files/4466972/Project.zip)
|
bug,topic:editor,topic:gdextension
|
low
|
Minor
|
598,570,910 |
flutter
|
Remove AndroidX specific failure message, always show Gradle errors
|
I've seen several cases in the last few minutes of triage where the AndroidX message was shown despite the error being unrelated (keystore issue and ??). We should remove this error filtering and ensure that the raw gradle output is always shown
|
platform-android,tool,t: gradle,P2,team-android,triaged-android
|
low
|
Critical
|
598,597,749 |
nvm
|
cdnvm & carriage return in .nvmrc
|
cdnvm does not like a .nvmrc file created on Windows, for example (`od -c` output):
`0000000   l   t   s   /   *  \r  \n`
This can be fixed by replacing
`nvm_version=$(<"$nvm_path"/.nvmrc)`
with
`nvm_version=$(cat "$nvm_path"/.nvmrc | tr -d '\r')`
Here I'm doing an `echo $nvm_version | od -c` before cdnvm:
```
0000000 l t s / * \r \n
0000007
at illegal primary in regular expression *
input record number 16, file
source line number 4
` to browse available versions.te --lts=*
```
P.S. This is what is producing the error - line 2651 of nvm.sh (nvm -> install):
` VERSION="$(NVM_VERSION_ONLY=true NVM_LTS="${LTS-}" nvm_remote_version "${provided_version}")"`
|
OS: windows
|
low
|
Critical
|
598,615,857 |
rust
|
Interactions in `type_alias_impl_trait` and `associated_type_defaults`
|
A question about impl Trait in trait type aliases: has any thought been given to how they should handle defaults?
I'm trying to do something like this, without success:
```rust
#![feature(type_alias_impl_trait, associated_type_defaults)]
use std::fmt::Debug;
struct Foo {
}
trait Bar {
type T: Debug = impl Debug;
fn bar() -> Self::T {
()
}
}
impl Bar for Foo {
type T = impl Debug;
fn bar() -> Self::T {
()
}
}
```
[(playground link)](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2018&gist=0094ea01075cc106ff3c0be8780c39b1)
I'd guess this just hasn't been RFC'd yet, but does someone know if thoughts on the topic have already started?
_Originally posted by @Ekleog in https://github.com/rust-lang/rust/issues/63063#issuecomment-612619080_
|
T-lang,C-feature-request,needs-rfc,F-type_alias_impl_trait,requires-nightly,F-associated_type_defaults
|
low
|
Critical
|
598,623,108 |
pytorch
|
Wrong results for multiplication of non-finite complex numbers with real numbers
|
## 🐛 Bug
## To Reproduce
```
import torch
import math
a=torch.tensor([complex(math.inf, 0)])
print(a, a*1.)
#prints tensor([(inf+0.0000j)], dtype=torch.complex64) tensor([(inf+nanj)], dtype=torch.complex64)
```
## Expected behavior
expected answer is inf+0j
This is caused by PyTorch first promoting 1. to 1.+0j, which causes the unexpected multiplication result. Note that this behavior is consistent with Python and NumPy, but inconsistent with C++ std::complex (which would produce inf+0j for this computation).
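As a worked illustration of where the `nan` comes from (standard complex multiplication after the promotion, not necessarily PyTorch's exact kernel):

$$(\infty + 0j)\,(1 + 0j) = (\infty \cdot 1 - 0 \cdot 0) + (\infty \cdot 0 + 0 \cdot 1)\,j = \infty + \mathrm{nan}\,j,$$

because $\infty \cdot 0$ is `nan` in floating-point arithmetic, whereas multiplying by the real scalar $1.$ directly would scale the real and imaginary parts separately and keep the result at $\infty + 0j$.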
cc @ezyang @anjali411 @dylanbespalko
|
triaged,module: complex,module: numpy
|
low
|
Critical
|
598,676,748 |
ant-design
|
clearFilters in the table's search component has no effect on the extra.currentDataSource exported by onChange
|
- [ ] I have searched the [issues](https://github.com/ant-design/ant-design/issues) of this repository and believe that this is not a duplicate.
### Reproduction link
[https://codesandbox.io/s/zidingyishaixuancaidan-ant-design-demo-vsvud](https://codesandbox.io/s/zidingyishaixuancaidan-ant-design-demo-vsvud)
### Steps to reproduce
1. Click the search icon and search for a field value
2. Click reset
3. The extra.currentDataSource exported by onChange does not change, so the dataSource bound to it does not reset correctly
### What is expected?
The filter resets correctly.
### What is actually happening?
The displayed data is not updated.
| Environment | Info |
|---|---|
| antd | 4.1.0 |
| React | 16.8.3 |
| System | windows10 |
| Browser | chrome |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE -->
|
Inactive
|
low
|
Minor
|
598,873,525 |
flutter
|
ImageCache should optionally store undecoded images (pre-codec decoding)
|
## Use case
In a current-day app, it is typical to process and display high-megapixel images. Some smartphones come with cameras whose resolution is 48 MP or even 92 MP, and >100 is coming soon. The uncompressed byte sizes of each such image are 48 MiB and 92 MiB respectively.
## Proposal
Short term, I propose significantly raising the default value of ImageCache.maximumSizeBytes and/or documenting the fact that this is counted against the number of pixels in the image (rather than the byte size of the uncompressed image).
Eventually, I would suggest (optionally) caching the compressed images instead of the uncompressed ones - the memory / CPU trade-off seems to make much more sense, since repeatedly decoding compressed images is a walk in the park for modern smartphones and certainly avoiding this doesn't justify a tenfold or so increase in RAM usage.
|
c: new feature,framework,a: images,c: proposal,P3,team-framework,triaged-framework
|
low
|
Minor
|
598,876,297 |
ant-design
|
Modal background scroll iOS
|
- [ ] I have searched the [issues](https://github.com/ant-design/ant-design/issues) of this repository and believe that this is not a duplicate.
### Reproduction link
[https://ant.design/components/modal/#header](https://ant.design/components/modal/#header)
### Steps to reproduce
1. Go to https://ant.design/components/modal/#header
2. Scroll a little bit to hide browser header
3. Open modal
4. Scroll at the background
### What is expected?
Scroll at background should be disabled
### What is actually happening?
Scroll is not disabled
| Environment | Info |
|---|---|
| antd | 4.1.3 |
| React | 16.8.6 |
| System | iOS 13.3.1 |
| Browser | Chrome 77.0.3865.103 |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE -->
|
help wanted,Inactive,📱Mobile Device
|
low
|
Major
|
598,888,285 |
rust
|
[codegen] unnecessary panicking branch in `foo().await` (vs equivalent `FutureImpl.await`)
|
I compiled this `no_std` code with (LTO / `-Oz` / `-C panic=abort`) optimizations (full repro instructions at the bottom):
```rust
#![no_std]
#![no_main]
#[no_mangle]
fn main() -> ! {
let mut f = async {
loop {
// uncomment only ONE of these statements
// Foo.await; // NO panicking branch
foo().await; // HAS panicking branch (though it should be equivalent to `Foo.await`?)
// bar().await; // NO panicking branch (because it's implicitly divergent?)
// baz().await; // HAS panicking branch (that it inherit from `foo().await`?)
}
};
let waker = waker();
let mut cx = Context::from_waker(&waker);
loop {
unsafe {
let _ = Pin::new_unchecked(&mut f).poll(&mut cx);
}
}
}
struct Foo;
impl Future for Foo {
type Output = ();
fn poll(self: Pin<&mut Self>, _: &mut Context<'_>) -> Poll<()> {
asm::nop();
Poll::Ready(())
}
}
async fn foo() {
asm::nop();
}
async fn bar() {
asm::nop();
loop {}
}
async fn baz() {
foo().await;
loop {}
}
```
I got machine code that includes a panicking branch:
``` asm
00000400 <main>:
400: push {r5, r6, r7, lr}
402: add r7, sp, #8
404: movs r0, #0
406: strh.w r0, [r7, #-2]
40a: subs r0, r7, #2
40c: bl 412 <app::main::{{closure}}>
410: udf #254 ; 0xfe
00000412 <app::main::{{closure}}>:
412: push {r7, lr}
414: mov r7, sp
416: mov r4, r0
418: ldrb r0, [r0, #0]
41a: cbz r0, 426 <app::main::{{closure}}+0x14>
41c: ldrb r0, [r4, #1]
41e: cbz r0, 42a <app::main::{{closure}}+0x18>
420: bl 434 <core::panicking::panic>
424: udf #254 ; 0xfe
426: movs r0, #0
428: strb r0, [r4, #1]
42a: bl 48e <__nop>
42e: movs r0, #1
430: strb r0, [r4, #1]
432: b.n 426 <app::main::{{closure}}+0x14>
00000434 <core::panicking::panic>:
434: push {r7, lr}
436: mov r7, sp
438: bl 43e <core::panicking::panic_fmt>
43c: udf #254 ; 0xfe
0000043e <core::panicking::panic_fmt>:
43e: push {r7, lr}
440: mov r7, sp
442: bl 48c <rust_begin_unwind>
446: udf #254 ; 0xfe
```
I expected to see no panicking branches in the output. If I comment out `foo().await` and uncomment `Foo.await` (which should be semantically equivalent) then I get the expected output:
``` asm
00000400 <main>:
400: push {r7, lr}
402: mov r7, sp
404: bl 40a <app::main::{{closure}}>
408: udf #254 ; 0xfe
0000040a <app::main::{{closure}}>:
40a: push {r7, lr}
40c: mov r7, sp
40e: bl 458 <__nop>
412: b.n 40e <app::main::{{closure}}+0x4>
```
Interestingly, `bar().await` contains no panicking branch (because it's divergent?), but `baz().await` does (because it inherits it from `foo().await`?).
### Meta
`rustc --version --verbose`:
```
rustc 1.44.0-nightly (94d346360 2020-04-09)
```
<details><summary>Steps to reproduce</summary>
<p>
``` console
$ git clone https://github.com/rust-embedded/cortex-m-quickstart
$ cd cortex-m-quickstart
$ git reset --hard 1a60c1d94489cec3008166a803bdcf8ac306b98f
$ $EDITOR Cargo.toml && cat Cargo.toml
```
``` toml
[package]
edition = "2018"
name = "app"
version = "0.0.0"
[dependencies]
cortex-m = "0.6.0"
cortex-m-rt = "0.6.10"
cortex-m-semihosting = "0.3.3"
panic-halt = "0.2.0"
[profile.dev]
codegen-units = 1
debug = 1
debug-assertions = false
incremental = false
lto = "fat"
opt-level = 'z'
overflow-checks = false
```
``` console
$ $EDITOR src/main.rs && cat src/main.rs
```
``` rust
#![no_std]
#![no_main]
use core::{
future::Future,
pin::Pin,
task::{Context, Poll, RawWaker, RawWakerVTable, Waker},
};
use cortex_m_rt::entry;
use cortex_m::asm;
use panic_halt as _;
#[no_mangle]
fn main() -> ! {
let mut f = async {
loop {
// uncomment only ONE of these statements
// Foo.await; // NO panicking branch
foo().await; // HAS panicking branch
// bar().await; // NO panicking branch
// baz().await; // HAS panicking branch
}
};
let waker = waker();
let mut cx = Context::from_waker(&waker);
loop {
unsafe {
let _ = Pin::new_unchecked(&mut f).poll(&mut cx);
}
}
}
struct Foo;
impl Future for Foo {
type Output = ();
fn poll(self: Pin<&mut Self>, _: &mut Context<'_>) -> Poll<()> {
asm::nop();
Poll::Ready(())
}
}
async fn foo() {
asm::nop();
}
async fn bar() {
asm::nop();
loop {}
}
async fn baz() {
foo().await;
loop {}
}
fn waker() -> Waker {
unsafe fn clone(_: *const ()) -> RawWaker {
RawWaker::new(&(), &VTABLE)
}
unsafe fn wake(_: *const ()) {}
unsafe fn wake_by_ref(_: *const ()) {}
unsafe fn drop(_: *const ()) {}
static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, wake, wake_by_ref, drop);
unsafe { Waker::from_raw(clone(&())) }
}
```
``` console
$ # target = thumbv7m-none-eabi (see .cargo/config)
$ cargo build
$ arm-none-eabi-objdump -Cd target/thumbv7m-none-eabi/debug/app
```
</p>
</details>
|
A-LLVM,C-enhancement,A-codegen,T-compiler,A-coroutines,I-heavy,A-async-await,AsyncAwait-Triaged,C-optimization
|
low
|
Critical
|
598,922,381 |
godot
|
Implementing a generic interface with a type parameter of a generic class's inner class causes a build error
|
**Godot version:**
Godot Engine v3.2.2.rc.mono.custom_build.36a30f681
Commit: 36a30f681fc8c3256829616f32ee452b15674752
Also encountered on a 3.2.1-stable build, earlier versions not tested.
**OS/device including version:** 5.4.24-1-MANJARO x86_64 GNU/Linux
**Issue description:**
In C#, implementing a generic interface using the inner class of a generic class as the type parameter causes Godot to raise one error per source file on build. These errors do not stop the build from completing, and the resulting build appears to work.
Error message:
`modules/mono/glue/gd_glue.cpp:250 - Failed to determine namespace and class for script: <script name>.cs. Parse error: Unexpected token: .`
**Steps to reproduce:**
1. Create generic class with a public inner class
2. Implement a generic interface, using that inner class as the generic parameter
3. Press build, note error in output but a functional build.
**Minimal reproduction project:**
[Test.tar.gz](https://github.com/godotengine/godot/files/4469842/Test.tar.gz)
|
bug,topic:dotnet
|
low
|
Critical
|
598,929,601 |
rust
|
Exploit mitigations applied by default are not documented
|
There seems to be no documentation on exploit mitigations in Rust, specifically:
1. What exploit mitigations are supported?
1. What mitigations are enabled by default?
1. Is that answer different if building with `cargo` instead of `rustc` directly?
1. Does that vary by platform?
1. How to enable/disable specific mitigations?
This is relevant not only for security assessment, but also for performance comparison against other languages - both languages need to have the same exploit mitigations enabled for an apples-to-apples comparison.
|
C-enhancement,A-security,T-compiler,A-docs,PG-exploit-mitigations
|
low
|
Major
|
598,933,304 |
PowerToys
|
[FancyZones] Consider caching results of IsProcessOfWindowElevated
|
Based on https://github.com/microsoft/PowerToys/pull/2103#discussion_r407515294, we might want to consider caching the results of `IsProcessOfWindowElevated`. It might give us a performance gain, but we must profile how much `IsProcessOfWindowElevated` costs first. If it's worth optimizing, we need to introduce the cache and measure its impact, so we don't actually decrease the performance while increasing the complexity.
|
Product-FancyZones,Area-Quality,Priority-3
|
low
|
Major
|
598,939,868 |
bitcoin
|
untrust vs unsafe?
|
Some RPC commands, like listunspent, talk about "unsafe" funds, while others, like getbalances, talk about "untrusted" funds.
If there is no difference between them, it would be good to settle on a single name.
|
Bug
|
low
|
Minor
|
598,974,540 |
TypeScript
|
Give `{}` some other type than `{}` so it doesn't cause subtype reduction problems with primitives
|
**TypeScript Version:** 3.9.0-beta
**Search Terms:**
Array
**Code**
```ts
let a = [{a:1},1,2,3] // OK, Expected (number | { a: number; })[]
let b = [{},1,2,3] // Error, Expected: `({}|1)[]` Actual: `{}[]`
```
**Playground Link:**
[Playground Link](https://www.typescriptlang.org/play?ts=3.9.0-beta#code/FAGwpgLgBAhlC8UDaBvGAuAjAXwDSdwCZcBmAXSgHpKoB5AaVygFEAPABzAGMIwATKAAoAdgFcAtgCMwAJygAfKCljooYqbIDcUbAEokZUJCiSEyFHgLFyVGsxkyA9jKYsO3Xn1UADQRfmY+mTeUACCPKIwID4WBt5AA)
**Related Issues:**
|
Suggestion,Needs Proposal
|
low
|
Critical
|
599,083,758 |
excalidraw
|
Cache fonts for offline usage
|
We recently added support [for offline usage](https://github.com/excalidraw/excalidraw/pull/1286), but because we need to preload fonts and we add them directly to `index.html`, they are not included in Webpack's pipeline and thus not included in the service worker that caches the assets.
We need to find a way to both preload those fonts and cache them for offline usage. At the moment, we haven't ejected our CRA setup and we don't want to do so.
|
enhancement,help wanted,font
|
low
|
Minor
|
599,099,082 |
javascript
|
What is the point of the "Types" section of the guide?
|
Greetings,
The second section of the guide is "References", which is a prescription to use "const" over "var". This section showcases a "bad" code example and a "good" code example. So far, so good.
The first section of the guide is "Types", which, as far as I can tell, doesn't offer any prescriptions at all. It is simply educating the reader on the how variables work in the JavaScript programming language. Furthermore, the section does not show any "bad" code examples or "good" code examples.
Is there a particular reason that this section is included in the style guide? It seems notably out of place - every other section seems to impart a specific coding prescription. There is a time and a place to teach JavaScript newbies the basics on how the language works, and it doesn't seem like it should be in a style guide.
|
question
|
low
|
Major
|
599,106,914 |
flutter
|
Update all example/integration test/samples .gitignores to not track .last_build_id
|
https://github.com/flutter/flutter/pull/54428 added it to the generated .gitignore, but didn't actually update any of the example or integration test .gitignores. Running the build_tests shard, for example, left a bunch of .last_build_id files in my working copy.
|
team,tool,P2,team-tool,triaged-tool
|
low
|
Minor
|
599,118,618 |
pytorch
|
Quantized _out functions don't follow the same conventions as other out functions in the codebase
|
Currently they're defined like:
```
.op("quantized::add_out(Tensor qa, Tensor qb, Tensor(a!) out)"
"-> Tensor(a!) out",
c10::RegisterOperators::options()
.aliasAnalysis(at::AliasAnalysisKind::FROM_SCHEMA)
.kernel<QAddOut</*ReLUFused=*/false>>(DispatchKey::QuantizedCPU))
```
However, a standard out function looks like this:
```
- func: angle(Tensor self) -> Tensor
use_c10_dispatcher: full
variants: function, method
supports_named_tensor: True
- func: angle.out(Tensor self, *, Tensor(a!) out) -> Tensor(a!)
supports_named_tensor: True
```
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @dzhulgakov @kevinbchen (who added the alias analysis annotations to these functions) and @z-a-f (who appears to have originally added these out variants at https://github.com/pytorch/pytorch/pull/23971 )
|
oncall: quantization,low priority,triaged,better-engineering
|
low
|
Major
|
599,122,413 |
go
|
cmd/compile: non-symmetric inline cost when using named vs non-named returns
|
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.14.2 windows/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
set GO111MODULE=
set GOARCH=amd64
set GOBIN=
set GOCACHE=C:\Users\Dante\AppData\Local\go-build
set GOENV=C:\Users\Dante\AppData\Roaming\go\env
set GOEXE=.exe
set GOFLAGS=
set GOHOSTARCH=amd64
set GOHOSTOS=windows
set GOINSECURE=
set GONOPROXY=
set GONOSUMDB=
set GOOS=windows
set GOPATH=C:\Users\Dante\go
set GOPRIVATE=
set GOPROXY=https://proxy.golang.org,direct
set GOROOT=c:\go
set GOSUMDB=sum.golang.org
set GOTMPDIR=
set GOTOOLDIR=c:\go\pkg\tool\windows_amd64
set GCCGO=gccgo
set AR=ar
set CC=gcc
set CXX=g++
set CGO_ENABLED=1
set GOMOD=C:\Users\Dante\Documents\Code\go3mf\go.mod
set CGO_CFLAGS=-g -O2
set CGO_CPPFLAGS=
set CGO_CXXFLAGS=-g -O2
set CGO_FFLAGS=-g -O2
set CGO_LDFLAGS=-g -O2
set PKG_CONFIG=pkg-config
set GOGCCFLAGS=-m64 -mthreads -fmessage-length=0 -fdebug-prefix-map=C:\Users\Dante\AppData\Local\Temp\go-build018008438=/tmp/go-build -gno-record-gcc-switches
</pre></details>
### What did you do?
Code: https://play.golang.org/p/bM-akhk0QXG
`go build -gcflags "-m=2"`
### What did you expect to see?
`NewTriangleInline` is equivalent to `NewTriangleNoInline`; the only difference is that the first uses a named return and the second one doesn't, therefore I would expect both to have the same inline cost.
```go
func NewTriangleInline(v1, v2, v3 uint32) (t Triangle) {
t.SetIndex(0, v1)
t.SetIndex(1, v2)
t.SetIndex(2, v3)
return
}
```
```go
func NewTriangleNoInline(v1, v2, v3 uint32) Triangle {
var t Triangle
t.SetIndex(0, v1)
t.SetIndex(1, v2)
t.SetIndex(2, v3)
return t
}
```
### What did you see instead?
```
.\foo.go:11:6: can inline NewTriangleInline as: func(uint32, uint32, uint32) Triangle { t.SetIndex(0, v1); t.SetIndex(1, v2); t.SetIndex(2, v3); return }
.\foo.go:18:6: cannot inline NewTriangleNoInline: function too complex: cost 84 exceeds budget 80
```
|
NeedsDecision
|
low
|
Critical
|
599,136,166 |
go
|
runtime: sema: many many goroutines queueing up on many many distinct addresses -> slow
|
Hello up there. [libcsp](https://libcsp.com/performance/) claims to be 10x faster on a [benchmark](https://github.com/shiyanhui/libcsp/blob/ea0c5a41/benchmarks/sum.go#L28-L53) involving `sync.WaitGroup` for a divide-and-conquer summation program. I've analyzed the [profile](https://lab.nexedi.com/kirr/misc/raw/443f9bf4/libcsp/pprof001.svg) and most of the time is being spent in `runtime.(*semaRoot).dequeue` and `runtime.(*semaRoot).queue`, triggered by calls to `sync.WaitGroup` `.Done` and `.Wait`. The benchmark uses many (~ Ngoroutines) different WaitGroups and semaphores simultaneously.
Go commit https://github.com/golang/go/commit/45c6f59e1fd9 says
> There is still an assumption here that in real programs you don't have many many goroutines queueing up on many many distinct addresses. If we end up with that problem, we can replace the top-level list with a treap.
It seems the above particular scenario is being hit in this benchmark.
--------
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.14.2 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GO111MODULE="off"
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/kirr/.cache/go-build"
GOENV="/home/kirr/.config/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOINSECURE=""
GONOPROXY=""
GONOSUMDB=""
GOOS="linux"
GOPATH="/home/kirr/src/neo:/home/kirr/src/tools/go/g.env"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/home/kirr/src/tools/go/go"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/home/kirr/src/tools/go/go/pkg/tool/linux_amd64"
GCCGO="/usr/bin/gccgo"
AR="ar"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build136588881=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
Ran benchmarks in libcsp.
### What did you expect to see?
Libcsp and Go versions comparable in terms of speed.
### What did you see instead?
The Go version is 10x slower.
|
Performance,NeedsInvestigation,compiler/runtime
|
low
|
Critical
|
599,148,538 |
pytorch
|
DDP should divide bucket contents by the number of global replicas instead of world size
|
Currently, DDP divides the bucket content by world size before doing allreduce:
https://github.com/pytorch/pytorch/blob/ddf5755ff832b3115af3c57d0203d49c9c2308aa/torch/csrc/distributed/c10d/reducer.cpp#L368-L375
However, the reducer launched the allreduce on all model replicas.
https://github.com/pytorch/pytorch/blob/ddf5755ff832b3115af3c57d0203d49c9c2308aa/torch/csrc/distributed/c10d/reducer.cpp#L408-L427
In this case, if there is a DDP process that operates on multiple devices, the result would be wrong, because dividing by world size does not yield the average of all gradients; it should divide by the number of replicas instead.
### Solution
DDP needs to do a round of allgather in the constructor to collect the total number of module replicas globally and use that number to divide the bucket contents, as sketched below.
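A minimal sketch of that constructor-time round, assuming `torch.distributed` is already initialized; the helper name is hypothetical and this is not the actual reducer code:
```python
import torch
import torch.distributed as dist

def global_replica_count(num_local_replicas: int) -> int:
    # Hypothetical helper: allgather the number of module replicas each
    # process holds (e.g. len(device_ids)) and sum them, so buckets can be
    # divided by the global replica count rather than by the world size.
    counts = [torch.zeros(1, dtype=torch.int64) for _ in range(dist.get_world_size())]
    dist.all_gather(counts, torch.tensor([num_local_replicas], dtype=torch.int64))
    return int(sum(int(c.item()) for c in counts))
```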
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @xush6528 @osalpekar
|
oncall: distributed,triaged
|
low
|
Minor
|
599,148,671 |
pytorch
|
XNNPACK operators are not actually registered under xnnpack namespace
|
Here's the registration line:
```
.op("prepacked::linear_clamp_prepack(Tensor W, Tensor? B=None, "
"Scalar? output_min=None, Scalar? output_max=None) "
"-> __torch__.torch.classes.xnnpack.LinearOpContext",
torch::RegisterOperators::options()
.aliasAnalysis(at::AliasAnalysisKind::PURE_FUNCTION)
.kernel<decltype(createLinearClampPrePackOpContext),
createLinearClampPrePackOpContext>(
DispatchKey::CPU))
```
Doesn't look like xnnpack namespace to me.
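For what it's worth, this is how the namespace surfaces from Python; a rough sketch, assuming a build with XNNPACK enabled (the weight shape is arbitrary):
```python
import torch

w = torch.randn(4, 8)

# The prepack op resolves under the `prepacked` namespace...
ctx = torch.ops.prepacked.linear_clamp_prepack(w)

# ...while there is no corresponding `xnnpack` namespace for it:
# torch.ops.xnnpack.linear_clamp_prepack(w)  # would fail to resolve
```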
cc @kimishpatel
|
triaged,better-engineering,module: xnnpack
|
low
|
Minor
|
599,149,627 |
pytorch
|
Restructure test_c10d.py and test_distributed.py
|
Collective communication and DDP tests live in both `test_c10d.py` and `test_distributed.py`, making it confusing which file we should add a new test into. We should restructure them into `test_c10d.py` only for collective communications and `test_ddp.py` only for `DistributedDataParallel`.
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @xush6528 @osalpekar
|
oncall: distributed,triaged,better-engineering
|
low
|
Minor
|
599,168,204 |
flutter
|
Migrate away from rapidjson
|
Rapidjson had its last versioned release 3 years ago, its primary maintainer is no longer active and it has failed to pass an internal security review. Its internal use is frozen and the recommendation is to move to gRPC's json parser: https://github.com/grpc/grpc/blob/master/src/core/lib/json/json.h
|
engine,P2,team-engine,triaged-engine
|
low
|
Critical
|
599,170,897 |
pytorch
|
Drop _stacklevel from argspecs of F.softmax, F.softmin, F.log_softmax (implicit dim has long been deprecated)
|
https://pytorch.org/docs/master/nn.functional.html?highlight=softmax#torch.nn.functional.softmax
If it's unused or has some special undocumented meaning, I think it would be useful to explain it explicitly in the Parameters section.
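For context, a small sketch of the call patterns involved; the note about `_stacklevel` is my reading of the source, not documented behavior:
```python
import torch
import torch.nn.functional as F

x = torch.randn(2, 3)

# Explicit dim: the recommended call; _stacklevel plays no role here.
probs = F.softmax(x, dim=1)

# Implicit dim: emits the deprecation warning whose reported stack depth
# _stacklevel appears to control (an assumption based on reading the source).
legacy = F.softmax(x)
```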
|
module: docs,triaged
|
low
|
Minor
|
599,179,393 |
vscode
|
[folding] mode to have region based folding only
|
Is it possible to disable language-based folding and keep folding only for explicit region keywords (for example, #region / #endregion in C#)?
|
feature-request,editor-folding
|
low
|
Minor
|
599,208,512 |
TypeScript
|
Update Imports On File Move can create invalid and/or unexpected tsconfig changes
|
This happens frequently in a monorepo / project references setup when you move a file from one project to another. If the file you’re moving was included via an `include` glob, the tsconfig will get an additional entry in `include` to the file’s new location:

This is not only unexpected, but can also be invalid if `rootDir` is set. I think I’d describe my expected behavior as "moving a file should update tsconfig.json if and only if the original filename was explicitly listed in the tsconfig (not just covered by a glob or directory) and the new filename is inside (rootDir ?? directory of tsconfig)."
|
Bug,Domain: TSServer
|
low
|
Minor
|
599,244,222 |
pytorch
|
Typecasting issue in MSELoss
|
## 🐛 Bug
Cannot backprop through MSELoss when it is provided with different dtypes.
## To Reproduce
Steps to reproduce the behavior:
```py
import torch
class SimpleModel(torch.nn.Module):
def __init__(self):
super().__init__()
self.layer = torch.nn.Linear(1, 1)
def forward(self, x):
return self.layer(x)
model = SimpleModel().double()
X = torch.rand(100, 1).double()
Y = torch.rand(100, 1).float()
optimizer = torch.optim.Adam(model.parameters())
model.train()
optimizer.zero_grad()
loss = torch.nn.MSELoss()(model(X), Y) # leads to failure
# loss = ((model(X) - Y) ** 2).mean() # works
loss.backward() # fails here
optimizer.step()
```
> Traceback (most recent call last):
> File "foo.py", line 29, in <module>
> loss.backward()
> File "/home/twoertwe/miniforge3/envs/panama/lib/python3.8/site-packages/torch/tensor.py", line 195, in backward
> torch.autograd.backward(self, gradient, retain_graph, create_graph)
> File "/home/twoertwe/miniforge3/envs/panama/lib/python3.8/site-packages/torch/autograd/__init__.py", line 97, in backward
> Variable._execution_engine.run_backward(
> RuntimeError: expected dtype Double but got dtype Float
## Expected behavior
MSELoss should either throw an error up front or be able to deal with mixed dtypes.
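Until that is decided, a self-contained workaround sketch (mirroring the repro above) is to cast the target to the prediction's dtype before computing the loss:
```python
import torch

model = torch.nn.Linear(1, 1).double()
X = torch.rand(100, 1).double()
Y = torch.rand(100, 1).float()

# Cast the target to the prediction's dtype so the forward and backward
# passes agree on a single dtype.
pred = model(X)
loss = torch.nn.MSELoss()(pred, Y.to(pred.dtype))
loss.backward()
```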
## Environment
```
Collecting environment information...
PyTorch version: 1.4.0
Is debug build: No
CUDA used to build PyTorch: 10.1
OS: Ubuntu 16.04.6 LTS
GCC version: (Ubuntu 9.2.1-17ubuntu1~16.04) 9.2.1 20191102
CMake version: version 3.15.1
Python version: 3.8
Is CUDA available: Yes
CUDA runtime version: 10.2.89
GPU models and configuration: GPU 0: GeForce RTX 2080 Ti
Nvidia driver version: 440.64
cuDNN version: /usr0/local/cuda-9.0/lib64/libcudnn.so.7.0.5
Versions of relevant libraries:
[pip] hamiltorch==0.3.1.dev3
[pip] numpy==1.18.1
[pip] torch==1.4.0
[pip] torchvision==0.5.0
[conda] cudatoolkit 10.1.243 h6bb024c_0
[conda] hamiltorch 0.3.1.dev3 pypi_0 pypi
[conda] mkl 2020.0 166 conda-forge
[conda] numpy 1.18.1 py38h8854b6b_1 conda-forge
[conda] pytorch 1.4.0 py3.8_cuda10.1.243_cudnn7.6.3_0 pytorch
[conda] torchvision 0.5.0 py38_cu101 pytorch
```
|
module: loss,triaged,module: type promotion
|
low
|
Critical
|
599,255,961 |
go
|
x/net/icmp: listening for ICMP on Windows does not work properly
|
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.13.8 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/censored/Library/Caches/go-build"
GOENV="/Users/censored/Library/Application Support/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GONOPROXY=""
GONOSUMDB=""
GOOS="darwin"
GOPATH="/Users/censored/go"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/local/Cellar/go/1.13.8/libexec"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/local/Cellar/go/1.13.8/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
AR="ar"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD="/Users/censored/go/src/srt/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/4s/1bq_2censoredw0000gn/T/go-build488770590=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>
### What did you do?
I wrote a function to detect who pings my computer
<pre>
conn, err := icmp.ListenPacket("ip4:icmp", "0.0.0.0")
if err != nil {
log.Fatalf("listen err, %s", err)
}
defer conn.Close()
if err != nil {
return
}
bytes := make([]byte, 512)
for {
fmt.Println("recv")
n,_, err := conn.ReadFrom(bytes)
if err != nil {
fmt.Println(err.Error())
continue
}
fmt.Println(n)
}
</pre>
### What did you expect to see?
fmt.Println(n) output for every ping.
### What did you see instead?
Sometimes it works, sometimes it doesn't.
PoC video: https://youtu.be/AyQDH9AQSRc
|
OS-Windows,NeedsInvestigation
|
low
|
Critical
|
599,299,035 |
pytorch
|
Allow `__array__` to automatically detach and move to CPU
|
## 🚀 Feature
I would like `__array__` to always implicitly detach and transfer to CPU before returning a numpy array, so that `np.asarray(mytensor)` is guaranteed to work.
## Motivation
For good reasons detailed in [this Discourse thread](https://discuss.pytorch.org/t/should-it-really-be-necessary-to-do-var-detach-cpu-numpy/35489), a `torch.Tensor` with gradients needs to be `.detach()`ed before it is converted to NumPy, and further, if the Tensor is on the GPU it needs to be explicitly transferred back. Specifically:
> People not very familiar with `requires_grad` and cpu/gpu Tensors might go back and forth with numpy. For example doing pytorch -> numpy -> pytorch and backward on the last Tensor. This will backward without issue but not all the way to the first part of the code and won’t raise any error.
As someone not very familiar with `requires_grad`, I am fully on board with this. :sweat_smile:
However, the purpose of `__array__` is to allow functions to be written against a unique API (NumPy) to work on arrays of other types *without having to know anything about said array*. Having to go through `.detach()[.cpu()]` breaks this assumption.
## Pitch
The specific use case I have in mind is viewing tensors. In [napari](https://napari.org), we pass input arrays through `__getitem__` and then through `np.asarray`, which lets us lazily view n-dimensional arrays as long as they satisfy the `__array__` API. This works for NumPy, dask, zarr, and Xarray, but I stumbled when I was trying to demo it with torch Tensors. You can see a brief demo in this noise2self notebook:
https://github.com/jni/noise2self/blob/napari/notebooks/Intro%20to%20Neural%20Nets.ipynb
(You need the usual suspects, plus `pip install napari` or `conda install -c conda-forge napari` for the viewer.)

## Alternatives
Things could remain as is, and there are advantages (articulated in the Discourse post). However, this means that anyone else in the NumPy ecosystem that wants to play well with `torch.Tensor`s then needs to include PyTorch-specific code in their code base, which is not super nice. (Or monkey-patch it, which is also not super nice!)
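For illustration, a rough sketch of what that monkey-patch (and, in effect, the requested behavior) would look like; this is not an endorsement, and the exact signature is an assumption:
```python
import numpy as np
import torch

def _array(self, dtype=None):
    # Sketch of the proposed behavior: always detach and move to CPU
    # before handing the buffer to NumPy.
    arr = self.detach().cpu().numpy()
    return arr if dtype is None else arr.astype(dtype, copy=False)

torch.Tensor.__array__ = _array  # the "also not super nice" monkey-patch

t = torch.ones(3, requires_grad=True)
print(np.asarray(t))  # works without an explicit .detach().cpu()
```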
## Additional context
original `__array__` proposal: #2935
original `__array__` implementation: #2945
possibly related issue: #9648
I couldn't find too much discussion on the issues or PRs about the decision not to `.detach()` automatically in `__array__`.
cc @ngimel
|
feature,module: cuda,triaged,module: numpy
|
medium
|
Critical
|
599,310,818 |
vscode
|
Support "perspectives" like approach in VSCode
|
**What is a Perspective?**
https://www.tutorialspoint.com/eclipse/eclipse_perspectives.htm
https://www.eclipse.org/articles/using-perspectives/PerspectiveArticle.html
Considering large projects with different components (backend and frontend projects with a mix of languages), a feature similar to Eclipse IDE "perspectives" would be very useful to easily switch between projects or scope to a section of the current project. To switch between perspectives, a UI similar to Open Recent (Ctrl-R) could be used, triggered with a shortcut (Ctrl-9). The icon in each entry can show the project's main technology (Angular, C#, etc).
In the File Explorer (VSCode's Tab) a new context menu option "Create Perspective" is used to add perspectives based on a folder.
In the Sidebar a new item can be added named "Perspectives" that lists the existing perspectives with a highlight on the active one. A color can be used in the UI sidebar item, or a status bar indicator similar to "Open Remote Window" can show the name of the current perspective; clicking it would open the Ctrl-9 UI mentioned above.
(Each perspective can have a picked color, like in Peacock.)
**If accepted, this feature could extend the functions that are now present in the "remote connect" feature, and perhaps avoid generating another item in the status bar.**

Benefits:
- large projects can be managed more easily per component (eg: backend perspective, web app perspective, mobile app perspective)
- scope to section in same project / perspective with "Create Perspective"
- keeps the same window but switches the workspace (no alt-tab jumps)
- keeps the same simple/clean layout (no need for multiple sidebars clutter)
|
feature-request,layout
|
high
|
Critical
|
599,332,087 |
youtube-dl
|
Add on24 downloader
|
## Checklist
- [x] I'm reporting a feature request
- [x] I've verified that I'm running youtube-dl version **2020.03.24**
- [x] I've searched the bugtracker for similar feature requests including closed ones
## Description
ON24 is used for online trainings, such as those on the O'Reilly learning platform. Replays are available after the training, but it seems they cannot be downloaded. I'd like to request download functionality.
|
site-support-request
|
medium
|
Critical
|
599,348,558 |
go
|
x/net/http2: low tcp segment utilization
|
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.14.2 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GO111MODULE="on"
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/detailyang/Library/Caches/go-build"
GOENV="/Users/detailyang/Library/Application Support/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOINSECURE=""
GONOPROXY="**.baidu.com**"
GONOSUMDB="*"
GOOS="darwin"
GOPATH="/Users/detailyang/go"
GOPRIVATE=""
GOPROXY="https://goproxy.baidu.com,https://goproxy.cn,direct"
GOROOT="/usr/local/go"
GOSUMDB="off"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/darwin_amd64"
GCCGO="gccgo"
AR="ar"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD="/dev/null"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/n3/g3fv953s435gqv0x7557ftyc0000gp/T/go-build515636404=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>
### What did you do?
start an http2 echo server then write a response
### What did you expect to see?
I'm expecting the response's `Header Frame`, `DataFrame`, and `Header Frame with EndStream` to be encoded into one TCP segment, but they are not :(
### What did you see instead?

one HTTP/2 response is split into 3 TCP segments, which means poor performance
the related codebase is the following [https://github.com/golang/net/blob/master/http2/server.go#L2393-L2499](https://github.com/golang/net/blob/master/http2/server.go#L2393-L2499)
The underlying serveConn.wantWriteFrameCh should consider how to fetch frames in batches
```golang
func (rws *responseWriterState) writeChunk(p []byte) (n int, err error) {
if !rws.wroteHeader {
rws.writeHeader(200)
}
isHeadResp := rws.req.Method == "HEAD"
if !rws.sentHeader {
rws.sentHeader = true
var ctype, clen string
if clen = rws.snapHeader.Get("Content-Length"); clen != "" {
rws.snapHeader.Del("Content-Length")
clen64, err := strconv.ParseInt(clen, 10, 64)
if err == nil && clen64 >= 0 {
rws.sentContentLen = clen64
} else {
clen = ""
}
}
if clen == "" && rws.handlerDone && bodyAllowedForStatus(rws.status) && (len(p) > 0 || !isHeadResp) {
clen = strconv.Itoa(len(p))
}
_, hasContentType := rws.snapHeader["Content-Type"]
// If the Content-Encoding is non-blank, we shouldn't
// sniff the body. See Issue golang.org/issue/31753.
ce := rws.snapHeader.Get("Content-Encoding")
hasCE := len(ce) > 0
if !hasCE && !hasContentType && bodyAllowedForStatus(rws.status) && len(p) > 0 {
ctype = http.DetectContentType(p)
}
var date string
if _, ok := rws.snapHeader["Date"]; !ok {
// TODO(bradfitz): be faster here, like net/http? measure.
date = time.Now().UTC().Format(http.TimeFormat)
}
for _, v := range rws.snapHeader["Trailer"] {
foreachHeaderElement(v, rws.declareTrailer)
}
// "Connection" headers aren't allowed in HTTP/2 (RFC 7540, 8.1.2.2),
// but respect "Connection" == "close" to mean sending a GOAWAY and tearing
// down the TCP connection when idle, like we do for HTTP/1.
// TODO: remove more Connection-specific header fields here, in addition
// to "Connection".
if _, ok := rws.snapHeader["Connection"]; ok {
v := rws.snapHeader.Get("Connection")
delete(rws.snapHeader, "Connection")
if v == "close" {
rws.conn.startGracefulShutdown()
}
}
endStream := (rws.handlerDone && !rws.hasTrailers() && len(p) == 0) || isHeadResp
err = rws.conn.writeHeaders(rws.stream, &writeResHeaders{
streamID: rws.stream.id,
httpResCode: rws.status,
h: rws.snapHeader,
endStream: endStream,
contentType: ctype,
contentLength: clen,
date: date,
})
if err != nil {
rws.dirty = true
return 0, err
}
if endStream {
return 0, nil
}
}
if isHeadResp {
return len(p), nil
}
if len(p) == 0 && !rws.handlerDone {
return 0, nil
}
if rws.handlerDone {
rws.promoteUndeclaredTrailers()
}
// only send trailers if they have actually been defined by the
// server handler.
hasNonemptyTrailers := rws.hasNonemptyTrailers()
endStream := rws.handlerDone && !hasNonemptyTrailers
if len(p) > 0 || endStream {
// only send a 0 byte DATA frame if we're ending the stream.
if err := rws.conn.writeDataFromHandler(rws.stream, p, endStream); err != nil {
rws.dirty = true
return 0, err
}
}
if rws.handlerDone && hasNonemptyTrailers {
err = rws.conn.writeHeaders(rws.stream, &writeResHeaders{
streamID: rws.stream.id,
h: rws.handlerHeader,
trailers: rws.trailers,
endStream: true,
})
if err != nil {
rws.dirty = true
}
return len(p), err
}
return len(p), nil
}
```
|
NeedsInvestigation
|
low
|
Critical
|
599,366,123 |
opencv
|
CvCapture_FFMPEG::open blocked
|
##### System information (version)
- OpenCV => 4.1.2
- Operating System / Platform => centos 7
- Compiler => gcc 4.8
##### Detailed description
CvCapture_FFMPEG::open will block when another thread is also blocked in CvCapture_FFMPEG::open.
I checked the related code (below); it seems the _mutex lock is held for too long:
cap_ffmpeg_impl.hpp:890
```
bool CvCapture_FFMPEG::open( const char* _filename )
{
InternalFFMpegRegister::init();
AutoLock lock(_mutex);
unsigned i;
bool valid = false;
```
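A minimal Python sketch of the symptom, assuming two placeholder RTSP URLs where the first one hangs (the URLs are not taken from the report):
```python
import threading
import cv2

def open_stream(url):
    print("opening", url)
    cap = cv2.VideoCapture(url, cv2.CAP_FFMPEG)  # may block inside CvCapture_FFMPEG::open
    print("opened", url, cap.isOpened())
    cap.release()

# Placeholder URLs: the first is assumed to hang (e.g. unreachable host),
# which under the shared _mutex also stalls the second open().
urls = ["rtsp://10.0.0.1/slow", "rtsp://10.0.0.2/fast"]
threads = [threading.Thread(target=open_stream, args=(u,)) for u in urls]
for t in threads:
    t.start()
for t in threads:
    t.join()
```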
|
feature,priority: low,category: videoio,incomplete,needs reproducer,needs investigation
|
low
|
Minor
|
599,393,399 |
youtube-dl
|
aShemaletube site support request
|
## Checklist
- [X] I'm reporting a new site support request
- [X] I've verified that I'm running youtube-dl version **2020.03.24**
- [X] I've checked that all provided URLs are alive and playable in a browser
- [X] I've checked that none of provided URLs violate any copyrights
- [X] I've searched the bugtracker for similar site support requests, including closed ones, but no one has responded to the open one from 2018
## Example URLs
- Single video: https://www.ashemaletube.com/videos/546887/new-tits-small-dick/
- Single video: https://www.ashemaletube.com/videos/546887
- Playlist: https://www.ashemaletube.com/playlists/419877/
- Model: https://www.ashemaletube.com/model/nbnabunny-6405/
## Description
You can find the direct video URL on a page by finding this:
`<video preload="metadata" src="https://cdn.ashemaletube.com/key=HPSi85Yj4gniNhbBLE4PmA,end=1586860341/ip=0.0.0.0/speed=381694/buffer=3.0/2020-04/hq_69b0dff5baab7f3f242fcce17e6ab04c.mp4" playsinline="null" style="width: 990px; height: 557px;"></video>`
(From: https://www.ashemaletube.com/videos/546887/new-tits-small-dick/ and `ip=` was changed from my actual IP to 0.0.0.0)
Sometimes you're required to log in to view some videos.
Username: `Nourins`
Password: `3y9C34a32!HwgCvR^Xi@328rJCqfMB&C`
|
site-support-request
|
low
|
Critical
|
599,405,710 |
pytorch
|
Libtorch build error when setting both `USE_GLOO` and `USE_SYSTEM_NCCL` to `ON`
|
## 🐛 Bug
Currently, when the `USE_GLOO` cmake option is set to `ON`, the target `gloo_cuda` requires a dependency called `nccl_external`; however, this target is available if and only if `USE_SYSTEM_NCCL` is `OFF`. Thus, if both `USE_GLOO` and `USE_SYSTEM_NCCL` are set to `ON`, cmake reports an error during the configuration phase.
https://github.com/pytorch/pytorch/blob/master/cmake/Dependencies.cmake#L1149
https://github.com/pytorch/pytorch/blob/master/cmake/External/nccl.cmake#L19
## To Reproduce
Steps to reproduce the behavior:
1. run `cmake -DBUILD_PYTHON=OFF -DUSE_CUDA=ON -DUSE_CUDNN=ON -DUSE_NCCL=ON -DUSE_SYSTEM_NCCL=ON -DUSE_DISTRIBUTED=ON -DUSE_GLOO=ON /path/to/torchsrc`
2. CMake will then report the following error:
```
-- Configuring done
CMake Error at cmake/Dependencies.cmake:1149 (add_dependencies):
The dependency target "nccl_external" of target "gloo_cuda" does not exist.
Call Stack (most recent call first):
CMakeLists.txt:421 (include)
-- Generating done
CMake Generate step failed. Build files cannot be regenerated correctly.
```
## Expected behavior
The configuration step should be executed without problem
## Environment
* OS: CentOS 7 x86_64 with `devtoolset-6` enabled
* CMake version: 3.17.0
* Cuda version: 9.2.148-1
* CuDNN version: libcudnn7-7.6.5.31-1.cuda9.2
* NCCL version: 2.4.8-ga-cuda9.2-1-1
All of the CUDA, cuDNN and NCCL libraries are installed through NVIDIA's official rpm packages; no libtorch was installed before.
I've tried using current master HEAD and tag/v1.5.0-rc3 and the problem still exists.
## Additional context
Related commit seems to be https://github.com/pytorch/pytorch/commit/30da84fbe1614138d6d9968c1475cb7dc459cd4b
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @xush6528 @osalpekar
|
oncall: distributed,module: build,module: docs,triaged,module: nccl
|
low
|
Critical
|
599,407,738 |
rust
|
Semantics of MIR function calls
|
Some discussion at https://github.com/rust-lang/rust/pull/71005#issuecomment-612242397 revealed that we are not entirely sure what exactly the semantics of passing arguments and return values for function calls should be.
The simplest possible semantics is to say that when a stack frame is created, we allocate fresh memory for all arguments and return values (according to the known layout, determined by the callee). We copy the function arguments into the argument slots. Then we evaluate the function, and when it returns, we copy the return value back.
However, such a model is hard to compile down to destination-passing style, where the callee actually writes its return value directly into caller-provided memory. If that aliases with other things the function can access, behavior could differ with and without destination-passing style. This is complicated by the fact that in MIR right now a `Call` does not provide a return place, but even with destination-passing style diverging functions (without a return place) may access their return local `_0` . Moreover @eddyb says that also for some function arguments, we might want to elide the copy during codegen; it is unclear whether that is behaviorally equivalent to the above copying semantics or not.
This is something of a sibling to https://github.com/rust-lang/rust/issues/68364. We should have a good way to collect all these "MIR semantics" issues...
|
T-lang,A-MIR,A-miri
|
medium
|
Critical
|
599,413,894 |
TypeScript
|
Workspace Symbol Provider called with empty string doesn't return all workspace symbols
|
**TypeScript Version:** 3.8.3
**Search Terms:** Workspace Symbol Provider executeWorkspaceSymbolProvider
Running this code from vscode extension:
**Code**
```ts
const allSymbols = await commands.executeCommand('vscode.executeWorkspaceSymbolProvider', '');
console.log(allSymbols);// []
```
**Expected behavior:**
All the symbols in workspace returned
**Actual behavior:**
An empty array returned
---
Workspace symbol provider says empty string should return all symbols.
```ts
/**
* Project-wide search for a symbol matching the given query string.
*
* The `query`-parameter should be interpreted in a *relaxed way* as the editor will apply its own highlighting
* and scoring on the results. A good rule of thumb is to match case-insensitive and to simply check that the
* characters of *query* appear in their order in a candidate symbol. Don't use prefix, substring, or similar
* strict matching.
*
* To improve performance implementors can implement `resolveWorkspaceSymbol` and then provide symbols with partial
* [location](#SymbolInformation.location)-objects, without a `range` defined. The editor will then call
* `resolveWorkspaceSymbol` for selected symbols only, e.g. when opening a workspace symbol.
*
* @param query A query string, can be the empty string in which case all symbols should be returned.
* @param token A cancellation token.
* @return An array of document highlights or a thenable that resolves to such. The lack of a result can be
* signaled by returning `undefined`, `null`, or an empty array.
*/
```
Redirected from https://github.com/microsoft/vscode/issues/94770
|
Bug
|
low
|
Critical
|
599,438,007 |
pytorch
|
[1.4.1] cmake3 not found
|
On CentOS 7 the CMake 3 binary is called cmake3, but the PyTorch build only calls cmake. cmake itself is not installed;
'which cmake3' will find the path to it.
As far as I can see in the PyTorch source code, there is some code to find cmake3, but it does not seem to work.
```
which cmake3
/usr/bin/cmake3
+ CFLAGS='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic'
+ LDFLAGS='-Wl,-z,relro '
+ /usr/bin/python3 setup.py build '--executable=/usr/bin/python3 -s'
Building wheel torch-1.4.0a0
-- Building version 1.4.0a0
cmake -DBUILD_PYTHON=True -DBUILD_TEST=True -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/builddir/build/BUILD/torch-1.4.1/torch -DCMAKE_PREFIX_PATH=/usr/lib/python3.6/site-packages -DPYTHON_EXECUTABLE=/usr/bin/python3 -DPYTHON_INCLUDE_DIR=/usr/include/python3.6m -DPYTHON_LIBRARY=/usr/lib64/libpython3.6m.so.1.0 -DTORCH_BUILD_VERSION=1.4.0a0 -DUSE_NUMPY=True /builddir/build/BUILD/torch-1.4.1
Traceback (most recent call last):
File "setup.py", line 755, in <module>
build_deps()
File "setup.py", line 316, in build_deps
cmake=cmake)
File "/builddir/build/BUILD/torch-1.4.1/tools/build_pytorch_libs.py", line 59, in build_caffe2
rerun_cmake)
File "/builddir/build/BUILD/torch-1.4.1/tools/setup_helpers/cmake.py", line 319, in generate
self.run(args, env=my_env)
File "/builddir/build/BUILD/torch-1.4.1/tools/setup_helpers/cmake.py", line 141, in run
check_call(command, cwd=self.build_dir, env=env)
File "/usr/lib64/python3.6/subprocess.py", line 306, in check_call
retcode = call(*popenargs, **kwargs)
File "/usr/lib64/python3.6/subprocess.py", line 287, in call
with Popen(*popenargs, **kwargs) as p:
File "/usr/lib64/python3.6/subprocess.py", line 729, in __init__
restore_signals, start_new_session)
File "/usr/lib64/python3.6/subprocess.py", line 1364, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'cmake': 'cmake'
```
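For illustration, a minimal sketch of the kind of lookup that would avoid this; it assumes nothing about the actual helper in tools/setup_helpers/cmake.py beyond what the log shows:
```python
import shutil

def find_cmake_command():
    # Prefer plain `cmake` if present, otherwise fall back to `cmake3`
    # (the CMake 3 binary name on CentOS 7 / EPEL).
    for candidate in ("cmake", "cmake3"):
        if shutil.which(candidate):
            return candidate
    raise RuntimeError("CMake >= 3 not found (tried 'cmake' and 'cmake3')")

print(find_cmake_command())
```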
|
module: build,triaged
|
low
|
Critical
|
599,480,830 |
electron
|
NativeWindowViews::SetFocusable should call SetSkipTaskbar conditionally
|
### Preflight Checklist
* [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/master/CONTRIBUTING.md) for this project.
* [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/master/CODE_OF_CONDUCT.md) that this project adheres to.
* [x] I have searched the issue tracker for a feature request that matches the one I want to file, without success.
### Problem Description
SetFocusable shows the app in the taskbar because it also resets the skipTaskbar flag. Calling SetSkipTaskbar afterwards removes the app from the taskbar again, but this results in unwanted flickering of the taskbar.
```
void NativeWindowViews::SetFocusable(bool focusable) {
widget()->widget_delegate()->SetCanActivate(focusable);
#if defined(OS_WIN)
LONG ex_style = ::GetWindowLong(GetAcceleratedWidget(), GWL_EXSTYLE);
if (focusable)
ex_style &= ~WS_EX_NOACTIVATE;
else
ex_style |= WS_EX_NOACTIVATE;
::SetWindowLong(GetAcceleratedWidget(), GWL_EXSTYLE, ex_style);
SetSkipTaskbar(!focusable); // <- conditionally
Focus(false);
#endif
}
```
### Proposed Solution
Add a second parameter (updateSkipTaskbar) to SetFocusable, with a default value that keeps the current behavior, and skip the SetSkipTaskbar call when it is set.
### Alternatives Considered
Patching Electron
### Additional Information
This problem occurs on Windows.
|
enhancement :sparkles:
|
low
|
Minor
|
599,498,031 |
pytorch
|
One confusion about the CompilationUnit destruction process in torch/jit/__init__.py
|
hi, my friends
I have one confusion about the destruction process of the CompilationUnit object in the torch/jit/__init__.py file:
_python_cu = torch._C.CompilationUnit()
As far as I know, many shared_ptr pointers point to this CompilationUnit object in PyTorch JIT. When no shared_ptr pointers point to it any more, the CompilationUnit object is destructed. I have developed a C++ extension module for PyTorch, and when the program exits I see different destruction behavior across runs. Sometimes the shared_ptr to the CompilationUnit is first destructed in pybind11::class_<torch::jit::script::CompilationUnit, std::shared_ptr<torch::jit::script::CompilationUnit>>::dealloc() in third_party/pybind11/include/pybind11/pybind11.h, and then the CompilationUnit object itself is destructed. However, sometimes pybind11::class_<>::dealloc() is never entered when the program exits, so the CompilationUnit object cannot be destructed because one shared_ptr pointer in pybind11.h still points to it.
I don't know why the destruction process differs between runs. I would appreciate it if you could give me some advice on this problem. Thanks.
cc @suo
|
oncall: jit,triaged
|
low
|
Minor
|
599,513,397 |
godot
|
Creating a new C# plugin doesn't create the correct file
|
**Godot version:**
3.2.1 stable mono
**OS/device including version:**
Debian 10
**Issue description:**
Following the instructions from the documentation [here](https://docs.godotengine.org/en/stable/tutorials/plugins/editor/making_plugins.html), Godot does not create the correct file:
```
using Godot;
using System;
public class plugin : EditorPlugin
{
// Declare member variables here. Examples:
// private int a = 2;
// private string b = "text";
// Called when the node enters the scene tree for the first time.
public override void _Ready()
{
}
// // Called every frame. 'delta' is the elapsed time since the previous frame.
// public override void _Process(float delta)
// {
//
// }
}
```
Expected:
```
#if TOOLS
using Godot;
using System;
[Tool]
public class CustomNode : EditorPlugin
{
public override void _EnterTree()
{
// Initialization of the plugin goes here
}
public override void _ExitTree()
{
// Clean-up of the plugin goes here
}
}
#endif
```
**Steps to reproduce:**
follow the instructions on the page
**Minimal reproduction project:**
NA
|
enhancement,topic:editor,topic:dotnet
|
low
|
Minor
|
599,513,957 |
pytorch
|
Add load_state_dict and state_dict() in C++
|
## 🚀 Feature
I would like to be able to clone a model into another model, as is done in the [Reinforcement Learning (DQN) Tutorial](https://pytorch.org/tutorials/intermediate/reinforcement_q_learning.html) under **Training**.
The requested functions, which exist in Python but not in C++, are:
```
load_state_dict()
state_dict()
target_net.load_state_dict(policy_net.state_dict())
```
## Motivation
It would be neat to be able to follow the PyTorch example listed above. However, the C++ library is missing the necessary functions for doing this.
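A small Python sketch of the requested pattern; the manual copy loop at the end is roughly what currently has to be hand-written on the C++ side (module shapes are arbitrary):
```python
import torch

policy_net = torch.nn.Linear(4, 2)
target_net = torch.nn.Linear(4, 2)

# One-liner available in the Python API:
target_net.load_state_dict(policy_net.state_dict())

# Roughly equivalent manual copy, the kind of loop one would port to C++:
with torch.no_grad():
    for dst, src in zip(target_net.parameters(), policy_net.parameters()):
        dst.copy_(src)
```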
cc @yf225
|
module: cpp,triaged
|
low
|
Major
|
599,523,260 |
godot
|
Executable project + plugin crash (Linux + C#)
|
**Godot version:**
3.2.1 stable mono
**OS/device including version:**
Debian 10
**Issue description:**
Trying to execute the project, it crashes with this error:
```
(base) vale@debianace:~/godot/tutorials/test$ ./test.x86_64
Godot Engine v3.2.1.stable.mono.official - https://godotengine.org
OpenGL ES 3.0 Renderer: Mesa DRI Intel(R) Ivybridge Mobile
Mono: Logfile is: /home/vale/.local/share/godot/app_userdata/test/mono/mono_logs/2020_04_14 12.56.01 (4277).txt
ERROR: update_godot_api_cache: Mono Cache: Member cached_data.methodthunk_GodotTaskScheduler_Activate is null.
At: modules/mono/mono_gd/gd_mono_cache.cpp:276.
ERROR: _load_api_assemblies: The loaded assembly 'GodotSharp' is in sync, but the cache update failed.
At: modules/mono/mono_gd/gd_mono.cpp:928.
ERROR: _load_api_assemblies: FATAL: Method failed.
At: modules/mono/mono_gd/gd_mono.cpp:937.
=================================================================
Native Crash Reporting
=================================================================
Got a SIGILL while executing native code. This usually indicates
a fatal error in the mono runtime or one of the native libraries
used by your application.
=================================================================
=================================================================
Native stacktrace:
=================================================================
0xa712b4 - ./test.x86_64 : (null)
0xa7162c - ./test.x86_64 : (null)
0xa640d8 - ./test.x86_64 : (null)
0xa7d3aa - ./test.x86_64 : (null)
0x7f6454056730 - /lib/x86_64-linux-gnu/libpthread.so.0 : (null)
0x24e451f - ./test.x86_64 : _ZN6GDMono20_load_api_assembliesEv
0x24e46de - ./test.x86_64 : _ZN6GDMono26initialize_load_assembliesEv
0x2251b22 - ./test.x86_64 : _ZN14CSharpLanguage4initEv
0x101436f - ./test.x86_64 : _ZN12ScriptServer14init_languagesEv
0x255d85e - ./test.x86_64 : _ZN4Main6setup2Em
0x2561d7f - ./test.x86_64 : _ZN4Main5setupEPKciPPcb
0x995438 - ./test.x86_64 : main
0x7f6453d1509b - /lib/x86_64-linux-gnu/libc.so.6 : __libc_start_main
0x99571e - ./test.x86_64 : (null)
=================================================================
Telemetry Dumper:
=================================================================
Pkilling 0x7f6448ffe700 from 0x7f6451ac3880
* Assertion: should not be reached at threads.c:6254
Entering thread summarizer pause from 0x7f6451ac3880
Finished thread summarizer pause from 0x7f6451ac3880.
Waiting for dumping threads to resume
=================================================================
External Debugger Dump:
=================================================================
[New LWP 4278]
[New LWP 4283]
[New LWP 4284]
[New LWP 4285]
[New LWP 4286]
[New LWP 4287]
[New LWP 4288]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
0x00007f64540560ca in __waitpid (pid=pid@entry=4292, stat_loc=stat_loc@entry=0x7ffffcd3fa64, options=options@entry=0) at ../sysdeps/unix/sysv/linux/waitpid.c:30
30 ../sysdeps/unix/sysv/linux/waitpid.c: No such file or directory.
Id Target Id Frame
* 1 Thread 0x7f6451ac3880 (LWP 4277) "test.x86_64" 0x00007f64540560ca in __waitpid (pid=pid@entry=4292, stat_loc=stat_loc@entry=0x7ffffcd3fa64, options=options@entry=0) at ../sysdeps/unix/sysv/linux/waitpid.c:30
2 Thread 0x7f645155c700 (LWP 4278) "test.x86_64" futex_abstimed_wait_cancelable (private=0, abstime=0x0, expected=0, futex_word=0x513a928) at ../sysdeps/unix/sysv/linux/futex-internal.h:205
3 Thread 0x7f644bcb4700 (LWP 4283) "test.x8:disk$0" futex_wait_cancelable (private=0, expected=0, futex_word=0x53223d8) at ../sysdeps/unix/sysv/linux/futex-internal.h:88
4 Thread 0x7f644b39a700 (LWP 4284) "test.x86_64" futex_abstimed_wait_cancelable (private=0, abstime=0x0, expected=0, futex_word=0x53851f8) at ../sysdeps/unix/sysv/linux/futex-internal.h:205
5 Thread 0x7f644b0d8700 (LWP 4285) "test.x86_64" 0x00007ffffcdd3b34 in clock_gettime ()
6 Thread 0x7f644b097700 (LWP 4286) "test.x86_64" 0x00007f6453db7720 in __GI___nanosleep (requested_time=requested_time@entry=0x7f644b096ce0, remaining=remaining@entry=0x0) at ../sysdeps/unix/sysv/linux/nanosleep.c:28
7 Thread 0x7f644a7ff700 (LWP 4287) "SGen worker" futex_wait_cancelable (private=0, expected=0, futex_word=0x312a888 <work_cond+40>) at ../sysdeps/unix/sysv/linux/futex-internal.h:88
8 Thread 0x7f6448ffe700 (LWP 4288) "Finalizer" futex_abstimed_wait_cancelable (private=0, abstime=0x0, expected=0, futex_word=0x3118020 <finalizer_sem>) at ../sysdeps/unix/sysv/linux/futex-internal.h:205
Thread 8 (Thread 0x7f6448ffe700 (LWP 4288)):
#0 futex_abstimed_wait_cancelable (private=0, abstime=0x0, expected=0, futex_word=0x3118020 <finalizer_sem>) at ../sysdeps/unix/sysv/linux/futex-internal.h:205
#1 do_futex_wait (sem=sem@entry=0x3118020 <finalizer_sem>, abstime=0x0) at sem_waitcommon.c:111
#2 0x00007f6454054988 in __new_sem_wait_slow (sem=sem@entry=0x3118020 <finalizer_sem>, abstime=0x0) at sem_waitcommon.c:181
#3 0x00007f64540549f9 in __new_sem_wait (sem=sem@entry=0x3118020 <finalizer_sem>) at sem_wait.c:42
#4 0x0000000000b2e4a8 in mono_os_sem_wait (flags=MONO_SEM_FLAGS_ALERTABLE, sem=0x3118020 <finalizer_sem>) at ../../mono/utils/mono-os-semaphore.h:203
#5 mono_coop_sem_wait (flags=MONO_SEM_FLAGS_ALERTABLE, sem=0x3118020 <finalizer_sem>) at ../../mono/utils/mono-coop-semaphore.h:41
#6 finalizer_thread (unused=unused@entry=0x0) at gc.c:967
#7 0x0000000000c02185 in start_wrapper_internal (stack_ptr=0x7f6448fff000, start_info=0x0) at threads.c:1222
#8 start_wrapper (data=0x5daef00) at threads.c:1295
#9 0x00007f645404bfa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
#10 0x00007f6453dea4cf in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
Thread 7 (Thread 0x7f644a7ff700 (LWP 4287)):
#0 futex_wait_cancelable (private=0, expected=0, futex_word=0x312a888 <work_cond+40>) at ../sysdeps/unix/sysv/linux/futex-internal.h:88
#1 __pthread_cond_wait_common (abstime=0x0, mutex=0x312a8a0 <lock>, cond=0x312a860 <work_cond>) at pthread_cond_wait.c:502
#2 __pthread_cond_wait (cond=cond@entry=0x312a860 <work_cond>, mutex=mutex@entry=0x312a8a0 <lock>) at pthread_cond_wait.c:655
#3 0x0000000000c6cc86 in mono_os_cond_wait (mutex=0x312a8a0 <lock>, cond=0x312a860 <work_cond>) at ../../mono/utils/mono-os-mutex.h:177
#4 get_work (job=<synthetic pointer>, do_idle=<synthetic pointer>, work_context=<synthetic pointer>, worker_index=0) at sgen-thread-pool.c:165
#5 thread_func (data=<optimized out>) at sgen-thread-pool.c:196
#6 0x00007f645404bfa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
#7 0x00007f6453dea4cf in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
Thread 6 (Thread 0x7f644b097700 (LWP 4286)):
#0 0x00007f6453db7720 in __GI___nanosleep (requested_time=requested_time@entry=0x7f644b096ce0, remaining=remaining@entry=0x0) at ../sysdeps/unix/sysv/linux/nanosleep.c:28
#1 0x00007f6453de2874 in usleep (useconds=<optimized out>) at ../sysdeps/posix/usleep.c:32
#2 0x00000000024cdc02 in JoypadLinux::monitor_joypads() ()
#3 0x000000770000007c in ?? ()
#4 0x0000000005675610 in ?? ()
#5 0x0000000000000018 in ?? ()
#6 0x00007f6424000b50 in ?? ()
#7 0x706e692f7665642f in ?? ()
#8 0x746e6576652f7475 in ?? ()
#9 0xffffffffff003133 in ?? ()
#10 0x00007f644b097700 in ?? ()
#11 0x00007ffffcd40ba0 in ?? ()
#12 0x00007f6453d7556a in __GI___libc_malloc (bytes=140068732366120) at malloc.c:3057
#13 0x00007ffffcd409be in ?? ()
#14 0x00007ffffcd409bf in ?? ()
#15 0x00007f644b097700 in ?? ()
#16 0x00007ffffcd40ba0 in ?? ()
#17 0x0000000001f366d5 in ThreadPosix::thread_callback(void*) ()
#18 0x0000000000000000 in ?? ()
Thread 5 (Thread 0x7f644b0d8700 (LWP 4285)):
#0 0x00007ffffcdd3b34 in clock_gettime ()
#1 0x00007f6453df7ff6 in __GI___clock_gettime (clock_id=4, tp=0x7f644b0d7d00) at ../sysdeps/unix/clock_gettime.c:115
#2 0x0000000000cd5862 in OS_Unix::get_ticks_usec() const [clone .constprop.10157] ()
#3 0x000000000000c1d1 in ?? ()
#4 0x0000000017739113 in ?? ()
#5 0x00007ffffcd41780 in ?? ()
#6 0x0000000000ee04bf in OS::get_ticks_msec() const ()
#7 0x0000000001e9b640 in ?? ()
#8 0x0000000001f77f9c in AudioDriverPulseAudio::thread_func(void*) ()
#9 0x0000000000000018 in ?? ()
#10 0x00007f644b0d7d68 in ?? ()
#11 0xffffffff00000000 in ?? ()
#12 0x00000000000004b0 in ?? ()
#13 0x00007ffffcd40ba0 in ?? ()
#14 0x00007f6453d7556a in __GI___libc_malloc (bytes=2048) at malloc.c:3057
#15 0x00007ffffcd408fe in ?? ()
#16 0x00007ffffcd408ff in ?? ()
#17 0x00007f644b0d8700 in ?? ()
#18 0x00007ffffcd40ba0 in ?? ()
#19 0x0000000001f366d5 in ThreadPosix::thread_callback(void*) ()
#20 0x0000000000000000 in ?? ()
Thread 4 (Thread 0x7f644b39a700 (LWP 4284)):
#0 futex_abstimed_wait_cancelable (private=0, abstime=0x0, expected=0, futex_word=0x53851f8) at ../sysdeps/unix/sysv/linux/futex-internal.h:205
#1 do_futex_wait (sem=sem@entry=0x53851f8, abstime=0x0) at sem_waitcommon.c:111
#2 0x00007f6454054988 in __new_sem_wait_slow (sem=0x53851f8, abstime=0x0) at sem_waitcommon.c:181
#3 0x0000000001ebbe2d in SemaphorePosix::wait() [clone .part.0] ()
#4 0x00007f644b39b020 in ?? ()
#5 0x0000000001368913 in VisualServerScene::_gi_probe_bake_thread() ()
#6 0x0000000000000008 in ?? ()
#7 0x00000000053a9ad0 in ?? ()
#8 0x0000000000000000 in ?? ()
Thread 3 (Thread 0x7f644bcb4700 (LWP 4283)):
#0 futex_wait_cancelable (private=0, expected=0, futex_word=0x53223d8) at ../sysdeps/unix/sysv/linux/futex-internal.h:88
#1 __pthread_cond_wait_common (abstime=0x0, mutex=0x5322388, cond=0x53223b0) at pthread_cond_wait.c:502
#2 __pthread_cond_wait (cond=0x53223b0, mutex=0x5322388) at pthread_cond_wait.c:655
#3 0x00007f645066de83 in ?? () from /usr/lib/x86_64-linux-gnu/dri/i965_dri.so
#4 0x00007f645066dbd7 in ?? () from /usr/lib/x86_64-linux-gnu/dri/i965_dri.so
#5 0x00007f645404bfa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
#6 0x00007f6453dea4cf in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
Thread 2 (Thread 0x7f645155c700 (LWP 4278)):
#0 futex_abstimed_wait_cancelable (private=0, abstime=0x0, expected=0, futex_word=0x513a928) at ../sysdeps/unix/sysv/linux/futex-internal.h:205
#1 do_futex_wait (sem=sem@entry=0x513a928, abstime=0x0) at sem_waitcommon.c:111
#2 0x00007f6454054988 in __new_sem_wait_slow (sem=0x513a928, abstime=0x0) at sem_waitcommon.c:181
#3 0x0000000001ebbe2d in SemaphorePosix::wait() [clone .part.0] ()
#4 0x000000000513a960 in ?? ()
#5 0x0000000000e58e94 in _IP_ResolverPrivate::_thread_function(void*) ()
#6 0x0000000000000000 in ?? ()
Thread 1 (Thread 0x7f6451ac3880 (LWP 4277)):
#0 0x00007f64540560ca in __waitpid (pid=pid@entry=4292, stat_loc=stat_loc@entry=0x7ffffcd3fa64, options=options@entry=0) at ../sysdeps/unix/sysv/linux/waitpid.c:30
#1 0x0000000000a714a1 in dump_native_stacktrace (mctx=mctx@entry=0x7ffffcd403c0, signal=0x2601137 "SIGILL") at mini-posix.c:1089
#2 0x0000000000a7162c in mono_dump_native_crash_info (signal=signal@entry=0x2601137 "SIGILL", mctx=mctx@entry=0x7ffffcd403c0, info=info@entry=0x7ffffcd406b0) at mini-posix.c:1135
#3 0x0000000000a640d8 in mono_handle_native_crash (signal=signal@entry=0x2601137 "SIGILL", mctx=mctx@entry=0x7ffffcd403c0, info=info@entry=0x7ffffcd406b0) at mini-exceptions.c:3423
#4 0x0000000000a7d3aa in mono_sigill_signal_handler (_dummy=4, _info=0x7ffffcd406b0, context=0x7ffffcd40580) at mini-runtime.c:3244
#5 <signal handler called>
#6 0x00000000024e451f in GDMono::_load_api_assemblies() ()
#7 0x0000000005cf5c00 in ?? ()
#8 0x00000000024e46de in GDMono::initialize_load_assemblies() ()
#9 0x0000000005bec830 in ?? ()
#10 0x00007ffffcd40c60 in ?? ()
#11 0x0000000005cf5c00 in ?? ()
#12 0x00007ffffcd40b60 in ?? ()
#13 0x0000000002251b22 in CSharpLanguage::init() ()
#14 0x0000000000000000 in ?? ()
[Inferior 1 (process 4277) detached]
=================================================================
Basic Fault Address Reporting
=================================================================
Memory around native instruction pointer (0x24e451f):0x24e450f 8d 8a 3c 00 48 8d 3d 96 3c 40 00 e8 11 81 aa fe ..<.H.=.<@......
0x24e451f 0f 0b 66 66 66 66 66 66 2e 0f 1f 84 00 00 00 00 ..ffffff........
0x24e452f 00 45 31 c9 4c 8d 05 d6 b5 3c 00 48 8d 0d f7 32 .E1.L....<.H...2
0x24e453f 1f 00 ba ab 03 00 00 48 8d 35 53 8a 3c 00 48 8d .......H.5S.<.H.
=================================================================
Managed Stacktrace:
=================================================================
=================================================================
Aborted
```
Log file:
[2020_04_14 12.56.01 (4277).txt](https://github.com/godotengine/godot/files/4475101/2020_04_14.12.56.01.4277.txt)
**Steps to reproduce:**
Follow the instructions on the page, then add "[Tool]" and the pragma tags.
**Minimal reproduction project:**
[test.zip](https://github.com/godotengine/godot/files/4475119/test.zip)
|
bug,platform:linuxbsd,topic:dotnet,crash
|
low
|
Critical
|
599,536,060 |
opencv
|
Assertion Failed on Batch with EfficientNetB0 (TF)
|
##### System information (version)
- OpenCV => 4.3.0-dev
- Operating System / Platform => Ubuntu 18.04
- Compiler => Python 3.6.8
##### Detailed description
Using OpenCV DNN with an EfficientNetB0 model saved from TensorFlow, I can input a batch of 1 image, but the forward pass breaks when I input a larger batch. The error is:
> Traceback (most recent call last):
> File "efficientnet_batch_error.py", line 18, in <module>
> output = model.forward(output_layers)
> cv2.error: OpenCV(4.3.0-dev) /home/computer/workspace/opencv/modules/dnn/include/opencv2/dnn/shape_utils.hpp:171: error: (-215:Assertion failed) start <= (int)shape.size() && end <= (int)shape.size() && start <= end in function 'total'
##### Steps to reproduce
```
from cv2 import dnn
import numpy as np
model_bytes = open('b0.pb', 'rb').read()
model = dnn.readNetFromTensorflow(model_bytes)
del model_bytes
ln = model.getLayerNames()
output_layers = [ln[i[0] - 1] for i in model.getUnconnectedOutLayers()]
output_layers = output_layers[-1]
# This works
input_images = np.random.random((1, 3, 224, 224))
model.setInput(input_images)
output = model.forward(output_layers)
print(output.shape)
# This breaks
input_images = np.random.random((32, 3, 224, 224))
model.setInput(input_images)
output = model.forward(output_layers)
print(output.shape)
```
The model can be found [here](https://drive.google.com/open?id=1xnV2uydgaUffVTCwBhz51qN2qcZKunVa).
|
bug,category: dnn
|
low
|
Critical
|
599,550,928 |
godot
|
Mouse exited signal gets emitted upon moving the mouse (On a Control node) after pausing
|
**Godot version:**
Godot 3.2.1.stable
**OS/device including version:**
Windows 10 1909
**Issue description:**
The mouse exited signal gets emitted upon moving the mouse when the game is paused and the mouse is over a UI node that blocks the mouse.
**Steps to reproduce:**
- Create a KinematicBody2D
- Make it pickable
- Attach mouse entered / mouse exited signals
- Add a way to pause your game
- Upon pausing show a ColorRect which takes up the whole screen with Mouse Filter set to Pass/Stop
- Play
- Enter the KinematicBody2D
- Pause the game
- Move the mouse
**Minimal reproduction project:**
https://drive.google.com/file/d/1cOBPeKE4dtqZXh15ciZTOG5fytpBnLhq/view?usp=sharing
|
bug,topic:core,confirmed,topic:gui
|
low
|
Major
|
599,563,588 |
godot
|
Can't use arrows in ItemList after pressing a letter
|
**Godot version:**
3.2.1
**Issue description:**
When you click an ItemList, you can move through its elements with the arrow keys. But you can't anymore after pressing a letter to search for something; you need to click again or wait a few seconds.
After checking the code, it seems like incremental search has a separate path for arrow keys, but I don't quite understand what it is for. In the end it just blocks the arrow keys after searching.
**Steps to reproduce:**
1. Create ItemList with some elements
2. Press some letter
3. Try to use arrow keys
|
bug,discussion,confirmed,usability,topic:gui
|
low
|
Minor
|
599,565,001 |
flutter
|
video_player does not support interlaced video streams.
|
I have tried the Video Player plugin for Flutter (video_player 0.10.8+1) example code on Android with a video stream that is interlaced, and was sadly surprised that it does not de-interlace the video.
I tried the same stream with ExoPlayer directly and it does the job,
so I am a little confused, as I know that video_player uses ExoPlayer on Android.
Could it be related to the Flutter architecture?
Any suggestions?
|
c: new feature,d: api docs,p: video_player,package,team-ecosystem,P3,triaged-ecosystem
|
low
|
Minor
|
599,580,064 |
vscode
|
Investigate into JAWS support
|
Look into how VS Code works with JAWS and identify the current limitations.
@GS-Avaluka feedback is very welcome
|
accessibility,windows,editor-core,under-discussion
|
medium
|
Critical
|
599,589,019 |
godot
|
Wrong blending in-between two AnimationNodeStateMachine-states
|
**Godot version:**
3.2.1.stable
**OS/device including version:**
Lenovo Yoga 730-13IKB (Windows 10)
**Issue description:**

**In an immediate transition between two animation nodes, the add_amount is changed.**
As you can see in the upper GIF, the two arms of the player are raised while blending from one state to the other, although there is no need to take such a long detour because the arms are down in both states.
This is the BlendTree inside the "CLIMB" state; it seems like the two add_amount factors are lowered from 1 to 0.8 during the blending between the states.

**Steps to reproduce:**
After you open the project, open the AnimationTree in the "Spatial.tscn" scene. Go into the AnimationTree "go_off_road" and set "Seek/seek_position" to 1.
If you return now and click the play button of the state with immediate traveling activated, you can see the sphere going off the road, which is wrong because the AnimationTree says "go_off_road" is added with an add_amount of 1.
**Minimal reproduction project:**
[WrongAnimationBlending.zip](https://github.com/godotengine/godot/files/4475548/WrongAnimationBlending.zip)
|
bug,topic:core
|
low
|
Minor
|
599,689,393 |
opencv
|
Possible error in documentation of initUndistortRectifyMap
|
OpenCV 4.2 (but valid also for other versions).
In the documentation of `initUndistortRectifyMap()` we find:
> In case of a stereo camera, this function is called twice: once for each camera head, after stereoRectify, which in its turn is called after stereoCalibrate. But if the stereo camera was not calibrated, it is still possible to compute the rectification transformations directly from the fundamental matrix using stereoRectifyUncalibrated. For each camera, the function computes homography H as the rectification transformation in a pixel domain, not a rotation matrix R in 3D space. **R can be computed from H as**
> R = cameraMatrix^(−1) ⋅ H ⋅ cameraMatrix
> **where cameraMatrix can be chosen arbitrarily**.
But, in the general pinhole camera model we have:
`cameraMatrix ⋅ [R0 | t0]`
where `[R0 | t0]` is the extrinsic 3x4 matrix.
Then we apply a rectification homography H (e.g. computed with Loop-Zhang algorithm) in the pixel domain, and after that we choose an arbitrary `newCameraMatrix` to show the image within the frame. Then the final projection matrix is:
`newCameraMatrix ⋅ H ⋅ cameraMatrix ⋅ [R0 | t0]`
So, to my understanding, the rotation in 3D space is only `H ⋅ cameraMatrix`, and it seems wrong to write `R = cameraMatrix^(−1) ⋅ H ⋅ cameraMatrix` as in the documentation, simply because the right-hand cameraMatrix is not arbitrary (it is the calibrated camera matrix), while the left-hand one is the arbitrary one.
Please correct me if I am wrong.
Thanks.
|
category: documentation,category: calib3d,RFC
|
low
|
Critical
|
599,696,334 |
flutter
|
[google_maps_flutter] Should have a callback when map finished rendering
|
The Google Maps package should expose an API to notify when the map has finished rendering. This is to be distinguished from [onMapCreated](https://github.com/flutter/packages/blob/main/packages/google_maps_flutter/google_maps_flutter/lib/src/google_map.dart#L94), which is called when the view is ready, not necessarily when the content of the map has been rendered.
[`mapViewDidFinishTileRendering`](https://developers.google.com/maps/documentation/ios-sdk/reference/protocol_g_m_s_map_view_delegate-p#adaaa744f353baf4fc3f41b3b6801fba5): on iOS.
[`onMapLoaded`](https://developers.google.com/android/reference/com/google/android/gms/maps/GoogleMap.OnMapLoadedCallback): on Android.
|
c: new feature,platform-android,platform-ios,p: maps,package,c: proposal,team-ecosystem,P3,triaged-ecosystem
|
medium
|
Major
|
599,699,660 |
go
|
x/mobile/cmd: Applications are signed with insecure hash algorithm (SHA1)
|
### Note:
Please note that I am opening this security issue publicly after initially reporting the issues over email to the great people over at the Golang security team and then being told to open it here due to gomobile not being an officially supported project.
### Description:
I decided to take a look at the signing in the gomobile tool and found out that applications are currently signed with SHA1 hashes in the [cmd/gomobile/cert.go](https://github.com/golang/mobile/blob/master/cmd/gomobile/cert.go) and [cmd/gomobile/writer.go](https://github.com/golang/mobile/blob/master/cmd/gomobile/writer.go) files. SHA1 has been treated as insecure since 2005 and should be replaced as soon as possible, because collision attacks could let an attacker make it look as if the application hasn't been tampered with when it might have been. In practice, this means an attacker could sneak in modifications while still passing the checksum checks for the application.
From the looks of it, the supported checksum algorithms in the v1 signing scheme for Android are MD5, SHA1 and SHA-256. As both MD5 and SHA1 have been broken by collision attacks in recent years, I highly suggest moving over to SHA-256 for signing all the applications, as it has yet to be cracked.
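For illustration, here is a minimal, hypothetical sketch (not gomobile's actual code) of producing the base64-encoded SHA-256 digest of a file, which is the form used for `SHA-256-Digest` entries in a v1 `MANIFEST.MF`; the file name below is made up:
```go
package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
	"io"
	"os"
)

// sha256Digest returns the base64-encoded SHA-256 digest of a file's
// contents, the form used for "SHA-256-Digest" manifest entries.
func sha256Digest(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return base64.StdEncoding.EncodeToString(h.Sum(nil)), nil
}

func main() {
	// "classes.dex" is a hypothetical APK entry, used only for illustration.
	d, err := sha256Digest("classes.dex")
	if err != nil {
		panic(err)
	}
	fmt.Println("SHA-256-Digest:", d)
}
```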
### What operating system and processor architecture are you using (`go env`)?
Linux but compiling to mobile per description above.
### What did you expect to see?
Signing applications and producing a checksum using a secure hashing algorithm, without known collision attacks.
### What did you see instead?
Use of SHA1 as an insecure cryptographic primitive when signing applications.
|
help wanted,NeedsFix,mobile
|
low
|
Minor
|
599,703,151 |
go
|
x/mobile: Add support for signing Android applications using v2+ scheme
|
### Note:
This was suggested as part of the security issue filed as part of #38438 and might be relevant.
### Description:
The signing of Android applications in gomobile is currently using the old v1 signing scheme and not the new and improved v2 or v3 schemes that have been available for quite some time now. See the [Application Signing](https://source.android.com/security/apksigning/) documentation for more information regarding the various signing schemes.
The v2+ schemes introduce both security and performance improvements and could be beneficial for improving the application signing. There are two ways this could be done, since v2+ remains compatible with older Android phones as long as applications are signed with both (or possibly all three) schemes. This means that we could fix the security issue mentioned in #38438 and still sign with the v1 scheme to support Android 7 and below, but also sign with v2+ for better security and install performance on newer versions. Another possible way to solve it would be to drop support for v1 and only support v2+, and thus Android 7 and newer. The choice of the best option depends on how you look at it, but I will leave that to someone else to decide.
|
help wanted,NeedsFix,mobile
|
low
|
Major
|
599,710,337 |
angular
|
Angular 9.1.1 fixture.query() can't see DOM built directly with Renderer
|
# 🐞 bug report
### Affected Package
Probably `@angular/compiler`
### Is this a regression?
Yes. Test success in angular 8 (ViewEngine)
### Description
When an element is inserted into the DOM directly for performance reasons, I can't query it in tests using `fixture.query()`.
If I use `fixture.debugElement.queryAll(By.css('span')).length` I get 1 element.
But when I query the DOM directly with `fixture.nativeElement.querySelectorAll('span').length` I get 2 elements.
## 🔬 Minimal Reproduction
Component:
```
<span>foo</span>
<ng-container #footerSection></ng-container>
```
```
@ViewChild('footerSection', {read: ViewContainerRef, static: true})
footerSection: ViewContainerRef;
....
constructor(private renderer: Renderer2) {}
....
if (this.footerTemplate instanceof TemplateRef) {
this.footerSection.createEmbeddedView(this.footerTemplate);
} else {
let span = this.renderer.createElement('span');
this.renderer.addClass(span, 'footer');
this.renderer.appendChild(
span, this.renderer.createText(this.footerTemplate));
this.renderer.appendChild(this.elementRef.nativeElement, span);
}
```
Test:
```
fit('Should show string as footer', () => {
component.templateId = 2;
fixture.detectChanges();
console.log(fixture.debugElement.queryAll(By.css('span')).length); // 1 element
console.log(fixture.nativeElement.querySelectorAll('span').length); // 2 elements
let span = fixture.debugElement.queryAll(By.css('span'))[1];
expect(span).toBeTruthy();
expect(span.nativeElement.textContent.trim()).toEqual('test string');
});
```
## 🌍 Your Environment
**Angular Version:**
<pre><code>
Angular CLI: 9.1.1
Node: 12.14.1
OS: darwin x64
Angular: 9.1.1
... animations, bazel, cli, common, compiler, compiler-cli, core
... forms, language-service, localize, platform-browser
... platform-browser-dynamic, router
Ivy Workspace: Yes
Package Version
-----------------------------------------------------------
@angular-devkit/architect 0.803.26
@angular-devkit/build-angular 0.803.26
@angular-devkit/build-optimizer 0.803.26
@angular-devkit/build-webpack 0.803.26
@angular-devkit/core 9.1.1
@angular-devkit/schematics 9.1.1
@angular/cdk 8.2.3
@angular/service-worker 8.2.14+3.sha-d2f7315
@bazel/bazel 2.0.0
@bazel/bazel-darwin_x64 2.0.0
@bazel/hide-bazel-files 1.5.0
@bazel/ibazel v0.11.1
@bazel/karma 1.2.2
@bazel/protractor 1.2.2
@bazel/rollup 1.2.2
@bazel/terser 1.2.2
@bazel/typescript 1.2.2
@ngtools/webpack 8.3.26
@schematics/angular 9.1.1
@schematics/update 0.901.1
rxjs 6.5.4
typescript 3.8.3
webpack 4.39.2
</code></pre>
|
area: testing,state: confirmed,P3
|
low
|
Critical
|
599,726,396 |
material-ui
|
[docs] Missing component name API section for components
|
In #20434, a Component Name section was added to many components.
Unfortunately, the section seems to be missing for the following components:
* ClickAwayListener
* Fade
* Grow
* Hidden
* MenuList
* Modal
* NoSsr
* Popper
* Portal
* RadioGroup
* RootRef
* Slide
* SwipeableDrawer
* TextareaAutosize
* TouchRipple
* Zoom
|
core,scope: docs-infra
|
low
|
Major
|
599,758,729 |
rust
|
"constant expression depends on a generic parameter" error can be improved.
|
Original discussion happened at https://github.com/rust-lang/rust/pull/70452#discussion_r399406715.
<hr/>
This was my last proposal in that comment thread (https://github.com/rust-lang/rust/pull/70452#discussion_r399580683):
> So perhaps a better phrasing for the error itself might be "cannot prove/determine/guarantee/etc. that constant expression will evaluate successfully" and then we can search its `Substs` for type/const parameters and list them out to give the more helpful information.
>
> We should also link [`https://github.com/rust-lang/rust/issues/68436`](https://github.com/rust-lang/rust/issues/68436), in some sort of help messaging along the lines of "there is no way to currently write the necessary `where` clause, watch this space".
cc @rust-lang/wg-diagnostics @varkor @yodaldevoid
|
C-enhancement,A-diagnostics,T-compiler,A-const-eval,PG-const-generics
|
low
|
Critical
|
599,768,401 |
flutter
|
Calling layout from performResize should throw assert
|
Calling `layout` on a child from within `performResize` is illegal; `performResize` is only allowed to operate on the given constraints, see https://master-api.flutter.dev/flutter/rendering/RenderBox/performResize.html.
|
framework,a: error message,P2,team-framework,triaged-framework
|
low
|
Minor
|
599,784,024 |
rust
|
[rustdoc] Remove object methods coming from trait impls in search results?
|
Currently, if an object `A` implements a trait `B`, the search will show both `A::whatever` and `B::whatever`. I think this might be a bit too much. Removing this would additionally make the search-index file lighter.
What do you think @rust-lang/rustdoc ?
|
T-rustdoc,C-discussion,A-rustdoc-search,A-trait-objects
|
low
|
Minor
|
599,822,195 |
go
|
net/mail: mail.ParseAddress() does not handle comments correctly
|
### What version of Go are you using (`go version`)?
<pre>
go version go1.14.2 linux/amd64
go go1.13.7 playground
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
GO111MODULE="on"
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/rmccoll/.cache/go-build"
GOENV="/home/rmccoll/.config/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOINSECURE=""
GONOPROXY=""
GONOSUMDB=""
GOOS="linux"
GOPATH="/home/rmccoll/go"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/local/go"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
AR="ar"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD="/dev/null"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build270827567=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
https://play.golang.org/p/SGqTDFa_wVA
```
package main
import (
"fmt"
"net/mail"
)
func main() {
m, err := mail.ParseAddress(`Pete(A nice \) chap) <pete(his account)@silly.test(his host)>`)
fmt.Println(m, err)
}
```
### What did you expect to see?
As much as I would like it not to be, I believe this is a valid address per RFC 5322. It is included as an example in Appendix A.5 White Space, Comments, and Other Oddities https://tools.ietf.org/html/rfc5322#appendix-A.5
```
"Pete" <[email protected]> <nil>
```
### What did you see instead?
```
<nil> mail: missing @ in addr-spec
```
|
NeedsInvestigation
|
low
|
Critical
|
599,823,007 |
pytorch
|
TorchScript Support for Named Tensors
|
## 🚀 Feature
Currently, the JIT doesn't seem to support named tensors. When tracing or scripting a module, an error is thrown about TorchScript not supporting named tensors.
For example, trying to script a module using `align_to` gives:
```python-traceback
RuntimeError:
Tried to access nonexistent attribute or method 'align_to' of type 'Tensor'.:
File "<ipython-input-85-f71c2bbde53b>", line 4
def forward(self, x):
return x.align_to('A', 'B')
~~~~~~~~~~ <--- HERE
```
And upon tracing, you get
```python-traceback
RuntimeError: NYI: Named tensors are currently unsupported in TorchScript. As a workaround please drop names via `tensor = tensor.rename(None)`. (guardAgainstNamedTensor at /opt/conda/conda-bld/pytorch_1579022051443/work/torch/csrc/jit/pybind_utils.h:335)
```
## Motivation
It would be useful to be able to use named tensors in scripted modules. It would enable easier interoperability between modules.
## Pitch
As a start, probably only the `align_to` and `rename` operations plus propagation of names in and out of script modules would cover most of our use cases. `flatten` and `unflatten` would also be nice to have.
## Alternatives
Manually adding transpositions and permutes in between modules based on the user's understanding of their conventions.
cc @suo
|
triage review,oncall: jit,triaged
|
low
|
Critical
|
599,825,678 |
flutter
|
flutter app crashes after a few hot reloads. On physical device, samsung S10e.
|
My Flutter app crashes after a few hot reloads, on a physical device (Samsung S10e).
## Steps to Reproduce
Just debug/run and save modifications to trigger a hot reload.
<details>
<summary>Logs</summary>
```
Performing hot reload...
Syncing files to device SM G970F...
W/yeay.tv.yeay(15176): 0xebadde09 skipped times: 0
F/libc (15176): Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0xd1400040d17 in tid 15455 (Thread-18), pid 15176 (yeay.tv.yeay)
*** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
Build fingerprint: 'samsung/beyond0ltexx/beyond0:10/QP1A.190711.020/G970FXXU4CTC9:user/release-keys'
Revision: '26'
ABI: 'arm64'
Timestamp: 2020-04-14 17:01:27-0300
pid: 15176, tid: 15455, name: Thread-18 >>> yeay.tv.yeay <<<
uid: 10381
signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0xd1400040d17
x0 0000007c1a9221f9 x1 0000007cb8633298 x2 0000007c1a922208 x3 000000000000006a
x4 0000000000000010 x5 0000000000000068 x6 404040403fff6b6b x7 7f7f7f7f7f7f7f7f
x8 0000007c1a9221f9 x9 00000d1400040d18 x10 0000007ce2700041 x11 0000007ce4f34d78
x12 0000007ce4f34d94 x13 0000000000000000 x14 0000000000000001 x15 0000007cb8632f69
x16 0000007ce58fd638 x17 0000007ded824318 x18 0000007cdce28000 x19 0000007c9cbd4180
x20 0000007cb86332c8 x21 0000007cb8632f00 x22 0000000000000000 x23 0000000000000000
x24 0000007cb86332d0 x25 0000007c9cbd4288 x26 0000007cb8632fe8 x27 0000007cb8632f00
x28 0000000000000000 x29 0000007d53a29460
sp 0000007ce8a54e60 lr 0000007ce54bdb48 pc 0000007ce54bdb98
backtrace:
#00 pc 00000000015f7b98 /data/app/yeay.tv.yeay-ucEfZRkqannSihtDoQPJtw==/lib/arm64/libflutter.so!libflutter.so (offset 0x1210000) (BuildId: 51ddc8e5385a77c5)
#01 pc 00000000015f67bc /data/app/yeay.tv.yeay-ucEfZRkqannSihtDoQPJtw==/lib/arm64/libflutter.so!libflutter.so (offset 0x1210000) (BuildId: 51ddc8e5385a77c5)
#02 pc 00000000015f1a7c /data/app/yeay.tv.yeay-ucEfZRkqannSihtDoQPJtw==/lib/arm64/libflutter.so!libflutter.so (offset 0x1210000) (BuildId: 51ddc8e5385a77c5)
#03 pc 00000000016d01c8 /data/app/yeay.tv.yeay-ucEfZRkqannSihtDoQPJtw==/lib/arm64/libflutter.so!libflutter.so (offset 0x1210000) (BuildId: 51ddc8e5385a77c5)
#04 pc 00000000016c9318 /data/app/yeay.tv.yeay-ucEfZRkqannSihtDoQPJtw==/lib/arm64/libflutter.so!libflutter.so (offset 0x1210000) (BuildId: 51ddc8e5385a77c5)
#05 pc 00000000016c9848 /data/app/yeay.tv.yeay-ucEfZRkqannSihtDoQPJtw==/lib/arm64/libflutter.so!libflutter.so (offset 0x1210000) (BuildId: 51ddc8e5385a77c5)
#06 pc 00000000015ef448 /data/app/yeay.tv.yeay-ucEfZRkqannSihtDoQPJtw==/lib/arm64/libflutter.so!libflutter.so (offset 0x1210000) (BuildId: 51ddc8e5385a77c5)
#07 pc 0000000001611d84 /data/app/yeay.tv.yeay-ucEfZRkqannSihtDoQPJtw==/lib/arm64/libflutter.so!libflutter.so (offset 0x1210000) (BuildId: 51ddc8e5385a77c5)
#08 pc 0000000001612010 /data/app/yeay.tv.yeay-ucEfZRkqannSihtDoQPJtw==/lib/arm64/libflutter.so!libflutter.so (offset 0x1210000) (BuildId: 51ddc8e5385a77c5)
#09 pc 00000000018a2c78 /data/app/yeay.tv.yeay-ucEfZRkqannSihtDoQPJtw==/lib/arm64/libflutter.so!libflutter.so (offset 0x1210000) (BuildId: 51ddc8e5385a77c5)
#10 pc 00000000014efb50 /data/app/yeay.tv.yeay-ucEfZRkqannSihtDoQPJtw==/lib/arm64/libflutter.so!libflutter.so (offset 0x1210000) (BuildId: 51ddc8e5385a77c5)
#11 pc 00000000014efd08 /data/app/yeay.tv.yeay-ucEfZRkqannSihtDoQPJtw==/lib/arm64/libflutter.so!libflutter.so (offset 0x1210000) (BuildId: 51ddc8e5385a77c5)
#12 pc 0000000001225b5c /data/app/yeay.tv.yeay-ucEfZRkqannSihtDoQPJtw==/lib/arm64/libflutter.so!libflutter.so (offset 0x1210000) (BuildId: 51ddc8e5385a77c5)
#13 pc 000000000122aa40 /data/app/yeay.tv.yeay-ucEfZRkqannSihtDoQPJtw==/lib/arm64/libflutter.so!libflutter.so (offset 0x1210000) (BuildId: 51ddc8e5385a77c5)
#14 pc 000000000001800c /system/lib64/libutils.so (android::Looper::pollInner(int)+860) (BuildId: e401a05bdd74f2cd876793e31ceba528)
#15 pc 0000000000017c10 /system/lib64/libutils.so (android::Looper::pollOnce(int, int*, int*, void**)+56) (BuildId: e401a05bdd74f2cd876793e31ceba528)
#16 pc 0000000000014730 /system/lib64/libandroid.so (ALooper_pollOnce+96) (BuildId: 65dd3a31fde27dd73c2284fbc239fb7e)
#17 pc 000000000122a9c4 /data/app/yeay.tv.yeay-ucEfZRkqannSihtDoQPJtw==/lib/arm64/libflutter.so!libflutter.so (offset 0x1210000) (BuildId: 51ddc8e5385a77c5)
#18 pc 0000000001225abc /data/app/yeay.tv.yeay-ucEfZRkqannSihtDoQPJtw==/lib/arm64/libflutter.so!libflutter.so (offset 0x1210000) (BuildId: 51ddc8e5385a77c5)
#19 pc 00000000012288fc /data/app/yeay.tv.yeay-ucEfZRkqannSihtDoQPJtw==/lib/arm64/libflutter.so!libflutter.so (offset 0x1210000) (BuildId: 51ddc8e5385a77c5)
#20 pc 00000000000e3b24 /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+36) (BuildId: 13817077d0d892b63e2f982cf91d02fa)
#21 pc 0000000000085330 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+64) (BuildId: 13817077d0d892b63e2f982cf91d02fa)
Lost connection to device.
```
Flutter Doctor:
[√] Flutter (Channel stable, v1.12.13+hotfix.9, on Microsoft Windows [Version 10.0.17763.1098], locale pt-BR)
[√] Android toolchain - develop for Android devices (Android SDK version 29.0.2)
[√] Android Studio (version 3.6)
[√] Connected device (1 available)
• No issues found!
</details>
|
c: crash,tool,dependency: dart,t: hot reload,P2,team-tool,triaged-tool
|
low
|
Critical
|
599,865,044 |
flutter
|
Can't handle width inside an appBar of a Scaffold
|
Hello,
I can't manage to control the width of any child of an AppBar (on web or mobile), and I was wondering whether this is considered a bug or just part of the Material UI spec.
Thanks a lot
## Steps to Reproduce
1. Run `flutter create appbar_width `.
2. Update the files as follows:
```dart
import 'package:flutter/material.dart';
void main() {
runApp(MyApp());
}
class MyApp extends StatelessWidget {
// This widget is the root of your application.
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Testing appbar width',
home: MyHomePage(),
);
}
}
class MyHomePage extends StatelessWidget {
@override
Widget build(BuildContext context) {
final desiredWidth = 200.0;
return Scaffold(
appBar: PreferredSize(
preferredSize: Size(desiredWidth, 80),
child: AppBar(
flexibleSpace: Container(
width: desiredWidth,
height: 80,
color: Colors.red,
),
title: Text('Testing appbar width'),
),
),
body: Center(
child: Text(
'What a great app',
),
),
);
}
}
```
3. Run your app
**Expected results:**
A red container of width 200
**Actual results:**
A red container of screen width
```
flutter analyze
Analyzing appbar_width...
No issues found! (ran in 1.6s)
```
```
flutter doctor -v
[✓] Flutter (Channel master, v1.18.0-6.0.pre.4, on Mac OS X 10.15.3 19D76, locale fr-FR)
• Flutter version 1.18.0-6.0.pre.4 at /Users/jocelynkerouanton/Dev/flutter
• Framework revision fe41c9726a (34 hours ago), 2020-04-13 06:45:02 -0400
• Engine revision 521c1d4431
• Dart version 2.8.0 (build 2.8.0-dev.20.0 c9710e5059)
[✓] Android toolchain - develop for Android devices (Android SDK version 29.0.2)
• Android SDK at /Users/jocelynkerouanton/Library/Android/sdk
• Platform android-29, build-tools 29.0.2
• Java binary at: /Users/jocelynkerouanton/Library/Application
Support/JetBrains/Toolbox/apps/AndroidStudio/ch-0/191.6010548/Android
Studio.app/Contents/jre/jdk/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b49-5587405)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 11.4)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Xcode 11.4, Build version 11E146
• CocoaPods version 1.8.4
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 3.5)
• Android Studio at /Users/jocelynkerouanton/Library/Application
Support/JetBrains/Toolbox/apps/AndroidStudio/ch-0/191.6010548/Android
Studio.app/Contents
• Flutter plugin version 42.1.1
• Dart plugin version 191.8593
• Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b49-5587405)
[!] VS Code (version 1.41.1)
• VS Code at /Applications/Visual Studio Code.app/Contents
✗ Flutter extension not installed; install from
https://marketplace.visualstudio.com/items?itemName=Dart-Code.flutter
[✓] Connected device (2 available)
• Chrome • chrome • web-javascript • Google Chrome 81.0.4044.92
• Web Server • web-server • web-javascript • Flutter Tools
! Doctor found issues in 1 category.
```
|
framework,f: material design,d: api docs,team-design,triaged-design
|
low
|
Critical
|
599,886,169 |
godot
|
Mono: Check that min mono version is satisfied before building
|
Hello,
I've just downloaded godot repo and wanted to compile it as mono.
*scons p=x11 tools=yes module_mono_enabled=yes mono_glue=no*
During compilation an error occured:
```c
...
[Initial build] Compiling ==> modules/mono/glue/string_glue.cpp
[Initial build] Compiling ==> modules/mono/glue/string_name_glue.cpp
[Initial build] Compiling ==> modules/mono/mono_gd/gd_mono.cpp
modules/mono/mono_gd/gd_mono.cpp: In member function 'void GDMono::initialize()':
modules/mono/mono_gd/gd_mono.cpp:359:2: error: 'mono_install_unhandled_exception_hook' was not declared in this scope
mono_install_unhandled_exception_hook(&unhandled_exception_hook, nullptr);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
modules/mono/mono_gd/gd_mono.cpp:359:2: note: suggested alternative: 'mono_print_unhandled_exception'
mono_install_unhandled_exception_hook(&unhandled_exception_hook, nullptr);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
mono_print_unhandled_exception
scons: *** [modules/mono/mono_gd/gd_mono.linuxbsd.tools.64.o] Error 1
scons: building terminated because of errors.
```
I tried to clone the repository once again, but with the same result.
Did somebody make a mistake in the master branch, or am I doing something wrong?
(Normal compilation without Mono completes correctly.)
|
enhancement,topic:buildsystem,topic:dotnet
|
low
|
Critical
|
599,891,569 |
TypeScript
|
Add 'browser' module resolution, type checking native JS module imports
|
## Search Terms
Disable non-relative modules
Disable absolute module path
Change node_modules folder
Disable node_modules non-relative module lookup
## Suggestion
A compiler option to disable non-relative modules, since these are invalid in the browser. It could be:
- An option to disable `node_modules` lookup
- Or a new `moduleResolution` option: `browser`, with the same behavior as `node` but without looking into `node_modules` and allowing only `*.js` imports
## Use Cases
Now that native JS modules are becoming popular, it is possible to create web apps without any bundling and/or module path rewriting. Currently this works, but TypeScript allows non-relative modules, leaving the task of checking all imports for invalid paths to the user.
[Here](https://github.com/microsoft/TypeScript/issues/15479#issuecomment-300240856) it is stated that, by design, one should write the import path that works at runtime, so it is sensible to ask for a way to validate paths, considering that the code will run as-is in the browser.
## Examples
```ts
//No problem! Native JS modules supported via pathMapping:
import * from "/web_modules/redux.js"
//ok too
import * from "/src/ui/button.js";
//wrong! Non-relative modules are not supported in browser, but typescript allows it (should raise compiler error)
import * from "react";
///wrong! An import without '.js' extension will not be resolved correctly by the browser
import * from "/src/ui/button"
```
## Checklist
My suggestion meets these guidelines:
* [ X ] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [ X ] This wouldn't change the runtime behavior of existing JavaScript code
* [ X ] This could be implemented without emitting different JS based on the types of the expressions
* [ X ] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [ X ] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
|
Suggestion,Awaiting More Feedback
|
low
|
Critical
|
599,900,342 |
PowerToys
|
AutoHotkey and PowerToy improved compatibility
|
We're seeing a lot of AHK conflicts here. #1958 is where we have an ask to detect conflicting applications like AHK and warn users; however, I feel we should also see if there are things we can do to improve compatibility with it.
|
Area-Quality
|
low
|
Minor
|
599,921,483 |
go
|
time: LoadLocation on Windows does not read zone information from the Windows API
|
### What version of Go are you using (`go version`)?
<pre>
go version go1.13.4 windows/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
</pre></details>
### What did you do?
https://play.golang.org/p/VuJ3ofkdk8
```golang
package main
import (
"fmt"
"time"
)
func main() {
loc, err := time.LoadLocation("Europe/Paris")
if err != nil {
panic(err)
}
fmt.Println(loc)
}
```
I built this basic Go code and generated an executable file.
I get an error when I run this executable on a Windows machine without Go installed, because `time.LoadLocation` needs to access "zoneinfo.zip" located at `$GOROOT/lib/time/zoneinfo.zip`.
### What did you expect to see?
Europe/Paris
### What did you see instead?
open c:\go\lib\time\zoneinfo.zip: The system cannot find the file specified.
---
This is a followup to ticket #21881 .
Some workarounds are available:
1. Ship a copy of zoneinfo.zip and use LoadLocationFromTZData (added in go1.10)
2. Ship a copy of zoneinfo.zip embedded into the Go binary, via the new import statement `import _ "time/tzdata"` (will land in go1.15); a minimal sketch is shown below
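A minimal sketch of workaround 2, assuming a Go 1.15+ toolchain where the `time/tzdata` package is available:
```go
package main

import (
	"fmt"
	"time"

	// Importing time/tzdata embeds the IANA zone database into the binary
	// (Go 1.15+), so LoadLocation no longer needs $GOROOT/lib/time/zoneinfo.zip.
	_ "time/tzdata"
)

func main() {
	loc, err := time.LoadLocation("Europe/Paris")
	if err != nil {
		panic(err)
	}
	fmt.Println(loc) // prints "Europe/Paris" even without a system zone database
}
```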
However, the ability to ship a fixed, unchanging copy of the zone database is an orthogonal solution to properly using the available, up-to-date copy maintained by the OS. It has quite different properties and is not the same solution.
- this quickly gets out of date
- (e.g. Brazil DST rules changed recently in 2019, affecting > 200 million people)
- this makes the release artifacts much larger
- (for case 1) it is difficult to ensure any imported packages do this
- (for case 2) it is difficult to prevent any imported packages from doing this
- in neither case is the solution automatic
Shipping a copy of the zone database is an interesting solution if the system timezone information is unreliable, but this is really not the case in most environments where the system timezone database is continually updated by the host OS. It may also be preferable for Go to use the same timezone information as other programs on the host OS even if the database is not fully up-to-date.
On Linux, this has long since been a non-issue as Go always relies on the system timezone information. The default behavior should be to do the same on Windows.
There is real, system-updated timezone information available from the Windows API, that can resolve the above points. Go programs should make use of it, instead of bundling a large archive that quickly becomes outdated.
---
The classic win32 Windows API is not a 1:1 match for Go's existing timezone APIs, but there is a mapping table in CLDR:
- http://unicode.org/repos/cldr/trunk/common/supplemental/windowsZones.xml
Most of this information is already included in the time package for parsing the current Windows timezone:
- https://github.com/golang/go/blob/master/src/time/zoneinfo_abbrs_windows.go
It would be possible to extend this to support all the other timezone APIs in Go:
- https://github.com/golang/go/issues/21881#issuecomment-342347609
---
Alternatively, more recent versions of Windows do include the full zone database:
- https://github.com/golang/go/issues/21881#issuecomment-552239335
With a more extensive look at the available API surface in supported versions of Windows, other data sources may be found for this information too.
|
help wanted,OS-Windows,NeedsInvestigation
|
low
|
Critical
|
599,965,734 |
ant-design
|
When a selectTree node's text exceeds the dropdown menu width, the overflowing text should be truncated with an ellipsis (…)
|

- [ ] I have searched the [issues](https://github.com/ant-design/ant-design/issues) of this repository and believe that this is not a duplicate.
### What problem does this feature solve?
The problem of tree node text being too long.
### What does the proposed API look like?
On top of the existing behavior, add an option so that text exceeding a fixed width is truncated with an ellipsis (…).
<!-- generated by ant-design-issue-helper. DO NOT REMOVE -->
|
help wanted,Inactive,3.x
|
low
|
Major
|
599,982,274 |
pytorch
|
Poor elementwise_kernel performance becomes critical on small mini-batch sizes
|
I am experiencing a huge slowdown of a CNN training on small mini-batch sizes caused by elementwise_kernels. These kernels are called on optimizer updates: optimizer.zero_grad() and optimizer.step(). I suppose they are responsible for the model weight updates.
I have tested with Pytorch 1.4.0 and with 1.6.0a0+1e22717.
The plots below show the aggregated time of CUDA kernels of one epoch of the CNN training for a range of mini-batch sizes for the two Pytorch versions.


Elementwise kernels have numbers 0, 1, 7, and so on. On smaller mini-batch sizes they take about half of the epoch time. Kernel number 0 takes the largest amount of time among the CUDA kernels; it is responsible for the optimizer updates.
There is not much difference between the two Pytorch versions. Please note, that the order and the colours of the kernels on the two plots are different.
Tested on Ubuntu 18.04.3 LTS in a Docker container.
GPU: NVIDIA Quadro P2000
CUDA: 10.1
cuDNN: 7.6.3.30-1
Nvidia driver: 430.64
First reported here: https://github.com/pytorch/pytorch/issues/31975#issuecomment-610302768
cc @VitalyFedyunin @ngimel
|
module: performance,module: cuda,triaged,module: TensorIterator
|
low
|
Major
|
600,032,387 |
three.js
|
Examples get scrollbars when resizing the window and custom scaling in the os.
|
##### Description of the problem
I noticed today that all the examples currently get scrollbars as I resize them on Windows. Depending on the size, they appear and disappear.
Chrome 80

Firefox 75

I'd like to suggest again that I think the default CSS for the examples should effectively be
```css
html,
body,
canvas {
margin: 0;
width: 100%;
height: 100%;
display: block;
}
```
or if you want it separated
```css
html {
height: 100%;
}
body {
margin: 0;
height: 100%;
}
canvas {
width: 100%;
height: 100%;
display: block;
}
```
* The height: 100% in body is needed to make sure the body covers the entire window
* The height: 100% in html is needed on firefox
And I'd suggest, at a minimum, the example code should be changed to
1. not have a container. AFAICT it has no point
```js
container = document.createElement( 'div' );
document.body.appendChild( container );
...
container.appendChild( renderer.domElement );
```
to just
```js
document.body.appendChild( renderer.domElement );
```
3 lines to 1
2. pass false to `setSize`
```js
renderer.setSize( window.innerWidth, window.innerHeight, false );
```
Of course I'd still suggest switching the default to do this all deeper in three.js, making CSS sizing the default and mostly getting rid of the need for `setSize` in most apps, as somewhat suggested in #4903
In any case how you fix it or if you fix it is up to you. I just assumed you didn't want scrollbars to appear.
##### Three.js version
- [x] Dev
- [x] r115
- [ ] ...
##### Browser
- [x] All of them
- [x] Chrome
- [x] Firefox
- [?] Internet Explorer
##### OS
- [ ] All of them
- [x] Windows
- [ ] macOS
- [ ] Linux
- [ ] Android
- [ ] iOS
##### Hardware Requirements (graphics card, VR Device, ...)
|
Addons
|
low
|
Major
|
600,047,886 |
flutter
|
[Image_picker][iOS] get actual image path
|
I am extracting images from my mobile phone so that I can differentiate gallery and WhatsApp images, but the problem is that I cannot get the actual path of the image on iOS. On Android I get the actual path, so I can easily differentiate gallery and WhatsApp images, but on iOS I get a path like `private/var/mobile/Containers/Data/Application/89e/image_picker/113.jpg`
|
platform-ios,p: image_picker,package,has reproducible steps,P3,found in release: 3.10,found in release: 3.11,team-ios,triaged-ios
|
medium
|
Major
|
600,079,411 |
flutter
|
Flex() make it easier to reverse child order
|
## Use case
Given that Flutter is also used for web development, flex layout in native web technology has these "comprehensive" property values for the direction; for example, in CSS we can do:
`flex-direction: row` equivalent with `direction: Axis.horizontal`
`flex-direction: column` equivalent with `direction: Axis.vertical`
**But, how about these?**
`flex-direction: row-reverse`
`flex-direction: column-reverse`
Let say I have code like below:
```
final _isPortrait = MediaQuery.of(context).orientation == Orientation.portrait;
```
```
Flex(
direction: _isPortrait ? Axis.horizontal : Axis.vertical,
children: <Widget>[
Expanded(flex: _isPortrait ? 2 : 1, child: MyWidget1()),
Expanded(flex: _isPortrait ? 1 : 0, child: MyWidget2()),
],
),
```
When the orientation is portrait, `MyWidget1()` will of course be on the left side as the first child.
However, when the orientation is landscape, I want to swap the positions of `MyWidget1()` and `MyWidget2()`, so that the first child is now below the second child. How can I do that in Flutter?
**Current solution:** (Use more ternary)
```
Flex(
direction: _isPortrait ? Axis.horizontal : Axis.vertical,
children: <Widget>[
Expanded(
flex: _isPortrait ? 2 : 1,
child: _isPortrait ? MyWidget1() : MyWidget2(),
),
Expanded(
flex: _isPortrait ? 1 : 0,
child: _isPortrait ? MyWidget2() : MyWidget1(),
),
],
),
```
What if there are not just two children, but three or more? Using an additional "over-the-hood" expression like `_isPortrait ? widgets : widgets.reversed.toList()` is not good in my opinion.
## Proposal
Why not add one more property to make it cleaner and easier?
```
Flex(
direction: _isPortrait ? Axis.horizontal : Axis.vertical,
reverseOrder: !_isPortrait,
children: <Widget>[
Expanded(flex: _isPortrait ? 2 : 1, child: BottomButton1()),
Expanded(flex: _isPortrait ? 1 : 0, child: BottomButton2()),
],
),
```
|
c: new feature,framework,a: quality,c: proposal,P3,team-framework,triaged-framework
|
medium
|
Major
|
600,108,903 |
go
|
x/tools/present: issues with preformatted text
|
commit 0037cb7812fa1343616f77d5e8457b530a8f5bcc of x/tools.
The following presentation doesn't display correctly. The comment in the code block is ignored. The indented text after the bullet point is not displayed in preformatted style as it should be by my reading of the [docs](https://pkg.go.dev/golang.org/x/tools/present?tab=doc).
# Title
## This talk
```
// This comment should be displayed.
There's a comment above this.
```
- point
This text is should be
preformatted but is not
|
NeedsInvestigation,Tools
|
low
|
Minor
|