id | repo | title | body | labels | priority | severity
---|---|---|---|---|---|---|
522,417,556 | go | runtime: de-duplicate bit operations with math/bits | For the page allocator rewrite, @mknyszek tried to depend on `math/bits` in the runtime. This *almost* works, but conflicts with `go test -coverpkg=all` (#35461) because that tries to instrument `math/bits` in ways that are incompatible with running inside the runtime. In order to get this working, [CL 206199](https://golang.org/cl/206199) duplicated some `math/bits` functions into the runtime (and [CL 206200](https://golang.org/cl/206200) intrinsified them).
This duplication was expedient, but unfortunate, and for 1.15 I'd like to reconsider this. A few possibilities:
1. `-coverpkg=all` shouldn't apply to anything the runtime depends on. You'd hardly be losing anything if it didn't cover `math/bits` since almost all of those functions are intrinsified anyway.
2. If the cover tool can switch to using compiler-inserted coverage information, rather than source rewriting, it's possible this problem will go away.
/cc @dr2chase @cherrymui @mdempsky | NeedsDecision,compiler/runtime | low | Major |
522,421,612 | angular | Missing space between currency code and value in currency pipe #2 | # 🐞 bug report
### Affected Package
The issue is caused by package @angular/common
### Is this a regression?
No
### Description
The original bug was closed by a bot before it was solved (see https://github.com/angular/angular/issues/20708).
@ocombe wrote:
> Since CLDR doesn't contain formats for finance currencies, I don't think that we can fix that. They have formats for currency accounting, but it doesn't seem to be that.
> The description for the currency pattern that we use is:
>
> Used for currency values. A currency symbol (¤) will be replaced by the appropriate currency symbol for whatever currency is being formatted. The choice of whether to use the international currency symbols (USD, EUR, JPY, RUB, …) or localized symbols ($, €, ¥, руб., …) is up to the application program that uses CLDR.
>
> Which seems to point to the fact that we are indeed using the correct pattern.
You are using the correct pattern, but according to the CLDR website (http://cldr.unicode.org/translation/number-patterns) there should be a space between currency code and numbers:
> ¤ | This will be replaced by a currency symbol, such as $ or USD. Note: by default a space is automatically added between letters in a currency symbol and adjacent numbers. So you don't need to add a space between them if your language writes "$12" but "USD 12".
| area: common,area: i18n,P4 | low | Critical |
522,432,502 | opencv | Feature Request: Can we get Subdiv3D (Voronoi for Point3f)? | ### System information (version)
- OpenCV => Latest
- Operating System / Platform => Ubuntu 18.04
- Compiler => GCC 7
##### Detailed description
Currently, OpenCV provides `cv::Subdiv2D` and it allows us to construct the Voronoi diagram with `getVoronoiFacetList`, and the OpenCV version runs a lot faster than what I implemented myself. However, currently, it is only limited to 2d points. I think it would be amazing if there's something like `cv::Subdiv3D` that can construct the Voronoi diagram with 3d points. | feature,effort: few weeks | low | Minor |
522,438,375 | flutter | Expose TextPainter.computeLineMetrics on RenderParagraph | Hi there,
I have a need to access the line metrics of a rendered `Text` widget, and while the information is available on `TextPainter`, `RenderParagraph` doesn't expose it. Can we get a `computeLineMetrics` method on `RenderParagraph`?
Thanks!
cc @GaryQian | c: new feature,engine,a: typography,good first issue,P3,team-engine,triaged-engine | low | Minor |
522,473,815 | create-react-app | Why can't I use the homepage parameter? |
Here is a parameter description https://docs.npmjs.com/files/package.json#homepage
that does not match the description from https://create-react-app.dev/docs/deployment/#step-1-add-homepage-to-packagejson | issue: proposal | low | Minor |
522,479,942 | pytorch | [jit] Printing the graph doesn't include function calls | See question 1 here https://discuss.pytorch.org/t/jit-mobile-is-isinstance-supposed-to-work-with-torchscript/60845
cc @suo | oncall: jit,triaged | low | Minor |
522,487,234 | pytorch | [FR] general nll_loss and cross_entropy along arbitrary dimension | Currently it is always taken along `dim=1`. Making the dim configurable should be really easy. I'd do it if this sounds reasonable. | module: loss,triaged | low | Minor |
522,497,968 | pytorch | [doc] Tensor.mean: dtype kwarg is not documented | The following works:
```
>>> torch.__version__
'1.4.0a0+5635a72'
>>> x.mean(dtype=torch.float)
tensor(1.5000)
>>> help(x.mean)
>>> x.mean(dim=1, dtype=torch.float)
tensor([1.6667, 1.3333])
```
Yet the doc doesn't say anything about the `dtype` argument:
```
Help on built-in function mean:
mean(...) method of torch.Tensor instance
mean(dim=None, keepdim=False) -> Tensor or (Tensor, Tensor)
See :func:`torch.mean`
``` | module: docs,triaged,module: reductions | low | Minor |
522,499,625 | go | x/build/cmd/relui: Windows installation has misconfigured ACL: privilege escalation possible between users | The Golang msi installer in Windows install by default Go in C:\Go location.
Files and subfolders of folders created under C:\ by default can be edited, created, deleted.
<pre>
PS C:\Go> icacls .
BUILTIN\Administrators:(I)(OI)(CI)(F)
NT AUTHORITY\SYSTEM:(I)(OI)(CI)(F)
BUILTIN\Users:(I)(OI)(CI)(RX)
NT AUTHORITY\Authenticated Users:(I)(M)
NT AUTHORITY\Authenticated Users:(I)(OI)(CI)(IO)(M)
</pre>
This means that in a shared Windows environment, it is possible to exploit this insecure ACL to replace/backdoor go.exe binaries, DLLs and so on.
### Scenario Local Privilege Escalation
A Standard User backdoors go.exe, waits for an Administrator to log in and run "go ..." or another component under C:\Go, and thereby executes code in the latter's elevated context.
### Scenario Horizontal Privilege Escalation
A Standard User can backdoor/replace any component under C:\Go and wait for another Standard User to log in and run the Go environment, achieving code execution in the context of the target user.
### What version of Go are you using (`go version`)?
Up to latest Golang version: 1.13
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<pre>
$ go env
set GOHOSTARCH=amd64
set GOHOSTOS=windows
</pre>
### What did you expect to see?
I was expecting the MSI installer to reconfigure the default C:\Go destination folder after installation with a hardened ACL that allows only Administrators, Administrator, SYSTEM, and TrustedInstaller to have modify, write and special permissions over the Go components.
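For illustration only, a rough sketch of the kind of hardening I have in mind (run from an elevated prompt; the exact set of ACEs the installer should grant is of course up to the Go team):
<pre>
icacls C:\Go /inheritance:r /grant:r "BUILTIN\Administrators:(OI)(CI)F" "NT AUTHORITY\SYSTEM:(OI)(CI)F" "BUILTIN\Users:(OI)(CI)RX"
</pre>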
### What did you see instead?
Authenticated Users have Modify permission over any Golang component, thus local privilege escalation is possible. | Security,OS-Windows,Builders,NeedsInvestigation | low | Minor |
522,507,910 | opencv | Not sure, think calib3d's calibration.cpp broke between 4.1.0 and 4.1.1 | - OpenCV => 4.1.1
- Operating System / Platform => Windows 64 Bit
- Compiler => Visual Studio 2019
I've been looking at the code for stereoRectify for a long time. It's returning a lot of nonsense values for some of the calibrated cameras we have. I noticed the code in stereoRectify changed from 4.1.0 to 4.1.1, and it looks like the acos(x) stuff in there is silly and/or doesn't work. If whoever put that code in there really thinks it's valid, how about some comments, because what is in there is really weird.
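One way I can try to narrow this down is to run the same calibration through 4.1.0 and 4.1.1 and diff the rectification output. A rough sketch using the Python bindings, where K1, d1, K2, d2, R, T and image_size are placeholders standing in for our real calibration data:
```python
import cv2
import numpy as np

# Placeholder calibration -- substitute the real calibrated values here.
K1 = K2 = np.array([[800.0, 0.0, 320.0],
                    [0.0, 800.0, 240.0],
                    [0.0, 0.0, 1.0]])
d1 = d2 = np.zeros(5)
R = np.eye(3)                    # rotation between the two cameras
T = np.array([0.1, 0.0, 0.0])    # translation between the two cameras
image_size = (640, 480)

R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K1, d1, K2, d2, image_size, R, T)
print(cv2.__version__)
print(R1)
print(R2)  # compare these between 4.1.0 and 4.1.1
```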

| category: calib3d,incomplete | low | Major |
522,510,302 | terminal | UIA Formatted Text Navigation | # Description of the new feature/enhancement
Text Ranges can be explored via a number of ways:
- Character
- Format
- Word
- Line
- Paragraph
- Page
- Document
We don't necessarily have to support all of them (refer to #3161 for us adding word navigation). But it would be interesting if we supported "format".
I envision us being able to jump between runs of formatted text. _Today_, the main formatting I can think of is _colored_ text composed of foreground and background color. Maybe we could just see if any have changed and, if that's the case, that would be a breakpoint for formatted text navigation.
# Proposed technical implementation details (optional)
Coincidentally, I just did a little bit of work in the AttrRowIterator. If that's actually taking a look at the text attributes, this would be an integral part of this design. | Issue-Feature,Product-Conhost,Area-Accessibility,Product-Terminal,InclusionBacklog,InclusionBacklog-Windows TerminalWin32,A11yMAS,Disability-All | low | Major |
522,518,750 | vscode | Enable breadcrumbs for unsaved files | Right now breadcrumbs for files (json, yaml, etc) only work if the file is saved. If I add a new file (File > New File), and set the language mode, the breadcrumbs should show the file structure without the file path part. | feature-request,breadcrumbs,workbench-untitled-editors | medium | Major |
522,574,884 | vscode | Word navigation in QuickOpen | <!-- Please search existing issues to avoid creating duplicates. -->
<!-- Also please test using the latest insiders build to make sure your issue has not already been fixed: https://code.visualstudio.com/insiders/ -->
- VSCode Version:1.40.1
- OS Version: Windows 10
Steps to Reproduce:
1. Open Quick Open (CTRL+P) and type the word aaa:BBB:XXX in the input box
2. Then use the shortcut key CTRL + Left or Right arrow to move the cursor position
Unreasonable: the cursor cannot be moved with a colon (:) as the basic unit
<!-- Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
| feature-request,quick-pick | low | Major |
522,596,926 | vscode | Make "move line" command to work contextually | VSCode "move line" command is a fantastic feature. But it would be way more productive if it works with multiline key-values in objects.
For example imagine the following object declaration:
```js
const someObj = {
isValid: false,
position: {
top: 1,
bottom: 0,
},
};
```
If we want to move `position`, including its value, one line higher above `isValid`, we either have to select multiple lines or the command only moves the `position: {` line.
It would be much more productive and ideal if, when the cursor is on the `position: {` line, the "move up" command moved the whole `position` key including its value (the nested object declaration) all together.
This would be the ideal result of moving the `position` key with only one "move up" command:
```js
const someObj = {
position: {
top: 1,
bottom: 0,
},
isValid: false,
};
```
instead of this:
```js
const someObj = {
position: {
isValid: false,
top: 1,
bottom: 0,
},
};
```
Of course this could be done as an extension, but I thought I'd first brainstorm this with the team.
522,608,603 | rust | Rustdoc: maybe don't display #[repr(C)] sometimes | If a struct has some public fields and some non-public fields, `// some fields omitted` is displayed at the end of the declaration shown by rustdoc. Rustdoc also shows attributes on the declaration, such as `#[repr(C)]`.
This is actively misleading for `#[repr(C)]` types. If a private field comes before any public fields, then the struct as displayed suggests that the public fields are at the prefix of the struct, and thus have defined offsets smaller than where they actually are.
Example:
```rust
#[repr(C)]
#[derive(Debug, Eq, PartialEq, Hash)]
pub struct ThinData<Head, SliceItem> {
length: usize,
pub head: Head,
pub slice: [SliceItem],
}
```

There are two obvious potential solutions:
- Put `// some fields omitted` in the correct place(s) among public fields for `#[repr(C)]` types, or
- Don't display `#[repr(C)]` for types with some fields omitted. | T-rustdoc,A-attributes,C-bug | low | Critical |
522,760,489 | pytorch | android run build_pytorch_android.sh error | ## ๐ Bug
<!-- A clear and concise description of what the bug is. --> android build_pytorch_android.sh run error
## To Reproduce
Steps to reproduce the behavior:
1. connect an Android phone
1. compile the whole codebase
1. run android/run_test.sh
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
## Expected behavior
Welcome to Gradle 5.0!
Here are the highlights of this release:
- Kotlin DSL 1.0
- Task timeouts
- Dependency alignment aka BOM support
- Interactive `gradle init`
For more details see https://docs.gradle.org/5.0/release-notes.html
Starting a Gradle Daemon (subsequent builds will be faster)
FAILURE: Build failed with an exception.
* Where:
Build file '/home/Downloads/pytorch/android/libs/fbjni_local/build.gradle' line: 40
* What went wrong:
A problem occurred evaluating project ':fbjni'.
> Could not read script '/home/Downloads/pytorch/android/gradle/release.gradle' as it does not exist.
<!-- A clear and concise description of what you expected to happen. -->
## Environment
Please copy and paste the output from our
[environment collection script](https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py)
(or fill out the checklist below manually).
You can get the script and run it with:
```
wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
```
Versions of relevant libraries:
[pip3] numpy==1.17.0
[conda] blas 1.0 mkl
[conda] magma-cuda101 2.5.1 1 pytorch
[conda] mkl 2019.4 243
[conda] mkl-include 2019.4 243
[conda] mkl-service 2.3.0 py37he904b0f_0
[conda] mkl_fft 1.0.15 py37ha843d7b_0
[conda] mkl_random 1.1.0 py37hd6b4f25_0
[conda] torch 1.4.0a0+104bb57 pypi_0 pypi
- PyTorch Version (e.g., 1.4):
- OS (e.g., Linux):
- How you installed PyTorch (source in conda):
- Build command you used (if compiling from source):
- Python version:
- CUDA/cuDNN version:10.1
- GPU models and configuration:
- Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
| triaged,oncall: mobile | low | Critical |
522,765,996 | pytorch | unsqueeze has 'out=' option documented but not implemented(?) | ## ๐ Documentation
I think the documentation shouldn't include an _out=_ option,
since I don't see any _unsqueeze_out()_ function in ATen.
```
>>> torch.unsqueeze(torch.tensor([1,2.0]),-1,out=r)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsqueeze() got an unexpected keyword argument 'out'
``` | module: docs,triaged | low | Critical |
522,805,853 | TypeScript | A property descriptor must have keys "enumerable" and "configurable" | **TypeScript Version:** 3.7.2
**Expected behavior:** In a property descriptor object, keys `configurable` and `enumerable` should not be optional.
**Actual behavior:** These properties are marked as optional (see [here][1] and [here][2]).
[1]: https://github.com/microsoft/TypeScript/blob/v3.7.2/src/lib/es5.d.ts#L88-L89
[2]: https://github.com/microsoft/TypeScript/blob/v3.7.2/src/lib/es5.d.ts#L1368-L1369
| Bug,Domain: lib.d.ts | low | Minor |
522,806,770 | TypeScript | optional and readonly Filter Modifier for keyof | ## Search Terms
- keyof modifiers
- only optional keys
- only readonly keys
- pick optional properties
- pick required properties
- pick readonly properties
## Suggestion
Adding Support for modifiers to keyof could be very helpful. I'm picturing something like this
```typescript
type Koptional = keyof? T;
type Kreadonly = readonly keyof T;
// Negations
type Krequired = keyof!? T;
type Keditable = !readonly keyof T;
// Combinations
type KrequiredReadonly = readonly keyof!? T;
```
The modifiers would act as "filters", only selecting the keys fitting the given modifiers.
***
As @jcalz pointed out, it might make more sense to use the existing `+` and `-` operators instead of the `!`.
## Use Cases
This could be really useful for selecting properties of types based on them being `optional` / `readonly`. A possible use case for this would be the React `defaultProps` object (see example).
Of course this could negatively affect readability as especially Mapped Types can get fairly long
```typescript
type PickEditableRequired<T> = {
[P in !readonly keyof!? T]: T[P];
};
```
but I think it's still in the realms of comprehensibility.
## Examples
```typescript
type PickOptional<T> = {
[P in keyof? T]-?: T[P];
};
type DrinkProps = {
whoGetsUpAndMakesIt: string,
type?: "water" | "coffee" | "coke" | "more coffee",
vessel?: "cup" | "glass" | "HUGE cup",
amount?: number
};
class Drink extends React.Component<Required<DrinkProps>> {
public static defaultProps: PickOptional<DrinkProps> = {
type: "water",
vessel: "glass",
amount: 250
}
render() {
return <p>
{this.props.whoGetsUpAndMakesIt} gets up and pours
{this.props.amount}ml of {this.props.type} into a
{this.props.vessel}.
</p>
}
}
```
Especially with many optional props the current method of
```typescript
public static defaultProps: Pick<DrinkProps, "type" | "vessel" | "amount"> = { ... };
```
can get fairly tedious, especially as you have to add new optional props at 3 places (the type definition, the `Pick` UtilityType and in the `defaultProps`). The method using PickOptional and the `?`-`keyof`-modifier reduces it to 2, and immediately notifies you if you declared a parameter optional, but haven't defined it in `defaultProps`.
This would also work without React in terms of default values:
```typescript
type Order = {
item: string,
amount?: number,
shipping?: "standard" | "express"
}
const defaultOrder: PickOptional<Order> = {
amount: 1,
shipping: "standard"
}
function placeOrder(order: Order): void {
let data: Required<Order> = {...defaultOrder, ...order};
console.log(`You ordered ${data.amount} ${data.item} with ${data.shipping} shipping.`);
}
```
I'm sure there are more (and better) use cases for this feature I can't think of right now...
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback | low | Major |
522,809,278 | pytorch | Parallel data loader performance degradation for IterableDataset with num_workers > 1 (but not for Dataset). | ## ๐ Bug
Data loader performance degrades with > 1 data loader worker when using `IterableDataset` but not when using (map-style) `Dataset`.
## To Reproduce
Steps to reproduce the behavior:
My experiment (code below) consisted of:
1. Make a (Iterable)Dataset class. This synthesizes dummy data and possibly adds a time delay to simulate batch loading work.
2. Make a Dataloader that consumes the above dataset instance. The number of workers varied from 0 (loading in the main process) to 4.
3. Iterate through N=10 batches of data yielded by the dataloader, with possibly a time delay to simulate batch processing work (e.g. training).
Here are my measurements for `IterableDataset`:
```
(num_workers=0, batch_loading_time=1, batch_process_time=1) execution finished. Time elapsed = 20.021s.
(num_workers=1, batch_loading_time=1, batch_process_time=1) execution finished. Time elapsed = 11.433s.
(num_workers=2, batch_loading_time=1, batch_process_time=1) execution finished. Time elapsed = 21.469s.
(num_workers=3, batch_loading_time=1, batch_process_time=1) execution finished. Time elapsed = 31.517s.
(num_workers=4, batch_loading_time=1, batch_process_time=1) execution finished. Time elapsed = 41.545s.
(num_workers=0, batch_loading_time=1, batch_process_time=0.1) execution finished. Time elapsed = 11.014s.
(num_workers=1, batch_loading_time=1, batch_process_time=0.1) execution finished. Time elapsed = 10.518s.
(num_workers=2, batch_loading_time=1, batch_process_time=0.1) execution finished. Time elapsed = 10.643s.
(num_workers=3, batch_loading_time=1, batch_process_time=0.1) execution finished. Time elapsed = 10.745s.
(num_workers=4, batch_loading_time=1, batch_process_time=0.1) execution finished. Time elapsed = 10.862s.
(num_workers=0, batch_loading_time=0.1, batch_process_time=1) execution finished. Time elapsed = 11.013s.
(num_workers=1, batch_loading_time=0.1, batch_process_time=1) execution finished. Time elapsed = 10.515s.
(num_workers=2, batch_loading_time=0.1, batch_process_time=1) execution finished. Time elapsed = 20.541s.
(num_workers=3, batch_loading_time=0.1, batch_process_time=1) execution finished. Time elapsed = 30.617s.
(num_workers=4, batch_loading_time=0.1, batch_process_time=1) execution finished. Time elapsed = 40.610s.
(num_workers=0, batch_loading_time=0.1, batch_process_time=0.1) execution finished. Time elapsed = 2.004s.
(num_workers=1, batch_loading_time=0.1, batch_process_time=0.1) execution finished. Time elapsed = 1.543s.
(num_workers=2, batch_loading_time=0.1, batch_process_time=0.1) execution finished. Time elapsed = 2.553s.
(num_workers=3, batch_loading_time=0.1, batch_process_time=0.1) execution finished. Time elapsed = 3.593s.
(num_workers=4, batch_loading_time=0.1, batch_process_time=0.1) execution finished. Time elapsed = 4.649s.
```
This is the code I used to run this experiment:
```
import os
import time
import itertools
import numpy as np
import torch
class TimedBlock:
"""
Context manager to measure wall time.
"""
def __init__(self, name):
self.name = name
def __enter__(self):
self.t_start = time.perf_counter()
def __exit__(self, *args):
print("(%d) %s execution finished. Time elapsed = %.3fs." % (os.getpid(), self.name,
time.perf_counter() - self.t_start))
class IterableDataset(torch.utils.data.IterableDataset):
"""
Synthetic iterable dataset that simulates data loading work.
"""
def __init__(self, n_items, shape, batch_loading_time):
"""
Args:
n_items: number of items to return.
shape: shape of batch tensor to return.
batch_loading_time: each batch will take at least this many seconds to return.
"""
self.n_items = n_items
self.shape = shape
self.batch_loading_time = batch_loading_time
def __iter__(self):
self.i = 0
while self.i < self.n_items:
t_start = time.perf_counter()
X = np.random.randn(*self.shape)
t_remaining = self.batch_loading_time - (time.perf_counter() - t_start)
if t_remaining > 0:
time.sleep(t_remaining)
yield X
self.i += 1
class Dataset(torch.utils.data.Dataset):
"""
Synthetic map-style dataset that simulates data loading work.
"""
def __init__(self, n_items, shape, batch_loading_time):
"""
Args:
n_items: number of items to return.
shape: shape of batch tensor to return.
batch_loading_time: each batch will take at least this many seconds to return.
"""
self.n_items = n_items
self.shape = shape
self.batch_loading_time = batch_loading_time
def __len__(self):
return self.n_items
def __getitem__(self, idx):
t_start = time.perf_counter()
X = np.random.randn(*self.shape)
t_remaining = self.batch_loading_time - (time.perf_counter() - t_start)
if t_remaining > 0:
time.sleep(t_remaining)
return X
def test_simpleloader(n_iters=int(1e2), shape=(1,), num_workers=0, batch_loading_time=0, batch_process_time=0,
exp_name="simpleloader", dataset_cls=IterableDataset):
"""
Load and process n_iters batches of data using a dataloader and one of the above dataset classes. We simulate
training by waiting batch_process_time seconds per batch.
"""
dataset = dataset_cls(n_iters, shape, batch_loading_time)
data_loader = torch.utils.data.DataLoader(dataset,
batch_size=None,
num_workers=num_workers,
collate_fn=None,
pin_memory=False,
worker_init_fn=None,
multiprocessing_context=None)
with TimedBlock(exp_name):
for i, x in enumerate(data_loader):
time.sleep(batch_process_time)
pass
def simpleloader_grid(num_workers_grid=[0, 1, 2, 3, 4],
batch_loading_time_grid=[10**i for i in range(0, -2, -1)],
batch_process_time_grid=[10**i for i in range(0, -2, -1)],
dataset_cls=IterableDataset):
n_iters = int(1e1)
shape = (1,)
params = itertools.product(num_workers_grid, batch_loading_time_grid, batch_process_time_grid)
for p in params:
test_simpleloader(n_iters, shape, p[0], p[1], p[2], str(p), dataset_cls=dataset_cls)
if __name__ == "__main__":
torch.multiprocessing.set_start_method("spawn")
t_start = time.perf_counter()
simpleloader_grid(dataset_cls=IterableDataset)
print("Main execution time = %.3fs" % (time.perf_counter() - t_start))
```
## Expected behavior
Here are my measurements for (map-style)`Dataset`, which I would also expect from `IterableDataset`:
```
(num_workers=0, batch_loading_time=1, batch_process_time=1) execution finished. Time elapsed = 20.022s.
(num_workers=1, batch_loading_time=1, batch_process_time=1) execution finished. Time elapsed = 11.432s.
(num_workers=2, batch_loading_time=1, batch_process_time=1) execution finished. Time elapsed = 11.455s.
(num_workers=3, batch_loading_time=1, batch_process_time=1) execution finished. Time elapsed = 11.493s.
(num_workers=4, batch_loading_time=1, batch_process_time=1) execution finished. Time elapsed = 11.479s.
(num_workers=0, batch_loading_time=1, batch_process_time=0.1) execution finished. Time elapsed = 11.014s.
(num_workers=1, batch_loading_time=1, batch_process_time=0.1) execution finished. Time elapsed = 10.529s.
(num_workers=2, batch_loading_time=1, batch_process_time=0.1) execution finished. Time elapsed = 5.640s.
(num_workers=3, batch_loading_time=1, batch_process_time=0.1) execution finished. Time elapsed = 4.537s.
(num_workers=4, batch_loading_time=1, batch_process_time=0.1) execution finished. Time elapsed = 3.652s.
(num_workers=0, batch_loading_time=0.1, batch_process_time=1) execution finished. Time elapsed = 11.012s.
(num_workers=1, batch_loading_time=0.1, batch_process_time=1) execution finished. Time elapsed = 10.502s.
(num_workers=2, batch_loading_time=0.1, batch_process_time=1) execution finished. Time elapsed = 10.538s.
(num_workers=3, batch_loading_time=0.1, batch_process_time=1) execution finished. Time elapsed = 10.594s.
(num_workers=4, batch_loading_time=0.1, batch_process_time=1) execution finished. Time elapsed = 10.604s.
(num_workers=0, batch_loading_time=0.1, batch_process_time=0.1) execution finished. Time elapsed = 2.005s.
(num_workers=1, batch_loading_time=0.1, batch_process_time=0.1) execution finished. Time elapsed = 1.518s.
(num_workers=2, batch_loading_time=0.1, batch_process_time=0.1) execution finished. Time elapsed = 1.540s.
(num_workers=3, batch_loading_time=0.1, batch_process_time=0.1) execution finished. Time elapsed = 1.601s.
(num_workers=4, batch_loading_time=0.1, batch_process_time=0.1) execution finished. Time elapsed = 1.572s
```
## Environment
```Collecting environment information...
PyTorch version: 1.3.0
Is debug build: No
CUDA used to build PyTorch: 10.1.243
OS: Ubuntu 18.04.3 LTS
GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
CMake version: Could not collect
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.1.243
GPU models and configuration:
GPU 0: GeForce GTX 1060 6GB
GPU 1: GeForce GTX 1060 6GB
Nvidia driver version: 430.50
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.4
Versions of relevant libraries:
[pip] numpy==1.17.2
[pip] torch==1.3.0
[pip] torch-cluster==1.4.3
[pip] torch-geometric==1.3.0
[pip] torch-scatter==1.4.0
[pip] torch-sparse==0.4.3
[pip] torch-spline-conv==1.1.0
[pip] torchcontrib==0.0.2
[pip] torchvision==0.2.2
[conda] _tflow_select 2.3.0 mkl
[conda] blas 1.0 mkl
[conda] mkl 2019.3 199
[conda] mkl_fft 1.0.10 py37ha843d7b_0
[conda] mkl_random 1.0.2 py37hd81dba3_0
[conda] tensorflow 1.13.1 mkl_py37h54b294f_0
[conda] tensorflow-base 1.13.1 mkl_py37h7ce6ba3_0
[conda] torch 1.3.0 pypi_0 pypi
[conda] torch-cluster 1.4.3 dev_0 <develop>
[conda] torch-geometric 1.3.0 pypi_0 pypi
[conda] torch-scatter 1.4.0 pypi_0 pypi
[conda] torch-sparse 0.4.3 pypi_0 pypi
[conda] torch-spline-conv 1.1.0 pypi_0 pypi
[conda] torchcontrib 0.0.2 pypi_0 pypi
[conda] torchvision 0.2.2 py_3 pytorch
```
## Additional context
<!-- Add any other context about the problem here. -->
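My best guess (an assumption on my part, not verified in the source): with `num_workers > 0`, each worker process gets its own copy of the `IterableDataset` and iterates the full stream, so with `batch_size=None` the loader ends up yielding roughly `num_workers * n_items` batches, which would match the near-linear slowdown above. If that is the cause, sharding the stream per worker with `torch.utils.data.get_worker_info()` is a possible workaround; a minimal sketch (`ShardedIterableDataset` is a hypothetical name, reusing the imports from the script above):
```
class ShardedIterableDataset(torch.utils.data.IterableDataset):
    """
    Same synthetic dataset as above, but each worker only produces its own shard.
    """
    def __init__(self, n_items, shape, batch_loading_time):
        self.n_items = n_items
        self.shape = shape
        self.batch_loading_time = batch_loading_time

    def __iter__(self):
        info = torch.utils.data.get_worker_info()
        worker_id = info.id if info is not None else 0            # 0 when loading in the main process
        num_workers = info.num_workers if info is not None else 1
        for _ in range(worker_id, self.n_items, num_workers):
            time.sleep(self.batch_loading_time)                   # simulate loading work
            yield np.random.randn(*self.shape)
```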
cc @SsnL | module: dataloader,triaged | low | Critical |
522,868,226 | terminal | Add a setting to control selection foreground color | This is highly related to #3326/#3471 and #3561.
We've added a setting to control what color we use to highlight text with when it's selected in #3471.
However, that doesn't _necessarily_ fix our contrast ratio problems w.r.t. selection, since the configured color might still not have enough difference from the foreground of the highlighted text.
Conhost had an inverting selection, which didn't have this problem. However, we think that might be Hard to implement in the dx renderer. Adding that to the Terminal is being tracked in #3561.
A selection foreground color
A: might be more feasible to implement
B: is a good configurable setting to add regardless, to enable further personalization of the terminal.
Implementation might still be tricky though, since the Renderer is only ever told about the selection rects _after_ it has drawn the text. Re-drawing the text seems like a bad idea. I'm not sure we'll be able to re-order things in the Renderer safely without also impacting the GDI and other rendering heads. This is hard, but easier than inverting selection. | Help Wanted,Area-Rendering,Area-TerminalControl,Product-Terminal,Issue-Task | low | Major |
522,883,003 | flutter | [local_auth] sticky set to true does not work | Hi, I'm using this plugin on my Xiami Redmi Note 5 and I have a problem when app call uthenticate and then goes in background.
When app resume the promp authenticate is not shown and authenticateWithBiometrics return false.
I found out onActivityPaused is not fired.
Thanks!
Logs
```
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, v1.9.1+hotfix.2, on Microsoft Windows [Versione 10.0.16299.1451], locale it-IT)
[✓] Android toolchain - develop for Android devices (Android SDK version 29.0.1)
[✓] Android Studio (version 3.3)
[✓] IntelliJ IDEA Community Edition (version 2018.1)
[✓] VS Code, 64-bit edition (version 1.23.1)
[✓] Proxy Configuration
[✓] Connected device (1 available)
• No issues found!
``` | platform-android,p: local_auth,package,P2,team-android,triaged-android | low | Minor |
522,939,058 | pytorch | SGD fails on sparse matrix | ## ๐ Bug
One SGD step for optimizing a model with a sparse parameter matrix gives me a RuntimeError
## To Reproduce
```
import torch
from torch import nn
class TrainNet(nn.Module):
def __init__(self, in_features, out_features):
super(TrainNet, self).__init__()
self.in_features = in_features
self.out_features = out_features
self.weight = torch.nn.Parameter(torch.randn(out_features, in_features).to_sparse().requires_grad_(True))
def forward(self, input):
x = torch.sparse.mm(self.weight, input)
return x
model = TrainNet(10, 20)
opt = torch.optim.SGD(model.parameters(), 0.01)
inp = torch.ones((10, 30), dtype=torch.float32)
model.train()
model.zero_grad()
out = model(inp)
out.sum().backward()
opt.step()
```
gives me the error
```
Traceback (most recent call last):
File "<input>", line 25, in <module>
File "/Users/antonio/miniconda3/envs/python3_6/lib/python3.6/site-packages/torch/optim/sgd.py", line 106, in step
p.data.add_(-group['lr'], d_p)
RuntimeError: set_indices_and_values_unsafe is not allowed on a Tensor created from .data or .detach().
If your intent is to change the metadata of a Tensor (such as sizes / strides / storage / storage_offset)
without autograd tracking the change, remove the .data / .detach() call and wrap the change in a `with torch.no_grad():` block.
For example, change:
x.data.set_(y)
to:
with torch.no_grad():
x.set_(y)
```
## Environment
```
PyTorch version: 1.3.1
Is debug build: No
CUDA used to build PyTorch: None
OS: Mac OSX 10.14.6
GCC version: Could not collect
CMake version: version 3.15.4
Python version: 3.6
Is CUDA available: No
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Versions of relevant libraries:
[pip] numpy==1.17.2
[pip] torch==1.3.1
[pip] torchvision==0.4.2
[conda] _tflow_select 2.3.0 mkl
[conda] blas 1.0 mkl
[conda] mkl 2019.4 233
[conda] mkl-service 2.3.0 py36hfbe908c_0
[conda] mkl_fft 1.0.14 py36h5e564d8_0
[conda] mkl_random 1.1.0 py36ha771720_0
[conda] pytorch 1.3.1 py3.6_0 pytorch
[conda] tensorflow 1.14.0 mkl_py36h933f829_0
[conda] tensorflow-base 1.14.0 mkl_py36h655c25b_0
[conda] torchvision 0.4.2 py36_cpu pytorch
```
## Additional context
Just changing on `sgd.py`:
```
p.data.add_(-group['lr'], d_p)
```
to:
```
with torch.no_grad():
p.add_(-group['lr'], d_p)
```
seems to fix the problem.
cc @vincentqb | module: sparse,module: optimizer,triaged | low | Critical |
522,947,448 | pytorch | AdamSparse fails to run | ## ๐ Bug
One SparseAdam step for optimizing a model with a sparse parameter matrix gives me a RuntimeError
## To Reproduce
```
# %%
import torch
from torch import nn
class TrainNet(nn.Module):
def __init__(self, in_features, out_features):
super(TrainNet, self).__init__()
self.in_features = in_features
self.out_features = out_features
self.weight = torch.nn.Parameter(torch.randn(out_features, in_features).to_sparse().requires_grad_(True))
def forward(self, input):
x = torch.sparse.mm(self.weight, input)
return x
model = TrainNet(10, 20)
opt = torch.optim.SparseAdam(model.parameters(), 0.01)
inp = torch.ones((10, 30), dtype=torch.float32)
model.train()
model.zero_grad()
out = model(inp)
out.sum().backward()
opt.step()
```
gives me the error:
```
Traceback (most recent call last):
File "<input>", line 25, in <module>
File "/Users/antonio/miniconda3/envs/python3_6/lib/python3.6/site-packages/torch/optim/sparse_adam.py", line 86, in step
old_exp_avg_values = exp_avg.sparse_mask(grad)._values()
RuntimeError: Cannot access data pointer of Tensor that doesn't have storage
```
## Environment
```
PyTorch version: 1.3.1
Is debug build: No
CUDA used to build PyTorch: None
OS: Mac OSX 10.14.6
GCC version: Could not collect
CMake version: version 3.15.4
Python version: 3.6
Is CUDA available: No
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Versions of relevant libraries:
[pip] numpy==1.17.2
[pip] torch==1.3.1
[pip] torchvision==0.4.2
[conda] _tflow_select 2.3.0 mkl
[conda] blas 1.0 mkl
[conda] mkl 2019.4 233
[conda] mkl-service 2.3.0 py36hfbe908c_0
[conda] mkl_fft 1.0.14 py36h5e564d8_0
[conda] mkl_random 1.1.0 py36ha771720_0
[conda] pytorch 1.3.1 py3.6_0 pytorch
[conda] tensorflow 1.14.0 mkl_py36h933f829_0
[conda] tensorflow-base 1.14.0 mkl_py36h655c25b_0
[conda] torchvision 0.4.2 py36_cpu pytorch
```
## Additional context
I did manage to fix Sparse Adam to run in this minimal example by:
1. Doing as proposed in https://discuss.pytorch.org/t/pytorch-sparse-adam-how-to-run/39871/2
and changing on ``sparse_adam.py``:
```
# Exponential moving average of gradient values
state['exp_avg'] = torch.zeros_like(p.data)
# Exponential moving average of squared gradient values
state['exp_avg_sq'] = torch.zeros_like(p.data)
```
to:
```
# Exponential moving average of gradient values
state['exp_avg'] = torch.zeros_like(p.data.to_dense())
# Exponential moving average of squared gradient values
state['exp_avg_sq'] = torch.zeros_like(p.data.to_dense())
```
2. Similar to #29814, changing:
```
p.data.add_(make_sparse(-step_size * numer.div_(denom)))
```
to:
```
with torch.no_grad():
p.add_(make_sparse(-step_size * numer.div_(denom)))
```
on `sparse_adam.py`
cc @vincentqb | module: sparse,triaged | low | Critical |
522,962,509 | rust | Demangle C++ functions in backtraces | I'm modding a C++ game (which contains debug symbols, thanks flibitijibibo!) using Rust. If I panic in a Rust function from my mod, this is what the backtrace looks like:
```
thread '<unnamed>' panicked at 'panic test', src/lib.rs:37:5
stack backtrace:
0: backtrace::backtrace::libunwind::trace
at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.40/src/backtrace/libunwind.rs:88
1: backtrace::backtrace::trace_unsynchronized
at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.40/src/backtrace/mod.rs:66
2: std::sys_common::backtrace::_print_fmt
at src/libstd/sys_common/backtrace.rs:77
3: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt
at src/libstd/sys_common/backtrace.rs:61
4: core::fmt::write
at src/libcore/fmt/mod.rs:1030
5: std::io::Write::write_fmt
at src/libstd/io/mod.rs:1412
6: std::sys_common::backtrace::_print
at src/libstd/sys_common/backtrace.rs:65
7: std::sys_common::backtrace::print
at src/libstd/sys_common/backtrace.rs:50
8: std::panicking::default_hook::{{closure}}
at src/libstd/panicking.rs:188
9: std::panicking::default_hook
at src/libstd/panicking.rs:205
10: std::panicking::rust_panic_with_hook
at src/libstd/panicking.rs:464
11: std::panicking::begin_panic
at /rustc/bc0e288ad02ef362b5a6c42aaf61f2901c9b46db/src/libstd/panicking.rs:400
12: vloader::hook_physfs_init
at src/lib.rs:37
13: core::ops::function::Fn::call
at /rustc/bc0e288ad02ef362b5a6c42aaf61f2901c9b46db/src/libcore/ops/function.rs:69
14: <alloc::boxed::Box<F> as core::ops::function::Fn<A>>::call
at /rustc/bc0e288ad02ef362b5a6c42aaf61f2901c9b46db/src/liballoc/boxed.rs:956
15: vloader::PHYSFS_INIT::__ffi_detour
at /home/leo60228/vloader/<::detour::macros::static_detour macros>:31
16: _Z15FILESYSTEM_initPc
at /home/flibitijibibo/Programming/cppProjects/Contracts/VVVVVV/Src/FileSystemUtils.cpp:39
17: main
at /home/flibitijibibo/Programming/cppProjects/Contracts/VVVVVV/Src/main.cpp:39
18: __libc_start_main
19: <unknown>
```
This backtrace is almost perfect, except for the ugly mangled C++ symbol. Now that #65646 allows unwinding through C++ code, it would be nice if Rust demangled them. `backtrace-rs` already supports this behind a feature, and I tried enabling it through xargo, but it didn't work. I think this might be because of the way std prints backtraces. | A-runtime,T-libs-api,C-feature-request | low | Critical |
523,013,361 | pytorch | [jit] Traced `cat` on GPU doesn't support negative indexing | ```python
import torch.nn as nn
import torch
class MyNet(nn.Module):
def __init__(self):
super().__init__()
self.param = nn.Linear(3, 5)
def forward(self, x):
# change dim=-1 to dim=0 or dim=1 works
return torch.cat((self.param(x), self.param(x)), dim=-1)
def input_prototype(self):
return torch.tensor([[1.0, 2.0, 3.0]])
# work on cpu
a = MyNet()
input_prototype = a.input_prototype()
print(a(input_prototype))
b = torch.jit.trace(
a, input_prototype
)
print(b(input_prototype))
# not working on gpu
a = MyNet().cuda()
input_prototype = a.input_prototype().cuda()
print(a(input_prototype))
b = torch.jit.trace(
a, input_prototype
)
print(b(input_prototype))
```
throws
```
TracingCheckError: Tracing failed sanity checks!
Encountered an exception while running the trace with test inputs.
Exception:
vector::_M_range_check: __n (which is 18446744073709551615) >= this->size() (which is 2)
The above operation failed in interpreter, with the following stack trace:
Exception type: std::runtime_error
```
scripting fixes the issue
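A minimal sketch of what I mean by scripting (same `MyNet` as above):
```python
a = MyNet().cuda()
input_prototype = a.input_prototype().cuda()
b = torch.jit.script(a)  # scripting instead of tracing avoids the failure with dim=-1
print(b(input_prototype))
```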
cc @suo | oncall: jit,triaged | low | Critical |
523,019,342 | go | x/mobile: when a panic happens, a long enough traceback deadlocks the runtime on Android | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.13.1 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes.
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/<hidden>/.cache/go-build"
GOENV="/home/<hidden>/.config/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GONOPROXY=""
GONOSUMDB=""
GOOS="linux"
GOPATH="/opt/go"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/lib/go"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/lib/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
AR="ar"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build557544076=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
<!--
If possible, provide a recipe for reproducing the error.
A complete runnable program is good.
A link on play.golang.org is best.
-->
I have a Go application compiled for Android with gomobile. When a panic happens, the runtime prints a traceback and quits. However, when this traceback is long enough, the application freezes and never quits.
[Gomobile redirects `stderr` to Android](https://github.com/golang/mobile/blob/master/internal/mobileinit/mobileinit_android.go#L74-L82) (Android log tag being "GoLog"). When a panic happens, the world is frozen, so:
- the goroutine `lineLog` no longer consumes the pipe `dup`-ed to `stderr`;
- at some point the pipe becomes full;
- [the runtime continues to write the long traceback in `stderr`](https://github.com/golang/go/blob/master/src/runtime/write_err_android.go#L49) and blocks until `stderr` is writable;
- deadlock.
Disabling gomobile's redirection of `stderr` makes the application crash "properly".
Note that [the runtime already redirects logs to Android](https://github.com/golang/go/blob/master/src/runtime/write_err_android.go#L51-L83) (with the tag "Go"), so the redirection of `stderr` by gomobile seems redundant to me?
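For anyone who wants to see the pipe-full blocking in isolation, here is a tiny illustration of the mechanism, sketched in Python only because it is short: a writer on a pipe that nobody reads blocks as soon as the kernel pipe buffer (typically 64 KiB) fills up, which is the situation the frozen `lineLog` goroutine leaves the runtime in.
```python
import os

r, w = os.pipe()           # nothing ever reads from r, like the frozen lineLog goroutine
chunk = b"x" * 4096
written = 0
while True:
    os.write(w, chunk)     # blocks forever once the kernel pipe buffer is full
    written += len(chunk)
    print("wrote", written, "bytes so far")
```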
To reproduce, you can modify the runtime to `println` a lot of lines when a panic is caught, until the runtime freezes.
### What did you expect to see?
When a panic happens, and the resulting traceback is long enough, I expect the runtime to print the full traceback and quit.
### What did you see instead?
When a panic happens, and the resulting traceback is long enough, I see the runtime prints a truncated traceback and freezes. | NeedsInvestigation,mobile | low | Critical |
523,020,796 | vue | Allow accessing events registered via `vm.$on(...)` via a property, similar to `$listeners` | ### What problem does this feature solve?
Currently, if an event is registered via `vm.$on('event-name', handler)`, it does not appear in the `this.$listeners` object (Vue 2.6.x)
In some instances you only want to handle the event processing if there is indeed a listener registered (for performance reasons). But when component event listeners are registered programmatically via `this.$on` (or `vmReference.$on`), it is not currently possible to see them in `this.$listeners`, e.g.:
```js
if (this.$listeners['event-name']) {
// Do something computationally intensive
// then emit event
this.$emit('event-name', resultOfComputation)
}
```

### What does the proposed API look like?
No new API for the public.
<!-- generated by vue-issues. DO NOT REMOVE --> | discussion | low | Major |
523,049,431 | storybook | React - Browser error: fn.apply is not a function | **Describe the bug**
Most of my components produce the following error in the browser.
`TypeError: fn.apply is not a function
`
**To Reproduce**
Steps to reproduce the behavior:
1. `npm run storybook`
2. Error shows up in browser
**Expected behavior**
No error, show my component
**Screenshots**
<img width="1202" alt="Screen Shot 2019-11-14 at 11 05 09 AM" src="https://user-images.githubusercontent.com/12503822/68887875-a0dba400-06ce-11ea-90be-3909b3c93b33.png">
**System:**
```
Environment Info:
System:
OS: macOS 10.15.1
CPU: (12) x64 Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz
Binaries:
Node: 10.15.3 - ~/.nvm/versions/node/v10.15.3/bin/node
Yarn: 1.19.0 - /usr/local/bin/yarn
npm: 6.4.1 - ~/.nvm/versions/node/v10.15.3/bin/npm
Browsers:
Chrome: 78.0.3904.97
Safari: 13.0.3
npmPackages:
@storybook/react: ^5.2.6 => 5.2.6
```
| question / support,react | low | Critical |
523,057,902 | rust | Fix `trait_ref_to_existential` as trait aliases is stabilized | https://github.com/rust-lang/rust/pull/66392#discussion_r346112428
There appears to be a missing filter on top of `expand_trait_aliases`, which picks up non-supertrait where clauses - but also, the object-safety check completely ignores trait aliases, which could be object-safety hazards. We now use `delay_span_bug` there to avoid an ICE in stable even when the feature is disabled.
523,085,327 | godot | Loading FPS Demo cause of using nan values | **Godot version:**
3.2.beta.custom_build.7d836a7cc
**OS/device including version:**
Ubuntu 19.10
**Issue description:**
Loading the FPS tutorial causes Godot to call Image::set_pixel with a p_color variable that has NaN values inside it.


**Steps to reproduce:**
1. Just run the minimal project with breakpoints set as shown above
**Minimal reproduction project:**
https://github.com/TwistedTwigleg/Godot_FPS_Tutorial | bug,topic:core | low | Minor |
523,096,570 | TypeScript | Type narrowing for optional chaining in OR expression in AND expression | **TypeScript Nightly** version 3.8.0-dev.20191113
[Playground link](http://www.typescriptlang.org/play/index.html?ts=3.8.0-dev.20191113&ssl=1&ssc=1&pln=1&pc=79#)
Compiler Options:
```json
{
"compilerOptions": {
"noImplicitAny": true,
"strictNullChecks": true,
"strictFunctionTypes": true,
"strictPropertyInitialization": true,
"strictBindCallApply": true,
"noImplicitThis": true,
"noImplicitReturns": true,
"useDefineForClassFields": false,
"alwaysStrict": true,
"allowUnreachableCode": false,
"allowUnusedLabels": false,
"downlevelIteration": false,
"noEmitHelpers": false,
"noLib": false,
"noStrictGenericChecks": false,
"noUnusedLocals": false,
"noUnusedParameters": false,
"esModuleInterop": true,
"preserveConstEnums": false,
"removeComments": false,
"skipLibCheck": false,
"checkJs": false,
"allowJs": false,
"experimentalDecorators": false,
"emitDecoratorMetadata": false,
"target": "ES2017",
"module": "ESNext"
}
}
```
**Input:**
```typescript
const f = (a: { b: boolean, c: number } | undefined) => (a?.b || null) && a.c;
```
**Output:**
Compiler error: `Object is possibly 'undefined'.(2532)`
**Expected behavior:**
Successful compilation
**Note:**
```typescript
const f = (a: { b: boolean, c: number } | undefined) => (a?.b || false) && a.c;
```
(using `false` instead of `null`) works as expected.
**Related issues:**
https://github.com/microsoft/TypeScript/issues/34570
https://github.com/microsoft/TypeScript/issues/33806 | Bug | low | Critical |
523,121,627 | pytorch | torch.distributions.normal.Normal is not JIT supported | ## ๐ Bug
When trying to use `torch.distributions.normal.Normal` in a JIT function, you get `torch.jit.frontend.NotSupportedError: comprehension ifs not supported yet`.
## To Reproduce
I am writing a function and calling `torch.distributions.normal.Normal.sample()`.
Steps to reproduce the behavior:
1. Call `torch.distributions.normal.Normal` in a function tagged with `@torch.jit.script`.
```
torch.distributions.normal.Normal(torch.mean(waveform), 1).sample()
```
## Expected behavior
JIT to compile the function
## Traceback
```
Traceback (most recent call last):
File "test/test_functional.py", line 6, in <module>
import torchaudio
File "/private/home/cwillycs/audio/torchaudio/__init__.py", line 7, in <module>
from torchaudio import transforms, datasets, kaldi_io, sox_effects, compliance, _docs
File "/private/home/cwillycs/audio/torchaudio/transforms.py", line 6, in <module>
from . import functional as F
File "/private/home/cwillycs/audio/torchaudio/functional.py", line 904, in <module>
def dither(waveform, probability_density_function="TPDF", noise_shaping=False, ns_filter=""):
File "/private/home/cwillycs/.local/lib/python2.7/site-packages/torch/jit/__init__.py", line 1226, in script
fn = torch._C._jit_script_compile(qualified_name, ast, _rcb, get_default_args(obj))
File "/private/home/cwillycs/.local/lib/python2.7/site-packages/torch/jit/__init__.py", line 1075, in _compile_and_register_class
ast = get_jit_class_def(obj, obj.__name__)
File "/private/home/cwillycs/.local/lib/python2.7/site-packages/torch/jit/frontend.py", line 148, in get_jit_class_def
self_name=self_name) for method in methods]
File "/private/home/cwillycs/.local/lib/python2.7/site-packages/torch/jit/frontend.py", line 169, in get_jit_def
return build_def(ctx, py_ast.body[0], type_line, self_name)
File "/private/home/cwillycs/.local/lib/python2.7/site-packages/torch/jit/frontend.py", line 209, in build_def
build_stmts(ctx, body))
File "/private/home/cwillycs/.local/lib/python2.7/site-packages/torch/jit/frontend.py", line 127, in build_stmts
stmts = [build_stmt(ctx, s) for s in stmts]
File "/private/home/cwillycs/.local/lib/python2.7/site-packages/torch/jit/frontend.py", line 185, in __call__
return method(ctx, node)
File "/private/home/cwillycs/.local/lib/python2.7/site-packages/torch/jit/frontend.py", line 283, in build_Assign
rhs = build_expr(ctx, stmt.value)
File "/private/home/cwillycs/.local/lib/python2.7/site-packages/torch/jit/frontend.py", line 185, in __call__
return method(ctx, node)
File "/private/home/cwillycs/.local/lib/python2.7/site-packages/torch/jit/frontend.py", line 681, in build_ListComp
raise NotSupportedError(r, "comprehension ifs not supported yet")
torch.jit.frontend.NotSupportedError: comprehension ifs not supported yet:
at /private/home/cwillycs/.local/lib/python2.7/site-packages/torch/distributions/distribution.py:263:23
def __repr__(self):
param_names = [k for k, _ in self.arg_constraints.items() if k in self.__dict__]
<--- HERE
args_string = ', '.join(['{}: {}'.format(p, self.__dict__[p]
if self.__dict__[p].numel() == 1
else self.__dict__[p].size()) for p in param_names])
return self.__class__.__name__ + '(' + args_string + ')'
'Normal' is being compiled since it was called from 'dither'
at /private/home/cwillycs/audio/torchaudio/functional.py:952:8
random_tensor = int(torch.randint(wave_size, [1, ]).item())
RPDF_dither = waveform[0][random_tensor] - 0.5
signal_scaled_RPDF_dithered = signal_scaled + RPDF_dither
quantised_signal_scaled_RPDF_dithered = torch.round(signal_scaled_RPDF_dithered)
quantised_signal_RPDF_dithered = quantised_signal_scaled_RPDF_dithered / down_scaling
dithered = quantised_signal_RPDF_dithered
elif (probability_density_function == "GPDF"):
gaussian_dither = torch.distributions.normal.Normal(torch.mean(waveform), 1).sample()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
signal_scaled_gaussian_dithered = signal_scaled + gaussian_dither
quantised_signal_scaled_gaussian_dithered = torch.round(signal_scaled_gaussian_dithered)
quantised_signal_gaussian_dithered = quantised_signal_scaled_gaussian_dithered / down_scaling
dithered = quantised_signal_gaussian_dithered
else:
TPDF_dither = torch.bartlett_window(wave_size + 1)
```
## Additional context
Relevant to [PR #319](https://github.com/pytorch/audio/pull/319)
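A possible workaround sketch in the meantime (an assumption on my part, I haven't verified that this particular overload is scriptable): sample with `torch.normal` directly instead of building a `Normal` distribution object. Here `waveform` is just a stand-in for the real input tensor.
```
import torch

waveform = torch.randn(1, 16000)  # stand-in for the real input
# instead of: torch.distributions.normal.Normal(torch.mean(waveform), 1).sample()
gaussian_dither = torch.normal(torch.mean(waveform), 1.0)
print(gaussian_dither)
```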
cc @suo | triage review,oncall: jit,feature,triaged | medium | Critical |
523,121,914 | pytorch | Fold DispatchStub into c10 dispatcher | Right now we have two levels of dynamic dispatch: c10 dispatcher does a first level of dispatch, and then DispatchStub does a second level of dispatch. For CUDA/HIP dispatch, DispatchStub's extra dispatch is completely unnecessary. However, for CPU dispatch, DispatchStub supports dispatching to AVX/AVX2/default CPU kernels at runtime depending on what instruction set is supported by the processor. We should adjust c10 dispatcher to support this use-case; combined with an extra dispatch step for backend common code (the other reason why people commonly use DispatchStub), so we can eliminate DispatchStub entirely.
We will achieve this by adding a priority and conditional registration to the dispatch registration API.
**Kernel priorities.** Every kernel will be associated with a priority, described as a closed enum type:
```
enum class DispatchPriority {
Default,
AVX,
AVX2,
// Enum supports other priorities, e.g.,
// PrivateUse_HighestPriority,
};
```
`DispatchPriority::Default` is used if none is specified. If a kernel registration occurs where another kernel already exists, we compare priorities and only take the kernel with highest priority.
The intended use of this mechanism is to associate kernels with higher vectorization layers with higher priority, so we prefer to use them.
**Conditional registration.** Conditional registration is simple: instead of unconditionally registering a kernel, we test a condition in our static initializer. If the condition is false, we don't register the kernel (we still may register the schema it corresponds to).
The intended use of this mechanism is to avoid registering kernels for CPU capabilities which we don't actually have. | triaged,module: dispatch | low | Major |
523,142,887 | rust | #[must_use] does not yield warning in pattern match | The following code does not warn that `_f` in `use_foos` is marked as `#[must_use]`:
```rust
#[must_use]
pub struct Foo;
pub enum Foos {
F(Foo)
}
pub fn use_foos(f: Foos) {
match f {
Foos::F(_f) => {}
}
}
```
If we change `_f` to `f`, we get a warning that `f` is unused, but still no `#[must_use]` warning.
Adding `#[must_use]` to the enum and variant does not produce the warning either:
```rust
#[must_use]
pub struct Foo;
#[must_use]
pub enum Foos {
F(#[must_use] Foo)
}
pub fn use_foos(f: Foos) {
match f {
Foos::F(_f) => {}
}
}
``` | A-lints,T-lang,C-feature-request | low | Minor |
523,144,106 | terminal | Open a new pane by prompting the user for which profile to use | Bear with me for a second. Somebody suggested this in another issue, and I wonder if we should consider it here.

The new pane is focused, and put in a [number entry] or [search for profiles by name] mode, just like one of our earlier issues suggested.
We probably want that UI _anyway_, since it has some benefits. Thoughts?
_Originally posted by @DHowett-MSFT in https://github.com/microsoft/terminal/issues/1756#issuecomment-507507450_
<hr>
See also #998, #1756, #1000 | Help Wanted,Area-UserInterface,Area-Settings,Product-Terminal,Issue-Task | medium | Major |
523,149,213 | TypeScript | Consider some way of serializing open projects in tsserver, potentially leveraging `.tsbuildinfo` files | One thing that users often hit is that file navigation might trigger opening an entire project. Opening an entire project involves
1. File loading
1. Scanning/parsing
1. Resolving dependencies
1. Keep repeating file loading on dependencies until no new files are found
This is a lot of work! If a user jumps back and forth from this file, it can re-trigger this work even if nothing has changed!
A `.tsbuildinfo` file is used to save time on cold compiler invocations doing this exact set of work, and to reduce work when something actually has changed. It would be interesting to see whether generating a `.tsbuildinfo` file after project loads could help cut down on this work. | Suggestion,Needs Proposal,In Discussion,Domain: TSServer,Domain: Performance,Experimentation Needed,Domain: --incremental | low | Minor |
523,154,720 | TypeScript | Grace period strategy for unloading projects | Today, navigating to a file in another project might involve a full project load. Bad! Users often navigate back to the original project, and then dive right back into the loaded project. Unfortunately, by the time that they do that, that project is unloaded and all the work has to be done **all over again**.
We might want to consider something like a "grace period" for unloading projects when the last open file of a project is closed. | Domain: Performance | low | Minor |
523,163,813 | flutter | flutter run stuck on "Installing and launching" if the iOS app crashes on launch | ## Steps to Reproduce
1. `flutter create test_run`
2. Replace `ios/Runner/AppDelegate.swift` with asserting code:
```swift
import UIKit
import Flutter
@UIApplicationMain
@objc class AppDelegate: FlutterAppDelegate {
override func application(
_ application: UIApplication,
didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?
) -> Bool {
GeneratedPluginRegistrant.register(with: self)
assert(false, "Crashing for no reason");
return super.application(application, didFinishLaunchingWithOptions: launchOptions)
}
}
```
3. `flutter run -d <>`
See the app crash on the device but `flutter run` never exits and is stuck on "Installing and launching".
## Logs
```
$ flutter run -d d83d5bc53967baa0ee18626ba87b6254b2ab5418
Launching lib/main.dart on Flutter iOS Device in debug mode...
Automatically signing iOS for device deployment using specified development team in Xcode project: S8QB4VV633
Running pod install... 1.8s
Running Xcode build...
โโAssembling Flutter resources... 2.7s
โโCompiling, linking and signing... 6.8s
Xcode build done. 12.8s
Installing and launching... โฃฏ
```
```
$ flutter doctor -v
[โ] Flutter (Channel master, v1.12.1-pre.59, on Mac OS X 10.14.6 18G87, locale en-US)
โข Flutter version 1.12.1-pre.59 at /Users/m/Projects/flutter
โข Framework revision 6cb6857f92 (2 hours ago), 2019-11-14 16:50:17 -0500
โข Engine revision 77c3512ec8
โข Dart version 2.7.0
[โ] Android toolchain - develop for Android devices (Android SDK version 28.0.3)
โข Android SDK at /Users/m/Library/Android/sdk
โข Android NDK location not configured (optional; useful for native profiling support)
โข Platform android-28, build-tools 28.0.3
โข Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
โข Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b49-5587405)
โข All Android licenses accepted.
[โ] Xcode - develop for iOS and macOS (Xcode 11.2)
โข Xcode at /Users/m/Applications/Xcode-11_2.app/Contents/Developer
โข Xcode 11.2, Build version 11B41
โข CocoaPods version 1.8.4
[โ] Android Studio (version 3.5)
โข Android Studio at /Applications/Android Studio.app/Contents
โข Flutter plugin version 40.2.2
โข Dart plugin version 191.8593
โข Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b49-5587405)
[โ] IntelliJ IDEA Community Edition (version 2019.2.3)
โข IntelliJ at /Applications/IntelliJ IDEA CE.app
โข Flutter plugin version 41.1.3
โข Dart plugin version 192.7402
[โ] VS Code (version 1.39.2)
โข VS Code at /Applications/Visual Studio Code.app/Contents
โข Flutter extension version 3.6.0
[โ] Connected device (2 available)
โข Flutter iOS Device โข d83d5bc53967baa0ee18626ba87b6254b2ab5418 โข ios โข iOS 13.1.3
โข iPhone 11 Pro Max โข 389148FB-E436-4F8B-B37A-8564AF1627C0 โข ios โข com.apple.CoreSimulator.SimRuntime.iOS-13-2 (simulator)
โข No issues found!
```
| platform-ios,tool,P3,team-ios,triaged-ios | medium | Critical |
523,169,758 | electron | [Bug]: will-navigate doesn't fire when navigating to about: URLs | ### Preflight Checklist
<!-- Please ensure you've completed the following steps by replacing [ ] with [x]-->
* [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/master/CONTRIBUTING.md) for this project.
* [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/master/CODE_OF_CONDUCT.md) that this project adheres to.
* [x] I have searched the issue tracker for an issue that matches the one I want to file, without success.
### Issue Details
* **Electron Version:** 5.0.7
* **Operating System:** Windows 10
### Expected Behavior
Based on the documentation, I expect that navigating to `"about:blank"` will trigger the webcontent's `will-navigate` event.
### Actual Behavior
`will-navigate` does not get triggered.
The [docs](https://electronjs.org/docs/api/web-contents#event-will-navigate) make no mention of `"about:blank"` being an exception, and so I expect that `will-navigate` should get emitted when navigating to it.
Either this is a documentation error, or it's a bug with `will-navigate`.
### To Reproduce
```
const { app, BrowserWindow, BrowserView } = require('electron')
async function onReady() {
const mainWindow = new BrowserWindow();
const view = new BrowserView();
view.webContents.addListener("will-navigate", (e, url) => {
console.log("navigating to: " + url);
});
mainWindow.setBrowserView(view);
await view.webContents.loadURL("https://google.com");
view.webContents.openDevTools();
}
app.on('ready', onReady)
```
After `npm start`ing, in the devtools, run `location = "https://google.com"`. Notice that we log it.
Then run `location = "about:blank"`. Notice, that there's no log. | platform/windows,bug :beetle:,5-0-x,7-1-x,10-x-y,has-repro-gist,20-x-y,22-x-y | medium | Critical |
523,180,271 | pytorch | torch.save/load shows raw path on the pickle_module arg | The doc now looks like below. I would assume the path was not intended to be there?
<img width="877" alt="Screen Shot 2019-11-14 at 4 12 15 PM" src="https://user-images.githubusercontent.com/16999635/68906684-fb3e2a00-06f9-11ea-812a-90e66aecc787.png">
| module: docs,module: serialization,triaged | low | Minor |
523,186,969 | pytorch | n-dimensional non-constant padding functional | ## ๐ Feature
It'd be nice to have n-dimensional non-constant padding functional available.
```
import torch
n = 2
a = torch.randn(4, 3)
torch.nn.functional.pad(a, (n,n), mode='replicate')
# NotImplementedError: Only 3D, 4D, 5D padding with non-constant padding are supported for now
```
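In the meantime, a possible stop-gap (just a sketch, not an official API for this) is to lift the 2-D tensor to 3-D, where non-constant padding is implemented, and then squeeze back:

```python
import torch
import torch.nn.functional as F

n = 2
a = torch.randn(4, 3)
# Unsqueeze to 3-D so the replicate-padding kernel applies, pad the last
# dimension, then squeeze back to 2-D.
padded = F.pad(a.unsqueeze(0), (n, n), mode='replicate').squeeze(0)
print(padded.shape)  # torch.Size([4, 7])
```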
## Motivation
Non-constant would simplify `compute_deltas` in pytorch/audio#337.
## Additional context
n-dimensional constant padding was introduced in #2657.
cc @albanD @mruberry @jbschlosser | module: nn,triaged,enhancement,module: padding | low | Critical |
523,200,178 | TypeScript | Investigate making the binding phase lazy | Today, any semantic or language service operations **must** be preceded by a phase of our compiler called binding. This phase does two things:
* creates symbol tables as well as symbols per scope
* sets parent pointers (because it's already walking the tree anyhow)
However, this can end up being a lot of unnecessary up-front work. For type-checking a given file, the only files that need to be bound are
* files that affect global namespaces (e.g. global files or files containing module augmentations, global augmentations, and UMD namespaces)
* any file that needs to be checked to check the current file
Recently, I spent a bit of time on a plane ride wondering if we could do less work based on this. Instead of forcing all files to be bound, we could bind only global-affecting files up-front, and then force a bind prior to checking or resolving a given file. This has the advantage that something like quick info only needs to bind the minimal set of dependencies before coming back with an answer, making checking significantly lazier. It also means that `skipLibCheck` could end up working faster in command-line scenarios by binding fewer `.d.ts` files that are automatically included (e.g. why bind `.d.ts` files for Jest if you're compiling app code instead of test code?).
The flip side of this is that making this lazy can complicate a lot of other operations. Many language service operations don't actually care about binding, but they do care about parent pointers being set. They'll be preceded by a call to `getTypeChecker()` just to ensure files are bound before performing specific steps.
> Yeah, I know, weird design!
The other issue is that certain type-checker APIs likely need to be guarded against to ensure a requested file is bound. I haven't dived deep here, so this is more of a speculative concern.
Finally, while laziness means that we can partially amortize each operation into incremental chunks of work, there's no telling when pulling on a thread of work will trigger TypeScript to do ALL of the work. Currently TypeScript does ALL the work up front, but that might be good for avoiding frustrating delays later on. For example, if not all files are bound yet, TypeScript can't immediately respond to *go to symbol*, *find all references*, or even some cases of *get completions* (thanks to auto-imports!) before ensuring every file is bound.
On the other hand, once that work gets done, it's done! Only re-parsed files need to be re-bound. So TypeScript might start out slow on some operations, warming up, and eventually staying hot going forward. There are also other possibilities of making this easier. For example, the services layer could also potentially bind unbound files in the background on idle time if it turned out we really needed to. | Domain: Performance | low | Major |
523,209,732 | TypeScript | Investigate implicit excludes | Today, TypeScript will implicitly set `exclude` to `./node_modules` and something like `.*/**/*` to avoid dot files.
Unfortunately, there are two problems I believe I've found from chatting with users.
First, users often end up setting `exclude` which overrides the defaults. This often means that users accidentally over-include files, and that can cause performance issues.
Next, these defaults don't work out well enough, and end up going a bit off the rails in certain scenarios. For example, consider the following `.js` monorepo.
```
packages
+- package-aaa
| +- node_modules <- the cute tiny 10MB node_modules
|
+- jsconfig.json
+- node_modules    <- the slightly bigger 10GB node_modules
                      shared by each package
```
Notice that `jsconfig.json` or not, one of these `node_modules` is going to be crawled through unless `exclude` is set appropriately. And regardless of whether `exclude` is implicitly or explicitly on, it's usually wrong. In this case, it should (probably) be `**/node_modules/**/*`. For example, check out https://github.com/IBM/report-toolkit/pull/44/files
What I'm suggesting is to harden `exclude` to **always** contain the following paths regardless of whether `exclude` is set.
* `**/node_modules/**/*`
* `.*/**`
To disable this, users will have to turn off a setting like `enableImplicitExcludePatterns`. | Domain: Performance | low | Major |
523,220,548 | ant-design | the prev and next buttons do not work as expected when a carousel is nested inside another carousel in some cases | - [x] I have searched the [issues](https://github.com/ant-design/ant-design/issues) of this repository and believe that this is not a duplicate.
### Reproduction link
[](https://codesandbox.io/s/wizardly-hawking-cp0e3)
### Steps to reproduce
## one
1. open the link https://codesandbox.io/s/wizardly-hawking-cp0e3
2. click next button once at the top carousel
3. click next or prev button at first child carousel
## another:
1. open the link https://codesandbox.io/s/wizardly-hawking-cp0e3
2. click prev button once at the top carousel
3. click next or prev button at second child carousel
### What is expected?
When clicking the prev or next button of a child carousel, it should slide.
### What is actually happening?
It does not slide; nothing happens.
| Environment | Info |
|---|---|
| antd | 3.25.1 |
| React | 16.7.0 |
| System | macOS Catalina 10.15.1 |
| Browser | 78.0.3904.87 (64) |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | Inactive,๐ External Dependency | low | Major |
523,291,635 | pytorch | Get wrong precision when multi nodes run in docker | I can finetune a pretrained model using two nodes outside docker by several steps to get better precision. But when I do the same thing in the docker( without any error messages) by several steps in the same machine, the test precision is always 0. Does anyone know why?
Pytorch version is 1.2.0, nccl version is 2.4.8 for inside or outside docker container.
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @xush6528 | needs reproduction,oncall: distributed,triaged | low | Critical |
523,295,237 | opencv | opencv-python VideoCapture in dell EMC centos gets 10s delay | <!--
If you have a question rather than reporting a bug please go to http://answers.opencv.org where you get much faster responses.
If you need further assistance please read [How To Contribute](https://github.com/opencv/opencv/wiki/How_to_contribute).
Please:
* Read the documentation to test with the latest developer build.
* Check if other person has already created the same issue to avoid duplicates. You can comment on it if there already is an issue.
* Try to be as detailed as possible in your report.
* Report only one problem per created issue.
This is a template helping you to create an issue which can be processed as quickly as possible. This is the bug reporting section for the OpenCV library.
-->
##### System information (version)
<!-- Example
- OpenCV => 3.1
- Operating System / Platform => Windows 64 Bit
- Compiler => Visual Studio 2015
-->
- OpenCV =>4.1.1
- Operating System / Platform => centos7.7
- Compiler => python,anaconda
##### Detailed description
When I use
```
cv2.VideoCapture('')  # rtsp camera stream
```
and then imshow the realtime frames, I get a delay of about 10 s, sometimes 8 s.
My machine is a Dell EMC server with two CPUs ("double cores").
The same code on other machines (Windows, or other CentOS machines) shows the camera in real time.
I suspect OpenCV is incompatible with this dual-CPU machine.
I tried switching to Ubuntu, but it has the same problem.
Hoping for your reply, thanks.
<!-- your description -->
##### Steps to reproduce
<!-- to add code example fence it with triple backticks and optional file extension
```.cpp
// C++ code example
```
or attach as .txt or .zip file
-->
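A minimal sketch of the setup described above (the RTSP URL is a placeholder; comments reflect what happens on the affected machine):

```python
# Minimal reproduction sketch; replace the placeholder URL with the real
# RTSP camera stream.
import time
import cv2

cap = cv2.VideoCapture("rtsp://<camera-address>/stream")  # placeholder URL

while True:
    start = time.time()
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("rtsp", frame)  # on the affected machine this lags ~8-10 s behind reality
    print("read took %.3f s" % (time.time() - start))
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```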
| incomplete,needs reproducer,needs investigation | low | Critical |
523,325,475 | PowerToys | RAM disk | # Summary of the new feature/enhancement
Now that modern desktop PCs come with a ton of RAM, it would be awesome to have a MS-supported RAM-disk solution. Modern SSDs are fast, but not a match for a fast DDR4 unit.
It could be used as a:
- "transient" tmp folder
- storage for compilation artifacts / system headers directory (build-time improvements, yay!)
- any HDD-intensive application
It could also automatically copy some folder onto it at Windows startup and periodically write back the changes.
| Idea-New PowerToy | low | Major |
523,329,066 | vscode | SCM - Make registerDiffInformationCommand public | I see that the `registerDiffInformationCommand` has been propsed for a while. I was wondering if you have an eta for when it will be stable. The selected ranges functionality would be useful.
| feature-request,api,scm | low | Major |
523,435,325 | node | doc: http.ServerResponse close event | <!--
Thank you for reporting a possible bug in Node.js.
Please fill in as much of the template below as you can.
Version: output of `node -v`
Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows)
Subsystem: if known, please specify the affected core module name
If possible, please provide code that demonstrates the problem, keeping it as
simple and free of external dependencies as you can.
-->
* **Version**: 12.x
<!-- Please provide more details below this comment. -->
In the [version 10 documentation](https://nodejs.org/docs/latest-v10.x/api/http.html#http_event_close_1) it states that the 'close' event is emitted on a response when the connection is terminated before the response has ended.
In the [version 12 documentation](https://nodejs.org/docs/latest-v12.x/api/http.html#http_event_close_1) it simply states that the connection was terminated. This does not seem consistent with the behaviour I have seen, where the event is actually emitted at the end of each response, which I have also found described in [a commit](https://github.com/nodejs/node/commit/ffb503be5f07a26d73a2b3b59955636452948ba7) introduced in version 11.
Can you clarify that the event in version 12 is in fact emitted at the end of each response, in which case I would be happy to submit a PR to update the docs. | http,doc | low | Critical |
523,453,244 | node | console.log failure while working with worker threads. | * **Version**: 12.4.0
* **Platform**: Docker which runs on mac (Darwin Kernel Version 18.7.0, xnu-4903.278.12~1/RELEASE_X86_64 x86_64)
* **Subsystem**:
<!-- Please provide more details below this comment. -->
Hello everyone.
Thank you for your hard work on NodeJS.
Now, I developed a small service which takes 4 CSV files, parses them, maps them together, and imports them into Elasticsearch.
Each file is parsed on a different thread.
The parsed content of one of the files is sent via an event to a different file; that file spawns, for each set of data, a new thread that imports that set into ES.
In parallel, on the main thread, I send the content of one of the files in chunks, via an event together with the contents of the remaining 2 files, to a different script again.
That script spawns a new thread for each chunk of data. The thread maps the given data to the provided chunk, if they match, and sends the mapped data back to the main thread, which again spawns a new thread that imports the mapped data into ES.
The issue I have here is that once everything is running at the same time, the only console.logs I get are the ones from the main thread. Everything that is logged on a worker thread is lost somewhere while the main thread is under load.
Note: the actual code is processed as it should be; it is just the console.logs that do not show up.
This makes debugging on worker threads really difficult. Maybe I am missing something. | worker,stdio | medium | Critical |
523,461,440 | rust | Too many `type inside `async` object must be known in this context` errors | This code ([Playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=1bc26deb3213fe0b2fb95f71a2c6e60d))
```rust
use futures::stream::{iter/*, StreamExt*/};
async fn produce_11_errors() {
iter(vec![1, 2, 3])/*.collect::<Vec<_>>()*/.await;
}
fn main() {}
```
currently produces 11 errors:
- 1 of `the trait bound is not satisfied`
- 10 of `type inside async object must be known in this context`.
If you add elements to vec, it will generate even more.
<!-- TRIAGEBOT_START -->
<!-- TRIAGEBOT_ASSIGN_START -->
<!-- TRIAGEBOT_ASSIGN_DATA_START$${"user":"doc-jones"}$$TRIAGEBOT_ASSIGN_DATA_END -->
<!-- TRIAGEBOT_ASSIGN_END -->
<!-- TRIAGEBOT_END --> | A-diagnostics,T-compiler,A-inference,C-bug,A-async-await,AsyncAwait-Triaged,D-verbose | low | Critical |
523,560,575 | rust | Syntax of `rustc --cfg` with key and value is obscure. | Hi,
I was recently defeated by `rustc --cfg`.
`rustc -h` says:
```
--cfg SPEC Configure the compilation environment
```
Hrm, well I know that stuff like `--cfg var` should work, but that I should also be able to assign a value:
```
$ rustc --cfg key=val -
error: invalid `--cfg` argument: `key=val` (expected `key` or `key="value"`)
$ rustc --cfg key="val" -
error: invalid `--cfg` argument: `key=val` (expected `key` or `key="value"`)
```
Eh? I passed exactly what the error asked. And obviously the backticks can't form the solution, as they'd open a sub-shell.
The solution is to use: `rustc --cfg 'key="val"'`
The quoting is very specific.
Swapping the single and double quotes won't work:
```
$ rustc --cfg "key='val'" -
error: character literal may only contain one codepoint
--> <quote expansion>:1:5
|
1 | key='val'
| ^^^^^
help: if you meant to write a `str` literal, use double quotes
|
1 | key="val"
| ^^^^^
```
And oddly, quoting the value is ok, but not the key:
```
$ rustc --cfg '"key"="val"' -
error: invalid `--cfg` argument: `"key"="val"` (expected `key` or `key="value"`)
```
I find this wholly unexpected and I had to grep the source code to find out how to use the argument :(
So are the current limitations of the interface intentional?
If so, we should improve the `-h` doc and error message.
If not, what do we want? Should `--cfg` accept:
* `key=val`
* `'key="val"`
* `"key='val'"`
* `'"key"="val"'`
* `"key=val"`
Thanks. | A-frontend,C-enhancement,A-diagnostics,T-compiler | low | Critical |
523,623,547 | godot | Color values outside the range of representable values of type 'unsigned char' | **Godot version:**
Godot 3.2 Beta 1
**Issue description:**
```
core/color.cpp:55:27: runtime error: 8.04522e+16 is outside the range of representable values of type 'unsigned char'
core/color.cpp:57:27: runtime error: 2.80387e+16 is outside the range of representable values of type 'unsigned char'
core/color.cpp:59:27: runtime error: 1.07219e+17 is outside the range of representable values of type 'unsigned char'
core/color.cpp:94:28: runtime error: 2.75553e+19 is outside the range of representable values of type 'short unsigned int'
core/color.cpp:96:28: runtime error: 7.20595e+18 is outside the range of representable values of type 'short unsigned int'
core/color.cpp:98:28: runtime error: 2.06762e+19 is outside the range of representable values of type 'short unsigned int'
core/color.cpp:105:36: runtime error: 2.72692e+19 is outside the range of representable values of type 'short unsigned int'
core/color.cpp:107:28: runtime error: 7.0107e+18 is outside the range of representable values of type 'short unsigned int'
core/color.cpp:109:28: runtime error: 8.46188e+18 is outside the range of representable values of type 'short unsigned int'
core/color.cpp:53:35: runtime error: 5.03603e+16 is outside the range of representable values of type 'unsigned char'
core/color.cpp:92:36: runtime error: 1.29426e+19 is outside the range of representable values of type 'short unsigned int'
core/color.cpp:66:35: runtime error: 8.748e+16 is outside the range of representable values of type 'unsigned char'
core/color.cpp:68:27: runtime error: 5.48411e+16 is outside the range of representable values of type 'unsigned char'
core/color.cpp:70:27: runtime error: 4.08141e+16 is outside the range of representable values of type 'unsigned char'
core/color.cpp:72:27: runtime error: 5.03603e+16 is outside the range of representable values of type 'unsigned char'
core/color.cpp:79:36: runtime error: 4.22969e+18 is outside the range of representable values of type 'short unsigned int'
core/color.cpp:81:28: runtime error: 1.26467e+19 is outside the range of representable values of type 'short unsigned int'
core/color.cpp:83:28: runtime error: 2.59119e+19 is outside the range of representable values of type 'short unsigned int'
core/color.cpp:85:28: runtime error: 3.43659e+18 is outside the range of representable values of type 'short unsigned int'
core/color.cpp:40:35: runtime error: 1.64579e+16 is outside the range of representable values of type 'unsigned char'
core/color.cpp:42:27: runtime error: 1.33719e+16 is outside the range of representable values of type 'unsigned char'
core/color.cpp:44:27: runtime error: 1.00824e+17 is outside the range of representable values of type 'unsigned char'
core/color.cpp:46:27: runtime error: 4.92091e+16 is outside the range of representable values of type 'unsigned char'
core/color.cpp:426:21: runtime error: 1.33719e+16 is outside the range of representable values of type 'int'
core/color.cpp:111:28: runtime error: 4.22969e+18 is outside the range of representable values of type 'short unsigned int'
```
**Steps to reproduce:**
1. Run the project (to see the errors, run Godot compiled with sanitizer support)
**Minimal reproduction project:**
[C.zip](https://github.com/godotengine/godot/files/3852724/C.zip)
| bug,topic:core,confirmed | low | Critical |
523,632,718 | terminal | Azure Cloud Shell to connect to Sovereign Clouds |
The Azure Cloud Shell component needs to be able to connect to Azure Clouds other than the default Commercial cloud. Currently the relevant URL endpoints are hard coded to Commercial.
| Product-Terminal,Issue-Task,Area-AzureShell | low | Major |
523,685,861 | TypeScript | Support "evolving any" with forEach / internal functions | TypeScript 2.1 [introduced](https://www.typescriptlang.org/docs/handbook/release-notes/typescript-2-1.html#improved-any-inference) the notion of an "evolving" `any` type, a variable whose type is determined based on subsequent usage:
```ts
function square(xs: number[]) {
const ys = [];
for (const x of xs) {
ys.push(x * x);
}
return ys; // type is number[], hooray!
}
```
Unfortunately this is a bit brittle. If you ever introduce an internal function (say if you use `forEach`) then the inferred type is lost and you get a no implicit any error:
```ts
function square(xs: number[]) {
const ys = []; // Variable 'ys' implicitly has type 'any[]' in some locations
// where its type cannot be determined. (7034)
xs.forEach(x => {
ys.push(x * x);
});
return ys; // Variable 'ys' implicitly has an 'any[]' type. (7005)
}
```
The two forms do the same thing, so it's somewhat surprising that the for-of version passes the type checker but the `forEach` version does not.
I find the "evolving any" construct to be incredibly useful, but I have to scrap it every time I introduce an arrow function. I'm proposing that it be extended to work for cases like this.
**TypeScript 3.7.2**
[Playground link](https://www.typescriptlang.org/play/?ssl=16&ssc=6&pln=13&pc=8#code/GYVwdgxgLglg9mABAZwI4gIYCcCmAKAD2QC5EwQBbAIxywG0BdASkQG8AoRLxCBZKRAE9kiALyJGAbk7dgcLIjy8w-RAURxga5Cw7d9Q5ADoADiGQALQogBUaptP0BfGV1xQQWJMOkv2oSFgEFHRsHAAmQhIyShp6ZjZXHj4BYTEJBkduIiM5LABRDAgrdVEAPkSDbmFTcxLbeyyuJwck909vZF92IA)
**Expected behavior:**
I'd expect both versions to pass the type checker and infer the type of `ys` as `number[]`.
| Suggestion,Awaiting More Feedback | low | Critical |
523,700,403 | go | net/http: ReadTimeout is not honored when ReadHeaderTimeout > ReadTimeout | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.13.4 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
It should.
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/root/.cache/go-build"
GOENV="/root/.config/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GONOPROXY=""
GONOSUMDB=""
GOOS="linux"
GOPATH="/go"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/local/go"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
AR="ar"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build860032074=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
The following simple program runs a Hello World HTTP server with 5 second ReadTimeout and 10 second ReadHeaderTimeout.
```
package main
import (
"fmt"
"net/http"
"time"
)
func main() {
http.HandleFunc("/", HelloServer)
server := &http.Server{
Addr: ":1234",
ReadHeaderTimeout: 10 * time.Second,
ReadTimeout: 5 * time.Second,
}
server.ListenAndServe()
}
func HelloServer(w http.ResponseWriter, r *http.Request) {
fmt.Fprintf(w, "Hello, %s!", r.URL.Path[1:])
}
```
Start the server. Now, test the timeouts by connecting as a client as follows, where `nc` is the netcat program. The client initiates a connection but never sends any bytes containing request headers.
```
jamesjohnston-mac:website jamesjohnston$ time nc localhost 1234
real 0m10.019s
user 0m0.005s
sys 0m0.010s
```
### What did you expect to see?
The connection should time out after 5 seconds. The documentation for ReadTimeout states the following:
```
// ReadTimeout is the maximum duration for reading the entire
// request, including the body.
```
Since headers are part of the request, we'd also expect this timeout to apply to headers as well.
### What did you see instead?
The connection timed out after 10 seconds, not 5 seconds.
NOTE: If we instead did an HTTP POST with a request body included, and then sent the full request (headers+body) in the time between 5 and 10 seconds (with a flush on network connection after sending all headers and before the body), then we would find that the headers would be successfully received by server but then the body would instantly time out when trying to read it. I struggle to see how this would be of any use in the real world, especially seeing as how ServeHTTP cannot adjust read timeouts/deadlines - i.e. lengthen them to something that isn't timing out immediately. (see https://github.com/golang/go/issues/16100 )
### Suggested action items
I do not know what behavior the maintainers of the http package actually intend. It seems to me there are two options to resolve this contradiction:
- Update documentation to reflect actual behavior. For example, update comment on ReadTimeout to look something like:
```
// ReadTimeout is the maximum duration for reading the entire
// request, including the body. As an exception, if
// ReadHeaderTimeout > ReadTimeout, then ReadHeaderTimeout will
// apply for reading the header portion of the request, but then the
// request body will immediately time out when attempting to read it if
// the ReadTimeout deadline has already elapsed.
```
- Update code to match current documented behavior. For example, update https://github.com/golang/go/blob/440f7d64048cd94cba669e16fe92137ce6b84073/src/net/http/server.go#L946 to look something like:
```
t0 := time.Now()
if d := c.server.readHeaderTimeout(); d != 0 {
hdrDeadline = t0.Add(d)
}
if d := c.server.ReadTimeout; d != 0 {
wholeReqDeadline = t0.Add(d)
}
// New: Enforce hdrDeadline <= wholeReqDeadline
// (Not shown: logic to deal with infinite ReadHeaderTimeout and/or ReadTimeout
if wholeReqDeadline.Before(hdrDeadline) {
hdrDeadline = wholeReqDeadline
}
``` | NeedsInvestigation | low | Critical |
523,736,263 | go | x/build/env/windows: add GUI shell? | Now that we have `gomote rdp` (#26090) for RDP access to our Windows buildlets, I discovered our Windows images are pretty spartan over RDP.
I guess they don't have the Windows GUI enabled?
@johnsonj, is that configurable? I assume, and Google suggests? I don't see anything in https://github.com/golang/build/blob/master/env/windows/startup.ps1 that looks like it's explicitly disabling it, at least.
| help wanted,OS-Windows,Builders,NeedsInvestigation,FeatureRequest | low | Minor |
523,748,297 | flutter | flutter does not support GBK encoding. | ## Steps to Reproduce
flutter does not support GBK encoding.
We hope GBK support is coming soon.
1. Put the GBK-encoded file in the assets dir.
the attachment file is here.
[gbk.txt](https://github.com/flutter/flutter/files/3853512/gbk.txt)
file content is:
```
# this file should be opened by encoding GBK or GB2312.
ๆต่ฏไธญๆ
ๅฆๆๆ้ฎ้ขๅฐฑๆฅ้
GBKๅGB2312็ผ็
ๅ ๅ
ฅๆ ็น็ฌฆๅท๏ผใๆต่ฏ
ไธญ่ฑๆpppๆททๆ-@abcๆต่ฏ
flutter่ฝๅฆๆฏๆGBK๏ผ
ๅพๅคๆไปถ้ฝๅจไฝฟ็จ
```
2. load file content.
codes are here:
```
_readContentByString() async {
final content = await rootBundle.loadString('assets/gbk.txt');
print(content);
}
_readContentByBytes() async {
final bytes = await rootBundle.load('assets/gbk.txt');
final array = bytes.buffer.asUint8List();
final str = String.fromCharCodes(array);
print(str);
}
```
3. Sample codes link here:
[sample project](https://gist.github.com/monkingame/271a7ce45b66c08bd930fa2922bfa4bf)
Note: gbk.txt must be in [assets] directory.
[gbk.txt](https://github.com/flutter/flutter/files/3853512/gbk.txt)
4. Run the project
If you run _readContentByString, an exception occurs:
```
Unhandled Exception: FormatException: Bad UTF-8 encoding 0xb2 (at offset 56)
```
If you run _readContentByBytes, the file content shows:
```
flutter: # this file should be opened by encoding GBK or GB2312.
ยฒรขรรรรรร
รรงยนรปรรรรรรขยพรยฑยจยดรญ
GBKยผยฐGB2312ยฑร รรซ
ยผรรรซยฑรชยตรฃยทรปยบร
ยฃยฌยกยฃยฒรขรร
รรรยขรรpppยปรฌรร-@abcยฒรขรร
flutterรรยทรฑรยงยณรGBKยฃยฟ
ยบรยถร รรยผรพยถยผรรรยนรร
```
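For reference, the attached file decodes fine with a GBK codec outside Flutter; a quick check (a sketch in Python, purely for illustration):

```python
# Illustration only (not Flutter/Dart): the attached gbk.txt decodes as GBK.
with open("gbk.txt", "rb") as f:
    raw = f.read()

print(raw.decode("gbk"))   # prints the original Chinese text
# raw.decode("utf-8")      # would raise UnicodeDecodeError, matching the Flutter error
```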
**Target Platform:**
iOS
**Target OS version/browser:**
13.2
**Devices:**
Simulator or physical iphone
## Exception Logs
```
[VERBOSE-2:ui_dart_state.cc(148)] Unhandled Exception: FormatException: Bad UTF-8 encoding 0xb2 (at offset 56)
[38;5;244m#0 _Utf8Decoder.convert (dart:convert/utf.dart:530:13)[39;49m
[38;5;244m#1 Utf8Decoder.convert (dart:convert/utf.dart:327:13)[39;49m
[38;5;244m#2 Utf8Codec.decode (dart:convert/utf.dart:59:56)[39;49m
[38;5;244m#3 AssetBundle.loadString[39;49m
<asynchronous suspension>
[38;5;244m#4 CachingAssetBundle.loadString.<anonymous closure>[39;49m
[38;5;244m#5 _LinkedHashMapMixin.putIfAbsent (dart:collection-patch/compact_hash.dart:291:23)[39;49m
[38;5;244m#6 CachingAssetBundle.loadString[39;49m
[38;5;248m#7 MyApp._readContentByString[39;49m
<asynchronous suspension>
[38;5;244m#8 _InkResponseState._handleTap[39;49m
[38;5;244m#9 _InkResponseState.build.<anonymous closure>[39;49m
dart-lang/core#257 GestureRecognizer<โฆ>
```
## flutter analyze
```
No issues found! (ran in 2.1s)
```
## flutter doctor -v
```
No issues found!
```
| c: new feature,framework,a: internationalization,dependency: dart,c: proposal,P2,team-framework,triaged-framework | low | Minor |
523,794,308 | youtube-dl | Failing to retry download or try next format on "Did not get any data blocks" error | <!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2019.11.05. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in http://yt-dl.org/escape.
- Search the bugtracker for similar issues: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Read bugs section in FAQ: http://yt-dl.org/reporting
- Finally, put x into all relevant boxes (like this [x])
-->
- [x] I'm reporting a broken site support issue
- [x] I've verified that I'm running youtube-dl version **2019.11.05**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [x] I've searched the bugtracker for similar bug reports including closed ones
- [x] I've read bugs section in FAQ
## Verbose log
<!--
Provide the complete verbose output of youtube-dl that clearly demonstrates the problem.
Add the `-v` flag to your command line you run youtube-dl with (`youtube-dl -v <your command line>`), copy the WHOLE output and insert it below. It should look similar to this:
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']
[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
[debug] youtube-dl version 2019.11.05
[debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
[debug] Proxy map: {}
<more lines>
-->
```
$ youtube-dl --verbose -f 22/best 'http://youtube.com/watch?v=Msn9L0IXPgw'
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['--verbose', '-f', '22/best', 'http://youtube.com/watch?v=Msn9L0IXPgw']
[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2019.11.05
[debug] Python version 3.7.4 (CPython) - Darwin-18.7.0-x86_64-i386-64bit
[debug] exe versions: ffmpeg 4.0.2, ffprobe 4.0.2
[debug] Proxy map: {}
[youtube] Msn9L0IXPgw: Downloading webpage
[youtube] Msn9L0IXPgw: Downloading video info webpage
[youtube] {18} signature length 104, html5 player vflFlp-mq
[youtube] Msn9L0IXPgw: Downloading player https://www.youtube.com/yts/jsbin/player_ias-vflFlp-mq/en_US/base.js
[youtube] {22} signature length 104, html5 player vflFlp-mq
[youtube] {43} signature length 104, html5 player vflFlp-mq
[youtube] {313} signature length 104, html5 player vflFlp-mq
[youtube] {271} signature length 104, html5 player vflFlp-mq
[youtube] {137} signature length 104, html5 player vflFlp-mq
[youtube] {248} signature length 104, html5 player vflFlp-mq
[youtube] {136} signature length 100, html5 player vflFlp-mq
[youtube] Msn9L0IXPgw: Downloading player https://www.youtube.com/yts/jsbin/player_ias-vflFlp-mq/en_US/base.js
[youtube] {247} signature length 104, html5 player vflFlp-mq
[youtube] {135} signature length 104, html5 player vflFlp-mq
[youtube] {244} signature length 104, html5 player vflFlp-mq
[youtube] {134} signature length 104, html5 player vflFlp-mq
[youtube] {243} signature length 104, html5 player vflFlp-mq
[youtube] {133} signature length 104, html5 player vflFlp-mq
[youtube] {242} signature length 104, html5 player vflFlp-mq
[youtube] {160} signature length 100, html5 player vflFlp-mq
[youtube] {278} signature length 100, html5 player vflFlp-mq
[youtube] {140} signature length 100, html5 player vflFlp-mq
[youtube] {249} signature length 104, html5 player vflFlp-mq
[youtube] {250} signature length 104, html5 player vflFlp-mq
[youtube] {251} signature length 104, html5 player vflFlp-mq
[debug] Invoking downloader on 'https://r5---sn-u2bpouxgoxu-5qal.googlevideo.com/videoplayback?expire=1573901343&ei=v3_PXbG2L9uN3LUPm9O44Ac&ip=124.168.216.76&id=o-AMX8d-vP-bCczOwjEQSlnuJLnzWnz6SWtK7KC449kC4f&itag=22&source=youtube&requiressl=yes&mm=31%2C29&mn=sn-u2bpouxgoxu-5qal%2Csn-ntqe6n7r&ms=au%2Crdu&mv=m&mvi=4&pl=21&initcwndbps=1237500&mime=video%2Fmp4&ratebypass=yes&dur=235.241&lmt=1569437714851264&mt=1573879663&fvip=5&fexp=23842630&beids=9466585&c=WEB&txp=2316222&sparams=expire%2Cei%2Cip%2Cid%2Citag%2Csource%2Crequiressl%2Cmime%2Cratebypass%2Cdur%2Clmt&lsparams=mm%2Cmn%2Cms%2Cmv%2Cmvi%2Cpl%2Cinitcwndbps&lsig=AHylml4wRAIgds4ItG2cSzxz3rBZ8XJbNIbSJwB4TrqTUlVJAEvzZisCIFjeNldL0kpo7BtoGtfgV1A428l-J1XxR_OlqlTYUkIF&sig=ALgxI2wwRgIhAKXV-eQbHZpt0mMcFLF3G3QUoJ4ihutHi9g9gOptjBRyAiEAr4McLw2mFeR1oU_-270t3HdxOafLf4JdJt8zp4VPkm0='
[download] Destination: Carlo Traversi... SENDING in RMNP-Msn9L0IXPgw.mp4
[download] 0.2% of 41.69MiB at 689.91KiB/s ETA 01:01[download] Got server HTTP error: Downloaded 88699 bytes, expected 43715037 bytes. Retrying (attempt 1 of 10)...
ERROR: Did not get any data blocks
File "/Users/dbr/code/venvs/ytdl/bin/youtube-dl", line 8, in <module>
sys.exit(main())
File "/Users/dbr/code/venvs/ytdl/lib/python3.7/site-packages/youtube_dl/__init__.py", line 474, in main
_real_main(argv)
File "/Users/dbr/code/venvs/ytdl/lib/python3.7/site-packages/youtube_dl/__init__.py", line 464, in _real_main
retcode = ydl.download(all_urls)
File "/Users/dbr/code/venvs/ytdl/lib/python3.7/site-packages/youtube_dl/YoutubeDL.py", line 2018, in download
url, force_generic_extractor=self.params.get('force_generic_extractor', False))
File "/Users/dbr/code/venvs/ytdl/lib/python3.7/site-packages/youtube_dl/YoutubeDL.py", line 807, in extract_info
return self.process_ie_result(ie_result, download, extra_info)
File "/Users/dbr/code/venvs/ytdl/lib/python3.7/site-packages/youtube_dl/YoutubeDL.py", line 862, in process_ie_result
return self.process_video_result(ie_result, download=download)
File "/Users/dbr/code/venvs/ytdl/lib/python3.7/site-packages/youtube_dl/YoutubeDL.py", line 1643, in process_video_result
self.process_info(new_info)
File "/Users/dbr/code/venvs/ytdl/lib/python3.7/site-packages/youtube_dl/YoutubeDL.py", line 1925, in process_info
success = dl(filename, info_dict)
File "/Users/dbr/code/venvs/ytdl/lib/python3.7/site-packages/youtube_dl/YoutubeDL.py", line 1864, in dl
return fd.download(name, info)
File "/Users/dbr/code/venvs/ytdl/lib/python3.7/site-packages/youtube_dl/downloader/common.py", line 366, in download
return self.real_download(filename, info_dict)
File "/Users/dbr/code/venvs/ytdl/lib/python3.7/site-packages/youtube_dl/downloader/http.py", line 342, in real_download
return download()
File "/Users/dbr/code/venvs/ytdl/lib/python3.7/site-packages/youtube_dl/downloader/http.py", line 312, in download
self.report_error('Did not get any data blocks')
File "/Users/dbr/code/venvs/ytdl/lib/python3.7/site-packages/youtube_dl/downloader/common.py", line 165, in report_error
self.ydl.report_error(*args, **kargs)
File "/Users/dbr/code/venvs/ytdl/lib/python3.7/site-packages/youtube_dl/YoutubeDL.py", line 624, in report_error
self.trouble(error_message, tb)
File "/Users/dbr/code/venvs/ytdl/lib/python3.7/site-packages/youtube_dl/YoutubeDL.py", line 586, in trouble
tb_data = traceback.format_list(traceback.extract_stack())
```
## Description
<!--
Provide an explanation of your issue in an arbitrary form. Please make sure the description is worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. Provide any additional information, suggested solution and as much context and examples as possible.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->
I'm using youtube-dl to download videos as 720p mp4a (`-f 22`). As per issues like #17148 and #20296 this is apparently an issue with Youtube, and as of the last few months(?) I often get downloads failing with the `Did not get any data blocks`.
So as a solution to this I tried using the format fallback mechanism and specifying `-f 22/best` (as per the [format section docs](https://github.com/ytdl-org/youtube-dl/blob/master/README.md#format-selection) and this very old bug #650), so it'll prefer the mp4a and then fall back to the default format. However, this doesn't work - youtube-dl stops after the first error occurs:
```
$ youtube-dl -f 22/best 'http://youtube.com/watch?v=Msn9L0IXPgw'
[...]
[download] 0.2% of 41.69MiB at 122.11KiB/s ETA 05:48[download] Got server HTTP error: Downloaded 88699 bytes, expected 43715037 bytes. Retrying (attempt 1 of 10)...
ERROR: Did not get any data blocks
$
```
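As a stop-gap on my side, a rough sketch of doing the fallback from Python instead (this assumes the `youtube_dl` embedding API and is not youtube-dl's built-in behavior):

```python
# Rough workaround sketch: try each format in turn, falling back when the
# download itself fails. Assumes youtube_dl is importable as a module.
import youtube_dl

def download_with_fallback(url, formats=("22", "best")):
    for fmt in formats:
        try:
            with youtube_dl.YoutubeDL({"format": fmt}) as ydl:
                ydl.download([url])
            return fmt  # this format succeeded
        except youtube_dl.utils.DownloadError:
            continue    # try the next format
    raise RuntimeError("all formats failed")

download_with_fallback("http://youtube.com/watch?v=Msn9L0IXPgw")
```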
Although it would be useful in my case, I'm not certain if it would be correct for the "no data blocks" error to try the next format (since I assume the YouTube API claims the format *does* exist); however, the output saying "Retrying (attempt 1 of 10)" but then immediately exiting definitely seems like a bug? | cant-reproduce | low | Critical |
523,795,735 | pytorch | Add support for integer matrix multiplication (particularly for dtype = torch.int8 ) | ## ๐ Feature
<!-- A clear and concise description of the feature proposal -->
Implement GPU INT8 matrix multiplication in PyTorch.
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->
In order to save time by using less GPU memory per data (hence, being able to use bigger batch sizes), I think it would be nice to be able to use int8 when representing the data, for example, for combinatorial problems, since the combinatorial space is vast.
## Pitch
<!-- A clear and concise description of what you want to happen. -->
I'd like to be able to perform a matrix multiplication on the GPU when using dtype = torch.uint8
## Alternatives
<!-- A clear and concise description of any alternative solutions or features you've considered, if any. -->
A current alternative is to use float32 or float16 dtypes
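For reference, a sketch of that fallback (assuming the int8 values are small enough that float32 represents all products and partial sums exactly):

```python
import torch

# int8 inputs; the matmul is done in float32 and the result cast back to an
# integer dtype.
a = torch.randint(-128, 127, (32, 512), dtype=torch.int8)
b = torch.randint(-128, 127, (512, 64), dtype=torch.int8)

c = torch.matmul(a.float(), b.float()).to(torch.int32)
print(c.shape, c.dtype)  # torch.Size([32, 64]) torch.int32
```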
## Additional context
<!-- Add any other context or screenshots about the feature request here. -->
As far as I'm aware, INT8 GPU matrix multiplication is already supported for CUDA and cuBLAS, but I'm not sure if it is competitive with using half precision.
cc @ngimel | feature,module: cuda,triaged | low | Major |
523,800,578 | go | x/crypto/blake2b: provide reset that allows specifying a new key | blake2b digests are pretty large. They account for 60% of allocated space in my benchmark, so they're a good candidate for re-use, except that (at least in my case, PASETO), I need to use a new key every time. It'd be nice if there was a reset function that let me specify a new key.
The API options for this aren't particularly nice, since the blake2b functions all return `hash.Hash` instead of a concrete type. The two options I see are: (1) provide a package-level `ResetKey` function that accepts a `hash.Hash` and panics if given the wrong kind of hash, and (2) provide a package-level `ResetKey` interface and promise that blake2b `hash.Hash`es can be type-asserted to that interface. (Or some other name. The reset should perhaps also accept a new size.)
If there's API consensus, I can try my hand at implementing.
cc the three people who have touched that file: @aead @rasa @ValarDragon
| NeedsInvestigation | low | Minor |
523,820,998 | flutter | Custom Overlay for Tooltip Widget | Internal: b/148141781
## Use case
I want to provide a "bubble" at the bottom of the **Tooltip**, and also an "X" icon to close the tooltip. Currently, the Tooltip class doesn't provide any property to customize the overlay. It shows a simple Text widget inside a ConstrainedBox. It also lacks a way of showing/hiding the tooltip programmatically.
I'm achieving my use case by basically **copying the whole Tooltip class** and changing the `_TooltipOverlay` [build](https://github.com/flutter/flutter/blob/master/packages/flutter/lib/src/material/tooltip.dart#L493) method.
The Tooltip decoration property is only useful for changing the shape and background color. There is also no way to show/hide the tooltip programmatically.
## Proposal
How about providing an **overlay** parameter in Tooltip?
If it is not provided, `_TooltipOverlay` can be used as a default overlay.
I would be happy to give a PR with tests. I don't want to copy the whole **Tooltip** class from the framework to just modify a single method!
Also, exposing show/hide for the tooltip would make the widget much more useful!
| c: new feature,framework,f: material design,customer: mulligan (g3),c: proposal,P3,team-design,triaged-design | low | Minor |
523,826,095 | godot | [Mono] Runtime loaded assemblies are not getting recognized by the engine | **Godot version:**
3.1.1
**OS/device including version:**
Windows 7 x64
**Issue description:**
If you load an external node that has some sort of script attached to it, even if you also load the assembly containing the implementation of that class, Godot still cannot find that class by any means. It only shows a message similar to this: `Cannot instance script because the class '%ClassName%' could not be found. Script: res://%ClassName%.cs`
**Minimal reproduction project:**
[Core.zip](https://github.com/godotengine/godot/files/3854240/Core.zip) | bug,topic:dotnet | low | Minor |
523,831,679 | neovim | API: named extmarks enhancements (tracking issue) | followup of https://github.com/neovim/neovim/pull/11356
potential enhancements (discuss / RFC):
- list all marks, not only per-buffer
- jump to a mark from anywhere
- set a "named mark" with an arbitrary name, via `:mark {name}`
- jump to a "named mark" , perhaps by `:jump {name}` | enhancement,marks | low | Minor |
523,848,022 | opencv | Add support for super-fisheye lenses | Initial work has been done in https://github.com/opencv/opencv/pull/6801. Find all discussions and opened issues in the PR. | feature,category: calib3d | low | Minor |
523,849,041 | TypeScript | Compiler error message improvement: Special casing container objects | ## Search Terms
array promise container error message compare
## Suggestion
Could it be worth having an extra special case message when the only difference is that you haven't included an array indicator instead of listing the property differences e.g.
3.7:
```
src/index.ts:356:5 - error TS2322: Type '{ renderedMessage: string; id: string; category: 0 | 1 | 2 | 3; code: number; start: number | undefined; }[][]' is not assignable to type '{ renderedMessage: string; id: string; category: 0 | 1 | 2 | 3; code: number; start: number | undefined; }[]'.
Type '{ renderedMessage: string; id: string; category: 0 | 1 | 2 | 3; code: number; start: number | undefined; }[]' is missing the following properties from type '{ renderedMessage: string; id: string; category: 0 | 1 | 2 | 3; code: number; start: number | undefined; }': renderedMessage, id, category, code, start
356 errors,
~~~~~~
src/index.ts:235:3
235 errors: {
~~~~~~
The expected type comes from property 'errors' which is declared here on type 'TwoSlashReturn'
```
To:
```
src/index.ts:356:5 - error TS2322: Type '{ renderedMessage: string; id: string; category: 0 | 1 | 2 | 3; code: number; start: number | undefined; length: number | undefined; }[][]' is not assignable to type '{ renderedMessage: string; id: string; category: 0 | 1 | 2 | 3; code: number; start: number | undefined; length: number | undefined; }[]'.
Type '{ renderedMessage: string; id: string; category: 0 | 1 | 2 | 3; code: number; start: number | undefined; length: number | undefined; }[]' is in an array here:
356 errors,
~~~~~~
src/index.ts:235:3
235 errors: {
~~~~~~
But not here
```
and the reverse?
<!-- A summary of what you'd like to see added or changed -->
## Use Cases
Any case of mismatched types where the difference is that the object on one side of the comparison is the same object but wrapped in an array. Perhaps this could also work for a [Promise too](https://www.typescriptlang.org/play/index.html#code/C4TwDgpgBAglC8UDeBYAUFKALCAbXA9gFxQDOwATgJYB2A5upgO4EW4AmJNArgLYBGECugC+6dOwgBjXAEMK0XBGBRZRGOLSyEUAAoUCvKqQgA6BaQK4AbhAAU22aVgBKTdsQBtWQF0gA)?
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Experience Enhancement | low | Critical |
523,851,078 | react | Feature Request: Soft Component | ```
import React, {useState} from 'react';
import ReactDOM from 'react-dom';
function PageLayout({title, children}) {
return <div>
<h1>{title}</h1>
<input type="text"/>
{children}
</div>;
}
function Page2({setPage}) {
return <PageLayout title="Page2">
<button onClick={() => {setPage(() => Page1);}}>Test</button>
</PageLayout>
}
function Page1({setPage}) {
return <PageLayout title="Page1">
<button onClick={() => {setPage(() => Page2);}}>Test</button>
</PageLayout>
}
function App() {
let [Page, setPage] = useState(() => Page1);
return <Page setPage={setPage}/>;
}
ReactDOM.render(<App />, document.getElementById('app'));
```
https://codesandbox.io/embed/serene-browser-tehj4?fontsize=14
The above code is the most intuitive pattern for building a multi-page web app. (Don't mind the setPage; just focus on the fact that the Page component returns a PageLayout instance.)
But React's diff algorithm is not optimized for that pattern. If you click the "Test" button, the text you typed into the input will be lost.
So I propose a "Soft Component" concept. Two soft components would be treated as the same component by the diff algorithm. In the example, we change Page1 and Page2 to soft components, thus solving the problem shown above. | Type: Discussion | low | Major |
523,870,834 | youtube-dl | Please add support for flimmit.com |
```
youtube-dl -u ******** -p ******** -v https://www.flimmit.com/video/stream/play/product_id/19151
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: [u'-u', u'PRIVATE', u'-p', u'PRIVATE', u'-v', u'https://www.flimmit.com/video/stream/play/product_id/19151']
[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2019.11.05
[debug] Python version 2.7.17rc1 (CPython) - Linux-5.3.0-23-generic-x86_64-with-Ubuntu-19.10-eoan
[debug] exe versions: ffmpeg 4.1.4-1build2, ffprobe 4.1.4-1build2
[debug] Proxy map: {}
[generic] 19151: Requesting header
[redirect] Following redirect to https://www.flimmit.com/catalog/product/view/id/19151/
[generic] 19151: Requesting header
WARNING: Falling back on generic information extractor.
[generic] 19151: Downloading webpage
[generic] 19151: Extracting information
ERROR: Unsupported URL: https://www.flimmit.com/catalog/product/view/id/19151/
Traceback (most recent call last):
File "/home/user/.local/lib/python2.7/site-packages/youtube_dl/extractor/generic.py", line 2373, in _real_extract
doc = compat_etree_fromstring(webpage.encode('utf-8'))
File "/home/user/.local/lib/python2.7/site-packages/youtube_dl/compat.py", line 2551, in compat_etree_fromstring
doc = _XML(text, parser=etree.XMLParser(target=_TreeBuilder(element_factory=_element_factory)))
File "/home/user/.local/lib/python2.7/site-packages/youtube_dl/compat.py", line 2540, in _XML
parser.feed(text)
File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1659, in feed
self._raiseerror(v)
File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1523, in _raiseerror
raise err
ParseError: not well-formed (invalid token): line 9, column 322
Traceback (most recent call last):
File "/home/user/.local/lib/python2.7/site-packages/youtube_dl/YoutubeDL.py", line 796, in extract_info
ie_result = ie.extract(url)
File "/home/user/.local/lib/python2.7/site-packages/youtube_dl/extractor/common.py", line 530, in extract
ie_result = self._real_extract(url)
File "/home/user/.local/lib/python2.7/site-packages/youtube_dl/extractor/generic.py", line 3354, in _real_extract
raise UnsupportedError(url)
UnsupportedError: Unsupported URL: https://www.flimmit.com/catalog/product/view/id/19151/
``` | account-needed | low | Critical |
523,871,713 | godot | Cannot build module with a reference to an int | **Godot version:** 3.1
**OS/device version:** Windows 10 64bits
**Issue description:** I need to use a reference to an int for my module, but it apparently doesn't work.
I got this when I typed `scons platform=windows bits=64 vsproj=yes`
```
C:\Users\FlutterTal\Desktop\godot-3.1\core/method_bind_ext.gen.inc(2066): error C2039: 'convert': is not a member of 'PtrToArg<P3>'
with
[
    P3=int &
]
C:\Users\FlutterTal\Desktop\godot-3.1\core/method_bind_ext.gen.inc(2066): note: see declaration of 'PtrToArg<P3>'
with
[
    P3=int &
]
scons: *** [modules\ggpo\ggpo.windows.tools.64.obj] Error 2
scons: building terminated because of errors.
```
And I got a bunch of them, but it's always the same error. | discussion,topic:core,documentation | low | Critical |
523,879,757 | godot | Crash when starting thread | **Godot version:**
Mono 98caeb635
**OS/device including version:**
Windows 10 1903
**Issue description:**
The game crashes when trying to start a thread. I also tried `GodotSharp.attach_thread()` since I use the Mono version:
```GDScript
extends Node2D
onready var thread = Thread.new()
func _ready() -> void:
thread.start(self, "test")
thread.wait_to_finish()
func test():
print("Thread started")
func test2():
GodotSharp.attach_thread()
print("Thread started")
GodotSharp.detach_thread()
```
EDIT: Turns out that you HAVE to pass an argument, even if you don't use it. Since this is still a bug, I'm not sure if I should close. There are probably duplicates out there. Here is the code that worked for me:
```GDScript
extends Node2D
onready var thread = Thread.new()
func _ready() -> void:
thread.start(self, "test", [])
thread.wait_to_finish()
func test(p_dummy_param = ["NEVER USED!"]):
print("Thread started")
func test2():
GodotSharp.attach_thread()
print("Thread started")
GodotSharp.detach_thread()
```
Reproduction project: [ThreadError.zip](https://github.com/godotengine/godot/files/5826542/ThreadError.zip)
| bug,topic:gdscript | low | Critical |
523,887,754 | svelte | Master-Detail example | I'd like to contribute an example that displays a Svelte implementation of a Master->Detail pattern that also includes an example of delegation (child delegate messages parent with data). My approach is a bit different than the CRUD example.
Master-Detail
https://svelte.dev/repl/1fe5803ad7914054905f43910607eda1?version=3.14.1 | temp-stale,documentation | low | Minor |
523,888,379 | pytorch | Indexing into tensor order of magnitude slower than numpy | ## ๐ Bug
Indexing into a pytorch tensor is an order of magnitude slower than numpy.
## To Reproduce
Steps to reproduce the behavior:
```python
import torch
import numpy as np
BATCH_SIZE = 32
SEQUENCE_LENGTH = 512
TORCH_MATRIX = torch.full(
size = (BATCH_SIZE, SEQUENCE_LENGTH),
fill_value = 0,
dtype = int,
)
NUMPY_MATRIX = np.full(
shape = (BATCH_SIZE, SEQUENCE_LENGTH),
fill_value = 0,
dtype = int,
)
def index_over_matrix(matrix):
for row_index in range(BATCH_SIZE):
for column_index in range(SEQUENCE_LENGTH):
matrix[row_index][column_index]
```
```
%%timeit -n 30
index_over_matrix(NUMPY_MATRIX)
```
>>> 5.9 ms ยฑ 917 ยตs per loop (mean ยฑ std. dev. of 7 runs, 30 loops each)
```
%%timeit -n 30
index_over_matrix(TORCH_MATRIX)
```
>>> 50.3 ms ยฑ 2.5 ms per loop (mean ยฑ std. dev. of 7 runs, 30 loops each)
## Line profiler
```
Timer unit: 1e-06 s
Total time: 0.143154 s
File: <ipython-input-16-108e7beab457>
Function: index_over_matrix at line 21
Line # Hits Time Per Hit % Time Line Contents
==============================================================
21 def index_over_matrix(matrix):
22 33 48.0 1.5 0.0 for row_index in range(BATCH_SIZE):
23 16416 14083.0 0.9 9.8 for column_index in range(SEQUENCE_LENGTH):
24 16384 129023.0 7.9 90.1 matrix[row_index][column_index]
```
## Expected behavior
I expect indexing to be quite quick.
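For comparison, a variant that indexes with a single tuple does one indexing call per element instead of two (each access still returns a 0-d tensor, so this is not a fix, just a narrower measurement):

```python
def index_over_matrix_tuple(matrix):
    # matrix[row][col] first materializes the row view, then indexes it;
    # matrix[row, col] performs a single indexing operation.
    for row_index in range(BATCH_SIZE):
        for column_index in range(SEQUENCE_LENGTH):
            matrix[row_index, column_index]
```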
## Environment
```
PyTorch version: 1.3.0
Is debug build: No
CUDA used to build PyTorch: 10.1.243
OS: Ubuntu 18.04.3 LTS
GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
CMake version: version 3.10.2
Python version: 3.6
Is CUDA available: No
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Versions of relevant libraries:
[pip3] bert-pytorch==0.0.1a4
[pip3] msgpack-numpy==0.4.4.3
[pip3] numpy==1.17.4
[pip3] pytorch-lamb==1.0.0
[pip3] torch==1.3.0
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.4.1
[pip3] torchviz==0.0.1
[conda] Could not collect
```
cc @ezyang @gchanan @zou3519 @bdhirsh @jbschlosser @albanD @gqchen @pearu @nikitaved @soulitzer @VitalyFedyunin @ngimel @SsnL @jerryzh168 @mruberry | high priority,module: performance,module: autograd,triaged,module: advanced indexing | medium | Critical |
523,898,338 | godot | [WebGL1] game takes a long time to load on Mobile Chrome | **Godot version:**
3.2 alpha 3
**OS/device including version:**
Android 8.1.0
Mobile Chrome Browsers
**Issue description:**
The HTML game takes a long time to load in mobile browsers: on desktop the game loads in 5-15 seconds, while the mobile version takes 35-50 seconds.
On Firefox the loading time is normal, but on Chrome it takes 40-50 seconds.
**Minimal reproduction project:**
https://gustavo-marciano.itch.io/cordeis | discussion,topic:core | low | Minor |
523,902,510 | youtube-dl | Danish silent movies |
## Checklist
- [x] I'm reporting a new site support request
- [x] I've verified that I'm running youtube-dl version **2019.11.05**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that none of provided URLs violate any copyrights
- [x] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
A couple of examples
- Single video: https://www.stumfilm.dk/stumfilm/streaming/film/korsel-med-gronlandske-hunde
- Single video: https://www.stumfilm.dk/stumfilm/streaming/film/henrettelsen
*It appears that the site is using the vimeo player.*
## Description
This is a page that contains a large number of Danish silent movies - some quite old. | site-support-request | low | Critical |
523,904,755 | youtube-dl | Support for Nosey.com |
I was wondering if the team could add support for nosey.com. Here is an example link: https://www.nosey.com/watch/channel/nosey_general/14-c3al0cgz1ub4-stunning-secrets-revealed | site-support-request | low | Minor |
523,911,189 | rust | gdb "cannot subscript non-array type" to index a Vec | Attempt to print content of a `Vec` at index results in error. Since it works fine for C++, I think the problem is in `rustc` not providing the necessary debug information for this to work.
`Vec`s are a widely used type, and not being able to print their content at index hurts debugging process noticeably.
## Steps to reproduce *(in terms of terminal commands)*
```
$ cat -n test2.rs
1 fn main() {
2 let x: Vec<usize> = vec![1,2,3];
3 println!("{:?}", x);
4 }
$ rustc test2.rs -o a -g
$ gdb ./a
Reading symbols from ./a...
warning: Missing auto-load script at offset 0 in section .debug_gdb_scripts
of file /tmp/a.
Use `info auto-load python-scripts [REGEXP]' to list them.
gdb λ br 3
Breakpoint 1 at 0x5c05: file test2.rs, line 3.
gdb λ r
Starting program: /tmp/a
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/usr/lib/libthread_db.so.1".
Breakpoint 1, test2::main () at test2.rs:3
3 println!("{:?}", x);
gdb λ p x[0]
```
### Expected
A print:
1
### Actual
It prints:
Cannot subscript non-array type
## Additional information
rustc version: `rustc 1.41.0-nightly (1bd30ce2a 2019-11-15)`
gdb version: `8.3.1`
| A-debuginfo,P-medium,T-compiler,C-bug | medium | Critical |
523,923,575 | node | Passing fd between threads causes VM to crash, or EBADF |
* **Version**: 12.13.0
* **Platform**: Linux x 5.0.0-36-generic #39-Ubuntu SMP Tue Nov 12 09:46:06 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
* **Subsystem**: net, worker_threads
## From forks to threads
I was working on moving from forks to Worker Threads and was experimenting with passing file descriptors between threads. Forks work great with passing file descriptors around due to the extended IPC, making it possible to share open ports.
## Sharing file descriptors
Using Worker Threads, this becomes impossible. For that reason, I had to create a master thread handling connection events and pass those connections to worker threads through `postMessage`. Since the whole object cannot be passed through a message, I thought the lightest and fastest way to do this would be to post the fd as a message and have the worker thread create a new `Socket` using the fd.
It works great until it does not. It is really unpredictable, but always seems to crash at some point.
## Repro steps
Here is the smallest portion of code I could come up with to reproduce the issue. Those are two files : **master.js** and **worker.js**.
#### Full test **[HERE](https://github.com/rykdesjardins/nodejs_fd_passing_testing)**.
```js
// master.js
const { Worker } = require('worker_threads');
const net = require('net');
const workers = new Worker("./worker.js");
const server = net.createServer(conn => {
conn.unref();
workers.postMessage({ duplex_fd : conn._handle.fd });
});
server.listen(12345);
```
```js
// worker.js
const { Socket } = require('net');
require('worker_threads').parentPort.on('message', (msg) => {
const sock = new Socket({ fd : msg.duplex_fd, readable : true, writable : true, allowHalfOpen : true });
sock.end("Hello, World", () => {
sock.destroy();
});
})
```
After _an unpredictable while_, I get either one of those two errors :
```
node: ../deps/uv/src/unix/core.c:930: uv__io_stop: Assertion `loop->watchers[w->fd] == w' failed.
Aborted (core dumped)
```
or
```
events.js:187
throw er; // Unhandled 'error' event
^
Error: read EBADF
at TCP.onStreamRead (internal/stream_base_commons.js:201:27)
Emitted 'error' event on Socket instance at:
at emitErrorNT (internal/streams/destroy.js:92:8)
at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
at processTicksAndRejections (internal/process/task_queues.js:80:21) {
errno: 'EBADF',
code: 'EBADF',
syscall: 'read'
}
```
This is something I used to do in C++: have a master thread handle incoming connections and pass the `fd` integer to whatever thread is available. Maybe I'm misunderstanding how Node.js handles file descriptors in the background?
The full example I wrote had a worker pool and sometimes was able to handle over 5000 requests before crashing. The crashes are random.
If it can help, here is the `stress.js` file I used to conduct the tests.
```js
// stress.js
const net = require('net');
const cluster = require('cluster');
if (cluster.isMaster) {
let reqSent = 0;
for (let i = 0; i < 10; i++) cluster.fork().on('message', m => m == "+" && console.log(reqSent++));
} else {
const sendReq = () => {
const sock = net.connect(12345, 'localhost', () => {
sock.write("Hello?", () => {
sock.end();
sock.destroy();
process.send("+");
setImmediate(() => sendReq());
});
});
};
sendReq();
}
```
Let me know if you need more info, or if I simply misunderstand how to use this feature.
### Notes
This also happens with file streams, and sockets on top of HTTP. | worker | low | Critical |
523,923,675 | rust | Consider using DWARF modules instead of namespaces. | Looking at the DWARF5 spec I noticed it supports "modules", with Modula-2 and Fortran given as examples, not just "namespaces" (typically associated with C++).
However, it's unclear if we can do everything we use namespaces for, with modules, and whether debuggers support modules at all.
Also see #33123, which seems potentially related.
cc @michaelwoerister | A-debuginfo,C-enhancement,P-medium,T-compiler,WG-debugging | low | Critical |
523,934,823 | vscode | Support external diff algorithms in internal diff editor | Reopening closed feature request #30694
Openness, extensibility and customisability are the features that keep me coming back to VSCode. Being built upon an OSS core (Electron) was strong evidence of a change of culture at Microsoft.
In that spirit, would you please reconsider the original request from @myfairsyer. Different diffing algorithms produce [significantly different results][1] and are suited to different purposes. There is no one-size-fits-all solution.
Paraphrasing @myfairsyer, **please consider abstracting the diff functionality to allow different *hunk-list providers***. Respecting [the algorithm set in Git configuration][2] would be an ideal baseline.
As another example, **BeyondCompare** allows [significant customisation][3], including 2 of the 4 algorithms offered by Git.
[1]: https://link.springer.com/article/10.1007/s10664-019-09772-z
[2]: https://git-scm.com/docs/git-diff#Documentation/git-diff.txt---diff-algorithmpatienceminimalhistogrammyers
[3]: https://www.scootersoftware.com/v4help/index.html?sessiontextalignment.html | feature-request,diff-editor | high | Critical |
523,941,434 | vscode | Allow detail option on task input pickString | Similar to what was recently implemented for tasks, it would be nice to be able to add a detail string to the input options of a task so that it can be explained what picking that option means for the task in cases where it may not be clear. | feature-request,tasks,variable-resolving | medium | Major |
523,943,979 | flutter | initializing DevFS: JSON-RPC error needs to surface cause and potential solution | Hi i am constantly having error : **Error initializing DevFS: JSON-RPC error -32603 (internal error): Internal error** after **Built build\app\outputs\apk\debug\app-debug.apk.** step of flutter run.
Here is output of my flutter run -v:
```
> [ +2 ms] > Task :app:assembleDebug
> [ +1 ms] 32 actionable tasks: 30 executed, 2 up-to-date
> [ +676 ms] Running Gradle task 'assembleDebug'... (completed in 47.8s)
> [+1287 ms] calculateSha: LocalDirectory: 'E:\Mobility\gdg\dyn_link_gdg\build\app\outputs\apk'/app.apk
> [ +146 ms] calculateSha: reading file took 138us
> [ +931 ms] calculateSha: computing sha took 929us
> [ +9 ms] Built build\app\outputs\apk\debug\app-debug.apk.
> [ +7 ms] executing: C:\Users\LENOVO\AppData\Local\Android\sdk\build-tools\28.0.3\aapt dump xmltree
> E:\Mobility\gdg\dyn_link_gdg\build\app\outputs\apk\app.apk AndroidManifest.xml
> [ +49 ms] Exit code 0 from: C:\Users\LENOVO\AppData\Local\Android\sdk\build-tools\28.0.3\aapt dump xmltree
> E:\Mobility\gdg\dyn_link_gdg\build\app\outputs\apk\app.apk AndroidManifest.xml
> [ +3 ms] N: android=http://schemas.android.com/apk/res/android
> E: manifest (line=2)
> A: android:versionCode(0x0101021b)=(type 0x10)0x1
> A: android:versionName(0x0101021c)="1.0.0" (Raw: "1.0.0")
> A: android:compileSdkVersion(0x01010572)=(type 0x10)0x1c
> A: android:compileSdkVersionCodename(0x01010573)="9" (Raw: "9")
> A: package="com.example.dyn_link_gdg" (Raw: "com.example.dyn_link_gdg")
> A: platformBuildVersionCode=(type 0x10)0x1
> A: platformBuildVersionName="1.0.0" (Raw: "1.0.0")
> E: uses-sdk (line=7)
> A: android:minSdkVersion(0x0101020c)=(type 0x10)0x10
> A: android:targetSdkVersion(0x01010270)=(type 0x10)0x1c
> E: uses-permission (line=14)
> A: android:name(0x01010003)="android.permission.INTERNET" (Raw: "android.permission.INTERNET")
> E: uses-permission (line=15)
> A: android:name(0x01010003)="android.permission.ACCESS_NETWORK_STATE" (Raw: "android.permission.ACCESS_NETWORK_STATE")
> E: uses-permission (line=16)
> A: android:name(0x01010003)="android.permission.WAKE_LOCK" (Raw: "android.permission.WAKE_LOCK")
> E: uses-permission (line=17)
> A: android:name(0x01010003)="com.google.android.c2dm.permission.RECEIVE" (Raw:
> "com.google.android.c2dm.permission.RECEIVE")
> E: permission (line=19)
> A: android:name(0x01010003)="com.example.dyn_link_gdg.permission.C2D_MESSAGE" (Raw:
> "com.example.dyn_link_gdg.permission.C2D_MESSAGE")
> A: android:protectionLevel(0x01010009)=(type 0x11)0x2
> E: uses-permission (line=23)
> A: android:name(0x01010003)="com.example.dyn_link_gdg.permission.C2D_MESSAGE" (Raw:
> "com.example.dyn_link_gdg.permission.C2D_MESSAGE")
> E: application (line=31)
> A: android:label(0x01010001)="dyn_link_gdg" (Raw: "dyn_link_gdg")
> A: android:icon(0x01010002)=@0x7f030000
> A: android:name(0x01010003)="io.flutter.app.FlutterApplication" (Raw: "io.flutter.app.FlutterApplication")
> A: android:debuggable(0x0101000f)=(type 0x12)0xffffffff
> E: activity (line=36)
> A: android:theme(0x01010000)=@0x7f050000
> A: android:name(0x01010003)="com.example.dyn_link_gdg.MainActivity" (Raw: "com.example.dyn_link_gdg.MainActivity")
> A: android:launchMode(0x0101001d)=(type 0x10)0x1
> A: android:configChanges(0x0101001f)=(type 0x11)0x400037b4
> A: android:windowSoftInputMode(0x0101022b)=(type 0x11)0x10
> A: android:hardwareAccelerated(0x010102d3)=(type 0x12)0xffffffff
> E: meta-data (line=50)
> A: android:name(0x01010003)="io.flutter.app.android.SplashScreenUntilFirstFrame" (Raw:
> "io.flutter.app.android.SplashScreenUntilFirstFrame")
> A: android:value(0x01010024)=(type 0x12)0xffffffff
> E: intent-filter (line=54)
> E: action (line=55)
> A: android:name(0x01010003)="android.intent.action.MAIN" (Raw: "android.intent.action.MAIN")
> E: category (line=57)
> A: android:name(0x01010003)="android.intent.category.LAUNCHER" (Raw: "android.intent.category.LAUNCHER")
> E: receiver (line=61)
> A: android:name(0x01010003)="com.google.android.gms.measurement.AppMeasurementReceiver" (Raw:
> "com.google.android.gms.measurement.AppMeasurementReceiver")
> A: android:enabled(0x0101000e)=(type 0x12)0xffffffff
> A: android:exported(0x01010010)=(type 0x12)0x0
> E: receiver (line=66)
> A: android:name(0x01010003)="com.google.android.gms.measurement.AppMeasurementInstallReferrerReceiver" (Raw:
> "com.google.android.gms.measurement.AppMeasurementInstallReferrerReceiver")
> A: android:permission(0x01010006)="android.permission.INSTALL_PACKAGES" (Raw: "android.permission.INSTALL_PACKAGES")
> A: android:enabled(0x0101000e)=(type 0x12)0xffffffff
> A: android:exported(0x01010010)=(type 0x12)0xffffffff
> E: intent-filter (line=71)
> E: action (line=72)
> A: android:name(0x01010003)="com.android.vending.INSTALL_REFERRER" (Raw: "com.android.vending.INSTALL_REFERRER")
> E: service (line=76)
> A: android:name(0x01010003)="com.google.android.gms.measurement.AppMeasurementService" (Raw:
> "com.google.android.gms.measurement.AppMeasurementService")
> A: android:enabled(0x0101000e)=(type 0x12)0xffffffff
> A: android:exported(0x01010010)=(type 0x12)0x0
> E: service (line=80)
> A: android:name(0x01010003)="com.google.android.gms.measurement.AppMeasurementJobService" (Raw:
> "com.google.android.gms.measurement.AppMeasurementJobService")
> A: android:permission(0x01010006)="android.permission.BIND_JOB_SERVICE" (Raw: "android.permission.BIND_JOB_SERVICE")
> A: android:enabled(0x0101000e)=(type 0x12)0xffffffff
> A: android:exported(0x01010010)=(type 0x12)0x0
> E: receiver (line=86)
> A: android:name(0x01010003)="com.google.firebase.iid.FirebaseInstanceIdReceiver" (Raw:
> "com.google.firebase.iid.FirebaseInstanceIdReceiver")
> A: android:permission(0x01010006)="com.google.android.c2dm.permission.SEND" (Raw:
> "com.google.android.c2dm.permission.SEND")
> A: android:exported(0x01010010)=(type 0x12)0xffffffff
> E: intent-filter (line=90)
> E: action (line=91)
> A: android:name(0x01010003)="com.google.android.c2dm.intent.RECEIVE" (Raw:
> "com.google.android.c2dm.intent.RECEIVE")
> E: category (line=93)
> A: android:name(0x01010003)="com.example.dyn_link_gdg" (Raw: "com.example.dyn_link_gdg")
> E: receiver (line=100)
> A: android:name(0x01010003)="com.google.firebase.iid.FirebaseInstanceIdInternalReceiver" (Raw:
> "com.google.firebase.iid.FirebaseInstanceIdInternalReceiver")
> A: android:exported(0x01010010)=(type 0x12)0x0
> E: service (line=107)
> A: android:name(0x01010003)="com.google.firebase.iid.FirebaseInstanceIdService" (Raw:
> "com.google.firebase.iid.FirebaseInstanceIdService")
> A: android:exported(0x01010010)=(type 0x12)0xffffffff
> E: intent-filter (line=110)
> A: android:priority(0x0101001c)=(type 0x10)0xfffffe0c
> E: action (line=111)
> A: android:name(0x01010003)="com.google.firebase.INSTANCE_ID_EVENT" (Raw: "com.google.firebase.INSTANCE_ID_EVENT")
> E: provider (line=115)
> A: android:name(0x01010003)="com.google.firebase.provider.FirebaseInitProvider" (Raw:
> "com.google.firebase.provider.FirebaseInitProvider")
> A: android:exported(0x01010010)=(type 0x12)0x0
> A: android:authorities(0x01010018)="com.example.dyn_link_gdg.firebaseinitprovider" (Raw:
> "com.example.dyn_link_gdg.firebaseinitprovider")
> A: android:initOrder(0x0101001a)=(type 0x10)0x64
> E: meta-data (line=121)
> A: android:name(0x01010003)="com.google.android.gms.version" (Raw: "com.google.android.gms.version")
> A: android:value(0x01010024)=@0x7f020000
> [ +204 ms] Stopping app 'app.apk' on SM A205F.
> [ +8 ms] executing: C:\Users\LENOVO\AppData\Local\Android\sdk\platform-tools\adb.exe -s R58M680628W shell am force-stop
> com.example.dyn_link_gdg
> [ +233 ms] executing: C:\Users\LENOVO\AppData\Local\Android\sdk\platform-tools\adb.exe -s R58M680628W shell pm list packages
> com.example.dyn_link_gdg
> [ +257 ms] package:com.example.dyn_link_gdg
> [ +12 ms] executing: C:\Users\LENOVO\AppData\Local\Android\sdk\platform-tools\adb.exe -s R58M680628W shell cat
> /data/local/tmp/sky.com.example.dyn_link_gdg.sha1
> [ +187 ms] a5ce69b28dbfcc03487f8cbd61e2547ba82e317a
> [ +5 ms] Installing APK.
> [ +24 ms] executing: C:\Users\LENOVO\AppData\Local\Android\sdk\platform-tools\adb.exe version
> [ +96 ms] Android Debug Bridge version 1.0.39
> Version 0.0.1-4500957
> Installed as C:\Users\LENOVO\AppData\Local\Android\sdk\platform-tools\adb.exe
> [ +26 ms] executing: C:\Users\LENOVO\AppData\Local\Android\sdk\platform-tools\adb.exe start-server
> [ +92 ms] Installing build\app\outputs\apk\app.apk...
> [ +5 ms] executing: C:\Users\LENOVO\AppData\Local\Android\sdk\platform-tools\adb.exe -s R58M680628W install -t -r
> E:\Mobility\gdg\dyn_link_gdg\build\app\outputs\apk\app.apk
> [+14070 ms] Success
> [ +4 ms] Installing build\app\outputs\apk\app.apk... (completed in 14.1s)
> [ +29 ms] executing: C:\Users\LENOVO\AppData\Local\Android\sdk\platform-tools\adb.exe -s R58M680628W shell echo -n
> 61a953a40313c14e38b86e90b32f2a9d2a26b265 > /data/local/tmp/sky.com.example.dyn_link_gdg.sha1
> [ +190 ms] SM A205F startApp
> [ +3 ms] executing: C:\Users\LENOVO\AppData\Local\Android\sdk\platform-tools\adb.exe -s R58M680628W shell am start -a
> android.intent.action.RUN -f 0x20000000 --ez enable-background-compilation true --ez enable-dart-profiling true --ez enable-checked-mode
> true --ez verify-entry-points true com.example.dyn_link_gdg/com.example.dyn_link_gdg.MainActivity
> [ +529 ms] Starting: Intent { act=android.intent.action.RUN flg=0x20000000 cmp=com.example.dyn_link_gdg/.MainActivity (has extras) }
> [ +4 ms] Waiting for observatory port to be available...
> [+3590 ms] Observatory URL on device: http://127.0.0.1:45912/Xz_Ai1I0gkM=/
> [ +8 ms] executing: C:\Users\LENOVO\AppData\Local\Android\sdk\platform-tools\adb.exe -s R58M680628W forward tcp:0 tcp:45912
> [ +125 ms] 61323
> [ +4 ms] Forwarded host port 61323 to device port 45912 for Observatory
> [ +37 ms] Connecting to service protocol: http://127.0.0.1:61323/Xz_Ai1I0gkM=/
> [ +666 ms] Successfully connected to service protocol: http://127.0.0.1:61323/Xz_Ai1I0gkM=/
> [ +43 ms] Sending to VM service: getVM({})
> [ +27 ms] Result: {type: VM, name: vm, architectureBits: 64, hostCPU: Unknown, operatingSystem: android, targetCPU: arm64, version: 2.5.0(Fri Sep 6 20:10:36 2019 +0200) on "android_arm64", _profilerMode: VM, _nativeZoneMemoryUsage: 0, pid: 21324, startTime: 157396...
> [ +24 ms] Sending to VM service: getIsolate({isolateId: isolates/3528477923465135})
> [ +36 ms] Sending to VM service: _flutter.listViews({})
> [ +17 ms] Result: {type: Isolate, id: isolates/3528477923465135, name: main, number: 3528477923465135, _originNumber: 3528477923465135,
> startTime: 1573969840947, _heaps: {new: {type: HeapSpace, name: new, vmName: Scavenger, collections: 0, avgCollectionPeriodMillis...
> [ +30 ms] Result: {type: FlutterViewList, views: [{type: FlutterView, id: _flutterView/0x78e9e72320, isolate: {type: @Isolate, fixedId:
> true, id: isolates/3528477923465135, name: main.dart$main-3528477923465135, number: 3528477923465135}}]}
> [ +35 ms] DevFS: Creating new filesystem on the device (null)
> [ +17 ms] Sending to VM service: _createDevFS({fsName: dyn_link_gdg})
> [ +78 ms] Error -32603 received from application: Internal error
> [ +8 ms] {details: _createDevFS: Unexpected exception:FileSystemException: Creation of temporary directory failed, path =
> '/data/user/0/com.example.dyn_link_gdg/code_cache' (OS Error: Permission denied, errno = 13)
> #0 _Directory.createTemp.<anonymous closure> (dart:io/directory_impl.dart:162:9)
> #1 _RootZone.runUnary (dart:async/zone.dart:1379:54)
> #2 _FutureListener.handleValue (dart:async/future_impl.dart:137:18)
> #3 Future._propagateToListeners.handleValueCallback (dart:async/future_impl.dart:678:45)
> #4 Future._propagateToListeners (dart:async/future_impl.dart:707:32)
> #5 Future._completeWithValue (dart:async/future_impl.dart:522:5)
> #6 Future._asyncComplete.<anonymous closure> (dart:async/future_impl.dart:552:7)
> #7 _microtaskLoop (dart:async/schedule_microtask.dart:41:21)
> #8 _startMicrotaskLoop (dart:async/schedule_microtask.dart:50:5)
> #9 _runPendingImmediateCallback (dart:isolate-patch/isolate_patch.dart:116:13)
> #10 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:173:5)
> }
> [ +47 ms] Error initializing DevFS: JSON-RPC error -32603 (internal error): Internal error
> [ +28 ms] "flutter run" took 95,767ms.
>
> #0 throwToolExit (package:flutter_tools/src/base/common.dart:28:3)
> #1 RunCommand.runCommand (package:flutter_tools/src/commands/run.dart:477:7)
> <asynchronous suspension>
> #2 FlutterCommand.verifyThenRunCommand (package:flutter_tools/src/runner/flutter_command.dart:490:18)
> #3 _asyncThenWrapperHelper.<anonymous closure> (dart:async-patch/async_patch.dart:71:64)
> #4 _rootRunUnary (dart:async/zone.dart:1132:38)
> #5 _CustomZone.runUnary (dart:async/zone.dart:1029:19)
> #6 _FutureListener.handleValue (dart:async/future_impl.dart:137:18)
> #7 Future._propagateToListeners.handleValueCallback (dart:async/future_impl.dart:678:45)
> #8 Future._propagateToListeners (dart:async/future_impl.dart:707:32)
> #9 Future._completeWithValue (dart:async/future_impl.dart:522:5)
> #10 _AsyncAwaitCompleter.complete (dart:async-patch/async_patch.dart:30:15)
> #11 _completeOnAsyncReturn (dart:async-patch/async_patch.dart:288:13)
> #12 RunCommand.usageValues (package:flutter_tools/src/commands/run.dart)
> #13 _asyncThenWrapperHelper.<anonymous closure> (dart:async-patch/async_patch.dart:71:64)
> #14 _rootRunUnary (dart:async/zone.dart:1132:38)
> #15 _CustomZone.runUnary (dart:async/zone.dart:1029:19)
> #16 _FutureListener.handleValue (dart:async/future_impl.dart:137:18)
> #17 Future._propagateToListeners.handleValueCallback (dart:async/future_impl.dart:678:45)
> #18 Future._propagateToListeners (dart:async/future_impl.dart:707:32)
> #19 Future._completeWithValue (dart:async/future_impl.dart:522:5)
> #26 _FutureListener.handleValue (dart:async/future_impl.dart:137:18)
> #27 Future._propagateToListeners.handleValueCallback (dart:async/future_impl.dart:678:45)
> #28 Future._propagateToListeners (dart:async/future_impl.dart:707:32)
> #29 Future._completeWithValue (dart:async/future_impl.dart:522:5)
> #30 Future._asyncComplete.<anonymous closure> (dart:async/future_impl.dart:552:7)
> #31 _rootRun (dart:async/zone.dart:1124:13)
> #32 _CustomZone.run (dart:async/zone.dart:1021:19)
> #33 _CustomZone.runGuarded (dart:async/zone.dart:923:7)
> #34 _CustomZone.bindCallbackGuarded.<anonymous closure> (dart:async/zone.dart:963:23)
> #35 _microtaskLoop (dart:async/schedule_microtask.dart:41:21)
> #36 _startMicrotaskLoop (dart:async/schedule_microtask.dart:50:5)
> #37 _runPendingImmediateCallback (dart:isolate-patch/isolate_patch.dart:116:13)
> #38 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:173:5)
>
```
Here is the output of flutter doctor:
> [✓] Flutter (Channel stable, v1.9.1+hotfix.6, on Microsoft Windows [Version 10.0.18362.476], locale en-US)
>     • Flutter version 1.9.1+hotfix.6 at E:\Installations\flutter_windows_v1.0.0-stable\flutter
>     • Framework revision 68587a0916 (9 weeks ago), 2019-09-13 19:46:58 -0700
>     • Engine revision b863200c37
>     • Dart version 2.5.0
>
> [!] Android toolchain - develop for Android devices (Android SDK version 28.0.3)
>     • Android SDK at C:\Users\LENOVO\AppData\Local\Android\sdk
>     • Android NDK location not configured (optional; useful for native profiling support)
>     • Platform android-28, build-tools 28.0.3
>     • Java binary at: C:\Program Files\Android\Android Studio\jre\bin\java
>     • Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1024-b02)
>     X Android license status unknown.
>     Try re-installing or updating your Android SDK Manager.
>     See https://developer.android.com/studio/#downloads or visit https://flutter.dev/setup/#android-setup for detailed instructions.
>
> [✓] Android Studio (version 3.1)
>     • Android Studio at C:\Program Files\Android\Android Studio
>     • Flutter plugin version 24.1.1
>     • Dart plugin version 173.4700
>     • Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1024-b02)
>
> [✓] IntelliJ IDEA Ultimate Edition (version 2018.1)
>     • IntelliJ at C:\Program Files\JetBrains\IntelliJ IDEA 2018.1.2
>     • Flutter plugin version 24.1.2
>     • Dart plugin version 181.4668.60
>
> [✓] VS Code, 64-bit edition (version 1.33.1)
>     • VS Code at C:\Program Files\Microsoft VS Code
>     • Flutter extension version 3.2.0
>
> [✓] Connected device (1 available)
>     • SM A205F • R58M680628W • android-arm64 • Android 9 (API 28) | tool,a: quality,P2,team-tool,triaged-tool | low | Critical |
523,965,936 | pytorch | [feature request] Multivariate normal CDF | I'd like to discuss here the implementation of the multivariate normal cumulative distribution function (CDF), as the following code
```python
import torch
from torch.distributions import MultivariateNormal

mvn = MultivariateNormal(torch.zeros(2), torch.eye(2))
mvn.cdf(torch.ones(2))
```
raises a _NotImplementedError_.
The cdf of the mvn has no closed-form solution, and is implemented [in scipy](https://github.com/scipy/scipy/blob/master/scipy/stats/mvndst.f) (Fortran code) and [in Matlab](https://fr.mathworks.com/help/stats/mvncdf.html) based on the work by Genz (paper [here](http://www.math.wsu.edu/faculty/genz/papers/mvn.pdf)).
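As a stopgap while nothing exists in torch.distributions, the SciPy implementation mentioned above can be called directly (assuming SciPy >= 1.0, which exposes `cdf` on `multivariate_normal`):

```python
import torch
from scipy.stats import multivariate_normal

x = torch.ones(2)
mvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, 0.0], [0.0, 1.0]])
cdf_value = mvn.cdf(x.numpy())  # scalar CDF value computed via Genz's mvndst routine
```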
If needed, we can discuss the derivatives of the cdf w.r.t. the location and the correlation coefficient here. There exist formulas in the bivariate case but not in the general multivariate case (at least to my knowledge).
cc @vincentqb @fritzo @neerajprad @alicanb @vishwakftw | module: distributions,feature,triaged | low | Critical |
523,971,238 | godot | The status of the tabs in the editor is not memorized | **Godot version:**
3.2 beta 1 mono - v3.5.beta1.official [b9b23d222]
**OS/device including version:**
Win 10 64 Home
**Issue description:**
The editor does not remember the status of the tabs.
I use the Mono version, and in the "Editor settings" window I close the "Text Editor" and "Network" tabs because I don't need to edit them.
This way I can navigate faster between the options in the window.
The problem is that when I close the window and open it again, the "Text Editor" and "Network" tabs reappear open; the tabs do not remember their state.
This also happens with the "Project settings" window.
This should also be remembered in custom layouts, so that the tabs keep their state when I use the "Save layout" and "Load layout" options.
**Steps to reproduce:**
1 - Open the "Editor settings" (Editor->Editor Settings) window
2 - Close the "Interface" tab
3 - Close the "Editor settings" window
4 - Open the "Editor settings" (Editor->Editor Settings) window

**Minimal reproduction project:**
| enhancement,topic:editor | low | Minor |
523,977,365 | TypeScript | Rest parameter in callback function using generic tuple types: Forces definition of all parameters | **TypeScript Version:** 3.8.0-dev.20191116
**Search Terms:**
rest parameter, spread, tuple type, generics, conditional types
**Code**
https://github.com/BTOdell/typescript-tuple-rest-spread-bug
This issue was discovered when implementing a type-safe event emitter class. I've produced a minimum reproducible example at the GitHub link above as well as in a Playground link below.
- `index.ts` defines an unimplemented `EventEmitter` that includes 4 functions which should be equivalent (except for the `eventName` parameter).
- `test.ts` consumes the `EventEmitter` class and calls the various functions to test type-safety.
An `Events` interface is used to define the supported events by mapping the event name to a tuple type for the parameters of the event listener.
This interface is passed through a generic parameter `E` in the `Test` class and then passed through a generic parameter `E` in the `EventEmitter` class.
**Expected behavior:**
A listener function should be able to define 1 parameter even if the caller might pass 2 parameters. The listener will then only have access to the first parameter passed. This is how JavaScript normally behaves.
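A tiny standalone illustration of that expectation:

```ts
type Listener = (...args: [boolean, boolean]) => any;
// Declaring fewer parameters than the tuple provides is normally fine:
const listener: Listener = (first: boolean) => first;
```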
**Actual behavior:**
The unexpected behavior occurs on lines 29 and 33 (in `test.ts`) and only occurs when the `Events` interface type has to pass through 2 generic class parameters.
If line 14 is uncommented, and line 13 is commented then all unexpected behavior is resolved.
```
Error:(29, 47) TS2345: Argument of type '(b: boolean) => void' is not assignable to parameter of type '(...args: Args<E["multiArray"]>) => any'.
Types of parameters 'b' and 'args' are incompatible.
Type 'Args<E["multiArray"]>' is not assignable to type '[boolean]'.
Type 'E["multiArray"] | [E["multiArray"]]' is not assignable to type '[boolean]'.
Type 'any[] & E["multiArray"]' is not assignable to type '[boolean]'.
Types of property 'length' are incompatible.
Type '2' is not assignable to type '1'.
Type '[boolean, boolean]' is not assignable to type '[boolean]'.
Types of property 'length' are incompatible.
Type '2' is not assignable to type '1'.
```
It seems like the tuple types `[boolean, boolean]` and `[boolean]` are not being treated like they're being spread as a rest parameter in a function. Could the context information be getting lost due to jumping through multiple generic parameters?
**Playground Link:**
[Playground link](http://www.typescriptlang.org/play/?esModuleInterop=false&module=1&ts=3.8.0-dev.20191116#code/PQKhCgAIUhVBnApvSByAxgewHYBMCWALvjgIYA2khAngA7KpWaS0BOmuAruopKVZ1rleAClKtWpagEpImAGZ9WAc04BbRNkJU6ySACNSSXHOxUAFr3zZanbTXqQAKgDooMAKIAPUmqGIALndISABaSABJRSdIfBRSM3FJagAaC010pWTYwiRyRTjITmM+FEJLJVUNLR16eDdoEPCo51j47Gpy62VIRHIkNPKM-kJBYUgRBUh4fAAvXgBGWSwtUmtuzJnsZXGHKxQ2Dm5EXAaQYHA9yABBFXgAHicAPkgAXlbEL0JNXHbqAG0ALqQAD8rQCkH+TkBAG5wOBPrRMKxtOhyEYUB4AG6aQgeNREb6se4eF4Ab3hIRCtn05Hw6EgrGQhAAwjgCMQyOR7gBpXpfH4oADWiGoU1JIkQOK0ADlfIFIDy0nT4N9sIhWBCRC4deJlPAIbd9ST-jzAU9ZK8XgkZBCsZh8CYKVTIABfSlUml0hlM1UAcU0GvpDz5nzVv0gIrFiglUtxco0EKVkBVao1Wp1Lj1Bsh+kwmGECTSeYLiASgMt1o60jtDqdUCp7ob1M4tPpjOZIlTgc1E0z2Yh-xLhewxfzI4rbyrtsg9sdkGdjY9LbbDOwyLUFC7cTTvZE+ghw7Lo4MACZD+Pj5W+NXa-PFyEm03rET5KQeJBsbiUA-IGpOOQxC3Mkg5HkWBiXuWcIuv+gH4AA8rQnLYBQwFSKBkEnmB2AgrC4BNmiGLOMyJL8uGmLSrkbyfpR8DksukCgBALqNC0QzTIgKwmHS6ptEU2BYGo1TfLgaQUJQnDql49DoCJvQSMi8RMh28AFjipzNiE5yaWw+BYqQ3wdqQuA4OQ1C9AqX5aPihIaiSLzvOqADuNG4jZhBEiI0jQVSwDALp+mGUyxmmeZiCWZR7lEiStEOZAzmudZBIeRqXlwppKyqqw3CEMiXkLppLpdPU4UuOurCbuQ+5TgurreYxwCQDKzAauwrCFVSxUuKV5WVSI+7Fqe15knVMINU1LUKe1LGdeYcTdYgZUbluA1nsWADMw2jeN3gyXJrXIgxRVzSVi2+oQ1VWrV9V+RN8ltR1IRdaV539fog1bTdjXNfdh0zU9J0LS4r2rfop4bZ9Y23btnH7VNR2zfNL3MgG6qsMGIgAESwUBEhSJjxY1SNX13Qd03-c9Z0oz2GPYwBuPJATEzvWekPjT9ZOPRYSNU-6NPoPAWM4-gaHUEzoPgwYm1E9t0PSbDJy-e1XOU8DzJsngRAkChVV03BoviweEGlgkbO3bAUl7YrnP-dzp1q6qGsctrW56wz+NpBLZvfZND226r51O1rXJC-TIt42Lnss2DEMyyTMOydb8Mq4DyOO+ywc66HcGIchqER4bF4m9g3twJbCsmDbFOp7zrIZ3nuvC7nLvkAbUeDSCpcc-D-s1w7deaw32fEM3XJt8zH1x1DjUJ3DftLvh4BAA)
| Needs Investigation | low | Critical |
524,006,637 | flutter | Support both primitives and Java wrapper classses like Booleans in StandardMessageCodec |
## Steps to Reproduce
Hi,
I recently ran into an interesting issue using Flutter and its PlatformChannels.
**Background**: I use a Realm db (https://realm.io) to store an array of booleans on the native Android side, and I want to send these booleans to the Flutter side. Because of the Realm db, those booleans are stored as **java.lang.Boolean** and not as Kotlin booleans. This small difference shouldn't really be a problem.
However, when the result (which is sent to the Flutter channel) contains a boolean of type **java.lang.Boolean**, an exception is thrown in...
**io.flutter.plugin.common.StandardMessageCodec.writeValue**
The interesting part of my code:
```kotlin
//returns the result which is send to the flutter side
fun convertToMap(question: Questions): HashMap<String, Any> {
val dict: HashMap<String, Any> = hashMapOf<String, Any>(
"history" to convertToBooleanArray(question.history)
)
return dict
}
//HELPER FUNCTION
fun convertToBooleanArray(boolList: RealmList<Boolean>): ArrayList<Boolean> {
var newBoolList = arrayListOf<Boolean>()
boolList.forEach {
newBoolList.add(it) // **it** is type of java.lang.Boolean as shown in the Log.d
Log.d("type of boolean:", it.javaClass.name)
//current workaround is to compare the java.lang.Boolean to a Kotlin Boolean
//something like this:
//newBoolList.add(it==true) //the result of the comparison is of type boolean (kotlin) and this works!
}
return newBoolList
}
```
**further details:**
Questions.history: RealmList<Boolean>
**Target Platform: Android**
**Target OS version/browser: 8 and 9 (probably all)**
**Devices: Samsung Galaxy S7/8/9 and all Pixel Emulators**
## Logs
```
D/type of boolean:( 9988): java.lang.Boolean
[ +18 ms] I/zygote64( 9988): Do partial code cache collection, code=61KB, data=42KB
[ +1 ms] I/zygote64( 9988): After code cache collection, code=61KB, data=42KB
[ ] I/zygote64( 9988): Increasing code cache capacity to 256KB
[ +50 ms] E/MethodChannel#skeleton/realm( 9988): Failed to handle method call
[ ] E/MethodChannel#skeleton/realm( 9988): java.lang.IllegalArgumentException: Unsupported value: true
[ ] E/MethodChannel#skeleton/realm( 9988): at io.flutter.plugin.common.StandardMessageCodec.writeValue(StandardMessageCodec.java:294)
[ ] E/MethodChannel#skeleton/realm( 9988): at io.flutter.plugin.common.StandardMessageCodec.writeValue(StandardMessageCodec.java:283)
[ ] E/MethodChannel#skeleton/realm( 9988): at io.flutter.plugin.common.StandardMessageCodec.writeValue(StandardMessageCodec.java:291)
[ ] E/MethodChannel#skeleton/realm( 9988): at io.flutter.plugin.common.StandardMessageCodec.writeValue(StandardMessageCodec.java:283)
[ ] E/MethodChannel#skeleton/realm( 9988): at io.flutter.plugin.common.StandardMethodCodec.encodeSuccessEnvelope(StandardMethodCodec.java:57)
[ ] E/MethodChannel#skeleton/realm( 9988): at io.flutter.plugin.common.MethodChannel$IncomingMethodCallHandler$1.success(MethodChannel.java:225)
[ ] E/MethodChannel#skeleton/realm( 9988): at RealmService.getQuestionsWithIds(RealmService.kt:162)
[ ] E/MethodChannel#skeleton/realm( 9988): at com.example.flutterskeleton.MainActivity$onCreate$1.onMethodCall(MainActivity.kt:33)
[ ] E/MethodChannel#skeleton/realm( 9988): at io.flutter.plugin.common.MethodChannel$IncomingMethodCallHandler.onMessage(MethodChannel.java:222)
[ ] E/MethodChannel#skeleton/realm( 9988): at io.flutter.embedding.engine.dart.DartMessenger.handleMessageFromDart(DartMessenger.java:96)
[ ] E/MethodChannel#skeleton/realm( 9988): at io.flutter.embedding.engine.FlutterJNI.handlePlatformMessage(FlutterJNI.java:656)
[ ] E/MethodChannel#skeleton/realm( 9988): at android.os.MessageQueue.nativePollOnce(Native Method)
[ ] E/MethodChannel#skeleton/realm( 9988): at android.os.MessageQueue.next(MessageQueue.java:325)
[ ] E/MethodChannel#skeleton/realm( 9988): at android.os.Looper.loop(Looper.java:142)
[ ] E/MethodChannel#skeleton/realm( 9988): at android.app.ActivityThread.main(ActivityThread.java:6944)
[ ] E/MethodChannel#skeleton/realm( 9988): at java.lang.reflect.Method.invoke(Native Method)
[ ] E/MethodChannel#skeleton/realm( 9988): at com.android.internal.os.Zygote$MethodAndArgsCaller.run(Zygote.java:327)
[ ] E/MethodChannel#skeleton/realm( 9988): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1374)
[ +49 ms] I/flutter ( 9988): PlatformException(error, Unsupported value: true, null)
```
**flutter doctor -v:**
```
[✓] Flutter (Channel stable, v1.9.1+hotfix.6, on Mac OS X 10.14.4 18E226, locale de-DE)
 • Flutter version 1.9.1+hotfix.6 at /Users/carlostrachwitz/Development/flutter
 • Framework revision 68587a0916 (9 weeks ago), 2019-09-13 19:46:58 -0700
 • Engine revision b863200c37
 • Dart version 2.5.0
[✓] Android toolchain - develop for Android devices (Android SDK version 29.0.2)
 • Android SDK at /Users/carlostrachwitz/Library/Android/sdk
 • Android NDK location not configured (optional; useful for native profiling support)
 • Platform android-29, build-tools 29.0.2
 • Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
 • Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1136-b06)
 • All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 11.0)
 • Xcode at /Applications/Xcode.app/Contents/Developer
 • Xcode 11.0, Build version 11A420a
 • CocoaPods version 1.8.3
[✓] Android Studio (version 3.2)
 • Android Studio at /Applications/Android Studio.app/Contents
 • Flutter plugin version 31.3.1
 • Dart plugin version 181.5656
 • Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1136-b06)
[✓] VS Code (version 1.40.1)
 • VS Code at /Applications/Visual Studio Code.app/Contents
 • Flutter extension version 3.6.0
[✓] Connected device (2 available)
 • SM G930F • 9885e638394a4a3750 • android-arm64 • Android 8.0.0 (API 26)
 • Android SDK built for x86 • emulator-5554 • android-x86 • Android 6.0 (API 23) (emulator)
 • No issues found!
```
| c: new feature,platform-android,engine,P3,a: plugins,team-android,triaged-android | low | Critical |
524,019,108 | angular | Type Provider interfaces when used with InjectionToken<T> |
# 🚀 feature request
### Relevant Package
This feature request is for @angular/core and @angular/compiler.
### Description
`InjectionToken`s are typed, but you can provide any value for them without any type-check error, even in an AOT build.
```ts
const MY_TOKEN = new InjectionToken<string>("my token");
@NgModule({
providers: [
{provide: MY_TOKEN, useValue: 5}, // no type error
]
})
export class MyModule {}
```
### Describe the solution you'd like
I expect the above to produce some sort of error, at least in an AOT build. I think there needs to be a broader discussion around how we can tighten up the `Provider` types. At the moment, they seem extremely liberal, and I see a lot of `any`s, which I don't think is the way to go. I suspect tightening them up entirely will cause a lot of breakages in tests in particular, but it may be worth the pain if Angular wants to become more typesafe in future. Alternatively, a safer intermediary step is to tighten up `Provider` types only when an `InjectionToken<T>` is being provided.
Besides catching mistakes by developers at compile time rather than runtime by end-users, another reason to begin to type these interfaces more strongly is to enable developers to benefit from TS language service feedback when writing providers and refactoring the types of values that are being provided (if I have an InjectionToken\<MyType>, and I'm providing that across my codebase, when I refactor MyType using WebStorm, those changes should propagate to the provider `useValue` etc. sites).
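To illustrate the kind of compile-time feedback I mean, a small typed wrapper (a hypothetical helper, not part of Angular) can surface the error today:

```ts
import { InjectionToken, ValueProvider } from '@angular/core';

function provideValue<T>(token: InjectionToken<T>, useValue: T): ValueProvider {
  return { provide: token, useValue };
}

// Reusing MY_TOKEN (an InjectionToken<string>) from the snippet above:
provideValue(MY_TOKEN, 5); // type error: number is not assignable to string
```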
### Describe alternatives you've considered
| feature,breaking changes,freq1: low,area: core,core: di,cross-cutting: types,feature: under consideration,feature: votes required,canonical | medium | Critical |
524,021,090 | scrcpy | Request: Make resize-window a toggle | Fullscreen mode (Ctrl+f) is a toggle, but 1:1 pixel mode seems to be only turn-on. Once on there is no way to turn it off again that I have found, except by killing and restarting scrcpy.
Could it be made a toggle? | feature request | low | Minor |
524,089,760 | opencv | Transparent Api on RK3399-Mali T860 producing running slower than cpu | System information (version)
- OpenCV => 3.3.1
- Operating System / Platform => Ubuntu 18.04.3 LTS / Firefly RK3399 - Mali-T860
- Compiler => g++
##### Detailed description
I'm having problems leveraging the transparent API on the ARM RK3399 board.
Using the cv::UMat data structure, I can't get faster results in any program that I write.
##### Steps to reproduce
Transparent API code:

```cpp
#include "opencv2/opencv.hpp"
#include <chrono>
#include <iostream>

using namespace cv;

int main(int argc, char** argv)
{
    UMat img, gray;
    imread("lena.jpg", IMREAD_COLOR).copyTo(img); // use lena.jpg from the OpenCV samples

    auto t1 = std::chrono::high_resolution_clock::now();
    cvtColor(img, gray, COLOR_BGR2GRAY);
    GaussianBlur(gray, gray, Size(3, 3), 1.5);
    Canny(gray, gray, 0, 50);
    auto t2 = std::chrono::high_resolution_clock::now();

    auto duration = std::chrono::duration_cast<std::chrono::microseconds>(t2 - t1).count();
    std::cout << duration << std::endl;
    return 0;
}
```
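For completeness, a quick way to confirm that the OpenCL path is actually active on this device (standard cv::ocl API, added here as a sanity check):

```cpp
#include "opencv2/core/ocl.hpp"
#include <iostream>

void report_opencl_status()
{
    std::cout << "haveOpenCL: " << cv::ocl::haveOpenCL() << std::endl;
    std::cout << "useOpenCL:  " << cv::ocl::useOpenCL() << std::endl;
    if (cv::ocl::haveOpenCL())
        std::cout << "device: " << cv::ocl::Device::getDefault().name() << std::endl;
}
```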
I'm attaching my clinfo output containing information about the OpenCL installation and the GPU device.
[clinfo.txt](https://github.com/opencv/opencv/files/3856496/clinfo.txt)
| category: ocl,platform: arm | low | Major |
524,094,053 | rust | cannot resolve `<&'t1 T as std::iter::IntoIterator>::IntoIter == _` | Hi! Let's consider the following code:
```rust
use std::rc::Rc;
use core::cell::RefCell;
use core::cell::Ref;
pub struct SharedDirtyFlag<T> {
data: Rc<RefCell<T>>
}
impl<T> SharedDirtyFlag<T>
where for<'t> &'t T: IntoIterator {
pub fn iter(&self) -> SharedDirtyFlagIter<T> {
let borrow = self.data.borrow();
let iter = borrow.into_iter();
let iter = unsafe { unbound_lifetimes(iter) };
SharedDirtyFlagIter { iter, borrow }
}
}
pub struct SharedDirtyFlagIter<'t,T>
where &'t T: IntoIterator {
pub iter : <&'t T as IntoIterator>::IntoIter,
pub borrow : Ref<'t,T>
}
unsafe fn unbound_lifetimes<'t1, 't2, T>
(t: <&'t1 T as IntoIterator>::IntoIter) -> <&'t2 T as IntoIterator>::IntoIter
where &'t1 T: IntoIterator,
&'t2 T: IntoIterator {
std::mem::transmute(t)
}
```
Playground: https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=5dca98d86cd811ec108f83e01435e69a
It creates an interior-mutable wrapper over a generic type `T` and should expose the iterator. In order to expose the iterator we need to keep both the original iterator and the dynamic borrow of `T`. To do so we need to keep two interconnected values in a single structure, and we need to tell rustc to allow it by relaxing the constraints on the lifetimes. The problem is that it does not compile. The error is:
```rust
error[E0284]: type annotations required: cannot resolve `<&'t1 T as std::iter::IntoIterator>::IntoIter == _`
--> src/lib.rs:25:1
|
25 | / unsafe fn unbound_lifetimes<'t1, 't2, T>
26 | | (t: <&'t1 T as IntoIterator>::IntoIter) -> <&'t2 T as IntoIterator>::IntoIter
27 | | where &'t1 T: IntoIterator,
28 | | &'t2 T: IntoIterator {
29 | | std::mem::transmute(t)
30 | | }
| |_^
```
And now:
1. It seems like a bug. If this is a bug, **is there any way now to express it somehow?**
2. If this is not a bug, could I ask for an explanation of why it does not compile (neither I nor several people on the Rust Discord understand it) and an explanation of how to implement this pattern in a generic form?
-------------
**EDIT**
There is also a minimized form of this issue shown below (thanks Rantanen for discovering it!). However, the questions above still apply and I'd be thankful for the answers.
```rust
unsafe fn unbound_lifetimes<'t1, 't2, T>
(t: <&'t1 T as IntoIterator>::IntoIter) -> <&'t2 T as IntoIterator>::IntoIter
where &'t1 T: IntoIterator,
&'t2 T: IntoIterator {
unimplemented!()
}
``` | A-lifetimes,A-trait-system,T-lang,T-compiler,C-bug | low | Critical |
524,101,595 | go | path/filepath: Join documentation inconsistencies with path.Join | This is a follow-up to #29875, which is closed.
Problems:
1) The fix was only applied to `path.Join` docs, but `filepath.Join` has the same problem and should be fixed too.
2) The new wording drops the sentence "all empty strings are ignored", leaving no clue that `path.Join("", "foo")` results in `"foo"` rather than `"/foo"`.
New:
> Join joins the argument's path elements into a single path, separating them with slashes. The result is Cleaned. However, if the argument list is empty or all its elements are empty, Join returns an empty string.
Old:
> Join joins any number of path elements into a single path, adding a separating slash if necessary. The result is Cleaned; in particular, all empty strings are ignored.
I think it should be clear that `Join` can return the empty string, but it should be also clear that empty strings at the beginning don't make the path absolute.
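A tiny program showing both behaviors:

```go
package main

import (
	"fmt"
	"path"
)

func main() {
	fmt.Printf("%q\n", path.Join())          // ""
	fmt.Printf("%q\n", path.Join("", ""))    // ""   (Join can return the empty string)
	fmt.Printf("%q\n", path.Join("", "foo")) // "foo" (leading empty strings don't make it absolute)
}
```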
While I'm at it, I'm also confused by `path.Dir`:
> Dir returns all but the last element of path, typically the path's directory. After dropping the final element using Split, the path is Cleaned and trailing slashes are removed. If the path is empty, Dir returns ".". If the path consists entirely of slashes followed by non-slash bytes, Dir returns a single slash. In any other case, the returned path does not end in a slash.
It should probably read "If the path consists entirely of slashes *possibly* followed by non-slash bytes" or "If the path consists entirely of slashes followed by *zero or more* non-slash bytes", otherwise the last sentence ("In any other case ...") is false. | Documentation,NeedsInvestigation | low | Major |
524,124,883 | pytorch | Move the attributes of a module to the given device | ## 🚀 Feature
In the current implementation of *[Module](https://github.com/pytorch/pytorch/blob/master/torch/csrc/jit/script/module.h#L289-L313)*, there are methods ([torch::jit::script::Module::to(...)](https://github.com/pytorch/pytorch/blob/master/torch/csrc/jit/script/module.cpp#L138-L150)) to move all the parameters to the given device (e.g. from CPU to GPU). However, there is no corresponding method to move the attributes in the same way.
## Motivation
Parameters and attributes are both organic parts of the module; when moving the module to a given device, both of these components should be moved.
A concrete use case would be remote execution, where the inputs (in form of IValue) are assembled on the client side and are sent over RPC to the server side. Usually the clients only have CPUs but the servers have GPUs.
As we haven't been able to directly serialize/deserialize the IValue (see #25591 for context), a workaround is to leverage the [serialization methods](https://github.com/pytorch/pytorch/blob/master/torch/csrc/jit/script/module.h#L331-L339) of Module as follows: 1) create a container module; 2) pack the IValue inside this container module; 3) serialize the module to bytes and send via RPC to the server side; 4) deserialize and then move the received module to GPU and unpack the IValue.
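A rough Python sketch of those four steps (names and details are illustrative only; the real code path uses the C++ API):

```python
import io
import torch

class Container(torch.nn.Module):
    def __init__(self, payload):
        super().__init__()
        self.payload = payload  # stored as an attribute, not a parameter

    def forward(self):
        return self.payload

# 1)-2) pack the value into a container module and script it
container = torch.jit.script(Container(torch.zeros(3)))

# 3) serialize to bytes (this is what would be sent over RPC)
buffer = io.BytesIO()
torch.jit.save(container, buffer)

# 4) deserialize on the server and move the module to the GPU
buffer.seek(0)
remote = torch.jit.load(buffer, map_location="cuda:0")
remote.to("cuda:0")  # moves parameters; attributes like `payload` are the gap this issue is about
```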
## Pitch
When a user move the module to the given device, both its attributes and parameters should be moved.
## Alternatives
A more general solution would be implementing IValue::to(device) or some IValue pickle function and make Module::to to leverage the API of Ivalue.
## Additional context
The relevant IValue serialization issues are #25109, #25502, #25591.
cc @suo | oncall: jit,triaged | low | Minor |
524,130,833 | flutter | Screen turns black after attempting to lock the screen, while launching app | ## Flutter doctor
```
[✓] Flutter (Channel unknown, v1.10.6, on Microsoft Windows [Version 10.0.18362.476], locale zh-CN)
 • Flutter version 1.10.6 at G:\Softwares\flutter-sdk
 • Framework revision cc3ca9a916 (8 weeks ago), 2019-09-25 10:57:58 -0400
 • Engine revision 63949eb0fd
 • Dart version 2.6.0 (build 2.6.0-dev.2.0 69b5681546)
[!] Android toolchain - develop for Android devices (Android SDK version 29.0.2)
 • Android SDK at C:\Users\Lenovo\AppData\Local\Android\Sdk
 • Android NDK location not configured (optional; useful for native profiling support)
 • Platform android-29, build-tools 29.0.2
 • ANDROID_HOME = C:\Users\Lenovo\AppData\Local\Android\Sdk
 • Java binary at: G:\Softwares\AndroidStudio\jre\bin\java
 • Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b03)
[✓] Android Studio (version 3.5)
 • Android Studio at G:\Softwares\AndroidStudio
 • Flutter plugin version 41.1.2
 • Dart plugin version 191.8593
 • Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b03)
[✓] IntelliJ IDEA Community Edition (version 2019.2)
 • IntelliJ at G:\Softwares\IntelliJ IDEA Community Edition 2019.2.4
 • Flutter plugin version 41.1.4
 • Dart plugin version 192.7402
[✓] Connected device (2 available)
 • SM G960F • 221d64242a0c7ece • android-arm64 • Android 8.1.0 (API 27)
 • BTV DL09 • K6T6R17322001986 • android-arm64 • Android 7.0 (API 24)
```
## Steps to Reproduce
When I tap the app icon on the home screen to launch the app and press the power button to lock the screen while it is launching, the app ends up stuck on a black screen (on the SM G960F device), or it loads some `Text` widgets but does not respond to taps (on the BTV DL09 device). This happens not only in debug mode but also in release mode.
When I operate normally, no exceptions occur.
This also happens when I press the back button to return to the home screen and then re-enter the app, so I added some logic to kill the app when back is pressed. | platform-android,engine,a: existing-apps,P3,team-android,triaged-android | low | Critical |
524,145,958 | TypeScript | Assigning a callable return type of a generic function directly to some generic parameter results in error | **TypeScript Version:** above 3.3.3
**Code**
To see the actual behavior depending on a version of TS it is better to [open the code below in the playground](http://www.typescriptlang.org/play/?ssl=118&ssc=3&pln=110&pc=1#code/C4TwDgpgBAwghgGwXARgiAeAKgPigXigG8oAKYALiiwEoq4A7EAbigF9mAobgEwgGNkAJ2gAzAK4N+wAJYB7BlHGyEAQWw5SnKFH6JkaCFXhJU6DQBptUUQyrkqtAnkYhOdKK94DhYydPlFZRkEACENLR09U0NjfTNMXCsdMDghOABbAGcqImsdW3tKahpnTyZrNnd6Cs4+QTS-KVkFJRUYCOtU9Ozc-N142NhB8yT+wrJip3wXCp0qjy9OAHoAKlXtVeGs6ABGKh4ZEWkEEE8srJkAcwYZBivPAZj0TeXvBpFdBSzgJ4N0OLPTAMcQZFAQIQ4bjBNSRP4JZJQX4zJEAOmAcgAogAPMAKCAMWSIUilZbLJFQGRZR4gsEQixQADyAGl3NCVKE4dF-hBEXkdAU7BSUcB0VjcfjCTJiaTyb8qTTQeChAyWZU2ZwYTBSPz4YZERNkXhRRicXiGASiQgSVAyRSFXAoLTlarWWwaNw1hsoFt4DsoAAmA5HATAU7nS43O4PYAAC2gIiy4gQvzkokeEmagSgImA4iEt3uj25CVe718Nn8LUU-DkGTAyggJh5qhJgJ5GGdEKhmpUrestfrjebCVbNERRrRpolFqlMttcsp1MdXZVTNZHt7IU5A7rDeATZGEDHfPGQsnJvF5st0utsvty6dSvp6-Vm61OoHR6og-3h6BJ7WIaZSXmakpWjadryo+q6upw7qeusmzbNAADMwbHGGZxwBc1yFjG8Y5hASYplAaaPFcBIQjI-CVlmrS5vm+HFkeZZ1D4jR0QErS-sOR7hLgDglO2CQaOyIT9lEe58UCnK7AADPJ47WBeYpgbOEH3tBip0muarvhyXLSQeI6GHJinKTouqClQqnTtec53guD46S6r7zBqH66iWQy8SZ-GkApSkGueIFqTON7zlBS7lEoDAANYMHIADuDDwZuXrIVghHID8Az+gqPzpPc0AAJIALKMqi1CEYmya-KAkBkemAAGfn-jyoQtTFMgZBkECHHAB7holKUMKiyHIQAEnIABu1FFnG0C5b8LVat1WSxil1JLTYchICl0YUJNPpcdWbQhB0q6aN0mQ5MQyFRN+wxAp2z6Qlwp02ZMVCrqUKKuJ9WzujUbinchAAixFgDIB5IoRohwNI8NDdQADKlIMKIELUpRFpCDRUAtVg3XkWt7QtQyOGRgw-WEkichEz56DdbdGTIVcMjzfdABEWDgNAADkpnmNdgsxUlvzU3hCQM0iAtQMLR4YJIo2pTgguojzE3g6dpXabGaQ8HLkh8EIPyMMbyWG8AuNyNGZHjVA5VyJ8caMCjwAMhA81CCAcYO8lrvxdSdzo1As2oaiUdR8hpWKBayUQhHOOBDthHgobs3yEIMUoEIcjxQSOtbFlsYKoby6KL1qTI60u0IDI+dpGcpVQPGCBgHLGRwHcwC907aPiPwsZKP6eg7MunySzmnGW8hkByGA6CeJ8oiu-wA1y9LNzwwmxH1c1TPGR1CTmUp3UYo8s1pNKhg2EcPzJFslt74oV+Om1R6s2kmQQAea5rY0VHnAUQ2NpDUghgADWOj6N4nBawMDyu1EWEBQgACUD6kUICggKQUDKXU-E9IEP4T6oIwVgr2zlsoKiDkIEOABCICoURThQcppZy2kVzvTgu6IAA)
```ts
type Callable<T> = { (t: T): any; };
declare function utilA<T>(
callable: Callable<T>,
fn: (t: T) => any
): any
declare function utilB<T>(
callable: Callable<T>,
params: {
fn: (t: T) => any
}
): any
declare function utilC<T>(
params: {
callable: Callable<T>,
fn: (t: T) => any
}
): any
/**
* Case 1: directly assigning a callable
*/
declare const callable: Callable<number>
utilA(
callable,
t => t.toExponential() // t is a number, OK
)
utilB(
callable,
{
fn: t => t.toExponential() // t is a number, OK
}
)
utilC({
callable,
fn: t => t.toExponential() // t is a number, OK
})
/**
* Case 2: directly assigning the result of a function returning a callable
*/
declare function computeCallableA(): Callable<number>
utilA(
computeCallableA(),
t => t.toExponential() // t is a number, OK
)
utilB(
computeCallableA(),
{
fn: t => t.toExponential() // t is a number, OK
}
)
utilC({
callable: computeCallableA(),
fn: t => t.toExponential() // t is a number, OK
})
/**
* Case 3: directly assigning the result of a generic function returning a callable
*/
declare function computeCallableB<T>(t: T): Callable<T>
utilA(
computeCallableB(100),
t => t.toExponential() // t is a number, OK
)
utilB(
computeCallableB(100),
{
fn: t => t.toExponential() // t is a number, OK
}
)
utilC({
callable: computeCallableB(100), // Error
fn: t => t.toExponential() // t is an unknown
})
/**
* The last case is strange IMO. The result type of `computeCallableB` is immediately known.
*
* Hovering the last `utilC` shows the following:
*
* function utilC<number>(params: {
* callable: Callable<number>;
* fn: (t: number) => any;
* }): any
*
* Despite the fact that TS infers generic `T` of `utilC`, assignment to `callable` param
* gives: "Type 'Callable<number>' is not assignable to type 'Callable<unknown>'.".
*
 * It is hard to understand what's going on. What's more, everything works in TS v3.3.3;
 * in newer versions the behavior is broken.
*
* This has an impact on the library I help to maintain. Such use cases are not rare and
* people are forced to assign the result of `computeCallableB(100)` to a variable first,
* and then to a `callable` parameter, which affects DX:
*/
const computeCallableBResult = computeCallableB(100)
utilC({
callable: computeCallableBResult, // This works!
fn: t => t.toExponential() // t is a number, OK
})
```
**Expected behavior:**
The invocation of
```ts
utilC({
callable: computeCallableB(100), // Error
fn: t => t.toExponential() // t is an unknown
})
```
gives no errors.
**Actual behavior:**
In versions above 3.3.3 the call produces the error "Type 'Callable<number>' is not assignable to type 'Callable<unknown>'." and `t` is inferred as `unknown`. | Needs Investigation | low | Critical |
524,171,877 | opencv | Add OpenCL events support | Initial solution: https://github.com/opencv/opencv/pull/14600/ | feature,category: ocl | low | Minor |
524,205,138 | flutter | [material] allow all overrides Theme().copyWith() | <!-- Thank you for using Flutter!
If you are looking for support, please check out our documentation
or consider asking a question on Stack Overflow:
* https://flutter.dev/
* https://api.flutter.dev/
* https://stackoverflow.com/questions/tagged/flutter?sort=frequent
If you have found a bug or if our documentation doesn't have an answer
to what you're looking for, then fill out the template below. Please read
our guide to filing a bug first: https://flutter.dev/docs/resources/bug-reports
-->
## Steps to Reproduce
<!--
Please tell us exactly how to reproduce the problem you are running into.
Please attach a small application (ideally just one main.dart file) that
reproduces the problem. You could use https://gist.github.com/ for this.
If the problem is with your application's rendering, then please attach
a screenshot and explain what the problem is.
-->
When creating a copy of a theme, `primarySwatch` and `fontFamily` aren't available for override.
Is this intentional, or a limitation of some kind?
<img width="789" alt="Screenshot 2019-11-18 at 1 41 13 PM" src="https://user-images.githubusercontent.com/13887407/69035491-d1db1200-0a09-11ea-9da8-b32060423d94.png">
<!--
Please tell us which target platform(s) the problem occurs (Android / iOS / Web / macOS / Linux / Windows)
Which target OS version, for Web, browser, is the test system running?
Does the problem occur on emulator/simulator as well as on physical devices?
-->
**Target Platform:** MacOS
**Target OS version/browser:** MacOS
**Devices:** MacOS
## Logs
<!--
Run your application with `flutter run --verbose` and attach all the
log output below between the lines with the backticks. If there is an
exception, please see if the error message includes enough information
to explain how to solve the issue.
-->
<!-- Finally, paste the output of running `flutter doctor -v` here. -->
```
[โ] Flutter (Channel master, v1.12.3-pre.45, on Mac OS X 10.14.6 18G95, locale en-GB)
โข Flutter version 1.12.3-pre.45 at /Users/ayushpgupta/development/flutter_master/flutter
โข Framework revision c82c587b33 (33 hours ago), 2019-11-16 18:02:51 -0500
โข Engine revision 7ef587220a
โข Dart version 2.7.0
[!] Android toolchain - develop for Android devices (Android SDK version 29.0.2)
โข Android SDK at /Users/ayushpgupta/Library/Android/sdk
โข Android NDK location not configured (optional; useful for native profiling support)
โข Platform android-29, build-tools 29.0.2
โข Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
โข Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b49-5587405)
! Some Android licenses not accepted. To resolve this, run: flutter doctor --android-licenses
[โ] Xcode - develop for iOS and macOS (Xcode 10.3)
โข Xcode at /Applications/Xcode.app/Contents/Developer
โข Xcode 10.3, Build version 10G8
โข CocoaPods version 1.6.1
[โ] Chrome - develop for the web
โข Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[โ] Android Studio (version 3.5)
โข Android Studio at /Applications/Android Studio.app/Contents
โข Flutter plugin version 39.0.3
โข Dart plugin version 191.8423
โข Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b49-5587405)
[โ] IntelliJ IDEA Ultimate Edition (version 2019.2.3)
โข IntelliJ at /Applications/IntelliJ IDEA.app
โข Flutter plugin version 41.2.3
โข Dart plugin version 192.7402
[โ] Connected device (3 available)
โข macOS โข macOS โข darwin-x64 โข Mac OS X 10.14.6 18G95
โข Chrome โข chrome โข web-javascript โข Google Chrome 78.0.3904.97
โข Web Server โข web-server โข web-javascript โข Flutter Tools
! Doctor found issues in 1 category.
```
| c: new feature,framework,f: material design,c: proposal,P3,team-design,triaged-design | low | Critical |
524,243,890 | tensorflow | tf.io.gfile.glob does not list all files in a Google Cloud Storage bucket | **System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): ?
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: /
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.0.0
- Python version: 3
- Bazel version (if compiling from source): /
- GCC/Compiler version (if compiling from source): /
- CUDA/cuDNN version: /
- GPU model and memory: /
**Describe the current behavior**
When listing files with `tf.io.gfile.glob`, not all images are returned. It seems the `**` pattern is not resolved recursively into subfolders.
When using the same path with gsutil we get the correct image count.
**Describe the expected behavior**
When using the same gs:// path with **gsutil** we get the correct number of images.
**Code to reproduce the issue**
In order to reproduce the behavior I prepared a Google Bucket with the following structure. The bucket is public accessible, please feel free to use it to reproduce the behavior on your end: `gs://tensorflow-issue-reproduction`




In summary we have 4 jpg images nested in different folder levels.
TensorFlow 2 code to reproduce
```
import tensorflow as tf

files = tf.io.gfile.glob('gs://tensorflow-issue-reproduction/**/*.jpg')
print('file count: ', len(files))
# found files 1
```
gsutil command which works properly
```
gsutil du gs://tensorflow-issue-reproduction/**/*.jpg | wc -l
# found files 4
```
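As a side note, a possible workaround sketch — assuming GCS access from TF 2.0 and using `tf.io.gfile.walk`, which does descend into subdirectories — is to collect the jpg filenames manually:
```
import tensorflow as tf

# Workaround sketch: unlike glob with '**', tf.io.gfile.walk recurses into
# subdirectories, so the nested .jpg files can be collected by hand.
bucket = 'gs://tensorflow-issue-reproduction'
files = []
for dirname, _, filenames in tf.io.gfile.walk(bucket):
    files.extend(dirname + '/' + name for name in filenames if name.endswith('.jpg'))
print('file count: ', len(files))
# expected: found files 4
```
This only sidesteps the problem; the recursive `**` behaviour of `tf.io.gfile.glob` itself remains as described above.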
**Other info / logs**
/
Best regards
Sascha
| stat:awaiting tensorflower,type:bug,comp:ops | medium | Major |
524,278,267 | godot | GI Probe streaking artifacts | **Godot version**
3.1.1
**OS/device including version:**
Windows 10(64 bit)
Running on a laptop with a 1060, i7-8750H @ 2.20GHz and 16 GB ram
**Issue description:**
Building lighting with a GI probe produces weird streaks on most models in the scene:

**Steps to reproduce:**
Model and unwrap a medium-sized scene in Blender and export it to Godot as a GLB.
Import it as a scene.
Add directional light, environment light and GI probe.
Build GI with maximum GI subdiv.
**Minimal reproduction project:**
[GiProbeIssue.zip](https://github.com/godotengine/godot/files/3858165/GiProbeIssue.zip)
| bug,topic:rendering,confirmed | low | Major |
524,287,537 | go | gccgo: compiling cmd/internal/obj/x86 on ppc64le takes 14.5GB of RAM and 20 minutes of CPU | Bootstrapping the golang.org Go distribution using this version of gccgo takes an excessive amount of time:
```
$ go version
go version go1.12.2 gccgo (GCC) 9.2.1 20190827 (Red Hat 9.2.1-1) linux/ppc64le
```
Specifically, while compiling cmd/internal/obj/x86, `go1` seems to take nearly 20 minutes and uses 14.5GB of RAM for the first 8.5 minutes (after that, it drops to 3GB of RAM). For comparison, make.bash in its entirety takes about 24 minutes of wall clock time and 33 minutes of CPU time.
It seems like the large arrays/slices in that package are hitting some super-linear complexity.
/cc @ianlancetaylor | NeedsInvestigation | low | Minor |
524,352,949 | go | x/tools/cmd/present: support play snippets in module mode | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version devel +9325bec899 Wed Nov 13 11:59:24 2019 +0100 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GO111MODULE="on"
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/elias/Library/Caches/go-build"
GOENV="/Users/elias/Library/Application Support/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOINSECURE=""
GONOPROXY=""
GONOSUMDB=""
GOOS="darwin"
GOPATH="/Users/elias/go"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/Users/elias/go-tip"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/Users/elias/go-tip/pkg/tool/darwin_amd64"
GCCGO="gccgo"
AR="ar"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD="/Users/elias/proj/gophercon-2019-talk/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/_7/lnt35k555hl2bs7fjygkhgx00000gp/T/go-build183530635=/tmp/go-build -gno-record-gcc-switches -fno-common"
GOROOT/bin/go version: go version devel +9325bec899 Wed Nov 13 11:59:24 2019 +0100 darwin/amd64
GOROOT/bin/go tool compile -V: compile version devel +9325bec899 Wed Nov 13 11:59:24 2019 +0100
uname -v: Darwin Kernel Version 18.7.0: Sat Oct 12 00:02:19 PDT 2019; root:xnu-4903.278.12~1/RELEASE_X86_64
ProductName: Mac OS X
ProductVersion: 10.14.6
BuildVersion: 18G1012
lldb --version: lldb-1100.0.28.19
Apple Swift version 5.1 (swiftlang-1100.0.270.13 clang-1100.0.33.7)
</pre></details>
### What did you do?
```
$ git clone https://github.com/eliasnaur/gophercon-2019-talk
$ cd gophercon-2019-talk
$ GO111MODULE=on present
```
Then, I opened http://127.0.0.1:3999/gophercon-2019.slide#7 in a browser and pressed "run".
### What did you expect to see?
The program running locally.
### What did you see instead?
```
compile1.go:7:5: cannot find module providing package gioui.org/app: working directory is not part of a module
Program exited: exit status 1
```
`present` used to work with `GO111MODULE=on`. I believe the fix for #32027 broke it.
| NeedsInvestigation,modules,Tools | low | Critical |