id (int64) | repo (stringclasses, 68 values) | title (string, 1–936 chars) | body (string, 0–256k chars) | labels (string, 2–508 chars) | priority (stringclasses, 3 values) | severity (stringclasses, 3 values) |
---|---|---|---|---|---|---|
418,795,298 | pytorch | Feature Request: deterministic CUDA torch.nn.CTCLoss | Updated again.
The inclusion of the CTC loss function in pytorch is great. However, the loss may be deterministic or non-deterministic under some conditions. In typical speech training scenarios, it could actually happen that for some batches the CUDNN algorithm is used (which can be set to deterministic or non-deterministic modes), and for some batches the CUDA algorithm is used (which is always non-deterministic).
| Device | `USE_CUDNN` | `torch.backends.cudnn.deterministic` | Code | Deterministic |
|---|---|---|---|---|
| CPU | - | - | [ATen/native/LossCTC.cpp](https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/LossCTC.cpp) | Yes |
| GPU | False | - | [ATen/native/cuda/LossCTC.cu](https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/cuda/LossCTC.cu) | No |
| GPU | True | True | [ATen/native/cudnn/LossCTC.cpp](https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/cudnn/LossCTC.cpp) | Yes |
| GPU | True | False | [ATen/native/cudnn/LossCTC.cpp](https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/cudnn/LossCTC.cpp) | No |
- `USE_CUDNN` is a boolean variable re-generated for every batch [here](https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/LossCTC.cpp#L336-L355)
I have attached a code sample for reproducing the non-deterministic loss function. Even though the difference in this code sample is small, it gets larger when training speech recognition models. To the best of my knowledge, warp-ctc is deterministic on GPU (e.g. via [Sean Naren's pytorch binding](https://github.com/SeanNaren/warp-ctc)).
Would it be possible to introduce a deterministic mode in the CUDA version? To the best of my knowledge, this needs an alternative for `atomicAdd` in [`LossCTC.cu`](https://github.com/pytorch/pytorch/blob/29f096cc70cf9cc1a317ae7107228215b7dde60b/aten/src/ATen/native/cuda/LossCTC.cu#L409-L412).
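For context on why `atomicAdd` is non-deterministic: floating-point addition is not associative, and `atomicAdd` lets threads accumulate in a data-dependent order, so the same values summed in different orders can round differently. A minimal pure-Python illustration of the underlying effect (no CUDA required):

```python
# Floating-point addition is not associative: summing the same three
# values in a different order produces a different rounded result.
a = (0.1 + 0.2) + 0.3  # 0.6000000000000001
b = (0.3 + 0.2) + 0.1  # 0.6
print(a == b)  # False
```

With `atomicAdd`, the summation order depends on thread scheduling, so this tiny discrepancy varies from run to run and compounds over training iterations.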
Output:
```
7402
Trial 0 ::: Device cuda ::: Iter 99 ::: net_csum -1.2945041656 ::: log_prob_checksum -500026.3125000000 ::: loss 10.0990858078 ::: inc_loss 1045.5471239090
Trial 1 ::: Device cuda ::: Iter 99 ::: net_csum -1.2944996357 ::: log_prob_checksum -500026.2500000000 ::: loss 10.0990848541 ::: inc_loss 1045.5470838547
Trial 2 ::: Device cuda ::: Iter 99 ::: net_csum -1.2945439816 ::: log_prob_checksum -500026.2500000000 ::: loss 10.0990819931 ::: inc_loss 1045.5470523834
Trial 0 ::: Device cpu ::: Iter 99 ::: net_csum 2.3939034939 ::: log_prob_checksum -499604.8750000000 ::: loss 10.1452789307 ::: inc_loss 1050.8508319855
Trial 1 ::: Device cpu ::: Iter 99 ::: net_csum 2.3939034939 ::: log_prob_checksum -499604.8750000000 ::: loss 10.1452789307 ::: inc_loss 1050.8508319855
Trial 2 ::: Device cpu ::: Iter 99 ::: net_csum 2.3939034939 ::: log_prob_checksum -499604.8750000000 ::: loss 10.1452789307 ::: inc_loss 1050.8508319855
```
Code:
```
import torch
import torch.nn as nn

def train_with_ctc(trial, device):
    # Random seed
    torch.manual_seed(0)
    # Data
    torch.manual_seed(0)
    inp = torch.randn(500, 16, 10, device=device)
    targets = torch.randint(1, 20, (sum(range(100, 116)),), dtype=torch.int)
    input_lengths = torch.full((16,), 500, dtype=torch.long)
    target_lengths = torch.arange(100, 116, dtype=torch.long)
    # Network
    net = nn.Sequential(nn.Linear(10, 20), torch.nn.LogSoftmax(2))
    net.to(device)
    optimizer = torch.optim.Adam(net.parameters())
    ctc_loss = nn.CTCLoss(blank=19)  # use cuDNN: blank=0, use CUDA: blank != 0 (only true when other parameters are not changed)
    # Training loop
    inc_loss = 0
    for it in range(100):
        optimizer.zero_grad()
        log_probs = net(inp)
        loss = ctc_loss(log_probs, targets, input_lengths, target_lengths)
        loss.backward()
        optimizer.step()
        inc_loss += float(loss)
    # Print
    torch.cuda.synchronize()
    net_csum = torch.nn.utils.parameters_to_vector(net.parameters()).sum()
    print('Trial {} ::: Device {:4} ::: Iter {} ::: net_csum {:.10f} ::: log_prob_checksum {:.10f} ::: loss {:.10f} ::: inc_loss {:.10f}'.format(trial, device, it, net_csum, log_probs.sum(), float(loss), inc_loss))

if __name__ == '__main__':
    # Get cuDNN version
    print(torch.backends.cudnn.version())
    # Toggle cuDNN backend
    # torch.backends.cudnn.deterministic = True  # change to True and blank label to 0 to toggle CUDNN_CTC_LOSS_ALGO_DETERMINISTIC / CUDNN_CTC_LOSS_ALGO_NON_DETERMINISTIC in pytorch/aten/src/ATen/native/cudnn/LossCTC.cpp
    # torch.backends.cudnn.benchmark = False
    # Train with CTC
    for device in ['cuda', 'cpu']:
        for trial in range(3):
            train_with_ctc(trial, device)
```
cc @albanD @mruberry @jbschlosser @walterddr @kurtamohler | feature,module: nn,module: loss,triaged,module: determinism | low | Major |
418,809,502 | pytorch | [Caffe2] cudnn mismatch | ## ❓ Questions and Help
### Hi, Everyone.
This is my system config:
### cuda: 8.0 cudnn: 6.0.21
### Environment : A virtual environment, with
### python2.7 in Anaconda2, cudatoolkit: 8.0, cudnn: 7.1.3
### When I install Caffe2 from **source code** and run code based on it, **I hit the following issue:**

### And When I build, I check the cuda and cudnn information like this:

### And I got these warnings at the end:

### This is the command I used in cmake:
**cmake .. -DCMAKE_PREFIX_PATH=/home/pci/anaconda2/envs/spotter -DCMAKE_INSTALL_PREFIX=/home/pci/anaconda2/envs/spotter -DPYTHON_LIBRARY=/home/pci/anaconda2/envs/spotter/lib/python2.7/site-packages -DPYTHON_INCLUDE_DIR=/home/pci/anaconda2/envs/spotter/include/python2.7
-DUSE_NCCL=OFF**
### **The build information shows that Caffe2 was built with CUDA 8 and cuDNN 7.1, but...**
### So why did this happen, and how can I fix it?
### Thank you very much ! | caffe2 | low | Major |
418,816,272 | godot | Android export should give clear error on Release export when matching keystore is not configured | **Godot version:**
3.1 b11
**OS/device including version:**
Windows 10 x64
**Issue description:**
As reported [here](https://godotengine.org/qa/41815/android-export-generates-error-export-with-debug-option-godot), Android export generates an error when the "Export With Debug" option is off
**Steps to reproduce:**
[Disable the option](https://i.imgur.com/BrF1kbC.png)
[First Error](https://i.imgur.com/VCkoF6Q.png)
[Second Error](https://i.imgur.com/JBMXdUb.png)
Turning this option on again, the export works ok.
What's wrong? | enhancement,platform:android,topic:editor,usability | low | Critical |
418,847,087 | TypeScript | JSDoc: Typescript is interpreting varargs in function signature as required argument if jsdoc function notation is used in typedef. | **TypeScript Version:** [email protected]
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:** jsdoc, varargs, typedef,
**Code**
```js
// if I want to use typescript format
// this format is not supported by WebStorm
/** @typedef {(...a2:Array<string>) => void} A */
/** @type {A} */
let a;
a(); // but this works fine with typescript
// if I want to use JSDoc format
// this format is supported by WebStorm
/** @typedef {function(...string):void} B */
/** @type {B} */
let b;
b(); // but this not work, TS2555: Expected at least 1 arguments, but got 0
```
**Expected behavior:**
Both formats should be interpreted in the same way.
**Actual behavior:**
```
test.js:13:1 - error TS2555: Expected at least 1 arguments, but got 0.
13 b(); // but this not work, TS2555: Expected at least 1 arguments, but got 0
~~~
test.js:10:24
10 /** @typedef {function(...string):void} B */
~~~~~~~~~
An argument for '0' was not provided.
```
| Suggestion,Awaiting More Feedback | low | Critical |
418,869,988 | nvm | installation of nvm with XDG variables sets doesn't seem to use a subdirectory | Hey,
I tried to install nvm according to the README and my environment contains a few XDG variables, like
```
declare -x XDG_CACHE_HOME="$HOME/.cache"
declare -x XDG_CONFIG_HOME="$HOME/.config"
declare -x XDG_DATA_HOME="$HOME/.local/share"
```
The installation (using bash) took place directly inside the `.config` directory, which ended up being used as the root Git directory. Could it be that `/nvm` should not be inside the variable expansion here — see [bash parameter substitution](http://tldp.org/LDP/abs/html/parameter-substitution.html)?
https://github.com/creationix/nvm/blob/2410215b6a96c5a4376af8a41b6e2942b4b6cc2d/install.sh#L11
```
$ echo "${XDG_CONFIG_HOME}"
$HOME/.config
$ echo "${XDG_CONFIG_HOME/nvm}"
$HOME/.config
$ bash --version
GNU bash, version 4.4.12(1)-release (x86_64-pc-linux-gnu)
Copyright (C) 2016 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
```
Cheers | bugs,pull request wanted,installing nvm | low | Major |
418,908,450 | rust | Allow lints to declare that they're pure | Clippy and rustc both have a bunch of allow by default lints. Due to the way lint passes work, these have to be run even if never enabled, since lint passes may store state between `check_foo` calls.
Most lints don't do this, most lints are zero-sized.
It would be nice to allow LintPasses to declare that they are "pure", and have rustc omit allowed pure passes from the list of lint passes until enabled. This has some performance benefits but would also allow for incremental linting if we ever want that.
cc @Zoxc @oli-obk | C-enhancement,A-lints,T-compiler | low | Major |
418,939,990 | react | Can an error boundary prevent React's error logging? | I noticed [this unconditional `console.error`](https://github.com/facebook/react/blob/d0289c7e3a2dfc349dcce7f9eb3dee22464e97bd/packages/react-reconciler/src/ReactFiberErrorLogger.js#L86) which I'd like to prevent to keep the console clean from errors that are already "caught" in an error boundary.
Maybe a condition on `capturedError.errorBoundaryFound` could prevent this logging? | Type: Feature Request | high | Critical |
418,976,684 | youtube-dl | Add Atlantic Broadband to the MSO list. | ## Please follow the guide below
- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)
- Use the *Preview* tab to see what your issue will actually look like
---
### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2019.03.09*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2019.03.09**
### Before submitting an *issue* make sure you have:
- [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
- [x] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser
### What is the purpose of your *issue*?
- [ ] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [x] Other
---
### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your *issue*
---
### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows:
Add the `-v` flag to **your command line** you run youtube-dl with (`youtube-dl -v <your command line>`), copy the **whole** output and insert it here. It should look similar to one below (replace it with **your** log inserted between triple ```):
```
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']
[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
[debug] youtube-dl version 2019.03.09
[debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
[debug] Proxy map: {}
...
<end of log>
```
---
### If the purpose of this *issue* is a *site support request* please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**):
- Single video: https://www.youtube.com/watch?v=BaW_jenozKc
- Single video: https://youtu.be/BaW_jenozKc
- Playlist: https://www.youtube.com/playlist?list=PL4lCao7KL_QFVb7Iudeipvc2BCavECqzc
Note that **youtube-dl does not support sites dedicated to [copyright infringement](https://github.com/rg3/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free)**. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
---
### Description of your *issue*, suggested solution and other information
Explanation of your *issue* in arbitrary form goes here. Please make sure the [description is worded well enough to be understood](https://github.com/rg3/youtube-dl#is-the-description-of-the-issue-itself-sufficient). Provide as much context and examples as possible.
If work on your *issue* requires account credentials please provide them or explain how one can obtain them.
I'm not a programmer but have done some scripting, so I know a little about what I'm looking at.
Modified the adobepass.py with the following:
```
'ABB': {
    'name': 'Atlantic Broadband',
    'username_field': 'username',
    'password_field': 'password',
},
```
I wanted to see if it would just work out of the box.
Here is the result:
youtube-dl --ap-mso ABB --ap-username [email protected] --ap-password xxxxxxxx https://disneynow.go.com/shows/pj-masks/season-02/episode-25-gekko-and-the-opposite-ray-pj-masks-vs-bad-guys-united/vdka8567972?pid=PL554034612 --verbose
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['--ap-mso', 'ABB', '--ap-username', 'PRIVATE', '--ap-password', 'PRIVATE', 'https://disneynow.go.com/shows/pj-masks/season-02/episode-25-gekko-and-the-opposite-ray-pj-masks-vs-bad-guys-united/vdka8567972?pid=PL554034612', '--verbose']
[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2019.03.01
[debug] Python version 3.6.1 (CPython) - Linux-3.10.0-862.3.2.el7.x86_64-x86_64-with-centos-7.5.1804-Core
[debug] exe versions: ffmpeg 3.4.5, ffprobe 3.4.5
[debug] Proxy map: {}
[Go] vdka8567972: Downloading webpage
[Go] VDKA8567972: Downloading JSON metadata
[Go] VDKA8567972: Downloading Provider Redirect Page
ERROR: Unable to download webpage: HTTP Error 400: Bad Request (caused by <HTTPError 400: 'Bad Request'>); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; see https://yt-dl.org/update on how to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
File "/usr/local/lib/python3.6/site-packages/youtube_dl/extractor/common.py", line 605, in _request_webpage
return self._downloader.urlopen(url_or_request)
File "/usr/local/lib/python3.6/site-packages/youtube_dl/YoutubeDL.py", line 2225, in urlopen
return self._opener.open(req, timeout=self._socket_timeout)
File "/usr/local/lib/python3.6/urllib/request.py", line 532, in open
response = meth(req, response)
File "/usr/local/lib/python3.6/urllib/request.py", line 642, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/local/lib/python3.6/urllib/request.py", line 570, in error
return self._call_chain(*args)
File "/usr/local/lib/python3.6/urllib/request.py", line 504, in _call_chain
result = func(*args)
File "/usr/local/lib/python3.6/urllib/request.py", line 650, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
I feel like I need to modify something to get it further so that is why I'm reaching out.
Some additional info:
Redirects to this url: https://auth.atlanticbb.net/saml/module.php/authSynacor/login.php?AuthState=_f667de4449b7f14af0e7a2b8c1df271ed8bfdc6146%3Ahttps%3A%2F%2Fauth.atlanticbb.net%2Fsaml%2Fsaml2%2Fidp%2FSSOService.php%3Fspentityid%3Dhttps%253A%252F%252Fadobe.auth-gateway.net%252Fsaml%252Fmodule.php%252Fsaml%252Fsp%252Fmetadata.php%252Fproxy_auth.atlanticbb.net%26cookieTime%3D1552082646%26RequesterID%3D%255B%2522DisneyChannels%2522%252C%2522https%253A%255C%252F%255C%252Fsaml.sp.auth.adobe.com%2522%255D%26NameIDFormat%3Durn%253Aoasis%253Anames%253Atc%253ASAML%253A2.0%253Anameid-format%253Atransient
Looking at the source code in the html authentication page, I found the following:
<div id="login_form" class="form"><span class="login_message">Welcome, Please Log In</span><label for="username">Email Address</label><input placeholder="Email Address" class="input" id="username" type="text" name="username" value="[email protected]" tabindex="1" autocapitalize="off" autocorrect="off" required="" autofocus=""><div class="field-help"></div><label for="password">Password</label><input placeholder="Password" class="input" id="password" type="password" name="password" value="" tabindex="2" autocapitalize="off" autocorrect="off" required=""><div class="field-help"></div><input type="hidden" name="login_type" value="username,password"><input type="hidden" name="source" value=""><button id="login" type="submit" name="source_button" value="" class="login" tabindex="3">
Log In
</button><span class="remember"><input type="checkbox" name="remember_me" value="yes" tabindex="4" checked="checked">
Remember Me
</span><!-- CUSTOM SELFCARE AREA --><ul id="selfcare_list"><li><a target="_blank" id="FORGOT_USERNAME" href="https://emailtools.atlanticbb.net/etools/recovery/forgotYourUsername/">Forgot or need an E-Mail Address?</a></li><li><a target="_blank" id="FORGOT_PASSWORD" href="https://emailtools.atlanticbb.net/etools/recovery/forgotYourPassword/">Forgot your password?</a></li></ul><!-- CUSTOM SELFCARE AREA --></div>
I'll gladly work on this myself, just need some help. I can also create a sub email account that one of the developers could use to test. Thanks for your help. | tv-provider-account-needed | low | Critical |
419,019,169 | flutter | Explore making State loss more obvious in widgets tests | Make it easier for users to notice States are being disposed.
One way is to make all State disposes need to be expected in widgets tests. Test: https://github.com/flutter/flutter/pull/29076
By blindly adding it in, 601 out of 3,804 (16%) of existing tests fail.
By filtering out some common states such as _AutomaticKeepAliveState, the failure rate drops to 497 (13%), though maintaining a whitelist is unlikely to be the right solution.
Perhaps modify pumpWidget such that only explicitly constructed widgets sent into pumpWidget have their associated States tracked. | a: tests,framework,a: quality,c: proposal,P3,team-framework,triaged-framework | low | Critical |
419,020,425 | flutter | Add a State create/dispose debug rainbow | Similar to #15550. Though rather than asserting for performance, this should help checking for functional correctness.
| c: new feature,framework,a: debugging,customer: google,P3,team-framework,triaged-framework | low | Critical |
419,024,321 | godot | TextureRegion doesn't update to new copy of AtlasTexture | <!-- Please search existing issues for potential duplicates before filing yours:
https://github.com/godotengine/godot/issues?q=is%3Aissue
-->
**Godot version:**
<!-- Specify commit hash if non-official. -->
master 86d6a72c9745bd15e3217d63b2cc5ba73fe4ad34
both GLES3 and GLES2
**OS/device including version:**
<!-- Specify GPU model and drivers if graphics-related. -->
Windows 10 64-bit
GTX 1060 6GB
**Issue description:**
<!-- What happened, and what was expected. -->
TextureRegion gets "disconnected" from the duplicated TextureRect's AtlasTexture when you make it unique.
It works fine with a Sprite.
**Steps to reproduce:**
- Create TextureRect and add Atlas Texture
- Select any image as atlas
- (Optional) Select any region
- Duplicate TextureRect
- Select TextureRect2 and click on AtlasTexture resource to open TextureRegion
- Make AtlasTexture unique
- Change region inside TextureRegion
- It doesn't update current region
To fix it:
- Click AtlasTexture resource to close TextureRegion and open it again (Doing it by clicking TextureRegion tab won't fix it)
| bug,topic:editor,confirmed | low | Major |
419,027,824 | flutter | [Discussion] A communication channel between an isolate and platform code on an arbitrary thread | Filing this to start and track a discussion.
Currently the only mechanism that Flutter plugins can use for communication between the Dart code and the platform specific code on Android and iOS are platform channels.
Platform channels enforce a single cross-thread flow: messages from the main Dart isolate(running on the UI thread) are sent to the platform specific code running on the platform thread.
Allowing passing messages from Dart code running in a different isolate to platform-specific code(e.g Java on Android) running not on the platform thread provides some needed flexibility, specifically I have 2 main concrete scenarios in mind:
### Streaming data from Dart to platform code
#27896 is a request for allowing the Dart code to provide a byte stream for data to be loaded by a webview. The Android API lets us provide a Java Stream to the webview, and the webview will read this stream on a background thread.
What we need here is a stream that is fed by Dart code on one side and read by Java code on the other. Implementing such a stream over a platform channel forces the UI thread and the platform thread into the dance (UI_THREAD->PLATFORM_THREAD->BACKGROUND_THREAD) for every data chunk.
A communication channel between a Dart isolate and platform code on an arbitrary thread(or maybe on the thread running the Dart isolate) will allow doing so without involving the UI and platform threads.
Note that I did not run benchmarks and don't have a good feeling for how significant the UI performance gain would be for such a thing(I guess going through the UI and platform threads might not be that bad as the Dart code won't be blocking on IO but would just occasionally send some data to the platform thread which will just have to notify the background thread).
I wonder if others have a better hunch than me for the performance cost involved with going through the UI and platform threads.
### Making "synchronous" calls from platform code to Dart code
This came up most recently in #25329 where we need to provide a Java API with a callback that has to make a synchronous decision(and this decision makes most sense to be made by Dart code). For #25329 we came up with a pretty complex workaround to avoid the need for a synchronous decision(that has drawbacks, specifically giving up on some webview security mechanisms).
This has also come up in other use cases where we wanted to use platform APIs that require being handed callbacks that make synchronous decisions.
While we can devise a simpler mechanism for allowing synchronous calls(@jason-simmons and I have discussed a few options), we should discourage users from using it unless they really know what they are doing(generally you shouldn't block the platform thread), and what better way is there other than making it hard to do :)
Specifically if we had a communication channel from Dart to Java code running not on the platform thread (whether through a separate isolate or not), we could block the platform thread and unblock it by sending a message from Dart to that other thread that will unblock the platform thread.
Happy to hear your thoughts, and ideally we can decide whether this is something worth doing or whether the benefits are not important enough.
cc @chinmaygarde @jason-simmons @cbracken | engine,customer: google,c: proposal,P3,a: plugins,team-engine,triaged-engine | low | Major |
419,041,038 | rust | Resolve intra-doc-links on Self type implicitly | With intra-doc-links, the following works:
```rust
/// [Self::bar]
pub struct Foo;
impl Foo {
/// [Self::bar]
fn bar() {}
}
```
but this does not:
```
/// [bar]
pub struct Foo;
impl Foo {
/// [bar]
fn bar() {}
}
```
For structs and impls we should try resolving things against `Self` anyway | T-rustdoc,C-enhancement,A-intra-doc-links | medium | Major |
419,043,482 | rust | Rust ppc64 requires AltiVec, which is not available on PowerPC e5500 | rustc -Vv
rustc 1.35.0-nightly (88f755f8a 2019-03-07)
binary: rustc
commit-hash: 88f755f8a84df1d9e6b17cf10c96ae8b93481b2e
commit-date: 2019-03-07
host: x86_64-unknown-linux-gnu
release: 1.35.0-nightly
LLVM version: 8.0
strace log:
munmap(0x3fffa2913000, 14223) = 0
set_tid_address(0x3fffa290c0d0) = 3111
set_robust_list(0x3fffa290c0e0, 24) = 0
rt_sigaction(SIGRTMIN, {0x3fffa288f1c0, [], SA_SIGINFO}, NULL, 8) = 0
rt_sigaction(SIGRT_1, {0x3fffa288f1d8, [], SA_RESTART|SA_SIGINFO}, NULL, 8) = 0
rt_sigprocmask(SIG_UNBLOCK, [RTMIN RT_1], NULL, 8) = 0
ugetrlimit(RLIMIT_STACK, {rlim_cur=8192*1024, rlim_max=RLIM64_INFINITY}) = 0
--- SIGILL {si_signo=SIGILL, si_code=ILL_ILLOPC, si_addr=0x37149e34} ---
+++ killed by SIGILL (core dumped) +++
Illegal instruction (core dumped)
(gdb) b reset_sigpipe
Function "reset_sigpipe" not defined.
Make breakpoint pending on future shared library load? (y or [n]) y
Breakpoint 1 (reset_sigpipe) pending.
(gdb) r
Starting program: /root/xx
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Program received signal SIGILL, Illegal instruction.
0x0000000020017e34 in reset_sigpipe () at src/libstd/sys/unix/mod.rs:77
77 src/libstd/sys/unix/mod.rs: No such file or directory.
(gdb) r
The program being debugged has been started already.
Start it from the beginning? (y or n) y
Starting program: /root/xx
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Breakpoint 1, lang_start_internal () at src/libstd/rt.rs:30
30 src/libstd/rt.rs: No such file or directory.
(gdb) s
init () at src/libstd/rt.rs:30
30 in src/libstd/rt.rs
(gdb) disassemble
Dump of assembler code for function lang_start_internal:
0x0000000020017dec <+0>: mflr r0
0x0000000020017df0 <+4>: std r0,16(r1)
0x0000000020017df4 <+8>: stdu r1,-512(r1)
0x0000000020017df8 <+12>: li r7,416
0x0000000020017dfc <+16>: std r3,120(r1)
=> 0x0000000020017e00 <+20>: li r3,13
0x0000000020017e04 <+24>: std r4,128(r1)
0x0000000020017e08 <+28>: li r4,1
0x0000000020017e0c <+32>: std r23,440(r1)
0x0000000020017e10 <+36>: std r24,448(r1)
0x0000000020017e14 <+40>: std r25,456(r1)
0x0000000020017e18 <+44>: std r26,464(r1)
0x0000000020017e1c <+48>: std r27,472(r1)
0x0000000020017e20 <+52>: std r28,480(r1)
0x0000000020017e24 <+56>: std r29,488(r1)
0x0000000020017e28 <+60>: mr r29,r5
0x0000000020017e2c <+64>: std r30,496(r1)
0x0000000020017e30 <+68>: mr r30,r6
0x0000000020017e34 <+72>: stvx v31,r1,r7
0x0000000020017e38 <+76>: bl 0x20008fa0 <0000097b.plt_call.signal@@GLIBC_2.3>
0x0000000020017e3c <+80>: ld r2,40(r1)
---Type <return> to continue, or q <return> to quit---q
Quit
(gdb) c
Continuing.
Program received signal SIGILL, Illegal instruction.
0x0000000020017e34 in reset_sigpipe () at src/libstd/sys/unix/mod.rs:77
77 src/libstd/sys/unix/mod.rs: No such file or directory.
readelf -V xx
Version symbols section '.gnu.version' contains 89 entries:
Addr: 00000000000010f8 Offset: 0x0010f8 Link: 5 (.dynsym)
000: 0 (*local*) 0 (*local*) 0 (*local*) d (GLIBC_2.22)
004: 2 (GLIBC_2.3) 2 (GLIBC_2.3) 2 (GLIBC_2.3) 2 (GLIBC_2.3)
008: 3 (GCC_3.3) 4 (GLIBC_2.3) 4 (GLIBC_2.3) 3 (GCC_3.3)
00c: 2 (GLIBC_2.3) 0 (*local*) 4 (GLIBC_2.3) 4 (GLIBC_2.3)
010: 2 (GLIBC_2.3) 4 (GLIBC_2.3) 2 (GLIBC_2.3) 2 (GLIBC_2.3)
014: 5 (GLIBC_2.18) 6 (GLIBC_2.3.4) 7 (GCC_3.0) 7 (GCC_3.0)
018: 7 (GCC_3.0) 8 (GLIBC_2.3.2) 2 (GLIBC_2.3) 4 (GLIBC_2.3)
01c: 9 (GLIBC_2.3) 2 (GLIBC_2.3) a (GCC_4.2.0) 7 (GCC_3.0)
020: 8 (GLIBC_2.3.2) 2 (GLIBC_2.3) 2 (GLIBC_2.3) 4 (GLIBC_2.3)
024: 2 (GLIBC_2.3) 4 (GLIBC_2.3) 4 (GLIBC_2.3) 0 (*local*)
028: 4 (GLIBC_2.3) 2 (GLIBC_2.3) 4 (GLIBC_2.3) 4 (GLIBC_2.3)
02c: 8 (GLIBC_2.3.2) 2 (GLIBC_2.3) 4 (GLIBC_2.3) 2 (GLIBC_2.3)
030: 2 (GLIBC_2.3) 4 (GLIBC_2.3) b (GLIBC_2.4) 4 (GLIBC_2.3)
034: 4 (GLIBC_2.3) 2 (GLIBC_2.3) 4 (GLIBC_2.3) 2 (GLIBC_2.3)
038: c (GLIBC_2.3.3) 4 (GLIBC_2.3) 4 (GLIBC_2.3) 2 (GLIBC_2.3)
03c: 4 (GLIBC_2.3) 4 (GLIBC_2.3) 2 (GLIBC_2.3) 4 (GLIBC_2.3)
040: 2 (GLIBC_2.3) 7 (GCC_3.0) 7 (GCC_3.0) 4 (GLIBC_2.3)
044: 4 (GLIBC_2.3) 2 (GLIBC_2.3) 8 (GLIBC_2.3.2) 2 (GLIBC_2.3)
048: 2 (GLIBC_2.3) 0 (*local*) 2 (GLIBC_2.3) 0 (*local*)
04c: 7 (GCC_3.0) 2 (GLIBC_2.3) 2 (GLIBC_2.3) 7 (GCC_3.0)
050: 2 (GLIBC_2.3) 4 (GLIBC_2.3) 2 (GLIBC_2.3) 4 (GLIBC_2.3)
054: 4 (GLIBC_2.3) 7 (GCC_3.0) 1 (*global*) 1 (*global*)
058: 1 (*global*)
Version needs section '.gnu.version_r' contains 5 entries:
Addr: 0x00000000000011b0 Offset: 0x0011b0 Link: 6 (.dynstr)
000000: Version: 1 File: ld64.so.1 Cnt: 1
0x0010: Name: GLIBC_2.22 Flags: none Version: 13
0x0020: Version: 1 File: libdl.so.2 Cnt: 1
0x0030: Name: GLIBC_2.3 Flags: none Version: 9
0x0040: Version: 1 File: libpthread.so.0 Cnt: 3
0x0050: Name: GLIBC_2.3.3 Flags: none Version: 12
0x0060: Name: GLIBC_2.3.2 Flags: none Version: 8
0x0070: Name: GLIBC_2.3 Flags: none Version: 4
0x0080: Version: 1 File: libgcc_s.so.1 Cnt: 3
0x0090: Name: GCC_4.2.0 Flags: none Version: 10
0x00a0: Name: GCC_3.0 Flags: none Version: 7
0x00b0: Name: GCC_3.3 Flags: none Version: 3
0x00c0: Version: 1 File: libc.so.6 Cnt: 4
0x00d0: Name: GLIBC_2.4 Flags: none Version: 11
0x00e0: Name: GLIBC_2.3.4 Flags: none Version: 6
0x00f0: Name: GLIBC_2.18 Flags: none Version: 5
0x0100: Name: GLIBC_2.3 Flags: none Version: 2 | T-compiler,O-PowerPC,C-bug | low | Critical |
419,048,216 | pytorch | PyPy support | ## 🚀 Feature
Support pytorch from PyPy -- a fast, compliant alternative implementation of the Python language (http://pypy.org)
## Motivation
While pytorch itself probably won't benefit much from the PyPy JIT, pytorch is often part of a larger application where using PyPy can have speed benefits, e.g. for evaluation, data generation, or other activities.
## Pitch
I can imagine several stages:
- PyPy support works by compiling from source
- PyPy is integrated into CI
- PyPy wheels are built and released
## Additional context
Right now it seems that PyPy support, at least for commonly used stuff, is not far from reality, for example this branch https://github.com/lopuhin/pytorch/tree/pypy3.6 works in PyPy 3.6 7.0.0 for a moderately complex application (didn't try running tests yet).
Here is a summary of fixes in that branch:
1. working around a compilation issue in the JIT: https://github.com/pytorch/pytorch/commit/0481da14052c09605b38940e08d8cae275c6a8cd - this is definitely a hack to get the compilation working, a proper implementation for PyPy needs to be in place.
2. **[merged]** `PySlice_Unpack` is not yet available in PyPy 3.6 - https://github.com/pytorch/pytorch/commit/752e204c90a782432eed77018e4fe3ad5fa4dbf2 - probably it's ok to merge this. With the two above fixes, pytorch compiles on PyPy 3.6. PR https://github.com/pytorch/pytorch/pull/17836
3. **[merged in https://github.com/pybind/pybind11/pull/2146 ]** Applying https://github.com/pybind/pybind11/pull/1494 to pybind11 - https://github.com/pytorch/pytorch/commit/912b0b2a77b3ea0a5bac9fbe3ac5d577c86c6d7. Unfortunately, while the fix seems correct, and pybind11 has partial support and CI for PyPy, more work needs to be done to properly integrate it into pybind11, because other pybind11 tests are failing on a more recent PyPy, see https://github.com/pybind/pybind11/pull/1494 and https://github.com/pybind/pybind11/pull/1720
4. **[merged]** work around PyPy cpyext issue - https://github.com/pytorch/pytorch/commit/00941d8ae4e28bd03245a04c5096297c35a3128d - see https://bitbucket.org/pypy/pypy/issues/2968/segfault-calling-cpyext_tp_new_tuple (thanks @rlamy for the fix!), it seems that it is fine to merge this. PR https://github.com/pytorch/pytorch/pull/17837
Thanks to @xuhdev for an already merged PyPy compilation fix https://github.com/pytorch/pytorch/pull/11857
I submitted PRs for 2 and 4. If anyone can pick other items, that would be great.
While these are quite small fixes, still properly integrating them is not a small task, not speaking of setting up CI and supporting it.
cc @ezyang | module: binaries,feature,triaged | high | Critical |
419,048,627 | opencv | cv2.imread error when reading a jp2 image with 7 channels | - OpenCV => 4.0.1
- Operating System / Platform => Windows 10 64 Bit Python 3.6
- .pyd Compiler => Visual Studio 2015
cv2.imread returns None when reading a jp2 image with 7 channels.
image url:http://www.usable-programming.com/zb_users/upload/2019/03/bavaria_north_7_channels.zip
| priority: low,category: imgcodecs | low | Critical |
419,057,565 | vscode | default run task | It would be nice if there was a default run task (build + execute), similar to the existing default build and test tasks. | feature-request,tasks | low | Major |
419,058,570 | go | x/image/font/sfnt: read more glyph metrics | Aligning text (left/right) requires access to more glyph metrics than AdvanceX. The side bearings (LSB/RSB) are required to know how much "whitespace" there is left/right of a glyph.
I'd like to propose a GlyphMetrics method which calculates and returns the LSB, RSB, Width and AdvanceX.
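To illustrate how these metrics relate, here is a rough numeric sketch (this is not the actual x/image/font/sfnt API; the function name and arguments are hypothetical): given a glyph's horizontal bounding box and its advance, the side bearings fall out as simple differences.

```python
def glyph_metrics(x_min, x_max, advance):
    """Return (lsb, rsb, width) for a glyph, in font units.

    lsb: whitespace between the glyph origin and the leftmost outline point.
    rsb: whitespace between the rightmost outline point and the advance.
    """
    lsb = x_min
    width = x_max - x_min
    rsb = advance - x_max
    return lsb, rsb, width

# A glyph whose outline spans [50, 550] with an advance of 600 units:
print(glyph_metrics(50, 550, 600))  # (50, 50, 500)
```

With LSB and RSB available, right-aligning text becomes a matter of subtracting the last glyph's RSB instead of guessing from AdvanceX alone.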
CL with working prototype is on its way. | NeedsDecision | low | Minor |
419,074,540 | godot | The Audio bottom panel goes under the taskbar on small displays (also SpriteFrames editor, shader editor, etc.) | **Godot version:**
v3.1.rc1
**OS/device including version:**
windows 10 x64
**Issue description:**
When clicking on the "Audio" tab, the bottom side of Godot goes under the taskbar until toggling full screen and changing to another tab (Output, Debugger, ...).


| bug,topic:editor,confirmed | medium | Critical |
419,097,871 | TypeScript | [BUG] Comments between export and import modules are removed | **TypeScript Version:** 3.3.3333333333333333
**Search Terms:**
* Comments
* treeshaking
* __pure__
**Code**
https://www.typescriptlang.org/play/index.html#src=import%20%7B%20m%20%7D%20from%20'm'%3B%0D%0Aconst%20t1%20%3D%20%2F*%40__PURE__*%2F%20m.bind(m)%3B%20%2F%2Fkeep%0D%0A%2F%2Fhere%20is%20a%20bug%20-%3E%20%20%F0%9F%91%87%20removed%20%F0%9F%91%87%0D%0Aexport%20const%20test%20%3D%20%2F*%40__PURE__*%2F%20m.bind(m)%3B
```ts
import { m } from 'm';
const t1 = /*@__PURE__*/ m.bind(m); //keep
//here is a bug -> 👇 removed 👇
export const test = /*@__PURE__*/ m.bind(m);
```
**Related Issues:**
#28370
#13721
#28482
| Bug,Domain: Comment Emit | low | Critical |
419,103,075 | vue | Pass component instance as second argument in computed setters | ### What problem does this feature solve?
Allow the usage of arrow functions in computed setters
From #7688
```js
computed: {
value: {
get: vm => vm.someValue,
set: (val, vm) => vm.someValue = val
}
}
```
### What does the proposed API look like?
```js
computed: {
value: {
get: vm => vm.someValue,
set (val, vm) {
this === vm // true
}
}
}
```
<!-- generated by vue-issues. DO NOT REMOVE --> | feature request,has PR | low | Major |
419,120,694 | vscode | Git - Pull and Sync fails when name of tracking branch differs from name of tracked branch on remote |
Issue Type: <b>Bug</b>
I have several branches that have another name, or rather path, in the upstream repository, e.g. local branch `foo` pushes to and fetches from branch `path/to/foo` in the upstream repository. But the tracking branch is `upstream/foo`, so the tracking branch path doesn't reflect the upstream branch path.
With this setup, the `git.pull` and `git.sync` commands fail with a message "Git: fatal: couldn't find remote ref foo".
Studying the code, I found that the bug is either at extensions/git/src/git.ts, lines 1462 - 1470, https://github.com/Microsoft/vscode/blob/c63c97c12848e85769e717209b73110e83c18ef6/extensions/git/src/git.ts#L1462-L1470
or at extensions/git/src/repository.ts, lines 924, 944 and 1006,
https://github.com/Microsoft/vscode/blob/c63c97c12848e85769e717209b73110e83c18ef6/extensions/git/src/repository.ts#L924
https://github.com/Microsoft/vscode/blob/c63c97c12848e85769e717209b73110e83c18ef6/extensions/git/src/repository.ts#L944
https://github.com/Microsoft/vscode/blob/c63c97c12848e85769e717209b73110e83c18ef6/extensions/git/src/repository.ts#L1006
Should the `name` field in the `upstream` object really be the name of the remote tracking branch? Shouldn't it be the name of the tracked branch on the remote? Or should the `upstream` object contain another field storing the name of the tracked branch on the remote which then should be used at extensions/git/src/repository.ts, lines 924, 944 and 1006?
VS Code version: Code 1.31.1 (1b8e8302e405050205e69b59abb3559592bb9e60, 2019-02-12T02:20:54.427Z)
OS version: Windows_NT x64 10.0.17134
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i7-6500U CPU @ 2.50GHz (4 x 2592)|
|GPU Status|2d_canvas: enabled<br>checker_imaging: disabled_off<br>flash_3d: enabled<br>flash_stage3d: enabled<br>flash_stage3d_baseline: enabled<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>native_gpu_memory_buffers: disabled_software<br>rasterization: enabled<br>surface_synchronization: enabled_on<br>video_decode: enabled<br>webgl: enabled<br>webgl2: enabled|
|Memory (System)|7.89GB (0.31GB free)|
|Process Argv|c:\Users\perpe\OneDrive - Adaptive Simulations Sweden AB\Source\VSCode Workspaces\NewPostProcessing.code-workspace|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (55)</summary>
Extension|Author (truncated)|Version
---|---|---
calculate|aca|2.1.0
Bookmarks|ale|10.2.2
project-manager|ale|10.3.2
code-gnu-global|aus|0.2.2
awarest-align|awa|1.1.0
path-intellisense|chr|1.4.2
ssh|chr|0.0.4
bracket-pair-colorizer|Coe|1.0.61
vscode-markdownlint|Dav|0.25.0
empty-indent|Dmi|0.2.0
vscode-gitignore-syntax|dun|0.1.2
gitlens|eam|9.5.1
tslint|eg2|1.0.43
hungry-backspace|ekl|1.0.0
prettier-vscode|esb|1.8.1
vscode-pyvmmonitor|fab|0.0.3
file-icons|fil|1.0.16
matlab|Gim|0.9.0
todo-tree|Gru|0.0.124
rest-client|hum|0.21.2
python-coding-conventions|igr|0.0.4
latex-workshop|Jam|6.1.0
vscode-jira|Kni|0.6.0
rest-client|Kro|0.18.3
git-tree-compare|let|1.5.0
Star|lip|0.0.3
remotefs|lix|0.0.13
vscode-smart-column-indenter|lmc|0.0.13
vscode-3dviewer|md2|1.0.0
rainbow-csv|mec|1.0.0
git-graph|mhu|1.4.1
vscode-deploy-reloaded|mkl|0.87.2
vscode-remote-workspace|mkl|0.41.0
prettify-json|moh|0.0.3
python|ms-|2019.2.5558
cpptools|ms-|0.21.0
Go|ms-|0.9.2
printcode|nob|3.0.0
indent-rainbow|ode|7.3.0
vscode-code-outline|pat|0.2.1
vscode-versionlens|pfl|0.22.0
material-icon-theme|PKi|3.6.3
quicktype|qui|12.0.46
datetime|rid|1.0.5
vscode-paste-and-indent|Rub|0.0.8
copy-text|sal|0.4.3
gitconfig|sid|2.0.0
open-file-between-two-folder|sky|0.0.1
vscode-hexdump|sle|1.7.2
python|tht|0.2.3
hosts|tom|1.1.1
python-extended-snippets|tus|0.0.1
vscodeintellicode|Vis|1.1.4
vscode-todo-highlight|way|1.0.4
pull-requester|yos|0.1.5
</details>
<!-- generated by issue reporter --> | bug,git | low | Critical |
419,138,911 | TypeScript | Provide .ts developer experience for .js modules (with a .d.ts without JSDoc) | ## Search Terms
checking .js files with declared types
reference types in .js files
.js developer experience
checking .js types without JSDoc
<!-- List of keywords you searched for before creating this issue. Write them down here so that others can find this suggestion more easily -->
## Suggestion
I'd like to have the same developer experience of writing .ts files that have their types declared in .d.ts files when writing .js files with types declared in .d.ts files. I've asked a question about this (apparently missing) feature on [Stack Overflow](https://stackoverflow.com/questions/55083262/checking-my-libraries-js-code-with-my-libraries-corresponding-d-ts-code). This issue/feature is similar to #29056; however, this would be for the developers writing the module rather than those consuming the module. Also, be aware that the JS project does not support JSDoc typings.
To reiterate (from the SOF post) the story for this feature:
Imagine you are working on a project MyLibrary. It is written in JavaScript (MyLibrary.js) and you have also written a TypeScript declaration file (MyLibrary.d.ts). It is published to npm alongside your JS code so you can provide TS developers the ability to consume your project code and use it in TypeScript projects.
Now, you have some contributors to MyLibrary that are TypeScript developers. They would like the typings written in MyLibrary.d.ts to be inferred in the MyLibrary.js code (essentially granting them the TS dev experience while writing JS code).
## Use Cases
The Fastify Node.js server project is written in JavaScript and provides a fastify.d.ts file for typings. As a maintainer of this project I'd like for the types defined in this file to be referenced in the fastify.js file.
This type of dev experience might be difficult because, for example, the fastify.js file exports a single function `build`. When a dev uses fastify they would often write `const fastify = require('fastify')` and then go from there. Our typings do not define types for `build` but for a module namespace `Fastify` object. If things worked like I wanted them to, I'd imagine the `build` function would need to be renamed to whatever I'm using in the type file.
I'm aware this feature request is maybe an anti-pattern, but I'd like to share it nonetheless to at least be discussed. I think it would be brilliant to provide a nearly equivalent developer experience for both JavaScript and TypeScript developers working on the same module library.
If this feature is already being worked on and I failed to land on it from my searches please link me to relevant issues and/or prs. I did search this repo issues, read the FAQ, read the 3.4 feature doc, and searched tirelessly on google.
## Checklist
My suggestion meets these guidelines:
* [ ] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [ ] This wouldn't change the runtime behavior of existing JavaScript code
* [ ] This could be implemented without emitting different JS based on the types of the expressions
* [ ] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [ ] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback | medium | Critical |
419,174,455 | pytorch | Tensor::options() returns false for requires_grad when it is true | ## 🐛 Bug?
Hello,
I have very simple c++ function:
```cpp
#include <torch/extension.h>
#include <iostream>
void print_options(torch::Tensor x) {
std::cout << x.options() << "\n";
}
PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
m.def("print_options", &print_options, "");
}
```
which I use via JIT compiling extensions functionality:
```python
import torch
from torch.utils import cpp_extension
cuda_module = cpp_extension.load(name="cpp_module", sources=["cpp_module/my_fun.cpp"])
print_options = cuda_module.print_options
a = torch.zeros(2, 3)
a.requires_grad = True
print(a.requires_grad)
print_options(a)
```
And here is the output:
```
True
TensorOptions(dtype=float, device=cpu, layout=Strided, requires_grad=false)
```
Is this expected behaviour? Why does passing a tensor to the C++ function change its options (`requires_grad` in particular)?
- PyTorch Version is 1.0.1
cc @yf225 @glaringlee @zou3519 | module: cpp-extensions,triaged | low | Critical |
419,176,323 | youtube-dl | [TeachableCourse] unable to extract course title | ### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2019.03.09*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [X] I've **verified** and **I assure** that I'm running youtube-dl **2019.03.09**
### Before submitting an *issue* make sure you have:
- [X] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [X] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
- [X] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser
### What is the purpose of your *issue*?
- [X] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
---
### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows:
```
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: [u'-u', u'PRIVATE', u'-p', u'PRIVATE', u'https://stackskills.com/courses/enrolled/482418', u'--verbose']
[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2019.03.09
[debug] Python version 2.7.15 (CPython) - Linux-4.4.0-17763-Microsoft-x86_64-with
[debug] exe versions: ffmpeg 4.0.2, ffprobe 4.0.2
[debug] Proxy map: {}
[TeachableCourse] Downloading stackskills.com login page
[TeachableCourse] Logging in to stackskills.com
[TeachableCourse] 482418: Downloading webpage
WARNING: unable to extract course title; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; see https://yt-dl.org/update on how to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
[download] Downloading playlist: 482418
[TeachableCourse] playlist 482418: Collected 0 video ids (downloading 0 of them)
[download] Finished downloading playlist: 482418
...
<end of log>
```
---
### Description of your *issue*, suggested solution and other information
The issue is that the title of the course can not be downloaded anymore.
I would presume that means that the URL or the returned information has changed.
Therefore a fix would require a check of the returned data and adjusting the extraction process.
| account-needed | low | Critical |
419,179,062 | vue | Vue should not cause execution of content within <noscript> tags | ### Version
2.6.8
### Reproduction link
[https://jsfiddle.net/ncwuvekm/1/](https://jsfiddle.net/ncwuvekm/1/)
### Steps to reproduce
Check the "network" tab.
You can see that Vue causes a request to the content within the `<noscript>`
### What is expected?
Vue should not cause execution of `<noscript>`content
### What is actually happening?
Vue causes requests to elements within `<noscript>`
---
It's typical for websites to use Vue to supplement the UI, and a common pattern is to wrap the site with an `id="app"`.
However, this introduces a problem when people are using `<noscript>` in various areas of the site that are not within Vue components as Vue will execute it regardless.
A use case example would be using a lazy loading library with a `<noscript>` fallback.
```html
<img src="thumbnail.jpg" data-src="hi-resolution.jpg" />
<noscript><img src="hi-resolution.jpg" /></noscript>
```
Vue will cause the `hi-resolution.jpg` image to download even though it's within the `<noscript>`
<!-- generated by vue-issues. DO NOT REMOVE -->
> For anybody experiencing this issue, a hack is to add `v-if="false"` to your `<noscript>` to prevent the element from rendering e.g.`<noscript v-if="false">` | has workaround | low | Major |
419,187,649 | flutter | Points of Path not Accessible | I am using the Path class from the painting.dart file to draw lines with a CustomPainter.
There is no way for me to access the contained points.
This is critical, as the number of points stored in the Path object can get huge (around 100k), which will lead to bad performance, or I simply cannot work with these points.
As I can't access the points, I cannot do anything about it.
On the other hand, storing these points in another list would unnecessarily increase the used memory.
Please add a simple attribute or getter to access the points stored in a Path object.
Thanks! | c: new feature,engine,P3,team-engine,triaged-engine | high | Major |
419,203,274 | go | cmd/go: go mod why fails for replacement modules | <pre>
$ go version
go version devel +ce7534ff06 Fri Mar 8 13:46:43 2019 +0000 linux/amd64
</pre>
When one module is replaced by another, `go mod why` does not acknowledge the replacement module.
Here's an example (run https://godoc.org/github.com/rogpeppe/go-internal/cmd/testscript on the file below to reproduce):
It prints "main module does not need module" for `example.com/c`, but `example.com/c` is definitely in use as the replacement for `example.com/a`, which isn't actually used at all.
```
# This command succeeds; Note: example.com/d is only
# depended on by example.com/c, but not by example.com/a, which is
# never used.
go mod why -m example.com/d
# This command succeeds, even though the program does not
# depend on example.com/a except as a replacement target.
go mod why -m example.com/a
! stdout 'main module does not need module'
# example.com/c is definitely used (and it's probably important
# that the user know that it is), but this command fails.
go mod why -m example.com/c
! stdout 'main module does not need module'
-- go.mod --
module m
go 1.11
require (
example.com/a v1.0.0
example.com/d v1.0.0 // indirect
)
replace example.com/a => example.com/c v1.0.0
-- go.sum --
example.com/c v1.0.0 h1:+JKa2qCailgQdye6M3nnfrb7q748qDLj2NNTEtN2DJs=
example.com/c v1.0.0/go.mod h1:NeOsx/KTizj35klXP3wYh3O0751aAtYrRoX+a6YAye8=
example.com/d v1.0.0 h1:+FTMPN+4iCiTwx9DTGWb788knewVgy4CGwf1SjyLpXY=
example.com/d v1.0.0/go.mod h1:jpRNKJ+rI4SFCFqRJlfe7G4saIJvHgJss1TcTCWmY18=
-- main.go --
package main
import _ "example.com/a"
func main() {
}
-- .gomodproxy/example.com_a_v1.0.0/.info --
{"Version":"v1.0.0","Time":"2018-10-22T18:45:39Z"}
-- .gomodproxy/example.com_a_v1.0.0/.mod --
module example.com/a
-- .gomodproxy/example.com_a_v1.0.0/a.go --
package a
import _ "example.com/b"
-- .gomodproxy/example.com_a_v1.0.0/go.mod --
module example.com/a
require example.com/b v1.0.0
-- .gomodproxy/example.com_b_v1.0.0/.info --
{"Version":"v1.0.0","Time":"2018-10-22T18:45:39Z"}
-- .gomodproxy/example.com_b_v1.0.0/.mod --
module example.com/b
-- .gomodproxy/example.com_b_v1.0.0/b.go --
package b
-- .gomodproxy/example.com_b_v1.0.0/go.mod --
module example.com/b
-- .gomodproxy/example.com_c_v1.0.0/.info --
{"Version":"v1.0.0","Time":"2018-10-22T18:45:39Z"}
-- .gomodproxy/example.com_c_v1.0.0/.mod --
module example.com/a
-- .gomodproxy/example.com_c_v1.0.0/a.go --
package a
import _ "example.com/d"
-- .gomodproxy/example.com_c_v1.0.0/go.mod --
module example.com/a
require example.com/d v1.0.0
-- .gomodproxy/example.com_d_v1.0.0/.info --
{"Version":"v1.0.0","Time":"2018-10-22T18:45:39Z"}
-- .gomodproxy/example.com_d_v1.0.0/.mod --
module example.com/d
-- .gomodproxy/example.com_d_v1.0.0/b.go --
package d
import _ "testing"
-- .gomodproxy/example.com_d_v1.0.0/go.mod --
module example.com/d
```
| NeedsInvestigation,modules | low | Minor |
419,203,756 | go | cmd/go: go mod why exit status should be non-zero if module is not used | <pre>
$ go version
go version devel +ce7534ff06 Fri Mar 8 13:46:43 2019 +0000 linux/amd64
</pre>
When `go mod why` is asked to show a module that isn't a dependency, it prints a "main module does not need" line, but it succeeds regardless.
Perhaps it should be changed so that the exit status reflects whether the dependency was actually found or not.
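Until then, a caller has to detect the condition from the command's output itself. A hedged Python sketch of such a wrapper follows; the helper names are made up, and only the "main module does not need module" message text comes from `go mod why`'s actual output:

```python
import subprocess

def module_is_needed(why_output: str) -> bool:
    """True unless `go mod why` reported the module as unneeded."""
    return "does not need module" not in why_output

def mod_why_exit_status(module: str) -> int:
    """Run `go mod why -m <module>` and map 'not needed' to exit status 1."""
    out = subprocess.run(
        ["go", "mod", "why", "-m", module],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out, end="")
    return 0 if module_is_needed(out) else 1
```

If the proposed change lands, the wrapper becomes unnecessary and scripts can rely on the exit status directly.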
| NeedsFix,modules | low | Minor |
419,206,760 | pytorch | Build fails for caffe2/CMakeFiles/caffe2.dir/__/aten/src/ATen/native/mkldnn/Conv.cpp.o; possibly MKL-DNN issue | ## 🐛 Bug
PyTorch fails to build with an issue related to MKL-DNN. I previously built MKL-DNN by `cmake`ing from their GitHub repo.
Error trace is below:
```bash
[91mIn file included from /opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:37:
/opt/pytorch/aten/src/ATen/mkldnn/Runtime.h: In constructor βat::native::Stream::Stream()β:
/opt/pytorch/aten/src/ATen/mkldnn/Runtime.h:42:40: error: βmkldnn::stream::kindβ has not been declared
Stream():_cpu_stream(mkldnn::stream::kind::eager) {}
^~~~
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp: In function βat::Tensor at::native::mkldnn_convolution(const at::Tensor&, const at::Tensor&, const at::Tensor&, c10::IntArrayRef, c10::IntArrayRef, c10::IntArrayRef, int64_t)β:
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:98:29: error: βmkldnn::memory::formatβ has not been declared
auto format_any = memory::format::any;
^~~~~~
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:99:30: error: βmkldnn::memory::formatβ has not been declared
auto format_nchw = memory::format::nchw;
^~~~~~
[0m[91m/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:100:42: error: βmkldnn::memory::formatβ has not been declared
auto format_weight = (g!= 1) ? memory::format::goihw : memory::format::oihw;
^~~~~~
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:100:66: error: βmkldnn::memory::formatβ has not been declared
auto format_weight = (g!= 1) ? memory::format::goihw : memory::format::oihw;
^~~~~~
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:101:27: error: βmkldnn::memory::formatβ has not been declared
auto format_x = memory::format::x;
^~~~~~
[0m[91m/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:131:21: error: no matching function for call to βmkldnn::memory::memory(<brace-enclosed initializer list>, void*)β
input.data_ptr());
^
In file included from /opt/pytorch/aten/src/ATen/mkldnn/Runtime.h:3,
from /opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:37:
/usr/local/include/mkldnn.hpp:900:5: note: candidate: βmkldnn::memory::memory(const mkldnn::memory::desc&, const mkldnn::engine&)β
memory(const desc &md, const engine &aengine)
^~~~~~
/usr/local/include/mkldnn.hpp:900:5: note: no known conversion for argument 1 from β<brace-enclosed initializer list>β to βconst mkldnn::memory::desc&β
/usr/local/include/mkldnn.hpp:889:5: note: candidate: βmkldnn::memory::memory(const mkldnn::memory::desc&, const mkldnn::engine&, void*)β
memory(const desc &md, const engine &aengine, void *ahandle) {
^~~~~~
/usr/local/include/mkldnn.hpp:889:5: note: candidate expects 3 arguments, 2 provided
/usr/local/include/mkldnn.hpp:577:8: note: candidate: βmkldnn::memory::memory(const mkldnn::memory&)β
struct memory: public handle<mkldnn_memory_t> {
^~~~~~
/usr/local/include/mkldnn.hpp:577:8: note: candidate expects 1 argument, 2 provided
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:133:22: error: no matching function for call to βmkldnn::memory::memory(<brace-enclosed initializer list>, void*)β
weight.data_ptr());
^
In file included from /opt/pytorch/aten/src/ATen/mkldnn/Runtime.h:3,
from /opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:37:
/usr/local/include/mkldnn.hpp:900:5: note: candidate: βmkldnn::memory::memory(const mkldnn::memory::desc&, const mkldnn::engine&)β
memory(const desc &md, const engine &aengine)
^~~~~~
/usr/local/include/mkldnn.hpp:900:5: note: no known conversion for argument 1 from β<brace-enclosed initializer list>β to βconst mkldnn::memory::desc&β
/usr/local/include/mkldnn.hpp:889:5: note: candidate: βmkldnn::memory::memory(const mkldnn::memory::desc&, const mkldnn::engine&, void*)β
memory(const desc &md, const engine &aengine, void *ahandle) {
^~~~~~
/usr/local/include/mkldnn.hpp:889:5: note: candidate expects 3 arguments, 2 provided
/usr/local/include/mkldnn.hpp:577:8: note: candidate: βmkldnn::memory::memory(const mkldnn::memory&)β
struct memory: public handle<mkldnn_memory_t> {
^~~~~~
/usr/local/include/mkldnn.hpp:577:8: note: candidate expects 1 argument, 2 provided
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:135:22: error: no matching function for call to βmkldnn::memory::memory(<brace-enclosed initializer list>, void*)β
output.data_ptr());
^
In file included from /opt/pytorch/aten/src/ATen/mkldnn/Runtime.h:3,
from /opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:37:
/usr/local/include/mkldnn.hpp:900:5: note: candidate: βmkldnn::memory::memory(const mkldnn::memory::desc&, const mkldnn::engine&)β
memory(const desc &md, const engine &aengine)
^~~~~~
/usr/local/include/mkldnn.hpp:900:5: note: no known conversion for argument 1 from β<brace-enclosed initializer list>β to βconst mkldnn::memory::desc&β
/usr/local/include/mkldnn.hpp:889:5: note: candidate: βmkldnn::memory::memory(const mkldnn::memory::desc&, const mkldnn::engine&, void*)β
memory(const desc &md, const engine &aengine, void *ahandle) {
^~~~~~
/usr/local/include/mkldnn.hpp:889:5: note: candidate expects 3 arguments, 2 provided
/usr/local/include/mkldnn.hpp:577:8: note: candidate: βmkldnn::memory::memory(const mkldnn::memory&)β
struct memory: public handle<mkldnn_memory_t> {
^~~~~~
/usr/local/include/mkldnn.hpp:577:8: note: candidate expects 1 argument, 2 provided
[0m[91m/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:139:36: error: βusing element_type = struct mkldnn::convolution_forward::primitive_descβ {aka βstruct mkldnn::convolution_forward::primitive_descβ} has no member named βsrc_primitive_descβ; did you mean βprimitive_descβ?
auto input_pd = conv_forward_pd->src_primitive_desc();
^~~~~~~~~~~~~~~~~~
primitive_desc
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:140:23: error: unable to deduce βautoβ from βinput_usr_memoryβ
auto input_memory = input_usr_memory;
^~~~~~~~~~~~~~~~
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:141:56: error: βprimitive_descβ is not a member of βmkldnn::memoryβ
if (input_usr_memory.get_primitive_desc() != memory::primitive_desc(input_pd)) {
^~~~~~~~~~~~~~
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:146:37: error: βusing element_type = struct mkldnn::convolution_forward::primitive_descβ {aka βstruct mkldnn::convolution_forward::primitive_descβ} has no member named βweights_primitive_descβ; did you mean βprimitive_descβ?
auto weight_pd = conv_forward_pd->weights_primitive_desc();
^~~~~~~~~~~~~~~~~~~~~~
primitive_desc
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:147:24: error: unable to deduce βautoβ from βweight_usr_memoryβ
auto weight_memory = weight_usr_memory;
^~~~~~~~~~~~~~~~~
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:148:57: error: βprimitive_descβ is not a member of βmkldnn::memoryβ
if (weight_usr_memory.get_primitive_desc() != memory::primitive_desc(weight_pd)) {
^~~~~~~~~~~~~~
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:153:37: error: βusing element_type = struct mkldnn::convolution_forward::primitive_descβ {aka βstruct mkldnn::convolution_forward::primitive_descβ} has no member named βdst_primitive_descβ; did you mean βprimitive_descβ?
auto output_pd = conv_forward_pd->dst_primitive_desc();
^~~~~~~~~~~~~~~~~~
primitive_desc
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:154:24: error: unable to deduce βautoβ from βoutput_usr_memoryβ
auto output_memory = output_usr_memory;
^~~~~~~~~~~~~~~~~
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:155:57: error: βprimitive_descβ is not a member of βmkldnn::memoryβ
if (output_usr_memory.get_primitive_desc() != memory::primitive_desc(output_pd)) {
^~~~~~~~~~~~~~
[0m[91m/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:163:22: error: no matching function for call to βmkldnn::memory::memory(<brace-enclosed initializer list>, void*)β
bias.data_ptr()));
^
In file included from /opt/pytorch/aten/src/ATen/mkldnn/Runtime.h:3,
from /opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:37:
/usr/local/include/mkldnn.hpp:900:5: note: candidate: βmkldnn::memory::memory(const mkldnn::memory::desc&, const mkldnn::engine&)β
memory(const desc &md, const engine &aengine)
^~~~~~
/usr/local/include/mkldnn.hpp:900:5: note: no known conversion for argument 1 from β<brace-enclosed initializer list>β to βconst mkldnn::memory::desc&β
/usr/local/include/mkldnn.hpp:889:5: note: candidate: βmkldnn::memory::memory(const mkldnn::memory::desc&, const mkldnn::engine&, void*)β
memory(const desc &md, const engine &aengine, void *ahandle) {
^~~~~~
/usr/local/include/mkldnn.hpp:889:5: note: candidate expects 3 arguments, 2 provided
/usr/local/include/mkldnn.hpp:577:8: note: candidate: βmkldnn::memory::memory(const mkldnn::memory&)β
struct memory: public handle<mkldnn_memory_t> {
^~~~~~
/usr/local/include/mkldnn.hpp:577:8: note: candidate expects 1 argument, 2 provided
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:176:35: error: βstruct mkldnn::streamβ has no member named βsubmitβ
Stream::Instance().get_stream().submit(net);
^~~~~~
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp: In function βat::Tensor at::native::mkldnn_convolution_backward_input(c10::IntArrayRef, const at::Tensor&, const at::Tensor&, c10::IntArrayRef, c10::IntArrayRef, c10::IntArrayRef, int64_t, bool)β:
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:209:29: error: βmkldnn::memory::formatβ has not been declared
auto format_any = memory::format::any;
^~~~~~
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:210:30: error: βmkldnn::memory::formatβ has not been declared
auto format_nchw = memory::format::nchw;
^~~~~~
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:211:42: error: βmkldnn::memory::formatβ has not been declared
auto format_weight = (g!= 1) ? memory::format::goihw : memory::format::oihw;
^~~~~~
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:211:66: error: βmkldnn::memory::formatβ has not been declared
auto format_weight = (g!= 1) ? memory::format::goihw : memory::format::oihw;
^~~~~~
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:251:27: error: no matching function for call to βmkldnn::memory::memory(<brace-enclosed initializer list>, void*)β
grad_output.data_ptr());
^
In file included from /opt/pytorch/aten/src/ATen/mkldnn/Runtime.h:3,
from /opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:37:
/usr/local/include/mkldnn.hpp:900:5: note: candidate: βmkldnn::memory::memory(const mkldnn::memory::desc&, const mkldnn::engine&)β
memory(const desc &md, const engine &aengine)
^~~~~~
/usr/local/include/mkldnn.hpp:900:5: note: no known conversion for argument 1 from β<brace-enclosed initializer list>β to βconst mkldnn::memory::desc&β
/usr/local/include/mkldnn.hpp:889:5: note: candidate: βmkldnn::memory::memory(const mkldnn::memory::desc&, const mkldnn::engine&, void*)β
memory(const desc &md, const engine &aengine, void *ahandle) {
^~~~~~
/usr/local/include/mkldnn.hpp:889:5: note: candidate expects 3 arguments, 2 provided
/usr/local/include/mkldnn.hpp:577:8: note: candidate: βmkldnn::memory::memory(const mkldnn::memory&)β
struct memory: public handle<mkldnn_memory_t> {
^~~~~~
/usr/local/include/mkldnn.hpp:577:8: note: candidate expects 1 argument, 2 provided
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:253:22: error: no matching function for call to βmkldnn::memory::memory(<brace-enclosed initializer list>, void*)β
weight.data_ptr());
^
In file included from /opt/pytorch/aten/src/ATen/mkldnn/Runtime.h:3,
from /opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:37:
/usr/local/include/mkldnn.hpp:900:5: note: candidate: βmkldnn::memory::memory(const mkldnn::memory::desc&, const mkldnn::engine&)β
memory(const desc &md, const engine &aengine)
^~~~~~
/usr/local/include/mkldnn.hpp:900:5: note: no known conversion for argument 1 from β<brace-enclosed initializer list>β to βconst mkldnn::memory::desc&β
/usr/local/include/mkldnn.hpp:889:5: note: candidate: βmkldnn::memory::memory(const mkldnn::memory::desc&, const mkldnn::engine&, void*)β
memory(const desc &md, const engine &aengine, void *ahandle) {
^~~~~~
/usr/local/include/mkldnn.hpp:889:5: note: candidate expects 3 arguments, 2 provided
/usr/local/include/mkldnn.hpp:577:8: note: candidate: βmkldnn::memory::memory(const mkldnn::memory&)β
struct memory: public handle<mkldnn_memory_t> {
^~~~~~
/usr/local/include/mkldnn.hpp:577:8: note: candidate expects 1 argument, 2 provided
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:255:26: error: no matching function for call to βmkldnn::memory::memory(<brace-enclosed initializer list>, void*)β
grad_input.data_ptr());
^
In file included from /opt/pytorch/aten/src/ATen/mkldnn/Runtime.h:3,
from /opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:37:
/usr/local/include/mkldnn.hpp:900:5: note: candidate: βmkldnn::memory::memory(const mkldnn::memory::desc&, const mkldnn::engine&)β
memory(const desc &md, const engine &aengine)
^~~~~~
/usr/local/include/mkldnn.hpp:900:5: note: no known conversion for argument 1 from β<brace-enclosed initializer list>β to βconst mkldnn::memory::desc&β
/usr/local/include/mkldnn.hpp:889:5: note: candidate: βmkldnn::memory::memory(const mkldnn::memory::desc&, const mkldnn::engine&, void*)β
memory(const desc &md, const engine &aengine, void *ahandle) {
^~~~~~
/usr/local/include/mkldnn.hpp:889:5: note: candidate expects 3 arguments, 2 provided
/usr/local/include/mkldnn.hpp:577:8: note: candidate: βmkldnn::memory::memory(const mkldnn::memory&)β
struct memory: public handle<mkldnn_memory_t> {
^~~~~~
/usr/local/include/mkldnn.hpp:577:8: note: candidate expects 1 argument, 2 provided
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:259:48: error: βusing element_type = struct mkldnn::convolution_backward_data::primitive_descβ {aka βstruct mkldnn::convolution_backward_data::primitive_descβ} has no member named βdiff_dst_primitive_descβ; did you mean βprimitive_descβ?
auto grad_output_pd = conv_backward_data_pd->diff_dst_primitive_desc();
^~~~~~~~~~~~~~~~~~~~~~~
primitive_desc
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:260:29: error: unable to deduce βautoβ from βgrad_output_usr_memoryβ
auto grad_output_memory = grad_output_usr_memory;
^~~~~~~~~~~~~~~~~~~~~~
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:261:62: error: βprimitive_descβ is not a member of βmkldnn::memoryβ
if (grad_output_usr_memory.get_primitive_desc() != memory::primitive_desc(grad_output_pd)) {
^~~~~~~~~~~~~~
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:266:43: error: βusing element_type = struct mkldnn::convolution_backward_data::primitive_descβ {aka βstruct mkldnn::convolution_backward_data::primitive_descβ} has no member named βweights_primitive_descβ; did you mean βprimitive_descβ?
auto weight_pd = conv_backward_data_pd->weights_primitive_desc();
^~~~~~~~~~~~~~~~~~~~~~
primitive_desc
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:267:24: error: unable to deduce βautoβ from βweight_usr_memoryβ
auto weight_memory = weight_usr_memory;
^~~~~~~~~~~~~~~~~
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:268:57: error: βprimitive_descβ is not a member of βmkldnn::memoryβ
if (weight_usr_memory.get_primitive_desc() != memory::primitive_desc(weight_pd)) {
^~~~~~~~~~~~~~
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:273:47: error: βusing element_type = struct mkldnn::convolution_backward_data::primitive_descβ {aka βstruct mkldnn::convolution_backward_data::primitive_descβ} has no member named βdiff_src_primitive_descβ; did you mean βprimitive_descβ?
auto grad_input_pd = conv_backward_data_pd->diff_src_primitive_desc();
^~~~~~~~~~~~~~~~~~~~~~~
primitive_desc
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:274:28: error: unable to deduce βautoβ from βgrad_input_usr_memoryβ
auto grad_input_memory = grad_input_usr_memory;
^~~~~~~~~~~~~~~~~~~~~
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:275:57: error: βprimitive_descβ is not a member of βmkldnn::memoryβ
if (grad_input_memory.get_primitive_desc() != memory::primitive_desc(grad_input_pd)) {
^~~~~~~~~~~~~~
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:288:35: error: βstruct mkldnn::streamβ has no member named βsubmitβ
Stream::Instance().get_stream().submit(net);
^~~~~~
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp: In function βstd::tuple<at::Tensor, at::Tensor> at::native::mkldnn_convolution_backward_weights(c10::IntArrayRef, const at::Tensor&, const at::Tensor&, c10::IntArrayRef, c10::IntArrayRef, c10::IntArrayRef, int64_t, bool)β:
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:326:29: error: βmkldnn::memory::formatβ has not been declared
auto format_any = memory::format::any;
^~~~~~
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:327:30: error: βmkldnn::memory::formatβ has not been declared
auto format_nchw = memory::format::nchw;
^~~~~~
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:328:42: error: βmkldnn::memory::formatβ has not been declared
auto format_weight = (g!= 1) ? memory::format::goihw : memory::format::oihw;
^~~~~~
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:328:66: error: βmkldnn::memory::formatβ has not been declared
auto format_weight = (g!= 1) ? memory::format::goihw : memory::format::oihw;
^~~~~~
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:329:27: error: βmkldnn::memory::formatβ has not been declared
auto format_x = memory::format::x;
^~~~~~
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:375:21: error: no matching function for call to βmkldnn::memory::memory(<brace-enclosed initializer list>, void*)β
input.data_ptr());
^
In file included from /opt/pytorch/aten/src/ATen/mkldnn/Runtime.h:3,
from /opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:37:
/usr/local/include/mkldnn.hpp:900:5: note: candidate: βmkldnn::memory::memory(const mkldnn::memory::desc&, const mkldnn::engine&)β
memory(const desc &md, const engine &aengine)
^~~~~~
/usr/local/include/mkldnn.hpp:900:5: note: no known conversion for argument 1 from β<brace-enclosed initializer list>β to βconst mkldnn::memory::desc&β
/usr/local/include/mkldnn.hpp:889:5: note: candidate: βmkldnn::memory::memory(const mkldnn::memory::desc&, const mkldnn::engine&, void*)β
memory(const desc &md, const engine &aengine, void *ahandle) {
^~~~~~
/usr/local/include/mkldnn.hpp:889:5: note: candidate expects 3 arguments, 2 provided
/usr/local/include/mkldnn.hpp:577:8: note: candidate: βmkldnn::memory::memory(const mkldnn::memory&)β
struct memory: public handle<mkldnn_memory_t> {
^~~~~~
/usr/local/include/mkldnn.hpp:577:8: note: candidate expects 1 argument, 2 provided
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:377:27: error: no matching function for call to βmkldnn::memory::memory(<brace-enclosed initializer list>, void*)β
grad_output.data_ptr());
^
In file included from /opt/pytorch/aten/src/ATen/mkldnn/Runtime.h:3,
from /opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:37:
/usr/local/include/mkldnn.hpp:900:5: note: candidate: βmkldnn::memory::memory(const mkldnn::memory::desc&, const mkldnn::engine&)β
memory(const desc &md, const engine &aengine)
^~~~~~
/usr/local/include/mkldnn.hpp:900:5: note: no known conversion for argument 1 from β<brace-enclosed initializer list>β to βconst mkldnn::memory::desc&β
/usr/local/include/mkldnn.hpp:889:5: note: candidate: βmkldnn::memory::memory(const mkldnn::memory::desc&, const mkldnn::engine&, void*)β
memory(const desc &md, const engine &aengine, void *ahandle) {
^~~~~~
/usr/local/include/mkldnn.hpp:889:5: note: candidate expects 3 arguments, 2 provided
/usr/local/include/mkldnn.hpp:577:8: note: candidate: βmkldnn::memory::memory(const mkldnn::memory&)β
struct memory: public handle<mkldnn_memory_t> {
^~~~~~
/usr/local/include/mkldnn.hpp:577:8: note: candidate expects 1 argument, 2 provided
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:379:27: error: no matching function for call to βmkldnn::memory::memory(<brace-enclosed initializer list>, void*)β
grad_weight.data_ptr());
^
In file included from /opt/pytorch/aten/src/ATen/mkldnn/Runtime.h:3,
from /opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:37:
/usr/local/include/mkldnn.hpp:900:5: note: candidate: βmkldnn::memory::memory(const mkldnn::memory::desc&, const mkldnn::engine&)β
memory(const desc &md, const engine &aengine)
^~~~~~
/usr/local/include/mkldnn.hpp:900:5: note: no known conversion for argument 1 from β<brace-enclosed initializer list>β to βconst mkldnn::memory::desc&β
/usr/local/include/mkldnn.hpp:889:5: note: candidate: βmkldnn::memory::memory(const mkldnn::memory::desc&, const mkldnn::engine&, void*)β
memory(const desc &md, const engine &aengine, void *ahandle) {
^~~~~~
/usr/local/include/mkldnn.hpp:889:5: note: candidate expects 3 arguments, 2 provided
/usr/local/include/mkldnn.hpp:577:8: note: candidate: βmkldnn::memory::memory(const mkldnn::memory&)β
struct memory: public handle<mkldnn_memory_t> {
^~~~~~
/usr/local/include/mkldnn.hpp:577:8: note: candidate expects 1 argument, 2 provided
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:384:44: error: βusing element_type = struct mkldnn::convolution_backward_weights::primitive_descβ {aka βstruct mkldnn::convolution_backward_weights::primitive_descβ} has no member named βsrc_primitive_descβ; did you mean βprimitive_descβ?
auto input_pd = conv_backward_weight_pd->src_primitive_desc();
^~~~~~~~~~~~~~~~~~
primitive_desc
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:385:23: error: unable to deduce βautoβ from βinput_usr_memoryβ
auto input_memory = input_usr_memory;
^~~~~~~~~~~~~~~~
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:386:56: error: βprimitive_descβ is not a member of βmkldnn::memoryβ
if (input_usr_memory.get_primitive_desc() != memory::primitive_desc(input_pd)) {
^~~~~~~~~~~~~~
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:391:50: error: βusing element_type = struct mkldnn::convolution_backward_weights::primitive_descβ {aka βstruct mkldnn::convolution_backward_weights::primitive_descβ} has no member named βdiff_dst_primitive_descβ; did you mean βprimitive_descβ?
auto grad_output_pd = conv_backward_weight_pd->diff_dst_primitive_desc();
^~~~~~~~~~~~~~~~~~~~~~~
primitive_desc
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:392:29: error: unable to deduce βautoβ from βgrad_output_usr_memoryβ
auto grad_output_memory = grad_output_usr_memory;
^~~~~~~~~~~~~~~~~~~~~~
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:393:62: error: βprimitive_descβ is not a member of βmkldnn::memoryβ
if (grad_output_usr_memory.get_primitive_desc() != memory::primitive_desc(grad_output_pd)) {
^~~~~~~~~~~~~~
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:398:50: error: βusing element_type = struct mkldnn::convolution_backward_weights::primitive_descβ {aka βstruct mkldnn::convolution_backward_weights::primitive_descβ} has no member named βdiff_weights_primitive_descβ; did you mean βdiff_weights_descβ?
auto grad_weight_pd = conv_backward_weight_pd->diff_weights_primitive_desc();
^~~~~~~~~~~~~~~~~~~~~~~~~~~
diff_weights_desc
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:399:29: error: unable to deduce βautoβ from βgrad_weight_usr_memoryβ
auto grad_weight_memory = grad_weight_usr_memory;
^~~~~~~~~~~~~~~~~~~~~~
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:400:62: error: βprimitive_descβ is not a member of βmkldnn::memoryβ
if (grad_weight_usr_memory.get_primitive_desc() != memory::primitive_desc(grad_weight_pd)) {
^~~~~~~~~~~~~~
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:407:27: error: no matching function for call to βmkldnn::memory::memory(<brace-enclosed initializer list>, void*)β
grad_bias.data_ptr()));
^
In file included from /opt/pytorch/aten/src/ATen/mkldnn/Runtime.h:3,
from /opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:37:
/usr/local/include/mkldnn.hpp:900:5: note: candidate: βmkldnn::memory::memory(const mkldnn::memory::desc&, const mkldnn::engine&)β
memory(const desc &md, const engine &aengine)
^~~~~~
/usr/local/include/mkldnn.hpp:900:5: note: no known conversion for argument 1 from β<brace-enclosed initializer list>β to βconst mkldnn::memory::desc&β
/usr/local/include/mkldnn.hpp:889:5: note: candidate: βmkldnn::memory::memory(const mkldnn::memory::desc&, const mkldnn::engine&, void*)β
memory(const desc &md, const engine &aengine, void *ahandle) {
^~~~~~
/usr/local/include/mkldnn.hpp:889:5: note: candidate expects 3 arguments, 2 provided
/usr/local/include/mkldnn.hpp:577:8: note: candidate: βmkldnn::memory::memory(const mkldnn::memory&)β
struct memory: public handle<mkldnn_memory_t> {
^~~~~~
/usr/local/include/mkldnn.hpp:577:8: note: candidate expects 1 argument, 2 provided
/opt/pytorch/aten/src/ATen/native/mkldnn/Conv.cpp:421:35: error: βstruct mkldnn::streamβ has no member named βsubmitβ
Stream::Instance().get_stream().submit(net);
^~~~~~
[ 64%] Building CXX object caffe2/CMakeFiles/caffe2.dir/__/aten/src/ATen/CPUFloatType.cpp.o
caffe2/CMakeFiles/caffe2.dir/build.make:2841: recipe for target 'caffe2/CMakeFiles/caffe2.dir/__/aten/src/ATen/native/mkldnn/Conv.cpp.o' failed
make[2]: *** [caffe2/CMakeFiles/caffe2.dir/__/aten/src/ATen/native/mkldnn/Conv.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....
make[1]: *** [caffe2/CMakeFiles/caffe2.dir/all] Error 2
CMakeFiles/Makefile2:2107: recipe for target 'caffe2/CMakeFiles/caffe2.dir/all' failed
Makefile:138: recipe for target 'all' failed
make: *** [all] Error 2
Building wheel torch-1.1.0a0+742568e
-- Building version 1.1.0a0+742568e
```
## Build command
```bash
cd /opt && git clone --recursive https://github.com/pytorch/pytorch && \
cd /opt/pytorch && \
git submodule update --init && \
cd /opt/pytorch && ls && \
sed -i 's/"Use MKLDNN" OFF/"Use MKLDNN" ON/g' CMakeLists.txt && \
sed -i 's/"Use DISTRIBUTED" OFF/"Use DISTRIBUTED" ON /g' CMakeLists.txt && \
sed -i 's/for parallel code" OFF/for parallel code" ON /g' CMakeLists.txt && \
PYTHON_EXECUTABLE=/opt/conda/bin/python \
PYTHON_LIBRARY=/opt/conda/lib/libpython3.6m.so \
PYTHON_INCLUDE_DIR=/opt/conda/include/python3.6m \
FULL_CAFFE2=1 \
USE_OPENMP=1 \
USE_MKL=1 \
USE_MKLDNN=1 \
USE_MKLML=1 \
USE_SYSTEM_EIGEN_INSTALL=1 \
USE_ZMQ=1 \
USE_DISTRIBUTED=1 \
BUILD_TEST=0 \
MKLDNN_LIBRARY=/usr/local/lib \
MKLDNN_INCLUDE_DIR=/usr/local/include \
MKLDNN_LIB_DIR=/usr/local/lib \
python setup.py install
```
## Environment
- PyTorch Version (e.g., 1.0): Master branch
- OS (e.g., Linux): Debian:Stretch
- How you installed PyTorch (`conda`, `pip`, source): source
- Build command you used (if compiling from source): see above
- Python version: 3.6.5
- CUDA/cuDNN version: NA
- GPU models and configuration: NA
- Any other relevant information: NA
| caffe2 | low | Critical |
419,215,975 | flutter | Gifs play faster than expected | ## Steps to Reproduce
```dart
Image.network('https://github.githubassets.com/images/spinners/octocat-spinner-128.gif')
```
Flutter version:
```sh
Flutter 1.3.9 β’ channel dev β’ https://github.com/flutter/flutter.git
Framework β’ revision f91df4abe1 (16 hours ago) β’ 2019-03-09 21:19:28 -0500
Engine β’ revision 4e54bc93ca
Tools β’ Dart 2.2.1 (build 2.2.1-dev.1.0 2fb6cd9f5f)
```
Possibly related to #24804 | engine,a: quality,a: images,has reproducible steps,P2,found in release: 3.3,found in release: 3.7,team-engine,triaged-engine | low | Critical |
419,218,807 | godot | macOS: Mission Control window focus issue | ___
***Bugsquad note:** This issue has been confirmed several times already. No need to confirm it further.*
___
**Godot version:**
<!-- Specify commit hash if non-official. -->
Godot_v3.1-rc1_osx.64
**OS/device including version:**
<!-- Specify GPU model and drivers if graphics-related. -->
macOS Mojave 10.14.3
**Issue description:**
<!-- What happened, and what was expected. -->
NOTE: I'm not sure if this is a macOS issue or a Godot issue, but it only seems to happen with this application.
When using Mission Control to switch back to Godot, it sometimes switches to the application, but leaves focus on the previous application window. It does switch to the correct application as the Godot menu displays in the menu bar, but with the wrong application window focused. This requires me to either switch to Godot again or to hide the incorrectly focused window.
**Steps to reproduce:**
1. Open another application
2. Open Mission Control
3. Select the Godot application
| bug,platform:macos,topic:porting,confirmed | medium | Critical |
419,231,156 | angular | Why are angular elements swallowing error? | I have the simplest possible angular element within an angular project.
I throw error in component belonging to angular element as follows:
[dashboard-tile.component.ts:](https://github.com/goodmite/angular-element-starter/blob/master/src/app/dashboard/dashboard-tile/dashboard-tile.component.ts) (_referenced in [index.html](https://github.com/goodmite/angular-element-starter/blob/master/src/index.html) as `<dashboard-tile a="100" b="50" c="25"></dashboard-tile>`_)
```
ngOnInit() {
debugger;
throw "this is an error";
}
```
But I see no error in chrome console.
Link to [video](https://youtu.be/EsDbn7k9JpQ).
**However**, if I start to use this component as a regular component, I immediately get an error in the console. So this is likely an angular-elements issue.
Link to the github [repo](https://github.com/goodmite/angular-element-starter) containing the code.
Tested on both Chrome and Firefox, and it's reproducible, so it's not a browser issue.
**Other info:**
Angular CLI: 7.1.4
Node: 10.14.2
OS: win32 x64
Angular: 7.1.4
... animations, cli, common, compiler, compiler-cli, core, forms
... language-service, platform-browser, platform-browser-dynamic
... router
Package Version
-----------------------------------------------------------
@angular-devkit/architect 0.11.4
@angular-devkit/build-angular 0.11.4
@angular-devkit/build-optimizer 0.11.4
@angular-devkit/build-webpack 0.11.4
@angular-devkit/core 7.1.4
@angular-devkit/schematics 7.1.4
@angular/elements 7.2.8
@ngtools/webpack 7.1.4
@schematics/angular 7.1.4
@schematics/update 0.11.4
rxjs 6.3.3
typescript 3.1.6
webpack 4.23.1
| type: bug/fix,workaround1: obvious,freq1: low,area: core,state: confirmed,P4 | low | Critical |
419,253,755 | go | x/tools/astutil: strange comment placement with AddNamedImport | Using `00c44ba9c14f88ffdd4fb5bfae57fe8dd6d6afb1`
Consider the following snippet:
```go
package main
import (
"go/parser"
"go/printer"
"go/token"
"os"
"golang.org/x/tools/go/ast/astutil"
)
const source = `package comments//
// X comment
var X int
`
func main() {
fset := token.NewFileSet()
file, _ := parser.ParseFile(fset, "", source, parser.ParseComments)
astutil.AddNamedImport(fset, file, "tar", "archive/tar")
astutil.AddNamedImport(fset, file, "zip", "archive/zip")
(&printer.Config{Mode: printer.TabIndent | printer.UseSpaces, Tabwidth: 8}).Fprint(os.Stdout, fset, file)
}
```
This currently outputs:
```go
package comments //
import (
tar "archive/tar"
zip "archive/zip"
) // X comment
var X int
```
Notice how the next comment group `X comment` is attached to the import block? It seems that this only occurs if an inline comment occurs on the package statement.
I expect this to output:
```go
package comments //
import (
tar "archive/tar"
zip "archive/zip"
)
// X comment
var X int
``` | NeedsInvestigation,Tools | low | Major |
419,269,538 | node | setTimeout Calling Callback Too Early | * **Version**: v8.15.1
* **Platform**: Darwin Charlies-MacBook-Pro.local 18.2.0 Darwin Kernel Version 18.2.0: Thu Dec 20 20:46:53 PST 2018; root:xnu-4903.241.1~1/RELEASE_X86_64 x86_64
---
I have the following code in Node.js.
```js
const timeout = (ms) => new Promise((resolve) => setTimeout(resolve, ms));
```
I'm trying to test this code with the following test:
```js
const NS_PER_SEC = 1e9; // nanoseconds per second
it("Should wait for given time before resolving", async () => {
const MS = 100;
const start = process.hrtime();
await timeout(MS);
const diff = process.hrtime(start);
expect(((diff[0] * NS_PER_SEC) + diff[1]) / 1000000).to.at.least(MS);
});
```
The problem is sometimes (rarely), this test fails:
Should wait for given time before resolving:
AssertionError: expected 99.595337 to be at least 100
+ expected - actual
-99.595337
+100
Obviously this is some type of timing issue with Node.js or something. If anything I expect `await timeout(MS);` to take slightly longer than `MS`. In no case do I expect it to take less time.
What is it about the internals of JavaScript/Node.js that causes this to happen?
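As far as I can tell, `setTimeout` only guarantees a delay of *at least* `ms` relative to the event loop's own cached clock, which is not the same clock as `process.hrtime`, so a sub-millisecond early wake-up like this can happen. One workaround sketch is to re-arm the timer until the high-resolution clock confirms the full delay has elapsed (the helper name `timeoutAtLeast` is my own invention, not a Node API):

```javascript
// Workaround sketch, not Node's internal behaviour.
const NS_PER_SEC = 1e9;

// Elapsed milliseconds since a process.hrtime() snapshot.
const elapsedMs = (start) => {
  const diff = process.hrtime(start);
  return (diff[0] * NS_PER_SEC + diff[1]) / 1e6;
};

const timeoutAtLeast = (ms) =>
  new Promise((resolve) => {
    const start = process.hrtime();
    const check = () => {
      const remaining = ms - elapsedMs(start);
      if (remaining <= 0) {
        resolve();
      } else {
        // Re-arm for the remainder; ceil so we never spin on a 0 ms timer.
        setTimeout(check, Math.ceil(remaining));
      }
    };
    setTimeout(check, ms);
  });

const start = process.hrtime();
timeoutAtLeast(100).then(() => {
  console.log(elapsedMs(start) >= 100); // true
});
```

This trades a possible extra timer tick for the guarantee the test expects.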
This occurred on macOS 10.14.3 running Node.js version 8.15.1. | help wanted,timers | medium | Critical |
419,298,620 | TypeScript | Type aliases not being resolved for some functions types | **TypeScript Version:** 3.4.0-dev.20190310
**Search Terms:**
Resolve / flatten / simplify type aliases for function calls
resolve type aliases
**Code**
```ts
let subject = {a:1,b:2,c:3,d:4}
type thisResolves = Pick<typeof subject, 'a' | 'b'>
let thisDoesnt = pick(subject, ['a', 'b'])
declare function pick<T, K extends keyof T>(obj: T, keys: K[]): Pick<T,K>
```
**Expected behavior:**
The Quick Info type of `thisDoesnt` resolves to `{ a: number, b: number}`
**Actual behavior:**
The Quick Info type of `thisDoesnt` resolves to `Pick<{ a: number, b: number, c: number, d: number}, 'a' | 'b'>`
**Playground Link:** [here](https://www.typescriptlang.org/play/index.html#src=let%20subject%20%3D%20%7Ba%3A1%2Cb%3A2%2Cc%3A3%2Cd%3A4%7D%0D%0A%0D%0Atype%20thisResolves%20%3D%20Pick%3Ctypeof%20subject%2C%20'a'%20%7C%20'b'%3E%0D%0A%0D%0Alet%20thisDoesnt%20%3D%20pick(subject%2C%20%5B'a'%2C%20'b'%5D)%0D%0A%0D%0Adeclare%20function%20pick%3CT%2C%20K%20extends%20keyof%20T%3E(obj%3A%20T%2C%20keys%3A%20K%5B%5D)%3A%20Pick%3CT%2C%20K%3E%0D%0A)
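A workaround that seems to help in practice (the names here are my own, not part of any library): wrap the return type in an identity mapped type, which nudges the compiler into displaying the expanded object type instead of the `Pick<...>` alias.

```typescript
// `Expand` is a plain identity mapped type; applying it to Pick<T, K>
// makes Quick Info show the resolved object type.
type Expand<T> = { [P in keyof T]: T[P] };

function pick<T, K extends keyof T>(obj: T, keys: K[]): Expand<Pick<T, K>> {
  const out = {} as Pick<T, K>;
  for (const k of keys) {
    out[k] = obj[k];
  }
  // Double assertion keeps the sketch compiling across compiler versions.
  return out as unknown as Expand<Pick<T, K>>;
}

const subject = { a: 1, b: 2, c: 3, d: 4 };
const picked = pick(subject, ['a', 'b']);
// Quick Info for `picked` now reads `{ a: number; b: number }`.
console.log(picked); // { a: 1, b: 2 }
```

The runtime behaviour is unchanged; only the displayed type differs.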
**Related Issues:**
I've seen in other issues that type aliases are eagerly resolved (e.g. https://github.com/Microsoft/TypeScript/issues/13095#issuecomment-268627521 and https://github.com/Microsoft/TypeScript/issues/16798#issuecomment-324753135). This is a case where that behavior is useful, since reading through a lot of `Pick<Foo<Bar<...` in VS Code makes it difficult to figure out the true source of a type error. | Suggestion,Awaiting More Feedback | low | Critical |
419,310,052 | rust | Generator size: borrowed variables are assumed live across following yield points | Maybe a duplicate of #52924, but maybe also something else.
I observed that the sizes of `Future`s generated by async fns can grow exponentially.
The following code shows an async fn, which produces a 1kB future. Each layering in another async fn doubles it's size:
```rust
#![feature(async_await)]
async fn i_am_1kb() -> bool
{
let x: [u8; 1*1024] = [0; 1*1024];
async{}.await;
let _sum: u8 = x.iter().sum();
true
}
fn main() {
let fut1 = i_am_1kb();
dbg!(std::mem::size_of_val(&fut1));
let composed_1 = async {
let inner = i_am_1kb();
inner.await;
};
dbg!(std::mem::size_of_val(&composed_1));
let composed_2 = async {
let inner = i_am_1kb();
dbg!(std::mem::size_of_val(&inner));
inner.await;
};
dbg!(std::mem::size_of_val(&composed_2));
let composed_3 = async {
let inner = async {
let inner = async {
i_am_1kb().await;
};
dbg!(std::mem::size_of_val(&inner));
inner.await;
};
dbg!(std::mem::size_of_val(&inner));
inner.await;
};
dbg!(std::mem::size_of_val(&composed_3));
}
```
Output:
```
[src/main.rs:16] std::mem::size_of_val(&fut1) = 1032
[src/main.rs:22] std::mem::size_of_val(&composed_1) = 1036
[src/main.rs:29] std::mem::size_of_val(&composed_2) = 2072
[src/main.rs:44] std::mem::size_of_val(&composed_3) = 4168
```
It doesn't matter whether the statement between the future's creation and the `await` references the future or not. A simple `println!("")` will have the same effect.
Only if the future is awaited directly (as in `composed_1`) does the size stay constant.
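The usual mitigation (not a fix for the underlying layout analysis) is to box the inner future, so the outer generator only keeps a pointer-sized field alive across its yield points. A sketch on current Rust (using the now-stable `.await` syntax):

```rust
use std::mem::size_of_val;

async fn i_am_1kb() -> bool {
    let x: [u8; 1024] = [0; 1024];
    async {}.await;
    let _sum: u8 = x.iter().sum();
    true
}

fn main() {
    let inner = i_am_1kb();
    println!("inner: {}", size_of_val(&inner)); // ~1032 bytes, as in the issue

    // Pin<Box<_>> is a single thin pointer, so holding it across an `.await`
    // costs pointer-sized storage instead of the whole 1 kB state.
    let boxed = Box::pin(i_am_1kb());
    println!("boxed: {}", size_of_val(&boxed)); // 8 on 64-bit targets

    let composed = async {
        let inner = Box::pin(i_am_1kb());
        inner.await
    };
    println!("composed: {}", size_of_val(&composed));
}
```

The boxing moves the 1 kB state to the heap, which trades an allocation for the size blow-up.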
cc @cramertj , @nikomatsakis , @Nemo157 | C-enhancement,T-compiler,A-coroutines,I-heavy,A-async-await,AsyncAwait-Triaged,C-optimization | medium | Major |
419,335,364 | godot | ARVR image shakes when Multithreaded and ARVROrigin is attached to RigidBody | **Godot version:**
Godot 3.1RC1
**OS/device including version:**
Windows 10, NVIDIA GT610
**Issue description:**
If Thread Model "Multithreaded" is selected and the ARVR Origin is attached to a RigidBody/VehicleBody then this Body shakes back and forth in the left or right view. (Doesn't matter if GLES2 or GLES3.)

**Steps to reproduce:**
Attach a ARVROrigin and ARVRCamera to a RigidBody/Vehicle Body.
Enable ThreadModel "Multithreading".
Run the project and move the RigidBody.
**Minimal reproduction project:**
You need NO VR Gear to test this:
Attached is the Trucktown demo which is just modified to enable the ARVR view.
--> Select the "Mini Van". Move the truck with cursor keys.
[truck_town_vr.zip](https://github.com/godotengine/godot/files/2951145/truck_town_vr.zip)
| bug,topic:physics,topic:xr | low | Major |
419,343,356 | You-Dont-Know-JS | Proposal: add a paragraph about WebAssembly | Even though WebAssembly is not tightly bound to JS, it can relief its performance in any environment that has a JS engine (among others).
So: would it make sense to include a mention of WebAssembly? Especially since there's already a paragraph on asm.js, one of Wasm's "ancestors".
Chapter 5 of Async & Performance might be the right place (the asm.js paragraph is also in there):
https://github.com/getify/You-Dont-Know-JS/blob/master/async%20%26%20performance/ch5.md
What do you think?
| for second edition | low | Major |
419,380,453 | godot | Texture loading/exporting bug when overriding driver_name for mobile | **Godot version:**
3.1 RC1
**Issue description:**
I have a game that is supposed to use GLES3 by default, but GLES2 for mobile. So I override the driver_name option in the project settings for mobile, while the default remains GLES3.
When I do that, the exported build crashes immediately - the debug output shows that it fails to load ETC textures. If I switch the default renderer to GLES2, I don't get these error messages and the game launches (but it crashes later for a different reason, see #26902)
So I assume that if you override the renderer for mobile and set it to GLES2, the game is still exported as if it was a GLES3 game, so the engine doesn't find the correct texture files. But if the GLES2 is the project default, the textures are exported correctly. I guess that setting fallback_to_gles2 would fix it, but there's no point in exporting both ETC and ETC2 textures if I want to force GLES2 on mobile devices.
**Minimal reproduction project:**
Any demo that uses VRAM compression for some textures. Use GLES3 as default and override to GLES2 for mobile.
| enhancement,topic:editor | low | Critical |
419,389,847 | godot | Godot physics performance issue when colliding with a ConvexPolygonShape generated from a SphereMesh | **Godot version:**
3.1 RC2
**OS/device including version:**
Windows 10 64-bit,
Intel I7 920 @ 2.67GHz, 2668 Mhz, 4 Cores, 8 Logical Processors,
NVIDIA GeForce GTX 750 Ti
**Issue description:**
When calling `KinematicBody::move_and_slide` multiple times per frame using Godot Physics and colliding with a vertical collision shape, frame rate plummets to almost 0. Bullet Physics works fine.
**Steps to reproduce:**
In the provided MRP, use WASD to move the capsule over to the sphere and collide with it. Continuing to move around the sphere using Godot Physics causes the framerate to plummet. Switch to Bullet and do it again and you'll see it works fine.
**Minimal reproduction project:**
[PhysicsBug.zip](https://github.com/godotengine/godot/files/2951698/PhysicsBug.zip)
| bug,confirmed,topic:physics | low | Critical |
419,402,907 | go | x/net/html/charset: BOM cannot be processed correctly | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.12 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/zsm/Library/Caches/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/zsm/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/Cellar/go/1.12/libexec"
GOTMPDIR=""
GOTOOLDIR="/usr/local/Cellar/go/1.12/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/6v/7stmg2756wlfk9c_qnv1hnbm0000gn/T/go-build194117381=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>
### What did you do?
```go
package main
import (
"bytes"
"fmt"
"io/ioutil"
"golang.org/x/net/html/charset"
"golang.org/x/text/encoding"
"golang.org/x/text/transform"
)
func main() {
raw := []byte("\xEF\xBB\xBFhello")
fmt.Println(raw)
if e, _, _ := charset.DetermineEncoding(raw, ""); e != encoding.Nop {
tmp := transform.NewReader(bytes.NewBuffer(raw), e.NewDecoder())
dist, _ := ioutil.ReadAll(tmp)
fmt.Println(dist)
}
}
```
### What did you expect to see?
Either the BOM is removed from the decoded output, or `DetermineEncoding` returns `encoding.Nop` (so the caller knows no transformation is needed).
### What did you see instead?
```
[239 187 191 104 101 108 108 111]
[239 187 191 104 101 108 108 111]
```
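For comparison, the expected "BOM has been removed" behavior is what BOM-aware decoders in other ecosystems do; a small Python sketch (purely illustrative, not related to the Go implementation):

```python
# The same input bytes as in the Go program above.
raw = b"\xEF\xBB\xBFhello"

# A BOM-aware codec consumes the 3-byte UTF-8 BOM while decoding.
decoded = raw.decode("utf-8-sig")
print(decoded)  # hello

# Equivalent manual workaround: strip the BOM prefix before decoding.
stripped = raw[3:] if raw.startswith(b"\xEF\xBB\xBF") else raw
print(stripped.decode("utf-8"))  # hello
```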
| NeedsInvestigation | low | Critical |
419,418,724 | angular | Navigation with custom Animation and :increment :decrement does not work as expected | # 🐞 bug report
### Description
We are trying to create a lazy-loaded wizard/stepper using the router and custom navigation. We register a custom animation named **loginRouteSlideInAnimation** which has an `:increment` and a `:decrement` transition.
When we go to the next page we want the new page to slide in from the right, and when we go to the previous page we want it to slide in from the left. We do this with the custom animation trigger.
The strange thing is that if we run this locally with `ng serve`, the navigation animation does not work as expected: going to the NEXT page performs no animation until we have navigated to a PREVIOUS page once, and likewise the first PREVIOUS navigation performs no animation until we have gone to a NEXT page.
What we can see is that `query(':enter')` has no results in the situations where the animation is not working. But it should!
You can test the app on StackBlitz, where it works well. But please clone our repo and check it locally, where it does not work as expected the way it does on StackBlitz.
## 🔬 Minimal Reproduction
https://stackblitz.com/github/mburger81/test-animation
https://github.com/mburger81/test-animation
## 🌍 Your Environment
**Angular Version:**
<pre><code>
Angular CLI: 7.3.5
Node: 10.15.2
OS: linux x64
Angular: 7.2.8
... animations, common, compiler, compiler-cli, core, forms
... language-service, platform-browser, platform-browser-dynamic
... router
Package Version
-----------------------------------------------------------
@angular-devkit/architect 0.13.5
@angular-devkit/build-angular 0.13.5
@angular-devkit/build-optimizer 0.13.5
@angular-devkit/build-webpack 0.13.5
@angular-devkit/core 7.3.5
@angular-devkit/schematics 7.3.5
@angular/cli 7.3.5
@ngtools/webpack 7.3.5
@schematics/angular 7.3.5
@schematics/update 0.13.5
rxjs 6.3.3
typescript 3.2.4
webpack 4.29.0
</code></pre>
| type: bug/fix,area: animations,freq2: medium,state: needs more investigation,P3 | low | Critical |
419,429,651 | TypeScript | Multiple issues with import quick fixes and project references |
**TypeScript Version:** 3.3.3 or 3.4.0-dev.20190311
**Search Terms:**
quick fix, import, project references, path aliases
**Code:**
I use [project-references-demo](https://github.com/RyanCavanaugh/project-references-demo)
---------------------------------
**Problem 1**
Import quick fix doesn't appear.
**Modification to the original code**
Removed makeRandomName import from `dog.ts` file.
Added a `lastElementOf` dummy function to `core/utilities.ts`.
**Description**
In the project `animals` I would like to use vscode and quick fix to add import for `makeRandomName` from project `core`. Vscode doesn't suggest anything.
To fix the problem I need to import once (it can be anywhere in my project) from the file that contains `makeRandomName`. For example if I add `import { lastElementOf } from "../core/utilities";` to `animal.ts` it will solve my problem. After that in `dog.ts` I get a suggestion to `Import makeRandomName from module "../lib/core/utilities"`.
**Expected behavior:**
I expect to immediately get import suggestions.
---------------------------------
**Problem 2**
Import quick fix suggests wrong paths in a project with path aliases
**Modification to the original code**
In zoo project I added the following entries to the tsconfig
```
"rootDir": "src",
"baseUrl": "src",
"paths": {
"@animals/*": ["../../animals/*"],
"@app/*": ["./*"]
}
```
**Description**
When I want to reference something from the project itself I get suggestions from quick fix to use `@app/{projectPath}` and it's correct.
The problem is with imports to the `animals` project. If I use something from `animals` I don't get any suggestions. To get them I need to have somewhere inside my `zoo` project already defined at least one import from `animals` project.
Now the suggestions show wrong paths. Vscode tells me to use the imports that point to my `outDir` folder which is `lib`, example:
`import { createDog } from '@app/../../lib/animals';`
or
`import { createDog } from '../../lib/animals';`
Correct suggestions (code compiles with them):
`import { createDog } from '@animals/index'`
`import { createDog } from '../../animals`
So here are three problems.
1. To get any suggestions you need to have defined at least one import. Maybe it doesn't detect `references` entry from tsconfig?
2. It doesn't suggest to use `@animals` path alias
3. It points to outDir folder, not to the animals project folder.
| Bug | low | Critical |
419,443,453 | opencv | cv::findTransformECC produces CV_Error() when it cannot align images | Sometimes `findTransformECC()` cannot align the images.
I think that raising `CV_Error()` here is not very usable, so I changed the `CV_Error` line to `return rho;` and check in my code whether the return value of `findTransformECC()` is greater than zero, to know that the alignment completed successfully.
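Until the API changes, callers can wrap the call and translate the exception into a sentinel, which is effectively the `return rho;` behavior proposed above (in real OpenCV Python code the exception type would be `cv2.error`; the helper and stub names below are illustrative, not OpenCV API):

```python
def align_or_none(align_fn, *args, errors=(RuntimeError,), **kwargs):
    """Run an alignment routine; return None instead of raising when it
    fails to converge (caller-side stand-in for the proposed `return rho;`)."""
    try:
        return align_fn(*args, **kwargs)
    except errors:
        return None

# Stubs standing in for findTransformECC's two possible outcomes.
def diverging():
    raise RuntimeError("the algorithm stopped before its convergence")

def converging():
    return 0.97  # correlation coefficient rho

print(align_or_none(diverging))   # None
print(align_or_none(converging))  # 0.97
```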
Is that change suitable? Would you accept my commit? | incomplete | low | Critical |
419,448,082 | rust | rustc: remove unnecessary extern_prelude logic from ty::item_path | From #56655 :
> The checks added in 02357e4 effectively turned crate::std into std, but they were too general (affecting any crate::foo where foo was in the extern prelude, not just extern crates), and unnecessary, as only the extern crates created by "std injection" need any special-casing.
Since this only affects the user-facing "relative" mode, it shouldn't have interactions with linking, and the only observable effect should be sometimes-shorter paths in diagnostics.
Creating this issue as a tracking issue since the PR is closed due to inactivity. | C-cleanup,T-compiler | low | Minor |
419,449,466 | angular | writeValue gets called before ngOnInit |
# 🐞 bug report
### Affected Package
The issue is caused by package `@angular/forms`
### Is this a regression?
I don't know. I don't think so.
### Description
When `NgControl` is injected into the constructor of a component and the value accessor is set manually to avoid `cyclic dependency errors`, `writeValue` is called before `ngOnInit`.
This workaround is suggested by the [Material Angular doc](https://material.angular.io/guide/creating-a-custom-form-field-control#-code-ngcontrol-code-) under `ngControl`.
## 🔬 Minimal Reproduction
[Stackblitz](https://stackblitz.com/edit/angular-form-control-bug)
## 🌍 Your Environment
**Angular Version:**
<pre><code>
Angular CLI: 7.3.5
Node: 11.9.0
OS: win32 x64
Angular: 7.2.8
... animations, common, compiler, compiler-cli, core, forms
... http, language-service, platform-browser
... platform-browser-dynamic, router, service-worker
Package Version
-----------------------------------------------------------
@angular-devkit/architect 0.13.5
@angular-devkit/build-angular 0.13.5
@angular-devkit/build-optimizer 0.13.5
@angular-devkit/build-webpack 0.13.5
@angular-devkit/core 7.3.5
@angular-devkit/schematics 7.3.5
@angular/cdk 7.3.3
@angular/cli 7.3.5
@angular/material 7.3.3
@angular/pwa 0.13.5
@ngtools/webpack 7.3.5
@schematics/angular 7.3.5
@schematics/update 0.13.5
rxjs 6.4.0
typescript 3.2.4
webpack 4.29.0
</code></pre>
**Anything else relevant?**
I was sent here by the @angular/material2 team. They believe it's an issue with `@angular/forms`.
[https://github.com/angular/material2/issues/15434](https://github.com/angular/material2/issues/15434) | type: bug/fix,freq2: medium,area: forms,state: confirmed,P4 | medium | Critical |
419,518,413 | rust | Highlight when errors have automatically applicable suggestions | With `tool_only_span_diagnostics` having landed, some suggestions that can be automatically applied aren't shown to the user as labels with `help: ...` messages, so users who look for that to know they can automatically apply the fix won't know about those suggestions.
We should add some indicator that tells the user that an error can be automatically fixed.
See [this conversation](https://github.com/rust-lang/rust/pull/59084#discussion_r264273929). | C-enhancement,T-compiler,A-suggestion-diagnostics | medium | Critical |
419,562,532 | godot | Crash after splashscreen on Android with GLES2 if default_env.tres is present [Samsung Galaxy S4] |
**Godot version:**
Tested on 3.1 RC1 and RC2
**OS/device including version:**
Samsung "Galaxy S4" GT-I9505
Android 5.0.1
Exported on Ubuntu 18.04.1 LTS
**Issue description:**
The error described in #26651 is occurring "again". The funny thing is that it even occurs on RC1, even though I tested and used that version for 2 days. It happened again after I reset my git repo to a commit that exported correctly yesterday. I don't know what caused this to happen, but the error remains the following:
If I try to export a very basic project with GLES2 for Android, the app crashes instantly after the splash screen. If I delete the file `default_env.tres`, the app launches normally. Tested on RC1 and RC2
**Steps to reproduce:**
as in #26651
1.) Create a new GLES2-Project
2.) Add and set main-scene
3.) Set export template and export for Android/One-Click-Deploy (with debug)
4.) Trying to launch app
**Minimal reproduction project:**
-> #26651
| bug,platform:android,topic:rendering,confirmed,crash | high | Critical |
419,574,007 | pytorch | Linking Caffe2 with sequential MKL but still calling multi-threaded mkl | I turned off the openmp feature when building caffe2 and link caffe2 with the sequential version of mkl (libmkl_sequential.so). ldd caffe2.so also displays that caffe2 is linked with the sequential mkl (see first attached picture) .

But when I run Caffe2 inference, it still calls the multi-threaded version of MKL (libmkl_intel_thread.so); the stack trace is shown in the second picture below. How can I make the compiled Caffe2 run the sequential MKL for GEMM?

| caffe2 | low | Minor |
419,611,447 | flutter | Initial routes with parameters | It would be useful to be able to set the parameters of the initial routes, not just the names.
Something like:
```
MaterialApp(
initialRoutes: [
RouteSettings(name: "MainPage"),
RouteSettings(name: "AlbumPage", arguments: albumId),
RouteSettings(name: "SongPage", arguments: songId)
]
...
)
``` | c: new feature,framework,f: material design,f: routes,P3,workaround available,team-design,triaged-design | medium | Major |
419,636,300 | TypeScript | feature request: change indentation of multi-line variable declarations |
## Search Terms
format code indent tsserver
## Suggestion
When `tsserver` formats the following TypeScript code using an indent size of 2, I get
```typescript
let a = 1,
b = 2
```
while I'd expect it to indent the code like
```typescript
let a = 1,
b = 2
```
regardless of the indent size.
Additionally, the following code would be formatted like
```typescript
const a = 1,
b = 2
```
## Use Cases
This feature would be useful in variable declarations (`let`, `var`, `const`) when using an indent size different than 4.
## Examples
See the Suggestion section above.
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback | low | Critical |
419,640,948 | TypeScript | Result of unknown indexing explicitly cast to `any` incorrectly reported as implicit any |
**TypeScript Version:** [email protected]
**Search Terms:** "Element implicitly has an 'any' type" " index expression is not of type"
We are on the same page that an unknown/untyped indexing operation results in an `any` result. However, it seems that to silence the error the object being indexed must be cast to `any`; it is insufficient to cast the result of the indexing itself to `any`.
**Code**
```ts
declare let foo: HTMLElement;
let a = (<any>foo.attributes)["bar"].value;
let x = foo.attributes["bar"].value;
let y = (<any>foo.attributes["bar"]).value;
let z = (foo.attributes["bar"] as any).value;
```
compiled with `tsc --strict ./test.ts`
**Expected behavior:**
* No error on `let a = ...` because type warnings on any operation on an explicitly `any` type (including indexing) are let through.
* An error is thrown on `let x = ...` because the element type resulting from the lookup is of unknown type.
* No error on `let y = ...`, not because the type is known but because I'm explicitly casting the `<any>` result to `any`.
* No error on `let z = ...`, not because the type is known but because I'm explicitly casting the `<any>` result to `any`.
**Actual behavior:**
```
test.ts:3:24 - error TS7015: Element implicitly has an 'any' type because index expression is not of type 'number'.
3 let x = foo.attributes["bar"].value;
~~~~~
test.ts:4:30 - error TS7015: Element implicitly has an 'any' type because index expression is not of type 'number'.
4 let y = (<any>foo.attributes["bar"]).value;
~~~~~
test.ts:5:25 - error TS7015: Element implicitly has an 'any' type because index expression is not of type 'number'.
5 let z = (foo.attributes["bar"] as any).value;
~~~~~
```
I argue that both `(<any>foo)["bar"]` and `(<any> (foo["bar"]))` should not trigger this error. | Suggestion,In Discussion | low | Critical |
419,662,763 | pytorch | C++ nn::Sequential push_back() copies module if the module is concrete type | ## 🐛 Bug
In https://github.com/pytorch/pytorch/blob/3f1d0ee5d5f48bb0fbef433a61cef0be9ad40a76/test/cpp/api/sequential.cpp#L37-L48, if we delete the copy constructor of `M` by adding `M(const M&) = delete;`, this test will fail to compile, because in https://github.com/pytorch/pytorch/blob/3f1d0ee5d5f48bb0fbef433a61cef0be9ad40a76/torch/csrc/api/include/torch/nn/modules/sequential.h#L197-L203 the `push_back(std::make_shared<Type>(std::forward<M>(module)));` line actually copies the module and expects the copy constructor to exist. Since copying a concrete type module is expensive, we should fix it so that this copy can be avoided.
cc @yf225 @glaringlee @albanD @mruberry @jbschlosser @walterddr | module: cpp,module: nn,triaged | low | Critical |
419,679,994 | godot | Slight delay when looping .ogg file | **Godot version:**
3.1 RC2 (Also happening on 3.0.6)
**OS/device including version:**
Windows 8.1 Pro 64-bit and Windows 7 Pro 64-bit
**Issue description:**
When an ogg file loops, there is a slight delay before looping the sample back to the beginning.
**Steps to reproduce:**
Create a project with an empty node. Add an AudioStreamPlayer to the node and load an ogg file. Check auto-play and run the project. This is more apparent with music that has a perfect loop from the end to beginning.
**Minimal reproduction project:**
In the provided project, the ogg sample should automatically play. The ogg sample is 9 seconds long.
[OggMusicTest.zip](https://github.com/godotengine/godot/files/2954141/OggMusicTest.zip) | bug,topic:audio | low | Major |
419,693,148 | pytorch | Caffe2 building failure | If I uninstall mkl-include in Anaconda, this issue disappears. But if Caffe2 is compiled with mkl-include present, it fails with the error below: "regexec" is not declared in this scope.

| caffe2 | low | Critical |
419,695,521 | flutter | Google Maps initialCameraPosition with bounding | I am using the google-maps package for Flutter and I need to fit multiple markers. I tried to fix it by calculating the bounding box and the zoom level myself, but that didn't quite work as expected: sometimes the camera ends up too far out.
For a few versions now there has been the ability to animate to a `LatLngBounds`, which would work and zoom correctly. I tried to call the following when the map was created:
```
controller.animateCamera(CameraUpdate.newLatLngBounds(LatLngBounds(
  southwest: const LatLng(-38.483935, 113.248673),
  northeast: const LatLng(-8.982446, 153.823821),
), 10));
```
Sometimes this works, but sometimes I get the following platform exception:
`Error using newLatLngBounds(LatLngBounds, int): Map size can't be 0. Most likely, layout has not yet occured for the map view. Either wait until layout has occurred or use newLatLngBounds(LatLngBounds, int, int, int) which allows you to specify the map's dimensions.`
I also tried to achieve this by adding the `cameraTargetBounds` property to the `GoogleMap` widget, but as I have seen in another issue, this does not zoom to fit correctly, and I do not want to constrain the camera position permanently; I only want to fit it once at init.
Is there a solution or a workaround? Or do I have to wait until it is officially supported?
| customer: crowd,p: maps,package,team-ecosystem,P2,triaged-ecosystem | low | Critical |
419,717,477 | godot | Editor crash in EditorResourcePreview when assigned as fallback bitmap font | **Godot version:**
Godot 3.1 RC 2
**OS/device including version:**
Linux Mint 19
**Issue description:**
When assigning a fallback of a fallback to the root BitmapFont, or a next pass in a Material, the editor crashes.
EDIT: It happens also with Atlas Texture and probably others, but for now I don't know how to fix this.
- [x] - 1. Bitmap font and Material
- [ ] - 2. Textures
**Steps to reproduce:**
1. Create new resource Bitmap font
2. Save it
3. Open this Bitmap font
4. Assign to fallback new Bitmap font
5. To newly created Bitmap font assign as fallback previously saved font(root one)
https://streamable.com/gy4jp
| bug,topic:editor,crash | low | Critical |
419,717,942 | go | runtime: killed threads do not crash whole process | ### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.12 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/ivan/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/ivan/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/go"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build580235778=/tmp/go-build -gno-record-gcc-switches"</pre></details>
### What did you do?
I have a program running under systemd and with `SystemCallFilter` applied. My list of allowed syscalls does not include `madvise`, which eventually leads to:
```
$ sudo strace -f -p $(pidof tracefwdr) 2>&1 | fgrep SIGSYS -C5
[pid 125587] <... futex resumed> ) = 0
[pid 125588] <... read resumed> "\202\201\0\temitBatch\34\34\30\10nginx-fl\31<\30\16jae"..., 65000) = 7271
[pid 125587] madvise(0xc000400000, 2097152, MADV_NOHUGEPAGE <unfinished ...>
[pid 125588] read(3, 0xc000438000, 65000) = -1 EAGAIN (Resource temporarily unavailable)
[pid 125587] <... madvise resumed>) = ?
[pid 125587] +++ killed by SIGSYS +++
[pid 125588] sched_yield() = 0
```
Here one thread is killed by `SIGSYS` for seccomp violation.
### What did you expect to see?
Whole process panics, clearly indicating failure.
### What did you see instead?
One thread is dead and a random part of my program no longer works (either reads from a channel or a UDP socket, I'm not sure). | NeedsInvestigation,compiler/runtime | low | Critical |
419,723,409 | rust | Add function to make paths absolute, which is different from canonicalization | Many users want to turn relative paths into absolute paths, but the only tool that libstd currently offers is `canonicalize` which is bad because it has to hit the filesystem which has several downsides:
1. It takes time to hit the filesystem.
2. The path has to actually exist.
3. It fails on certain types of drives such as RAM drives on Windows.
4. The path it creates is a `\\?\` path which not all software can handle correctly and imposes requirements on further path manipulation that few users are even aware of.
Needing symbolic links actually resolved is an extremely rare use case, and is often misused to compare paths for equality when in reality you should be comparing file IDs due to things like hard links existing.
A new function (`normalize` or `make_absolute` or something, bikeshed away) should be added that will turn a relative path into an absolute path without touching the filesystem. On Windows this should either call `GetFullPathNameW` or do the pure Rust equivalent while on unixy platforms... *something* should happen, I have no idea.
If such a function did exist, rustc could start using that instead of canonicalize which would fix a whole host of issues including:
https://github.com/rust-lang/rust/issues/74327
https://github.com/rust-lang/rust/pull/74146
https://github.com/rust-lang/rust/issues/59107
https://github.com/rust-lang/rust/issues/58613
https://github.com/rust-lang/rust/issues/55812
https://github.com/rust-lang/rust/issues/52440
https://github.com/rust-lang/rust/issues/48249
https://github.com/rust-lang/rust/issues/45067
https://github.com/rust-lang/rust/issues/42869 | T-libs-api,C-feature-request | medium | Critical |
419,733,825 | pytorch | CUDA large matrix-vector product (torch.mv) causes illegal memory access | ## 🐛 Bug
`torch.mv` causes an "illegal memory access" when multiplying a matrix with more than 2^31-1 elements. Note that each dim of the first matrix can fit in an `int`.
This is likely a bug in cuBLAS. Either cuBLAS should be fixed or PyTorch should issue multiple calls to `cublasSgemv`.
## To Reproduce
Note: you need ~10 GB on your GPU to run this example
```python
x = torch.ones(35783, 65133, device='cuda')
y = torch.randn(65133, device='cuda')
z = torch.mv(x, y)
torch.cuda.synchronize() # report asynchronous error
```
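Until this is fixed in cuBLAS or PyTorch, a caller-side workaround is to issue the product in row chunks so each underlying GEMV call stays below 2^31-1 elements. A sketch (NumPy is used here so it runs anywhere; with PyTorch you would pass CUDA tensors and `cat=torch.cat`):

```python
import numpy as np

def chunked_mv(mat, vec, max_elems=2**31 - 1, cat=np.concatenate):
    """Matrix-vector product issued in row chunks: each `mat[i:j] @ vec`
    call touches at most max_elems matrix elements, staying under the
    32-bit indexing limit hit by the single large GEMV call.
    Assumes a single row (cols elements) fits within the limit."""
    rows, cols = mat.shape
    step = max(1, max_elems // cols)
    return cat([mat[i:i + step] @ vec for i in range(0, rows, step)])

m = np.arange(12, dtype=float).reshape(4, 3)
v = np.array([1.0, 2.0, 3.0])
print(chunked_mv(m, v, max_elems=6))  # same result as m @ v
```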
## Environment
PyTorch master 066d1584 (Mar 11, 2019)
CUDA 9.2.88
| module: dependency bug,module: cuda,triaged,module: 64-bit,module: cublas | low | Critical |
419,739,355 | go | cmd/compile: add consistency check that local variables are associated with Curfn | [CL 153841](https://golang.org/cl/153841/) needed to create local temporaries in "init" to handle when f(g()) appears in a package-scope initialization statement.
But the concurrently developed [CL 159717](https://golang.org/cl/159717/) moved initialization statements to run in "init.ializer" instead of "init."
The result was entirely unsafe because initialization statements compiled within "init.ializer" were referring to local variables in "init," but it seems like there weren't any compiler consistency checks that noticed this. It was only detected because net/http/pprof happened to fail very loudly at runtime when built with -gcflags=-N.
Granted maybe [CL 153841](https://golang.org/cl/153841) was being too clever with its solution to "init" (now "init.ializer") not yet existing and it's worth revisiting whether "init.ializer" can be created earlier, but it still surprised me that the backend didn't notice it was using a PAUTO variable from another function. | NeedsInvestigation,compiler/runtime | low | Minor |
419,773,969 | go | cmd/go: should `go list -json` imply `-e`? |
### What version of Go are you using (`go version`)?
<pre>
$ go version
go 1.12 darwin
</pre>
### Does this issue reproduce with the latest release?
Yes
### What did you do?
`GO111MODULE=on go list -m -versions -json github.com/golang/nonexistentrepo@latest`
### What did you expect to see?
A valid JSON output with a clear error in the Error field, similar to what `go mod download` produces on error:
```json
{
"Error": "repository not found"
}
```
### What did you see instead?
A non-JSON output like this (modified):
```
go list -m github.com/golang/nonexistentrepo: git ls-remote -q origin in /Users/me/go/pkg/mod/cache/vcs/xxx-long-sha: exit status 128:
remote: Repository not found.
fatal: repository 'https://github.com/golang/nonexistentrepo/' not found
``` | NeedsInvestigation,GoCommand,modules | low | Critical |
419,780,220 | angular | Add to Schematics API documentation |
# 📚 Docs or angular.io bug report
### Description
Hello. I just spent some time creating my first schematic. I have to say it's awesome! Unfortunately, I had to spend much of my time reading through the Angular @schematics code to see examples of how it works, plus articles on Medium by other people about how to create a basic schematic.
It would be awesome if the Angular team spent some time making great documentation for such an awesome feature! This has so much potential to be widely used, and I'm afraid that without proper documentation it won't reach the masses!
Just try to google a simple example of `How to copy a file in angular schematics`; there is no simple link to it. I ended up reading a file via `tree.read` into a `buffer`, transforming the file into a `string`, and then creating a new one via `tree.create`, but I don't even know if that is the standard way or not 🤷‍♂️
In any case, I love schematics, but there is no official documentation at all other than two blog posts (one with unit tests) with simple examples you guys created on the Angular Medium blog.
Thanks! Nice job! | state: blocked,effort3: weeks,freq3: high,state: needs eng input,P4,area: docs | medium | Critical |
419,786,510 | pytorch | should disable AVX on 32bit x86 / refine AVX availability tests | On desktop systems like Windows and some Linux distributions (e.g. Ubuntu), the PyTorch / Caffe2 library doesn't work on 32-bit systems, mainly because some 64-bit AVX intrinsics, like `_mm256_extract_epi64`, are unavailable there. According to @t-vi, disabling AVX enables the 32-bit build on ARM (Android). So the question is: should we disable AVX on the 32-bit x86 platform, or should we just use alternatives for `_mm256_extract_epi64`?
cc @malfet @soumith @apaszke | module: build,triaged | low | Major |
419,787,694 | go | crypto/x509, encoding/asn1: ObjectIdentifier and ParseCertificate do not support int > 31 bits, preventing support of OID 2.25 (/UUID) (follow-up on #19933) | [Commit 40436](https://go-review.googlesource.com/c/go/+/40436/) extends support for OIDs from the original 28 bits to 31 bits as a result of #19933. But this is insufficient to support the [2.25 OID subtree](http://oid-info.com/cgi-bin/display?oid=2.25&action=display), which is AFAIK the only place in the tree where one can get registration-less OIDs, and it mandates them to be 128 bits wide. So certificates issued with such OIDs get rejected by Go applications (Caddy, in my case, acting as an HTTP proxy whose proxy target is served with such a certificate).
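For context, nothing in the wire encoding itself limits arc size: each OID subidentifier is a base-128 varint (high bit = continuation flag), so arbitrary-precision integers can represent the 128-bit arcs under 2.25. An illustrative Python sketch of the subidentifier encoding (not the Go API):

```python
def encode_arc(n):
    """BER/DER base-128 encoding of one OID subidentifier.
    Works for arcs of any size, including 128-bit UUID arcs."""
    out = [n & 0x7F]                   # last byte: high bit clear
    n >>= 7
    while n:
        out.append((n & 0x7F) | 0x80)  # continuation bytes
        n >>= 7
    return bytes(reversed(out))

print(encode_arc(840).hex())            # 8648 (as in 1.2.840)
print(len(encode_arc((1 << 127) | 1)))  # 19 bytes, no overflow
```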
@agl : The above was intended as a response to your post on #19933 . I would have happily posted there if it was not blocked. I would happily have this report deleted/closed if the original one can be reopened. | NeedsInvestigation | low | Minor |
419,787,779 | pytorch | Conjugate gradient method | ## 🚀 Feature
I want to add batch preconditioned conjugate gradient (including its gradient) to the torch api.
## Motivation
This feature exists in SciPy as `scipy.sparse.linalg.cg`. I'd like a torch equivalent that can handle batches.
## Pitch
Add a torch function `cg(A, B)` that returns `A^(-1) B` by running CG in parallel across the columns of `B`.
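A minimal dense sketch of the proposed semantics (NumPy here for portability; the actual feature would operate on torch tensors and support batching and sparse `A`):

```python
import numpy as np

def cg(A, B, iters=100, tol=1e-12):
    """Solve A X = B for symmetric positive definite A, running
    conjugate gradient on all columns of B in parallel."""
    X = np.zeros_like(B, dtype=float)
    R = B - A @ X                      # per-column residuals
    P = R.copy()
    rs = np.sum(R * R, axis=0)
    for _ in range(iters):
        AP = A @ P
        alpha = rs / np.sum(P * AP, axis=0)
        X = X + P * alpha              # alpha broadcasts over rows
        R = R - AP * alpha
        rs_new = np.sum(R * R, axis=0)
        if np.all(rs_new < tol):
            break
        P = R + P * (rs_new / rs)
        rs = rs_new
    return X

A = np.array([[4.0, 1.0], [1.0, 3.0]])
B = np.array([[1.0, 2.0], [2.0, 1.0]])
X = cg(A, B)
print(np.allclose(A @ X, B))  # True
```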
## Alternatives
N/A
## Additional context
I've implemented the CG algorithm as a PyTorch C++ extension, as well as its gradients (with respect to A and B as sparse tensors), and it seems to work. It doesn't seem like it would be too hard to pull this into pytorch, but I'm not sure what the right steps are.
See www.github.com/sbarratt/torch_cg
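The batched CG idea can be sketched in pure Python on top of torch; the function name `cg`, its signature, and the tolerance defaults here are illustrative assumptions, not an existing torch API:

```python
import torch

def cg(A, B, max_iter=100, tol=1e-6):
    """Illustrative batched conjugate gradient: solves A X = B for a
    symmetric positive-definite A, running CG in parallel across the
    columns of B."""
    X = torch.zeros_like(B)
    R = B - A @ X                      # per-column residuals
    P = R.clone()                      # per-column search directions
    rs_old = (R * R).sum(dim=0)        # squared residual norm per column
    for _ in range(max_iter):
        AP = A @ P
        alpha = rs_old / (P * AP).sum(dim=0)   # step size per column
        X = X + alpha * P
        R = R - alpha * AP
        rs_new = (R * R).sum(dim=0)
        if rs_new.max().sqrt() < tol:          # all columns converged
            break
        P = R + (rs_new / rs_old) * P          # conjugate direction update
        rs_old = rs_new
    return X
```

A C++ extension (as in torch_cg) would follow the same loop; autograd support additionally needs the gradients with respect to A and B that the author mentions.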
| feature,triaged,module: derivatives,function request | low | Minor |
419,802,862 | vscode | errors in dependsOn background tasks do not prevent subsequent tasks from executing | <!-- Please search existing issues to avoid creating duplicates. -->
<!-- Also please test using the latest insiders build to make sure your issue has not already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- Use Help > Report Issue to prefill these. -->
- VSCode Version: 1.32.1
- OS Version: 18309
Steps to Reproduce:
create launch.json:
```
{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"type": "chrome",
"request": "launch",
"name": "Launch Chrome against localhost",
"url": "http://localhost:8080",
"webRoot": "${workspaceFolder}\\src\\FrontEnd\\build",
"preLaunchTask": "core"
}
]
}
```
2. create task.json:
```
{
// See https://go.microsoft.com/fwlink/?LinkId=733558
// for the documentation about the tasks.json format
"version": "2.0.0",
"tasks": [
{
"label": "core",
"type": "npm",
"script": "start",
"path": "src/FrontEnd/core/",
"isBackground": true,
"dependsOn":["loss"],
"problemMatcher":{
"owner": "custom",
"pattern":[
{
"regexp": "something not exists",
"file": 1,
"location": 2,
"message": 3
}
],
"background": {
"activeOnStart": true,
"beginsPattern": "[email protected] start",
"endsPattern": "created ..\\\\build\\\\vendor.js"
}
}
},
{
"label": "loss",
"type": "npm",
"script": "watch",
"path": "src/FrontEnd/loss/",
"isBackground": true,
"problemMatcher":{
"owner": "custom",
"pattern":[
{
"regexp": "something not exists",
"file": 1,
"location": 2,
"message": 3
}
],
"background": {
"activeOnStart": true,
"beginsPattern": "[email protected] start",
"endsPattern": "waiting for changes"
}
}
}
]
}
```
I can use beginsPattern/endsPattern with no issue if the background task is launched directly via preLaunchTask, but if the same task has a dependsOn entry, it will run the dependencies without ever detecting their status via endsPattern, and the task will hang there. It looks like a bug to me.
I tried the new terminal setting "terminal.integrated.windowsEnableConpty": false/true; no difference.
I tried presentation.panel: "dedicated"; no difference.
Background tasks launched directly via preLaunchTask work nicely, but tasks referenced in a dependsOn property do not.
<!-- Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
| feature-request,tasks | medium | Critical |
419,854,082 | pytorch | [caffe2] resnet_trainer example fails with MNIST dataset, when parameter num_channels=1 is provided | ## 🐛 Bug
The resnet50 trainer fails when the parameter `--num_channels 1` is provided:
```
python resnet50_trainer.py --train_data ~/mnist_train_lmdb --num_gpus 4 --batch_size 64 --num_channels 1
...
...
INFO:ResNe(X)t_trainer:Starting epoch 0/1
[E net_async_base.cc:377] [enforce fail at conv_op_cudnn.cc:555] filter.dim32(1) == C / group_. 1 vs 3
Error from operator:
input: "gpu_1/data" input: "gpu_1/conv1_w" output: "gpu_1/conv1" name: "" type: "Conv" arg { name: "kernel" i: 7 } arg { name: "order" s: "NCHW" } arg { name: "enable_tensor_core" i: 0 } arg { name: "stride" i: 2 } arg { name: "pad" i: 3 } arg { name: "exhaustive_search" i: 1 } arg { name: "ws_nbytes_limit" i: 67108864 } device_option { device_type: 1 device_id: 1 } engine: "CUDNN"frame #0: c10::ThrowEnforceNotMet(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, void const*)
```
## To Reproduce
Execute the following command with MNIST dataset:
`python resnet50_trainer.py --train_data ~/mnist_train_lmdb --num_gpus 4 --batch_size 64 --num_channels 1`
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
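The enforce message (`filter.dim32(1) == C / group_. 1 vs 3`) indicates the first conv's weights are still built for 3 input channels while the data has 1. A minimal sketch of the same mismatch in PyTorch (shapes chosen for illustration, not taken from the trainer):

```python
import torch

# A conv layer whose weights expect 3 input channels, as in the default
# ResNet-50 stem, receives a 1-channel MNIST-shaped batch and raises.
conv = torch.nn.Conv2d(in_channels=3, out_channels=64,
                       kernel_size=7, stride=2, padding=3)
x = torch.randn(4, 1, 28, 28)  # NCHW batch with C=1
try:
    conv(x)
except RuntimeError as e:
    print("channel mismatch:", e)
```

This suggests `--num_channels` is not being propagated into the model builder's first conv layer.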
## Expected behavior
The training is expected to complete without any errors.
## Environment
```PyTorch version: 1.0.1 (with some local changes)
Is debug build: No
CUDA used to build PyTorch: 10.1
OS: Red Hat Enterprise Linux Server 7.6 (Maipo)
GCC version: (GCC) 4.8.5 20150623 (Red Hat 4.8.5-36)
CMake version: Could not collect
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 10.1
GPU models and configuration:
GPU 0: Tesla V100-SXM2-16GB
GPU 1: Tesla V100-SXM2-16GB
GPU 2: Tesla V100-SXM2-16GB
GPU 3: Tesla V100-SXM2-16GB
Nvidia driver version: 418.29
cuDNN version: Could not collect
| caffe2 | low | Critical |
419,910,995 | flutter | Allow iOS project to access flutter_assets in App.framework as bundle resource | In the latest version of Flutter, flutter_assets'folder reference is removed in iOS project,
but our project uses webview to show local web page, thus we can't read the web assets correctly.
The web assets are placed in ${FLUTTER_PROJECT}/web, and added to pubspec.yaml correctly,
The reason is that iOS required adding folder reference to Xcode if we want our app treat the folder's assets as simple assets (not code), and your feature has removed flutter_assets'folder reference make our webview not work well
If I add the folder reference manually , It works well just like before.
the new flutter:
(screenshot)
the old:
(screenshot)
flutter doctor:
[β] Flutter (Channel beta, v1.2.1, on Mac OS X 10.14 18A391, locale zh-Hans-CN)
β’ Flutter version 1.2.1 at /Users/blockmake/Documents/Test/flutter
β’ Framework revision 8661d8aecd (4 weeks ago), 2019-02-14 19:19:53 -0800
β’ Engine revision 3757390fa4
β’ Dart version 2.1.2 (build 2.1.2-dev.0.0 0a7dcf17eb)
[!] Android toolchain - develop for Android devices (Android SDK version 28.0.3)
β’ Android SDK at //Users/blockmake/Library/Android/sdk
β’ Android NDK location not configured (optional; useful for native profiling
support)
β’ Platform android-28, build-tools 28.0.3
β’ ANDROID_HOME = //Users/blockmake/Library/Android/sdk
β’ Java binary at: /Applications/Android
Studio.app/Contents/jre/jdk/Contents/Home/bin/java
β’ Java version OpenJDK Runtime Environment (build
1.8.0_152-release-1024-b01)
! Some Android licenses not accepted. To resolve this, run: flutter doctor
--android-licenses
[β] iOS toolchain - develop for iOS devices (Xcode 10.1)
β’ Xcode at /Applications/Xcode.app/Contents/Developer
β’ Xcode 10.1, Build version 10B61
β’ ios-deploy 1.9.4
β’ CocoaPods version 1.5.0
[β] Android Studio (version 3.1)
β’ Android Studio at /Applications/Android Studio.app/Contents
β’ Flutter plugin version 29.0.1
β’ Dart plugin version 173.4700
β’ Java version OpenJDK Runtime Environment (build
1.8.0_152-release-1024-b01)
[!] VS Code (version 1.31.1)
β’ VS Code at /Applications/Visual Studio Code.app/Contents
β Flutter extension not installed; install from
https://marketplace.visualstudio.com/items?itemName=Dart-Code.flutter
[β] Connected device (2 available)
β’ littleStrong β’ 00008020-00194D9A01F0002E β’ ios β’ iOS 12.1.4
β’ iPhone XR β’ 0CBBCCD6-8FBD-4544-A36F-8CC3163FC12C β’ ios β’ iOS 12.1
(simulator)
| platform-ios,tool,d: api docs,t: xcode,P3,team-ios,triaged-ios | low | Minor |
419,960,661 | node | Debug worker_threads | There is no possible to debug worker threads, only through event messages to main thread
You can't step into this worker in the chrome dev tools
```js
const worker = new Worker('./worker.js', {});
```
`debugger` statements also don't work inside the thread. | inspector,worker | medium | Critical |
420,010,444 | go | x/net/html: unexpected whitespace rendering of html | ### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.12 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/tcurdt/Library/Caches/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/tcurdt/.go"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/Cellar/go/1.12/libexec"
GOTMPDIR=""
GOTOOLDIR="/usr/local/Cellar/go/1.12/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/pf/7vhqx5bn41qddypw08w9jc4w0000gn/T/go-build451640899=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>
### What did you do?
I am parsing and then rendering html.
```
package main

import (
	"bytes"
	"fmt"
	"os"
	"strings"

	"golang.org/x/net/html"
)

func main() {
	var input = `<!DOCTYPE html>
<html>
<head>
<title>Title of the document</title>
</head>
<body>
body content <p>more content</p>
</body>
</html>`
	doc, err := html.Parse(strings.NewReader(input))
	if err != nil {
		fmt.Fprintf(os.Stderr, "error parsing: %s\n", err.Error())
		os.Exit(1)
	}
	buf := bytes.NewBufferString("")
	if err := html.Render(buf, doc); err != nil {
		fmt.Fprintf(os.Stderr, "error rendering: %s\n", err.Error())
		os.Exit(1)
	}
	fmt.Println("--")
	fmt.Print(input)
	fmt.Println("--")
	fmt.Println("--")
	fmt.Print(buf.String())
	fmt.Println("--")
}
```
### What did you expect to see?
With the docs saying:
> Rendering is done on a 'best effort' basis: calling Parse on the output of Render will always result in something similar to the original tree, but it is not necessarily an exact clone unless the original tree was 'well-formed'.
Given that the HTML is well-formed, I'd expect the output to be the same as the input.
### What did you see instead?
Instead I am seeing changes in whitespace:
```
--
<!DOCTYPE html>
<html>
<head>
<title>Title of the document</title>
</head>
<body>
body content <p>more content</p>
</body>
</html>--
--
<!DOCTYPE html><html><head>
<title>Title of the document</title>
</head>
<body>
body content <p>more content</p>
</body></html>--
```
IMO there should be a test case verifying that the output matches the input for the documented case. | NeedsInvestigation | medium | Critical |
420,017,651 | opencv | some 4.0.0.1 examples show grey empty blank highgui GTK 2 and GTK 3 windows on armhf Debian Stretch | - OpenCV => 4.0.0.1
- Operating System / Platform => armhf Linux 32 Bit kernel (Raspberry pi3B+)
- Compiler => gcc
- OpenCV => :grey_question: GTK window renders no example content, but opengl example does
- Operating System / Platform => :grey_question: window blips content if moved
- Compiler => :grey_question: can't disable VTK for build
Display bug: the GTK window renders no example content:
test1: disable openGL, with gtk 3.22.11
test2: with openGL, with gtk 2.x
test3: with openGL, with gtk 2.x, disable opencl, disable carotene
notes:
1. for each test I tried both the marco GPU and software desktop compositors.
2. cpp_lkdemo seems to blip a camera frame when the window is dragged
3. opengl_interop seems to show the video feed (OpenCL mode will throw errors, but that is expected for the flags I used)
Current test3 build (dirty due to CMake platform specific path includes):
```
-- General configuration for OpenCV 4.0.1-dev =====================================
-- Version control: 4.0.1-340-ga1ef61266-dirty
--
-- Extra modules:
-- Location (extra): /home/pi/SRC/opencv_contrib/modules
-- Version control (extra): 33f18dd-dirty
--
-- Platform:
-- Timestamp: 2019-03-11T00:36:12Z
-- Host: Linux 4.14.52-rt34-v7+ armv7l
-- CMake: 3.7.2
-- CMake generator: Unix Makefiles
-- CMake build tool: /usr/bin/make
-- Configuration: RELEASE
--
-- CPU/HW features:
-- Baseline: VFPV3 NEON
-- requested: DETECT
-- required: VFPV3 NEON
--
-- C/C++:
-- Built as dynamic libs?: YES
-- C++ Compiler: /usr/bin/c++ (ver 6.3.0)
-- C++ flags (Release): -DTBB_USE_GCC_BUILTINS=1 -D__TBB_64BIT_ATOMICS=0 -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Winit-self -Wsuggest-override -Wno-delete-non-virtual-dtor -Wno-comment -fdiagnostics-show-option -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -mfpu=neon -mfp16-format=ieee -fvisibility=hidden -fvisibility-inlines-hidden -fopenmp -O3 -DNDEBUG -DNDEBUG
-- C++ flags (Debug): -DTBB_USE_GCC_BUILTINS=1 -D__TBB_64BIT_ATOMICS=0 -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Winit-self -Wsuggest-override -Wno-delete-non-virtual-dtor -Wno-comment -fdiagnostics-show-option -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -mfpu=neon -mfp16-format=ieee -fvisibility=hidden -fvisibility-inlines-hidden -fopenmp -g -O0 -DDEBUG -D_DEBUG
-- C Compiler: /usr/bin/cc
-- C flags (Release): -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wuninitialized -Winit-self -Wno-comment -fdiagnostics-show-option -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -mfpu=neon -mfp16-format=ieee -fvisibility=hidden -fopenmp -O3 -DNDEBUG -DNDEBUG
-- C flags (Debug): -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wuninitialized -Winit-self -Wno-comment -fdiagnostics-show-option -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -mfpu=neon -mfp16-format=ieee -fvisibility=hidden -fopenmp -g -O0 -DDEBUG -D_DEBUG
-- Linker flags (Release):
-- Linker flags (Debug):
-- ccache: NO
-- Precompiled headers: NO
-- Extra dependencies: /usr/local/caffe/lib/libcaffe.so /usr/lib/arm-linux-gnueabihf/libglog.so /usr/local/lib/libprotobuf.so dl m pthread rt /usr/lib/arm-linux-gnueabihf/libGLU.so /usr/lib/arm-linux-gnueabihf/libGL.so
-- 3rdparty dependencies:
--
-- OpenCV modules:
-- To be built: aruco bgsegm bioinspired calib3d ccalib cnn_3dobj core datasets dnn dnn_objdetect dpm face features2d flann freetype fuzzy gapi hdf hfs highgui img_hash imgcodecs imgproc java java_bindings_generator line_descriptor ml objdetect optflow phase_unwrapping photo plot python2 python3 python_bindings_generator quality reg rgbd saliency sfm shape stereo stitching structured_light superres surface_matching text tracking ts video videoio videostab viz xfeatures2d ximgproc xobjdetect xphoto
-- Disabled: world
-- Disabled by dependency: -
-- Unavailable: cudaarithm cudabgsegm cudacodec cudafeatures2d cudafilters cudaimgproc cudalegacy cudaobjdetect cudaoptflow cudastereo cudawarping cudev cvv js matlab ovis
-- Applications: tests perf_tests examples apps
-- Documentation: NO
-- Non-free algorithms: YES
--
-- GUI:
-- GTK+: YES (ver 2.24.31)
-- GThread : YES (ver 2.50.3)
-- GtkGlExt: YES (ver 1.2.0)
-- OpenGL support: YES (/usr/lib/arm-linux-gnueabihf/libGLU.so /usr/lib/arm-linux-gnueabihf/libGL.so)
-- VTK support: YES (ver 6.3.0)
--
-- Media I/O:
-- ZLib: /usr/lib/arm-linux-gnueabihf/libz.so (ver 1.2.8)
-- JPEG: /usr/lib/arm-linux-gnueabihf/libjpeg.so (ver 62)
-- WEBP: /usr/lib/arm-linux-gnueabihf/libwebp.so (ver encoder: 0x0209)
-- PNG: /usr/lib/arm-linux-gnueabihf/libpng.so (ver 1.6.28)
-- TIFF: build (ver 42 - 4.0.10)
-- JPEG 2000: /usr/lib/arm-linux-gnueabihf/libjasper.so (ver 1.900.1)
-- OpenEXR: /usr/lib/arm-linux-gnueabihf/libImath.so /usr/lib/arm-linux-gnueabihf/libIlmImf.so /usr/lib/arm-linux-gnueabihf/libIex.so /usr/lib/arm-linux-gnueabihf/libHalf.so /usr/lib/arm-linux-gnueabihf/libIlmThread.so (ver 2.2.0)
-- HDR: YES
-- SUNRASTER: YES
-- PXM: YES
-- PFM: YES
--
-- Video I/O:
-- DC1394: YES (2.2.5)
-- FFMPEG: YES
-- avcodec: YES (57.64.101)
-- avformat: YES (57.56.101)
-- avutil: YES (55.34.101)
-- swscale: YES (4.2.100)
-- avresample: YES (3.1.0)
-- GStreamer: YES (1.10.4)
-- v4l/v4l2: YES (linux/videodev2.h)
--
-- Parallel framework: TBB (ver 4.3 interface 8006)
--
-- Trace: YES (built-in)
--
-- Other third-party libraries:
-- Lapack: YES (/usr/lib/libopenblas.so)
-- Eigen: YES (ver 3.2.10)
-- Custom HAL: NO
-- Protobuf: /usr/local/lib/libprotobuf.so (3.7.0)
--
-- Python 2:
-- Interpreter: /usr/bin/python2.7 (ver 2.7.13)
-- Libraries: /usr/lib/arm-linux-gnueabihf/libpython2.7.so (ver 2.7.13)
-- numpy: /usr/local/lib/python2.7/dist-packages/numpy/core/include (ver 1.15.3)
-- install path: lib/python2.7/dist-packages/cv2/python-2.7
--
-- Python 3:
-- Interpreter: /usr/bin/python3 (ver 3.5.3)
-- Libraries: /usr/lib/arm-linux-gnueabihf/libpython3.5m.so (ver 3.5.3)
-- numpy: /usr/lib/python3/dist-packages/numpy/core/include (ver 1.12.1)
-- install path: lib/python3.5/dist-packages/cv2/python-3.5
--
-- Python (for build): /usr/bin/python2
-- Pylint: /usr/bin/pylint (ver: 1.6.5, checks: 168)
-- Flake8: /usr/bin/flake8 (ver: 3.2.1)
--
-- Java:
-- ant: /usr/bin/ant (ver 1.9.9)
-- JNI: /usr/lib/jvm/default-java/include /usr/lib/jvm/default-java/include/linux /usr/lib/jvm/default-java/include
-- Java wrappers: YES
-- Java tests: YES
--
-- Install to: /usr/local
-- -----------------------------------------------------------------
```
Current build steps:
```
#note: version 3.5 included in opencv repo will fail to build
#protobuf 3,7.0
cd ~/SRC
git clone --depth 1 https://github.com/protocolbuffers/protobuf.git
cd ~/SRC/protobuf
git submodule update --init --recursive
./autogen.sh
./configure
make
sudo make install
sudo ldconfig
#caffe
cd /usr/lib/arm-linux-gnueabihf
sudo ln -s libhdf5_serial.so.100.0.1 libhdf5.so
sudo ln -s libhdf5_serial_hl.so.100.0.0 libhdf5_hl.so
cd ~/SRC/caffe/python
for req in $(cat requirements.txt); do sudo pip install $req; done
git clone --depth 1 https://github.com/BVLC/caffe.git caffe_head
cd ~/SRC/caffe_head/
mkdir -p ~/SRC/caffe_head/build
cd ~/SRC/caffe_head/build
cmake -D CPU_ONLY=1 -D USE_OPENCV=0 -D CMAKE_INSTALL_PREFIX=/usr/local/caffe ..
make all
make pycaffe
sudo make install
sudo ldconfig
#disable opencl carotene
cmake -D BUILD_TIFF=ON \
-D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D OPENCV_EXTRA_MODULES_PATH=/home/pi/SRC/opencv_contrib/modules \
-D_GLIBCXX_USE_CXX11_ABI=0 -D WITH_UNICAP=ON -D BLAS=open \
-D WITH_MATLAB=OFF -D WITH_QT=OFF -D WITH_TESTS=OFF -D ENABLE_PRECOMPILED_HEADERS=OFF \
-D BUILD_opencv_gpulegacy=OFF \
-D INSTALL_PYTHON_EXAMPLES=ON \
-D INSTALL_C_EXAMPLES=ON \
-D BUILD_EXAMPLES=ON \
-D OPENCV_ENABLE_NONFREE=ON \
-D WITH_CAFFE=ON -D BUILD_CAFFE=OFF -D Caffe_LIBS=/usr/local/caffe/lib/libcaffe.so -D Caffe_INCLUDE_DIR=/usr/local/caffe/include \
-D WITH_CERES=ON -D BUILD_CERES=OFF -D CERES_LIBS=/usr/local/lib/libceres.a -D CERES_INCLUDE_DIR=/usr/local/include \
-D Atlas_LAPACK_LIBRARY=/usr/lib/liblapack.so \
-D ATLAS_INCLUDE_DIR==/usr/include/atlas/ \
-D ENABLE_NEON=ON \
-D ENABLE_VFPV3=ON \
-D CMAKE_CXX_FLAGS="-DTBB_USE_GCC_BUILTINS=1 -D__TBB_64BIT_ATOMICS=0" \
-D WITH_GTK=ON -DWITH_GTK_2_X=ON \
-D WITH_OPENGL=ON \
-D WITH_CAROTENE=OFF -D WITH_VTK=ON -D WITH_OPENCL=OFF -D WITH_OPENCLAMDFFT=OFF -D WITH_OPENCLAMDBLAS=OFF -D WITH_VA_INTEL=OFF \
-D ocv_add_testdata=ON \
-D CPACK_BINARY_DEB=ON \
-D PROTOBUF_UPDATE_FILES=ON \
-D BUILD_PROTOBUF=OFF \
-D PROTOBUF_MIN_PROTOC_VERSION=3004000 \
-D PROTOBUF_LIBRARY=/usr/local/lib/libprotobuf.so \
-D PROTOBUF_LITE_LIBRARY=/usr/local/lib/libprotobuf-lite.so \
-D PROTOBUF_PROTOC_EXECUTABLE=/usr/local/bin/protoc \
-D Protobuf_PROTOC_EXECUTABLE=/usr/local/bin/protoc \
-D PROTOBUF_PROTOC_LIBRARY=/usr/local/lib/libprotoc.so \
-D Protobuf_LIBS=/usr/local/lib/libprotobuf.so \
-D PYTHON2_INCLUDE_DIR2=/usr/local/caffe/include \
-D PYTHON3_INCLUDE_DIR2=/usr/local/caffe/include \
-D PYTHON_DEFAULT_EXECUTABLE=$(which python2) \
-D BUILD_opencv_python2=ON \
-D BUILD_opencv_python3=ON \
-D INSTALL_PYTHON_EXAMPLES=ON -D BUILD_opencv_dnn=ON \
-D BUILD_opencv_world=OFF -D BUILD_opencv_cnn_3dobj=ON \
-D WITH_EIGEN=ON -D BUILD_opencv_gs=ON -D BUILD_opencv_ovis=ON -D BUILD_opencv_gpu=ON \
-D BUILD_opencv_gpuarithm=ON -D BUILD_opencv_gpubgsegm=ON -D BUILD_opencv_gpucodec=ON \
-D BUILD_opencv_gpufeatures2d=ON -D BUILD_opencv_gpufilters=ON -D BUILD_opencv_gpuimgproc=ON \
-D BUILD_opencv_gpuoptflow=ON -D BUILD_opencv_gpustereo=ON -D BUILD_opencv_gpuwarping=ON \
-D WITH_OPENMP=ON -D WITH_TBB=ON -D WITH_V4L=ON \
-D CMAKE_INSTALL_PREFIX=/usr/local -D INSTALL_C_EXAMPLES=ON -D BUILD_EXAMPLES=ON
```
| priority: low,category: highgui-gui,platform: arm | low | Critical |
420,055,284 | rust | Tracking issue for `FromStr` trait usage in const fn | Because Rust already has macros that evaluate at compile time to strings, I think converting them to ints in a `const fn` would be a very nice feature.
e.g. `env!("SOMETHING").parse::<usize>().unwrap()`
----------------------
I opened this Issue as part of: https://github.com/rust-lang/rust/issues/57563
Interested to hear if other people want this and if it already exists as part of a different issue/RFC | T-libs-api,C-tracking-issue,A-const-eval | medium | Major |
420,056,307 | go | go/packages: add to standard library | We made go/build.Import keep working kind of for modules in Go 1.11,
but as we move toward modules on by default, we should be planning
to have go/packages available in the standard library as a replacement.
This will require making sure we are happy with the API freezing.
/cc @ianthehat @matloob | NeedsDecision,early-in-cycle | low | Major |
420,058,823 | go | net/http/httptest: clarify that Server.Client is only good for Server.URL | The docs don't make it clear that the http.Client returned by httptest.Server.Client is only good for hitting httptest.Server.URL.
That is, the http.Client's Transport doesn't redirect all dials of any hostname to the test server. (Maybe it should? Would that break existing users?)
/cc @Capstan
| Documentation,NeedsFix | low | Minor |
420,069,306 | youtube-dl | [linkedin:learning]Β can't download second section | ## Please follow the guide below
- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)
- Use the *Preview* tab to see what your issue will actually look like
---
### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2019.03.09*. If it's not, read [this FAQ entry](https://github.com/ytdl-org/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2019.03.09**
### Before submitting an *issue* make sure you have:
- [x] At least skimmed through the [README](https://github.com/ytdl-org/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/ytdl-org/youtube-dl#faq) and [BUGS](https://github.com/ytdl-org/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/ytdl-org/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
- [x] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser
### What is the purpose of your *issue*?
- [x] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
---
### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your *issue*
---
### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows:
Add the `-v` flag to **your command line** you run youtube-dl with (`youtube-dl -v <your command line>`), copy the **whole** output and insert it here. It should look similar to one below (replace it with **your** log inserted between triple ```):
```
youtube-dl --proxy [email protected]@REDACTED.com:80 \
--username [email protected] --password REDACTED \
-f 'bestvideo[height<=720]+bestaudio/best[height<=720]' \
-o '%(playlist_title)s/Lesson %(chapter_number)02d - %(chapter)s/%(playlist_index)s - %(title)s.%(ext)s' --restrict-filenames --verbose --add-metadata --write-sub --limit-rate 2M --min-sleep-interval 5 --max-sleep-interval 10 \
"https://www.linkedin.com/learning/photoshop-cc-2018-essential-training-the-basics"
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: [u'--proxy', u'[email protected]:[email protected]:80', u'--username', u'PRIVATE', u'--password', u'PRIVATE', u'-f', u'bestvideo[height<=720]+bestaudio/best[height<=720]', u'-o', u'%(playlist_title)s/Lesson %(chapter_number)02d - %(chapter)s/%(playlist_index)s - %(title)s.%(ext)s', u'--restrict-filenames', u'--verbose', u'--add-metadata', u'--write-sub', u'--limit-rate', u'2M', u'--min-sleep-interval', u'5', u'--max-sleep-interval', u'10', u'https://www.linkedin.com/learning/photoshop-cc-2018-essential-training-the-basics']
[debug] Encodings: locale ANSI_X3.4-1968, fs ANSI_X3.4-1968, out ANSI_X3.4-1968, pref ANSI_X3.4-1968
[debug] youtube-dl version 2019.03.09
[debug] Python version 2.7.15rc1 (CPython) - Linux-4.15.0-46-generic-x86_64-with-Ubuntu-18.04-bionic
[debug] exe versions: ffmpeg 3.4.4, ffprobe 3.4.4
[debug] Proxy map: {u'http': u'[email protected]:[email protected]:80', u'https': u'[email protected]:[email protected]:80'}
[linkedin:learning:course] Downloading login page
[linkedin:learning:course] Logging in
[linkedin:learning:course] Downloading JSON metadata
[download] Downloading playlist: Photoshop CC 2018 Essential Training: The Basics
[linkedin:learning:course] playlist Photoshop CC 2018 Essential Training: The Basics: Collected 63 video ids (downloading 63 of them)
[download] Downloading video 1 of 63
[linkedin:learning] Downloading login page
[linkedin:learning] Logging in
[linkedin:learning] welcome: Downloading 360p JSON metadata
[linkedin:learning] welcome: Downloading 540p JSON metadata
[linkedin:learning] welcome: Downloading 720p JSON metadata
[linkedin:learning] welcome: Downloading m3u8 information
[debug] Invoking downloader on u'https://files3.lynda.com/secure/courses/625922/VBR_MP4h264_main_HD720/625922_00_01_WX30_welPSE.mp4?bZylE9zHX0NXuLt37pG5nguuMkfIbHyoUxOhh_NxW9t6Jcia74AuY3g8RvnSuNSTU9chCakoiJDJy1WIXJTnQ9ykPGWvquamgnYYZowPynjIMJL3z0bZijOTAP_GrQN9Hmtwg4-2D2Ybd6HyvoZTWUSD3MAWyCz1IOObmmPdmmubGNRCS0sI1g'
[download] Sleeping 7.10 seconds...
[download] Destination: Photoshop_CC_2018_Essential_Training_-_The_Basics/Lesson 01 - Introduction/01 - Welcome.mp4
[download] 100% of 7.45MiB in 00:03
[ffmpeg] Adding metadata to 'Photoshop_CC_2018_Essential_Training_-_The_Basics/Lesson 01 - Introduction/01 - Welcome.mp4'
[debug] ffmpeg command line: ffmpeg -y -loglevel 'repeat+info' -i 'file:Photoshop_CC_2018_Essential_Training_-_The_Basics/Lesson 01 - Introduction/01 - Welcome.mp4' -c copy -metadata 'date=20171005' -metadata 'purl=https://www.linkedin.com/learning/photoshop-cc-2018-essential-training-the-basics/welcome' -metadata 'title=Welcome' 'file:Photoshop_CC_2018_Essential_Training_-_The_Basics/Lesson 01 - Introduction/01 - Welcome.temp.mp4'
[download] Downloading video 2 of 63
[linkedin:learning] what-you-should-know: Downloading 360p JSON metadata
[linkedin:learning] what-you-should-know: Downloading 540p JSON metadata
[linkedin:learning] what-you-should-know: Downloading 720p JSON metadata
[linkedin:learning] what-you-should-know: Downloading m3u8 information
[debug] Invoking downloader on u'https://files3.lynda.com/secure/courses/625922/VBR_MP4h264_main_HD720/625922_00_02_XR30_whatKnow.mp4?a3tBiueHfR3hvgwkhccZuVyQ_2sulkhNy1CdbY5MjCeQsMZPLF0BoHJsd7Zr1K0uxIRDV2JrYHaAIsDL9S6km_xwTWbsIT0iYqFpk1FJ3QFxEOKYcVWLqYtEz_fH6RtrWKQDhdKehNNsRSupcDy6_5EWgMP8Tst1ctaR9sGNXfII-Ub7ZPsazryX'
[download] Sleeping 6.67 seconds...
[download] Destination: Photoshop_CC_2018_Essential_Training_-_The_Basics/Lesson 01 - Introduction/02 - What_you_should_know.mp4
[download] 100% of 661.78KiB in 00:00
[ffmpeg] Adding metadata to 'Photoshop_CC_2018_Essential_Training_-_The_Basics/Lesson 01 - Introduction/02 - What_you_should_know.mp4'
[debug] ffmpeg command line: ffmpeg -y -loglevel 'repeat+info' -i 'file:Photoshop_CC_2018_Essential_Training_-_The_Basics/Lesson 01 - Introduction/02 - What_you_should_know.mp4' -c copy -metadata 'date=20171005' -metadata 'purl=https://www.linkedin.com/learning/photoshop-cc-2018-essential-training-the-basics/what-you-should-know' -metadata 'title=What you should know' 'file:Photoshop_CC_2018_Essential_Training_-_The_Basics/Lesson 01 - Introduction/02 - What_you_should_know.temp.mp4'
[download] Downloading video 3 of 63
[linkedin:learning] using-the-exercise-files: Downloading 360p JSON metadata
[linkedin:learning] using-the-exercise-files: Downloading 540p JSON metadata
[linkedin:learning] using-the-exercise-files: Downloading 720p JSON metadata
[linkedin:learning] using-the-exercise-files: Downloading m3u8 information
[debug] Invoking downloader on u'https://files3.lynda.com/secure/courses/625922/VBR_MP4h264_main_HD720/625922_00_03_XR15_exFiles_519289.mp4?Y8vBhw2GaHpZH0vNNy029jVen0P54ARnb9jrCfBuF4h81i6bDDhWBnw3C8r2DbhErJwnT3phpJ2tZklkePg72Gdk9dSeG3Pgo-ZwujwgU6qi7uAX05-7QoxieTafN9lZTXve_1lfRPip_oCG3OlCXbfd8UOmPK1x0yW6RR-bm_neTDoWOB8DcGJNYGbK_g8y'
[download] Sleeping 7.69 seconds...
[download] Destination: Photoshop_CC_2018_Essential_Training_-_The_Basics/Lesson 01 - Introduction/03 - Using_the_exercise_files.mp4
[download] 100% of 558.49KiB in 00:00
[ffmpeg] Adding metadata to 'Photoshop_CC_2018_Essential_Training_-_The_Basics/Lesson 01 - Introduction/03 - Using_the_exercise_files.mp4'
[debug] ffmpeg command line: ffmpeg -y -loglevel 'repeat+info' -i 'file:Photoshop_CC_2018_Essential_Training_-_The_Basics/Lesson 01 - Introduction/03 - Using_the_exercise_files.mp4' -c copy -metadata 'date=20171005' -metadata 'purl=https://www.linkedin.com/learning/photoshop-cc-2018-essential-training-the-basics/using-the-exercise-files' -metadata 'title=Using the exercise files' 'file:Photoshop_CC_2018_Essential_Training_-_The_Basics/Lesson 01 - Introduction/03 - Using_the_exercise_files.temp.mp4'
[download] Downloading video 4 of 63
[linkedin:learning] opening-documents-in-photoshop: Downloading 360p JSON metadata
[linkedin:learning] opening-documents-in-photoshop: Downloading 540p JSON metadata
[linkedin:learning] opening-documents-in-photoshop: Downloading 720p JSON metadata
ERROR: No video formats found; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
Traceback (most recent call last):
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 794, in extract_info
ie_result = ie.extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 522, in extract
ie_result = self._real_extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/linkedin.py", line 126, in _real_extract
self._sort_formats(formats, ('width', 'height', 'source_preference', 'tbr', 'abr'))
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 1319, in _sort_formats
raise ExtractorError('No video formats found')
ExtractorError: No video formats found; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
```
---
### If the purpose of this *issue* is a *site support request* please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**):
- Single video: https://www.youtube.com/watch?v=BaW_jenozKc
- Single video: https://youtu.be/BaW_jenozKc
- Playlist: https://www.youtube.com/playlist?list=PL4lCao7KL_QFVb7Iudeipvc2BCavECqzc
Note that **youtube-dl does not support sites dedicated to [copyright infringement](https://github.com/ytdl-org/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free)**. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
---
### Description of your *issue*, suggested solution and other information
The formatting of the `playlist_title` has changed. The same goes for the `title`: it didn't always give you underscores in the filenames, just spaces. (`Photoshop_CC_2018_Essential_Training_-_The_Basics`) But the main issue is that it seems to download the first "section" of the videos just fine, but as soon as the new section begins it spits out this error.
| account-needed | medium | Critical |
420,072,833 | TypeScript | Undocumented = operator syntax in Generic Constraining | In the declaration file for Fastify (https://github.com/fastify/fastify/blob/master/fastify.d.ts) there is some TypeScript code I do not recognize and cannot find another example of in the TypeScript documentation:
```typescript
declare function fastify<
HttpServer extends (http.Server | http2.Http2Server) = http.Server,
HttpRequest extends (http.IncomingMessage | http2.Http2ServerRequest) = http.IncomingMessage,
HttpResponse extends (http.ServerResponse | http2.Http2ServerResponse) = http.ServerResponse
>(opts?: fastify.ServerOptions): fastify.FastifyInstance<HttpServer, HttpRequest, HttpResponse>;
```
I recognize the Generic constraint syntax with the `extends` keyword, but what about the `= http.Server` part? Is this like a default parameter or something?
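For what it's worth, the `= http.Server` part appears to be a generic parameter default (a feature added in TypeScript 2.3): the type after `=` is used when the caller supplies no type argument and none can be inferred. A minimal sketch, with made-up names for illustration:

```typescript
// `= string` is a *default type argument*, used when no explicit type
// argument is given and none can be inferred from the call.
function makeBox<T extends string | number = string>(value?: T): T[] {
  return value === undefined ? [] : [value];
}

const byDefault = makeBox();          // T falls back to the default: string[]
const explicit = makeBox<number>(42); // an explicit argument overrides the default
```

Under that reading, calling `fastify()` with no type arguments would behave as if `fastify<http.Server, http.IncomingMessage, http.ServerResponse>()` had been written.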
Whatever this syntax means, it should be better documented.
I tried googling for this syntax by searching
- `typescript generic constraint`
- `typescript generic constraint assignment`
- `typescript generic defaults` | Docs | low | Minor |
420,073,098 | pytorch | Build compact libtorch from source with cmake | Hi. I am currently working with the PyTorch C++ API and I'm really enjoying the flexibility so far. However, for my use case, I would like the libraries to be as small as possible.
When building libtorch from source (using build_libtorch.py), what can be done to minimize the final package size? I have searched around but there does not seem to be any info related to this (other than cmake compiler optimisations). | module: build,module: cpp,triaged | medium | Major |
420,076,607 | flutter | Explore splitting foundation to a separate package without strict dependency on dart:ui | Internal: b/128283067
See https://github.com/flutter/flutter/blob/master/packages/flutter/lib/src/foundation/README.md
We have a customer request to consume parts of Flutter's foundation package in a Dart isolate that isn't using Flutter - specifically to use ValueNotifier/ChangeNotifier.
Today there are only a small handful of dependencies on dart:ui in foundation, and we already know we want to get rid of them. If we moved foundation to another package (in the flutter/flutter repo), we may be able to make dart:ui a conditional import so that we could have VoidCallback, hashValues, hashList, and lerpDouble pass through to dart:ui if it's available, or define them in the foundation package otherwise. The rest of flutter/flutter could then take whatever is exported from the foundation package rather than from dart:ui.
/cc @mehmetf @goderbauer @tvolkert @Hixie | framework,customer: dream (g3),c: proposal,P2,team-framework,triaged-framework | low | Critical |
420,085,163 | go | x/build/cmd/coordinator: support builders that only run on subrepos | It's currently not possible to configure a builder that only builds one or more subrepos and not the "go" repo.
This is affecting `linux-amd64-androidemu` and `darwin-amd64-wikofever` which aren't building the `mobile` repo currently, despite being configured to do so.
Two reasons why they aren't:
* subrepos always wait for a make.bash snapshot (this should be conditional on "go" being configured to even run for that repo)
* because https://build.golang.org/ never populates the subrepo tables' columns with a builder name if it doesn't also show up in the top ("go") table. And the coordinator uses a json version of that same page (https://build.golang.org/?mode=json) to find work to do.
For the first, it should also be possible to say that a given builder can also use a built make.bash from a different builder. e.g. `linux-amd64-androidemu` can use a built make.bash tarball from `linux-amd64`.
/cc @eliasnaur @dmitshur | Builders,NeedsFix,mobile | low | Minor |
420,086,453 | go | time: Parse behaves inconsistently when parsing numerical timezones with an "MST" format string | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
Local machine:
<pre>
$ go version
go version go1.12 darwin/amd64
</pre>
Docker container:
<pre>
$ go version
go version go1.12 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes.
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/nmooney/Library/Caches/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/nmooney/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/Cellar/go/1.12/libexec"
GOTMPDIR=""
GOTOOLDIR="/usr/local/Cellar/go/1.12/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/vv/t3z4pbwn3pd36hm89fg1m8jc0000gn/T/go-build118701068=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>
### What did you do?
[Go Playground example here.](https://play.golang.org/p/XXspq1f7WXw)
I attempted to use the `time.Parse` format string `"Mon, 2 Jan 2006 15:04:05 MST"` to parse the date `"Tue, 12 Mar 2019 15:34:39 -0000"`.
### What did you expect to see?
I would expect `time.Parse` to fail to parse the string, since it's suffixed with `-0000` rather than an alphabetical time zone designator.
This behaves correctly when I try to parse a date with a non-zero offset, such as `"Tue, 12 Mar 2019 15:34:39 -0700"` (i.e. parsing fails).
### What did you see instead?
`time.Parse` successfully parses date strings with TZ offsets of zero given a format string that ends with `MST`, when it should likely fail if there is no alphabetical timezone designator. | NeedsDecision | medium | Critical |
420,101,812 | flutter | We've introduced new dependencies on dart:ui in foundation | https://github.com/flutter/flutter/pull/25594 and https://github.com/flutter/flutter/pull/27389 introduced new dependencies on `dart:ui` into `flutter/foundation`.
We need to either update https://github.com/flutter/flutter/blob/master/packages/flutter/lib/src/foundation/README.md if this is really ok, or move that code out of foundation.
/cc @sbaranov @matthew-carroll @goderbauer
Related: #29230 | framework,dependency: dart,P2,team-framework,triaged-framework | low | Minor |
420,108,124 | TypeScript | Adjacent .d.ts file stops working when using import path aliases |
**TypeScript Version:** 3.4.0-dev.201xxxxx
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:**
We make quite heavy use of the Typescript feature that allows one to declare a `.d.ts` file adjacent to a json file in order to manually declare the type of the json object (rather than have Typescript scan the json file and determine the type itself). This is very useful because (a) we can name types in the json file, where they would otherwise all be anonymous and (b) sometimes Typescript attempts to declare the type of the json file as an enormous union of all the string literal types in the file.
A few days ago we also introduced an import path alias so that we can use the .NET-style `~` sigil to refer to the root of the project. This works great but has, unfortunately, broken all the `.d.ts` files next to json files, and Typescript has reverted to scanning the json file for a type.
Here's a minimal example:
---
## resources.json
```json
{
"incorrect" : ""
}
```
---
## resources.json.d.ts
```typescript
type ResourceCollection = {
resources: Record<string, Resource>;
};
type Resource = {
value: string;
};
declare const resources: ResourceCollection;
export default resources;
```
---
## json-demo.ts
```typescript
import resources from './resources.json';
import resources2 from '~/resources.json';
type A = typeof resources;
type B = typeof resources2;
```
---
## tsconfig.json
```json
{
"compilerOptions": {
"allowSyntheticDefaultImports": true,
"moduleResolution": "node",
"esModuleInterop": true,
"target": "es2015",
"module": "es2015",
"strict": true,
"jsx": "react",
"allowJs": true,
"declaration": false,
"resolveJsonModule": true,
"baseUrl": ".",
"paths": {
"~/*": ["src/*"]
}
},
"include": ["src/**/*"],
"exclude": ["node_modules", "dist"]
}
```
**Expected behavior:**
The two imports `resources` and `resources2` refer to the same file, so `type A` and `type B` should have the same definition.
**Actual behavior:**
Hovering over `A` gives:
```typescript
type A = {
resources: Record<string, Resource>;
}
```
whereas `B` gives:
```typescript
type B = {
"incorrect": string;
}
```
**Playground Link:** <!-- A link to a TypeScript Playground "Share" link which demonstrates this behavior -->
**Related Issues:** <!-- Did you find other bugs that looked similar? -->
| Bug | low | Critical |
420,133,925 | react-native | `View.getGlobalVisibleRect()` is broken in some use cases | ## 🐛 Bug Report
With respect to the list of these commits - all of which involve child-views clipping on Android (AKA the _overflow_ functionality): https://github.com/facebook/react-native/commit/b81c8b, https://github.com/facebook/react-native/commit/6110a4c, https://github.com/facebook/react-native/commit/bbdc12e, https://github.com/facebook/react-native/commit/9c71952;
While in essence the approach makes sense for introducing overflow toggling support, having view-group components' clip-children flag hard-coded to `false` creates undesired side-effects, one of which is the breaking of the logic of the native `View.getGlobalVisibleRect()` method. In some cases -- as illustrated by [the screenshot from a demo app](https://pasteboard.co/I56NDcX.png) I've worked out for this -- the native method will return `true` for views that were effectively [properly clipped by `ReactViewGroup`](https://github.com/facebook/react-native/blob/b81c8b51fc6fe3c2dece72e3fe500e175613c5d4/ReactAndroid/src/main/java/com/facebook/react/views/view/ReactViewGroup.java#L828), and are in fact **not visible** on the screen.
In that specific use case, while the view-group holding the upper part of the screen (gray background) is limited in height, the scroll view stretches all the way down to the bottom of the screen. This way, items 14..17 are drawn under the lower part of the screen (blue) and - as far as Android is concerned - are NOT clipped by the parent view-group, as its clip-children flag is off.
> Note: the bug persists even if `ScrollView` is removed from the hierarchy.
Among other potential functionalities, the stability of the `View.getGlobalVisibleRect()` method is paramount for ui-testing projects to work -- such as https://github.com/wix/Detox (the one I'm working on right now), which is already integrated into React Native itself (for iOS).
## To Reproduce
The demo app is [available on github](https://github.com/d4vidi/RNClipVisibilityBugDemo)
## Expected Behavior
`View.getGlobalVisibleRect()` should return `false` for views that were clipped off by one of their view-group _parents_.
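The expectation can be stated device-independently: a child is globally visible only if its bounds survive intersection with every ancestor's bounds. A hedged, Android-free sketch of that check, with plain `java.awt.Rectangle` standing in for view bounds (illustrative only, not the actual React Native code):

```java
import java.awt.Rectangle;
import java.util.List;

public class ClipCheck {
    // A child should count as visible only if its rect survives intersection
    // with every ancestor's rect (i.e. no ancestor clips it away entirely).
    static boolean visibleThroughAncestors(Rectangle child, List<Rectangle> ancestors) {
        Rectangle visible = new Rectangle(child);
        for (Rectangle ancestor : ancestors) {
            visible = visible.intersection(ancestor);
            if (visible.isEmpty()) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Rectangle parent = new Rectangle(0, 0, 100, 50);       // height-limited container
        Rectangle onScreen = new Rectangle(0, 10, 100, 20);    // inside the container
        Rectangle drawnBelow = new Rectangle(0, 60, 100, 20);  // like items 14..17
        System.out.println(visibleThroughAncestors(onScreen, List.of(parent)));   // true
        System.out.println(visibleThroughAncestors(drawnBelow, List.of(parent))); // false
    }
}
```

Under this definition, the items drawn past the bottom of their height-limited parent would report not-visible regardless of the clip-children flag.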
## Code Example
**From the demo app:**
- [A screen layout where this bug occurs](https://github.com/d4vidi/RNClipVisibilityBugDemo/blob/master/App.js#L36)
- [The native logic](https://github.com/d4vidi/RNClipVisibilityBugDemo/blob/master/android/app/src/main/java/com/awesomeproject/CustomVisibilityInspector.kt#L47) that runs 'under the hood' in order to show the bug is there.
## Environment
```
info
React Native Environment Info:
System:
OS: macOS 10.14.3
CPU: (8) x64 Intel(R) Core(TM) i7-4870HQ CPU @ 2.50GHz
Memory: 409.30 MB / 16.00 GB
Shell: 3.2.57 - /bin/bash
Binaries:
Node: 9.11.2 - ~/.nvm/versions/node/v9.11.2/bin/node
Yarn: 1.3.2 - /usr/local/bin/yarn
npm: 6.8.0 - ~/.nvm/versions/node/v9.11.2/bin/npm
Watchman: 4.9.0 - /usr/local/bin/watchman
SDKs:
iOS SDK:
Platforms: iOS 12.1, macOS 10.14, tvOS 12.1, watchOS 5.1
Android SDK:
API Levels: 19, 21, 22, 23, 24, 25, 26, 27, 28
Build Tools: 19.1.0, 20.0.0, 21.1.2, 22.0.1, 23.0.1, 23.0.2, 23.0.3, 24.0.1, 25.0.0, 25.0.1, 25.0.2, 25.0.3, 26.0.0, 26.0.1, 26.0.2, 26.0.3, 27.0.1, 27.0.3, 28.0.0, 28.0.2, 28.0.3
System Images: android-16 | Google APIs Intel x86 Atom, android-19 | Google APIs Intel x86 Atom, android-22 | Google APIs Intel x86 Atom, android-22 | Google APIs Intel x86 Atom_64, android-23 | Google APIs Intel x86 Atom_64, android-25 | Google APIs Intel x86 Atom_64, android-26 | Google APIs Intel x86 Atom, android-26 | Google Play Intel x86 Atom, android-28 | Google Play Intel x86 Atom
IDEs:
Android Studio: 3.3 AI-182.5107.16.33.5199772
Xcode: 10.1/10B61 - /usr/bin/xcodebuild
npmPackages:
react: 16.8.3 => 16.8.3
react-native: 0.59.0 => 0.59.0
npmGlobalPackages:
react-native-cli: 2.0.1
``` | Stale,Platform: Android,Bug | high | Critical |
420,158,276 | TypeScript | Allow type assertions to consider typed index signatures | ## Search Terms
interface, index, signature, typed, assertion, conversion, partial,
## Suggestion
Currently type assertions do not seem to respect typed index signatures `[key: string]: any;` in interfaces or test for compatibility before attempting to convert.
I am proposing that type assertions would be able to convert this situation without error:
```ts
interface Test {
a: string;
b?: string;
[key: string]: any;
}
const a = {b: 'b', c: 3 } as Test;
```
## Use Cases
Currently to work around this I need to first create an intermediate variable that is typed as a Partial before making the assertion. I would prefer to just make the assertion.
## Examples
This example above will error because `{b: 'b', c: 3 }` is missing property `a`
```ts
interface Test {
a: string;
b?: string;
[key: string]: any;
}
const a = {b: 'b', c: 3 } as Test;
```
[playground](https://www.typescriptlang.org/play/#src=interface%20Test%20%7B%0D%0A%09a%3A%20string%3B%0D%0A%09b%3F%3A%20string%3B%0D%0A%09%5Bkey%3A%20string%5D%3A%20any%3B%0D%0A%7D%0D%0A%0D%0Aconst%20a%20%3D%20%7Bb%3A%20'b'%2C%20c%3A%203%20%7D%20as%20Test%3B)
If I remove `c: 3` from the object then there is no error and `a` is no longer required:
```ts
interface Test {
a: string;
b?: string;
[key: string]: any;
}
const a = {b: 'b' } as Test;
```
[playground](https://www.typescriptlang.org/play/#src=interface%20Test%20%7B%0D%0A%09a%3A%20string%3B%0D%0A%09b%3F%3A%20string%3B%0D%0A%09%5Bkey%3A%20string%5D%3A%20any%3B%0D%0A%7D%0D%0A%0D%0Aconst%20a%20%3D%20%7Bb%3A%20'b'%20%7D%20as%20Test%3B)
Currently to workaround this an intermediate **typed** value can be used so the subtype is already confirmed before the assertion:
```ts
interface Test {
a: string;
b?: string;
[key: string]: any;
}
const test: Partial<Test> = { b: 'b', c: 3 };
const a = test as Test
```
[playground](https://www.typescriptlang.org/play/#src=interface%20Test%20%7B%0D%0A%09a%3A%20string%3B%0D%0A%09b%3F%3A%20string%3B%0D%0A%09%5Bkey%3A%20string%5D%3A%20any%3B%0D%0A%7D%0D%0A%0D%0Aconst%20test%3A%20Partial%3CTest%3E%20%3D%20%7B%20b%3A%20'b'%2C%20c%3A%203%20%7D%3B%0D%0Aconst%20a%20%3D%20test%20as%20Test%3B)
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,In Discussion | low | Critical |
420,161,031 | pytorch | JIT torch.ones_like with dtype starts failing on master | ## 🐛 Bug
Similar symptom to #15478: when using `torch.jit.script`, calling `torch.ones_like()` with a dtype starts to give errors, while previously it worked.
## To Reproduce
```python
import torch
@torch.jit.script
def test_ones_like(x):
return torch.ones_like(x, dtype=torch.int64)
test_ones_like(torch.zeros(4, 5))
```
Run with `PYTORCH_JIT=0 python ones_like_test.py` works fine. `PYTORCH_JIT=1 python ones_like_test.py` gives error
```
RuntimeError:
arguments for call are not valid:
for operator aten::ones_like(Tensor self) -> Tensor:
keyword argument dtype unknown
for operator aten::ones_like(Tensor self, *, int dtype, int layout, Device device) -> Tensor:
argument layout not provided.
@torch.jit.script
def test_ones_like(x):
return torch.ones_like(x, dtype=torch.int64)
~~~~~~~~~~ <--- HERE
for call at:
@torch.jit.script
def test_ones_like(x):
return torch.ones_like(x, dtype=torch.int64)
~~~~~~~~~~ <--- HERE
```
## Expected behavior
Should not throw error.
## Environment
Collecting environment information...
PyTorch version: 1.1.0a0+ba06fa3
Is debug build: Yes
CUDA used to build PyTorch: 10.0.130
OS: Ubuntu 16.04.5 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609
CMake version: version 3.12.2
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: GeForce GTX 1070
Nvidia driver version: 410.78
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.4.1
/usr/local/cuda-10.0/lib64/libcudnn.so.7
Versions of relevant libraries:
[pip] numpy==1.15.4
[pip] torch==1.1.0a0+ba06fa3
[conda] blas 1.0 mkl
[conda] mkl 2018.0.3 1
[conda] mkl_fft 1.0.6 py37h7dd41cf_0
[conda] mkl_random 1.0.1 py37h4414c95_1
[conda] torch 1.1.0a0+ba06fa3 <pip>
| oncall: jit | low | Critical |
420,200,107 | pytorch | [caffe2] resize_op_test.py::TestResize::test_nearest FAILED | ## 🐛 Bug
resize_op_test.py::TestResize::test_nearest FAILED
## To Reproduce
Steps to reproduce the behavior:
Decorate the function test_nearest in resize_op_test.py with:
```
@reproduce_failure('3.59.1', 'AAMBGgDn8sXqhzzgA2trGwBu+/hbaVU5')
```
```
example:
from hypothesis import reproduce_failure
class TestResize(hu.HypothesisTestCase):
@reproduce_failure('3.59.1', 'AAMBGgDn8sXqhzzgA2trGwBu+/hbaVU5')
@given(height_scale=st.floats(0.25, 4.0) | st.just(2.0),
width_scale=st.floats(0.25, 4.0) | st.just(2.0),
height=st.integers(4, 32),
width=st.integers(4, 32),
num_channels=st.integers(1, 4),
batch_size=st.integers(1, 4),
seed=st.integers(0, 65535),
**hu.gcs)
def test_nearest(self, height_scale, width_scale, height, width,
num_channels, batch_size, seed,
gc, dc):
```
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
The gradient data calculated appears to be the same on both x86 and ppc.
log of test failure from x86 system
[results.txt](https://github.com/pytorch/pytorch/files/2958366/results.txt)
log of test failure from ppc64le system
[results.ppc64le.txt](https://github.com/pytorch/pytorch/files/2958695/results.ppc64le.txt)
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
The test should pass
## Environment
The failure is seen on both:
- ppc64le / CUDA 10.1 / NVIDIA V100 GPU / Driver Version 418 / RHEL 7.6
GPU models and configuration:
GPU 0: Tesla V100-SXM2-16GB
GPU 1: Tesla V100-SXM2-16GB
GPU 2: Tesla V100-SXM2-16GB
GPU 3: Tesla V100-SXM2-16GB
- x86 / CUDA 10.1 / NVIDIA P100 GPU/ Driver version 418 /Ubuntu 18.04
GPU models and configuration:
GPU 0: Tesla P100-PCIE-16GB
GPU 1: Tesla P100-PCIE-16GB
## Additional context
<!-- Add any other context about the problem here. -->
| caffe2 | low | Critical |
420,204,681 | godot | Implement additional indentation for line breaking when word wrap is on | **Godot version:**
3.1 rc2
**OS/device including version:**
Windows 10 x64
**Issue description:**
When the word wrap option is on, line breaking puts the broken part left-aligned with the starting line:
(screenshot: wrapped continuation left-aligned with the starting line)
This is not aesthetically pleasing and can lead to confusion when reading the code.
It would be best to insert an additional indentation for the broken parts, thus differentiating them from the first line:
(mock-up: wrapped continuation with one extra level of indentation)
| enhancement,usability,topic:gui | low | Critical |
420,314,250 | rust | E0587 error on packed and aligned structures from C | When converting C structures that are both packed and aligned using either C2Rust or bindgen, such as:
```C
struct xregs_state {
struct fxregs_state i387;
struct xstate_header header;
u8 extended_state_area[0];
} __attribute__ ((packed, aligned (64)));
```
the code that either tool produces looks something like:
```Rust
#[repr(C, packed(64))]
#[repr(align(64))]
pub struct xregs_state {
pub i387: fxregs_state,
pub header: xstate_header,
pub extended_state_area: __IncompleteArrayField<u8>,
}
```
This Rust code fails to compile due to error `E0587`:
```
error[E0587]: type has conflicting packed and align representation hints
--> .../out/bindings.rs:3894:1
|
3894 | / pub struct xregs_state {
3895 | | pub i387: fxregs_state,
3896 | | pub header: xstate_header,
3897 | | pub extended_state_area: __IncompleteArrayField<u8>,
3898 | | }
| |_^
```
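One way to avoid the conflicting hints is an aligned outer struct wrapping a packed inner struct; a hedged sketch, with the field types replaced by plain integers for illustration (the real generated code would keep `fxregs_state` etc.):

```rust
// Sketch of the aligned-outer / packed-inner workaround: the outer struct
// carries the alignment, the inner one the packing.
#[repr(C, packed)]
pub struct XregsStatePacked {
    pub i387: u16,
    pub header: u32,
}

#[repr(C, align(64))]
pub struct XregsState {
    pub inner: XregsStatePacked,
}

fn main() {
    assert_eq!(std::mem::align_of::<XregsState>(), 64);
    assert_eq!(std::mem::size_of::<XregsStatePacked>(), 6); // 2 + 4, no padding
    assert_eq!(std::mem::size_of::<XregsState>(), 64);      // rounded up to the alignment
    println!("layout ok");
}
```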
We can work around this in C2Rust by emitting an aligned outer/packed inner structure pair (I think bindgen could do the same), but I'm wondering if it would be better to fix this on the Rust language/compiler side. | C-enhancement,A-diagnostics,A-FFI,T-lang,T-compiler,A-repr-packed | medium | Critical |
420,316,379 | vue | Performance: Include/Use uid in the name when using performance.measure | ### What problem does this feature solve?
This applies when `Vue.config.performance` is set to `true`.
When there are multiple components with the same name, it is difficult to distinguish which component corresponds to which performance information.
One simple use case is rendering components inside a v-for.
### What does the proposed API look like?
Instead of just using the name (`Vue._name`), I suggest including the `_uid` to better distinguish components with the same name.
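A rough sketch of what the proposed naming could look like (`_name` and `_uid` as referenced above; this is illustrative, not Vue's actual implementation):

```javascript
// Illustrative: derive a per-instance measure tag so two components with the
// same name produce distinguishable performance entries.
function perfTag(vm) {
  return `vue-perf-${vm._name}-${vm._uid}`;
}

// Two list items rendered by the same component inside a v-for:
const first = { _name: 'TodoItem', _uid: 41 };
const second = { _name: 'TodoItem', _uid: 42 };

console.log(perfTag(first));  // vue-perf-TodoItem-41
console.log(perfTag(second)); // vue-perf-TodoItem-42
```

With tags like these, entries passed to `performance.measure()` would no longer collide for same-named components.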
<!-- generated by vue-issues. DO NOT REMOVE --> | improvement | medium | Major |
420,346,531 | flutter | Feature Request : Google Map Navigation & Best Route Package/Plug-in | I wonder if Flutter supports Google Maps navigation and also best-route finding? If it can be integrated into Flutter, I am looking forward to that package/plug-in and a tutorial.
Thank you :) | c: new feature,p: maps,package,c: proposal,team-ecosystem,P3,triaged-ecosystem | low | Major |
420,346,837 | flutter | No documentation for uninstalling Flutter on Windows | I want to remove or uninstall flutter from my windows 10 system | tool,platform-windows,d: api docs,P3,team-tool,triaged-tool | low | Major |
420,366,620 | pytorch | torch.cuda.is_available() returns misleading value | Using pytorch version 1.0.0 on python 3.6.8 on a machine with a NVIDIA GF108 GeForce GT630 GPU.
On this machine torch.cuda.is_available() returns True, but then a warning is shown ("PyTorch no longer supports this GPU because it is too old.") and eventually the attempt to use it generates an exception:
```
ret = torch.addmm(torch.jit._unwrap_optional(bias), input, weight.t())
RuntimeError: CUDA error: no kernel image is available for execution on the device
```
It seems useless for the function to report that CUDA can be used when, in fact, it cannot.
This function should clearly reflect whether CUDA is actually usable, not just "available", or a function should be added which does that.
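A hedged sketch of the requested behavior as a generic pattern: only report a device as usable if a real operation on it succeeds. With PyTorch the probe could be something like `torch.zeros(1, device="cuda").add_(1)`, but the sketch below deliberately avoids a torch dependency and uses stand-in probes:

```python
def device_really_usable(probe):
    """Report usable only if `probe` (a real operation on the device)
    actually runs, instead of merely asking the runtime if it exists."""
    try:
        probe()
        return True
    except RuntimeError:
        return False

# Stand-ins for illustration: a working device vs. one that raises the
# "no kernel image is available" error from the report.
def working_probe():
    return sum([1, 2, 3])

def too_old_gpu_probe():
    raise RuntimeError("CUDA error: no kernel image is available for execution on the device")

print(device_really_usable(working_probe))      # True
print(device_really_usable(too_old_gpu_probe))  # False
```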
cc @ngimel | module: cuda,triaged,module: ux | low | Critical |
420,403,229 | flutter | Navigator.pushNamedAndRemoveUntil: The phone may have a black screen | I am testing on a MOTO phone. When I use the Navigator.pushNamedAndRemoveUntil API to remove all the routes, and the new screen then uses Navigator.pop or the back key, the phone goes to a black screen.
flow:
1. screen_1 => Navigator.of(context).pushNamed("/screen_1");
2. screen_2 => Navigator.of(context).pushNamed("/screen_2");
3. screen_3 => Navigator.of(context).pushNamedAndRemoveUntil("/screen_4",(Route<dynamic> route) => false);
4. screen_4 => Navigator.of(context).pop();
```
class RouteDemoApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'RouteDemo',
theme: ThemeData(
primaryColor: Colors.redAccent,
),
home: RouteUserDemo(),
routes: <String, WidgetBuilder>{
'/screen1': (BuildContext context) => new Screen1(),
'/screen2': (BuildContext context) => new Screen2(),
'/screen3': (BuildContext context) => new Screen3(),
'/screen4': (BuildContext context) => new Screen4(),
},
);
}
}
``` | c: new feature,framework,f: material design,f: routes,c: proposal,P3,team-design,triaged-design | low | Minor |
420,427,947 | TypeScript | `importHelpers` generates code, that is incompatible with browsers | Currently enabling `importHelpers` adds
```
import * as tslib_1 from "tslib";
```
at the top of every source file.
Such notation is not compatible with browsers. Nowadays, all modern browsers support Ecma modules; there's no reason for writing code that assumes a Node.js environment + bundler build. Code should assume an ["Ecma modules" environment](https://www.bryntum.com/blog/writing-isomorphic-tests-with-siesta-5-2-0/) instead. And bundlers can stop being compilers and do the work they are supposed to do - create _optimized_ builds (if needed at all).
The change is simple: `importHelpers` should generate:
```
import * as tslib_1 from "../../../node_modules/tslib/index.js";
```
The path to "node_modules" should be determined at compile time of course. | Suggestion,Awaiting More Feedback | low | Minor |
420,442,628 | pytorch | Binary not operator causes crash when Jit module is executed on different device | ## π Bug
When the binary not operator `~` is used in a module, which is then Jit traced, it causes a crash when the resulting Jit module is moved to and executed on a different device type (e.g. CPU -> GPU).
This crash only occurs in the Jit module, but not in eager mode.
Using `1-tensor` to negate the ByteTensor does not show the same issue in Jit.
## To Reproduce
```python
import torch
import torch.nn as nn
class OneMinus(nn.Module):
def forward(self, inp):
mask = inp > 0.5
return inp[1-mask]
class Not(nn.Module):
def forward(self, inp):
mask = inp > 0.5
return inp[~mask]
inp = torch.rand((8,))
# Eager
out = OneMinus()(inp) # Works
out = Not()(inp) # Works
out = OneMinus().cuda()(inp.cuda()) # Works
out = Not().cuda()(inp.cuda()) # Works
# Jit
oneminus_jit = torch.jit.trace(OneMinus(), inp)
not_jit = torch.jit.trace(Not(), inp)
oneminus_jit(inp) # Works
not_jit(inp) # Works
oneminus_jit.cuda()(inp.cuda()) # Works
not_jit.cuda()(inp.cuda()) # Fails
# Jit Trace with Cuda
not_jit_cuda = torch.jit.trace(Not().cuda(), inp.cuda())
not_jit_cuda(inp.cuda()) # Works
not_jit_cuda.cpu()(inp) # Fails
```
The error console output is:
```
Traceback (most recent call last):
File "issue.py", line 32, in <module>
not_jit.cuda()(inp.cuda()) # Fails
File "/home/lorenwel/venv/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/lorenwel/venv/pytorch/lib/python3.6/site-packages/torch/jit/__init__.py", line 1347, in forward
return self._get_method('forward')(*args, **kwargs)
RuntimeError:
expected type CPUByteType but got CUDAByteType (compute_types at /home/lorenwel/git/pytorch/aten/src/ATen/native/TensorIterator.cpp:134)
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x6c (0x7f6df917b02c in /home/lorenwel/git/pytorch/torch/lib/libc10.so)
frame #1: at::TensorIterator::compute_types() + 0xcb1 (0x7f6dd0db09b1 in /home/lorenwel/git/pytorch/torch/lib/libcaffe2.so)
frame #2: at::TensorIterator::Builder::build() + 0x5c (0x7f6dd0db679c in /home/lorenwel/git/pytorch/torch/lib/libcaffe2.so)
frame #3: at::TensorIterator::binary_op(at::Tensor&, at::Tensor const&, at::Tensor const&) + 0x30a (0x7f6dd0db762a in /home/lorenwel/git/pytorch/torch/lib/libcaffe2.so)
frame #4: at::native::sub_out(at::Tensor&, at::Tensor const&, at::Tensor const&, c10::Scalar) + 0xb2 (0x7f6dd0c31af2 in /home/lorenwel/git/pytorch/torch/lib/libcaffe2.so)
frame #5: at::TypeDefault::sub_(at::Tensor&, at::Tensor const&, c10::Scalar) const + 0x8d (0x7f6dd0fb771d in /home/lorenwel/git/pytorch/torch/lib/libcaffe2.so)
frame #6: torch::autograd::VariableType::sub_(at::Tensor&, at::Tensor const&, c10::Scalar) const + 0x306 (0x7f6dd3894b06 in /home/lorenwel/git/pytorch/torch/lib/libtorch.so.1)
frame #7: <unknown function> + 0x5f20a0 (0x7f6dd3b5a0a0 in /home/lorenwel/git/pytorch/torch/lib/libtorch.so.1)
frame #8: <unknown function> + 0x626ab5 (0x7f6dd3b8eab5 in /home/lorenwel/git/pytorch/torch/lib/libtorch.so.1)
frame #9: torch::jit::InterpreterState::run(std::vector<c10::IValue, std::allocator<c10::IValue> >&) + 0x31 (0x7f6dd3b88ed1 in /home/lorenwel/git/pytorch/torch/lib/libtorch.so.1)
frame #10: <unknown function> + 0x60b0d3 (0x7f6dd3b730d3 in /home/lorenwel/git/pytorch/torch/lib/libtorch.so.1)
frame #11: <unknown function> + 0x3cc9c8 (0x7f6dffa789c8 in /home/lorenwel/git/pytorch/torch/lib/libtorch_python.so)
frame #12: <unknown function> + 0x3adc76 (0x7f6dffa59c76 in /home/lorenwel/git/pytorch/torch/lib/libtorch_python.so)
frame #13: <unknown function> + 0x10eb46 (0x7f6dff7bab46 in /home/lorenwel/git/pytorch/torch/lib/libtorch_python.so)
<omitting python frames>
frame #17: python() [0x5381b4]
frame #20: python() [0x574417]
frame #25: python() [0x574417]
frame #29: python() [0x5381b4]
frame #31: python() [0x57cb45]
frame #33: python() [0x574417]
frame #35: python() [0x5e8ba2]
frame #40: __libc_start_main + 0xe7 (0x7f6e09502b97 in /lib/x86_64-linux-gnu/libc.so.6)
:
operation failed in interpreter:
issue.py(12): forward
/home/lorenwel/venv/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py(477): _slow_forward
/home/lorenwel/venv/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py(487): __call__
/home/lorenwel/venv/pytorch/lib/python3.6/site-packages/torch/jit/__init__.py(636): trace
issue.py(26): <module>
```
## Expected behavior
It should not crash.
## Environment
Please copy and paste the output from our
[environment collection script]:
```
Collecting environment information...
PyTorch version: 1.0.0a0+743fdbd
Is debug build: No
CUDA used to build PyTorch: 10.0.130
OS: Ubuntu 18.04.1 LTS
GCC version: (Ubuntu 7.3.0-27ubuntu1~18.04) 7.3.0
CMake version: version 3.10.2
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: GeForce RTX 2080
Nvidia driver version: 410.78
cuDNN version: Probably one of the following:
/usr/local/MATLAB/R2018a/bin/glnxa64/libcudnn.so.7.0.3
/usr/local/cuda-10.0/targets/x86_64-linux/lib/libcudnn.so.7.4.2
/usr/local/cuda-10.0/targets/x86_64-linux/lib/libcudnn_static.a
Versions of relevant libraries:
[pip] Could not collect
[conda] Could not collect
```
- PyTorch Version 1.0.1
- Ubuntu 18.04
- Compiled from source with `python setup.py install`
- Python 3.6.8
- CUDA 10.0, cuDNN 7.4.2
- Nvidia RTX 2080
## Additional context
The same issue also occurs when loading the failing jit module into libtorch and executing it there.
cc @suo | oncall: jit,triaged | low | Critical |
420,504,220 | opencv | VideoCapture::read() returns success when camera disconnected | ##### System information (version)
- OpenCV => 4.0.1
- Operating System / Platform => Windows 64 Bit
- Compiler => Visual Studio 2017
##### Detailed description
The app reads from a camera that is unplugged while the app is running. The first `read()` call returns false and an empty frame, but the second call returns success and a non-empty frame. The retry in my code was intended to handle cheap cameras that occasionally deliver empty frames. Once the camera is disconnected, I would expect `read()` to keep returning false and an empty frame on all further calls.
##### Steps to reproduce
```cpp
...
m_camera.open(0);
bool success = m_camera.read(m_originalFrame);
if (!success || m_originalFrame.empty())
{
    // Retry once to tolerate cameras that occasionally deliver empty frames.
    success = m_camera.read(m_originalFrame);
    if (!success || m_originalFrame.empty())
    {
        m_camera.release();
        ...
```
 | bug,category: videoio(camera),incomplete,platform: win32,needs investigation | low | Major
420,600,056 | TypeScript | Pipe/flow/chain type support | <!-- π¨ STOP π¨ π¦π§π’π£ π¨ πΊπ»πΆπ· π¨
Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Please read the FAQ first, especially the "Common Feature Requests" section.
-->
## Search Terms
recursive, flow, pipe, chain
<!-- List of keywords you searched for before creating this issue. Write them down here so that others can find this suggestion more easily -->
## Suggestion
I do not have a specific implementation in mind to support this. There is a decent existing solution [by a stackoverflow user](https://stackoverflow.com/questions/53173203/typescript-recursive-function-composition) @jcalz, which does its best given what TypeScript is able to express today. I have created this issue more to specifically point out the current limitation of TypeScript in regard to this feature, and the current use cases. This feature request falls under the larger umbrella of recursive type inference.
At its most broad, I would like to suggest that typescript include some form of an official `Chain` type that developers can use.
## Unanswered Questions
- should this type have an official implementation with a max recursion value? `pipe: Chain<In, Out, number>`
- will this design pattern cease to be relevant if the [tc39 pipeline proposal](https://github.com/tc39/proposal-pipeline-operator) is integrated into javascript?
<!-- A summary of what you'd like to see added or changed -->
## Use Cases
There are a number of implementations for javascript pipes in common libraries.
- [lodash pipe](https://github.com/DefinitelyTyped/DefinitelyTyped/blob/master/types/lodash/fp.d.ts#L309)
- [rxjs pipe](https://github.com/ReactiveX/rxjs/blob/master/src/internal/util/pipe.ts#L5)
- [ramda pipe](https://github.com/DefinitelyTyped/DefinitelyTyped/blob/master/types/ramda/index.d.ts#L1941)
## Related Issues
https://github.com/Microsoft/TypeScript/issues/27102
https://github.com/Microsoft/TypeScript/issues/28505
<!--
What do you want to use this for?
What shortcomings exist with current approaches?
-->
## Examples
a typical implementation looks like this:
```ts
function pipe<T, A>(fn1: (input: T) => A): (input: T) => A
function pipe<T, A, B>(fn1: (input: T) => A, fn2: (input: A) => B): (input: T) => B
function pipe(...fns: Array<(input: any) => any>): (input: any) => any {
  return (input: any) => fns.reduce((acc, fn) => fn(acc), input)
}

const strToNumber = (str: string) => parseInt(str, 10)
const add = (x: number) => (y: number) => x + y

const safeAdder = pipe(strToNumber, add(5))
const out: number = safeAdder('6') // returns 11
```
The limitation of the current implementation is that if you call a typed pipe function with more arguments than there are written overloads, you lose type inference.
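For illustration, a `Chain`-style type without a fixed overload ceiling is partially expressible today using variadic tuple types and recursive conditional types (TypeScript 4.1+). This is only a sketch; the names `UnaryFn`, `Chain`, and `chain` are illustrative, not an existing API:

```typescript
type UnaryFn = (arg: any) => any;

// Walks the tuple of functions, threading each function's return type into
// the next function's parameter type. Requires TS >= 4.1 (recursive
// conditional types plus variadic tuples).
type Chain<In, Fns extends UnaryFn[]> =
  Fns extends [(arg: In) => infer Out, ...infer Rest]
    ? Rest extends UnaryFn[]
      ? Chain<Out, Rest>
      : never
    : In;

function chain<Fns extends [UnaryFn, ...UnaryFn[]]>(
  ...fns: Fns
): (input: Parameters<Fns[0]>[0]) => Chain<Parameters<Fns[0]>[0], Fns> {
  // Runtime behavior is a plain left-to-right reduce.
  return (input) => fns.reduce((acc, fn) => fn(acc), input as any);
}

const result = chain(
  (s: string) => parseInt(s, 10),
  (n: number) => n + 5,
  (n: number) => String(n),
)('6'); // result is inferred as string, === '11'
```

One remaining gap of this sketch is diagnostics: a mismatched link in the chain silently short-circuits the recursion instead of producing a targeted error at the offending function.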
## Checklist
My suggestion meets these guidelines:
* [ ] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [ ] This wouldn't change the runtime behavior of existing JavaScript code
* [X] This could be implemented without emitting different JS based on the types of the expressions
* [X] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [X] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals). | Suggestion,Needs Proposal | medium | Critical |
420,623,975 | three.js | feature request: transform origin (or "pivot point") | ##### Description of the problem
As an example, Babylon has this feature built in, called [pivot points](https://doc.babylonjs.com/how_to/pivots). CSS has it in the form of the [`transform-origin` property](https://developer.mozilla.org/en-US/docs/Web/CSS/transform-origin).
These features enable rotation about a pivot point with a one-liner.
Here's an implementation idea: https://jsfiddle.net/mmalex/hd8ex0ok/ (thanks @nmalex).
I believe ideally this would be implemented inside `Matrix4`, and the `Matrix4.compose( position, quaternion, scale )` signature would be changed to `Matrix4.compose( position, quaternion, scale[, origin] )`, with the `origin` parameter being optional for backwards compatibility.
An `origin` property would be added to `Object3D`, so that we can for example write `object.origin.set(1,2,3)`.
`updateMatrixWorld` would then call `this.matrix.compose( this.position, this.quaternion, this.scale, this.origin )`.
Just bike shedding, but `pivot` could also be another name in place of `origin`, but I like `origin` better because "pivot" doesn't seem to align with "scale" (I'm thinking that `scale` would also happen about the `origin`).
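To make the proposed semantics concrete, here is a dependency-free numeric sketch of what an origin-aware `compose` would compute, assuming CSS-style `transform-origin` semantics (p' = position + origin + RΒ·SΒ·(p βˆ’ origin)). `transformAboutOrigin` is a hypothetical helper, not the three.js API, and the rotation is passed as a plain 3x3 matrix where the proposal would use the object's quaternion:

```typescript
type Vec3 = [number, number, number];

// Applies scale and rotation about `origin` instead of the local (0,0,0),
// then translates by `position`: p' = position + origin + R * S * (p - origin).
function transformAboutOrigin(
  p: Vec3, position: Vec3, rotation: number[][], scale: Vec3, origin: Vec3
): Vec3 {
  // Move the pivot to the local origin.
  const local: Vec3 = [p[0] - origin[0], p[1] - origin[1], p[2] - origin[2]];
  // Scale, then rotate, about that pivot.
  const scaled: Vec3 = [local[0] * scale[0], local[1] * scale[1], local[2] * scale[2]];
  const rotated: Vec3 = [
    rotation[0][0] * scaled[0] + rotation[0][1] * scaled[1] + rotation[0][2] * scaled[2],
    rotation[1][0] * scaled[0] + rotation[1][1] * scaled[1] + rotation[1][2] * scaled[2],
    rotation[2][0] * scaled[0] + rotation[2][1] * scaled[1] + rotation[2][2] * scaled[2],
  ];
  // Undo the pivot shift and apply the object's translation.
  return [
    rotated[0] + origin[0] + position[0],
    rotated[1] + origin[1] + position[1],
    rotated[2] + origin[2] + position[2],
  ];
}

// 90-degree rotation about the Z axis.
const rotZ90 = [[0, -1, 0], [1, 0, 0], [0, 0, 1]];

// Rotating the point (2, 1, 0) about the pivot (1, 1, 0):
const out = transformAboutOrigin([2, 1, 0], [0, 0, 0], rotZ90, [1, 1, 1], [1, 1, 0]);
// out -> [1, 2, 0]
```

With `origin` at (0,0,0) this reduces to the current `compose` behavior, which is why the extra parameter can stay optional and backwards compatible.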
##### Three.js version
- [x] Dev
- [ ] r102
- [ ] ...
##### Browser
- [x] All of them
- [ ] Chrome
- [ ] Firefox
- [ ] Internet Explorer
##### OS
- [x] All of them
- [ ] Windows
- [ ] macOS
- [ ] Linux
- [ ] Android
- [ ] iOS
##### Hardware Requirements (graphics card, VR Device, ...)
N/A
| Enhancement | high | Critical |