id | repo | title | body | labels | priority | severity
---|---|---|---|---|---|---|
505,529,125 | TypeScript | Debug Failure. Did not expect PropertyDeclaration to have an Identifier in its trivia | We can see via telemetry this happens a fair amount. The invalid parse tree indicated by this assertion may be the root cause of a VS crash @uniqueiniquity was investigating, but we haven’t been able to reproduce. Posting this issue in the hopes that someone will search for this and provide us with more clues.
(Note: I’m specifically interested in _PropertyDeclaration_ errors—this happens more often with JSX-related nodes.)
```
Debug Failure. Did not expect PropertyDeclaration to have an Identifier in its trivia
at addSyntheticNodes (C:\Program Files (x86)\Microsoft SDKs\TypeScript\3.3\tsserver.js:117479:30)
at processNode (C:\Program Files (x86)\Microsoft SDKs\TypeScript\3.3\tsserver.js:117452:13)
at Object.forEach (C:\Program Files (x86)\Microsoft SDKs\TypeScript\3.3\tsserver.js:207:30)
at createChildren (C:\Program Files (x86)\Microsoft SDKs\TypeScript\3.3\tsserver.js:117462:12)
at NodeObject.getChildren (C:\Program Files (x86)\Microsoft SDKs\TypeScript\3.3\tsserver.js:117412:56)
at getTokenAtPositionWorker (C:\Program Files (x86)\Microsoft SDKs\TypeScript\3.3\tsserver.js:93879:43)
at Object.getTokenAtPosition (C:\Program Files (x86)\Microsoft SDKs\TypeScript\3.3\tsserver.js:93871:16)
at Object.getCodeActions (C:\Program Files (x86)\Microsoft SDKs\TypeScript\3.3\tsserver.js:113197:32)
at C:\Program Files (x86)\Microsoft SDKs\TypeScript\3.3\tsserver.js:109567:121
at Object.flatMap (C:\Program Files (x86)\Microsoft SDKs\TypeScript\3.3\tsserver.js:488:25)
at Object.getFixes (C:\Program Files (x86)\Microsoft SDKs\TypeScript\3.3\tsserver.js:109567:23)
at C:\Program Files (x86)\Microsoft SDKs\TypeScript\3.3\tsserver.js:118771:35
at Object.flatMap (C:\Program Files (x86)\Microsoft SDKs\TypeScript\3.3\tsserver.js:488:25)
at Object.getCodeFixesAtPosition (C:\Program Files (x86)\Microsoft SDKs\TypeScript\3.3\tsserver.js:118769:23)
at IOSession.Session.getCodeFixes (C:\Program Files (x86)\Microsoft SDKs\TypeScript\3.3\tsserver.js:126922:64)
at Session.handlers.ts.createMapFromTemplate._a.(anonymous function) (C:\Program Files (x86)\Microsoft SDKs\TypeScript\3.3\tsserver.js:125730:61)
at C:\Program Files (x86)\Microsoft SDKs\TypeScript\3.3\tsserver.js:127091:88
at IOSession.Session.executeWithRequestId (C:\Program Files (x86)\Microsoft SDKs\TypeScript\3.3\tsserver.js:127082:28)
at IOSession.Session.executeCommand (C:\Program Files (x86)\Microsoft SDKs\TypeScript\3.3\tsserver.js:127091:33)
at IOSession.Session.onMessage (C:\Program Files (x86)\Microsoft SDKs\TypeScript\3.3\tsserver.js:127113:35)
at Interface.<anonymous> (C:\Program Files (x86)\Microsoft SDKs\TypeScript\3.3\tsserver.js:128374:27)
at Interface.emit (events.js:182:13)
at Interface._onLine (readline.js:290:10)
at Interface._normalWrite (readline.js:433:12)
at Socket.ondata (readline.js:149:10)
at Socket.emit (events.js:182:13)
at addChunk (_stream_readable.js:283:12)
at readableAddChunk (_stream_readable.js:264:11)
at Socket.Readable.push (_stream_readable.js:219:10)
at Pipe.onread (net.js:638:20)
``` | Bug,Crash | low | Critical |
505,611,727 | flutter | gradle_jetifier_test fails on macOS Cirrus | ```
[gradle_jetifier_test] [STDOUT] Executing: find /usr/local/share/android-sdk -name dexdump
[gradle_jetifier_test] [STDOUT] "find" exit code: 0
[gradle_jetifier_test] [STDERR] Task failed: Exception: Couldn't find a dexdump executable.
```
Currently this test is disabled on Cirrus by virtue of the shard not being included on macOS; I turned that shard on in one of my PRs, but I'm going to disable this test for now. | a: tests,platform-android,tool,platform-mac,t: gradle,P2,team-android,triaged-android | low | Critical |
505,650,877 | node | Refactor Worker and NodeMainInstance class to reuse code | Opening an issue to discuss how to refactor the `Worker` and `NodeMainInstance` classes to reuse code.
My current plan is to create a base class `NodeInstance` and try to strip out common code in `WorkerData`/`Worker` as well as `NodeMainInstance` in there, and then make `Worker` and `NodeMainInstance` inherit from `NodeInstance`.
This is useful for adding support for startup snapshots in workers and ContextifyContext; otherwise we need to repeat e.g. snapshot availability detection code in multiple places, which can be tricky to maintain.
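The plan above could be sketched like this (JavaScript used purely for illustration; the real `Worker`/`NodeMainInstance` classes are C++, and the method shown is only a placeholder for the shared logic):

```javascript
// Illustrative sketch only: the actual classes live in Node's C++ layer.
class NodeInstance {
  // Shared logic, e.g. snapshot availability detection, lives here once.
  hasStartupSnapshot() {
    return false; // placeholder for the real detection logic
  }
}

class NodeMainInstance extends NodeInstance {}
class Worker extends NodeInstance {}

// Both instance kinds now go through the common base:
console.log(new Worker() instanceof NodeInstance);
console.log(new NodeMainInstance().hasStartupSnapshot());
```

The point of the hierarchy is that snapshot-related checks are written once in the base rather than duplicated per instance kind.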
Refs: https://github.com/nodejs/node/issues/29842 | process | low | Minor |
505,686,267 | react | mouseEnter behaves like mouseOver when using ReactDOM.render() to mount a child element |
**Do you want to request a *feature* or report a *bug*?**
Bug
**What is the current behavior?**
When using `ReactDOM.render()` to mount a child element, `mouseEnter` behaves like `mouseOver` (and `mouseLeave` behaves like `mouseOut`). You can see the demo below: when my cursor moves between the red and blue blocks, it repeatedly triggers mouseEnter and mouseLeave.
https://codepen.io/sen-dream/pen/VwwvGbm
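For reference, native `mouseenter`/`mouseleave` semantics can be modeled with a subtree-containment check. This is a simplified illustration (not React's actual implementation) of why moving to a descendant should not fire the events, unlike `mouseover`/`mouseout`:

```javascript
// Simplified model: an enter event fires on a listener only when the pointer
// comes from OUTSIDE the listener's subtree. Moving between a node and its
// descendant must not fire enter/leave.
function contains(ancestor, node) {
  for (let n = node; n; n = n.parent) if (n === ancestor) return true;
  return false;
}
function firesMouseEnter(listener, fromNode, toNode) {
  return contains(listener, toNode) && !contains(listener, fromNode);
}

const red = { parent: null };
const blue = { parent: red }; // blue is mounted inside red

console.log(firesMouseEnter(red, null, red)); // entered from outside: fires
console.log(firesMouseEnter(red, red, blue)); // moved to a descendant: must not fire
```

The bug is that with a `ReactDOM.render()`-mounted child, the events fire as if the child were outside the listener's subtree.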
**What is the expected behavior?**
https://codepen.io/sen-dream/pen/WNNQgoy
**Which versions of React, and which browser / OS are affected by this issue? Did this work in previous versions of React?**
Tested with React 16.8 on Chrome/macOS. It didn't work in previous versions of React either.
| Component: DOM,Type: Needs Investigation | low | Critical |
505,736,856 | flutter | Add-to-app Flutter build script fails when Podfile is in different directory from Xcode project | We found that there is a `Run Flutter Build Script` build phase in Flutter 1.9.
```
flutter_export_environment_path = File.join('$SRCROOT', relative, 'flutter_export_environment.sh');
script_phase :name => 'Run Flutter Build Script',
:script => "set -e\nset -u\nsource \"#{flutter_export_environment_path}\"\n\"$FLUTTER_ROOT\"/packages/flutter_tools/bin/xcode_backend.sh build",
:input_files => [
File.join('$SRCROOT', flutter_application_path, '.metadata'),
File.join('$SRCROOT', relative, 'App.framework', 'App'),
File.join('$SRCROOT', relative, 'engine', 'Flutter.framework', 'Flutter'),
flutter_export_environment_path
],
:execution_position => :before_compile
```
Under normal circumstances this works fine. However, **if our `Podfile` is not in the `.xcodeproj` directory, i.e. the `Podfile` is not in the `$SRCROOT` directory,** then this script breaks.
`File.join('$SRCROOT', flutter_application_path, '.metadata'),`
`File.join('$SRCROOT', relative, 'App.framework', 'App')`
` File.join('$SRCROOT', relative, 'engine', 'Flutter.framework', 'Flutter')`
`File.join('$SRCROOT', relative, 'flutter_export_environment.sh')`: these paths do **not exist**.
## Suggestion:
I suggest that we can add a parameter `pod_path`.
in `Podfile`:
```
pod_path = current_Podfile_path
```
```
install_all_flutter_pods(flutter_application_path, pod_path)
```
in `podhelper.rb`:
```
pod_path ||= flutter_application_path
```
```
flutter_export_environment_path = File.join(pod_path, relative, 'flutter_export_environment.sh');
script_phase :name => 'Run Flutter Build Script',
:script => "set -e\nset -u\nsource \"#{flutter_export_environment_path}\"\n\"$FLUTTER_ROOT\"/packages/flutter_tools/bin/xcode_backend.sh build",
:input_files => [
File.join(pod_path, flutter_application_path, '.metadata'),
File.join(pod_path, relative, 'App.framework', 'App'),
File.join(pod_path, relative, 'engine', 'Flutter.framework', 'Flutter'),
flutter_export_environment_path
],
:execution_position => :before_compile
```
[podhelper.rb.txt](https://github.com/flutter/flutter/files/3716635/podhelper.rb.txt)
| platform-ios,tool,a: existing-apps,P3,team-ios,triaged-ios | low | Major |
505,757,006 | node | Http2Stream emits a connection-related 'error' after receiving all data of the stream | The following conditions cause an error event on `http2stream`, even though the `http2stream` received all data and RST_STREAM from a client.
1) `http2stream` is not destroyed (has not consumed all data)
2) goaway frame is not received
3) socket error happens
4) `http2stream` received all data and RST_STREAM
A socket error without a goaway frame causes `http2session.destroy(err)`.
https://github.com/nodejs/node/blob/81bc7b3ba5a37a5ad4de0f8798eb42e631d55617/lib/internal/http2/core.js#L2678-L2688
`http2session.destroy(err)` propagates the error to `http2stream.destroy(err)`.
https://github.com/nodejs/node/blob/81bc7b3ba5a37a5ad4de0f8798eb42e631d55617/lib/internal/http2/core.js#L1298-L1323
I think that an `http2stream` receiving RST_STREAM means there was no error in processing its data. So, if an `http2stream` is closed by RST_STREAM, a connection error should not cause an `http2stream` error.
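The behavior proposed here could be sketched like this (a plain-JavaScript illustration of the rule, not Node's actual internals):

```javascript
// Sketch of the suggested rule, not Node's implementation: when the session
// is destroyed with an error, only streams that were NOT already cleanly
// closed by RST_STREAM should receive that error.
function streamsToError(streams, sessionError) {
  return streams
    .filter((s) => !s.closedByRstStream)
    .map((s) => ({ id: s.id, error: sessionError }));
}

const streams = [
  { id: 1, closedByRstStream: true },  // got all data + RST_STREAM: no error
  { id: 3, closedByRstStream: false }, // still open: propagate the error
];
const result = streamsToError(streams, new Error('socket hang up'));
console.log(result.map((r) => r.id)); // [ 3 ]
```

Under this rule, the stream in the reproduction below would see no 'error' event, because it was already closed by RST_STREAM when the socket died.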
The code to reproduce:
```js
'use strict';
const common = require('../common');
const fixtures = require('../common/fixtures');
const http2 = require('http2');
const fs = require('fs');
const net = require('net');
const tmpdir = require('../common/tmpdir');
tmpdir.refresh();
const loc = fixtures.path('person-large.jpg');
const server = http2.createServer();
let session_;
server.on('session', (session)=>{
session_ = session;
});
server.on('stream', common.mustCall((stream) => {
let sum = 0;
stream.pause();
const slowRead = ()=>{
setTimeout(()=>{
const data = stream.read(stream._readableState.highWaterMark/10);
sum += data ? data.length: 0;
console.log('read:' + sum + ' soc:' + socket.bytesWritten + ' closed:' + stream.closed + ' destroyed:' + stream.destroyed);
if(stream.closed){ // Got RST_STREAM and stream was closed but all data isn't processed.
socket.destroy(); // destroy connection without goaway frame.
try{
session_.ping(()=>{}); // activate read.
}catch(err){
console.log(err);
}
}
slowRead();
}, 10)
};
slowRead();
stream.respond();
stream.end();
stream.on('error', (err)=>{
// Stream already closed, but an error event happens.
console.log(err);
});
}));
let socket;
server.listen(0, common.mustCall(() => {
const options = {
createConnection: (authority, options) => {
socket = net.connect(server.address().port, 'localhost');
return socket;
}
}
const client = http2.connect(`http://localhost:${server.address().port}`, options);
const req = client.request({ ':method': 'POST' });
req.on('response', common.mustCall());
req.resume();
const str = fs.createReadStream(loc);
str.on('end', common.mustCall());
str.pipe(req);
}));
```
| stream,http2 | low | Critical |
505,771,129 | pytorch | Scripting torchvision.models.detection.maskrcnn_resnet50_fpn | ## 🐛 Bug
**My Code:**
```
import torch
import torchvision
print('going in...')
# scripted_module = torch.jit.script(torch_model)
# scripted_module = torch.jit.script(GeneralizedRCNN(cfg.clone()))
torch.jit.script(torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True))
```
If I pass `torchvision.models.detection.maskrcnn_resnet50_fpn` to `torch.jit.script`, several errors are thrown.
## Errors Encountered
```
RuntimeError:
Arguments for call are not valid.
The following operator variants are available:
aten::append(Tensor[](a!) self, Tensor(c -> *) el) -> (Tensor[](a!)):
Expected a value of type 'List[Tensor]' for argument 'self' but instead found type 'List[int]'.
aten::append(int[](a!) self, int el) -> (int[](a!)):
Expected a value of type 'int' for argument 'el' but instead found type 'List[int]'.
aten::append(float[](a!) self, float el) -> (float[](a!)):
Expected a value of type 'List[float]' for argument 'self' but instead found type 'List[int]'.
aten::append(bool[](a!) self, bool el) -> (bool[](a!)):
Expected a value of type 'List[bool]' for argument 'self' but instead found type 'List[int]'.
aten::append(t[](a!) self, t(c -> *) el) -> (t[](a!)):
Could not match type List[int] to t in argument 'el': Type variable 't' previously matched to type int is matched to type List[int].
aten::append(str[](a!) self, str? el) -> (str[](a!)):
Expected a value of type 'List[str]' for argument 'self' but instead found type 'List[int]'.
The original call is:
at /home/miller/anaconda3/envs/export/lib/python3.7/site-packages/torchvision/models/detection/generalized_rcnn.py:50:12
like `scores`, `labels` and `mask` (for Mask R-CNN models).
"""
if self.training and targets is None:
raise ValueError("In training mode, targets should be passed")
# original_image_sizes = [img.shape[-2:] for img in images]
original_image_sizes = torch.jit.annotate(List[int],[])
# original_image_sizes = []
for img in images:
original_image_sizes.append(img.shape[-2:])
~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
images, targets = self.transform(images, targets)
features = self.backbone(images.tensors)
if isinstance(features, torch.Tensor):
features = OrderedDict([(0, features)])
proposals, proposal_losses = self.rpn(images, features, targets)
detections, detector_losses = self.roi_heads(features, proposals, images.image_sizes, targets)
detections = self.transform.postprocess(detections, images.image_sizes, original_image_sizes)
losses = {}
```
If I change `original_image_sizes.append(img.shape[-2:])` to `original_image_sizes.append(torch.tensor(img.shape[-2:]))`, I encounter another error.
```
RuntimeError:
iterator expression is expected to be a list, iterable, or range, found value of type 'Tensor':
at /home/miller/anaconda3/envs/export/lib/python3.7/site-packages/torchvision/models/detection/transform.py:33:18
def forward(self, images, targets=None):
images = [img for img in images]
~~~~~~~~~~~~~~~~~~~~~ <--- HERE
```
## To Reproduce
Just use the above code to reproduce
## Expected behavior
A scripted version of `torchvision.models.detection.maskrcnn_resnet50_fpn`
## Environment
- PyTorch version: 1.3.0a0+a7de545
- Is debug build: No
- CUDA used to build PyTorch: 10.0.130
- OS: Ubuntu 18.04.3 LTS
- GCC version: (Ubuntu 6.5.0-2ubuntu1~18.04) 6.5.0 20181026
- CMake version: version 3.15.4
- Python version: 3.7
- Is CUDA available: No
- CUDA runtime version: 10.0.130
- GPU models and configuration: Could not collect
- Nvidia driver version: Could not collect
- cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.3
- Versions of relevant libraries:
[pip] numpy==1.17.2
[pip] torch==1.3.0a0+a7de545
[pip] torchvision==0.4.0a0+6b959ee
[conda] blas 1.0 mkl
[conda] mkl 2019.4 243
[conda] mkl-service 2.3.0 py37he904b0f_0
[conda] mkl_fft 1.0.14 py37ha843d7b_0
[conda] mkl_random 1.1.0 py37hd6b4f25_0
[conda] pytorch 1.2.0 py3.7_cuda10.0.130_cudnn7.6.2_0 pytorch
[conda] torch 1.3.0a0+a7de545 pypi_0 pypi
[conda] torchvision 0.4.0 py37_cu100 pytorch
## Additional context
| triaged,module: vision | low | Critical |
505,799,040 | flutter | Developing Flutter with VSCode and WSL2 | Since I mostly develop Web, using nginx, PHP and MySQL, I have ported my WebDev-environment entirely to WSL2.
Since performance is very important, all my web-related projects reside on the WSL2-vhdx file `/home/user/Projects/Web`. In WSL2 I've installed all my necessary tools for a nice and neat Linux-like experience, Docker, GIT, etc.. This combined with VSCode remote integration works very well.
Now, I'm digging into building Flutter-Apps, and my Flutter-environment is installed on the Windows side. My Flutter-related projects reside on `D:\Projects\Flutter` which is a partition, and **NOT USED** in WSL2 in any way. Building Flutter-apps with flutter-windows-sdk and VSCode works neatly.
But the problem is that my project files are now scattered across my computer: web stuff in a WSL2 vhdx file and Flutter stuff on the D: partition.
Is there a way to build Flutter apps while having the project files stored on a WSL2 vhdx file, in combination with VS Code Remote and an Android emulator?
I tried creating a test Flutter-project on the `\\wsl$` network mount, which didn't work.
| c: new feature,tool,platform-windows,P3,team-tool,triaged-tool | medium | Major |
505,829,426 | electron | Child window's previous maximized/minimized state isn't restored when minimizing and restoring the parent window | ### Preflight Checklist
* [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/master/CONTRIBUTING.md) for this project.
* [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/master/CODE_OF_CONDUCT.md) that this project adheres to.
* [x] I have searched the issue tracker for an issue that matches the one I want to file, without success.
### Issue Details
* **Electron Version:** 5.0.7
* **Operating System:** Windows 10
### Expected Behavior
If I open a child window and maximize it, then press Windows+D to hide all the windows, and press it again to undo that, I expect the previous visibility state of the window to be restored, and the window should be maximized like before.
Same thing for minimizing the parent window and unminimizing it.
### Actual Behavior
The window isn't maximized and appears to be restored to the normal bounds from before the maximized/minimized state change.
Same thing with minimizing the window.
This doesn't occur if the window isn't set up as a child window of the main window.
### To Reproduce
```
const { app, BrowserWindow } = require('electron')
async function createWindow() {
const mainWindow = new BrowserWindow();
const childWin = new BrowserWindow();
childWin.setTitle("Child");
childWin.setParentWindow(mainWindow); // commenting this out will fix the issue
}
app.on('ready', createWindow)
``` | platform/windows,bug :beetle:,status/confirmed,5-0-x,component/BrowserWindow,7-1-x,10-x-y | low | Major |
505,915,304 | vscode | Extension proxy does not tunnel https to http |
- VSCode Version: 1.39.1
- OS Version: Ubuntu 18.04.3 LTS
Steps to Reproduce:
1. Run VS Code behind a proxy with HTTPS_PROXY set to an http url
2. Use the Atlascode or any extension that makes https calls
Expected: the requests work
Actual: the requests fail. Most of the time the error reported is "socket hang up"
I believe the chromium proxy stuff internally handles tunneling https to http when running behind an http proxy. However, I don't think VS Code handles this when it rewrites the https module for extension proxy support.
We've been able to get around this by adding this to our axios requests when behind a proxy:
```
const tunnel = require('tunnel'); // note: `tunnel` and `agent` were implied but missing in the original snippet
const [host, port] = getProxyHostAndPort();
let agent;
let numPort = undefined;
if (host.trim() !== '') {
    if (port.trim() !== '') {
        numPort = parseInt(port, 10);
    }
    agent = {
        httpsAgent: tunnel.httpsOverHttp({
            proxy: {
                host: host,
                port: numPort
            }
        }), proxy: false
    };
}
```
It would be ideal if VS Code just handled this for extensions.
| bug,proxy | low | Critical |
505,942,190 | terminal | Open new terminal tab in same directory as existing tab (OSC 7?) |
# Description of the new feature/enhancement
Add an option (or make it the default) for a new terminal tab to open in the current directory of the tab from which you hit the new-tab hotkey. This is the standard way most Linux terminals work and is very handy. I often work in a directory where I need to launch multiple separate processes; it's a pain to cd back into the directory each time.
# Proposed technical implementation details (optional)
Hit the new-tab hotkey; the new terminal should then be in the same folder as the previous one.
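One common mechanism behind this (hinted at by the OSC 7 in the title) is the shell emitting an OSC 7 escape sequence that tells the terminal its current directory. A rough JavaScript sketch of building such a sequence (the `file://` URL form follows the convention used by VTE- and iTerm2-style terminals; the hostname and path here are just examples):

```javascript
// Sketch: build an OSC 7 sequence ("ESC ] 7 ; file://host/path ESC \")
// that a supporting terminal can parse to learn the shell's current directory.
function osc7(hostname, cwd) {
  // Percent-encode each path segment so spaces etc. survive the file:// URL.
  const encodedPath = cwd.split('/').map(encodeURIComponent).join('/');
  return `\u001b]7;file://${hostname}${encodedPath}\u001b\\`;
}

console.log(JSON.stringify(osc7('myhost', '/home/me/my project')));
```

A shell would print this sequence from its prompt hook; the terminal can then spawn new tabs and panes in the most recently reported directory.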
----
_maintainer edit: Before commenting, make sure to check out_
# [Tutorial: Opening a tab or pane in the same directory in Windows Terminal](https://learn.microsoft.com/en-us/windows/terminal/tutorials/new-tab-same-directory)
This is largely something configurable today, this issue is just tracking _another_ way of configuring this | Issue-Feature,Product-Powershell,Product-Conpty,Area-VT,Area-Settings,Product-Terminal | high | Critical |
505,964,616 | pytorch | Version number is still duplicated in a bunch of places | Current sites:
https://github.com/pytorch/pytorch/pull/27751
https://github.com/pytorch/pytorch/pull/27374
cc @ezyang @pbelevich @soumith | module: binaries,triaged,better-engineering | low | Minor |
505,970,806 | TypeScript | Suggestion: disallow synthetic imports for ES modules |
**TypeScript Version:** 3.6.3
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:** allowSyntheticDefaultImports esModuleInterop babel synthetic es modules default exports imports commonjs
**Code**
`allowSyntheticDefaultImports` is great when you're importing a _CommonJS module_, where/when Babel/TS will make sure there is a `default` export/import via their "interop" layers (e.g. `esModuleInterop`).
```ts
// foo.js
exports.foo = () => {}
```
```ts
// foo.d.ts
declare const _default: { foo: () => {} }
export = _default
```
```ts
// main.ts
import Foo from './foo';
Foo.foo(); // all good!
```
However, it's possible to shoot yourself in the foot by trying to import a non-existent `default` from an _ES module_.
```js
// foo.js
export const foo = () => {}
```
```ts
// foo.d.ts
export declare const foo: () => {}
```
```ts
// main.ts
import Foo from './foo'; // no error, but should be `* as Foo`!
// runtime error!!
// TypeError: Cannot read property 'foo' of undefined
Foo.foo();
```
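The foot-gun comes from the interop helper that Babel/TS emit for default imports. A condensed model of that helper (patterned after Babel's `_interopRequireDefault`; the real emitted code differs in detail):

```javascript
// Simplified model of the default-import interop helper: ES modules are
// passed through untouched, while CommonJS exports are wrapped so the whole
// exports object becomes the `default`.
function interopRequireDefault(mod) {
  return mod && mod.__esModule ? mod : { default: mod };
}

const cjsFoo = { foo: () => 'foo' };                  // module.exports = {...}
const esFoo = { __esModule: true, foo: () => 'foo' }; // export const foo

console.log(interopRequireDefault(cjsFoo).default.foo()); // works
console.log(interopRequireDefault(esFoo).default);        // undefined: the runtime error
```

Because a genuine ES module has no `default` export here, the synthetic default import type-checks but yields `undefined` at runtime.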
Ideally it would not be possible to use synthetic default imports with an ES module, so runtime errors such as the one above would not happen. | Suggestion,Awaiting More Feedback | low | Critical |
505,993,935 | godot | url, b, u and i tags in RichTextLabel don't respect the margin | **Godot version:** 3.1.1-stable_win32
**OS/device including version:** Windows 7/PC
**Issue description:** When text marked with a url, u, b or i tag inside a RichTextLabel hits the margin area, it just goes over the margin no matter what. Regular text detects the margin and wraps to the next line, but tagged text doesn't and just continues as if no margin were in place.
**Steps to reproduce:**
-Create a RichTextLabel, set a right margin and disable scroll_active.
-Fill the RichTextLabel with text so that a word marked with a tag reaches the right margin.
-Enjoy the bug!
**Minimal reproduction project:**
See the word "Paleolítico" in the second line of this capture:

| bug,topic:gui | low | Critical |
506,006,892 | pytorch | Test `make html-stable` target in CI | Right now, in the CI, we build master docs on all PRs. Every time a release comes around I am nervous if the stable docs (built with `make html-stable`) are buildable or not. We should also build stable docs on all PRs.
cc @ezyang @zou3519 | triaged,module: doc infra | low | Minor |
506,034,460 | flutter | mDNS does not work on an iPad | It appears that mDNS discovery is not working on iPads. The seriousness of this is exacerbated by https://github.com/flutter/flutter/issues/41133. There are additional reports [here](https://github.com/flutter/flutter/issues/41911) @jaco-pixeldump @enricobenedos.
## Steps to Reproduce
`flutter run -v` any flutter app on an iPad (tested with iPad Air 2 on iOS 12.4). See the following logs:
```
[ +24 ms] Application launched on the device. Waiting for observatory port.
[ +3 ms] Checking for advertised Dart observatories...
[+5015 ms] No pointer records found.
[ +2 ms] mDNS lookup failed, attempting fallback to reading device log.
[ ] Waiting for observatory port.
[ +1 ms] Observatory URL on device: http://127.0.0.1:49612/669x3vyByzY=/
[ +1 ms] Attempting to forward device port 49612 to host port 1024
[ ] executing: /Users/fujino/git/flutter/bin/cache/artifacts/usbmuxd/iproxy 1024 49612
b0f7081e6f02d7e3b573bf6afcb64e61ed8b86bb
[+1009 ms] Forwarded port ForwardedPort HOST:1024 to DEVICE:49612
[ ] Forwarded host port 1024 to device port 49612 for Observatory
[ +2 ms] Installing and launching... (completed in 13.0s)
```
mDNS discovery failed because no pointer records were found. Fallback to reading the logs succeeded (however, this won't work for iOS 13, for example).
Output of `flutter doctor -v`:
```
[✓] Flutter (Channel master, v1.10.15-pre.68, on Mac OS X 10.14.6 18G87, locale en-US)
• Flutter version 1.10.15-pre.68 at /Users/fujino/git/flutter
• Framework revision 6430b440d7 (23 hours ago), 2019-10-10 14:14:53 -0700
• Engine revision 5162413111
• Dart version 2.6.0 (build 2.6.0-dev.0.0 48e93d3d3b)
[✓] Android toolchain - develop for Android devices (Android SDK version 28.0.3)
• Android SDK at /Users/fujino/Library/Android/sdk
• Android NDK location not configured (optional; useful for native profiling support)
• Platform android-29, build-tools 28.0.3
• Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1343-b01)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 11.0)
• Xcode at /Users/fujino/Downloads/./Xcode.app/Contents/Developer
• Xcode 11.0, Build version 11A419c
• CocoaPods version 1.7.5
[✓] Android Studio (version 3.4)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin version 37.0.1
• Dart plugin version 183.6270
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1343-b01)
[✓] Connected device (1 available)
• iPad Air 2 • b0f7081e6f02d7e3b573bf6afcb64e61ed8b86bb • ios • iOS 12.4
• No issues found!
```
And this is the output of `dns-sd`:
```
~/git/flutter$ dns-sd -Z _dartobservatory
Browsing for _dartobservatory._tcp
DATE: ---Fri 11 Oct 2019---
13:46:44.608 ...STARTING...
; To direct clients to browse a different domain, substitute that domain in place of '@'
lb._dns-sd._udp PTR @
; In the list of services below, the SRV records will typically reference dot-local Multicast DNS names.
; When transferring this zone file data to your unicast DNS server, you'll need to replace those dot-local
; names with the correct fully-qualified (unicast) domain name of the target host offering the service.
_dartobservatory._tcp PTR io\.flutter\.examples\.hello-world._dartobservatory._tcp
io\.flutter\.examples\.hello-world._dartobservatory._tcp SRV 0 0 49612 iPad-Air-2.local. ; Replace with unicast FQDN of target host
io\.flutter\.examples\.hello-world._dartobservatory._tcp TXT "authCode=669x3vyByzY="
``` | tool,P2,team-tool,triaged-tool | low | Critical |
506,050,354 | flutter | Report more detail about hot reload/restart failures to analytics | The tool sometimes fails to send analytics when there is an exception during hot reload, like here:
https://github.com/flutter/flutter/blob/e76160703f0a027c90d2c71ebfcb829306edccd2/packages/flutter_tools/lib/src/run_hot.dart#L774
Also, the tool could send more information on a rejection notice from the VM. Currently these messages are dropped here:
https://github.com/flutter/flutter/blob/e76160703f0a027c90d2c71ebfcb829306edccd2/packages/flutter_tools/lib/src/run_hot.dart#L507
These messages come from a few different places in the VM, and a rejection notice could in theory contain several messages, so a component of this issue may be asking for a summary message from the VM, or reporting only the first message in the notice to analytics.
Search term in the VM code: 'ReasonForCancelling' like
https://github.com/dart-lang/sdk/blob/90ff37e0115c301cb742f894f64f4e3c78879df7/runtime/vm/isolate_reload.cc#L266
and
https://github.com/dart-lang/sdk/blob/90ff37e0115c301cb742f894f64f4e3c78879df7/runtime/vm/object_reload.cc#L562 | tool,t: hot reload,from: study,a: quality,a: annoyance,P2,team-tool,triaged-tool | low | Critical |
506,062,312 | pytorch | torch.cuda.default_generators documentation is referenced but doesn't exist. | https://github.com/pytorch/pytorch/blob/master/docs/source/torch.rst (scroll down to default_generators)
cc @ezyang @gchanan @zou3519 @jerryzh168 | high priority,module: docs,module: cuda,triaged | low | Major |
506,074,989 | godot | Parser Error: Identifier '_some_singleton' not declared in the current scope. | **Godot version:**
3.1.1
**OS/device including version:**
Win 10
**Issue description:**
When pressing F6 to play a scene, the parser error in the title is thrown, referring to an AutoLoad singleton. This happens only sometimes; reopening Godot and the project makes the problem rarer in my case. This doesn't happen on a simple (minimal) project. It only seems to happen when there are more singletons / more stuff in them (one of them is a simple scene). The order of singletons doesn't seem to matter: it's either all loaded or none.
I've found same / similar issue here: #3156 but since asked in the discussion, here is a new bug report.
**Update:**
My last uneducated guess is that it has something to do with script errors that might be elsewhere. It seems to happen more frequently when you change a script and save it but do not yet re-save other scripts that extend it. (Doing this, in any case, often causes problems: from minor issues with refreshing export variables in the inspector to crashing the whole engine.) I just don't understand why it happens only sometimes (as if by a coin toss).
506,077,011 | pytorch | docs for torch.cuda.reset_max_memory_reserved don't exist | The documentation has an[entry for it](https://github.com/pytorch/pytorch/blob/master/docs/source/cuda.rst), but the documentation website [doesn't have it](https://pytorch.org/docs/master/cuda.html). | module: docs,module: cuda,triaged | low | Major |
506,096,549 | go | x/website: link to /dl from project and release history pages | It'd be nice to get the download page linked from more places like The Project, Release History, and the release pages.
I always end up on the releases pages first and then don't see a link to find the sha256s and URLs to grab them. It'd be nice to just have a link to the downloads page on those pages | Documentation,NeedsInvestigation,website | low | Minor |
506,124,589 | TypeScript | Expose `resolvedModules` property of `SourceFile` | **TypeScript Version:** 3.6.4
**Code**
```ts
import * as ts from 'typescript'
function createCompilerHost(): ts.CompilerHost {
function fileExists(fileName: string): boolean {
return ts.sys.fileExists(fileName)
}
function readFile(fileName: string): string | undefined {
return ts.sys.readFile(fileName)
}
function getSourceFile(
fileName: string,
languageVersion: ts.ScriptTarget,
) {
const sourceText = ts.sys.readFile(fileName)
return sourceText !== undefined
? ts.createSourceFile(fileName, sourceText, languageVersion)
: undefined
}
function resolveModuleNames(moduleNames) {
return moduleNames.map(() => undefined)
}
return {
getSourceFile,
getDefaultLibFileName: () => 'lib.d.ts',
writeFile: (fileName, content) => ts.sys.writeFile(fileName, content),
getCurrentDirectory: () => ts.sys.getCurrentDirectory(),
getDirectories: path => ts.sys.getDirectories(path),
getCanonicalFileName: fileName =>
ts.sys.useCaseSensitiveFileNames ? fileName : fileName.toLowerCase(),
getNewLine: () => ts.sys.newLine,
useCaseSensitiveFileNames: () => ts.sys.useCaseSensitiveFileNames,
fileExists,
readFile,
resolveModuleNames,
}
}
const options = {
module: ts.ModuleKind.ES2015,
target: ts.ScriptTarget.ES5,
}
const host = createCompilerHost()
const sourceFiles = ts.createProgram(['./test.ts'], options, host).getSourceFiles()
const matchsFile = sourceFiles
.filter(s => {
return s.resolvedModules.has('typescript')
})
```
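Until the property is exposed, a common interim workaround (a sketch; `resolvedModules` is an internal field whose shape may change between releases, and the minimal `Map`-based type below is an assumption, not the real internal type) is to widen the type locally:

```typescript
// Minimal stand-in for the internal field's shape (an assumption; the real
// internal type is a TypeScript-specific map of resolved modules).
interface WithResolvedModules {
    resolvedModules?: Map<string, unknown>
}

// Works on any object that may carry the internal field, including a
// ts.SourceFile after module resolution has run.
function usesModule(sourceFile: object, moduleName: string): boolean {
    const internal = sourceFile as WithResolvedModules
    return internal.resolvedModules !== undefined && internal.resolvedModules.has(moduleName)
}

// Example with a plain object standing in for a resolved SourceFile:
const fake = { resolvedModules: new Map<string, unknown>([['typescript', {}]]) }
console.log(usesModule(fake, 'typescript')) // true
console.log(usesModule({}, 'typescript')) // false
```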
**Expected behavior:**
Property 'resolvedModules' should exist on type 'SourceFile'
**Actual behavior:**

| Suggestion,API,Awaiting More Feedback | low | Major |
506,147,371 | godot | _init() does not work in visual script | **Godot version:** Godot 3.2 alpha2
**OS/device including version:** Ubuntu 18.04 LTS x86_64
**Issue description:** _init() does not work in visual script
**Steps to reproduce:**
Create a simple visual script like:

Nothing was printed in the output tab when running.
I also tried to assign a value to a member variable in _init(), but I found its value was not changed when I printed it in the _ready() function.
**Minimal reproduction project:**
| bug,confirmed,topic:visualscript | low | Critical |
506,179,978 | vscode | Support for RHEL 8 in FIPS mode |
- VSCode Version: 1.39.1
- OS Version: RHEL 8 (Red Hat Enterprise Linux 8)
Steps to Reproduce:
1. Enable FIPS in RHEL 8
2. Attempt to install VS Code
Does this issue occur when all extensions are disabled?: n/a
Most likely causes:
* VS Code may not officially support RHEL 8
* VS Code's RPM uses a non-FIPS algorithm for the per-file digest and other cryptographic operations, most likely MD5.
```
[root@rhel8 ~]# dnf install code
Updating Subscription Management repositories.
Dependencies resolved.
==========================================================================================================================================================================
Package Arch Version Repository Size
==========================================================================================================================================================================
Installing:
code x86_64 1.39.1-1570750844.el7 code 77 M
Transaction Summary
==========================================================================================================================================================================
Install 1 Package
Total download size: 77 M
Installed size: 77 M
Is this ok [y/N]: y
Downloading Packages:
code-1.39.1-1570750844.el7.x86_64.rpm 5.6 MB/s | 77 MB 00:13
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total 5.6 MB/s | 77 MB 00:13
Running transaction check
Transaction check succeeded.
Running transaction test
The downloaded packages were saved in cache until the next successful transaction.
You can remove cached packages by executing 'dnf clean packages'.
Error: Transaction check error:
package code-1.39.1-1570750844.el7.x86_64 does not verify: no digest
Error Summary
-------------
```
Workaround (with RPM in current directory):
```
[root@rhel8 ~]# rpm --nofiledigest --nodigest --install code-1.39.1-1570750844.el7.x86_64.rpm
``` | help wanted,feature-request,install-update,linux | medium | Critical |
506,203,798 | terminal | cursorWidth for vertical bar |
# Description of the new feature/enhancement
My preferred cursor shape is "bar", and thus I'm happy to see that this is WT's default.
(My reason is that this is what every modern application does, carrying the semantics that the cursor is between two characters rather than over one. I found that in the terminal-based apps I'm using it becomes more obvious to know e.g. the boundaries of a piece of text as I'm selecting it, without potential off-by-ones – especially when the application uses inverse colors for the selected text, and so does the terminal for its solid rectangle cursor.)
There's one big disadvantage though: It's hard to locate if your eyes don't know where to look for it.
So I propose a config option to make the bar wider, analogously to the already existing "cursorHeight" for "vintage".
(On a side note, I'm wondering why "cursorHeight" doesn't apply to "underscore" too; in fact, why these are two different cursor shapes rather than one with a different height...)
# Proposed technical implementation details
A new option "cursorWidth" for the "bar" shape, or perhaps "cursorHeight" and "cursorWidth" merged into a common option.
I don't have a firm opinion whether the width should grow only to the right, or evenly to both sides. I'd probably only increase it from 1px to 2px for myself, so it doesn't really matter to me. I'd leave it to you to make a choice. In VTE it only grows to the right, mostly because we only have a 1px padding by default as opposed to your 8px, and we wouldn't want to chop it off when it's at the beginning of a line.
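A sketch of how this might look in a profiles.json entry (`cursorWidth` here is the proposed option, not an existing setting; the profile name is just an example):

```json
{
    "profiles": [
        {
            "name": "PowerShell",
            "cursorShape": "bar",
            "cursorWidth": 2
        }
    ]
}
```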
| Help Wanted,Area-Settings,Product-Terminal,Issue-Task | medium | Critical |
506,205,451 | TypeScript | Improve Intellisense for String Enums |
## Suggestion
String Enums should get a dropdown intellisense when `=== ""` is typed.

Note that TS properly knows that `x === ""` is always false in both cases, since it knows `x` is `"A" | "B" | "C" | "D" | "E"`.
Interestingly, intellisense properly displays all the string methods when `x.` is typed.

## Use Cases
This feature would make it easier to use string enums defined by libraries, especially in the case of same-name index-value pairs like `A = "A"`.
That way, one could do:
```ts
x === "A"
```
Instead of:
```ts
x === Enum.A
```
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
**Related:** https://github.com/microsoft/TypeScript/issues/2151
**Search Terms:** string enum intellisense
| Bug | low | Critical |
506,207,026 | youtube-dl | site support for panopto.com |
## Checklist
- [x] I'm reporting a new site support request
- [x] I've verified that I'm running youtube-dl version **2019.09.28**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that none of provided URLs violate any copyrights
- [x] I've searched the bugtracker for similar site support requests including closed ones
```
$ youtube-dl -v -F "https://brown.hosted.panopto.com/Panopto/Pages/Embed.aspx?id=0b3ff73b-36a0-46c5-8455-aadf010a3638"
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['-v', '-F', 'https://brown.hosted.panopto.com/Panopto/Pages/Embed.aspx?id=0b3ff73b-36a0-46c5-8455-aadf010a3638']
[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2019.09.28
[debug] Python version 3.7.5rc1 (CPython) - Linux-5.2.0-3-amd64-x86_64-with-debian-bullseye-sid
[debug] exe versions: ffmpeg 4.1.4-1, ffprobe 4.1.4-1, phantomjs 2.1.1, rtmpdump 2.4
[debug] Proxy map: {}
[generic] Embed: Requesting header
WARNING: Falling back on generic information extractor.
[generic] Embed: Downloading webpage
[generic] Embed: Extracting information
ERROR: Unsupported URL: https://brown.hosted.panopto.com/Panopto/Pages/Embed.aspx?id=0b3ff73b-36a0-46c5-8455-aadf010a3638
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/youtube_dl/YoutubeDL.py", line 796, in extract_info
ie_result = ie.extract(url)
File "/usr/lib/python3/dist-packages/youtube_dl/extractor/common.py", line 530, in extract
ie_result = self._real_extract(url)
File "/usr/lib/python3/dist-packages/youtube_dl/extractor/generic.py", line 3355, in _real_extract
raise UnsupportedError(url)
youtube_dl.utils.UnsupportedError: Unsupported URL: https://brown.hosted.panopto.com/Panopto/Pages/Embed.aspx?id=0b3ff73b-36a0-46c5-8455-aadf010a3638
```
This is in Debian testing -
```
Package: youtube-dl
Version: 2019.09.28-1
Severity: normal
-- System Information:
Debian Release: bullseye/sid
APT prefers testing
APT policy: (900, 'testing'), (500, 'testing-debug'), (100, 'unstable-debug'), (100, 'experimental'), (100, 'unstable'), (50, 'experimental-debug')
Architecture: amd64 (x86_64)
Kernel: Linux 5.2.0-3-amd64 (SMP w/4 CPU cores)
Locale: LANG=en_IN, LC_CTYPE=en_IN (charmap=UTF-8), LANGUAGE=en_IN:en (charmap=UTF-8)
Shell: /bin/sh linked to /bin/dash
Init: systemd (via /run/systemd/system)
LSM: AppArmor: enabled
Versions of packages youtube-dl depends on:
ii python3 3.7.5-1
ii python3-pkg-resources 41.2.0-1
Versions of packages youtube-dl recommends:
ii ca-certificates 20190110
ii curl 7.66.0-1
ii ffmpeg 7:4.1.4-1+b2
ii mpv 0.29.1-1
ii phantomjs 2.1.1+dfsg-2+b1
ii python3-pyxattr 0.6.1-1
ii rtmpdump 2.4+20151223.gitfa8646d.1-2
ii wget 1.20.3-1+b1
youtube-dl suggests no packages.
-- no debconf information
``` | site-support-request | medium | Critical |
506,212,884 | flutter | Severe Performance issue rendering Emoji's on first run. | note: Its worse in release mode. I guess because everything is fast but that first run rendering the emoji slows everything around it.
Very bad performance when navigating to a listview with emoji text when on the first run of the application. Really pisses me off personally so if i was an end user of my application which cant render emoji without screen lag/janks**(first-run)**. I would uninstall my own app and write a 1 star review with some old school language.
I can run computations and do complex processing while maintaining somewhat decent fluid animation. One bloody emoji ruins the whole experience.
Temporary fix: I render an emoji temporarily on app start to shock the framework.
Tested on Motorola x4, iPhone SE. Performance issue can be seen in simulator as well with performance overlay set to true.
Problem has been here since flutter 1.0.0 from my memory.
Here is sample code that you can run on your physical and or simulator devices.
```
import 'package:flutter/cupertino.dart';
import 'package:flutter/material.dart';
void main() {
runApp(TestEmoji());
}
class TestEmoji extends StatelessWidget {
@override
Widget build(BuildContext context){
return MaterialApp(
showPerformanceOverlay: true,
home: Builder(
builder: (context) => Center(
child: RaisedButton(
color: Colors.blue,
child: const Text("Navigate to emoji listview", style: TextStyle(fontSize: 17,color: Colors.white),),
onPressed: (){
Navigator.push(context, CupertinoPageRoute(
builder: (context) => TestText()
));
},
),
),
)
);
}
}
class TestText extends StatefulWidget{
@override
_TestText createState() => _TestText();
}
class _TestText extends State<TestText> {
List<String> items = ["🗺️🔥🥺😊😍🔥😊🤔❤️👍🥰","😍🔥😊🤔❤️👍🥰","😍🔥😊","😍🔥😊","😍🔥😊","😍🔥😊🤔❤️👍🥰","😍🔥😊🤔❤️👍🥰"]; //init list of items
List<String> textItems = ["Test Message, lets see how this performs","Test Message, lets see how this performs","Test Message, lets see how this performs","Test Message, lets see how this performs","Test Message, lets see how this performs","Test Message, lets see how this performs"];
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: const Text("Emoji listview"),
centerTitle: true,
),
body: SafeArea(
child: ListView.builder(
reverse: true,
itemCount: items.length,
itemBuilder: (context, index){
String item = items[index];
return Padding(
padding: const EdgeInsets.all(8.0),
child: Center(
child: Text(item, style: const TextStyle(fontSize: 20),)
),
);
},
),
),
);
}
}
```
You can change:
1) `itemCount: items.length` **to =>** `itemCount: textItems.length`
2) `String item = items[index]` **to =>** `String item = textItems[index]`
to compare the performance with a listview building text emoji and without emojis.
**It is roughly 5 to 6 times slower rendering the listview with emojis on the first run compared to a listview without emojis.**
This only happens on the first run. It looks like we shock the whole framework, and it performs well after the initial trauma.
**Flutter doctor -v summary:**
```
[✓] Flutter (Channel stable, v1.9.1+hotfix.4, on Mac OS X 10.14.6 18G95, locale en-US)
• Flutter version 1.9.1+hotfix.4 at /Users/fa****/Desktop/flutter
• Framework revision cc949a8e8b (2 weeks ago), 2019-09-27 15:04:59 -0700
• Engine revision b863200c37
• Dart version 2.5.0
[✓] Android toolchain - develop for Android devices (Android SDK version 28.0.3)
• Android SDK at /Users/far***/Library/Android/sdk
• Android NDK location not configured (optional; useful for native profiling support)
• Platform android-28, build-tools 28.0.3
• Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b49-5587405)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 11.0)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Xcode 11.0, Build version 11A420a
• CocoaPods version 1.8.0
[✓] Android Studio (version 3.5)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin version 39.0.3
• Dart plugin version 191.8423
• Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b49-5587405)
[✓] Connected device (1 available)
• iPhone 11 • 8F32A367-AE35-489B-94DA-A5E3AB66A050 • ios • com.apple.CoreSimulator.SimRuntime.iOS-13-0 (simulator)
• No issues found!
``` | framework,c: performance,d: api docs,a: typography,P2,team-framework,triaged-framework | low | Major |
506,215,035 | flutter | SliverAppBar not tappable while scrolling up when snap=true | Internal: b/142343241
Steps to reproduce:
1. Create a simple sliver with a SliverAppBar with snap=true and SliverList.
2. Scroll down.
3. Scroll up.
4. While the scroll up is still in progress, tap the SliverAppBar.
Expected behaviour:
* SliverAppBar should remain, and the tap should be recognized and handled by widgets (eg. tab bar) on the app bar.
Actual behaviour:
* SliverAppBar disappears, due to a ScrollPosition.isScrollingNotifier handler that calls maybeStartSnapAnimation when scrolling stops. | framework,f: material design,f: scrolling,customer: quill (g3),has reproducible steps,P2,found in release: 3.3,found in release: 3.7,team-design,triaged-design | low | Major |
506,216,641 | flutter | [url_launcher] Status bar color is not set | Internal: b/140725708
The url_launcher Flutter plugin doesn't update the status bar color when the Safari view controller is pushed. If the background color of the app is black (and/or the app sets the status bar to white), url_launcher will happily launch Safari with the same configuration, causing the status bar to disappear. | platform-ios,p: url_launcher,package,P3,team-ios,triaged-ios | low | Major |
506,218,060 | flutter | VoiceOver not reading next carousel item after scroll gesture | Internal: b/142594347
My app has a carousel widget which requires users to use the screenreader scroll gesture to see the next item in the carousel. A11y testers filed b/142130922.
On Android, this works fine. You perform the scroll gesture (swipe right then left), and it scrolls, then focuses on the new item and reads it.
However, on iOS, it doesn't work as well. You perform the scroll gesture (three finger swipe), and it scrolls and plays the scrolling sound. It focuses on the new item, but doesn't read it.
To reproduce:
```
import 'package:flutter/material.dart';
void main() => runApp(MyApp());
class MyApp extends StatelessWidget {
// This widget is the root of your application.
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
home: CarouselFocusIssueRepro(),
);
}
}
class CarouselFocusIssueRepro extends StatefulWidget {
final int itemCount = 10;
/// A builder that builds widgets to appear in the carousel.
///
/// Uses indices between 0 and [itemCount] exclusive.
final IndexedWidgetBuilder itemBuilder =
(BuildContext context, int index) => Container(
child: Center(child: Text('item $index')),
color: Colors.grey,
);
final int initialPage = 0;
final double aspectRatio = 4 / 3;
final double viewportFraction = 0.9;
final EdgeInsets carouselPadding = EdgeInsets.all(10);
final double itemSpacing = 10;
CarouselFocusIssueRepro({Key key}) : super(key: key);
@override
_CarouselFocusIssueReproState createState() =>
_CarouselFocusIssueReproState();
}
class _CarouselFocusIssueReproState extends State<CarouselFocusIssueRepro> {
PageController _pageController;
@override
void initState() {
super.initState();
_pageController = PageController(viewportFraction: 0.9);
}
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: Text('title'),
),
body: Center(
child: LayoutBuilder(
builder: _buildCarousel,
),
),
bottomNavigationBar: Text('footer'),
);
}
Widget _buildCarousel(BuildContext context, BoxConstraints constraints) {
final itemWidth =
(constraints.maxWidth - widget.carouselPadding.horizontal) *
widget.viewportFraction;
final itemHeight = itemWidth / widget.aspectRatio;
final devicePadding = MediaQuery.of(context).padding;
if (widget.itemCount == 0) {
return Container(
width: constraints.maxWidth,
height: itemHeight,
);
}
return SizedBox(
height: itemHeight,
child: ListView.custom(
scrollDirection: Axis.horizontal,
controller: _pageController,
physics: PageScrollPhysics(),
padding: widget.carouselPadding.copyWith(
left: widget.carouselPadding.horizontal + devicePadding.left,
right: widget.carouselPadding.horizontal + devicePadding.right,
),
itemExtent: itemWidth +
(widget.carouselPadding.horizontal * widget.viewportFraction),
childrenDelegate: SliverChildBuilderDelegate(
_buildCarouselItem,
childCount: widget.itemCount,
),
),
);
}
Widget _buildCarouselItem(BuildContext context, int index) {
return Padding(
padding: EdgeInsets.symmetric(horizontal: widget.itemSpacing),
child: Semantics(
child: widget.itemBuilder(context, index),
),
);
}
}
``` | platform-ios,framework,engine,a: accessibility,has reproducible steps,P3,team-ios,triaged-ios,found in release: 3.16 | low | Major |
506,224,610 | godot | If a `preload` is highlighted by wrong file name, renaming the file does not update the script editor | Godot 3.2 alpha2
Say you have this in your script:
```gdscript
const MyShader = preload("./../my_shader.shader")
```
If the file name is actually `the_shader.shader`, it will cause the line to be highlighted red and produce an error.
Now if you rename your file to be `my_shader.shader`, it should match the script. However, the script editor still shows the error and only updates if you do an edit.
It probably also happens in the opposite case, where a path is initially correct but a rename makes it wrong. | bug,topic:editor,confirmed,usability | low | Critical |
506,228,527 | pytorch | Improve the error message when trying to install in a 32-bit Python environment | ## 🐛 Bug
Trying to install the stable release following instructions from https://pytorch.org/get-started/locally/ results in the message `ERROR: No matching distribution found for torch===1.3.0`.
Trying to install the preview release results in a python error.
## To Reproduce
Steps to reproduce the behavior:
1. Try to install PyTorch as instructed in https://pytorch.org/get-started/locally/
**Stable (1.3)** error:
```
Looking in links: https://download.pytorch.org/whl/torch_stable.html
Collecting torch===1.3.0
ERROR: Could not find a version that satisfies the requirement torch===1.3.0 (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)
ERROR: No matching distribution found for torch===1.3.0
```
**Preview (Nightly)** error:
```
Looking in links: https://download.pytorch.org/whl/nightly/cu101/torch_nightly.html
Collecting torch
Using cached https://files.pythonhosted.org/packages/f8/02/880b468bd382dc79896eaecbeb8ce95e9c4b99a24902874a2cef0b562cea/torch-0.1.2.post2.tar.gz
Collecting torchvision
Using cached https://files.pythonhosted.org/packages/fb/01/03fd7e503c16b3dc262483e5555ad40974ab5da8b9879e164b56c1f4ef6f/torchvision-0.2.2.post3-py2.py3-none-any.whl
Requirement already satisfied: pyyaml in c:\program files (x86)\python37-32\lib\site-packages (from torch) (5.1.2)
Requirement already satisfied: numpy in c:\program files (x86)\python37-32\lib\site-packages (from torchvision) (1.17.2)
Collecting pillow>=4.1.1 (from torchvision)
Using cached https://files.pythonhosted.org/packages/6e/42/cbcafb97c5e288a0340bfdffb883faa14cf1e2edd81727f0bac6d0150d4a/Pillow-6.2.0-cp37-cp37m-win32.whl
Collecting six (from torchvision)
Using cached https://files.pythonhosted.org/packages/73/fb/00a976f728d0d1fecfe898238ce23f502a721c0ac0ecfedb80e0d88c64e9/six-1.12.0-py2.py3-none-any.whl
Installing collected packages: torch, pillow, six, torchvision
Running setup.py install for torch: started
Running setup.py install for torch: finished with status 'error'
ERROR: Command errored out with exit status 1:
command: 'C:\Program Files (x86)\Python37-32\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\root\\AppData\\Local\\Temp\\pip-install-sdujsaav\\torch\\setup.py'"'"'; __file__='"'"'C:\\Users\\root\\AppData\\Local\\Temp\\pip-install-sdujsaav\\torch\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\root\AppData\Local\Temp\pip-record-q50tvjyk\install-record.txt' --single-version-externally-managed --compile
cwd: C:\Users\root\AppData\Local\Temp\pip-install-sdujsaav\torch\
Complete output (23 lines):
running install
running build_deps
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\root\AppData\Local\Temp\pip-install-sdujsaav\torch\setup.py", line 265, in <module>
description="Tensors and Dynamic neural networks in Python with strong GPU acceleration",
File "C:\Program Files (x86)\Python37-32\lib\site-packages\setuptools\__init__.py", line 145, in setup
return distutils.core.setup(**attrs)
File "C:\Program Files (x86)\Python37-32\lib\distutils\core.py", line 148, in setup
dist.run_commands()
File "C:\Program Files (x86)\Python37-32\lib\distutils\dist.py", line 966, in run_commands
self.run_command(cmd)
File "C:\Program Files (x86)\Python37-32\lib\distutils\dist.py", line 985, in run_command
cmd_obj.run()
File "C:\Users\root\AppData\Local\Temp\pip-install-sdujsaav\torch\setup.py", line 99, in run
self.run_command('build_deps')
File "C:\Program Files (x86)\Python37-32\lib\distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "C:\Program Files (x86)\Python37-32\lib\distutils\dist.py", line 985, in run_command
cmd_obj.run()
File "C:\Users\root\AppData\Local\Temp\pip-install-sdujsaav\torch\setup.py", line 51, in run
from tools.nnwrap import generate_wrappers as generate_nn_wrappers
ModuleNotFoundError: No module named 'tools.nnwrap'
----------------------------------------
ERROR: Command errored out with exit status 1: 'C:\Program Files (x86)\Python37-32\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\root\\AppData\\Local\\Temp\\pip-install-sdujsaav\\torch\\setup.py'"'"'; __file__='"'"'C:\\Users\\root\\AppData\\Local\\Temp\\pip-install-sdujsaav\\torch\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\root\AppData\Local\Temp\pip-record-q50tvjyk\install-record.txt' --single-version-externally-managed --compile Check the logs for full command output.
```
## Expected behavior
PyTorch gets installed.
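The `Python37-32` paths in the logs point at the root cause: PyTorch wheels are published only for 64-bit Python, so a 32-bit interpreter silently falls back to the ancient 0.1.2 sdist. A quick diagnostic (a sketch, independent of pip) to confirm which interpreter you are running:

```python
import platform
import struct

# Pointer size in bits: 32 on a 32-bit interpreter, 64 on a 64-bit one.
bits = struct.calcsize("P") * 8
print(bits)
print(platform.architecture()[0])  # e.g. '32bit' or '64bit'
```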
## Environment
```
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
OS: Microsoft Windows 10 Pro
GCC version: Could not collect
CMake version: Could not collect
Python version: 3.7
Is CUDA available: N/A
CUDA runtime version: 10.1.243
GPU models and configuration: GPU 0: GeForce GTX 1070 Ti
Nvidia driver version: 436.48
cuDNN version: Could not collect
Versions of relevant libraries:
[pip3] numpy==1.17.2
[conda] Could not collect
```
cc @peterjc123 | triaged,enhancement | medium | Critical |
506,230,016 | TypeScript | Sorting an array removes type inference | **TypeScript Version:** 3.7.0-dev.20191011
**Search Terms:** type inference
**Code**
```ts
function getAllValuesUnsorted(m: ReadonlyArray<ReadonlyArray<string>>): ReadonlyArray<string> {
// OK
return Array.from(flatten(m.values()))
}
function getAllValuesSorted(m: ReadonlyArray<ReadonlyArray<string>>): ReadonlyArray<string> {
// Type 'unknown[]' is not assignable to type 'readonly string[]'.
return Array.from(flatten(m.values())).sort()
}
function* flatten<T>(a: Iterable<Iterable<T>>): Iterable<T> {
for (const xs of a)
for (const x of xs)
yield x
}
```
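Until the inference is fixed, a workaround (a sketch) is to give the intermediate array an explicit annotation, so `Array.from`'s type argument is pinned to `string` and `sort()` keeps the element type:

```typescript
function* flatten<T>(a: Iterable<Iterable<T>>): Iterable<T> {
    for (const xs of a)
        for (const x of xs)
            yield x
}

// Annotating the binding fixes Array.from's type argument to string,
// so .sort() returns string[] instead of unknown[].
function getAllValuesSortedFixed(m: ReadonlyArray<ReadonlyArray<string>>): ReadonlyArray<string> {
    const values: string[] = Array.from(flatten(m.values()))
    return values.sort()
}

console.log(getAllValuesSortedFixed([['b', 'c'], ['a']])) // [ 'a', 'b', 'c' ]
```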
**Expected behavior:**
No error.
**Actual behavior:**
```
a.ts:6:2 - error TS2322: Type 'unknown[]' is not assignable to type 'readonly string[]'.
Type 'unknown' is not assignable to type 'string'.
``` | Bug | low | Critical |
506,232,623 | go | x/website/internal/dl: document /dl/?mode=json API more prominently | It would be nice for the JSON mode of golang.org/dl to be documented somewhere (https://golang.org/dl/?mode=json). I wasn't able to find it until after I built a project to recreate the same functionality and someone pointed it out.
Having a JSON API for Go releases is great and I think some folks would build more tools using it if we had it documented. | Documentation,NeedsFix | low | Major |
506,233,477 | rust | Improve pretty printing of const raw pointers | #64986 changed the pretty-printing of const raw pointers to `{pointer}`. This can cause confusing diagnostics due to the lack of detail the printing of raw pointers provides, like the following:
```
error[E0308]: mismatched types
--> $DIR/raw-ptr-const-param.rs:7:38
|
LL | let _: Const<{15 as *const _}> = Const::<{10 as *const _}>;
| ^^^^^^^^^^^^^^^^^^^^^^^^^ expected `{pointer}`, found `{pointer}`
|
= note: expected type `Const<{pointer}>`
found type `Const<{pointer}>`
``` | C-enhancement,A-diagnostics,T-compiler,A-const-generics | low | Critical |
506,244,071 | node | FSWatcher.close() incurs a big performance cost in >=10.16 |
* **Version**: 10.16
* **Platform**: Mac
* **Subsystem**: fs
<!-- Please provide more details below this comment. -->
There seems to have been a massive performance decrease in FSWatcher.close() introduced in 10.16.
I use `ts-node-dev` as a file watcher when developing web applications, which under the hood uses `filewatcher`.
I believe the following commits/PRs could be contributing? Not smart enough to tell you why though 😅
- #19345
- #19089
Some examples w/ a pretty small express app:
#### 10.15
```
$ nvm use 10.15
$ tsnd src/index
```
save some files...
```
Using ts-node version 8.4.1, typescript version 3.6.4
[INFO] 17:03:16 Restarting: manual restart from user
removing 962 watchers
watcher.removeAll: 5.162ms
Using ts-node version 8.4.1, typescript version 3.6.4
[INFO] 17:03:17 Restarting: manual restart from user
removing 962 watchers
watcher.removeAll: 3.997ms
Using ts-node version 8.4.1, typescript version 3.6.4
[INFO] 17:03:18 Restarting: manual restart from user
removing 962 watchers
watcher.removeAll: 3.915ms
Using ts-node version 8.4.1, typescript version 3.6.4
[INFO] 17:03:18 Restarting: manual restart from user
removing 921 watchers
watcher.removeAll: 2.992ms
```
#### 10.16
```
$ nvm use 10.16
$ tsnd src/index
```
save some files...
```
Using ts-node version 8.4.1, typescript version 3.6.4
[INFO] 17:05:20 Restarting: manual restart from user
removing 962 watchers
watcher.removeAll: 8631.891ms
Using ts-node version 8.4.1, typescript version 3.6.4
[INFO] 17:05:31 Restarting: manual restart from user
removing 962 watchers
watcher.removeAll: 8679.892ms
Using ts-node version 8.4.1, typescript version 3.6.4
[INFO] 17:05:42 Restarting: manual restart from user
removing 962 watchers
watcher.removeAll: 8679.248ms
Using ts-node version 8.4.1, typescript version 3.6.4
[INFO] 17:05:51 Restarting: manual restart from user
removing 962 watchers
watcher.removeAll: 8764.230ms
Using ts-node version 8.4.1, typescript version 3.6.4
```
#### Method
Perf code added to the following: https://github.com/fgnass/filewatcher/blob/master/index.js#L130
```
FileWatcher.prototype.removeAll = function() {
console.time("watcher.removeAll");
console.log(`removing ${this.list().length} watchers`);
this.list().forEach(this.remove, this);
console.timeEnd("watcher.removeAll");
};
```
One might be tempted to think the `list()` method could be slow; however, with timing code added right above it, a single invocation of `watcher.close()` took longer than all of them together did in 10.15.
```
watcher.close ~/project/node_modules/date-fns/getISOWeekYear/index.js: 6.616ms
watcher.close ~/project/node_modules/date-fns/startOfISOWeek/index.js: 6.514ms
watcher.close ~/project/node_modules/date-fns/startOfWeek/index.js: 6.731ms
watcher.close ~/project/node_modules/date-fns/setISOWeekYear/index.js: 6.540ms
watcher.close ~/project/node_modules/date-fns/startOfISOWeekYear/index.js: 6.447ms
watcher.close ~/project/node_modules/date-fns/differenceInCalendarDays/index.js: 6.563ms
watcher.close ~/project/node_modules/date-fns/_lib/getTimezoneOffsetInMilliseconds/index.js: 6.465ms
```
| fs | low | Critical |
506,258,126 | go | proposal: spec: compile-time boolean assertions | I propose adding compile-time boolean assertions to Go.
[I don't feel strongly about this proposal, but it seems pretty minimal; easy to implement; and to make some real world code somewhat easier to read/write. I've also not found any past discussion of this idea, so it seemed worth at least writing down even if rejected.]
# Proposal
Concretely, I propose making these changes:
1. Introduce a new "assert" package like:
```go
package assert

type True bool
```
2. Add a language rule that it's an error to have a constant of type assert.True but value false.
3. (Optional) Add a language rule that it's an error to use assert.True except as the type of a constant.
# Uses
There are somewhat common idioms of writing:
```go
const _ = -uint(x - y) // assert x == y
const _ = uint(x - y)  // assert x >= y
```
But I at least find these awkward to reason about, even being very familiar with the details of how they work.
With this proposal, they could instead be written more clearly as:
```go
import "assert"

const _ assert.True = x == y
const _ = assert.True(x >= y)
```
(Showing off two different ways to write const declarations using assert.True.)
Further, generalizing to boolean expressions allows us to easily use boolean operators to combine multiple tests. It also potentially allows static assertions involving non-integer constants (i.e., floats, complex, bools, and strings).
For example, [package gc's sizeof_test.go](https://github.com/golang/go/blob/master/src/cmd/compile/internal/gc/sizeof_test.go) could be rewritten as compile time asserts like:
```go
const (
	ptrSize  = unsafe.Sizeof((*int)(nil))
	funcSize = unsafe.Sizeof(Func{})

	_ = assert.True((ptrSize == 4 && funcSize == 116) || (ptrSize == 8 && funcSize == 208))
)
```
# Backwards compatibility
assert.True doesn't exist today, so there's no code using it that we have to worry about.
Old tools unaware of the special semantics for assert.True (e.g., old compilers or tools using go/types) will continue working for old code. They'll also continue working correctly for new code *that correctly uses assert.True*. The tools will, however, fail to detect failing assertions.
# Related proposals
https://github.com/golang/go/issues/9367 proposed allowing bool->int conversions, which would be an alternative way of extending the current integer static-assertion idiom to arbitrary boolean static assertions. However, it would still be somewhat awkward to read/write.
https://github.com/golang/go/issues/30582 proposes an assertion to indicate unreachable code paths. Technically orthogonal to this one, but it might be worth ensuring they expose a consistent API to users.
C++11 added static_assert: https://en.cppreference.com/w/cpp/language/static_assert (Counter argument: C++11 has templates and constexpr, which make static_assert more broadly useful than assert.True would be.) | LanguageChange,Proposal,LanguageChangeReview | medium | Critical |
506,258,325 | fastapi | Further develop startup and shutdown events | While the documentation for FastAPI is in general extremely solid, there's a weak point that I feel hints at some underdeveloped feature within the framework, and that's [startup and shutdown events][1]. They are briefly mentioned (separately), with the startup event in particular being demonstrated like this:
```py
items = {}
@app.on_event("startup")
async def startup_event():
items["foo"] = {"name": "Fighters"}
items["bar"] = {"name": "Tenders"}
@app.get("/items/{item_id}")
async def read_items(item_id: str):
return items[item_id]
```
...which could very well be written like this:
```py
items = {
"foo": {"name": "Fighters"},
"bar": {"name": "Tenders"}
}
@app.get("/items/{item_id}")
async def read_items(item_id: str):
return items[item_id]
```
...and therefore makes the feature look useless. The example for `shutdown` instead uses logging, which makes it look like that would be the primary purpose of those events, while in reality it's not.
**Is your feature request related to a problem? Please describe.**
The problem is that, throughout the entire documentation, things like database connections are created in the global scope, at module import. While this would be fine in a regular Python application, this has a number of problems, especially with objects that have a side-effect outside the code itself, like database connections. To demonstrate this, I've made [a test structure that creates a lock file when initialized and deletes it when garbage collected][2].
Using it like this:
```py
from fastapi import FastAPI
from lock import FileLock
app = FastAPI()
lock = FileLock("fastapi")
@app.get("/")
async def root():
return {"message": "Hello World"}
```
...does not work and the lock is not deleted before shutdown (I was actually expecting it to be closed properly, like SQLAlchemy does with its connections, but clearly there's a lot of extra magic going on with SQLAlchemy that I don't even come close to understanding). This is also extremely apparent when using the `--reload` option on Uvicorn, because the lock is *also* not released when the modules are reloaded, causing the import to fail and the server to crash. This would be one thing, but I've had a similar incident occur some time ago when, while developing in reload mode, I actually managed to take up every connection on my PostgreSQL server because of that problem, since while SQLAlchemy is smart enough to clean up on exit where my `FileLock` cannot, the same does not happen when hot-reloading code.
So that would be one thing; the documentation should probably go into more detail about what those startup and shutdown events are for ([the Starlette documentation is a little more concrete about this][3], but no working code is given to illustrate it) and that should also be woven into the chapters about databases and such to make sure people don't miss it.
Except... That's not super ergonomic, now, is it?
```py
from fastapi import FastAPI, Depends
from some_db_module import Connection
app = FastAPI()
_db_conn: Connection
@app.on_event("startup")
def take_lock():
global _db_conn
_db_conn = Connection("mydb:///")
@app.on_event("shutdown")
def release_lock():
global _db_conn
_db_conn.close()
def get_db_conn():
return _db_conn
@app.get("/")
async def root(conn: Connection = Depends(get_db_conn)):
pass
```
This is basically just a context manager split into two halves, linked together by a global variable. A context manager that will be entered and exited when the ASGI lifetime protocol notifies that the application has started and stopped. A context manager whose only job will be to initialize a resource to be either held while the application is running or used as an injectable dependency. *Surely* there's a cleaner way to do this.
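For illustration, that split could be factored into a small helper; `install_lifetime` is a hypothetical name and not part of FastAPI's API, it just wires one context manager into the two event handlers FastAPI expects today:

```python
# Hypothetical helper (not FastAPI API): enter a context manager on startup,
# exit it on shutdown, and hand back an injectable zero-argument dependency.
def install_lifetime(app, cm_factory):
    state = {}

    @app.on_event("startup")
    def _startup():
        state["cm"] = cm_factory()
        state["value"] = state["cm"].__enter__()

    @app.on_event("shutdown")
    def _shutdown():
        state.pop("cm").__exit__(None, None, None)

    def dependency():
        return state["value"]

    return dependency
```

With something like this, the pair of handlers and the module-level global above would collapse into a single line, e.g. `get_db_conn = install_lifetime(app, lambda: closing(Connection("mydb:///")))` (using `contextlib.closing`).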
**Describe the solution you'd like**
I've been meaning to file this bug for a few weeks now, but what finally got me to do it is the release of FastAPI 0.42 (Good job, everyone!), which has [context managers as dependencies][4] as one of its main new features. Not only that, but the examples being given are pretty much all database-related, except connections are opened and closed for each call of each route instead of being pooled like SQLAlchemy (and I assume encode's async database module too). Ideally, events should be replaced with something like this, but where the dependencies are pre-initialized instead of being created on the spot. Maybe by having context managers that are started and stopped based on `startup` and `shutdown` events and yield "factory functions" that could in turn be called during dependency injection to get the object that needs to be passed.
Something along those lines:
```py
from fastapi import FastAPI, Depends
from sqlalchemy import create_engine
app = FastAPI()
@app.lifetime_dependency
def get_db_conn():
conn_pool = create_engine("mydb:///")
# yield a function that closes around db_conn
# and returns it as a dependency when called
yield lambda: conn_pool
conn_pool.close()
@app.get("/")
async def root(conn: Connection = Depends(get_db_conn)):
pass
```
**Additional context**
Not sure where else to mention it, but I've run into cases where the shutdown event does not get called before exiting, namely when using the VSCode debugger on Windows and stopping or restarting the application via the debugger's controls (haven't tried this on Linux yet). This apparently kills the thread without any sort of cleanup being performed and leaves all database connections open (and possibly unable to time out, since [DBAPI appears to suggest that all queries be executed as part of a transaction][5], which most drivers do, and mid-transaction timeout is disabled by default on at least PostgreSQL). I don't think there is any way that could be fixed, though it should probably be mentioned somewhere, either there or in the [debugging part of the documentation][6].
[1]: https://fastapi.tiangolo.com/tutorial/events/
[2]: https://gist.github.com/sm-Fifteen/2ceb7b453463b828dc1bb42077fdce63#file-lock-py
[3]: https://www.starlette.io/events/
[4]: https://fastapi.tiangolo.com/tutorial/dependencies/dependencies-with-yield/
[5]: https://www.python.org/dev/peps/pep-0249/#commit
[6]: https://fastapi.tiangolo.com/tutorial/debugging/ | feature,reviewed | high | Critical |
506,258,886 | neovim | UI: signcolumn (signs "gutter") on the right side | Would it be possible to implement a signcolumn on the right side? Plugins could be built to use it as a scrollbar/minimap or as an indicator for warnings or highlighted search results. Maybe it should even be possible to use it both for indications and as a scrollbar, like in modern GUIs/IDEs. Preferably, it should be general enough not to be limited to the use cases I mentioned. This could also be used by GUIs like Oni. | enhancement,ui-extensibility,column | low | Major |
506,258,941 | flutter | Support persistent page offset for PageController |
## Use case
I have a `PageView` that I want to have a `viewportFraction` of 0.8, which does what I need, but the page is centered and I want to have the page aligned at the beginning of the view and then have a sort of 'peek' of the next page.
In this case, the current page would take up 80% of the view and the beginning of the next page would take up the remaining 20%.
## Proposal
I believe this can be achieved by adding the offset value to the controller after the positions are calculated when using `animateToPage` and `jumpToPage`.
## Expected Result:

## Actual:

| c: new feature,framework,f: scrolling,P3,team-framework,triaged-framework | low | Critical |
506,274,549 | TypeScript | tsc --watch initial build 3x slower than tsc | <!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. -->
**TypeScript Version:** 3.7.0-dev.20191011, 3.6.4, 3.5.2
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:**
DeepReadonly
slow watch mode
**Code**
The slowness occurs on a codebase of around 1000 files. I can't distill it into a repro, but I can show the type that causes the slowness and an alternate type that does not.
I noticed the slowness when I replaced our implementation of `DeepReadonly` with the one from `ts-essentials`. One thing I should note, in case it is helpful, is that in our codebase `DeepReadonly` is only used about 80 times. It's also used nested in some instances; for example, a DeepReadonly type is included as a property of another DeepReadonly type.
Here is the type from `ts-essentials`:
```ts
export type Primitive = string | number | boolean | bigint | symbol | undefined | null;
/** Like Readonly but recursive */
export type DeepReadonly<T> = T extends Primitive
? T
: T extends Function
? T
: T extends Date
? T
: T extends Map<infer K, infer V>
? ReadonlyMap<K, V>
: T extends Set<infer U>
? ReadonlySet<U>
: T extends {}
? { readonly [K in keyof T]: DeepReadonly<T[K]> }
: Readonly<T>;
interface ReadonlySet<ItemType> extends Set<DeepReadonly<ItemType>> {}
interface ReadonlyMap<KeyType, ValueType> extends Map<DeepReadonly<KeyType>, DeepReadonly<ValueType>> {}
```
Here is ours:
```ts
export type Primitive = number | boolean | string | symbol
export type DeepReadonly<T> = T extends ((...args: any[]) => any) | Primitive
? T
: T extends _DeepReadonlyArray<infer U>
? _DeepReadonlyArray<U>
: T extends _DeepReadonlyObject<infer V>
? _DeepReadonlyObject<V>
: T
export interface _DeepReadonlyArray<T> extends ReadonlyArray<DeepReadonly<T>> {}
export type _DeepReadonlyObject<T> = {
readonly [P in keyof T]: DeepReadonly<T[P]>
}
```
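For reference, a small usage sketch of the second definition above (the `Config` shape here is invented for illustration):

```typescript
type Primitive = number | boolean | string | symbol;
type DeepReadonly<T> = T extends ((...args: any[]) => any) | Primitive
  ? T
  : T extends _DeepReadonlyArray<infer U>
  ? _DeepReadonlyArray<U>
  : T extends _DeepReadonlyObject<infer V>
  ? _DeepReadonlyObject<V>
  : T;
interface _DeepReadonlyArray<T> extends ReadonlyArray<DeepReadonly<T>> {}
type _DeepReadonlyObject<T> = { readonly [P in keyof T]: DeepReadonly<T[P]> };

// Made-up shape for illustration.
type Config = { server: { host: string; ports: number[] } };

const cfg: DeepReadonly<Config> = { server: { host: "a", ports: [1, 2] } };

// cfg.server.host = "b";    // compile error: host is read-only
// cfg.server.ports.push(3); // compile error: ports is a ReadonlyArray

console.log(cfg.server.host);
```

Both definitions accept the same usage; only the build time differs.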
**Expected behavior:**
Both types, when used in our codebase would take a similar amount of time for both a `tsc` and the initial build of `tsc --watch`.
**Actual behavior:**
Our original `DeepReadonly` takes about 47 seconds to build using `tsc`. The initial build with `tsc --watch` also takes a similar amount of time, around 49 seconds.
With the `ts-essentials` version, a `tsc` build takes around 48 seconds. The initial build with `tsc --watch` takes anywhere from 3-5 minutes.
**Playground Link:**
N/A
**Related Issues:**
None for sure.
| Needs Investigation,Domain: Performance,Fix Available,Rescheduled | high | Major |
506,281,032 | vue | transition-group with duration property doesn't work | ### Version
2.6.10
### Reproduction link
[https://codesandbox.io/s/vue-template-lcrsy](https://codesandbox.io/s/vue-template-lcrsy)
### Steps to reproduce
Click "Move" button.
### What is expected?
Both lists move with animation.
### What is actually happening?
Only the second list, which has its duration applied via CSS, animates.
<!-- generated by vue-issues. DO NOT REMOVE --> | feature request,discussion,transition | medium | Minor |
506,286,868 | TypeScript | --noEmitOnError trace has incorrect number of errors | **TypeScript Version:** 3.6.4 (also tested typescript@next)
**Search Terms:** noEmitOnError
**Code**
```ts
const x: string = 5;
```
```json
{
"compilerOptions": {
"noEmitOnError": true
},
"include": [
"lib"
]
}
```
**Expected behavior:** Error number is 1.
**Actual behavior:** Error number is 2 when `--noEmitOnError` is enabled.
```
c:\Users\Username\Documents\projects\typescript_next>npx tsc
lib/test.ts:1:7 - error TS2322: Type '5' is not assignable to type 'string'.
1 const x: string = 5;
~
Found 2 errors.
c:\Users\Username\Documents\projects\typescript_next>npx tsc
lib/test.ts:1:7 - error TS2322: Type '5' is not assignable to type 'string'.
1 const x: string = 5;
~
Found 1 error.
```
**Playground Link:**
**Related Issues:**
| Bug | low | Critical |
506,296,556 | go | x/net/http2: ability to access raw HTTP/2 stream | I am interested in creating a HTTP proxy (note: _not_ a reverse proxy) in Go. I'm only interested in implementing the `CONNECT` method. All other methods are irrelevant for me. https://github.com/elazarl/goproxy comes very close to what I want to do, but I'm also interested in [supporting `CONNECT` when client is connected to proxy using HTTP/2](https://github.com/elazarl/goproxy/issues/361). AFAIK, according to RFC7540, `CONNECT` over HTTP/2 [should work by hijacking the stream and do TCP tunneling over it](https://httpwg.org/specs/rfc7540.html#CONNECT).
[*http2responseWriter does not implement http.Hijacker](#14797). One idea would be for it to implement `http.Hijacker`, which would give access not to the underlying TCP connection but rather to the HTTP/2 stream.
...or is there any other approach I could take without having to implement low-level HTTP/2 stuff? | NeedsInvestigation,FeatureRequest | low | Major |
506,311,791 | flutter | Add Tap Handlers to `TableRow` | ## Use case
While laying out a drawer menu I decided to give Table a try, because it seems a good fit if you want to show icons before the entries, and sometimes another icon after them, in a regular grid.
When the user wants to select a menu entry, it should be possible for them to tap anywhere on the row.
Adding a GestureDetector for every cell does not seem like a good solution.
IMHO the use case of making a whole row selectable is quite common, for instance highlighting a row on tap.
## Proposal
Add an onTap handler to TableRow that adds a GestureDetector to the row if a handler is set. | c: new feature,framework,f: gestures,P2,team-framework,triaged-framework | low | Major |
506,326,327 | neovim | Backup file creation depends on umask and permissions of file | To reproduce:
- Create two files a and b with permissions 664 and 644 respectively \
- set `umask` to `0022`
```bash
touch a b
chmod 664 a
chmod 644 b
umask 0022
```
- Edit the files in vim with `backup` enabled and `backupdir` set to a nonexistent directory `foo`
- Observe that vim is unable to *write* to a backup file when writing `a`
- Observe that vim is unable to *make* a backup file when writing `b` (a different error)
- I would expect the behaviour to be the same between `a` and `b`, both failing with E510
```bash
vim -u NONE --cmd "set backup" --cmd "set backupdir=./foo//" --cmd "e a" --cmd w --cmd q -Vlog && cat log | grep ^E; rm log
# E506: Can't write to backup file (add ! to override) (libuv error no such file or directory)
vim -u NONE --cmd "set backup" --cmd "set backupdir=./foo//" --cmd "e b" --cmd w --cmd q -Vlog && cat log | grep ^E; rm log
# E510: Can't make backup file (add ! to override)
```
- Create the directory `foo`
- Observe that backup creation works as expected
```bash
mkdir foo
vim -u NONE --cmd "set backup" --cmd "set backupdir=./foo//" --cmd "e a" --cmd w --cmd q -Vlog && cat log | grep ^E; rm log
# No output
vim -u NONE --cmd "set backup" --cmd "set backupdir=./foo//" --cmd "e b" --cmd w --cmd q -Vlog && cat log | grep ^E; rm log
# No output
```
- Edit the files in vim with `backup` enabled and `backupdir` set to a nonexistent directory `bar` followed by an existing directory `foo`
- Observe that although `b` is able to write a backup file in `foo`, `a` is not
- I would expect both files to be able to be backed up in `foo`
```bash
vim -u NONE --cmd "set backup" --cmd "set backupdir=./bar//,./foo//" --cmd "e a" --cmd w --cmd q -Vlog && cat log | grep ^E; rm log
# E506: Can't write to backup file (add ! to override) (libuv error no such file or directory)
vim -u NONE --cmd "set backup" --cmd "set backupdir=./bar//,./foo//" --cmd "e b" --cmd w --cmd q -Vlog && cat log | grep ^E; rm log
# No output
```
- Repeat the above with `umask` set to `0002`
- Observe that backup files for both `a` and `b` are created in `foo`
when it is available and that both invocations fail with E510 with `foo`
is not present
The most buggy part here is that when an existing backup directory is
preceded by a missing one, no backup is created and E506 is
raised when `umask=0022` and the file being edited has permissions 0664.
At the very least I think `E506` should be documented with a pointer to
check file permissions and `umask`.
The execution path diverges
[here](https://github.com/neovim/neovim/blob/9af0fe529d2d91640e4d3388ab9f28159553f14c/src/nvim/fileio.c#L2693)
because `file_info.stat.st_mode != perm`. This is speculation, but
because the only side effect of that comparison passing is that
`backup_copy` becomes `true`, the implication is that `backup_copy` is
actually broken entirely.
506,329,929 | vue | Error compiling long string literal (many + on many lines) | ### Version
2.6.10
### Reproduction link
[https://codesandbox.io/embed/vue-template-ysl83](https://codesandbox.io/embed/vue-template-ysl83)
### Steps to reproduce
Just click the link to see the error, and start editing.
Go to the second section of the component and you'll see a red line (line 24?).
It says that the string literal is not correct, but that's because it loads just a piece of it.
### What is expected?
No compilation error
### What is actually happening?
A compilation error
<!-- generated by vue-issues. DO NOT REMOVE --> | bug,has workaround | medium | Critical |
506,337,955 | youtube-dl | Unable to download XML: HTTP Error 404: Not Found (caused by HTTPError()) |
## Checklist
- [x] I'm reporting a broken site support
- [x] I've verified that I'm running youtube-dl version **2019.09.28**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [x] I've searched the bugtracker for similar issues including closed ones
## Verbose log
```
$ youtube-dl --version
2019.09.28
$ youtube-dl https://www.bbc.co.uk/iplayer/episode/m0007znv/ad/china-a-new-world-order-series-1-episode-1
[bbc.co.uk] m0007znv: Downloading video page
[bbc.co.uk] m0007znv: Downloading playlist JSON
[bbc.co.uk] m0007znv: Downloading legacy playlist XML
ERROR: Unable to download XML: HTTP Error 404: Not Found (caused by HTTPError()); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
$ youtube-dl --verbose https://www.bbc.co.uk/iplayer/episode/m0007znv/ad/china-a-new-world-order-series-1-episode-1
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: [u'--verbose', u'https://www.bbc.co.uk/iplayer/episode/m0007znv/ad/china-a-new-world-order-series-1-episode-1']
[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2019.09.28
[debug] Python version 2.7.15+ (CPython) - Linux-4.15.0-65-generic-x86_64-with-Ubuntu-18.04-bionic
[debug] exe versions: ffmpeg 3.4.6, ffprobe 3.4.6, phantomjs 11000, rtmpdump 2.4
[debug] Proxy map: {}
[bbc.co.uk] m0007znv: Downloading video page
[bbc.co.uk] m0007znv: Downloading playlist JSON
[bbc.co.uk] m0007znv: Downloading legacy playlist XML
ERROR: Unable to download XML: HTTP Error 404: Not Found (caused by HTTPError()); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
File "/usr/local/bin/yt-dl/youtube_dl/extractor/common.py", line 627, in _request_webpage
return self._downloader.urlopen(url_or_request)
File "/usr/local/bin/yt-dl/youtube_dl/YoutubeDL.py", line 2237, in urlopen
return self._opener.open(req, timeout=self._socket_timeout)
File "/usr/lib/python2.7/urllib2.py", line 435, in open
response = meth(req, response)
File "/usr/lib/python2.7/urllib2.py", line 548, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python2.7/urllib2.py", line 473, in error
return self._call_chain(*args)
File "/usr/lib/python2.7/urllib2.py", line 407, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 556, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
```
## Description
Video in the link above played without any problem inside Firefox. Downloading other programs from the same website was OK earlier today. But it stopped working this afternoon.
Thank you for investigating. | geo-restricted | low | Critical |
506,347,003 | flutter | How to create complex route transitions? | By complex animation I mean is that it not only transits to a new route, but also does something on an old route. For example, in every app (both android and iOS) it is considered a good practice to have a transition, which pushes new route from right and old route goes a bit to left and dims.
Your dev guides and API docs seem to be not touching this topic at all, googling also gave me no answers and that badly surprised me because it's such a basic thing in almost every native application.
[Stackoverflow question](https://stackoverflow.com/questions/58359812/how-to-make-listview-preserve-its-scroll-when-transitioning-to-another-route) | framework,a: animation,d: examples,c: proposal,P3,team-framework,triaged-framework | low | Major |
506,359,129 | godot | Particles2D don't disable emitting after one-shot, when starting off-screen | **Godot version:**
3.2 alpha2
**Issue description:**
When you create Particles2D, enable `one_shot` and start emitting while the particles are outside the visible radius, emitting is never disabled and will start only when the particles are on screen again. This is annoying, because I want to e.g. play an explosion, but I'm too far away to see it, and then it will happen when I get close, even though it should already be finished. One-shot should affect emitting whether the particles are rendering or not.
**Steps to reproduce:**
1. Create Particles2D
2. Enable `one_shot`
3. Pan viewport away from particles
4. Enable `emitting`
5. Wait a bit
6. Pan to particles again
7. They will start emitting only then :< | bug,topic:rendering,confirmed,topic:particles | low | Major |
506,360,281 | terminal | Defer initial terminal sizing until terminal initialization | Right now the initial window size is calculated just as the window is created but before the terminal is. This has two disadvantages:
1. It doesn't have all the size information. Right now it's only the width of a scrollbar that it has to guess, but there could be more (potentially variable-size) elements in the future. There might also be some additional spacing in layers above, like the pane or terminal page.
2. It creates a temporary `DxRenderer` just to locate the required font and calculate its size, and then discards it.
If we instead calculated the initial size when the terminal initializes, these problems would be gone (it would be able to query the sizes of the controls and use the real renderer of a terminal). We might potentially resolve them otherwise, but this seems to be the simplest and most robust approach.
The problem is that it could result in two sizing operations (one when the window opens and another when the size is calculated), which could be noticeable to the user or the CPU, unless this can somehow be avoided. Thus I'm here to ask what you think about this approach.
| Help Wanted,Area-Rendering,Area-UserInterface,Product-Terminal,Issue-Task | low | Minor |
506,366,824 | go | encoding/gob: type information missing from nested ignored interfaces | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
> println(runtime.Version())
go1.13.1
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<pre>
https://play.golang.org
</pre></details>
### What did you do?
I encoded a struct with multilevel nested interfaces, and then tried to ignore the encoded value when decoding.
https://play.golang.org/p/L0f2hXG46yx
### What did you expect to see?
Decoding the stream to have been successful with type information from both nested interfaces received, while ignoring the first value.
### What did you see instead?
`gob: bad data: field numbers out of bounds`
Also, the function `ignoreInterface` does not descend into its fields even if the concrete type name is registered. Without descending into the concrete type, it will not be able to get type information from any more interfaces nested inside the concrete type.
https://github.com/golang/go/blob/50bd1c4d4eb4fac8ddeb5f063c099daccfb71b26/src/encoding/gob/decode.go#L692 | NeedsInvestigation | low | Minor |
506,389,616 | youtube-dl | [YouTube] Conversion fail on encrypted video/audio files | ### What is the purpose of your *issue*?
- [x] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
---
```
C:\Users\Desktop\youtube-dl>youtube-dl.exe -f 225+329 --all-subs --convert-subs srt https://www.youtube.com/watch?v=QavMsSB-H0E --username PRIVATE --password PRIVATE -v
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['-f', '225+329', '--all-subs', '--convert-subs', 'srt', 'https://www.youtube.com/watch?v=QavMsSB-H0E', '--username', 'PRIVATE', '--password', 'PRIVATE', '-v']
[debug] Encodings: locale cp1252, fs mbcs, out cp850, pref cp1252
[debug] youtube-dl version 2019.09.28
[debug] Python version 3.4.4 (CPython) - Windows-10-10.0.18362
[debug] exe versions: ffmpeg N-94099-gdd662bbdd2, ffprobe N-94099-gdd662bbdd2, rtmpdump 2.4
[debug] Proxy map: {}
[youtube] Downloading login page
[youtube] Looking up account info
[youtube] Logging in
[youtube] Checking cookie
[youtube] QavMsSB-H0E: Downloading webpage
[youtube] QavMsSB-H0E: Downloading video info webpage
[youtube] {146} signature length 113, html5 player vflNSW9LL
[youtube] {226} signature length 113, html5 player vflNSW9LL
[youtube] {227} signature length 113, html5 player vflNSW9LL
[youtube] {275} signature length 109, html5 player vflNSW9LL
[youtube] {359} signature length 113, html5 player vflNSW9LL
[youtube] {360} signature length 113, html5 player vflNSW9LL
[youtube] {145} signature length 109, html5 player vflNSW9LL
[youtube] {224} signature length 113, html5 player vflNSW9LL
[youtube] {225} signature length 109, html5 player vflNSW9LL
[youtube] {274} signature length 109, html5 player vflNSW9LL
[youtube] {357} signature length 113, html5 player vflNSW9LL
[youtube] {358} signature length 113, html5 player vflNSW9LL
[youtube] {144} signature length 109, html5 player vflNSW9LL
[youtube] {222} signature length 113, html5 player vflNSW9LL
[youtube] {223} signature length 113, html5 player vflNSW9LL
[youtube] {273} signature length 113, html5 player vflNSW9LL
[youtube] {317} signature length 113, html5 player vflNSW9LL
[youtube] {318} signature length 109, html5 player vflNSW9LL
[youtube] {143} signature length 109, html5 player vflNSW9LL
[youtube] {280} signature length 109, html5 player vflNSW9LL
[youtube] {142} signature length 113, html5 player vflNSW9LL
[youtube] {279} signature length 109, html5 player vflNSW9LL
[youtube] {161} signature length 113, html5 player vflNSW9LL
[youtube] {148} signature length 113, html5 player vflNSW9LL
[youtube] {149} signature length 113, html5 player vflNSW9LL
[youtube] {150} signature length 109, html5 player vflNSW9LL
[youtube] {261} signature length 109, html5 player vflNSW9LL
[youtube] {326} signature length 113, html5 player vflNSW9LL
[youtube] {329} signature length 113, html5 player vflNSW9LL
[youtube] {350} signature length 113, html5 player vflNSW9LL
[youtube] {351} signature length 113, html5 player vflNSW9LL
[youtube] {352} signature length 113, html5 player vflNSW9LL
[youtube] {381} signature length 113, html5 player vflNSW9LL
[youtube] QavMsSB-H0E: Downloading MPD manifest
[info] Writing video subtitles to: Chapter Fifty-Eight - 'In Memoriam'-QavMsSB-H0E.en.vtt
[debug] Invoking downloader on 'https://r4---sn-4pcp5q3-j2ie.googlevideo.com/videoplayback?expire=1571024724&ei=9JqjXcuyI5jF1gKT2oWwDw&ip=5.186.125.16&id=o-AELOtvO8eQMa49lujzhUsIfGNU9rAau3kUDQhELUZO1z&itag=225&aitags=142%2C143%2C144%2C145%2C146%2C161%2C222%2C223%2C224%2C225%2C226%2C227%2C273%2C274%2C275%2C279%2C280%2C317%2C318%2C357%2C358%2C359%2C360&source=youtube&requiressl=yes&mm=31%2C29&mn=sn-4pcp5q3-j2ie%2Csn-5hne6nsr&ms=au%2Crdu&mv=m&mvi=3&pl=21&ctier=A&pfa=5&gcr=dk&initcwndbps=1962500&hightc=yes&mime=video%2Fmp4&gir=yes&clen=1061594088&dur=2528.609&lmt=1570634707938274&mt=1571002957&fvip=4&keepalive=yes&fexp=23842630&c=WEB&sparams=expire%2Cei%2Cip%2Cid%2Caitags%2Csource%2Crequiressl%2Cctier%2Cpfa%2Cgcr%2Chightc%2Cmime%2Cgir%2Cclen%2Cdur%2Clmt&lsparams=mm%2Cmn%2Cms%2Cmv%2Cmvi%2Cpl%2Cinitcwndbps&lsig=AHylml4wRQIgHss2SBqzcGTARtgReutfgwZfgvPHg4iPx8J8WT-1UjACIQC28KFAsIcasCLiD7gX__A4X4-AkRCpdj8r4v3BGJ76dg%3D%3D&sig=ALgxI2wwRAIgF7w22zjBtbl8EPSI8avLbkiFt-sqWAVD-QSWKBHaZ30CIGiv76PD2eOROnzv0OmUlzwMNtnxModFV4txFB9LziY2&ratebypass=yes'
[download] Destination: Chapter Fifty-Eight - 'In Memoriam'-QavMsSB-H0E.f225.mp4
[download] 100% of 1012.42MiB in 01:40
[debug] Invoking downloader on 'https://r4---sn-4pcp5q3-j2ie.googlevideo.com/videoplayback?expire=1571024724&ei=9JqjXcuyI5jF1gKT2oWwDw&ip=5.186.125.16&id=o-AELOtvO8eQMa49lujzhUsIfGNU9rAau3kUDQhELUZO1z&itag=329&source=youtube&requiressl=yes&mm=31%2C29&mn=sn-4pcp5q3-j2ie%2Csn-5hne6nsr&ms=au%2Crdu&mv=m&mvi=3&pl=21&ctier=A&pfa=5&gcr=dk&initcwndbps=1962500&hightc=yes&mime=audio%2Fmp4&gir=yes&clen=122048695&dur=2528.704&lmt=1570634634937181&mt=1571002957&fvip=4&keepalive=yes&fexp=23842630&c=WEB&sparams=expire%2Cei%2Cip%2Cid%2Citag%2Csource%2Crequiressl%2Cctier%2Cpfa%2Cgcr%2Chightc%2Cmime%2Cgir%2Cclen%2Cdur%2Clmt&lsparams=mm%2Cmn%2Cms%2Cmv%2Cmvi%2Cpl%2Cinitcwndbps&lsig=AHylml4wRQIgHss2SBqzcGTARtgReutfgwZfgvPHg4iPx8J8WT-1UjACIQC28KFAsIcasCLiD7gX__A4X4-AkRCpdj8r4v3BGJ76dg%3D%3D&sig=ALgxI2wwRQIhAJptnopTETjJtuve7GNwWx7mW6XQI5g2qFlCnQhT_JbNAiAhZ95LvqrIyPnLXAcELuPLc6vDHdRv1vTEdveojdWUrg==&ratebypass=yes'
[download] Destination: Chapter Fifty-Eight - 'In Memoriam'-QavMsSB-H0E.f329.m4a
[download] 100% of 116.39MiB in 00:14
[ffmpeg] Merging formats into "Chapter Fifty-Eight - 'In Memoriam'-QavMsSB-H0E.mp4"
[debug] ffmpeg command line: ffmpeg -y -loglevel "repeat+info" -i "file:Chapter Fifty-Eight - 'In Memoriam'-QavMsSB-H0E.f225.mp4" -i "file:Chapter Fifty-Eight - 'In Memoriam'-QavMsSB-H0E.f329.m4a" -c copy -map "0:v:0" -map "1:a:0" "file:Chapter Fifty-Eight - 'In Memoriam'-QavMsSB-H0E.temp.mp4"
ERROR: Conversion failed!
Traceback (most recent call last):
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpyi91grvc\build\youtube_dl\YoutubeDL.py", line 2064, in post_process
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpyi91grvc\build\youtube_dl\postprocessor\ffmpeg.py", line 512, in run
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpyi91grvc\build\youtube_dl\postprocessor\ffmpeg.py", line 235, in run_ffmpeg_multiple_files
youtube_dl.postprocessor.ffmpeg.FFmpegPostProcessorError: Conversion failed!
...
<end of log>
```
---
- Single video: https://www.youtube.com/watch?v=QavMsSB-H0E
---
### Description of *issue*
Conversion fails. Using MediaInfo, I can see the files are encrypted. Is it possible to fix this with a decryption key? | account-needed | low | Critical |
506,449,763 | godot | Android deploy with Remote Debug or Network FS doesn't work over Wi-Fi | **Godot version:**
master (1fed266bf)
**OS/device including version:**
Host: Arch Linux, 64bit
Android Device: Oculus Quest, 9.0.0
**Issue description:**
The short version is that `adb reverse ...` doesn't work over a network connection, apparently due to an [adb bug](https://issuetracker.google.com/issues/37066218), and we attempt to use it even when we're connected over the network.
The long version is in the steps to reproduce below.
Godot console shows that reverse fails with a non-zero status.
> Installing to device (please wait...): Oculus Quest
> --- Device API >= 21; debugging over USB ---
> Reverse result: 1
`adb reverse` seems to always error with
>adb: error: more than one device/emulator
when connected wirelessly, even when used outside Godot, and even when there is only one device in `adb devices`.
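One workaround sometimes reported for the "more than one device/emulator" quirk (untested here on the Quest) is to name the wireless device explicitly rather than letting adb pick:

```sh
# The tcpip device shows up as ip:port in `adb devices`.
adb devices

# Pass the serial explicitly with -s; 6007 is Godot's default
# remote debug port (adjust to your editor settings).
adb -s x.x.x.x:5555 reverse tcp:6007 tcp:6007
```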
**Steps to reproduce:**
Attempt to debug a project on an android device over the network (instead of USB).
1. Connect a newish android device with ADB over tcpip:
   1. Connect an android device that uses at least API level 22 via USB.
   2. `adb tcpip 5555`
   3. `adb connect x.x.x.x:5555` (android device's ip)
   4. Unplug from USB.
2. Enable "Deploy with Remote Debug" and/or "Small Deploy with Network FS" from the "Debug" menu.
3. Deploy and run a project via the android button in the top-right corner of the UI.
Expected behavior:
* If Remote Debug enabled:
* breakpoints pause execution
* If Small Deploy with Network FS enabled:
* project runs
Actual behavior:
* If Remote Debug enabled:
* breakpoints do nothing
* If Small Deploy with Network FS enabled:
* error message on device instead of running
**Minimal reproduction project:**
N/A
Related to changes in #10792 | bug,platform:android,topic:editor,confirmed,topic:network | low | Critical |
506,490,937 | flutter | [local_auth] stickyAuth causes IllegalStateException | I'm receiving a crash report in Crashlytics with the following stack trace:
```
Fatal Exception: java.lang.IllegalStateException
Can not perform this action after onSaveInstanceState
androidx.fragment.app.FragmentManagerImpl.checkStateLoss (FragmentManagerImpl.java:1536)
androidx.biometric.BiometricPrompt.authenticate (BiometricPrompt.java:658)
io.flutter.plugins.localauth.AuthenticationHelper$1.run (AuthenticationHelper.java:172)
android.os.Handler.handleCallback (Handler.java:789)
com.android.internal.os.ZygoteInit.main (ZygoteInit.java:1374)
```
It's not device/OS specific (Android 6/7/8; manufacturers include Samsung, Xiaomi, ZTE, LG and others)
Code that I'm using:
```dart
@override
void initState() {
super.initState();
_auth();
}
void _auth() {
localAuth.authenticateWithBiometrics(
localizedReason: "App is locked",
stickyAuth: true,
).then((x) {
if (x)
Navigator.of(context).pushReplacementNamed('/home');
}).catchError((e) {
if (++retries < 10)
_auth();
});
}
```
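A mitigation I'm experimenting with (untested sketch — the lifecycle gate is my own idea, not a documented fix, and `_LockScreenState` is just a placeholder name) is to never re-trigger `authenticateWithBiometrics` while the activity may be paused, since the crash fires when `BiometricPrompt.authenticate` runs after `onSaveInstanceState`:

```dart
class _LockScreenState extends State<LockScreen> with WidgetsBindingObserver {
  AppLifecycleState _lifecycle = AppLifecycleState.resumed;

  @override
  void initState() {
    super.initState();
    WidgetsBinding.instance.addObserver(this);
    // Defer the first prompt until after the first frame.
    WidgetsBinding.instance.addPostFrameCallback((_) => _auth());
  }

  @override
  void didChangeAppLifecycleState(AppLifecycleState state) {
    _lifecycle = state;
    if (state == AppLifecycleState.resumed) _auth(); // retry once visible again
  }

  void _auth() {
    if (_lifecycle != AppLifecycleState.resumed) return; // never prompt while paused
    localAuth.authenticateWithBiometrics(
      localizedReason: "App is locked",
      stickyAuth: true,
    ).then((ok) {
      if (ok) Navigator.of(context).pushReplacementNamed('/home');
    }).catchError((e) {
      if (++retries < 10) _auth();
    });
  }
}
```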
I'm using local_auth 0.6.0+1 | c: crash,platform-android,p: local_auth,package,P2,c: fatal crash,team-android,triaged-android | low | Critical |
506,521,065 | TypeScript | Describe function signature using an interface | ## Search Terms
function implements interface
## Suggestion
Allow functions to indicate interfaces that they implement, allowing usage of a function interface.
Properties defined on the interface would not be possible unless they are declared optional.
## Use Cases
When using function overloads, the return type is defined as `any`, this would allow the return type to be guarded and not use `any`. Referencing: https://www.typescriptlang.org/docs/handbook/functions.html#overloads
It would also allow optional properties to be defined on a function without doing type gymnastics.
## Examples
```ts
interface CoolFunction {
(first: string, second: number): number;
(first: number, second: string): string;
property?: boolean;
}
// args will be of type [string, number] | [number, string]
function coolFunction(...args) implements CoolFunction {
if (typeof args[0] === "string") {
    // We've now narrowed the type of args to [string, number]
return args[1];
} else {
// args can only be [number, string]
return args[1];
}
}
// This property is known as it is defined on the interface
coolFunction.property = true;
```
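For comparison, the closest thing available today is annotating a variable with the interface type. It type-checks the property, but the implementation signature has to be widened by hand — which is exactly the ergonomics gap this proposal addresses (sketch, not part of the proposal):

```typescript
interface CoolFunction {
  (first: string, second: number): number;
  (first: number, second: string): string;
  property?: boolean;
}

// Workaround available today: a const annotated with the interface type.
// The implementation parameters are widened to `any` by hand, losing the
// narrowing that the proposed `implements` form would provide.
const coolFunction: CoolFunction = (first: any, second: any) => second;

coolFunction.property = true; // OK: declared on the interface
```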
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,In Discussion | high | Critical |
506,565,489 | pytorch | RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 1. Got 24 and 195 in dimension 0 at /opt/conda/conda-bld/pytorch_1565272271120/work/aten/src/TH/generic/THTensor.cpp:689 | I am trying to use the tracing mechanism to trace my Python model. Below is my nn.Module:
```
class GeneralizedRCNN(nn.Module):
"""
Main class for Generalized R-CNN. Currently supports boxes and masks.
It consists of three main parts:
- backbone
- rpn
- heads: takes the features + the proposals from the RPN and computes
detections / masks from it.
"""
def __init__(self, cfg):
super(GeneralizedRCNN, self).__init__()
self.backbone = build_backbone(cfg)
self.rpn = build_rpn(cfg, self.backbone.out_channels)
self.roi_heads = build_roi_heads(cfg, self.backbone.out_channels)
def forward(self, images, targets=None):
"""
Arguments:
images (list[Tensor] or ImageList): images to be processed
targets (list[BoxList]): ground-truth boxes present in the image (optional)
Returns:
result (list[BoxList] or dict[Tensor]): the output from the model.
During training, it returns a dict[Tensor] which contains the losses.
During testing, it returns list[BoxList] contains additional fields
like `scores`, `labels` and `mask` (for Mask R-CNN models).
"""
if self.training and targets is None:
raise ValueError("In training mode, targets should be passed")
images = to_image_list(images)
features = self.backbone(images.tensors)
proposals, proposal_losses = self.rpn(images, features, targets)
if self.roi_heads:
print(type(features))
print(type(proposals))
print(type(targets))
x, result, detector_losses = self.roi_heads(features, proposals, targets)
else:
# RPN-only models don't have roi_heads
x = features
result = proposals
detector_losses = {}
if self.training:
losses = {}
losses.update(detector_losses)
losses.update(proposal_losses)
return losses
print("result.....")
print(result[0].bbox.size())
return result[0].bbox
```
When I execute the file, it does print the last two lines as shown below:
**result…
torch.Size([24, 4])**
Here is my starting code snippet from the trace.py file:
**`traced_script_module = torch.jit.trace(torch_model, y)`**
Following is the error that I receive when I execute trace.py. **Please note that the error is thrown only after everything up to the second-to-last line of the nn.Module has printed. The bbox is populated and it has valid values.**
```
Traceback (most recent call last):
  File "/home/fariha/anaconda3/envs/maskrcnn_benchmark/lib/python3.7/site-packages/torch/jit/__init__.py", line 595, in run_mod_and_filter_tensor_outputs
    outs = wrap_retval(mod(*_clone_inputs(inputs)))
RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 1. Got 24 and 195 in dimension 0 at /opt/conda/conda-bld/pytorch_1565272271120/work/aten/src/TH/generic/THTensor.cpp:689
The above operation failed in interpreter, with the following stack trace:
/mnt/d/work/MASKRCNN/maskrcnn-benchmark/maskrcnn_benchmark/modeling/poolers.py(96): convert_to_roi_format
/mnt/d/work/MASKRCNN/maskrcnn-benchmark/maskrcnn_benchmark/modeling/poolers.py(110): forward
/home/fariha/anaconda3/envs/maskrcnn_benchmark/lib/python3.7/site-packages/torch/nn/modules/module.py(531): _slow_forward
/home/fariha/anaconda3/envs/maskrcnn_benchmark/lib/python3.7/site-packages/torch/nn/modules/module.py(545): __call__
/mnt/d/work/MASKRCNN/maskrcnn-benchmark/maskrcnn_benchmark/modeling/roi_heads/mask_head/roi_mask_feature_extractors.py(60): forward
/home/fariha/anaconda3/envs/maskrcnn_benchmark/lib/python3.7/site-packages/torch/nn/modules/module.py(531): _slow_forward
/home/fariha/anaconda3/envs/maskrcnn_benchmark/lib/python3.7/site-packages/torch/nn/modules/module.py(545): __call__
/mnt/d/work/MASKRCNN/maskrcnn-benchmark/maskrcnn_benchmark/modeling/roi_heads/mask_head/mask_head.py(70): forward
/home/fariha/anaconda3/envs/maskrcnn_benchmark/lib/python3.7/site-packages/torch/nn/modules/module.py(531): _slow_forward
/home/fariha/anaconda3/envs/maskrcnn_benchmark/lib/python3.7/site-packages/torch/nn/modules/module.py(545): __call__
/mnt/d/work/MASKRCNN/maskrcnn-benchmark/maskrcnn_benchmark/modeling/roi_heads/roi_heads.py(39): forward
/home/fariha/anaconda3/envs/maskrcnn_benchmark/lib/python3.7/site-packages/torch/nn/modules/module.py(531): _slow_forward
/home/fariha/anaconda3/envs/maskrcnn_benchmark/lib/python3.7/site-packages/torch/nn/modules/module.py(545): __call__
/mnt/d/work/MASKRCNN/maskrcnn-benchmark/maskrcnn_benchmark/modeling/detector/generalized_rcnn.py(56): forward
/home/fariha/anaconda3/envs/maskrcnn_benchmark/lib/python3.7/site-packages/torch/nn/modules/module.py(531): _slow_forward
/home/fariha/anaconda3/envs/maskrcnn_benchmark/lib/python3.7/site-packages/torch/nn/modules/module.py(545): __call__
/home/fariha/anaconda3/envs/maskrcnn_benchmark/lib/python3.7/site-packages/torch/jit/__init__.py(904): trace_module
/home/fariha/anaconda3/envs/maskrcnn_benchmark/lib/python3.7/site-packages/torch/jit/__init__.py(772): trace
trace.py(117):
```
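For anyone hitting this: the failing op (per the stack, it appears to be the `torch.cat` inside `convert_to_roi_format`) is concatenating tensors whose non-concat dimensions disagree — when joining along dimension 1, every other dimension must already match. The same shape rule, illustrated with NumPy instead of the traced model, using the 24 and 195 from the message:

```python
import numpy as np

# Concatenation along dim 1 requires all other dims to match;
# this mirrors the "Got 24 and 195 in dimension 0" failure above.
a = np.zeros((24, 4))
b = np.zeros((195, 5))
try:
    np.concatenate([a, b], axis=1)  # fails: dim 0 sizes differ, 24 vs 195
except ValueError as e:
    print(e)
```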
cc @suo | oncall: jit,module: nn,triaged | low | Critical |
506,579,897 | pytorch | caffe2 install VS2019 CUDA 10.1 lib\torch.lib : fatal error LNK1248: image size (10028FA9F) exceeds maximum allowable size (FFFFFFFF) | ## 🐛 Bug
Hi all, I'm trying to build Caffe2 from the PyTorch sources; I've used Python 2.7 with VS 2019 and CUDA 10.1. CMake was downloaded manually and also updated via conda, as suggested by other users who reported this issue. With BUILD_SHARED_LIB=OFF, per the instructions in build_windows.bat, I get this error:
```
lib\torch.lib : fatal error LNK1248: image size (10028FA9F) exceeds maximum allowable size (FFFFFFFF)
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
File "tools\build_libtorch.py", line 23, in <module>
rerun_cmake=True, cmake_only=False, cmake=CMake())
File "c:\projects\pytorch\pytorch\tools\build_pytorch_libs.py", line 59, in build_caffe2
cmake.build(my_env)
File "c:\projects\pytorch\pytorch\tools\setup_helpers\cmake.py", line 334, in build
self.run(build_args, my_env)
File "c:\projects\pytorch\pytorch\tools\setup_helpers\cmake.py", line 142, in run
check_call(command, cwd=self.build_dir, env=env)
File "C:\Python27\Lib\subprocess.py", line 190, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--target', 'install', '--config', 'Release', '--', '-j', '12']' returned non-zero exit status 1
"Caffe2 building failed"
```
## To Reproduce
Steps to reproduce the behavior:
> cd c:\projects\pytorch
> caffe2env\Scripts\activate
> scripts\build_windows.bat
Warning Generated:
```
CMake Warning at CMakeLists.txt:628 (message):
Generated cmake files are only available when building shared libs.
```
```
Summary;
******** Summary ********
-- General:
-- CMake version : 3.15.3
-- CMake command : C:/Program Files (x86)/CMake/bin/cmake.exe
-- System : Windows
-- C++ compiler : C:/Program Files (x86)/Microsoft Visual Studio/2019/Community/VC/Tools/MSVC/14.23.28105/bin/Hostx64/x64/cl.exe
-- C++ compiler id : MSVC
-- C++ compiler version : 19.23.28105.4
-- BLAS : MKL
-- CXX flags : /DWIN32 /D_WINDOWS /GR /w /EHa /MP /bigobj -openmp:experimental
-- Build type : Release
-- Compile definitions : ONNX_ML=1;ONNX_NAMESPACE=onnx_c2;_CRT_SECURE_NO_DEPRECATE=1;WIN32_LEAN_AND_MEAN
-- CMAKE_PREFIX_PATH : c:\projects\pytorch\caffe2env\Lib\site-packages;C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.1
-- CMAKE_INSTALL_PREFIX : C:/projects/pytorch/pytorch/torch
--
-- TORCH_VERSION : 1.1.0
-- CAFFE2_VERSION : 1.1.0
-- BUILD_CAFFE2_MOBILE : ON
-- USE_STATIC_DISPATCH : OFF
-- BUILD_BINARY : OFF
-- BUILD_CUSTOM_PROTOBUF : ON
-- Protobuf compiler :
-- Protobuf includes :
-- Protobuf libraries :
-- BUILD_DOCS : OFF
-- BUILD_PYTHON : False
-- BUILD_CAFFE2_OPS : ON
-- BUILD_SHARED_LIBS : OFF
-- BUILD_TEST : True
-- INTERN_BUILD_MOBILE :
-- USE_ASAN : OFF
-- USE_CUDA : True
-- CUDA static link : OFF
-- USE_CUDNN : OFF
-- CUDA version : 10.1
-- CUDA root directory : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.1
-- CUDA library : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.1/lib/x64/cuda.lib
-- cudart library : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.1/lib/x64/cudart_static.lib
-- cublas library : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.1/lib/x64/cublas.lib
-- cufft library : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.1/lib/x64/cufft.lib
-- curand library : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.1/lib/x64/curand.lib
-- nvrtc : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.1/lib/x64/nvrtc.lib
-- CUDA include path : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.1/include
-- NVCC executable : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.1/bin/nvcc.exe
-- CUDA host compiler : C:/Program Files (x86)/Microsoft Visual Studio/2019/Community/VC/Tools/MSVC/14.23.28105/bin/Hostx64/x64/cl.exe
-- USE_TENSORRT : OFF
-- USE_ROCM : OFF
-- USE_EIGEN_FOR_BLAS : ON
-- USE_FBGEMM : OFF
-- USE_FFMPEG : OFF
-- USE_GFLAGS : OFF
-- USE_GLOG : OFF
-- USE_LEVELDB : OFF
-- USE_LITE_PROTO : OFF
-- USE_LMDB : OFF
-- USE_METAL : OFF
-- USE_MKL : OFF
-- USE_MKLDNN : OFF
-- USE_NCCL : OFF
-- USE_NNPACK : OFF
-- USE_NUMPY : True
-- USE_OBSERVERS : OFF
-- USE_OPENCL : OFF
-- USE_OPENCV : OFF
-- USE_OPENMP : ON
-- USE_TBB : OFF
-- USE_PROF : OFF
-- USE_QNNPACK : OFF
-- USE_REDIS : OFF
-- USE_ROCKSDB : OFF
-- USE_ZMQ : OFF
-- USE_DISTRIBUTED : OFF
-- BUILD_NAMEDTENSOR : OFF
-- Public Dependencies : Threads::Threads
-- Private Dependencies : cpuinfo;fp16;aten_op_header_gen;foxi_loader
-- Configuring done
-- Generating done
```
CMake Warning:
Manually-specified variables were not used by the project:
NUMPY_INCLUDE_DIR
PYTHON_INCLUDE_DIR
| caffe2 | medium | Critical |
506,626,409 | terminal | Terminal startup location partially off Desktop |
# Environment
```none
Windows build number: 10.0.18362.418
Windows Terminal version (if applicable):
0.5.2762.0
Any other software?
```
# Steps to reproduce
1. change "initialRows": 44 in Settings
2. change "defaultProfile": "{c6eaf9f4-32a7-5fdc-b5cf-066e8a4b1e40}" # ubuntu-18.04
3. Start -> Windows Terminal (Preview)
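For reference, the settings touched in steps 1–2, as a minimal excerpt of my profiles.json (the `initialCols` value shown is the default; the GUID is the Ubuntu profile from step 2):

```json
{
    "defaultProfile": "{c6eaf9f4-32a7-5fdc-b5cf-066e8a4b1e40}",
    "initialCols": 120,
    "initialRows": 44
}
```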
# Expected behavior
The entire Terminal window with the default profile is expected to be visible on the Desktop.
# Actual behavior
Terminal appears to open with 14 lines off the bottom of the Desktop (behind and below the Taskbar). There does not seem to be an "initialXpos" or "initialYpos" setting to manually place the Terminal in its entirety on the Desktop. | Help Wanted,Issue-Bug,Area-UserInterface,Product-Terminal,Priority-3 | medium | Critical |
506,675,235 | pytorch | Be able to build torch.distributed documentation easier | ## 📚Motivation
It is pretty hard to build pytorch documentation for releases right now. To make the releases easier, it would be great if we could build the documentation easier and on the platforms that devs use the most (linux and mac os). One of the trickier things to build is the torch.distributed documentation.
# Analysis
On Mac, distributed doesn't get built by default. I am not sure how easy it is to build distributed on mac. However, when it is not built, then the APIs and their respective docstrings aren't exposed: https://github.com/pytorch/pytorch/blob/master/torch/distributed/__init__.py.
One possible solution is to push the `is_available()` into the APIs. For example, something like DistributedDataParallel https://github.com/pytorch/pytorch/blob/4bcedb66702a49c6c7e89f0a35321312bc5efcdb/torch/nn/parallel/distributed.py#L33 has its API and docstring exposed; the existence of the API and docstring don't depend on whether or not distributed was built.
Another possible solution is to make torch.distributed easier to build on mac os. I'm not sure how easy this is right now.
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @xush6528 @ezyang @zou3519 | oncall: distributed,triaged,module: doc infra | low | Major |
506,681,050 | rust | Replace most of rustc_metadata::schema with something based on queries. | Right now (cross-crate) "metadata" is encoded as an ad-hoc heterogeneous tree, described in [`rustc_metadata::schema`](https://github.com/rust-lang/rust/blob/d28a9c38fe14396e86ae274c7847e20ee0f78ca9/src/librustc_metadata/schema.rs), with `Lazy<T>` acting as indirection (as in "pointer to `T`", inside the "metadata" blob) and letting the user choose whether to decode of the `T` value.
There is also a random-access array (called "table" in #59953), which is currently only used for `Entry`.
This cross-crate system predates the on-demand/incremental query system, and we have accumulated a lot of data in the `schema` which is similar (but not always identical) to certain queries, and additional code to present that information through queries.
The disadvantages I see with the current approach are:
* a lot of `schema`/`encoder`/`decoder` boilerplate for everything
* most of which isn't documented well, perpetuating the ad-hoc-ness
* somewhat inconsistent organization
* e.g. `predicates` in `Entry` vs `super_predicates` in `TraitData`
* decoding more than is needed, even if mostly `Lazy` pointers
* `Entry`'s 15 fields are all decoded to read only 1, most of the time
* arguably a significant performance issue (although we save some space)
* #59953 is my attempt at solving this particular aspect
<hr/>
In #59953, the table of `Entry`s is replaced by a table for everything that used to be in an `Entry` field.
For example, the `predicates_of` query would then perform `predicates[i].decode()` instead of `entries[i].decode().predicates.decode()` (*irrelevant details elided*).
This is effectively a trade-off:
* using more space because most of those tables aren't 100% filled
* @michaelwoerister has some ideas about that in https://github.com/rust-lang/rust/pull/59953#discussion_r331979338
* taking less time because there are less unused details being decoded
* the query system dictates the granularity here, so matching it helps
* losing (some) cache locality might limit this win
However, we can go further - #59953 doesn't touch `EntryKind`, which is still a sprawling `enum` with even two levels of `Lazy` indirection in places.
<hr/>
Ultimately, we could have "cross-crate metadata" be one table per query in most cases. This would accentuate the trade-off from #59953 further, but it would also allow simplifying `rustc_metadata` and unifying it further with incremental save&restore.
One of the queries that would benefit most from this is `def_kind`, which could be stored as a fully-populated table of bytes, much more compact and cheaper to decode than `EntryKind` today. | C-cleanup,A-metadata,I-compiletime,T-compiler | low | Major |
506,708,584 | go | reflect: SetMapIndex does not permit adding nil values of interface type | ### What version of Go are you using (`go version`)?
I'm executing this on the Go playground (currently at 1.13.1). Sample provided.
### What did you do?
I tried to add an entry to a map of type map[string]interface{} using reflection. The behavior differs depending on whether the actual value is an interface or a concrete value. There seems to be no way to actually add a nil value based on an interface type alone. The example can be executed here: https://play.golang.org/p/yfQo-wyP0B9
```go
package main
import (
"fmt"
"reflect"
"regexp"
)
func addAsHello(y interface{}) {
m := map[string]interface{}{}
    // Add an entry for `hello` with a nil value using reflection
mr := reflect.ValueOf(m)
mr.SetMapIndex(reflect.ValueOf(`hello`), reflect.ValueOf(y))
fmt.Println(m)
// Do the same without reflection
m[`hello`] = y
fmt.Println(m)
fmt.Println()
}
func main() {
addAsHello(nil)
addAsHello(fmt.Stringer(nil))
addAsHello((*regexp.Regexp)(nil))
}
```
### What did you expect to see?
```
map[hello:<nil>]
map[hello:<nil>]
map[hello:<nil>]
map[hello:<nil>]
map[hello:<nil>]
map[hello:<nil>]
```
### What did you see instead?
```
map[]
map[hello:<nil>]
map[]
map[hello:<nil>]
map[hello:<nil>]
map[hello:<nil>]
```
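For what it's worth, this behavior follows from two documented rules: `reflect.ValueOf(nil)` returns the zero `Value`, and `SetMapIndex` treats a zero `Value` as "delete this key" — which is why the first two cases leave the map empty. Building the nil value from the map's element type instead works:

```go
package main

import (
	"fmt"
	"reflect"
)

// setNilEntry stores key with the map's own nil element value.
// reflect.ValueOf(nil) is the zero Value, and SetMapIndex treats a
// zero Value as "delete this key" — hence the empty maps above.
func setNilEntry(m map[string]interface{}, key string) {
	mr := reflect.ValueOf(m)
	mr.SetMapIndex(reflect.ValueOf(key), reflect.Zero(mr.Type().Elem()))
}

func main() {
	m := map[string]interface{}{}
	setNilEntry(m, "hello")
	fmt.Println(m) // map[hello:<nil>]
}
```

It would still be nice if this pitfall were called out in the `SetMapIndex` documentation.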
| Documentation,NeedsInvestigation,compiler/runtime | low | Major |
506,717,171 | TypeScript | type for Function.prototype.bind wrongly preserves fields on a function | When calling `bind` on a function, JavaScript does not copy across the properties that have been added to that function; TypeScript, however, treats them as having been copied.
**TypeScript Version:** 3.7-Beta
**Search Terms:** function bind
**Code**
```ts
const a = () => { }
a.hello = 'hi'
const b = a.bind({})
b.hello.split("") // Uncaught TypeError: Cannot read property 'split' of undefined
```
**Expected behavior:** `hello` field should not be present on `typeof b`.
**Actual behavior:** `hello` is deemed to exist on `typeof b` and the program crashes.
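A runtime demonstration, plus the manual workaround (copying the properties across after binding) for anyone currently relying on the unsound type:

```typescript
const a = () => { };
a.hello = "hi";

const b = a.bind({});
// b is typed as still having `hello`, but at runtime bind() created a
// fresh function object without the expando property:
console.log((b as any).hello); // undefined

// Manual workaround: copy own properties onto the bound function.
const c = Object.assign(a.bind({}), { hello: a.hello });
c.hello.split(""); // safe: "hi" actually exists on c
```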
**Playground Link:** http://www.typescriptlang.org/play/?ts=3.7-Beta&ssl=1&ssc=1&pln=7&pc=1#code/MYewdgzgLgBAhjAvDAFASiQPhgbxgXwCg4A6ACwFMAbKkJGAcjIEsHDDRJYAje075mAAmKHPjTtu5arRIQADlWZQUAIlUYA9JpgBVMMDgBXAOZlYAFQCe8igFEATg5AOAXDADCcMGBCwHFHBCMPLOtg5QVowKSlAMMCAAZjBGwhSJghRChEA
**Related Issues:** https://github.com/Microsoft/TypeScript/issues/212 seems to be where this was discussed and finally implemented, although it's not directly related.
| Suggestion,Awaiting More Feedback | low | Critical |
506,717,762 | godot | Xbox 360 Controller not recognized in Firefox (Mac) | **Godot version:** 3.1.1
**OS/device including version:** macOS 10.14.6; Firefox 69.0.3
**Issue description:** In Chrome and the Mac build, I'm able to use an Xbox 360 Controller, which presents to the engine as "Xbox 360 Wired Controller." However, this doesn't get recognized in Firefox, perhaps because the name shows up as "45e-28e-Xbox 360 Wired Controller." I'm not sure where that extra string at the beginning is coming from, but it seems like something Godot should maybe anticipate and be able to handle? This might only be an issue on the Mac, maybe something with the 360Controller.kext that most folks use to get it running; I don't have access to my Windows machine at the moment so can't confirm.
**Steps to reproduce:**
1. Run on a Mac using the [360Controller](https://github.com/360Controller/360Controller/) kernel extension. (As I said, may be the case elsewhere; I just don't want to make overly broad claims.)
2. Load the linked project from below and export to HTML5.
3. Start a server to run the exported HTML.
4. Load the page in Firefox, making sure the console is visible.
5. Move a stick or press a button on an attached 360 Controller (since a webpage won't treat it as connected until it's interacted with).
6. Note the console output:
```
Joystick 0 connected
name: 45e-28e-Xbox 360 Wired Controller
Device not recognized.
```
7. Run the same project from the editor.
8. Note the console output:
```
Joystick 0 connected
name: X360 Controller
Recognized joystick
X360 Controller connected
```
9. Load the page in Chrome, with console visible. Interact with the controller.
10. Note the console output:
```
Joystick 0 connected
name: Default Mapping
Recognized joystick
Default Mapping connected
```
**Minimal reproduction project:**
[sjml/Godot-Firefox-ControllerBug](https://github.com/sjml/Godot-Firefox-ControllerBug)
[Exported web version](https://shaneliesegang.com/tmp/ControllerBug/) | bug,platform:web,topic:input | low | Critical |
506,763,400 | flutter | invokeMethod on MockedMethodCallHandler should have an option to invoke the "real" handler and bypass the mock | ## Use case
I am mocking platform messages for Firebase. I have the option to call setMockMethodCallHandler for the plugin to invoke code under test, allowing users to rely on sane defaults or provide their own custom responses. However, the internals of the plugin respond to normal platform channel messages and allocate private types like FirebaseUser._(), trigger _onAuthChanged() callbacks, etc.
Because the MockMethodCallHandler is either in effect or not at any point in time, I can call setMockMethodCallHandler(null) on the channel to trigger plugin code, but if that plugin code then sends any messages on the same channel, they will not reach the mocked handler; instead an exception is thrown when running "flutter test".
## Proposal
If I could call invokeMethodOnHandler() instead of invokeMethod(), the call would be routed to the plugin code just once, and the very next call would come back to my mocked tests, without requiring any tight coupling between user code and the test library. Just providing some mock overrides for platform channels would cover all cases.
A good example is triggering all of the onAuthStatusChanged callbacks in FirebaseAuth, but there are probably several other cases where there is more than a single-fire event to a platform channel on a call stack, and it would be preferable to explicitly tell the platform channel to call the registered handler once under test situations.
| a: tests,c: new feature,framework,c: proposal,P3,a: plugins,team-framework,triaged-framework | low | Minor |
506,778,301 | flutter | Provide visual feedback when the app is ready for the Dart Debug Extension | ### Current behavior
When I run a flutter web app using the following command:
```
flutter run -d chrome --no-web-browser-launch
```
The app would load but not render anything until the Dart Debug Extension is clicked.
### Problem
The problem is that the app takes some time to be ready, and there's no way to know when it's ready (except by looking at the DevTools Network tab). So if the Dart Debug Extension is clicked before the app is ready, this is what happens:

### Feature request
Show visual feedback when the app is ready to be started by the Dart Debug Extension. | tool,platform-web,P3,team-web,triaged-web | low | Critical |
506,811,418 | flutter | AccessibilityBridge test on iOS | We should have tests that verify the accessibility bridge builds the iOS accessibility tree correctly based on the semantics tree provided by the framework. Running the test would also require a dedicated iOS device with VoiceOver enabled.
https://github.com/flutter/flutter/issues/42270 | a: tests,platform-ios,a: accessibility,P2,team-ios,triaged-ios | low | Major |
506,812,306 | flutter | Wire up automated code coverage reporting for the engine. | It is possible to generate total test coverage in the engine using `//flutter/build/generate_coverage.py`. However, there is no automated reporting of these numbers. This must be wired up on CI so engine developers don't have to run the script locally to figure out the test coverage impact of their patches. | a: tests,engine,P2,team-engine,triaged-engine | low | Minor |
506,833,703 | terminal | Tile mode for background image stretch | # Description of the new feature/enhancement
I'm trying to use a patterned GIF as a background image, but didn't find an option to tile it. A new option for setting the stretch mode to tile the image would be great. It would be a nice option as images like these simply don't look good under other modes or require extra effort to make them work, such as editing them and tiling them manually.
Here's the GIF I'm trying to use:

# Proposed technical implementation details (optional)
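For illustration, a hedged sketch of where such a value could live in a profile (the `"tile"` value and the image path are hypothetical):

```json
{
    "profiles": [
        {
            "name": "PowerShell",
            "backgroundImage": "C:\\Users\\me\\pattern.gif",
            "backgroundImageStretchMode": "tile"
        }
    ]
}
```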
Allow setting the `"backgroundImageStretchMode"` setting in the profiles to something like `"tile"`. This would display the image at its native resolution from the top-left corner, repeating the image across the background until it's entirely covered. | Help Wanted,Area-UserInterface,Product-Terminal,Issue-Task,Priority-3 | low | Major |
506,849,965 | kubernetes | kubelet does not honor shortened grace period on already-deleting pod |
**What happened**:
Delete a pod with a grace period of 120 seconds, then again with 60 seconds. It will still wait until the first 120-second grace period elapses before deleting the pod.
**What you expected to happen**:
For it to use the new grace period of 60 seconds
**How to reproduce it (as minimally and precisely as possible)**:
```
ben@shadowfax:~$ kubectl get pod -o yaml | egrep -i 'term|del'
blockOwnerDeletion: true
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
terminationGracePeriodSeconds: 900
ben@shadowfax:~$ kubectl delete pod $(kubectl get pods | grep python | grep Running | cut -f1 -d' ') --grace-period=60
pod "python-645896764c-f5p9k" deleted
ben@shadowfax:~$ kubectl get pod -o yaml | egrep -i 'term|del'
blockOwnerDeletion: true
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
terminationGracePeriodSeconds: 900
deletionGracePeriodSeconds: 60
deletionTimestamp: "2019-10-02T16:27:50Z"
blockOwnerDeletion: true
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
terminationGracePeriodSeconds: 900
ben@shadowfax:~$ kubectl delete pod $(kubectl get pods | grep python | grep Running | cut -f1 -d' ') --grace-period=10
pod "python-645896764c-cfsjz" deleted
ben@shadowfax:~$ kubectl get pod -o yaml | egrep -i 'term|del'
blockOwnerDeletion: true
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
terminationGracePeriodSeconds: 900
deletionGracePeriodSeconds: 10
deletionTimestamp: "2019-10-02T16:27:15Z"
blockOwnerDeletion: true
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
terminationGracePeriodSeconds: 900
deletionGracePeriodSeconds: 60
deletionTimestamp: "2019-10-02T16:27:50Z"
blockOwnerDeletion: true
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
terminationGracePeriodSeconds: 900
```
**Environment**:
- Kubernetes version (use `kubectl version`): 1.14
- Cloud provider or hardware configuration: Rancher/RKE/Hyperkube
- OS (e.g: `cat /etc/os-release`): Ubuntu 18.04
- Install tools: RKE/Rancher
| kind/bug,priority/backlog,sig/node,lifecycle/frozen,triage/accepted | medium | Critical |
506,865,924 | pytorch | `num_batches_tracked` update in `_BatchNorm` forward should be a single scalar update on host regardless of the residence of the layer | ## 🚀 Feature
`num_batches_tracked` is a single scalar that increments by 1 every time `forward` is called on the `_BatchNorm` layer with both `training` & `track_running_stats` set to true. Our current implementation stores it as a single-element buffer that resides on the same device as the rest of the layer's parameters/buffers.
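For reference, a small demonstration of the current behavior (assumes a recent PyTorch; shown on CPU, but the buffer follows the module's device):

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(4)
bn.train()
x = torch.randn(8, 4)
bn(x)
bn(x)
# The counter is an ordinary buffer stored alongside the running stats, so on
# GPU modules every training-mode forward pays a kernel launch to increment it.
print(bn.num_batches_tracked.item())             # 2
print('num_batches_tracked' in bn.state_dict())  # True
```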
We request the update & storage of `num_batches_tracked` to be moved to host, despite the residence of the rest of parameters/buffers.
## Motivation
When we have a BN layer on accelerators (GPUs), every forward call that updates `num_batches_tracked` triggers a single-element kernel launch, introducing unnecessary host overhead which can hurt end-to-end performance in CPU-bound workloads.
My last attempt to move `num_batches_tracked` to the host from the device gives 0%~11% performance gain across some common problem sizes. #26550
## Pitch
We need a way for `num_batches_tracked` to reside on device while remaining backward compatible for saving/loading modules. This involves relaxing some checks in Python tests, which assume that all values in `state_dict` are buffers/parameters passed by reference.
## Alternatives
My implementation #26550 does not give full backward compatibility (failing tests and not loading `num_batches_tracked` by reference) and I don't know how to do that easily without a big hammer (rewriting `__setattr__` in `_BatchNorm` inherited from `nn.Module`).
But it does support saving/loading modules, as well as assignment using a tensor/scalar. | module: performance,module: nn,module: cuda,triaged,enhancement | low | Major |
506,883,407 | pytorch | Specifying `pos_weight` in F.binary_cross_entropy_with_logits leads to RuntimeError: class size not match | The function `F.binary_cross_entropy_with_logits` should be able to handle arbitrary logits shapes, but the `pos_weight` argument still assumes the size of the second channel to be the number of classes.
Example:
```
import torch
import torch.nn.functional as F
inputs = torch.randn(4, 100, 100)
targets = torch.empty(4, 100, 100).random_(2)
pos_weight = torch.tensor([0.1, 0.9]) # for class 0 and class 1, when dataset is biased.
```
`F.binary_cross_entropy_with_logits(inputs, targets)` works fine, but `F.binary_cross_entropy_with_logits(inputs, targets, pos_weight=pos_weight)` would gives out error:
```
RuntimeError: The size of tensor a (2) must match the size of tensor b (100) at non-singleton dimension 2
```
Could you take a look at it? Or is there any quick hack that can handle it?
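One possible workaround, under the assumption that the goal is simply to up-weight the positive class of a single binary output: since `pos_weight` is broadcast against the target, a scalar tensor avoids the shape mismatch (a sketch, not an official recommendation):

```python
import torch
import torch.nn.functional as F

inputs = torch.randn(4, 100, 100)
targets = torch.empty(4, 100, 100).random_(2)
# Weight positive examples by the negative/positive class ratio,
# here 0.9 / 0.1 = 9; a scalar broadcasts over every element.
pos_weight = torch.tensor(9.0)
loss = F.binary_cross_entropy_with_logits(inputs, targets, pos_weight=pos_weight)
print(loss.shape)  # torch.Size([])
```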
cc @albanD @mruberry | module: nn,module: error checking,triaged | low | Critical |
506,895,341 | rust | Unable to resolve a recursive trait bound when resolution appears possible | I have some code where the compiler is unable to resolve a recursive trait bound even though resolution appears possible. The code:
```rust
#![recursion_limit = "10"]
// Create a trait with some methods
pub trait Methods2<Other> {
type Output;
fn min(self, other: Other) -> Self::Output;
}
// Create a composite trait that contain multiple traits
trait AllCombos<NonRef>:
Methods2<NonRef, Output = NonRef>
+ for<'a> Methods2<&'a NonRef, Output = NonRef>
{
}
impl<T, NonRef> AllCombos<NonRef> for T where
T: Methods2<NonRef, Output = NonRef>
+ for<'a> Methods2<&'a NonRef, Output = NonRef>
{
}
// Implement this trait for f32
impl Methods2<f32> for f32 {
type Output = f32;
fn min(self, other: f32) -> Self::Output {
self.min(other)
}
}
impl Methods2<f32> for &f32 {
type Output = f32;
fn min(self, other: f32) -> Self::Output {
(*self).min(other)
}
}
impl Methods2<&f32> for f32 {
type Output = f32;
fn min(self, other: &f32) -> Self::Output {
self.min(*other)
}
}
impl Methods2<&f32> for &f32 {
type Output = f32;
fn min(self, other: &f32) -> Self::Output {
(*self).min(*other)
}
}
// Create a struct with a generic float inside
#[derive(Debug)]
struct MyStruct<Float> {
x: Float,
}
// Implement to Methods2 trait for MyStruct
impl<Float> Methods2<MyStruct<Float>> for MyStruct<Float>
where
Float: AllCombos<Float>,
for<'a> &'a Float: AllCombos<Float>,
{
type Output = MyStruct<Float>;
fn min(self, other: MyStruct<Float>) -> Self::Output {
MyStruct::<Float> {
x: self.x.min(other.x),
}
}
}
impl<Float> Methods2<MyStruct<Float>> for &MyStruct<Float>
where
Float: AllCombos<Float>,
for<'a> &'a Float: AllCombos<Float>,
{
type Output = MyStruct<Float>;
fn min(self, other: MyStruct<Float>) -> Self::Output {
MyStruct::<Float> {
x: (&self.x).min(other.x),
}
}
}
impl<Float> Methods2<&MyStruct<Float>> for MyStruct<Float>
where
Float: AllCombos<Float>,
for<'a> &'a Float: AllCombos<Float>,
{
type Output = MyStruct<Float>;
fn min(self, other: &MyStruct<Float>) -> Self::Output {
MyStruct::<Float> {
x: self.x.min(&other.x),
}
}
}
impl<Float> Methods2<&MyStruct<Float>> for &MyStruct<Float>
where
Float: AllCombos<Float>,
for<'a> &'a Float: AllCombos<Float>,
{
type Output = MyStruct<Float>;
fn min(self, other: &MyStruct<Float>) -> Self::Output {
MyStruct::<Float> {
x: (&self.x).min(&other.x),
}
}
}
// Lifts a variable into MyStruct
fn lift<Float>(x: Float) -> MyStruct<Float>
where
Float: AllCombos<Float>,
for<'a> &'a Float: AllCombos<Float>,
{
MyStruct { x }
}
// Create an element
fn main() {
let bar = lift(4.0_f32).min(lift(2.0_f32));
//let bar = lift::<f32>(4.0_f32).min(lift::<f32>(2.0_f32));
println!("{:?}", bar.x);
}
```
produces the compiler error:
```
error[E0275]: overflow evaluating the requirement `&'a MyStruct<_>: Methods2<MyStruct<_>>`
--> src/test06.rs:116:15
|
106 | fn lift<Float>(x: Float) -> MyStruct<Float>
| ----
...
109 | for<'a> &'a Float: AllCombos<Float>,
| ---------------- required by this bound in `lift`
...
116 | let bar = lift(4.0_f32).min(lift(2.0_f32));
| ^^^^
|
= help: consider adding a `#![recursion_limit="20"]` attribute to your crate
= note: required because of the requirements on the impl of `for<'a> AllCombos<MyStruct<_>>` for `&'a MyStruct<_>`
= note: required because of the requirements on the impl of `Methods2<MyStruct<MyStruct<_>>>` for `&'a MyStruct<MyStruct<_>>`
= note: required because of the requirements on the impl of `for<'a> AllCombos<MyStruct<MyStruct<_>>>` for `&'a MyStruct<MyStruct<_>>`
= note: required because of the requirements on the impl of `Methods2<MyStruct<MyStruct<MyStruct<_>>>>` for `&'a MyStruct<MyStruct<MyStruct<_>>>`
= note: required because of the requirements on the impl of `for<'a> AllCombos<MyStruct<MyStruct<MyStruct<_>>>>` for `&'a MyStruct<MyStruct<MyStruct<_>>>`
= note: required because of the requirements on the impl of `Methods2<MyStruct<MyStruct<MyStruct<MyStruct<_>>>>>` for `&'a MyStruct<MyStruct<MyStruct<MyStruct<_>>>>`
= note: required because of the requirements on the impl of `for<'a> AllCombos<MyStruct<MyStruct<MyStruct<MyStruct<_>>>>>` for `&'a MyStruct<MyStruct<MyStruct<MyStruct<_>>>>`
= note: required because of the requirements on the impl of `Methods2<MyStruct<MyStruct<MyStruct<MyStruct<MyStruct<_>>>>>>` for `&'a MyStruct<MyStruct<MyStruct<MyStruct<MyStruct<_>>>>>`
= note: required because of the requirements on the impl of `for<'a> AllCombos<MyStruct<MyStruct<MyStruct<MyStruct<MyStruct<_>>>>>>` for `&'a MyStruct<MyStruct<MyStruct<MyStruct<MyStruct<_>>>>>`
```
Essentially, the `AllCombos` trait is designed to contain all ref/val combinations for a series of methods that contain two arguments. The idea is that a user implements `Methods2` four different times for these combinations and then `AllCombos` can be used as the constraint. Anyway, it's possible to resolve the error by using the fully qualified syntax:
```rust
let bar = lift::<f32>(4.0_f32).min(lift::<f32>(2.0_f32));
```
However, it seems like type checker should be able to resolve the trait bounds using the original call:
```rust
let bar = lift(4.0_f32).min(lift(2.0_f32));
```
since the arguments straightforwardly require the trait for `f32`. I'm curious whether this is the intended behavior or whether there's a bug in the type checker. For reference:
```
$ rustc --version
rustc 1.40.0-nightly (c27f7568b 2019-10-13)
``` | A-resolve,A-trait-system,T-compiler | low | Critical |
506,922,178 | flutter | Floating action button has inconsistent reacting regions | ## Problem

FAB has 2 different regions, the outer region (rectangle) and the inner region (circular), and they react to different events:
- Hover of the tooltip reacts to the outer region, because `Tooltip` wraps the entire widget
- Hover of the inkwell reacts to the inner region, as shown by hover color
- Tap gestures react to the outer region, thanks to `_InputPadding`
This is inconsistent: the gap between the 2 regions reacts to tooltips and to mouse clicks, but not to the hover color (nor to the upcoming mouse cursor feature). Ideally the region that hover events react to would match that of the tap events; at the very least we would like a single hover region, instead of 2 regions for different changes.
If we must use `_InputPadding`, I suggest we find a way to make it also proxy the mouse events of the `InkWell`, so that all events react to the outer region.
Flutter: dd43da71febaafc58852000a61a6bc1ac4fb57ef
Sample code:
```dart
import 'package:flutter/material.dart';
import 'package:flutter/widgets.dart';
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
theme: ThemeData(
primarySwatch: Colors.blue,
),
home: MyHomePage(title: 'Flutter Demo Home Page'),
);
}
}
class MyHomePage extends StatefulWidget {
MyHomePage({Key key, this.title}) : super(key: key);
final String title;
@override
_MyHomePageState createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
@override
Widget build(BuildContext context) {
return DefaultTabController(
length: 2,
child: Scaffold(
body: Center(
child: FloatingActionButton(
onPressed: () {},
tooltip: 'floating',
hoverColor: Colors.deepOrange,
),
)),
);
}
}
``` | framework,f: material design,a: desktop,has reproducible steps,P2,found in release: 3.3,found in release: 3.7,team-design,triaged-design | low | Major |
506,942,969 | terminal | Feature Request: Smart Double-click Selection (regexes?) |
# Description of the new feature/enhancement
The current implementation of mouse double-click in terminal window only selects a word, and triple click selects the entire line.
It is often necessary to select an entire path or other forms of multi-word string (e.g. decimal number, percent value, abbreviated size, email address, ...) when using terminal and it would be nice to have an automatic detection and selection of such multi-word strings.
For example, when using git, typing `git status` returns the relative paths of the changed files:

If I wanted to copy the path of a specific file (e.g. to diff or open), with the current implementation, I would have to manually drag and select the path of the file. With the requested feature, however, I would only need to double click anywhere on the path of the file and this speeds things up by a lot.
GNOME Terminal already implements the behaviour described here, and it is an extremely useful feature that makes everyday tasks so much simpler and speeds things up significantly.
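To make the idea concrete, here is a hedged sketch of the matching logic (illustrative Python with made-up patterns; a real implementation would live in the terminal's selection code):

```python
import re

# Hypothetical "smart selection" rules: a double-click selects the longest
# match spanning the clicked column, falling back to plain word selection
# when nothing matches.
PATTERNS = [
    re.compile(r"[\w.~-]+(?:/[\w.~-]+)+"),        # file paths
    re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+"),  # email addresses
    re.compile(r"\d+(?:\.\d+)?%"),                # percent values
]

def smart_select(line: str, col: int):
    best = None
    for pat in PATTERNS:
        for m in pat.finditer(line):
            if m.start() <= col < m.end():
                if best is None or len(m.group(0)) > len(best):
                    best = m.group(0)
    return best

print(smart_select("modified: flutter_ui/lib/main.dart", 15))
# → flutter_ui/lib/main.dart
```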
# Proposed technical implementation details (optional)
Define a set of regex rules for detecting and selecting commonly used multi-word strings. Maybe these rules can be customisable in the settings file. | Issue-Feature,Area-Extensibility,Product-Terminal | high | Critical |
506,946,502 | go | os: open stdin read-only and stdout/stderr write-only | Apparently at least macOS will let you write to stdin (and iTerm2 will show the output! Even if stdout/stderr are redirected!) but that's probably always an application error.
Sounds like we can save developers some debugging by opening stdin read-only and stdout/stderr write-only. | NeedsInvestigation | low | Critical |
506,951,538 | youtube-dl | Site support request: startv.com.tr |
## Checklist
- [x] I'm reporting a new site support request
- [x] I've verified that I'm running youtube-dl version **2019.09.28**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that none of provided URLs violate any copyrights
- [x] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
_Single video:_ https://www.startv.com.tr/dizi/cocuk/bolumler/3-bolum
_Single video:_ https://www.startv.com.tr/dizi/cocuk/fragmanlar/5-bolum-fragmani
_Single video:_ https://www.startv.com.tr/dizi/cocuk/ekstralar/5-bolumun-nefes-kesen-final-sahnesi
_Single video:_ https://www.startv.com.tr/video/arsiv/dizi/avlu/44-bolum
_Single video:_ https://www.startv.com.tr/program/burcu-ile-haftasonu/bolumler/1-bolum
_Single video:_ https://www.startv.com.tr/program/burcu-ile-haftasonu/fragmanlar/2-fragman
_Single video:_ https://www.startv.com.tr/video/arsiv/program/buyukrisk/14-bolumde-hangi-unlu-ne-sordu-
_Single video split into 2 parts:_ https://www.startv.com.tr/video/arsiv/program/buyukrisk/buyuk-risk-334-bolum
_Single video split into 8 parts:_ https://www.startv.com.tr/video/arsiv/program/dada/dada-58-bolum
## Description
It's the official site of a Turkish TV channel. 100% legal.
I tried to provide example URLs for every possible url structure. Also please make the implementation recognize and download the multi-part videos as a single file.
Thanks! | site-support-request | low | Critical |
506,967,874 | pytorch | Expand Pytorch C10D backend to dynamic load third party communication library | ## Motivation
Expand the PyTorch C10D backend to allow dynamically loading non-built-in communication libraries, as a preparation step to integrate Intel CCL (aka MLSL) into PyTorch as another c10d backend, supporting BFloat16 and future HW.
## Pitch
Enrich Pytorch for better scaling efficiency on multi-node training
## Additional Context
Expand the PyTorch c10d built-in communication module mechanism to support dynamically loading third-party communication Python modules. The change is very small and is made to the c10d Python query mechanism. The user specifies a backend name and passes it to init_process_group() as a parameter in Python code, which calls the c10d query mechanism. The query mechanism is expanded to import a third-party library according to the passed backend name. The third-party library implements the process_group interface.
Intel CCL is added as a third-party plug-in through the PyTorch C++ extension mechanism. CCL threads can be pinned to specific cores through environment variables. Bfloat16 allreduce (bfloat16 gradients reduced to fp32) is on its roadmap.
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @xush6528 | oncall: distributed,feature,triaged | low | Minor |
506,983,988 | pytorch | how to use libtorch library in cuda file with nvcc compiler(c++)? | ## ❓ Questions and Help
# Motivation
I want to implement NMS with parallel processing using the libtorch library.
I am using this CUDA code (https://github.com/gdlg/pytorch_nms).
# Environment
PyTorch version : 1.2.0
CUDA (nvcc compiler ) : 10.0
libtorch version : 1.2.0
system : win10
# Operation
I compile it with the command: `nvcc -c nms_kernel.cu -L -lcudart -I D:\Code-software\NNF\libtorch\libtorch\include -I D:\Code-software\NNF\libtorch\libtorch\include\torch\csrc\api\include`
# ERROR
`D:/Code-software/NNF/libtorch/libtorch/include\torch/csrc/jit/argument_spec.h(181): error: member "torch::jit::ArgumentSpecCreator::DEPTH_LIMIT" may not be initialized 1 error detected in the compilation of "C:/Users/Cason/AppData/Local/Temp/tmpxft_00001b28_00000000-10_nms_kernel.cpp1.ii"`
Whenever I add `#include <torch/extension.h>` or `#include <torch/script.h>` to a CUDA file, it produces this kind of error.
cc @yf225 | module: cpp,triaged | low | Critical |
507,009,636 | bitcoin | GUI event loop should be block free | When I created the bitcoin-qt GUI I made a big mistake in its design. I copied this more or less exactly from the wxwindows GUI. I was aware of this back in the day, but was planning to fix it later. I never got around to it. Honestly speaking I don't think I ever will.
In any case, the event loop in the main thread of a Qt program (or any GUI program, for that matter) is never supposed to block. Any operation that can take non-trivial time (even in the order of 40ms) should be executed asynchronously.
We have many such cases; not only when the user does an explicit operation such as send, but also in response to internal notifications, and automatic periodic polls for new transactions, the current balance, and so on (these can take longer due to `cs_main` lock contention: worst during initial sync and when catching up). Also at the start of the application. Pretty much all communication with the node and wallet happens in the GUI thread itself.
This is more "redesign" than "refactor", anyhow, and definitely not a "good first issue" it needs to happen at some point. Even more if we want [snazzy animations in the android GUI](https://github.com/bitcoin/bitcoin/pull/16883).
(ref #17112 and plenty of other "GUI not responding" issues) | Brainstorming,GUI,Refactoring,good first issue | medium | Critical |
507,018,351 | rust | request: properly connect `cargo check` and `cargo build` | Currently `cargo check` and `cargo build` work in isolation. It would be much better to connect them, so that:
* After a successful `cargo build`, a `cargo check` should pick up the incremental build data, and finish (almost) immediately without doing real work.
* After a successful `cargo check`, a `cargo build` should pick up the incremental build data, and continue to do the codegen part, instead of starting from scratch. | E-hard,C-enhancement,T-compiler,A-incr-comp,T-cargo | low | Major |
507,037,348 | youtube-dl | Add site support for banned.video |
## Checklist
- [x] I'm reporting a new site support request
- [x] I've verified that I'm running youtube-dl version **2019.09.28**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that none of provided URLs violate any copyrights
- [x] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
- Single video: https://banned.video/watch?id=5da4afaf9f040f0014733e18
- Single video: https://api.infowarsmedia.com/embed/5da4afaf9f040f0014733e18
- Playlist: https://banned.video/watch?id=5da558f10f4f1d001476cced&playlist=5d8a2abcf0ff0d001649b182
- Livestream: https://banned.video/channel/5b9301172abf762e22bc22fd
## Description
Only video works, and only after many fallbacks; there is no audio, and initialization takes a long time. Please also support automatically deriving the filename from the title.
The single videos all correspond to the same video. Livestream links correspond to another video. "Channel Livestreams" don't work at all, unsupported URL.
**Additional domains:**
Player: `https://www.infowarsmedia.com/js/player.js`
Player2: `https://cdn.irsdn.net/videojs-hlsjs-plugin/1/stable/videojs-hlsjs-plugin.js`
Referr: `https://api.infowarsmedia.com/embed/5da4afaf9f040f0014733e18`
HLS: `https://bytehighway.net/7a99a5cbb9409bcb49b7ef8a3df84683/manifest/stream_3.m3u8`
There's also an "incremental" url but doesn't work by it self.
(Opera browser) Right Click -> Copy Video Address seems to produce an ever-changing `blob` API link using the same domain as the embed/referrer domain, but the GUID is always different when copying and always invalid; it won't connect. It doesn't seem important, but I mention it anyway.
`blob:https://api.infowarsmedia.com/14d3fa9f-dc4b-4f81-a29f-d25fb7f3ab39`
and sometimes another version: `blob:https://vod-api.infowars.com/40a4552c-20e0-44f3-bd2e-506ac1b1c17c`
------
Livestreams don't seem to have a special video link, they're inserted into the banned.video channel page. Maybe there is one, but the API and Blob ones I tried again return 404 or have errors (probably some kind of security system maybe, or metadata only accessible by scripts)
Channel Livestream M3U8: `https://infostream.secure.footprint.net/hls-live/infostream3-infostream3/_definst_/live.m3u8`
------
StaticDL: `https://api.infowarsmedia.com/api/video/5da4afaf9f040f0014733e18/download`
StaticDL Direct: `https://assets.infowarsmedia.com/videos/2892eae3-88b9-4f26-a56a-ac8252cd3238.mov` (for some reason a different GUID here, but it corresponds to 5da4a..)
The static MOV download is the highest quality and is separate from the HLS VOD and livestream. It works out of the box, of course, since it's meant for downloading, but it's not the optimal choice when it comes to storage space.
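Putting the observed links together, here is a rough sketch of how an extractor could derive the video id and the static download URL. Everything below (the URL pattern, the function names, and the assumption that the `download` endpoint always takes the same id) is inferred only from the links above, not from any real youtube-dl extractor:

```python
import re

# Hypothetical pattern based on the watch links above; the 24-hex-digit
# id format is an assumption drawn from the example URLs.
_VALID_URL = r'https?://(?:www\.)?banned\.video/watch\?id=(?P<id>[0-9a-f]{24})'

def extract_video_id(url):
    # Pull the video id out of a banned.video watch URL, or None if the
    # URL does not match the assumed pattern.
    m = re.match(_VALID_URL, url)
    return m.group('id') if m else None

def static_download_url(video_id):
    # The "StaticDL" endpoint observed above, parameterized on the id.
    return 'https://api.infowarsmedia.com/api/video/%s/download' % video_id

vid = extract_video_id('https://banned.video/watch?id=5da4afaf9f040f0014733e18')
print(vid)  # 5da4afaf9f040f0014733e18
print(static_download_url(vid))  # https://api.infowarsmedia.com/api/video/5da4afaf9f040f0014733e18/download
```

If the endpoint behaves as observed, the static `.mov` URL would presumably be the simplest format to expose, with the HLS manifest variants added as additional formats.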
------
**Verbose Output**
```
C:\Program Files Manual\YoutubeDL>youtube-dl -F https://banned.video/watch?id=5da4afaf9f040f0014733e18 -v
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['-F', 'https://banned.video/watch?id=5da4afaf9f040f0014733e18', '-v']
[debug] Encodings: locale cp1250, fs mbcs, out cp852, pref cp1250
[debug] youtube-dl version 2019.09.28
[debug] Python version 3.4.4 (CPython) - Windows-10-10.0.14393
[debug] exe versions: ffmpeg N-94664-g0821bc4eee, ffprobe N-94664-g0821bc4eee
[debug] Proxy map: {}
[generic] watch?id=5da4afaf9f040f0014733e18: Requesting header
WARNING: Falling back on generic information extractor.
[generic] watch?id=5da4afaf9f040f0014733e18: Downloading webpage
[generic] watch?id=5da4afaf9f040f0014733e18: Extracting information
[generic] 5da4afaf9f040f0014733e18: Requesting header
WARNING: Falling back on generic information extractor.
[generic] 5da4afaf9f040f0014733e18: Downloading webpage
[generic] 5da4afaf9f040f0014733e18: Extracting information
[generic] 5da4afaf9f040f0014733e18: Downloading m3u8 information
[download] Downloading playlist: video
[generic] playlist video: Collected 1 video ids (downloading 1 of them)
[download] Downloading video 1 of 1
[info] Available formats for 5da4afaf9f040f0014733e18:
format code extension resolution note
hls-group_audio-audio_0 mp4 audio only
hls-580 mp4 426x240 580k , avc1.42c015, video only
hls-1020 mp4 640x360 1020k , avc1.4d401e, video only
hls-2120 mp4 854x480 2120k , avc1.4d401f, video only
hls-4100 mp4 1280x720 4100k , avc1.4d401f, video only
hls-5860 mp4 1920x1080 5860k , avc1.4d4028, video only (best)
[download] Finished downloading playlist: video
C:\Program Files Manual\YoutubeDL>youtube-dl -f hls-4100 https://banned.video/watch?id=5da4afaf9f040f0014733e18 -v
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['-f', 'hls-4100', 'https://banned.video/watch?id=5da4afaf9f040f0014733e18', '-v']
[debug] Encodings: locale cp1250, fs mbcs, out cp852, pref cp1250
[debug] youtube-dl version 2019.09.28
[debug] Python version 3.4.4 (CPython) - Windows-10-10.0.14393
[debug] exe versions: ffmpeg N-94664-g0821bc4eee, ffprobe N-94664-g0821bc4eee
[debug] Proxy map: {}
[generic] watch?id=5da4afaf9f040f0014733e18: Requesting header
WARNING: Falling back on generic information extractor.
[generic] watch?id=5da4afaf9f040f0014733e18: Downloading webpage
[generic] watch?id=5da4afaf9f040f0014733e18: Extracting information
[generic] 5da4afaf9f040f0014733e18: Requesting header
WARNING: Falling back on generic information extractor.
[generic] 5da4afaf9f040f0014733e18: Downloading webpage
[generic] 5da4afaf9f040f0014733e18: Extracting information
[generic] 5da4afaf9f040f0014733e18: Downloading m3u8 information
[download] Downloading playlist: video
[generic] playlist video: Collected 1 video ids (downloading 1 of them)
[download] Downloading video 1 of 1
[debug] Invoking downloader on 'https://bytehighway.net/7a99a5cbb9409bcb49b7ef8a3df84683/manifest/stream_4.m3u8'
[download] Destination: video-5da4afaf9f040f0014733e18.mp4
[debug] ffmpeg command line: ffmpeg -y -loglevel verbose -headers "User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3738.4 Safari/537.36
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Accept-Encoding: gzip, deflate
Accept-Language: en-us,en;q=0.5
Cookie: __cfduid=d6680d978e00d9fdf34688d4027c01ab41571117218
Referer: https://api.infowarsmedia.com/embed/5da4afaf9f040f0014733e18
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
" -i "https://bytehighway.net/7a99a5cbb9409bcb49b7ef8a3df84683/manifest/stream_4.m3u8" -c copy -f mp4 "file:video-5da4afaf9f040f0014733e18.mp4.part"
ffmpeg version N-94664-g0821bc4eee Copyright (c) 2000-2019 the FFmpeg developers
built with gcc 9.1.1 (GCC) 20190807
configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt
libavutil 56. 33.100 / 56. 33.100
libavcodec 58. 55.101 / 58. 55.101
libavformat 58. 31.104 / 58. 31.104
libavdevice 58. 9.100 / 58. 9.100
libavfilter 7. 58.101 / 7. 58.101
libswscale 5. 6.100 / 5. 6.100
libswresample 3. 6.100 / 3. 6.100
libpostproc 55. 6.100 / 55. 6.100
[tcp @ 000001938b55c880] Starting connection attempt to 104.17.106.42 port 443
[tcp @ 000001938b55c880] Successfully connected to 104.17.106.42 port 443
[hls @ 000001938b559600] Skip ('#EXT-X-VERSION:6')
[hls @ 000001938b559600] Skip ('#EXT-X-INDEPENDENT-SEGMENTS')
[hls @ 000001938b559600] Skip ('#Stream job=ApPJdf5x4qI= try=998095')
[hls @ 000001938b559600] HLS request for url 'https://bytehighway.net/7a99a5cbb9409bcb49b7ef8a3df84683/video/720/stream_4-seg_0.ts', offset 0, playlist 0
[hls @ 000001938b559600] Opening 'https://bytehighway.net/7a99a5cbb9409bcb49b7ef8a3df84683/video/720/stream_4-seg_0.ts' for reading
[tcp @ 000001938bc21d40] Starting connection attempt to 104.17.106.42 port 443
[tcp @ 000001938bc21d40] Successfully connected to 104.17.106.42 port 443
[hls @ 000001938b559600] HLS request for url 'https://bytehighway.net/7a99a5cbb9409bcb49b7ef8a3df84683/video/720/stream_4-seg_1.ts', offset 0, playlist 0
[hls @ 000001938b559600] Opening 'https://bytehighway.net/7a99a5cbb9409bcb49b7ef8a3df84683/video/720/stream_4-seg_1.ts' for reading
[tcp @ 000001938bc109c0] Starting connection attempt to 104.17.106.42 port 443
[tcp @ 000001938bc109c0] Successfully connected to 104.17.106.42 port 443
[h264 @ 000001938bfc8fc0] Reinit context to 1280x720, pix_fmt: yuv420p
[hls @ 000001938b559600] HLS request for url 'https://bytehighway.net/7a99a5cbb9409bcb49b7ef8a3df84683/video/720/stream_4-seg_2.ts', offset 0, playlist 0
[https @ 000001938bc0aa80] Opening 'https://bytehighway.net/7a99a5cbb9409bcb49b7ef8a3df84683/video/720/stream_4-seg_2.ts' for reading
Input #0, hls, from 'https://bytehighway.net/7a99a5cbb9409bcb49b7ef8a3df84683/manifest/stream_4.m3u8':
Duration: 00:19:50.83, start: 0.066667, bitrate: 0 kb/s
Program 0
Metadata:
variant_bitrate : 0
Stream #0:0: Video: h264 (Main), 1 reference frame ([27][0][0][0] / 0x001B), yuv420p(left), 1280x720 [SAR 1:1 DAR 16:9], 30 fps, 30 tbr, 90k tbn, 60 tbc
Metadata:
variant_bitrate : 0
Output #0, mp4, to 'file:video-5da4afaf9f040f0014733e18.mp4.part':
Metadata:
encoder : Lavf58.31.104
Stream #0:0: Video: h264 (Main), 1 reference frame (avc1 / 0x31637661), yuv420p(left), 1280x720 (0x0) [SAR 1:1 DAR 16:9], q=2-31, 30 fps, 30 tbr, 90k tbn, 90k tbc
Metadata:
variant_bitrate : 0
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Press [q] to stop, [?] for help
[hls @ 000001938b559600] HLS request for url 'https://bytehighway.net/7a99a5cbb9409bcb49b7ef8a3df84683/video/720/stream_4-seg_3.ts', offset 0, playlist 0
[https @ 000001938bda3040] Opening 'https://bytehighway.net/7a99a5cbb9409bcb49b7ef8a3df84683/video/720/stream_4-seg_3.ts' for reading
[hls @ 000001938b559600] HLS request for url 'https://bytehighway.net/7a99a5cbb9409bcb49b7ef8a3df84683/video/720/stream_4-seg_4.ts', offset 0, playlist 0
```
| site-support-request | low | Critical |
507,118,280 | TypeScript | Remove name assignment when renaming a renamed destructured property to its original name | <!-- 🚨 STOP 🚨 𝗦𝗧𝗢𝗣 🚨 𝑺𝑻𝑶𝑷 🚨
Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker.
Please help us by doing the following steps before logging an issue:
* Search: https://github.com/Microsoft/TypeScript/search?type=Issues
* Read the FAQ, especially the "Common Feature Requests" section: https://github.com/Microsoft/TypeScript/wiki/FAQ
-->
## Search Terms
rename, destructuring
<!-- List of keywords you searched for before creating this issue. Write them down here so that others can find this suggestion more easily -->
## Suggestion
When a variable that comes from an object destructuring and is bound under another name is renamed back to its original name, the name assignment is not removed and is left behind as useless extra code. It would make the code cleaner if the assignment were removed when the rename is done.
<!-- A summary of what you'd like to see added or changed -->
## Use Cases
To have cleaner code.
<!--
What do you want to use this for?
What shortcomings exist with current approaches?
-->
## Examples
```ts
const obj = { a: 1 };
const { a: b } = obj;
console.log(b);
```
Renaming `b` to `a`:
Now (at least up to 3.6.3):
```ts
const { a: a } = obj;
console.log(a);
```
Better:
```ts
const { a } = obj;
console.log(a);
```
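Purely as an illustration of the desired transform (not how the TypeScript language service actually performs renames, which operates on the AST rather than on text), the redundant binding could be collapsed like this:

```python
import re

def collapse_shorthand(src):
    # Collapse a redundant `name: name` pair inside a destructuring
    # pattern into ES2015 shorthand `name`. A toy text transform only;
    # a real implementation would rewrite the syntax tree.
    return re.sub(r'\b([A-Za-z_$][\w$]*)\s*:\s*\1\b', r'\1', src)

print(collapse_shorthand('const { a: a } = obj;'))  # const { a } = obj;
```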
<!-- Show how this would be used and what the behavior would be -->
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Experience Enhancement | low | Critical |
507,118,602 | go | gccgo: building non-empty struct with zero-sized trailing fields failed | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
go version go1.13 gollvm LLVM 10.0.0svn linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes.
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/hostname/.cache/go-build"
GOENV="/home/hostname/.config/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GONOPROXY=""
GONOSUMDB=""
GOOS="linux"
GOPATH="/home/hostname/gopath"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/home/hostname/gollvm-master/install"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/home/hostname/gollvm-master/install/tools"
GCCGO="/home/hostname/gollvm-master/install/bin/llvm-goc"
AR="ar"
CC="/usr/bin/cc"
CXX="/usr/bin/c++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build409229965=/tmp/go-build -gno-record-gcc-switches -funwind-tables"
</pre></details>
### What did you do?
https://play.golang.org/p/9QxcZb4KPJh
Build and run the above code with gccgo (gcc backend) and gollvm go (llvm backend)
### What did you expect to see?
Nothing output.
### What did you see instead?
**For gollvm go, the error messages are as follows**:
llvm-goc: ./gollvm-master/llvm/tools/gollvm/bridge/go-llvm-materialize.cpp:860: Bexpression* Llvm_backend::materializeComposite(Bexpression*): Assertion `vals.size() == numElements' failed.
**For gccgo, the error messages are as follows**:
go1: internal compiler error: in return_statement, at go/go-gcc.cc:2168
Please submit a full bug report,
with preprocessed source if appropriate.
See <file:///usr/share/doc/gcc-9/README.Bugs> for instructions.
A brief analysis:
For a non-empty struct with zero-sized trailing fields, an extra field "_" is appended in gofrontend/go/types.cc:get_backend_struct_fields. But the **Bexpression** built for the result still has the same number of elements as the original result sequence, so the assertion fails. I didn't dig into the gccgo code, but I guess the cause of its failure is the same. There are multiple such assertions in the gollvm code, and simply removing them doesn't work. I don't know how gc handles this case, but gc does handle it correctly.
CC @ianlancetaylor @thanm @cherrymui
| NeedsFix | low | Critical |
507,131,743 | pytorch | can't load model on cuda after call cudaDeviceReset functions | ## 🐛 Bug
Hello, I load my .pt model in libtorch (version 1.0.0) and it works well. Then the code calls the cudaDeviceReset function, and when I try to load the model again it can no longer be loaded on the GPU. If cudaDeviceReset is not called, everything is OK. I don't know how this function affects the model.
```cpp
std::shared_ptr<torch::jit::script::Module> module_1 = torch::jit::load("mymodel.pt");
module_1->to(torch::kCUDA); //fine
cudaDeviceReset(); //call cudaDeviceReset function to clear gpu memory
//load model again
std::shared_ptr<torch::jit::script::Module> module_2= torch::jit::load("mymodel.pt"); //crash here
module_2->to(torch::kCUDA);
```
## Environment
- PyTorch Version :1.0
- Libtorch Version: 1.0
- OS : windows7
- How you installed PyTorch (`conda`, `pip`, source): pip
- Build command you used (if compiling from source):
- Python version: 3.6.5
- CUDA/cuDNN version: cuda9.0 cudnn7.5.0
- GPU models and configuration: geforce gtx1060
- visual studio 2015 update3
cc @suo | oncall: jit,triaged | low | Critical |
507,187,838 | opencv | OpenCV JS Tutorial about VideoCapture on Safari for iOS is not working | Hi community,
the OpenCV.js tutorials about [FaceDetection](https://docs.opencv.org/3.4/df/d6c/tutorial_js_face_detection_camera.html) (and more generally about VideoCapture) are not working on Safari for iOS. The symptom is that the video canvas shows only the first frame on the first run of the sample; on the second run the VideoCapture goes fullscreen.
The fix is quite simple: add the **playsinline=true** HTML attribute to the video tag.
`<video id="videoInput" width=320 height=240 playsinline=true></video>`
Bests. | platform: ios/osx,category: javascript (js) | low | Minor |
507,238,465 | pytorch | Sending CUDA tensors via queue between processes, memory of Consumer process grows infinitely | ## 🐛 Bug
When sending CUDA tensors via queue between processes, then memory of Consumer process grows infinitely.
## To Reproduce
Here is simple code snippet that demonstrates the issue:
```
import os
import time
import torch
import torch.multiprocessing as mp
import psutil
class Consumer(mp.Process):
    def __init__(self, queue):
        super(Consumer, self).__init__()
        self.queue = queue

    def run(self):
        print('Consumer: ', os.getpid())
        while True:
            tensor = self.queue.get()
            del tensor
            process = psutil.Process()
            print('Consumer mem: ', process.memory_info().rss, end='\r')


class Producer(mp.Process):
    def __init__(self, queue):
        super(Producer, self).__init__()
        self.queue = queue
        self.device = torch.device('cuda:0')

    def run(self):
        print('Producer: ', os.getpid())
        while True:
            tensor = torch.ones([2, 4], dtype=torch.float32, device=self.device)
            self.queue.put(tensor)
            time.sleep(0.001)


if __name__ == '__main__':
    queue = mp.Queue()
    consumer = Consumer(queue)
    producer = Producer(queue)
    consumer.start()
    producer.start()
    consumer.join()
    producer.join()
```
## Expected behavior
Producer/consumer memory shouldn't grow
## Environment
```
PyTorch version: 1.4.0a0+19d83ab
Is debug build: No
CUDA used to build PyTorch: 10.1
OS: Ubuntu 16.04.6 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609
CMake version: version 3.5.1
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 10.1.168
GPU models and configuration:
GPU 0: TITAN X (Pascal)
GPU 1: TITAN X (Pascal)
GPU 2: GeForce GTX 1080 Ti
Nvidia driver version: 418.67
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.5.1.5
/usr/local/cuda-10.1/targets/x86_64-linux/lib/libcudnn.so.5
/usr/local/cuda-10.1/targets/x86_64-linux/lib/libcudnn.so.7
Versions of relevant libraries:
[pip] Could not collect
[conda] Could not collect
```
| module: multiprocessing,module: cuda,module: memory usage,triaged | low | Critical |
507,314,407 | godot | Artifacts when manipulating SHADOW_VEC on a light shader | **Godot version:**
3.2, alpha 2
**OS/device including version:**
Windows 10, NVIDIA GTX 1070
**Issue description:**
Trying to manipulate SHADOW_VEC in a shader to make vertical shadows on the walls.
This tends to generate artifacts in the shadows. I don't know if this is really a bug or something wrong with my code, but I thought it should be reported.
**Steps to reproduce:**
Run minimal project and move character.
**Minimal reproduction project:**
[test_shadowvec.zip](https://github.com/godotengine/godot/files/3730064/test_shadowvec.zip)

| bug,topic:rendering,confirmed | medium | Major |
507,362,832 | opencv | standard packages for linuxes with CUDA-support as in pytorch, mxnet sites | Please provide package distributions with CUDA support for Linux, with installation choices like those on the PyTorch and MXNet sites.
OpenCV is convenient for working with neural networks, but installing CUDA support is difficult. | priority: low,category: infrastructure | low | Minor |
507,369,430 | rust | Closure-like blocks capture all generic type and const parameters | I'd like to write the following code:
```rust
use std::future::Future;
struct Foo<A>(A);
impl<A> Foo<A> {
fn bar<Q, R: Default>(&mut self, q: Q) -> impl Future<Output = R> {
let _ = q;
async move {
R::default()
}
}
fn baz(&mut self, x: &str) -> impl Future<Output = usize> {
self.bar(x)
}
}
```
In particular, I would like to have the `impl Trait` returned by `baz` _not_ be tied to the lifetime of its `&str` argument. Since `impl Trait` captures the lifetimes of all generic arguments (as per [RFC 1951](https://github.com/rust-lang/rfcs/blob/master/text/1951-expand-impl-trait.md)), I can't write the code this way though. So instead, I tried
```rust
#![feature(type_alias_impl_trait)]
use std::future::Future;
struct Foo<A>(A);
type BarFut<A, R> = impl Future<Output = R>;
impl<A> Foo<A> {
fn bar<Q, R: Default>(&mut self, q: Q) -> BarFut<A, R> {
let _ = q;
async move {
R::default()
}
}
fn baz(&mut self, x: &str) -> impl Future<Output = usize> {
self.bar(x)
}
}
```
However, with this, I get the error:
```
error: type parameter `Q` is part of concrete type but not used in parameter list for the `impl Trait` type alias
```
This seems odd, since `Q` is (intentionally) not used in the `async` block. I can work around this by adding an `async fn` and calling that instead of using `async move`, but that seems like an odd hack:
```rust
async fn make_r_fut<R: Default>() -> R {
R::default()
}
// ...
fn bar<Q, R: Default>(&mut self, q: Q) -> BarFut<A, R> {
let _ = q;
make_r_fut()
}
// ...
```
Is it intentional that the `async` block "captures" `Q` here, even though it never contains a `Q`? | T-lang,A-impl-trait,A-async-await,AsyncAwait-Triaged | low | Critical |
507,373,636 | kubernetes | No way to request table format using WebSockets | <!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks!
If the matter is security related, please disclose it privately via https://kubernetes.io/security/
-->
**What happened**:
#71548 added support for `Table` and `PartialObjectMetadata` on watch. This does not work using WebSockets, however, since there is no way to pass an `Accept` header for content negotiation.
**What you expected to happen**:
A `format` query parameter for requesting `application/json;as=Table;v=v1beta1;g=meta.k8s.io` when watching a resource over WebSockets, or some other way to perform content negotiation.
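To illustrate the gap: over plain HTTP the table representation is requested via the `Accept` header, which browsers cannot set on a WebSocket handshake; a query parameter could carry the same media type in the URL instead. The `format` parameter below is the hypothetical proposal, not an existing API:

```python
from urllib.parse import urlencode

TABLE_MIME = 'application/json;as=Table;v=v1beta1;g=meta.k8s.io'

def http_watch_headers():
    # Plain HTTP watch: content negotiation happens via the Accept header.
    return {'Accept': TABLE_MIME}

def ws_watch_url(base_path):
    # WebSocket watch: the proposed (hypothetical) `format` query
    # parameter, since custom headers aren't available from browsers.
    # urlencode percent-escapes the ';' and '=' inside the media type.
    return base_path + '?' + urlencode({'watch': '1', 'format': TABLE_MIME})

print(ws_watch_url('/api/v1/pods'))
```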
**Environment**:
- Kubernetes version (use `kubectl version`): v1.16.0-beta.2+af7ac0b
@smarterclayton | kind/bug,sig/api-machinery,lifecycle/frozen,triage/accepted | medium | Critical |
507,379,415 | vscode | "There are task errors" while typing in tasks.json | - Have a task like this:
```json
{
"type": "npm",
"script": "web",
"label": "Run web",
"isBackground": true,
"presentation": {
"reveal": "never"
}
}
```
- Add this
```json
"problemMatcher": {
"background": {
}
},
```
- Get a warning popup immediately, "There are task errors. See the output for details"
- I'm still typing and I know there are errors; the popup should leave me alone for a minute or until I close the editor
507,391,457 | youtube-dl | report both the previously installed version and the updated version after an update | ## Checklist
- [x] I'm reporting a feature request
- [x] I've verified that I'm running youtube-dl version **2019.09.28**
- [x] I've searched the bugtracker for similar feature requests (including closed ones) and haven't found any similar ones
## Description
Currently `youtube-dl -U` reports only the latest version, either as the up-to-date version (such as “youtube-dl is up-to-date (2019.09.28)”) when an update is not necessary or as the version being installed when an update happens.
However, in the latter case the reported version is not enough for the comfortable reading of [the changelog.](https://github.com/ytdl-org/youtube-dl/blob/master/ChangeLog) I'd like to know where exactly should I stop reading the changelog, and thus I have to request the previously installed version to be also displayed after an update.
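For illustration, the desired behavior is simply to keep the pre-update version around and print both (this message format is hypothetical, not what `youtube-dl -U` currently prints):

```python
def format_update_message(old_version, new_version):
    # Report both the previously installed and the newly installed
    # version, so the user knows exactly which ChangeLog range to read.
    if old_version == new_version:
        return 'youtube-dl is up-to-date (%s)' % new_version
    return 'Updated youtube-dl %s -> %s' % (old_version, new_version)

print(format_update_message('2019.09.28', '2019.10.16'))
# Updated youtube-dl 2019.09.28 -> 2019.10.16
```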
(Currently, as a workaround, I make sure to always run `youtube-dl --version` before I run `youtube-dl -U`, but that's twice longer than it should be.) | request | low | Critical |
507,427,002 | flutter | [shared_preferences] Update documentation to reflect testing changes | I feel like the documentation on shared_preferences could use a hint to the changes pulled here:
https://github.com/flutter/plugins/pull/1308
Mainly that you can use `SharedPreferences.setMockInitialValues()` to mock initial values and also "overwrite" values with it in later tests. I stumbled upon the problem that the singleton instance does not call `getAll` again after being initialized in an earlier test, and thus still has the old/wrong values in its cache.
| d: api docs,p: shared_preferences,package,team-ecosystem,P2,triaged-ecosystem | low | Minor |
507,469,381 | create-react-app | Proposal: Option to build without linting | ### Is your proposal related to a problem?
I'm always frustrated when the build command is linting my code and there is no way for me to skip the linting step. My pipeline is `lint > test > build`. I want to lint before testing because my test files are also linted. With the current setup, linting happens twice.
### Describe the solution you'd like
I would like to be able to opt out of linting: `"build" : "react-scripts built --no-lint"`
Or (even better) have linting separated from the build command so I can remove it. It would appear this way in the default setup:
`"build": "react-scripts lint && react-scripts build"`
The README never mentions that linting will happen within the build step, so this makes the actual process much more visible.
### Describe alternatives you've considered
Somehow disabling linting by configuration before the build step happens.
### Additional context
A related issue was auto-staled without solution:
https://github.com/facebook/create-react-app/issues/7078 | issue: proposal,needs triage | medium | Major |
507,475,537 | pytorch | TestTorch.test_doc should be in TestDocCoverage | Afaict TestTorch.test_doc tests that certain functions on torch are documented. This is similar to what TestDocCoverage does and it would be great to have all of the documentation tests in one single location.
https://github.com/pytorch/pytorch/blob/fd3d6587e60de816106d6b8dc0f586f9fa52a7ac/test/test_torch.py#L183
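The kind of check both tests perform can be sketched as a single helper. This is a simplified illustration of the idea, not the actual test code in either TestTorch.test_doc or TestDocCoverage:

```python
import types

def undocumented_public_callables(module):
    # Return the names of public callables in `module` that have no
    # docstring -- the core condition a doc-coverage test asserts on.
    missing = []
    for name in sorted(dir(module)):
        if name.startswith('_'):
            continue
        obj = getattr(module, name)
        if callable(obj) and not getattr(obj, '__doc__', None):
            missing.append(name)
    return missing

# Demo on a throwaway module with one documented and one
# undocumented function.
mod = types.ModuleType('demo')
def documented():
    """Has a docstring."""
def undocumented():
    pass
mod.documented = documented
mod.undocumented = undocumented
print(undocumented_public_callables(mod))  # ['undocumented']
```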
cc @ezyang @zou3519 | module: docs,triaged,module: doc infra | low | Major |
507,477,774 | pytorch | Easy way to create previews of the docs website after any changes | It would be nice to be able to preview documentation changes; I find that in code review at least one of the following happens:
1. The reviewer asks for images of the documentation
2. After the PR gets merged, we discover that the syntax or formatting is incorrect.
`pytorch.github.io` has some infrastructure that previews changes to the website; maybe we can reuse that and provide some option.
cc @ezyang @zou3519 | triaged,module: doc infra | low | Minor |
507,486,369 | flutter | Support clear dependencies on specific Android embedding versions | ## Use case
Plugins may have different requirements depending on which version of the embedding they're registered with. See flutter/plugins#2196 for an example. When used with V2 of the Android embedding, the plugin absolutely needs a (relatively recent) bugfix within that embedding to work correctly. However when used with the V1 Android embedding, the plugin's Flutter SDK requirement is much lower since it doesn't need that bugfix.
Right now we don't have a way to programmatically check for and assert the higher requirement with the V2 embedding without artificially raising it for all users. There's no way to know from the pub side (where the Flutter version is checked) whether the user is registering the plugin with the old or new embedding, and no way (that I know of) to get the Flutter version from the Java side, where we do know which embedding is being used.
## Proposal
I have ~two~ three rough ideas.
### Split the embeddings into separate packages outside of flutter/engine
@dnfield has written up a proposal for this before, at https://flutter.dev/go/android-embedding-move. My (extremely rough) ask here on top of what's in that doc already is to also split V1 and V2 out into separate packages so that plugins could specify which version of each they depend on in their `pubspec.yaml`, similar to how the Flutter SDK is required as a whole.
I think this is the better solution.
### Automatically generate some versioning information in the Java code that's built today
It's technically possible to generate some Java class as part of the build process that has versioning information. It wouldn't be able to save Flutter versioning, since that's defined upstream, but it could generate something with the latest engine hash.
Actually using this in a plugin would be difficult, however. It would mean doing a runtime check on the class and then logging or throwing. The plugin would also need to somehow maintain Flutter and the Engine's Git history in order to know if a particular hash was before or after the "right" Flutter version.
### Manually add in some constant and start semantically versioning the v2 embedding where it is today
This is a lot like the above suggestion, where it relies on sketchy runtime checks on the part of the plugin developer. However instead of needing a plugin to know all of Flutter and the Engine's git history and using some generated class, we could start deliberately manually versioning the v2 embedding. I expect this would be prone to catastrophic failures often because it would be extremely hard to realize as an Engine developer that you'd need to increment this random constant if you touched this files. We could maybe add a presubmit check that would error if certain directories were touched and the constant wasn't altered.
_Edited to add a third idea_
/cc @matthew-carroll @dnfield @blasten @amirh | c: new feature,platform-android,engine,a: existing-apps,a: build,P2,a: plugins,team-android,triaged-android | low | Critical |
507,492,657 | flutter | Support iOS 13 CupertinoContextMenu Submenu | Flutter's [CupertinoContextMenu](https://main-api.flutter.dev/flutter/cupertino/CupertinoContextMenu-class.html) (as implemented in https://github.com/flutter/flutter/pull/37778) does not support submenu items, but they exist in native. We should support this one way or another.
Note that the [Apple's HIG say](https://developer.apple.com/design/human-interface-guidelines/ios/controls/context-menus/) that the submenu can't be more than one level deep.
CC @LongCatIsLooong who pointed this out to me.
Please :+1: if you need this feature! | c: new feature,framework,f: cupertino,customer: crowd,a: desktop,P2,team-design,triaged-design | low | Major |